Glad to see people asking good questions and taking a critical look at data: how it’s gathered, how it’s analyzed, and how it’s loosely correlated to arrive at theoretical conclusions that are often treated as facts.
Originally posted on Quartz:
Can an algorithm be racist? It’s a question that should concern every data-driven organization.
From analytics that help law enforcement predict future crimes, to retailers assessing the likelihood of female customers being pregnant (in the case of Target, without their knowledge), the increasing scale of computer cognizance is raising difficult ethical questions for business.
Witness the controversy that the crime app SketchFactor caused in launching its crowdsourced service in the US. The app works by allowing users to report, in real time, how subjectively “sketchy” a particular neighborhood may be, enabling an algorithm to determine the apparent safety of the area for pedestrians. Inevitably, the app has drawn accusations of racism, with some commentators labeling it a service that literally color-codes neighborhoods.
Of course, marketers have always targeted racially defined customer-bases—typically to adjust price ranges along socio-economic lines. But with ever more data becoming available, the risk…