If you follow the tech industry at all, you know that machine learning is having a moment right now. Some have questioned, however, whether we rely too heavily on machine learning and other algorithmic approaches, and whether that reliance is a way of dodging responsibility for the results. While browsing Twitter this morning, I came across the following tweet:

[Embedded tweet from AlgorithmWatch, quoting Cathy O'Neil]

First off, I love the idea that there’s an organization called AlgorithmWatch. They’re concerned with ethics in what they call “algorithmic decision making,” or ADM for short. Their manifesto is a good starting point for discussion. The first point alone is enough to get people talking:

ADM is never neutral.

As they point out, ADM is often used to predict human behavior or to affect human decisions. The trouble is that these algorithms tend to take on the biases of the people who create them and of the historical data they're trained on.

For example, the criminal justice system in Florida used a program (the COMPAS tool, the subject of a well-known ProPublica investigation) to predict a person's risk of re-offending. Unfortunately, the algorithm tended to predict a higher recidivism risk for black defendants than actual re-offense rates bore out. These scores then factor into decisions like setting bail, determining sentences, and evaluating whether a person has been rehabilitated.
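
To make that concrete, here's a toy sketch in Python (numpy only) of how this can happen. To be clear, this is not the actual Florida risk model; the groups, rates, and bare-bones "score" are all invented for illustration. The point is that even a model built with no malicious intent can reproduce a disparity that exists only in how the training data was collected.

```python
# Toy demonstration: a risk score trained on biased historical records
# reproduces the bias. All numbers below are invented for illustration;
# this is NOT the actual Florida recidivism model.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two groups of equal size, with the SAME true re-offense rate (30%),
# so any disparity the score learns must come from the data collection.
group = rng.integers(0, 2, size=n)
reoffends = rng.random(n) < 0.30

# Biased labels: suppose group 1 is policed more heavily, so its
# re-offenses are recorded as re-arrests 90% of the time, versus 50%
# for group 0. The model never sees true behavior, only these records.
caught = rng.random(n) < np.where(group == 1, 0.90, 0.50)
recorded = reoffends & caught

# The simplest possible "risk score": the recorded re-arrest rate per
# group (what any model using group-correlated features converges to).
for g in (0, 1):
    print(f"group {g}: true rate {reoffends[group == g].mean():.2f}, "
          f"predicted risk {recorded[group == g].mean():.2f}")

# Prints roughly:
#   group 0: true rate 0.30, predicted risk 0.15
#   group 1: true rate 0.30, predicted risk 0.27
# The score rates group 1 nearly twice as risky for identical behavior.
```

And that inflated score feeds right back into the bail and sentencing decisions mentioned above.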

The results of these algorithms have meaningful effects on people's lives, and if they just encode the same racial biases we've always had, that doesn't really help anybody, except maybe the companies that create these programs and for-profit prisons. But that's another discussion entirely.

More recently, Facebook has been criticized for its lack of human oversight of the ads and news articles posted on the site. Initially, Mark Zuckerberg tried to hide behind the notion that technology is morally neutral, though he has since changed his rhetoric. Whether he changes Facebook's policies remains to be seen.

I also want to show some love to the person AlgorithmWatch quoted in their tweet, Cathy O'Neil. O'Neil has a Ph.D. in mathematics and has worked as a data scientist and as a Wall Street quantitative analyst, which positions her well to see up close how algorithms can significantly affect people. I might check out her recent book, Weapons of Math Destruction. The title sounds a bit alarmist, but maybe the data backs her up. If the last 18 to 24 months have taught us anything, it's that societal ills are often perpetuated through seemingly innocuous channels.

In a New York Times op-ed, O'Neil proposes that an academic organization step up and provide the kind of ethical leadership we need in this space. Don't get me wrong: I agree that we need strong ethical leadership in this area. But I wonder whether academia is the place to make that happen. A generation ago it almost certainly would have been, but academia is a very different place these days. Then again, the whole reason we're having this discussion in the first place is that companies don't seem to be stepping up to do anything.

It’s stories like these that make me think many of the artificial intelligence scaremongers out there are wide of the mark, although I guess that opinion is obvious if I’m calling them “scaremongers.” I am not the least bit worried about technology suddenly becoming sentient and choosing to wipe us out.

Disastrous consequences from stupid human errors when programming AI systems, though? Yeah, I worry about that stuff all the time.

What algorithms affect your life? Do you have any insight on machine learning or big data that you’d like to share? Let’s discuss it in the comments!
