In Response to Manufacturing Truth: Machine Learning and Bias (New Museum, NY)

Ahn Bustamante 2017

While in New York, I got the chance to attend a talk at the New Museum: Manufacturing Truth: Machine Learning and Bias.

The panel brought together experts from fields including art, journalism, and sociology to investigate how algorithms shape our lives in realms as disparate as criminal justice, online shopping, and social media. Algorithms affect everything from healthcare and insurance premiums to job opportunities and predictive policing. The guiding question was: “How might we resist discriminatory artificial intelligence and become informed digital citizens?”

Listening to the talk, I found it interesting that they didn’t invite a technologist to the panel. As someone from the technology field, there are points I agree with and points I think were missing.

I’m glad they pointed to the study showing that the technology industry is predominantly white and Asian. This stems from a variety of factors, from degree choice to the available pool of recruits. Good connections can land you in the right places; unfortunately, for some racial groups those opportunities are scarce, or people are simply unaware that they exist. Here’s an interesting piece from the New York Times: Why Tech Degrees Are Not Putting More Blacks and Hispanics Into Tech Jobs.

I remember when facial recognition was first sold to the public. The first Sony camera I owned had this feature. I noticed that the face detection did not pick up darker faces, so the camera wouldn’t focus as well on a dark-skinned person’s face. The reason is that those who tested it were mostly Asian, all with lighter complexions. This is how bias gets into the systems we use: there aren’t enough people testing enough scenarios to make them inclusive.
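
To make that testing gap concrete, here is a minimal sketch of how one might audit an off-the-shelf face detector for uneven detection rates across groups. This is not the camera’s actual code (which isn’t public); it uses OpenCV’s stock Haar-cascade detector, and the folder names are hypothetical placeholders for a test set grouped by skin tone.

```python
# Audit sketch: run OpenCV's stock Haar-cascade face detector over
# folders of portrait photos and compare per-group detection rates.
# Folder names below are hypothetical placeholders for your test set.
import os
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detection_rate(folder):
    """Fraction of images in `folder` where at least one face is detected."""
    files = [f for f in os.listdir(folder)
             if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    hits = 0
    for name in files:
        img = cv2.imread(os.path.join(folder, name))
        if img is None:
            continue  # skip unreadable files
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            hits += 1
    return hits / len(files) if files else 0.0

# Hypothetical folders: one per skin-tone group in the test set.
for group in ["lighter_skin", "darker_skin"]:
    print(group, detection_rate(group))
```

If the detection rates differ markedly between groups, that gap is exactly the kind of bias a more diverse testing pool would have caught before release.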

On the same facial recognition technology: it has been used in camera surveillance where the system is more likely to flag people with darker complexions as potential fugitives. Here’s an article from the Atlantic: Facial-Recognition Software Might Have a Racial Bias Problem.

As a technologist, I don’t consider this an appropriate use of technology. Criminal investigations should be handled by a real person with expertise. Tagging people simply because their features are similar to those of people who have been caught is not the way to make technology work for the greater good.

Technology has always been a work in progress; it will never be perfect on the first release. Many people worry that things won’t get better, or that these flaws will cause problems in the long run. But speaking as someone in the technology space, we are aware of our flaws, including these biases, and we work toward making things better. Most software companies use user stories and an iterative approach to build products designed for people.

To conclude: bias in machine learning will persist unless we as humans become aware of these biases. We need to bring together people from the functional side, the technology sector, and sociology to make great products. Overall it was a great talk; however, I wish organized talks like this would include people from technology to challenge their thinking.