Coded Bias examines the implicit biases often found in all kinds of algorithms. The most basic example is a computer vision face recognition algorithm. Some face recognition algorithms won't recognize something as a face unless it has a light complexion, and sometimes even then only if it is a white male's face. This is a direct result of the training data: the majority of the images used to train these algorithms are pictures of white males, so the model gets very good at recognizing specifically white male faces and worse at everything else.
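The mechanism is easy to demonstrate in miniature. Below is a toy sketch (all numbers are hypothetical, invented for illustration, and not from the film): a one-feature "face detector" that picks a single brightness threshold to maximize overall training accuracy. Because group A dominates the training set, the chosen threshold fits group A and misses group B entirely.

```python
# Imbalanced training data: 90 face examples from group A, only 10 from
# group B, plus non-face examples that overlap group B's feature range.
train = (
    [(0.80, 1)] * 90     # group A faces (feature value ~0.80, label 1)
    + [(0.45, 1)] * 10   # group B faces (feature value ~0.45, label 1)
    + [(0.50, 0)] * 50   # non-faces overlapping group B's range (label 0)
    + [(0.30, 0)] * 50   # easy non-faces (label 0)
)

def accuracy(threshold, data):
    # Classify x as "face" when x >= threshold; compare with true label y.
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

# "Training" here is just picking the threshold with the best overall
# accuracy -- the same objective a real classifier optimizes.
candidates = [0.2, 0.4, 0.5, 0.6, 0.7]
best = max(candidates, key=lambda t: accuracy(t, train))

group_a = [(0.80, 1)] * 10  # held-out group A faces
group_b = [(0.45, 1)] * 10  # held-out group B faces
print(best)                     # 0.6 -- tuned to the majority group
print(accuracy(best, group_a))  # 1.0 -- every group A face detected
print(accuracy(best, group_b))  # 0.0 -- every group B face missed
```

The overall training accuracy looks fine (95%), which is exactly why this kind of bias goes unnoticed: the aggregate metric hides the per-group failure.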

One potential reason for this bias is the biases of the original creators. If someone who is racist, sexist, or discriminatory in some other way decides to pull pictures of only a certain group to use as training data for their "AI", that choice will directly affect the accuracy and outputs of the resulting model. Going a step further, the discriminatory biases don't even have to come from the creators of the model; they can come from the data itself. The training sets themselves can be the source of a model's implicit bias, in which case the creators of the training sets are more at fault than the creator of the model.

AI as it exists today is nothing like it is in the movies. We often think of AI as some all-knowing machine able to answer all our questions. The AI we actually ended up with, and that is most often used, is an algorithm that takes in large amounts of data and produces results based on its training. However, the implicit, unconscious biases of the creators often end up embedded in these technologies. If a group of white males who are racist (even if it's unconscious racism) create a facial recognition algorithm, they may accidentally train the computer to simply not recognize faces with dark complexions.

This whole documentary reminded me of a recent incident with Google's image generation "AI". People would ask the model to generate an image of "Nazis", and the AI would output images of soldiers wearing Nazi symbols, but the soldiers would be dark-skinned. This can be read as the model associating "dark skin" with negative concepts such as Nazi ideology. It likely stems from a similar implicit bias in its training, potentially from training on public sentiment scraped from the likes of Twitter or Facebook.

The last major thing I got from the documentary is how urgent it is that we pass laws surrounding the use of "AI" and algorithms. There are virtually no laws governing the use of algorithms to filter people for different applications. For example, a company could run an algorithm that automatically filters resumes based on set criteria. The "AI" doing this could be trained on bad data, causing it to filter out all resumes with, say, less than a year of experience, even if some of those applicants would have been good hires. The problem isn't so much that the company uses this AI, but that it uses it without any repercussions and without any transparency about its use.
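To make the resume example concrete, here is a minimal sketch of such a filter. Everything in it is hypothetical (the field names, the cutoff, the applicants); the point is only that a hard cutoff silently drops otherwise strong candidates, and nothing in the process records why.

```python
# Hypothetical automated resume filter: hard-rejects anyone below an
# experience cutoff, no matter how qualified they are otherwise.
def filter_resumes(resumes, min_years=1.0):
    """Return only the resumes meeting the experience cutoff."""
    return [r for r in resumes if r["years_experience"] >= min_years]

applicants = [
    {"name": "A", "years_experience": 0.5, "has_relevant_degree": True},
    {"name": "B", "years_experience": 3.0, "has_relevant_degree": False},
    {"name": "C", "years_experience": 0.8, "has_relevant_degree": True},
]

passed = filter_resumes(applicants)
rejected = [r for r in applicants if r not in passed]

# Qualified but junior applicants A and C are dropped before a human
# ever sees them, and no reason is ever surfaced to anyone.
print([r["name"] for r in passed])    # ['B']
print([r["name"] for r in rejected])  # ['A', 'C']
```

Transparency here would be cheap, such as logging which rule rejected each applicant, which is exactly the kind of requirement a law could impose.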

In addition to this issue, there is also no law governing the collection of data and its use for targeted ads. When I see these ads, or hear people complain about them, I often think that it isn't that hard to just…not click on the ad. However, the profiling and targeting behind these ads has reached a point where companies can build a profile of you as an individual and use it to sell to you. We have gotten to a point where we are no longer using free services; the free services are using us. They treat us like products to be sold and fed advertisements. Laws need to be put in place to address these issues.

This covers some of the important points I thought about while watching Coded Bias. Machine learning models and algorithms often operate as a black box, even to their developers. We don't quite know what goes on behind the scenes with these algorithms, yet for some reason we still trust their results. It's time for us to start paying closer attention to these algorithms and their uses, and to pass new laws to prevent their abuse and misuse in the future.