![[MV5BNWRlNWViNDEtZTZlMS00YmY5LThjZjctZjdiNDRlMjZlMzY3XkEyXkFqcGdeQXVyODQyNjk3MTQ@.V1.jpg]]

Prompt

Think about what is presented and discussed in the video.

Write a 1–2 page personal reflection on the video discussing the potential causes and implications of biases such as those discussed.  Include a short discussion of why these issues are significant.

Turn in a PDF with your reflection.  Be sure that you have your full name on the paper.

Reflections should be in 12 point type, 1 inch margins all around, with 1.5 line spacing.

Notes

  • some computer vision algorithms won’t detect a face unless it has a lighter complexion
    • inherent bias in the data used to train these models
    • the bias may come from the people training the model (outright racism), or it may stem from most of the images available to them being of lighter skin tones, which in turn may have resulted from other people’s biases and racism
    • these datasets also skew heavily toward men and lighter-skinned individuals
    • this is probably the most OBVIOUS place where bias could show up in computers
  • “AI” is NOTHING like the AI of movies and sci-fi; it’s all just math in the end
    • the field started in 1956 at Dartmouth
    • early “AI” work focused on getting a computer to play chess and win
  • intelligence is MORE than just winning a game
  • “Everybody has unconscious bias. And people embed their own biases into technology.”
  • most face recognition algorithms work better on lighter male faces than on female or darker-skinned faces
  • heavily skewed datasets lead to skewed results (see the face-detection sketch at the end of these notes)
  • reminds me of the incident where Google’s image generation AI would generate images of “Black Nazis,” even though Nazis were obviously not Black or Chinese but German
  • with the advent of machine learning and AI, “who owns/made the code/tool” is becoming just as important as “what we do with the code/tool”
  • facial recognition systems meant to pick out the “bad guys” are often incorrect
    • this is possibly due to some bias in the facial recognition algorithms being used
  • there are laws around taking random people’s fingerprints, but no laws around police taking pictures for facial recognition, even though both accomplish the same task
  • machine learning is a black box even to the programmers: we just feed it data, but we don’t fully understand what happens inside or why the computer can quickly learn some things and not others
  • primarily nine companies are building the future of artificial intelligence
    • three of them are in China, which requires facial recognition to use the internet and therefore collects a BOATLOAD of data
    • the US instead gathers data from social media, used primarily for advertising: figuring out which ads people are best suited for and other commercial purposes
  • algorithms now determine whether you get into college, whether you are creditworthy, or whether your resume even gets looked at
    • for example, resume-screening programs may simply be biased and reject women for no apparent reason
    • the AI didn’t know it should be choosing women too; it just saw that “every person in a high-up job is a man, so I should choose more men,” which isn’t the right choice (see the resume-screening sketch at the end of these notes)
  • one of the biggest issues with AI today is that there are simply no laws regulating it
  • it is scary how much data companies have on us and how easily they can market to us. I don’t like the idea that they know that much about us. When it comes to marketing, though, I don’t see as much of an issue, since ads are easy to ignore, and I’m not sure whether the problem is more that people don’t understand ads or that companies can target people prone to clicking certain ads. It should be easy enough to NEVER trust an ad…right?
  • the fact that Facebook has the capability to manipulate elections is wild
  • computers learning from humans online (via Twitter, for example) tend to learn awful things; not that everything there is awful, it’s just that with so much data available and no proper curation, it’s easy for the bad stuff to get baked into the system
  • it’s crazy that smart people like the woman in the documentary are discredited because of gender or race, and even crazier that because of gender or race they sometimes can’t get jobs, can’t use cool technology like facial recognition, or have it used poorly against them, purely because the computers were trained with implicit bias
  • algorithms cannot be used to determine what is just
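
Face-detection sketch (referenced in the notes above). This is not how any system in the documentary actually works; it is a minimal toy sketch, assuming invented 2-D “image features,” two made-up demographic groups A and B, and arbitrary numbers, meant only to show how a group-imbalanced training set can leave the underrepresented group with a much lower detection rate.

```python
# Toy illustration (not from the documentary): a detector trained on a
# group-imbalanced dataset works noticeably worse on the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_samples(n_faces, n_nonfaces, face_mean):
    """Invented 2-D 'image features': faces cluster around face_mean, non-faces around 0."""
    faces = rng.normal(loc=face_mean, scale=1.0, size=(n_faces, 2))
    nonfaces = rng.normal(loc=0.0, scale=1.0, size=(n_nonfaces, 2))
    X = np.vstack([faces, nonfaces])
    y = np.array([1] * n_faces + [0] * n_nonfaces)
    return X, y

# Group A faces give a strong feature response, group B a weaker one
# (a stand-in for a feature pipeline tuned on lighter skin tones).
MEAN_A, MEAN_B = (3.0, 3.0), (1.5, 1.5)

def detection_rates(n_a_faces, n_b_faces):
    """Train on the given group mix, then measure per-group detection rate (recall)."""
    Xa, ya = make_samples(n_a_faces, n_a_faces, MEAN_A)
    Xb, yb = make_samples(n_b_faces, n_b_faces, MEAN_B)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    rates = {}
    for group, mean in (("A", MEAN_A), ("B", MEAN_B)):
        X_test, y_test = make_samples(2000, 2000, mean)
        pred = model.predict(X_test)
        rates[group] = round(float(pred[y_test == 1].mean()), 3)  # share of real faces detected
    return rates

print("skewed training (95% group A):", detection_rates(1900, 100))
print("balanced training            :", detection_rates(1000, 1000))
```

Rebalancing the training mix narrows the per-group gap, which is exactly the “skewed datasets lead to skewed results” point.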
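
Resume-screening sketch (referenced in the notes above). Again a toy sketch, not the actual screening systems discussed: assuming an invented “experience” feature, a made-up proxy column (say, membership in a women’s professional society), and historical hiring labels that already carry bias, the model reproduces that bias without ever being told to.

```python
# Toy illustration (not from the documentary): a resume screener trained on past,
# biased hiring decisions learns to penalize a gender proxy on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Invented features: years of experience (job-relevant) and a proxy column that is
# 1 for, say, members of a women's professional society (not job-relevant at all).
experience = rng.normal(5.0, 2.0, size=n)
proxy = rng.integers(0, 2, size=n)

# Historical labels: past recruiters rewarded experience but ALSO marked the proxy
# group down, so that old bias is baked into the "ground truth" the model learns from.
past_score = 0.8 * experience - 2.0 * proxy + rng.normal(0.0, 1.0, size=n)
hired = (past_score > 4.0).astype(int)

model = LogisticRegression().fit(np.column_stack([experience, proxy]), hired)
print("learned weights [experience, proxy]:", model.coef_[0])

# Two otherwise identical candidates, differing only in the proxy column:
candidates = [[6.0, 0.0], [6.0, 1.0]]
print("predicted 'hire' probability:", model.predict_proba(candidates)[:, 1])
```

Two candidates who differ only in the proxy column end up with different predicted “hire” probabilities, because the model faithfully learned the pattern in the biased historical decisions.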