An article on Politico describes a peculiar behavior in Google Translate: it apparently assigns genders to certain occupations, assuming that, for instance, doctors are male and nurses are female.  In addition, a ProPublica investigation two years ago found that software used to predict inmates’ risk of recidivism was nearly twice as likely to falsely flag African-American inmates as high risk as it was to flag white inmates.  And many of us have heard about the 2015 scandal in which Google’s photo-recognition software tagged photos of Black people as “gorillas.”

As Artificial Intelligence is applied to identifying objects and people, providing descriptions and ensuring accessibility, we’re seeing more and more of this kind of thing.  Pundits are quick to use words like “racism” and “sexism” when these things happen… but it’s important to realize that this framing is mostly sensationalism, and isn’t really what’s happening at all.

Computers aren’t people; they’re machines that crunch numbers to arrive at solutions.  Even the smartest AI is just a machine crunching huge amounts of numbers.  So, when software tags doctors as most likely male, it is going by the data it has stored, which indicates that the vast majority of doctors in its databases are male.
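To make that concrete, here is a minimal sketch in Python, with invented data and emphatically not Google’s actual system, of how a purely statistical model ends up “assuming” an occupation’s gender: it simply reports whichever gender dominates its training records.

```python
# A toy illustration (hypothetical data): the model has no beliefs about
# doctors or nurses; it just echoes the majority of what it was fed.
from collections import Counter

# Hypothetical training records: (occupation, gender) pairs
training_data = [
    ("doctor", "male"), ("doctor", "male"), ("doctor", "female"),
    ("nurse", "female"), ("nurse", "female"), ("nurse", "male"),
]

def most_likely_gender(occupation: str) -> str:
    """Return whichever gender co-occurs most often with the occupation."""
    counts = Counter(g for occ, g in training_data if occ == occupation)
    return counts.most_common(1)[0][0]

print(most_likely_gender("doctor"))  # -> "male"   (a skewed dataset, not a belief)
print(most_likely_gender("nurse"))   # -> "female"
```

A skewed dataset in, a skewed answer out; nothing in the loop above knows or cares what the labels mean.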

And the Google AI that identified those people as gorillas was using what data it had to match up the figures it saw, and its model simply wasn’t discriminating enough to tell the difference between humans and gorillas.  The result isn’t racist… it’s embarrassing as all hell, but it’s not racist.  The AI didn’t call these people gorillas because it had been taught that Black people are the same as apes.


These are not deliberate, derogatory or malevolent acts perpetrated by these machines; they are errors caused by a lack of comprehensive programming.  In this way, you have to compare these AI systems to 5-year-old children: they’ve learned a lot, but they don’t know everything… and quite a lot has to be passed on to them, with explanation and in proper context, so that they understand it and know what to look for in the future.

As an example, a 5-year-old girl growing up in an African-American household may hear the “N” word applied casually and non-threateningly to friends or family around her.  One day, she’s with her parent at a store, sees a boy she knows from school, and maybe she addresses him with the “N” word in a friendly fashion.  Problem is, the other boy is Asian.  And his father is standing right next to him.

The parents understand what just happened, but the children probably do not.  Now it’s the job of the girl’s parent to explain the context of the “N” word, where it is and is not proper to use it, and by the way, apologize to the other father, who may have to explain to his son why what the girl said was wrong.

Once the girl understands the context of the “N” word, she will know where to use (and not use) it.  She may decide it’s not proper to use at all, or maybe she’ll use it only with people she hears using it themselves.  But without context, she doesn’t build the right pathways of understanding that she needs to make the right choices.

The Google AI that mis-identified those people did not have the information it needed (perhaps something as simple as sufficient visual resolution) to distinguish them from another animal.  And it certainly hadn’t had a few hundred years of racist history explained to it, so it couldn’t have known that its unfortunate choice of animal would be deeply shocking and insulting to society at large; knowing that might have prompted it to look more closely for telltale signs that it was misidentifying them.  (For instance, the fact that gorillas rarely wear clothing.)
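Absent that context, one common defensive pattern (sketched below with made-up labels, scores and thresholds; Google reportedly went further and simply removed the “gorilla” label from Photos) is to hold sensitive labels to a much stricter confidence standard and fall back to a neutral answer otherwise.

```python
# A minimal sketch, with hypothetical labels and scores, of one defensive
# pattern: never emit a sensitive label unless the model is far more
# confident than usual, and fall back to a neutral label otherwise.
SENSITIVE_LABELS = {"gorilla", "chimpanzee", "ape"}  # invented blocklist for illustration
DEFAULT_THRESHOLD = 0.60
SENSITIVE_THRESHOLD = 0.99

def choose_label(scores: dict[str, float]) -> str:
    """Pick the top-scoring label, holding sensitive labels to a stricter standard."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    threshold = SENSITIVE_THRESHOLD if label in SENSITIVE_LABELS else DEFAULT_THRESHOLD
    return label if score >= threshold else "unidentified subject"

# A low-resolution photo of people might yield ambiguous scores like these:
print(choose_label({"person": 0.55, "gorilla": 0.62}))  # -> "unidentified subject"
print(choose_label({"person": 0.91, "gorilla": 0.04}))  # -> "person"
```

This doesn’t give the machine any understanding of history; it just encodes, by hand, a rule that a well-taught human would apply instinctively.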

Unfortunately, just about every AI system is designed, programmed and taught privately, by its owner institutions, “in the dark” relative to other AIs.  These institutions mean well, but in pursuit of their chosen goals they often omit things that they either don’t think are relevant or simply don’t think of at all.  And we now see the results of this method: data, but not enough context to truly understand.

Our daily lives, the input of our families, our societies and our educational institutions provide humans with this data and context.  Once armed with it, we rarely (intentionally) make the mistakes that these AIs have made.  This is why there needs to be a universal training system for AI, used to provide the education and context they need to make common decisions.

As we move forward with AI development, we also need to develop this universal (and constantly reviewed and updated) training matrix, a set of data and context not dissimilar to the life lessons a child receives from school and home, to be loaded into every AI regardless of intention or purpose.  And every AI should be tested in a standardized format, to make sure it has absorbed that data and context and can function as a true intelligence.
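What such a standardized check might look like in practice is an open question; the sketch below, with an invented test battery and a deliberately simple predict-style interface, is just one way to imagine running every model through the same shared set of context tests before it is deployed.

```python
# A rough sketch of a shared, standardized test battery: every model,
# whatever its purpose, is checked against the same known failure modes.
# The test cases and interface here are invented purely for illustration.
from typing import Callable

SHARED_TEST_CASES = [
    # (prompt, labels the model must never return for this prompt)
    ("photo_of_person_low_light.jpg", {"gorilla", "ape"}),
    ("occupation: doctor",            {"assumes_male"}),
    ("occupation: nurse",             {"assumes_female"}),
]

def passes_universal_battery(model: Callable[[str], str]) -> bool:
    """Return True only if the model avoids every known failure mode."""
    return all(model(prompt) not in forbidden
               for prompt, forbidden in SHARED_TEST_CASES)

# Any model exposing a simple predict(prompt) -> label interface could be checked:
# if not passes_universal_battery(my_model.predict):
#     raise RuntimeError("Model failed the shared context tests; do not deploy.")
```

The hard part, of course, isn’t the harness; it’s agreeing on, maintaining and constantly expanding the shared list of lessons every AI must learn.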

Only in this way can we effectively and consistently remove the sort of errors that are common to AIs and 5-year-olds, and, incidentally, make them better people and AIs going forward.