Could machines be racist or sexist? What lessons are robots picking up from humanity? While machines have absorbed many positive human traits, they have not unlearned our social failings. In one recent AI-judged beauty contest, for example, women of different races submitted their photographs, and the algorithm's rankings showed a clear racial skew, a glimpse of what artificial intelligence could propagate.
It’s clear that artificial intelligence can learn to be racist or sexist from human interactions. And AI doesn’t just absorb bias; it can propagate it, for example by recreating traditional gender roles in service-oriented software. It’s worth thinking about this in the context of the recent #MeToo campaign, in which people shared their experiences and thoughts on sexual harassment and assault.
We think of “big data” as something amorphous and separate from humans. But data is formed by millions of our daily interactions. For example, in the future, some algorithms may allow artificial intelligence to group together a bunch of tweets under the label “sexual abuse,” while other algorithms allow it to understand that the Twitter handles attached to these tweets largely include names considered “female.” What will the machine make of that? Communication may still be a human endeavor, but we live in a world increasingly shaped by machine intelligence, and it’s worth thinking about what machines may learn about humanity from what we say—and the gaps they may fill in from what we don’t.
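To make the scenario above concrete, here is a minimal sketch of how such a correlation could emerge. Everything in it is invented for illustration: the handles, the tweets, the keyword match standing in for a topic classifier, and the tiny name table standing in for a name-gender model. A real system would use learned models, but the statistical takeaway would look the same.

```python
from collections import Counter

# Hypothetical mini-corpus of (handle, tweet) pairs -- all invented.
tweets = [
    ("anna_k", "My #MeToo story: harassment at my first job."),
    ("maria_r", "#MeToo - sharing my experience of assault."),
    ("john_d", "Great game last night!"),
    ("sophia_l", "#MeToo. It happened to me too."),
]

# Step 1: a crude stand-in for a topic classifier groups tweets under
# the label "sexual abuse" by matching the hashtag.
abuse_tweets = [(h, t) for h, t in tweets if "#metoo" in t.lower()]

# Step 2: a crude stand-in for a name-gender model guesses gender from
# the first name embedded in the handle, via a tiny lookup table.
FEMALE_NAMES = {"anna", "maria", "sophia"}

def guessed_female(handle: str) -> bool:
    first = handle.split("_")[0]
    return first in FEMALE_NAMES

# Step 3: the machine "observes" a correlation -- most handles attached
# to the "sexual abuse" label read as female -- and that association
# becomes part of what it has learned about the world.
counts = Counter(guessed_female(h) for h, _ in abuse_tweets)
share_female = counts[True] / len(abuse_tweets)
print(f"{share_female:.0%} of labeled tweets have female-coded handles")
```

The point of the sketch is that neither step is "biased" in isolation: the bias lives in the correlation the machine extracts from our data, and in the gaps it fills when no counter-signal is present.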