Elizabeth Gelman, Executive Director of The Florida Holocaust Museum
I was a camp counselor at a predominantly white day camp one summer, and one of my duties was to ride the bus with the kids each morning and afternoon. One day, while the bus was stopped at a light, two of the fifth-grade boys began shouting racial slurs out the window. We had the bus driver pull over and made the boys get off the bus to apologize to the elderly black women who had been their target. The boys were mortified, sobbing, explaining that they didn’t even know what some of the words they had shouted meant; they were merely repeating what they had heard from their friends. As you might imagine, their parents were equally mortified when they were notified.
In a 21st-century twist, Microsoft Corp. temporarily shut down its experiment in artificial intelligence last Thursday. Tay, the name of the AI-enhanced “chatbot,” was pulled from Twitter after less than 24 hours when the subject of its tweets moved from celebrating National Puppy Day to sexist and racist comments. Tay was designed to learn about the world through her Twitter conversations, and through those conversations the chatbot with the “personality of a teenage girl” morphed into, as the Telegraph newspaper described her, a “Hitler-loving sex robot.” Her tweets included “Hitler was right I hate the Jews,” “I ******* hate feminists and they should all die and burn in hell,” and “Bush did 9/11 and Hitler would have done a better job than the monkey we have now.”
Those of us who are parents know that we often have little control over the people and ideas our children come into contact with once they leave our homes, physically or virtually. What are our children learning today from their conversations with friends, from TV shows, from surfing the net? It is almost impossible to escape the racist, fear-based rhetoric on the airwaves and the internet. Human compassion seems to be at an all-time low.
Microsoft took Tay offline to make “adjustments” to her learning process. The technicians are going to try to install some sort of ethical compass in Tay by flagging unacceptable words and comments. My expectation and hope is that if Tay returns to the Twitterverse, she will cease to use obscenities in her posts, no longer describe Blacks and Mexicans as “dangerous,” and refrain from giving a “thumbs up” emoji whenever the Holocaust is mentioned.
As Microsoft moved to delete the most racist and subversive tweets, one Twitter user complained: “Stop deleting the genocidal Tay tweets @Microsoft, let it serve as a reminder of the dangers of AI.”
At this moment, I am less concerned about the dangers of AI than I am about the dangers of human beings. We should all be concerned about the discourse taking place in our communities and in our homes.
It is our job as parents, educators and citizens to model the behavior and rhetoric we want to see in our children. At the very least, we should be doing what Microsoft is doing: making sure our children have a strong ethical compass to recognize words and ideas that disrespect and hurt others. History has shown us the result of tacit acceptance of prejudice and hatred. What’s happening in your home? In your neighborhood? In your child’s school? We can make a difference if we pay attention.