Artificial Intelligence in the time of racism, intolerance, bigotry & more
Sunday, April 9, 2017

Recently, the Mint carried an interview with Yannick Binvel of Korn Ferry on his take on what the future of the workplace will look like.

http://www.livemint.com/Leisure/kQMkzYo0DcAw40NsWSsvHK/Yannick-Binvel-What-the-future-of-the-workplace-will-look-l.html He talks of reverse mentoring (the subject of my previous article), but he also talks about AI technology changing the workplace.

So here’s my take on AI. A few days ago I caught up with a friend whom I had not seen in a long time and who had changed his job. After being in the driving seat as Director of a US$6.3 billion company, he had been given a different mandate, and his area of operation had changed to a very large geographical region. He has always been a very humble man when it comes to describing the work he does, and this time was no different. ‘I am a landlord. I have to look after buildings, and chairs and tables, and make sure employees have laptops’, was his response when I asked what he now does for the company. The conversation moved from real estate and procuring square feet to how he now has to prepare for the challenging time of having to replace men with bots.

Businesses have come to realize that going digital is the way to do business in today’s world: making work processes easier, more energy efficient, warehousing data, and so on. But listening to him break down the reality of machines taking the place of his employees, in his easy conversational style, hit me hard. Just to give you a flavor of how he put it: ‘I have to start a new process that needs about 500 people. So I have to find places for them to sit, find chairs, find laptops for them to work on and tables to place those laptops on. That is for now. But in about five years I will also have to replace these people with 10 bots that will fit into your hall (the dimensions of my living room in a 3-bed house, so you can imagine) and will do the work of those 500 people more efficiently and at a whole lot less cost. No real estate, no laptops, no chairs, no tables’.

Now from a business point of view this sounds hunky-dory. Imagine managing bots. No coming in late, no coffee breaks, smoke breaks, absenteeism or office romances. Just work, productivity, efficiency... all the words that corporates like to hear.

So sometime soon, instead of getting Mr. Shukla, Relationship Manager, at the end of the line explaining to you why your check didn’t clear or suchlike (whilst ’neath his breath he mutters, ‘Dumb Maca Pao, sign your f....check first’), it is likely that you will have a bot telling you much the same, perhaps even down to the profanity. How can this be possible? Aren’t machines designed to do away with communication slippages and make things easier for us? Wasn’t that why we thought it was good to interface with a machine, so that we wouldn’t have rude government officials blissfully ignoring us even when we are right up against their noses, trying to get their attention?

Beauty.ai announced a beauty contest that was to be judged by Artificial Intelligence (AI). The idea was to take the bias out of judging; the project was also meant to study objectively what constitutes beauty. A fair number of people sent in their pictures, which were fed into the system. But when the results came out, 37 of the 44 winners were white. Beauty.ai pooh-poohed it away, saying that most of the sample pictures sent in were of white people, so the results were skewed.

Not too long ago, Microsoft launched a Twitter bot named Tay. Well, Tay got things mixed up too. Tay hated feminists and said that they needed to burn in hell, and found ole Dolphie Hitler someone it would like to invite to tea and a game of Monopoly.

Google Photos (GP) was to identify objects on its own. Well, that didn’t work too well either. GP labelled a photograph of two people of African origin as ‘gorillas’.

Seems like replacing humans with machines hasn’t gotten rid of the bias. What are our Indian bots going to be saying? That those Chinkys have great fashion sense? Or that those Madrasis only know Mathematics? What I learnt 32 years ago still holds true. GIGO: Garbage In, Garbage Out.

AI bots are very much like kids. A bot undergoes something called deep learning, where it is exposed to humongous quantities of data and human behavior. Using algorithms, the bot then forms its own ways of deciding how to act. Our world is made up of so much data, and that data is a reflection of who we are as people. Which is: biased, biased, biased. Sometimes positively, but mostly negatively, as we know. The literature, advertising and other written material on the internet simmers with bias. The bot is being sensitized in much the same way that we are being sensitized today. Reading about rape, terrorism and death every day, don’t we too get desensitized? It doesn’t bother us too much to learn of yet another rape, death, falling building or plane crash; the list goes on.
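To see how garbage in becomes garbage out, here is a minimal, purely hypothetical sketch: a ‘model’ that does nothing more than mirror the distribution of the data it was trained on, much like the Beauty.ai judge. (The 37-to-7 split below is an invented illustration, not the contest’s actual training set.)

```python
from collections import Counter

# Hypothetical, skewed training data: far more examples of one
# group than the other, like the Beauty.ai submissions.
training_labels = ["white"] * 37 + ["non-white"] * 7

counts = Counter(training_labels)

def predict_winner():
    # A naive model that simply mirrors its training distribution
    # will favour the over-represented group every time.
    return counts.most_common(1)[0][0]

print(predict_winner())
```

Nothing in this toy model ‘hates’ anyone; it just faithfully reproduces the skew it was fed, which is exactly the point of GIGO.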

What is of more concern is that if a bot is exposed to data like this early in its training, then, much like a child, it is unlikely to forget it, and it colors its every action and communication henceforth. When computers have done word association, words like ‘management’ and ‘salary’ have thrown up more male names than female names, while words like ‘home’ and ‘family’ have been associated with female ones.
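The word-association effect can be illustrated with a toy sketch. The mini-corpus and names below are invented for illustration, but the mechanism is the same: simply counting which names appear alongside which words reproduces the skew of the text that was fed in.

```python
from collections import defaultdict

# Hypothetical mini-corpus standing in for biased text on the internet.
corpus = [
    "John discussed his salary with management",
    "Ravi was promoted to management",
    "Priya stayed home with family",
    "Mary cooked for the family at home",
    "David negotiated a higher salary",
]

male_names = {"john", "ravi", "david"}
female_names = {"priya", "mary"}

# For each target word, count co-occurring male and female names.
assoc = defaultdict(lambda: {"male": 0, "female": 0})
for sentence in corpus:
    words = set(sentence.lower().split())
    for target in ("salary", "management", "home", "family"):
        if target in words:
            assoc[target]["male"] += len(words & male_names)
            assoc[target]["female"] += len(words & female_names)

# 'salary' and 'management' skew male; 'home' and 'family' skew
# female - purely because of the sentences the counter was fed.
print(dict(assoc))
```

Swap the sentences around and the associations flip, which is the whole problem: the machine has no opinion of its own, only ours.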

All is not lost, though. Abhinav Aggarwal of Fluid.ai let us know that the company does not let the AI it puts in banks know what the customers it interacts with look like. With the definite possibility of greater interaction with bots in the near future, we have to accept that prejudices are intrinsic to our society, and it will take a great amount of proactivity to stop bots from perpetuating them.

It is worrying that AI’s biases are showing up in real decisions like the ones I have mentioned above. In some prisons, software decides which prisoners are more likely to commit a crime again, and this affects length of incarceration, bail and parole. No surprises there: the software consistently flags black prisoners as likely re-offenders.

To the large majority of us, understanding how these algorithms work is like staring into a black box. So calling out a bot for racism is that much harder than calling a person a bigot. Geeks still refer to AI as impartial, when it is clear that bots adopt our prejudices and are far from impartial.

I think that my friend will face challenges of a different kind, and possibly of larger magnitude, when he gets his 10 bots to work his processes. Can he ensure that his bots have been ‘deep trained’ not just to learn from humans but also to guard themselves against picking up their biases?

