Technology is supposed to help humans be more productive, and algorithms are taking all kinds of tasks out of our hands. But when the algorithms go wrong, it can be a true horror story.
Like when an algorithm meant to aid Amazon’s hiring process suggested only male candidates. Or the times when Google’s image-recognition software mislabeled Black people as gorillas, or photo software told Asian users to open their eyes.
So can algorithms themselves discriminate?
Lorena Jaume-Palasi, the founder of the Ethical Tech Society in Berlin, says it’s more complicated than that. “People are always the reason for discrimination,” she says.
“Rather than trying to regulate the reasons for discrimination, we are focusing on technology, which simply reflects discriminatory practices,” she says.
Algorithms are instructions for solving a particular problem. They tell the machine: this is how to do it. Artificial intelligence (AI) is built on algorithms.
AI mimics intelligent behavior: the machine is trained to make informed decisions. To do this successfully, it needs large amounts of data, from which it can recognize patterns and make decisions based on those patterns.
Are artificial intelligence algorithms good or bad?
Here’s one explanation for why algorithms can go so wrong: they often make decisions based on outdated data.
“In the past, companies had work practices that favored white men,” says Susanne Dehmel of Bitkom. If you train an algorithm using this historical data, it will choose candidates that fit that bill.
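The mechanism Dehmel describes can be sketched in a few lines of code. This is a hypothetical toy example, not any real hiring system: the “model” simply learns each group’s historical hire rate from invented past decisions, and so reproduces the old bias.

```python
from collections import defaultdict

# Invented historical hiring decisions: (group, was_hired).
# The numbers are chosen to mimic a workplace that favored men.
history = (
    [("male", True)] * 90
    + [("male", False)] * 10
    + [("female", True)] * 5
    + [("female", False)] * 45
)

# "Training": count hires and totals per group.
stats = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    stats[group][0] += int(hired)
    stats[group][1] += 1

def recommend(group, threshold=0.5):
    """Recommend a candidate if their group's historical hire rate
    clears the threshold -- i.e., the model just replays the past."""
    hired, total = stats[group]
    return hired / total >= threshold

print(recommend("male"))    # True  (90% historical hire rate)
print(recommend("female"))  # False (10% historical hire rate)
```

The model never “decides” to discriminate; it faithfully extracts the pattern it was given, which is exactly the problem.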
When it comes to racist photo-recognition software, it is also very likely that the algorithm itself wasn’t at fault; instead, the choice of images used to train the machine may have been the problem in the first place.
Now, there is a silver lining to all of this: machines are holding up a mirror to human society, and the picture they show is an ugly one. Discrimination is a big problem.
One solution is for tech companies to take a more active role in what algorithms spit out and correct behaviors when necessary.
This is already happening. For example, when American professor Safiya Umoja Noble published her book Algorithms of Oppression, criticizing the fact that Google search results for the term “black girls” were extremely racist and sexist, the tech giant made changes.
We must ask ourselves how we can ensure that artificial intelligence (AI) technologies make better and fairer decisions in the future. Dehmel says there is no need for government regulation.
“It is a question of competence. When you understand how technology works, you can deliberately counteract discrimination,” she says.
The previous examples have already shown that it is not enough simply to remove information about gender and race: the algorithms can still make discriminatory connections and produce the same results. Instead, Dehmel suggests that developers assemble diverse data sets and test them carefully before training the machines.
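Why isn’t removing the sensitive attribute enough? A minimal, invented sketch: even with gender deleted from the records, a proxy feature that happens to correlate with it (here a hypothetical boolean flag) lets a simple learner reconstruct the same biased outcome.

```python
from collections import Counter, defaultdict

def train(records):
    """records: list of (feature, outcome) pairs.
    Memorize the most common outcome per feature value."""
    counts = defaultdict(Counter)
    for feature, outcome in records:
        counts[feature][outcome] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Invented historical decisions: gender itself is absent, but the
# proxy flag splits the records almost exactly the way gender did.
history = [(True, "hire")] * 8 + [(False, "reject")] * 2
model = train(history)

print(model[False])  # reject -- the old bias survives via the proxy
print(model[True])   # hire
```

This is why Dehmel’s advice points at the data itself: as long as the training records encode a biased pattern, some combination of remaining features will usually let the model find it again.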
Jaume-Palasi believes that algorithm-based systems need continuous oversight, and that AI must be created by more than just a developer and a data scientist.
“You need sociologists, anthropologists, ethnologists and political scientists: people who are better at contextualizing the results that are used in various sectors,” she says.
“We need to get away from the idea that AI is a purely mathematical or technological problem. These are socio-technological systems, and the job profiles we need in this field must be more diverse.”