In the last millennium, the world’s largest and most valuable companies came from the oil-and-gas and automobile industries: Chevron, Exxon Mobil, General Motors, and their peers. They have since been displaced by technology companies, famously known by the acronym FAANG (Facebook, Apple, Amazon, Netflix, and Google) or, after Facebook’s rebranding, MAANG (Meta, Apple, Amazon, Netflix, and Google). Other giants like IBM and Microsoft are not far behind in valuation. It is not an exaggeration to say that the economy has transformed into a ‘Techonomy’.
The rise of Artificial Intelligence and Machine Learning has given new wings to the tech giants. Netflix suggests movies you might like, YouTube pushes videos related to what you have previously watched, Amazon showcases products that go well with your current selection or past purchases, and the list goes on.
Applications of Artificial Intelligence are on the rise, and organizations, including governments, are vying to take advantage.
The Chinese government has openly installed facial recognition cameras across the country and used the technology to build records of its citizens. The UK government has trialled similar practices to identify criminals on the streets. The US government has deployed systems meant to capture the faces of people deemed likely to commit crimes. In each case, the technology was supplied to the government by tech giants: of the top 10 companies building AI for facial recognition, 6 are from the US and 3 are from China.
These algorithms were found to be biased. Every human being carries biases, conscious or unconscious, and biases against race, color, creed, caste, and country have been prevalent for ages. Algorithms are programs that use large databases to predict the future. Those databases reflect history, and so they inherit its biases.
Deep-rooted tech bias
These algorithms decide who is creditworthy and who is not, who can get an education loan or a home loan and who cannot. Facial recognition programs, in particular, were found to be unduly biased against people of color in the US.
The biases embedded in the past are now coded into models, and our faith in technology makes us believe that the outputs are accurate. According to these systems, people of color are more likely to commit crimes, less likely to qualify for an education loan, and more likely to have their resumes rejected.
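The mechanism is easy to demonstrate with a toy sketch. The data, groups, and approval rates below are entirely hypothetical, and the "model" is deliberately simplistic (it just learns each group's historical approval rate), but it shows how a system trained faithfully on skewed past decisions turns yesterday's disparity into tomorrow's policy:

```python
# Hypothetical illustration (not real data): a model trained on biased
# historical decisions reproduces that bias in its predictions.

# Past loan decisions as (group, approved) pairs. Group "B" was approved
# far less often historically, regardless of individual merit.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """Toy 'model': learn each group's historical approval rate."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group, threshold=0.5):
    """Approve whenever the group's historical rate clears the threshold."""
    return rates[group] >= threshold

rates = train(history)
print(predict(rates, "A"))  # True  -- group A keeps getting approved
print(predict(rates, "B"))  # False -- the past disparity becomes policy
```

Real systems are far more sophisticated, but the underlying failure mode is the same: when protected attributes (or their proxies) correlate with historical outcomes, optimizing for fidelity to the past optimizes for the bias too.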
Dr. Joy Buolamwini, an MIT scholar, recognized this bias when she experienced it first-hand. When she tried to have her face detected by a computer-vision system, it failed to register her; when she wore a white mask, the computer identified a face. She found that other Black women had similar experiences. Her collaboration with Dr. Cathy O’Neil, author of the bestselling book ‘Weapons of Math Destruction’, and Deborah Raji revealed violations of trust by large tech companies.
Dr. O’Neil argues that the process behind these algorithms is similar to drug discovery, and that algorithms need their own ‘FDA’ to validate ethical practices. Skewed algorithms, she adds, are breeding grounds for inequity in organizations. As an example, she cites Amazon, the world’s largest online retailer, which built a resume-screening algorithm that eliminated female applicants with 98% accuracy. Amazon decided to scrap that algorithm, but many companies may still be running algorithms lopsided against women and people of color, with disastrous results.
Other examples include how Facebook can subconsciously nudge the public to vote for one candidate. In the US, one algorithm recommended firing a teacher who had won multiple ‘Teacher of the Year’ awards; the court ultimately ruled in his favor, forcing the company to reveal the algorithm’s model to teachers before implementing it. Surprisingly, other companies running complex mathematical models and algorithms have not vouched for theirs, and for a simple reason: the companies do not themselves fully understand the frameworks behind their own algorithms and models.
IBM invited Dr. Joy to its headquarters to help remodel its algorithms and programs to eliminate the bias. This time the scholar’s face was recognized with precision, and no white mask was required. Dr. Joy also testified before a congressional committee about the damaging consequences of these algorithms. The non-profit she founded, the Algorithmic Justice League, continues to advocate for the elimination of bias in technology and AI/ML. Her vision is to transform technology into ethical, unbiased tools that foster equity and inclusion.
A long way to go
This is not the end of the story; many issues still need the attention of legislatures. In the Indian context, the problem may be more pronounced because discussions about the ethical use of technology are not yet mainstream. With smartphones and internet access having gained a strong foothold, Indians may find themselves nudged into gambling and other addictive pastimes. Biases in HR technology could become a deterrent for the large population that comes from underprivileged backgrounds.
Grassroots reforms in education, a clear and thorough understanding of technology among both legislators and the public, fair and transparent practices by corporate India, and strong laws backed by tight implementation will all require a great deal of effort and time. The earlier India rises to this challenge, the better for its diversity.