Salesforce Research Breakthrough Confronts Gender Biases in AI Data Models


Salesforce is committed to ensuring equality and equal rights, not only within its internal culture but everywhere it gets involved. Gender equality has been a priority for Salesforce CEO Marc Benioff since 2012, but as we will see, even good intentions aren’t always enough to stamp out gender disparities, particularly those that lie hidden among large quantities of data. As AI becomes increasingly adopted by businesses globally, how do we prevent AI models from inheriting gendered associations between words?

Fortunately, researchers at Salesforce and the University of Virginia have developed a new method for removing gendered associations from word embeddings. I recently read an article on VentureBeat that explains, in detail, how ‘word embeddings’ can carry gender bias when employed in natural language processing.

It’s fitting that Salesforce research teams would be the ones driving these new methods to beat gender bias, helping more organizations avoid the traps that Salesforce once fell into itself.

Salesforce and Gender Bias: A Brief Recap

Salesforce, as a powerful organization, has been leveraged as a platform for change, committed to ensuring equality and equal rights not only within its internal culture, but everywhere it gets involved.

Readers will find a whole chapter in Benioff’s book ‘Trailblazer’ dedicated to equality. It opens with Benioff narrating how the company-wide pay equality audit came about in 2015, and his shock at the $6 million pay disparity it uncovered between the sexes and affecting other underrepresented groups. The results were especially shocking because Benioff had announced back in 2012 that gender equality was a priority, launching the ‘Women’s Surge’ initiative to make sure talented female employees were being considered for leadership roles at Salesforce.

What this story teaches us is the difference between visible gender equality and the invisible biases causing disparities that lie hidden in the data.

How Salesforce Proposes to Stamp Out Gender Bias in AI Models

VentureBeat offered a clear description of how gender biases in AI models come about, and how Salesforce proposes to mitigate them.

“Word embeddings capture semantic and syntactic meanings of words and relationships with other words”, but often they attach gender to otherwise neutral words. The examples the researchers call upon are the neutral words ‘brilliant’ and ‘genius’, which can become associated with ‘he’ if left unchecked.
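To make that concrete, here is a minimal sketch of how such an association can be measured. It assumes pre-trained embeddings in a GloVe-style text file (the file path is an assumption; any word vectors would do) and uses the cosine similarity between a word and a simple ‘he minus she’ direction as a rough gender score:

```python
import numpy as np

def load_vectors(path: str) -> dict:
    """Parse a GloVe-style text file: each line is a word followed by its vector."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = load_vectors("glove.6B.300d.txt")  # assumed file; swap in any embeddings

# A crude proxy for the gender direction: the 'he' vector minus the 'she' vector.
gender_direction = vectors["he"] - vectors["she"]

for word in ["brilliant", "genius", "hairdresser"]:
    # Positive scores lean toward 'he', negative toward 'she'.
    print(f"{word:12s} {cosine(vectors[word], gender_direction):+.3f}")
```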


A post-processing step can ‘debias’ a significant proportion of biased word embeddings, but gender bias can still creep back post-debiasing.
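The post-processing step referred to here is, in the literature, a projection: subtract from each word vector its component along the learned gender direction. A minimal sketch of that projection, the core of the ‘hard’ debiasing approach introduced by Bolukbasi et al. in 2016, might look like this:

```python
import numpy as np

def hard_debias(v: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Project out the gender direction g from word vector v.

    This is only the neutralizing step; the full method also equalizes
    definitional pairs such as ('he', 'she'), which is omitted here.
    """
    g = g / np.linalg.norm(g)
    v = v - (v @ g) * g  # remove the component along the gender direction
    return v / np.linalg.norm(v)
```

After this projection, a neutral word like ‘genius’ has no component along that one direction, which is exactly why it is striking that, as the researchers found, bias encoded elsewhere in the embedding geometry can still resurface.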

The new method is called Double-Hard Debias; you can read more about how it works in the VentureBeat article.
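In brief, the paper’s observation is that word-frequency information distorts the gender direction, so Double-Hard Debias first projects out a frequency-related principal component and only then applies the standard hard-debias projection. Below is a simplified sketch of those two steps; it assumes the frequency component has already been identified (the paper selects it by testing which component’s removal most reduces a clustering-based bias measure):

```python
import numpy as np

def double_hard_debias(V: np.ndarray, g: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Simplified two-step projection behind Double-Hard Debias.

    V: (n_words, dim) matrix of word embeddings, assumed mean-centred.
    g: gender direction (e.g. the 'he' vector minus the 'she' vector).
    f: frequency-related principal component to remove first (assumed given).
    """
    f = f / np.linalg.norm(f)
    V = V - np.outer(V @ f, f)  # step 1: strip the frequency component
    g = g / np.linalg.norm(g)
    V = V - np.outer(V @ g, g)  # step 2: the standard hard-debias projection
    return V
```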


Why does it matter?

If left unchecked, AI models could end up exacerbating our own insidious gender biases, which have caused unjust inequality for decades. Awareness and education have meant many industries are taking steps in the right direction, but we have to extend that ‘education’ to AI models before they amplify inequality at scale.

A concerning example would be the use of AI in screening candidates for a job position: “Imagine that people develop a resume filtering model based on biased word embeddings. This model can potentially filter out female candidates for positions like programmer and also exclude male candidates for positions like hairdresser”, Salesforce explains.
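To see how that could happen mechanically, consider a toy filter (purely illustrative, not Salesforce’s model) that scores a resume by the average similarity of its words to the job title’s embedding, reusing `vectors` and `cosine` from the first sketch above. Any gender signal baked into the ‘programmer’ vector flows straight into the ranking:

```python
def score_resume(resume_words: list, job_title: str, vectors: dict) -> float:
    """Average cosine similarity between resume words and the job title vector.

    If the 'programmer' embedding leans toward 'he', resumes containing
    feminine-associated words score lower through no fault of the candidate.
    """
    job_vec = vectors[job_title]
    sims = [cosine(vectors[w], job_vec) for w in resume_words if w in vectors]
    return sum(sims) / len(sims) if sims else 0.0
```

Double-Hard Debias aims to strip that signal out of the vectors before any such model is built on top of them.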
