[Image: Robot holding scales representing bias, illustrating debiasing artificial intelligence.]

Debiasing artificial intelligence

AI platforms are only as good as the data they keep

Written by: Charna Parkey, cross-posted from Medium

More and more companies are turning to AI platforms for critical decision-making within their organizations. With that comes rising concern about bias in the insights AI provides. There are numerous examples of how bias creeps into AI, leading to terrible outcomes, and no company wants to be caught up in that.

If your company is evaluating AI platforms, it’s important to consider how the system is built and how that influences the actions it might suggest for you. At Textio, we recently received a detailed questionnaire from one of our customers, a major consumer brand, and we thought the questions were valuable enough to share more broadly. If you are a buyer of AI solutions, here are three good questions you’ll want any AI vendor to answer before you make a purchase.

  1. How do you currently train your AI? (Meaning: what data, logic, and processes the vendor uses to avoid common statistical and machine learning pitfalls.)
  2. Do you think your system is at risk of discriminatory outcomes via unconscious bias? If so, why? If not, why not?
  3. What actions are you taking to ensure your system removes bias or the risk of discriminatory outcomes?

How Textio tackles bias in its own algorithms

Debiasing artificial intelligence is a critical topic at Textio and something our team has thought deeply about in designing the way our augmented writing platform works. One of the things that makes Textio unique is that you not only receive the output of an algorithm doing the work for you; you also see how and why you are getting that guidance inside Textio’s augmented writing platform.

Let’s start with the learning loop: it begins when you use Textio and runs all the way through data exchange and product changes, continuing the virtuous cycle. Ninety-five percent of Textio customers participate in data exchange, which means that what happens as a result of your language shapes future guidance in Textio’s platform. The Textio Score is trained on time to fill, candidate pool quality, and response rate across all of Textio’s data.
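To make that concrete, here is a minimal sketch of outcome-trained scoring: a model learns which phrasing predicted better hiring outcomes in past posts, then scores new drafts. The bag-of-words pipeline, example posts, and blended outcome numbers are illustrative assumptions, not Textio’s actual model or data.

```python
# A minimal sketch of training a score on hiring outcomes.
# Everything here is an illustrative assumption, not Textio's implementation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: past job posts paired with an outcome signal
# blending time to fill, candidate pool quality, and response rate (0-100).
posts = [
    "Seeking a rockstar ninja to dominate our aggressive sales targets",
    "Join a collaborative team building accessible developer tools",
    "We value mentorship, flexible hours, and transparent communication",
]
outcomes = [32.0, 78.0, 85.0]

# Learn which phrasing predicts better outcomes, then score new drafts.
score_model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), Ridge())
score_model.fit(posts, outcomes)

draft = "Looking for a collaborative engineer to mentor junior teammates"
print(round(score_model.predict([draft])[0], 1))  # predicted outcome score
```

The point of the sketch is the direction of training: the score is fit to what actually happened after posts went live, not to anyone’s opinion of good writing.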

Similarly, Textio’s gender tone meter is trained on how applicants identify their own gender, when they choose to share it. Textio then shows you, statistically, where unconscious bias shows up in your writing at scale. This means that as your users write, they are interrupted at the moment of potential bias so they can remove those phrases.
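Here is a minimal sketch of that interruption pattern: scan the draft as the user writes and flag phrases at the position where they appear. The fixed phrase list and tone labels below are hypothetical stand-ins; Textio’s tone meter is trained on aggregate applicant data rather than a hand-curated list.

```python
# A minimal sketch of flagging potentially biased phrases as a user writes.
# The phrase list and tone labels are hypothetical, not Textio's model.
import re

TONE_PHRASES = {
    "rockstar": "masculine",
    "dominate": "masculine",
    "aggressive": "masculine",
    "nurture": "feminine",
    "collaborative": "feminine",
}

def flag_tone(draft: str):
    """Yield (phrase, tone, offset) for each flagged phrase in the draft."""
    for phrase, tone in TONE_PHRASES.items():
        for match in re.finditer(rf"\b{re.escape(phrase)}\b", draft, re.IGNORECASE):
            yield phrase, tone, match.start()

for phrase, tone, offset in flag_tone("We need a rockstar to dominate the market"):
    print(f"'{phrase}' at offset {offset} skews {tone}; consider a neutral alternative")
```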

Then, at scale, your library and usage reports let you monitor your company’s progress toward a high score and a neutral tone, ensuring you are reaching the broadest possible applicant pool.
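A usage report like that can be thought of as a simple aggregation over already-scored posts. The fields and values below are illustrative assumptions about what such a report might track, not the actual report format.

```python
# A minimal sketch of monitoring library-wide progress, assuming each post
# already has a score (0-100) and a tone value (negative = masculine-coded,
# positive = feminine-coded, 0 = neutral). All values are hypothetical.
from statistics import mean

library = [
    {"title": "Sales Lead", "score": 62, "tone": -0.4},
    {"title": "Backend Engineer", "score": 81, "tone": 0.1},
    {"title": "Product Designer", "score": 74, "tone": 0.0},
]

avg_score = mean(post["score"] for post in library)
avg_tone = mean(post["tone"] for post in library)
print(f"Average score: {avg_score:.0f} / 100; average tone: {avg_tone:+.2f} (target: near 0)")
```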

Once you begin data exchange and contribute enough qualifying jobs, your customer success engineer will be able to tell you how the choices your users make while writing impact your applicant pool.

Because Textio continuously collects data from around the world and across every industry, its algorithms are constantly updated to reflect the statistical impact your words have on candidates. This ensures that your users always have the current guidance they need to interrupt bias the moment it happens.

Learn more about how language impacts your hiring at textio.com

Tune in to Charna’s podcast, Signal from Noise, for more on topics like these