5 lessons on how to fight bias in AI from Pause Fest

Lessons from Google, Microsoft, and the best tech companies


Last week I was invited by KJR, a strategic IT advisory firm, to speak on their panel at Pause Fest on ‘The risk is real: uncovering ethics and bias in AI’, joining other AI experts from AI Health and KJR.

Bias, ethics, and diversity in tech seemed to be a big topic at Pause Fest with several sessions covering these areas, including from Google and Microsoft.

Here are 5 lessons from the panel and Pause Fest in general.

1. Develop targets for team diversity

Sam Keene, AR Lead UX Engineer at Google, gave a talk on creating world-scale AR. This talk occurred the day before Google announced their new Google Maps AR product, which has been released to a select few users only. Are you one of the lucky ones?

Sam mentioned that almost everyone on the team is cross-disciplinary and involved in research, development, and design. He also mentioned that diversity is something he strives for in his team.

Having a clear idea of what a good mix of people in a team looks like makes it easier to hire correctly. Developing targets for gender mix percentages, educational backgrounds, and even cultural backgrounds can help. Companies can no longer design for a homogeneous-looking market, so it makes sense to have a diverse team.

While there is a debate over hiring based on merit, you don’t have to compromise. You just have to know who you want in your team and make sure you’re reaching that demographic.

2. Mandate code and solution reviews

Cody Middlebrooke, Founder of AI Health, mentioned his time at Microsoft, where a number of measures were put in place to limit the chance of poor features or products being released.

Microsoft had a process where engineers couldn’t check code into source control unless it had been reviewed by an independent person who was not part of the project.

There was also a series of automated tests, with test variants and tolerances defined for each feature.

For every new feature or project, engineers were also required to come up with three solutions (for example, three design patterns), explain the rationale behind the one they chose, and then have all three solutions independently reviewed.

Develop a robust code review process and include unit tests that check a model’s output is what you expect.
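To make that concrete, here is a minimal sketch of what such unit tests could look like using pytest and scikit-learn. The model, features, stand-in “protected attribute”, and tolerance below are hypothetical placeholders rather than anything described on the panel; the point is simply that output and parity checks can run automatically alongside code review.

```python
# Minimal sketch: unit tests for model output (hypothetical model and data).
import numpy as np
import pytest
from sklearn.linear_model import LogisticRegression


@pytest.fixture
def model_and_data():
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 4))          # 4 arbitrary features
    group = (X[:, 0] > 0).astype(int)      # stand-in "protected attribute"
    y = (X[:, 1] > 0).astype(int)          # label unrelated to the group
    model = LogisticRegression().fit(X, y)
    return model, X, group


def test_outputs_are_valid_probabilities(model_and_data):
    model, X, _ = model_and_data
    proba = model.predict_proba(X)[:, 1]
    assert np.all((proba >= 0) & (proba <= 1))


def test_positive_rate_similar_across_groups(model_and_data):
    # Crude parity check: fail the build if positive prediction rates
    # between the two groups diverge by more than an agreed tolerance.
    model, X, group = model_and_data
    preds = model.predict(X)
    rate_a = preds[group == 0].mean()
    rate_b = preds[group == 1].mean()
    assert abs(rate_a - rate_b) < 0.2      # tolerance is an assumption
```

Tests like these don’t replace an independent human reviewer, but they make the agreed tolerances explicit and repeatable.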

3. Adopt an AI ethics framework

Code reviews to check for bias and ethics should be based on an ethics framework. You need to know what you’re checking against.

On the panel, I mentioned Microsoft’s six ethical principles to guide the development and use of AI:

  • Fairness — AI systems should treat all people fairly
  • Reliability and safety — AI systems should perform reliably and safely
  • Privacy and security — AI systems should be secure and respect privacy
  • Inclusiveness — AI systems should empower everyone and engage people
  • Transparency — AI systems should be understandable
  • Accountability — AI systems should have algorithmic accountability

This is similar to how Google assesses its AI applications against its own objectives, which state that AI should:

  • Be socially beneficial
  • Avoid creating or reinforcing unfair bias
  • Be built and tested for safety
  • Be accountable to people
  • Incorporate privacy design principles
  • Uphold high standards of scientific excellence
  • Be made available for uses that accord with these principles

These principles should be a good base to develop your own AI ethics framework.

4. Use tools for detecting algorithmic bias

IBM has developed a real-time bias checker that can run over any algorithm to analyse how and why it makes decisions. Its open-source AI Fairness 360 toolkit will also scan for signs of bias and recommend adjustments.
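As an illustration, here is a rough sketch of what a dataset bias check with AI Fairness 360 (pip install aif360) might look like. The tiny dataset, column names, and privileged/unprivileged groups are assumptions for demonstration only.

```python
# Rough sketch of a bias check with IBM's AI Fairness 360 toolkit.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical data: "sex" as the protected attribute, "hired" as the label.
df = pd.DataFrame({
    "age":   [25, 40, 35, 50, 23, 60],
    "sex":   [0, 1, 1, 0, 0, 1],
    "hired": [0, 1, 1, 1, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# A disparate impact far from 1.0, or a statistical parity difference far
# from 0, suggests the favourable outcome is unevenly distributed.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```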

Google has launched a ‘What-If’ tool, which helps users understand how their machine learning algorithms are working.

Microsoft also announced in early 2018 that it is launching a bias detection toolkit of its own. It does not seem to be available yet, unless it is baked into the AI services Microsoft already offers.

Karolyn Gainfort, Principal Consultant at KJR, talked about a workshop Google held on how to build ethics rules for driverless cars. At the end of the workshop, the solution architect said the exercise was useless, because human drivers don’t make complex calculations on the value of human life based on characteristics such as gender and age. You just slam the brakes and hope you don’t kill anybody.

When in doubt, build in human intuition; humans don’t live by complex rules about who to kill on the road.

5. Allow users to easily provide feedback

Ricardo Prada, Director and Principal UX Researcher at Google, talked about how Google lets users report errors or inappropriate content by embedding a reporting tool directly into each service.

In some of Google’s most advanced technologies using machine learning, they have been working to prevent that technology from perpetuating human bias. This includes removing offensive or misleading information from the top of search results, and adding a feedback tool in the search bar so people can flag inappropriate autocomplete suggestions.

Here is a video that explains more about how Google is tackling human bias in machine learning.


I write about AI and transhumanism. Follow me on Medium if you’re also trying to make sense of a world impacted by emerging technologies.

Alyse Sue is a freelance writer who covers emerging tech and transhumanism. She is also a developer working on solving grand global challenges with AI and blockchain, and co-founder of two healthtech startups. Prior to this, Alyse was a founding team member of KPMG’s Innovate team, focused on helping corporates innovate.
