Hundreds of millions of people are using voice assistants – whether through smart speakers, smartphones or other devices – but if those people are black, are they being poorly served?
A new study from Stanford University suggests that, in the US, they might be. “Our results point to hurdles faced by African Americans in using increasingly widespread tools driven by speech recognition technology,” the study’s authors write. They tested Amazon’s Alexa, Apple’s Siri, Google Assistant, Microsoft’s Cortana and IBM’s Watson Assistant by having each system transcribe structured interviews with 42 white speakers and 73 black speakers.
“We found that all five ASR systems exhibited substantial racial disparities, with an average word error rate (WER) of 0.35 for black speakers compared with 0.19 for white speakers,” the study explains – put roughly, around a third of black speakers’ words were mistranscribed, against around a fifth of white speakers’. It also suggests “using more diverse training datasets that include African American Vernacular English” as one possible solution. If these companies aren’t doing that already, we’d hope the study gives them a firm prod in the right direction.
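For readers unfamiliar with the metric: word error rate is conventionally computed as the word-level edit distance (substitutions, insertions and deletions) between a reference transcript and the system’s output, divided by the number of words in the reference. This is a minimal illustrative sketch of that standard formula, not code from the Stanford study:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown fox"))  # 0.0
print(wer("the quick brown fox", "a quick fox"))          # 0.5
```

A WER of 0.35, then, means roughly one error for every three words spoken – enough to garble a transcript badly.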