How Howard University Is Helping Tech Understand Black Speech
Howard and Google teamed up to improve voice recognition.
Lucretia Williams, a researcher at Howard University, is regularly interviewed about her work, and she’s noticed that her quotes sometimes aren’t quite right (the “-ed” might get dropped from a word, for example). She attributes this issue to transcription technology, which—like all voice-recognition software—can struggle with Black speech. That’s a problem that she and her team are now trying to address. Williams has spent much of the past two years leading an effort to help voice-recognition technology better understand Black voices.
Project Elevate Black Voices, a partnership between Howard and Google, is an effort to broaden the data sets that teach software to recognize human speech. Black speech hasn’t traditionally been well represented, leading the systems to struggle with the unique grammar, pronunciation, and vocabulary of African American English. The result is that Black speakers using voice-driven AI tools and other technologies often encounter responses like “I’m not sure I understand.” The idea for the project was originally hatched by another Howard professor, Gloria Washington, and Google researcher Courtney Heldreth. (Google is funding the project.)
Tools like Siri and Amazon’s Alexa interpret and respond to commands like “Play Beyoncé,” but a 2020 study found that top-tier voice-interpretation systems have higher error rates for Black users than for white ones—a 22 percent gap for Apple, 15 percent for Amazon, and 12 percent for Google. Williams says this forces many Black Americans to address these pieces of technology in standard English. “You shouldn’t have to code-switch when you talk to your personal devices,” she says.
To help solve that issue, Williams and her team recruited more than 530 African Americans across 32 US states to participate. “Oftentimes, when research is done like this in vulnerable communities, the researchers get more out of it than the actual participants,” Williams says. In this case, participants received up to $599 for three weeks of answering questions, and the team is being careful with how it uses the material it has collected. The data is currently available only to Google and to historically Black colleges and universities that apply for access for specific projects. Howard is maintaining ownership in order to ensure that it isn’t misused in ways that would compromise privacy or otherwise harm participants.
As a result of all that collection effort, Project Elevate Black Voices has now created a data set of 600 hours of responses to questions like “What are your hobbies?” A Black-owned transcription company then processed all the recordings, and Google will use them to improve its offerings.
That’s just the first stage of the project, Williams says. The hope is to expand the data set to include additional dialects from across the African diaspora that are spoken in the United States. “When certain voices don’t get understood, it is a problem,” says Williams. “You shouldn’t have to feel excluded from the technologies that you use and pay for.”
This article appears in the August 2025 issue of Washingtonian.