Doctors are known to work long hours, and a team of researchers at Google wants to help them by developing speech recognition technology targeted at the medical community.
Working with Stanford University scientists, the Google Brain team hopes to build on artificial intelligence (AI) technologies already used in Google Assistant, Google Home and Google Translate to create automatic speech recognition (ASR) models that can transcribe conversations involving multiple speakers.
The researchers wrote in a blog post: “While most of the current ASR solutions in medical domain focus on transcribing doctor dictations (ie, single speaker speech consisting of predictable medical terminology), our research shows that it is possible to build an ASR model which can handle multiple speaker conversations covering everything from weather to complex medical diagnosis.”
The system would not only transcribe the conversation but also take relevant notes automatically, reducing workload and freeing up time for doctors to focus on patients.
The researchers used 14,000 hours of speech from anonymised conversations to train their ASR model.
The best-performing model achieved a word error rate of 18.3%.
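Word error rate, the standard metric for ASR systems, is the word-level edit distance between a reference transcript and the system's output, divided by the length of the reference. The sketch below (illustrative only, not from the Google paper; the sentences are invented examples) shows how the figure is computed:

```python
# Illustrative sketch: word error rate (WER) is the word-level edit
# distance (substitutions + insertions + deletions) between a reference
# transcript and an ASR hypothesis, divided by the reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words (Levenshtein distance)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("reports" -> "report") and one deletion ("mild")
# against a six-word reference gives 2/6.
print(round(wer("the patient reports mild chest pain",
                "the patient report chest pain"), 3))  # -> 0.333
```

On this scale, an 18.3% rate means that roughly one word in five differs from what was actually said, which is why the researchers see room for further training before the system is reliable enough for use.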
The hope is that with more training, the system could improve its accuracy and be considered reliable enough for use.
The researchers added: “We hope these technologies will not only help return joy to practice by facilitating doctors and scribes with their everyday workload, but also help the patients get more dedicated and thorough medical attention, ideally, leading to better care.”
The findings are published on the open-access preprint server arXiv.