
Google health home for doctors (consultation and documentation), smart health device review

Google health home at the service of doctors: recording and transcribing documentation. By Rodolphe Degandt. Last modified: 22 Apr 2019

Google health home for physician documentation

Now the different worlds of this blog meet: a smart connected health object with voice recognition in the service of e-health. Something to delight and excite me. The voice recognition technology used by Google Assistant could soon become a transcription tool for documenting conversations between patients and doctors. The technical difficulty lies in identifying the different interlocutors during the recording of the conversation in order to produce a faithful text transcription. But imagine a Google health home where a Google Home Mini or Google Home Max accompanies the doctor in dialogues with patients, recording everything, converting it to text, and filing it.

A study by Google researchers

On November 20, a group of Google researchers presented results for two automatic speech recognition (ASR) methodologies for transcribing medical conversations. They concluded that both models could be used to save practitioners' time.

According to their study, doctors now spend an average of 6 of their 11 working hours a day in an EHR environment1. Of those 6 hours, 1.5 hours are devoted specifically to documentation. Given the growing shortage of doctors and high levels of stress and burnout, ASR technology2 that could speed up the transcription of the clinical visit seemed imminent: "It's a fundamental technology that information retrieval and summarization technologies can use to ease the documentation burden."

1 An EHR (electronic health record), also called an electronic medical record (EMR), is the collection and storage of patient health data in digital format (as opposed to dictaphone recording plus manual transcription).

2 ASR stands for Automatic Speech Recognition (I usually just say voice recognition).

Two speech recognition methods tested on multi-speaker conversations

Currently, most ASR products designed for medical transcription are limited to medical dictation: the voice recognition technology only handles a single speaker. Conversations between physicians and their patients are harder because of overlapping dialogue, varying distance and voice quality, differences in speech patterns, and the lexical field used.

To explore how transcription of conversations could help, the researchers developed and evaluated two ASR techniques.

  • The first is the CTC model, for Connectionist Temporal Classification (quite a name!). In practice, it is a technique that segments the audio into phonemes in a given context; a decoder system then transcribes each one and assigns it to the right speaker. CTC was originally designed to facilitate multi-language speech recognition, which can well be the case between a surgeon and a patient.

  • The other, known as the Listen, Attend and Spell (LAS) model, is a multi-part neural network that translates speech into individual characters of the language, then sequentially selects subsequent outputs based on previous predictions.
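To make the CTC idea concrete, here is a minimal toy sketch (my own illustration, not Google's actual model) of the CTC decoding rule in Python: the network emits one symbol per audio frame, including a special "blank" symbol, and the transcript is obtained by collapsing repeated symbols and dropping the blanks.

```python
# Toy CTC decoding rule (hypothetical labels for illustration only):
# collapse runs of repeated symbols, then drop the blank symbol.
BLANK = "-"

def ctc_collapse(frames):
    """Map per-frame symbols to a transcript: merge runs, remove blanks."""
    out = []
    prev = None
    for sym in frames:
        # Only emit a symbol when it starts a new run and is not a blank.
        if sym != prev and sym != BLANK:
            out.append(sym)
        prev = sym
    return "".join(out)

# ctc_collapse(list("hh-e-ll-ll-oo")) → "hello"
```

The blank symbol is what lets CTC distinguish a genuinely repeated letter (as in "ll") from the same letter held across several audio frames.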

Each model was tested on over 14,000 hours of (anonymized) recorded medical conversations.

Review results

While significant work was required to clean up the recordings, the tests produced encouraging results.

  • The CTC model ultimately reached a 20.1% word error rate. The researchers' analysis of the errors showed that most occurred near the beginning and the end of the dialogues, during interventions of less than a second, and more often in the patients' speech than in the doctor's. So I still think doctors speak better than they write.

  • The LAS system proved robust to data-alignment errors as well as noise, reaching an error rate of 18.3%. Note that the errors were rarely related to medical terms; most occurred among more conversational expressions. In addition, the LAS model achieved a 98.2% recall rate for drug names mentioned in a medical conversation.
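Word error rate, the metric behind the 20.1% and 18.3% figures above, is simply the word-level edit distance between the reference transcript and the ASR output, divided by the number of reference words. A minimal sketch (function name and example transcripts are my own):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Standard dynamic-programming edit distance; substitutions,
    # insertions, and deletions each cost 1.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four: wer(...) → 0.25
print(wer("take two tablets daily", "take too tablets daily"))
```

A 20% WER thus means roughly one word in five is inserted, deleted, or substituted relative to the human reference transcript.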

In conclusion, the research team said these tests were very promising, because technical medical terms did not present any particular difficulty. Two of the authors, Katherine Chou, product manager, and Chung-Cheng Chiu, software engineer, also said they would work with doctors and researchers at Stanford University to keep moving forward.

So Google health home at the service of doctors?

As always, Google thinks of its users, here doctors and surgeons: "We hope that these technologies will not only help return joy to practice by making the daily work of doctors and scribes easier, but also help patients get more engaged and thorough medical attention, which should ideally lead to better care." So that's one more idea for doing business with Google health home.

Download the study "Speech Recognition for Medical Conversations" (November 20, 2017). Google researchers who conducted the study: Chung-Cheng Chiu, Anshuman Tripathi, Katherine Chou, Chris Co, Navdeep Jaitly, Diana Jaunzeikare, Anjuli Kannan, Patrick Nguyen, Hasim Sak, Ananth Sankar, Justin Tansuwan, Nathan Wan, Yonghui Wu and Xuedong Zhang.
