Speech Recognition

Hello OEMR Users,

I work in the speech recognition space, and I had a client reach out asking about this solution. After reading up on this project and learning more about it, I wanted to ask what other folks have used on this platform for speech recognition, and whether there is any growing need that isn't addressed yet.

If anyone out there has something they would like addressed by speech recognition, I would be interested to hear about it.

Thank you,

Chris

Welcome Chris
You’d have to do a search on our wiki or this forum. While we do have a dictation encounter form, we don’t have an embedded engine for the purpose.
We have easy module support for third-party use, or we would consider including interesting modules as part of production builds.
So if you do decide on an integration, please consider asking your sponsors about donating your product to the community version.
Dictation is useful in all our settings, so I'd think any well-done implementation would be welcomed.

Hello Jerry,

Thank you for the feedback. To better clarify my original post: I work in the speech recognition space and I am familiar with just about every commercial option out there (e.g. Dragon PE/NE/One, DAX, Medic, CSpeak, etc.). Most of those solutions have ways of interacting with external systems and controls, so that type of integration already exists in most modern speech recognition products.

I am not looking to build an OpenEMR-specific speech recognition solution, but I am curious whether a particular need exists on this platform that speech recognition could address. For example, I worked on a project for AthenaOne to help users navigate the EMR with their voice: they could say "go to hpi", and that text field would be loaded and focused.
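For anyone curious what that kind of voice navigation looks like in practice, here is a minimal sketch. The phrase-to-selector map and the field ids are purely illustrative (not actual OpenEMR element ids), and the Web Speech API wiring is shown only as a comment since it is browser-specific:

```javascript
// Hypothetical mapping of spoken phrases to form-field selectors.
// The selectors are illustrative, not real OpenEMR ids.
const voiceCommands = {
  "go to hpi": "#hpi",
  "go to assessment": "#assessment",
  "go to plan": "#plan",
};

// Normalize a transcript (case, trailing punctuation) and resolve it
// to a target selector, or null when no command matches.
function resolveCommand(transcript) {
  const phrase = transcript.trim().toLowerCase().replace(/[.,!?]+$/, "");
  return voiceCommands[phrase] ?? null;
}

// In a browser this would be wired to the Web Speech API, roughly:
//   recognition.onresult = (e) => {
//     const sel = resolveCommand(e.results[0][0].transcript);
//     if (sel) document.querySelector(sel).focus();
//   };
```

The point is that the speech engine only has to hand over a transcript; everything EMR-specific lives in that small lookup table.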

Every EMR/EHR I have worked with has a different user culture: some really embrace speech recognition, while others (e.g. Greenway Intergy) are more click-oriented and don't. I posted in this category to poll the users and see what they felt might be missing that a speech recognition solution could provide. If this happens to be an EMR with a user base that isn't really interested in speech recognition, that is cool too. The website made me curious enough to reach out here and find out.

Thank you.

One of the areas that would help clinicians is dictating notes. This project uses CKEditor for entering formatted notes in a form. We tried a plugin, but that did not go further because the practice was looking for text that reflects context - e.g. the voice input "last encounter on" would have the software look up that patient's previous encounter and include the date, producing "last encounter on 1/28/2023" as output. Theoretically that seemed feasible, but the project's budget was not realistic.

Televisit documentation may be another area that could be a good fit for speech recognition.

If there is no client-side setup needed, patient portal may benefit from having the voice interface as an option.

Hello MD Support,

I have some history with CKEditor. Looking in the demo system I am not seeing an instance of CKE though. All of the text fields I have seen are either input or textarea HTML elements. If you are familiar with a place that has a CKEditor field in OEMR, would you point me in that direction?

Out of the gate, Clinically Speaking's CSpeak and Dragon One have compatibility with CKEditor, but that requires the Nuance license fee. The actual dictation is very doable. As you noted, the command integration you're talking about (e.g. dictate "insert last encounter" and the service pulls that info and writes it out) is feasible, but ultimately that amounts to two solutions working together: the first is the speech recognition, and the second is the integration on the OEMR side that the speech rec solution would call to retrieve that info.
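To make that two-part split concrete, here is a rough sketch of the OEMR-side half. Everything here is hypothetical: `lookupLastEncounterDate` stands in for whatever real API call the integration would make, and the command table is just an example:

```javascript
// Hypothetical command handlers: the speech engine recognizes a phrase,
// and an OpenEMR-side service resolves it to patient data.
const fetchers = {
  "insert last encounter": async (patientId) =>
    `last encounter on ${await lookupLastEncounterDate(patientId)}`,
};

// Stand-in for a real OpenEMR API call; mock data for illustration.
async function lookupLastEncounterDate(patientId) {
  const mockDb = { 42: "1/28/2023" };
  return mockDb[patientId];
}

// Expand a dictated command into the text the note should contain,
// or pass the raw transcript through when no command matches.
async function expandCommand(transcript, patientId) {
  const handler = fetchers[transcript.trim().toLowerCase()];
  return handler ? await handler(patientId) : transcript;
}
```

The speech recognition product never needs to know about encounters; it just forwards transcripts, and anything that matches a registered command gets expanded by the EMR-side service.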

If I get some time this weekend I will try to look into the second half of that equation and see what groundwork may be necessary for something like that to happen. The request kind of reminds me of how Centricity (now Athena Practice/Flow) works with their MEL system.

Thanks.

The editor is brought in as part of a 'layout' - a user-specified encounter/patient information form. To see it in action, follow these steps from the docs. We tried the Ckwebspeech plugin available for CKEditor. If a practice requires just plain transcribing, that plugin or the various products you already referred to work well. But in the age of GPTs, everyone - patients and clinicians alike - wants their minds to be read flawlessly!!!

From a dev perspective, the project isn't married to CKEditor. If something more comprehensive and flexible is used to fill a textarea, it can work.
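Since the project isn't married to CKEditor, one way to keep dictation output editor-agnostic is a thin adapter layer. This is only a sketch under that assumption; the CKEditor calls shown are the CKEditor 4 API, and CKEditor 5 differs:

```javascript
// Hypothetical adapter so dictation output isn't tied to one editor.
// Each target exposes the same insertText()/getText() interface.
function makeTextareaTarget(textarea) {
  return {
    insertText(text) {
      textarea.value += text; // append at end; caret handling omitted
    },
    getText: () => textarea.value,
  };
}

function makeCKEditorTarget(editor) {
  return {
    // CKEditor 4 exposes insertText() and getData(); CKEditor 5 differs.
    insertText: (text) => editor.insertText(text),
    getText: () => editor.getData(),
  };
}

// Dictation code only ever sees the adapter interface.
function dictateInto(target, transcript) {
  target.insertText(transcript + " ");
}
```

Swapping editors then means writing one new adapter rather than touching the speech integration.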

Personally I think speech recognition will work well for interactions that can be guided like your examples.

This one made me laugh pretty hard. You are absolutely right. People have some serious overestimation on how these new tools work.

Here are some of the solutions that other psychiatrists are using:

I have used MdHub a bit but it’s cumbersome. Despite that, it can create an entire encounter note very impressively.