Bringing Suki.ai to OpenEMR, looking for early adopters

Some of you may have heard of Suki.ai. Interestingly, they use OpenEMR in their promotional material. I have talked with them, and the platform is expensive: $189/mo/provider, which goes to Suki. On top of that, the company that does the integration will pay $10k/year in platform fees alone.
If every registered user of OpenEMR paid the integration company a one-time fee of $3, that would cover a year of platform fees. That is in addition to the monthly fee providers would be paying.
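
For anyone who wants to sanity-check those figures, here is my back-of-envelope math (the dollar amounts are the ones quoted above; the rest is just arithmetic):

```python
# Back-of-envelope math on the fees quoted above.
PLATFORM_FEE_PER_YEAR = 10_000   # integration company's platform fees (USD/yr)
SUKI_FEE_PER_PROVIDER = 189      # Suki's subscription (USD/provider/mo)
ONE_TIME_FEE = 3                 # proposed one-time fee per registered user (USD)

users_needed = PLATFORM_FEE_PER_YEAR / ONE_TIME_FEE
print(f"Users at ${ONE_TIME_FEE} each to cover one year: {users_needed:,.0f}")  # ~3,333

annual_provider_cost = SUKI_FEE_PER_PROVIDER * 12
print(f"Each provider also pays Suki ${annual_provider_cost:,}/yr")  # $2,268
```

So roughly 3,300 one-time contributions cover a single year of platform fees, and each participating provider still owes Suki about $2,268/year on top of that.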

Is the juice worth the squeeze?
Let me hear from you all.

Me!! Me!! Me!!

I am currently using Nabla for AI listening, but that is not likely to be sustainable. I have also tried MedHub.io along with the ChartNote listener, but Nabla has been the best since it has a Chrome plugin for ease of use.

It would be amazing if Suki could fill in all the bits and bobs of encounters and maybe other sections of the EMR.

Thanks as always, Sherwin :vulcan_salute:

Also, we pay for yearly use of Poe for access to all the other GPT agents. Just some additional information for you.

Thank you for your comments. I really appreciate your insight.
We will be starting an Indiegogo crowdfunding campaign to fund this endeavor.
Be on the lookout for the post here.

We need 1000 users to pull this off. $389 will cover the cost of one module license.
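
Reading that as each of the 1,000 backers buying one $389 module license (my interpretation; correct me if the two numbers relate differently):

```python
# My reading of the campaign math: one $389 module license per backer.
# Correct me if the $389 and the 1,000 users relate differently.
BACKERS_NEEDED = 1_000
LICENSE_PRICE = 389  # USD per module license

print(f"Campaign target: ${BACKERS_NEEDED * LICENSE_PRICE:,}")  # $389,000
```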

I looked over Suki’s page and didn’t see any listing of what patient data they collect or how it’s being used.

ONC certification now requires a lot of information on how these AI systems process and utilize data if the AI system comes bundled with OpenEMR. That requirement goes into effect January 1st, 2025. We’ll need more information on all of that if we want an integrated option.

This will also have ramifications in EU markets, where AI is becoming much more tightly regulated, if there isn’t any information on how the model trains on, consumes, and utilizes patient data.

One big area of concern is how they are siloing patient data. Many LLMs have been broken to reveal prompt responses from other users (ChatGPT, for example), and that would be very concerning in terms of patient safety and legal compliance. I’m assuming the large fee is hopefully paying for a siloed instance of the LLM that exists independently of other instances. If it’s just a backend to Claude or ChatGPT (the non-Azure enterprise version), it won’t work. Again, we’d need more information on this before going too deep down this avenue.
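
To make the siloing question concrete, here is the distinction I mean, written out as data. Everything below is illustrative; none of it reflects Suki's actual (undisclosed) architecture:

```python
# Illustrative only: two deployment models a vendor might be using.
# Endpoints and descriptions are hypothetical, not Suki's architecture.

shared_backend = {
    "endpoint": "https://api.example-llm.com/v1/chat",  # hypothetical public multi-tenant API
    "isolation": "logical only; shared model and infrastructure across customers",
    "risk": "cross-tenant prompt/response leakage has been demonstrated on public LLMs",
}

dedicated_instance = {
    "endpoint": "https://suki-tenant.example.net/v1/chat",  # hypothetical private deployment
    "isolation": "per-customer (or per-vendor) instance, no shared context",
    "risk": "contained to one tenant; this is what the large fee should be buying",
}

# The due-diligence question: which of these does the integration actually use?
```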

For those interested, these are the pieces of information we need to be able to provide for ONC certification on any kind of predictive (AI) decision support intervention system. (I’ve sketched how these could be captured as machine-readable metadata just after the list.)

For Predictive Decision Support Interventions:
  1. Details and output of the intervention, including:
    1. Name and contact information for the intervention developer;
    2. Funding source of the technical implementation for the intervention(s) development;
    3. Description of the value that the intervention produces as an output; and
    4. Whether the intervention output is a prediction, classification, recommendation, evaluation, analysis, or other type of output.
  2. Purpose of the intervention, including:
    1. Intended use of the intervention;
    2. Intended patient population(s) for the intervention’s use;
    3. Intended user(s); and
    4. Intended decision-making role for which the intervention was designed (e.g., informs, augments, or replaces clinical management).
  3. Cautioned out-of-scope use of the intervention, including:
    1. Description of tasks, situations, or populations where a user is cautioned against applying the intervention; and
    2. Known risks, inappropriate settings, inappropriate uses, or known limitations.
  4. Intervention development details and input features, including at a minimum:
    1. Exclusion and inclusion criteria that influenced the training data set;
    2. Use of variables in paragraph (b)(11)(iv)(A)(5)-(13) as input features;
    3. Description of demographic representativeness according to variables in paragraph (b)(11)(iv)(A)(5)-(13), including, at a minimum, those used as input features in the intervention; and
    4. Description of relevance of training data to the intended deployed setting.
  5. Process used to ensure fairness in development of the intervention, including:
    1. Description of the approach the intervention developer has taken to ensure that the intervention’s output is fair; and
    2. Description of approaches to manage, reduce, or eliminate bias.
  6. External validation process, including:
    1. Description of the data source, clinical setting, or environment where the intervention’s validity and fairness have been assessed, other than the source of training and testing data;
    2. Party that conducted the external testing;
    3. Description of demographic representativeness of external data according to variables in paragraph (b)(11)(iv)(A)(5)-(13), including, at a minimum, those used as input features in the intervention; and
    4. Description of the external validation process.
  7. Quantitative measures of performance, including:
    1. Validity of the intervention in test data derived from the same source as the initial training data;
    2. Fairness of the intervention in test data derived from the same source as the initial training data;
    3. Validity of the intervention in data external to or from a different source than the initial training data;
    4. Fairness of the intervention in data external to or from a different source than the initial training data; and
    5. References to evaluation of use of the intervention on outcomes, including bibliographic citations or hyperlinks to evaluations of how well the intervention reduced morbidity, mortality, length of stay, or other outcomes.
  8. Ongoing maintenance of intervention implementation and use, including:
    1. Description of the process and frequency by which the intervention’s validity is monitored over time;
    2. Validity of the intervention in local data;
    3. Description of the process and frequency by which the intervention’s fairness is monitored over time; and
    4. Fairness of the intervention in local data.
  9. Update and continued validation or fairness assessment schedule, including:
    1. Description of the process and frequency by which the intervention is updated; and
    2. Description of the frequency by which the intervention’s performance is corrected when risks related to validity and fairness are identified.
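
This is not an ONC-specified format, just my own sketch of how a module could carry those source attributes as machine-readable metadata. Every field name below is my invention:

```python
# Hypothetical structure for the ONC "source attributes" above. Field names
# are my own, not an ONC-specified schema; a vendor would fill these in.
predictive_dsi_attributes = {
    "details_and_output": {
        "developer_contact": None,
        "funding_source": None,
        "output_description": None,
        "output_type": None,  # prediction | classification | recommendation | evaluation | analysis | other
    },
    "purpose": {
        "intended_use": None,
        "intended_patient_populations": [],
        "intended_users": [],
        "decision_making_role": None,  # informs | augments | replaces
    },
    "out_of_scope_use": {
        "cautioned_uses": [],
        "known_risks_and_limitations": [],
    },
    "development_and_input_features": {
        "training_data_inclusion_exclusion": None,
        "demographic_variables_as_inputs": [],  # per (b)(11)(iv)(A)(5)-(13)
        "demographic_representativeness": None,
        "training_data_relevance": None,
    },
    "fairness_process": {
        "fairness_approach": None,
        "bias_mitigation": None,
    },
    "external_validation": {
        "external_data_source": None,
        "testing_party": None,
        "external_demographic_representativeness": None,
        "validation_process": None,
    },
    "quantitative_performance": {
        "internal_validity": None,
        "internal_fairness": None,
        "external_validity": None,
        "external_fairness": None,
        "outcome_evaluations": [],  # citations / hyperlinks
    },
    "ongoing_maintenance": {
        "validity_monitoring": None,
        "local_validity": None,
        "fairness_monitoring": None,
        "local_fairness": None,
    },
    "update_schedule": {
        "update_process": None,
        "correction_frequency": None,
    },
}
```

If Suki (or any vendor) can hand over this structure fully filled in, the certification conversation gets much shorter.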

@adunsulag In the documentation and calls that I have had so far, they cover all of these aspects and more. Any data that is shared in the process is deidentified. Because of my NDA, I feel I should not say more than that.

@adunsulag would you like to join me on a call with Suki? I can have them send you an NDA to sign.

I’ll think about it, but my initial reaction is no.

If they aren’t sharing that data publicly and openly, ONC is going to whack them hard next year.

This is what I found on Suki’s website.

Suki uses industry-leading security tools to protect customers. All data is encrypted in-transit and at-rest with modern ciphers and maximum strength cryptography. Run-time analysis is conducted to detect anomalies or suspicious software behavior, to protect against breaches. Suki has also received SOC2 Type 1 and SOC2 Type 2 certifications.
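
For what that marketing language typically means in practice: "encrypted at rest with modern ciphers" is usually something like AES-256-GCM. Here is a minimal sketch of the general technique, my illustration only, not Suki's actual implementation:

```python
# Minimal illustration of at-rest encryption with a modern AEAD cipher
# (AES-256-GCM). General technique only, not Suki's implementation.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in production, kept in a KMS/HSM
aesgcm = AESGCM(key)

note = b"Patient encounter transcript..."
nonce = os.urandom(12)  # must be unique per encryption, never reused with a key
ciphertext = aesgcm.encrypt(nonce, note, associated_data=None)

assert aesgcm.decrypt(nonce, ciphertext, None) == note
```

The real questions are operational ones the quote doesn't answer: who holds the keys, and whether tenants share them.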

I have gone over the information posted above, and in my opinion it would only apply if Suki were becoming an integrated part of the OpenEMR codebase. It will not be part of the codebase; it will be an add-on item installed after the certification process is completed. The Suki module will not interfere with the ONC certification of OpenEMR, and it will have no standing or consideration during the ONC certification process.

Another thought after reading this again: as stated, Suki does not collect any patient data. The data is deidentified before it goes to them for processing and return.
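
"Deidentified" is doing a lot of work in that sentence, though. HIPAA deidentification means either the Safe Harbor method (stripping all 18 identifier categories) or expert determination. A naive sketch of the idea, just to show how much ground a real implementation has to cover (the patterns below are examples, not a compliant scrubber):

```python
# Naive illustration of scrubbing obvious identifiers from a transcript
# before it leaves the practice. Real HIPAA deidentification (Safe Harbor's
# 18 identifier categories, or expert determination) is far more involved;
# these patterns are examples only, not a compliant implementation.
import re

PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[DATE]":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> str:
    """Replace obviously identifying tokens with placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Pt called 555-123-4567 on 04/05/2024 re: refill."))
# -> "Pt called [PHONE] on [DATE] re: refill."
```

Exactly who runs the deidentification step, and against which identifier list, is the documentation I'd want to see before calling the data "not patient data."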