Portal usage scenarios

Considering turning the patient portal on, but…

  1. For security, we want to limit network exposure for the main EMR interface as much as possible.
  2. Portal is tightly coupled to main EMR and by default shares the same network exposure.
  3. Portal has features (self-registration among them) that seem to make sense only when it is exposed to the entire Internet.

These considerations have been discussed over the years on this forum, and at times have catalyzed efforts toward “offsite” portal implementations and other mitigating solutions.

I would like to know how people are dealing with this in 2022. How are people utilizing the excellent v6/v7 patient portal?

  • Main EMR and portal exposed to entire Internet?
  • Local network exposure only for both, with kiosk/tablet/workstation in clinic lobby?
  • Main EMR restricted by client certificates (as espoused by @brady.miller at one time)?
  • Dual EMR instances, with one living in a DMZ and used only for the portal (as espoused by @mdsupport at one time; how does this work?)
  • Whitelisting IPs for non-portal directories in Apache config? Does this work?
  • Whitelisted separate custom app replicating portal functions via API?
  • Other?

Appreciate any information; feel free to PM if you’re not comfortable sharing publicly.

Thanks!

IP whitelisting should work, although you’d want to test it. I’d tell you to check https://github.com/openemr/openemr-devops/blob/master/docker/openemr/7.0.0/openemr.conf for the file we use in the Docker but I think you already know where it is.

IP whitelisting would fail (or require more thinking) if the portal and OpenEMR share resources (like logo files or script packages or endpoints).
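As an untested sketch of what that could look like on a stock Docker-style install (paths as in the openemr.conf linked above; the IP range is a placeholder for your clinic’s network): Apache merges <Directory> sections shortest path first, so the more specific portal section overrides the broader restriction, and the same idea works with <Location> on URL paths.

    # Everything under the OpenEMR docroot is limited to known addresses.
    # 203.0.113.0/24 is a placeholder; substitute your clinic/VPN range.
    <Directory "/var/www/localhost/htdocs/openemr">
        Require ip 203.0.113.0/24
    </Directory>

    # The longer path merges last, so the portal overrides the rule
    # above and remains reachable from anywhere.
    <Directory "/var/www/localhost/htdocs/openemr/portal">
        Require all granted
    </Directory>

The shared-resource caveat applies here: any assets or endpoints the portal loads from outside /portal would each need their own granted section once you track them down.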

Dual instances would probably work just fine as long as their sites directories were shared between them (probably through NFS?), and then you’d use the same kind of Apache Location directives to block access that you would’ve used for IP whitelisting, as sketched below.
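A sketch of that variant, with the same hedges as above: on the DMZ copy you would invert the rule, denying the whole docroot and granting only the portal tree.

    # DMZ instance: serve only the portal, refuse the rest of OpenEMR.
    <Directory "/var/www/localhost/htdocs/openemr">
        Require all denied
    </Directory>

    <Directory "/var/www/localhost/htdocs/openemr/portal">
        Require all granted
    </Directory>

The internal instance would then carry no portal exposure at all, with the shared sites directory keeping documents and uploads in sync between the two.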

Are you having to deal with US regulation? ONC is coming down pretty hard on patient access to data for interoperability requirements in the USA. I suspect that limiting patients to local onsite access would be considered Information Blocking and would be a non-compliant use of the 7.0.0 certified version of OpenEMR. I base this on all the hoops we jumped through and regulations we had to read in building the SMART on FHIR mechanism as well as the C-CDA API export. If you want to look this up, you can read up on the 2015 Edition (g)(9) and (g)(10) certification criteria for the Cures Update.

I personally wouldn’t run OpenEMR with a self-registration component, and would tightly control which users are allowed access to the EMR. Even though it’s annoying, I’d let the front office deal with setting up users and their initial registration credentials. I’d also carefully review any SMART on FHIR apps requesting user/* or system/* scope permissions.

The dual-instance DMZ setup is an interesting option I haven’t heard from anyone before, and we didn’t consider that use case when we certified. The interoperability requirements give a certain timeframe within which data needs to be made available to patients (I want to say 48-72 hours, but I’d have to look it up), so that could be doable. However, it would go against our certification documentation: for our CQM and AMC reporting (MIPS reporting requirements) we attested that our patient data was available immediately, so it could be dicey to run OpenEMR that way.

If you don’t have to deal with US healthcare regulation then none of this may be relevant to you.


It’s all relevant to us, and mentions things I hadn’t considered. Thank you!