Onsite or offsite patient portal

jason0 wrote on Friday, October 28, 2011:

Hi Yehster,

I really like the direction of this last idea.  In fact, I have a couple of directions to suggest:

1) On the main mysql server, create a mysql user for the openemr patient portal.  This user will have read-only access to the relevant tables and fields needed to serve the patient portal.  You will need to allow this user access from the portal server, then configure the portal server to connect to the main mysql server and run queries.

Or

2) Use mysql replication to copy the data to the patient portal server.  You must replicate ONLY the relevant tables and fields.

I think the largest benefit of using either one of these ideas is that you are using the built-in functionality of mysql, rather than any third-party software that may or may not port properly to your platform.

With any of the above cases, including Yehster's suggestions, you will still need to solve how to get the data through the firewall.

-jason

yehster wrote on Friday, October 28, 2011:

Jason,
A MySQL user with read-only access for the portal is also a good idea. That user would also need update and insert permissions on the patient_access_onsite table to allow patients to change their passwords.  It may need write access to other tables as well (event logging?).  An inventory of the tables the portal uses would be useful regardless of mechanism.  Can anyone confirm whether the onsite portal needs access to the documents directories/files (to upload .pdfs, .tiffs, etc.)?  I don't believe that it does, and things would be much simpler if that is true.  Incidentally, today was my first time looking at the patient portal code.
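As a rough sketch of what such a grant might look like (the user name, the host, and the database name 'openemr' are assumptions here, and the exact table list would come from the inventory mentioned above):

-- run as a MySQL admin on the main (onsite) server; 'portal.host' stands in for the portal server's address
CREATE USER 'portal_ro'@'portal.host' IDENTIFIED BY 'some-strong-password';
GRANT SELECT ON openemr.* TO 'portal_ro'@'portal.host';
-- the portal also needs to write password changes
GRANT INSERT, UPDATE ON openemr.patient_access_onsite TO 'portal_ro'@'portal.host';
-- if the portal also needs to write to an event log table, a similarly narrow grant would be added here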

One big advantage of an “offsite” read-only user is that it’s simple to set up.  One disadvantage is that the offsite server can affect the performance of the onsite server, since they will be running against the same MySQL instance.  Whether that performance impact is significant is going to depend on your onsite hardware, # of users, # of patients, etc… This might not matter, but it’s something to keep in mind.

If using mysql replication, it would be good to replicate a minimal set of data, as that would reduce the impact of a data breach as well as cut down on bandwidth and disk usage.  Is there anything else I’m missing as to why “ONLY relevant data” needs to be replicated?  Yeah, transferring billing and other non-clinical data is probably a waste.
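For the replication route, a rough sketch of limiting what gets copied might look like the following in the portal (slave) server's my.cnf. Note that MySQL replication filters work at the database/table level, not the field level, and the table names below are only examples:

[mysqld]
server-id = 2
# only the listed tables are replicated; billing and other non-clinical tables are simply never copied
replicate-do-table = openemr.patient_data
replicate-do-table = openemr.patient_access_onsite
# ...one replicate-do-table line per table the portal actually reads
# (the onsite master additionally needs log-bin enabled and its own server-id)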

However, replicating extra data wouldn’t be all bad as you would have an automatic offsite backup.  Consider an extreme scenario where your onsite server failed catastrophically.  An organization with a contingency plan could quickly switch to using the offsite machine while the onsite machine is restored.

On the other hand, once an offsite portal were configured appropriately, we could remove all of the irrelevant php files stored on the offsite server to make it a “less inviting target”.

SSH Tunneling or VPN are the best options in my mind for getting data through a firewall.
http://en.wikipedia.org/wiki/Tunneling_protocol
I personally use SSH Tunneling to get access to my machines located at different sites.  I can even access my machines on my Android phone with SSH.
I know less about VPN, but am certain that it will work.

bradymiller wrote on Saturday, October 29, 2011:

Hi,

Not sure I am seeing the point of a read-only instance, considering the big risk here is leaking confidential patient data. This will also take away features, such as setting appointments and filling out forms. I think the ideal security method will actually differ between users that use the onsite (native) patient portal vs. the offsite (third party) patient portal. See the below wiki page for a description of each:
http://open-emr.org/wiki/index.php/Patient_Portal

Note they can both be tested easily on the online demo also:
http://open-emr.org/wiki/index.php/OpenEMR_Version_4.1.0_Demo

Placed some thoughts on how to secure both the onsite and offsite portals here:
http://open-emr.org/wiki/index.php/Securing_OpenEMR
(at bottom of the Apache section)

I actually think the offsite portal has more potential for security if able to set up some sort of SSH/VPN tunnel between the local OpenEMR instance and the offsite portal (ie. Z&H’s portal). If the local instance logged into the offsite portal via an SSH tunnel and kept the connection active, then basically there could be secure cross-communication between the two without needing to be open to the internet or requiring a static IP address. For the users’ sake, though, hopefully there is some sort of open source package that can do this rather than the users needing to set up these ssh/vpn tunnels.
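One existing open source option for the “keep the connection active” piece might be autossh, which restarts the tunnel if it drops. A minimal sketch, assuming the offsite portal has given the practice an ssh account and a port assignment (the account, host, and ports below are all placeholders):

autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -R 19999:localhost:443 portalaccount@offsite.portal.example

Here -M 0 disables autossh's extra monitoring port and relies on the ssh keepalive options to detect a dead connection, -f puts it in the background, and the -R reverse forward is the same idea discussed further down the thread.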

Of course, this means the offsite portal site (ie. Z&H) needs to be secure, in order to keep malicious users from cracking the offsite portal and then having access to all the ssh/vpn tunneled connections from what would likely be hundreds of local openemr instances.

-brady

yehster wrote on Saturday, October 29, 2011:

Brady,
A read-only portal is extremely useful to a provider trying to achieve meaningful use.  It would help meet the requirement of having clinical summaries available for 50% of encounters within 3 days.  The alternatives are printing at the time of the encounter, or snail mailing or emailing after the fact.  Honestly, I think that having the clinical summary available at the time of the encounter serves the patient better than a portal does, but that’s pretty hard to achieve from a workflow standpoint.

The risk of leaking protected health information is “non-zero” no matter what you do, since the goal is presenting that information to a patient through the internet.

What a read-only approach protects you from is a malicious user affecting the workflow/data/information on your main OpenEMR instance if the portal were breached.  As an example, suppose the portal has read/write access to MySQL and a patient’s user id/password were compromised.  An attacker could wipe out the patient’s demographic information, or he could screw up your calendar by posting appointments in every slot.  The worst potential vulnerability would be if a SQL injection attack could make it back from the portal to the main server.

If the portal is read-only with respect to the main server, none of those potential vulnerabilities can mess up your onsite system.

Since the appointment functionality is broken for now anyway, you don’t lose that with a read-only instance. Also, if one chose MySQL replication as the synchronization method, one could decide later to do bi-directional replication for some data.

SSH tunnelling is not so complicated that it needs a “package” to set up and configure.
Under linux, all you do to establish a tunnel is run a command like this on Machine A:
ssh -N -L 9999:127.0.0.1:3306 username@remote.host.ip
User authentication could be done by password or by public/private key pair.
Now at this point, the machine I issued the command from has access to the MySQL server running on remote.host.ip.  However, instead of making reference to remote.host.ip directly, Machine A accesses the remote machine by addressing it as localhost:9999 (i.e. localhost on port 9999 instead of the standard MySQL port 3306).
I use PuTTY on Windows, which is a GUI rather than a command line program, to do the same thing, but the concept is exactly the same.
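To actually use that tunnel, the portal side would point its MySQL connection at the local end of it; for example, from the shell (the user and database names are placeholders):

mysql -h 127.0.0.1 -P 9999 -u portal_ro -p openemr

Note 127.0.0.1 rather than “localhost” there: the mysql client treats “localhost” as a request for the local socket, which would bypass the tunnel.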

If the offsite portal gets cracked at the “command level”, all bets are off anyway.  However, if the tunnels are set up so that the openemr instance logs in to the offsite portal, rather than the other way around, all of the authentication information would be distributed across the openemr instances rather than centrally located on Z&H’s machine.  The main additional thing this would allow an attacker to do is access the webserver of the openemr instance and/or issue SOAP commands.  The attacker would need to break another layer of security before he could cause much more damage.

bradymiller wrote on Saturday, October 29, 2011:

Hi,

The ideal solution will really be dependent on which portal is used:

1. Native portal (this is the patients directory in OpenEMR).
This is currently a read-only portal anyways, so agree it won’t matter. I think the spectrum of security (from least to most) will be as follows (in all cases, would remove any unneeded scripts, which are listed at http://open-emr.org/wiki/index.php/Securing_OpenEMR):
-open to internet over https (and hope for the best)
-open to internet over https and use a client side certificate to specifically protect the OpenEMR login script
-create a patient portal specific sqlconf.php file in the portal directory and have globals.php use this when the patient portal is used. By default, this will simply call the main sqlconf.php, but this gives the user the option to set up a separate mysql user which can be limited to read-only.
-with the above sqlconf.php method, could also use a completely separate database to further secure things
-could have two completely separate instances of openemr (one being the mirror that you discuss, which would be read-only)

2. Third Party portal using the API
This currently supports creation of new patients, new documents, and changing of demographic information by the patient. I think the spectrum of security (from least to most) will be as follows (in all cases, would remove any unneeded scripts, which are listed at http://open-emr.org/wiki/index.php/Securing_OpenEMR):
-open to internet over https (and hope for the best)
-open to internet over https and use a client side certificate to specifically protect the OpenEMR login script
-open to internet over https and use a client side certificate to protect entire openemr codebase (so the third party portal would need this)
-close completely to internet and connect to the third party portal via a secure bi-directional tunnel (such as VPN or SSH)

The last entry above is why it seems the third party portal will be more secure and functional. With this, the only point of entry is the third party portal, which will have less (and newer) code and will likely be much easier to secure. In fact, it should explicitly guarantee security (at least to the HIPAA level) so physicians are comfortable using it. Perhaps the thing to do is for Z&H to get their portal working with your above ssh tunneling command (key/certificate authentication seems like it would be the most secure, although a password should suffice; even if the credentials used when logging in from the local OpenEMR to the third party portal (the physician portal) were hacked, that’s not a very big deal, because all of the patient data and credentials are actually stored in the local OpenEMR, so the only real point of attack seems to be where the patient logs into the offsite patient portal). If they get the ssh tunnel method working and secure, then we could find easier-to-install methods/applications (maybe just a PuTTY howto) for the large number of Windows users. Hope I’m making sense.

Quick question on ssh tunneling. Let’s say I create a tunnel with your server. Can your server then independently send information (API requests) back to me?

-brady

cverk wrote on Sunday, October 30, 2011:

In my approach to workflow, I tend to complete charts at lunch or at the end of the day.  I have my MA working on medication and problem lists and putting in vitals, and my front office person completing demographics. Prescriptions are done during visits, but I dictate chart notes later to keep patient flow going.

I picture perhaps a Greasemonkey-type script, like Kevin made for Allscripts, that places a button somewhere for whatever vendor you select. Once you complete that visit, pushing the button creates the SSH tunnel and transmits the patient visit info/CCD to the offsite portal site or to a separate read-only database instance.  This could be keyed by patient ID to overwrite old entries with new entries the next time you see that patient. That way you are only putting up patient info after a visit, not for everyone in your database. You could even have a separate store function for your separate database built in to the program, and a transmit button script taking you to a sign-in at the offsite portal to sync databases. That sync could also be set by the offsite side to create and e-mail temporary passwords to patients once their info is updated. Under meaningful use you would have 3 business days to complete and transmit that visit’s information.

Maybe the big labs would be interested in supporting such an idea as a way for them to get order info/icd-9/demographics etc. and transmit back results each time you log into the site.  Then you could just put a link to the portal on your office website. Would that make any sense, or is that just redundant?

bradymiller wrote on Sunday, October 30, 2011:

Hi,

Specifically for the SSH bidirectional tunnel (actually reverse SSH), here are some nice links:
http://www.howtoforge.com/reverse-ssh-tunneling
http://notepad.patheticcockroach.com/2051/digging-a-tunnel-to-freedom-symmetrical-ssh-tunnel-finally/

So, if the local OpenEMR instance runs the following ssh command:
ssh -R 19999:localhost:443  username@third-party.ip

Where username@third-party.ip is an account on the third-party (ie. Z&H) server. This means that Z&H can then access this user’s local OpenEMR instance and APIs at Z&H’s local port 19999. Under this type of design, each OpenEMR instance (ie. account on the Z&H server) would need a separate ssh account and port assignment.

-brady

bradymiller wrote on Sunday, October 30, 2011:

Hi,

Z&H. My thought here is to have different setup methods and flag them. For example:
1. normal (what you now have where the IP address is directly available on the internet)
2. ssh (for users that have set up a reverse ssh with you; so will need to record your local port setting)
3. and so on (will hopefully offer better methods that won’t require separate ssh accounts and port assignments on your end)

-brady

bradymiller wrote on Sunday, October 30, 2011:

Actually,

Since it’s passing through ssh, shouldn’t need to use https (443), so could just use port 80:
ssh -R 19999:localhost:80  username@third-party.ip

-brady

bradymiller wrote on Sunday, October 30, 2011:

And to clarify how this would work on the third party site’s end. If the third party portal wanted to make SOAP calls to an external OpenEMR instance that has set up a reverse SSH tunnel with the third party site as above (and was assigned port 19999), the third party site (Z&H) would use the following web path to make the SOAP call:
http://localhost:19999/<SOAP call>

-brady

bradymiller wrote on Sunday, October 30, 2011:

To clarify the above, the third party site will also need to know the path to the openemr instance in the web address (this is usually going to be ‘openemr’):
http://localhost:19999/<OpenEMR path, usually ‘openemr’>/<SOAP call>

yehster wrote on Sunday, October 30, 2011:

Brady,
Even though the tunnel encrypts the traffic, it’s probably safer to just always use https, rather than having to distinguish between Type 1 connections (over the internet) and Type 2 (tunneled).  “Double encrypting” tunneled traffic is a lesser problem than accidentally not encrypting open traffic.  This way, all that Z&H needs to track is what port to use for SOAP over the tunneled connection, without having to add additional logic for choosing http vs. https.

bradymiller wrote on Sunday, October 30, 2011:

Hi,

Agree with just using https(443).

On further thinking, one thing that seems a bit troublesome with this method is that the user has control over the local port on the third party (Z&H) side. What would happen if they used the wrong port and overlapped with another user’s port? I think the authentication in the APIs would stop any badness from happening, but the duplication of ports I’m guessing would break things. I wonder if there is a method for the local OpenEMR instance to simply hand over port selection to the remote ssh, and for the remote SSH (ie. Z&H) to assign a port dependent on the user, etc.

-brady

yehster wrote on Sunday, October 30, 2011:

Brady,
SSH can be configured to restrict what the remote user is allowed to do, including which ports it is allowed to tunnel.
If the portal provider doesn’t restrict ssh access, the default is to allow shell access, which means that their clients would be able to execute commands on the portal server.
http://stackoverflow.com/questions/8021/allow-user-to-set-up-an-ssh-tunnel-but-nothing-else
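Following the approach in that link, a tunnel-only account could be locked down with options in its ~/.ssh/authorized_keys on the portal server. A rough sketch (the key and comment are placeholders; this blocks shell, pty, X11, and agent use, though by itself it does not limit which ports may be forwarded):

no-pty,no-X11-forwarding,no-agent-forwarding,command="/bin/false" ssh-rsa AAAAB3...client-public-key... openemr-site-1

With this, the client can still open the reverse tunnel with -N, but any attempt to run a command just executes /bin/false instead of a shell.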

cverk wrote on Monday, October 31, 2011:

In one of the informational posts it shows using a self-signed certificate and the browser overrides. If it was a connection just used between an office and the offsite portal, why would you have to use a paid-for certificate?  There would need to be an official certificate from the server to the patients to avoid all the error messages, of course.

yehster wrote on Monday, October 31, 2011:

Cverk,
The short of it is that for a connection between an offsite portal and your local system behind your firewall, you shouldn’t really need a certificate from a certificate authority. Especially if you are using my proposed mechanism and one “entity” is in control of both the onsite system and the offsite system hosting the portal.

Whenever you establish an encrypted connection (HTTPS uses SSL certificates, while SSH uses host keys), if you connect to something that doesn’t have an officially signed credential, you have the option of confirming that you trust the site you are connecting to.  With SSH, after you confirm it, the server’s host key and the address you connected to get saved on your system in the .ssh/known_hosts file.  This way, if you try to connect to that server again in the future, SSH checks to make sure the key it presents hasn’t changed.  The purpose is to prevent someone from trying to impersonate that “known host” in the future.  The known_hosts file is basically your own “local list” of trusted servers that takes the place of a central certificate authority for cases when you know you can trust the server.  If someone tries to impersonate that trusted server in the future, the key info in known_hosts won’t match and you will be warned.

cverk wrote on Monday, October 31, 2011:

Since you have to stop mysql to run a backup, it seems there would need to be a fairly simple mechanism to sign back on, perhaps the way xampp runs apache and mysql as a service when you reboot, or via the xampp control panel. So maybe in globals the offsite portal would have not only a user name and password you set, but also a port number. And then perhaps just booting up your office server would establish that connection. Offices could then even decide what hours their portal would be accessible by shutting down their server.  I didn’t see anything in the guidelines suggesting it needed to be on 24/7. It seems you could point that same mechanism at any web address you cared to establish a data connection to, as long as they opened the port and accepted your sign-in user and password.

cverk wrote on Monday, October 31, 2011:

So would this same issue of connection and security apply to bidirectional lab and e-prescribing connections, or are those already secured as they are offered?  It would not be good for the project or vendors to have a spectacular security failure. I was talking to an IT person who predicted there was likely to eventually be such a failure at some point, perhaps involving a large hospital system, insurance plan, or EMR company, that could be a huge mess like the Sony PlayStation breach.

yehster wrote on Monday, October 31, 2011:

I run mysqldump on my server all the time without stopping it first.  For the purposes of synchronization with a remote database instance, you really just want your mysql data in a “stable” state. If you do the dump after hours when no one is making changes, everything will be fine.  If someone happens to be making a change while the dump is running, you might not pick up that change on the portal until the next time you synch, but that’s no big deal as long as you haven’t missed the three day window. Furthermore, one of the purposes of MySQL replication in more general use is to provide better server availability, so there is certainly no need for a server shutdown with that mechanism.
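For reference, a typical dump command for this kind of after-hours sync might look like the following (the database, user, and output path are placeholders):

mysqldump --single-transaction --quick -u backupuser -p openemr > /path/to/openemr_portal_sync.sql

The --single-transaction option gives a consistent snapshot of InnoDB tables without locking them; for MyISAM tables, running the dump after hours, as described above, is still the main safeguard.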

cverk wrote on Thursday, November 03, 2011:

So is there any chance that Z&H will make this kind of security happen for Windows users on their offsite portal?  I sent some e-mail to LabCorp and Quest labs to see if they have some interest in sponsoring connections to openemr. I think if lots of offices were to do that, then labs would have some motivation to work out something secure and bidirectional, perhaps via the same connection.

My shutting down mysql has to do with using the backup command scripts, since everybody seems to have so much trouble with the built-in backup in openemr under Windows.