I am trying to design a configuration such that, when “the Internet is down,” a satellite office can continue seeing patients and doing business from a local server instance on a powerful laptop.
I would also like multiple permanent sites protected from WAN failures by placing servers at each location, but the databases at all locations need to remain synchronized, because the providers cover for each other and need access to any patient’s record at all sites.
Is anyone doing this already? I don’t want to reinvent the wheel.
Is clustering the database with MySQL NDB Cluster a viable solution, or is it overkill?
What temporary workarounds have been used successfully while the EMR is unavailable? (Am I fussing over a non-issue?)
Replication/Synchronization of multiple databases is going to be a big challenge.
The primary problem is that OpenEMR’s ID generation strategies make extensive use of AutoIncrement columns and GenID.
Because of this, ID generation needs to be coordinated through a “central authority” and can’t be easily accomplished in a disconnected environment.
A basic example is as follows… Your remote site loses its connection to the “main server” at a point where the next ID to be assigned is “1000.” The remote site then goes to create a new encounter, which needs a new encounter ID; that ID is generated by what is basically a “locked increment,” so it uses 1000. However, the main site doesn’t know the remote site is using “1000,” so an encounter gets created at the main location with ID 1000 as well. Now you’ve got a conflict between your main site and your remote site: two different encounters that were assigned the same ID.
Lots of similar problems will occur with every table that uses auto-increment for its ID column.
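For what it’s worth, MySQL has stock server settings for interleaving AUTO_INCREMENT values across multiple masters, sketched below for two sites. Note that this only covers the AUTO_INCREMENT columns; it does nothing for OpenEMR’s own GenID/sequence handling, so it is not a complete fix:

```
# /etc/my.cnf on the main site
[mysqld]
auto_increment_increment = 2   # step by the total number of sites
auto_increment_offset    = 1   # main site gets 1, 3, 5, ...

# /etc/my.cnf on the remote site
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2   # remote site gets 2, 4, 6, ...
```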
On the other hand it’s very reasonable to have a single server for production that’s accessed by all users, and to have that periodically synced to a backup site using something like rsync. That way if the production site goes down you can revert to the backup site, losing what was done since the last sync. Access to the production site should be disabled during a sync process, but that should be a brief period.
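As a rough sketch, the periodic sync could look something like this (the host name, paths, and credentials are placeholders, not OpenEMR defaults):

```sh
#!/bin/sh
# Hypothetical nightly push of an OpenEMR instance to a standby server.
# Run while user access to the production site is disabled.

# Dump the database to a file; copying a dump is safer than copying
# live table files out from under mysqld.
mysqldump -u openemr -p'changeme' openemr > /tmp/openemr.sql

# Push the dump plus the web root (code, config, uploaded documents).
rsync -az /tmp/openemr.sql backup.example.com:/var/backups/
rsync -az --delete /var/www/openemr/ backup.example.com:/var/www/openemr/
```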
Even if your “remote site” is operating in “read-only mode” and isn’t updating any clinical information/forms, there will still be “write conflicts” because of the audit/logging required for HIPAA compliance.
A fail-over strategy like Rod suggests is highly recommended, but doesn’t really address the WAN failure situation.
Am I fussing over a non-issue?
Don’t know… it depends on how reliable your networks are, and what you perceive the impact of losing access to be. However, I suspect that having a plan, like using remote access through alternate networks (such as a wireless data plan), is more likely to be viable than something like database clustering.
If your OpenEMR instance is hosted with a reliable (and HIPAA-compliant) hosting service, then access to the site via the Internet is usually very stable and redundant on the hosting side. In that case you are really just trying to cover the Internet being down at the clinical site (the end-user side). This is easy to cover using a cellular hotspot as the backup Internet access point. If you throw in a laptop that you keep plugged in, you then have a backup even when power is down at the end-user side.
That is one of the many reasons to use a good hosting provider.
WOW! Great responses and turnaround time! A testament to the OpenEMR community.
OK… With the auto-increment and GenID techniques restricting the usefulness of replication, I’m leaning toward using DRBD replication at the main administrative office. I’ll also initially look to set up at least one additional rsync backup server at the office located farthest away. (I know, I stand somewhere between paranoia and business continuance.)
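For the DRBD piece, I’m picturing a resource definition along these lines for the volume holding MySQL’s data directory (the hostnames, devices, and addresses below are made up), with MySQL only ever running on whichever node currently holds the DRBD primary role:

```
# /etc/drbd.d/openemr.res -- sketch only; names and addresses are placeholders
resource openemr {
  protocol C;                  # synchronous: a write completes on both nodes
  device    /dev/drbd0;
  disk      /dev/sdb1;         # backing partition for /var/lib/mysql
  meta-disk internal;

  on emr-primary {
    address 10.0.0.1:7789;
  }
  on emr-secondary {
    address 10.0.0.2:7789;
  }
}
```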
I’ll also look at what I can do about network resiliency at the main office; perhaps we’ll lease a small dedicated fiber VLAN (from another Internet provider) to run the rsync across, which could also serve as a back door in a pinch.