OpenEMR Devops Overview

OpenEMR has a paired project repository, openemr-devops, which has served as the workbench for our development, deployment, and orchestration solutions since 2017. I don’t feel it’s necessary for a developer or administrator to be fully aware of every bit and bob in here… but we’ve gone to a lot of trouble to solve a lot of problems, and it wouldn’t hurt to know where to find the source of the tools you’re using, should you ever need to extend or debug them.



The Dockerfile doesn’t do much other than install services into the Alpine container and copy in the bits for the next part. The startup script handles initial setup, calling a couple of subordinate scripts and moving a couple of configuration files into place, and then launches Apache. The most interesting thing going on here is the handling of situations where setup should not be run, and the methods for choosing who should run setup in a leaderless swarm.
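To make that concrete, here’s a minimal sketch of the two guards involved – a “did we already set up?” check and an atomic race for leadership on the shared volume. The paths, marker files, and function name are illustrative, not the container’s actual interface:

```shell
#!/bin/sh
# Illustrative sketch only: paths and marker names are placeholders.
SHARED=${SHARED:-/var/www/localhost/htdocs/openemr/sites}

run_setup_once() {
    # 1. A previous boot already configured OpenEMR? Skip setup.
    if [ -f "$SHARED/docker-completed" ]; then
        echo "setup already done, skipping"
        return 0
    fi
    # 2. mkdir is atomic even on a shared volume, so the first
    #    replica to create the lock directory becomes the leader.
    if mkdir "$SHARED/setup-lock" 2>/dev/null; then
        echo "elected leader, running setup"
        # ... the real setup would run here ...
        touch "$SHARED/docker-completed"
        rmdir "$SHARED/setup-lock"
    else
        echo "another replica is running setup, waiting"
        while [ ! -f "$SHARED/docker-completed" ]; do sleep 5; done
    fi
}
```

The atomicity of `mkdir` is what makes a leaderless election workable: no coordinator is needed, just a filesystem every replica can see.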


The flex container shares its environment with the host to allow development against a running instance.


Late last year we started distributing cross-compiled containers to work with ARM processors, which include the Raspberry Pi and Amazon’s inexpensive Graviton2 instances. At press time we’re still pairing it with a community MySQL container instead of anything like the following, though.


This is a forked version of the official MySQL 5.7 Docker image with Percona’s XtraBackup spooled in and configured. The real meat of it is the community xbackup and xrestore scripts, which I modified and wrapped a little to provide hot incremental backups and to destructively restore a MySQL server on demand.

(It’s as impressive as anything – XtraBackup embeds a MySQL server within itself and plays the database logs into it, running each incremental fragment in sequence until it’s rebuilt the target’s mysql directory locally. Then we bring MySQL down, move XtraBackup’s replacement data directory into place, and restart the server.)
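The flags below are XtraBackup’s documented ones, but the directory layout and wrapper function are illustrative – the real xrestore script adds locking, sanity checks, and the MySQL stop/start choreography:

```shell
#!/bin/sh
# Sketch of an incremental restore chain; directory names are
# placeholders, not what the container actually uses.
set -e
XB=${XB:-xtrabackup}
BK=${BK:-/mnt/backups}

restore_chain() {
    # Replay the full backup's redo log, but hold off on the final
    # rollback so incrementals can still be applied on top of it.
    $XB --prepare --apply-log-only --target-dir="$BK/full"
    for inc in "$BK"/inc-*; do
        [ -d "$inc" ] || continue
        $XB --prepare --apply-log-only --target-dir="$BK/full" \
            --incremental-dir="$inc"
    done
    # Final prepare (with rollback) leaves a consistent data dir.
    $XB --prepare --target-dir="$BK/full"
    # With mysqld stopped, move the rebuilt files into place.
    $XB --copy-back --target-dir="$BK/full"
}
```

The `--apply-log-only` flag is the whole trick: it defers the rollback phase so each incremental can still be layered onto the base backup.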

… but how do we use them?


OpenEMR Lightsail is two parts – a hundred lines or so of script to configure an Ubuntu 20.04 LTS instance, and a few ancillary scripts to drive Duplicity and XtraBackup. If you view it as essentially a Dockerfile you’re not far wrong: it configures Ubuntu, loads packages and security updates, snags a couple of prebuilt docker-compose files, sets up a backup service, and launches the whole thing. The containers will self-configure, and once the script ends you’ll be ready to log in.

Most of the complexity of the launch script is irrelevant to production use; you can select container versions for OpenEMR, or the devops branch to launch Lightsail from, or skip autoconfiguration entirely if you’ve got reasons to.

Lightsail will make daily backups of the database and the OpenEMR workspace (picking up both configuration files and patient documents), with a full backup once a week, and rotates them to keep no more than two full backups on hand at any time. The restore process can be launched from the command line.
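That policy maps onto two stock Duplicity invocations. This is a sketch – the source and destination paths are placeholders, and the real cron job in the repo wires in the actual locations:

```shell
#!/bin/sh
# Illustrative nightly backup job; paths are placeholders.
DUP=${DUP:-duplicity}
SRC=${SRC:-/root/backup-staging}
DEST=${DEST:-file:///mnt/backups/duplicity}

nightly_backup() {
    # Incremental by default; force a fresh full backup weekly.
    $DUP --full-if-older-than 7D "$SRC" "$DEST"
    # Keep no more than two full chains (and their incrementals).
    $DUP remove-all-but-n-full 2 --force "$DEST"
}
```

`remove-all-but-n-full` is what gives the “no more than two full backups” rotation; it prunes each old full backup along with the incrementals that depended on it.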

Shared Hosting

Lightsail is my go-to recommendation for anybody who wants to set up their own OpenEMR instance for examination. If the demos aren’t to your liking or you don’t want to work with Docker yourself, then get a DigitalOcean droplet, or run VMware, or just plain put Ubuntu 20.04 on your own hardware, launch Lightsail, and we’ll handle everything.

Virtual Machines

When we’ve made virtual machines available via SourceForge, they weren’t anything more than Lightsail instances – start machine, log in, run Lightsail installer, quit machine and export for distribution. Once installed, Lightsail is quite happy to function without an internet connection.

AWS Marketplace: OpenEMR Express

OpenEMR Express diverges minimally from this model – we install Lightsail, shut the machine off, and submit it to the AWS Marketplace. In deference to Marketplace requirements, we provided a mechanism to lock the admin account, and once the machine boots for the first time in a new user’s environment, we change the admin password to one only the account owner knows.

Since the product is delivered as an AMI without any orchestration support, what you get is what you get; full-disk encryption and non-local backup storage are not available.

Which brings us to…


AWS Marketplace: OpenEMR Standard

Standard is the first descendant of “OpenEMR Full Stack”, which was our first, pre-container attempt at deployment using Amazon’s Elastic Beanstalk and a zipped repo. With containers and CloudFormation, we have significantly more opportunity to arrange resources the way we’d prefer, and Standard creates…

  • A VPC, with subnets and security groups
  • CloudTrail audit support
  • A KMS key to encrypt resources (and provide a handle for audits to track decryption)
  • An RDS database instance with full-disk encryption (through KMS)
  • An encrypted (through KMS) volume attached to the instance to store patient records
  • An off-instance S3 encrypted bucket (through, yes, KMS) for off-site storage of Duplicity backups, which only extend to the patient records since RDS makes its own arrangements.
  • IAM instance roles allowing access to Amazon resources without baked-in passwords.

We don’t quite build this with Lightsail, although if you look at the scripts in the AMI directory you’ll see a lot of similarities – we build the core image, run security updates, load the containers but don’t launch them, and submit the constructed AMI as one of the parts to Marketplace. (Why load the containers at all? Marketplace wants instances that can launch without internet access, and Docker Hub is on the internet.)
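The “load but don’t launch” step amounts to pre-pulling images during the bake, something like the following (the image names and tags here are examples, not the exact set the build uses):

```shell
#!/bin/sh
# Pre-pull container images during the AMI bake so the finished
# instance can boot offline. Image list is illustrative only.
DOCKER=${DOCKER:-docker}

preload_images() {
    for img in openemr/openemr:latest mariadb:10.6; do
        # Pull the layers into the local cache; nothing is started.
        $DOCKER image pull "$img"
    done
}
```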

The second part we submit is the CloudFormation template, which will handle setting all those services and instances into motion. The template is generated by a fairly procedural Python script using troposphere, which lets me branch and create different templates with entirely different goals.

CloudFormation will take user-supplied answers to things like “admin password” and insert them into files injected into the instance, dynamically creating a docker-compose file with credentials for the initial setup, as well as providing references to tell Duplicity where to look for the bucket it should send backups to. Duplicity has built-in support for both S3 and KMS, so it’s capable of honoring encrypted-in-motion and encrypted-at-rest HIPAA directives.
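A toy version of that injection step: CloudFormation hands the instance its parameters, and a bootstrap script writes a docker-compose file around them. The service, variable, and environment names below are illustrative, not the template’s real ones:

```shell
#!/bin/sh
# Illustrative bootstrap fragment; names are placeholders.
write_compose() {
    # $1 = output path, $2 = admin password, $3 = backup bucket
    cat > "$1" <<EOF
version: '3.1'
services:
  openemr:
    image: openemr/openemr
    environment:
      OE_PASS: $2
      BACKUP_BUCKET: $3
EOF
}
```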

Developer Mode

Stacks written in developer mode are much more aggressive about deleting their footprint when the stack is deleted – it might not be reasonable for a production stack to delete its backups or its key, but my throwaway tests don’t need to keep those around. This alternate stack facility completely justified the added complexity of the Python stack builder.

Note that the developer templates aren’t in the repo – you’ll need to run the Python stackbuilder to create them.

Recovery Mode

Standard’s recovery stack allows a user to specify an RDS snapshot, a bucket, and a key and rebuild a new copy of Standard from those existing resources. It’s not especially complex technically; the stack just…

  • Uses user selections for key and bucket instead of creating its own
  • Inflates a snapshot instead of creating a fresh RDS instance
  • Skips OpenEMR setup entirely, and instead…
  • Runs a Duplicity restore to pick up the filesystem (and configuration) on launch.

AWS: OpenEMR Express Plus

Express Plus is a hybrid of the best parts of everything that came before it, deployed with nothing but a template. It takes all the best lessons from Standard while keeping to a single instance configured on demand, using Lightsail-style backups but adding off-instance bucket storage for them.

Conceptually everything’s basically just a reprise of Standard, with the CloudFormation template constructing a relatively concise script that launches Lightsail internally and then tweaks the backups. The CFT has an index of Ubuntu 20.04 instances available in the regions I’ve preconfigured, in place of a Marketplace-provided AMI.

Developer Mode

Much like Standard’s, this mode works harder at cleaning up after itself, but I’ve also added the ability to pick the devops branch it should launch from so I can work in peace.

Recovery Mode

This mirrors Standard’s recovery mode but improves on it, with the stack now properly defining the older resources as recovery resources which it won’t use after the initial load. The new stack gets new resources with a new key; the old resources can be deleted in total without impacting the recovered instance, so this can be used as a migration mode or to test updates and patches without any negative consequence.

Lightsail now supports an “empty” launch mode that will forcibly skip autoconfiguration, which recovery mode uses because it’s not going to bother setting itself up if the recovery just deletes everything right after.


I’ve provided an example Kubernetes clustered service based off Brady’s work, but I’m not a Kubernetes expert and I can’t guarantee I’ve adhered to any kind of standard format. Servers are defined and made available to be passed into the master openemr service; we support clustered sessions through Redis and a shared volume for patient records.

The 6.1.0 OpenEMR container saw a lot of changes aimed at supporting various Kubernetes deployment models. We introduced the authority and operator roles to allow for a leaderless swarm electing an agent to run initial setup, a job runner that will configure OpenEMR and then quit, or workers who know they’re never to run the setup process and will quit if they find a blank shared volume – something I’ve seen come up as a complaint on the forums before.
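The role split described above boils down to a dispatch like this. To be clear, the environment variable and file names here are placeholders for illustration, not the container’s real interface:

```shell
#!/bin/sh
# Illustrative role dispatch; names are placeholders.
SITE=${SITE:-/var/www/localhost/htdocs/openemr/sites/default}

decide_role() {
    case "${CONTAINER_ROLE:-authority}" in
        authority)
            echo "run setup if needed, then serve" ;;
        operator)
            echo "run setup, then exit" ;;
        worker)
            # Workers never configure; a blank shared volume means
            # setup hasn't finished yet, so bail instead of serving
            # a broken install.
            if [ ! -f "$SITE/sqlconf.php" ]; then
                echo "shared volume not initialized, exiting"
                return 1
            fi
            echo "serve only" ;;
    esac
}
```

The worker’s refusal to serve from a blank volume is the fix for the forum complaint mentioned above: better a loud exit than a half-configured instance.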


OpenEMR Full Stack

Essentially “what happens when you take a step-by-step installation manual and turn it into a CloudFormation template”, from before we even had a container. It worked, but it was expensive (it used a load balancer and Elastic Beanstalk), it hasn’t been maintained, and oh boy, have we learned a lot since then. Standard, Express Plus, or Kubernetes are all better choices.


Google’s equivalent of Express, same sensibilities, for their own marketplace. Haven’t kept it updated and nobody’s asked about it…


(What’s going on here? It was suggested that people might want to know more about what’s going on with some of the dark magic in builds and packages and deployments and what have you. I could go back and add links to GitHub - openemr/openemr-devops: OpenEMR administration and deployment tooling later, but this is a fair first draft.)


Is there a similar explanation of the AWS Fargate Setup anywhere?

(this write-up was EXTREMELY helpful - Massive Thank You!)


This minute, no, nothing beyond Jake’s index page, but I found it a largely unchallenging read.
