OpenEMR 6.1.0 Devops

Alright, tomorrow I’ll stop what I’m doing and switch to testing Lightsail installation with 6.1.0 – I think it’s fine but I have to check. After that I’ll update Express Plus (which works but needs feedback from AWS because of weird CFN deployment hitches).

After both of those are done (and while I’m waiting for feedback) I’m going to tackle Standard, since it’s broken right now, and get it updated to use t3 instances and Ubuntu 20. Express will be last.

Hopefully wrap up this weekend.


Okay, Lightsail 6.1.0 is cleared for use.

Isn’t there a way to download and pipe a script into bash as a one-shot, though? I want to do better than the two-liner I’ve got…
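For the record, the usual one-shot shape is curl piped straight into bash. The URL below is a placeholder, not a confirmed script location; the second line demonstrates the mechanism itself without the network:

```shell
# One-shot download-and-run (URL is a stand-in for wherever the launcher lives):
# curl -fsSL https://example.com/openemr-lightsail-launch.sh | bash -s -- --flag
# The plumbing itself, shown with an inline script body:
echo 'echo "configured"' | bash    # prints: configured
```

`bash -s --` lets you pass arguments through to the piped script, which is what makes this a drop-in replacement for download-then-run.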

Now sitting down with Standard to integrate things.


Thanks for that work Asher!


Alright, first walkthrough of Standard has been completed (though nothing’s been built or tested; that’s for tomorrow).


  • now based on Ubuntu 20.04 LTS
  • OpenEMR updated to 6.1.0
  • fixes MySQL version creep (thanks Ralf!)
  • fixes Python version creep for Ubuntu 20 (thanks Jason!)
  • instance catalog list updated to top-of-the-line instances (Ralf again)
  • …which break things if they don’t have their volume trickery rebuilt for Nitro NVMe (Jason again)
  • all Docker version conflicts repaired, since the Docker state of the art in Ubuntu 20 has advanced

If I’m fortunate I can have this built and submitted to AWS by Friday; I don’t anticipate it’ll take longer than the weekend.


Surprise special issue–

The problem with Express Plus deployment is resolved, and it can now be launched (from us-east-1, at least) from the magic index links as normal.

I’ll get the other buckets later this week, although I’ll take requests if there’s one you want manually populated ahead of time.


Got the Standard candidate built but won’t have time to test it tonight. Python is, as usual, the worst thing that ever happened to system administration.

I did realize something, though – all that trouble we went through to get the docker containers cross-compiling for ARMv8 and the Raspberry Pi? That does actually mean our containers are compatible with Amazon’s custom Graviton2 ARM instances, which are the lowest cost-per-hour they’ve got available.

I should do all the actual work first but I’m sorta tempted to put something together for it.


Please grant us poor souls an upgrade path to your new hotness before you tackle the ARM on AWS shiny! :smiley:

Huh, worked the first time. Did not expect that.

Alright! OpenEMR 6.1.0 Standard is now ready… toooooo be submitted to AWS Marketplace in several pieces which will require human review and oversight before release, which I’ll do today or tomorrow. In the meantime, however, I’ll make available this prerelease template that’s using an AMI served up from our own account instead of Marketplace, if anybody just absolutely has to get moving ASAP.

Just hand this template to CloudFormation (in us-east-1 only – sorry, them’s the breaks) and you should get going.

OpenEMR-Standard.json (39.5 KB)
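For CLI folks, a sketch of the equivalent launch, assuming the template was saved locally as OpenEMR-Standard.json (the stack name here is arbitrary, and the create-stack call needs real AWS credentials, so it’s shown commented out):

```shell
# Real launch (us-east-1 only; CAPABILITY_IAM because the stack creates roles):
#   aws cloudformation create-stack \
#     --region us-east-1 \
#     --stack-name openemr-standard-prerelease \
#     --template-body file://OpenEMR-Standard.json \
#     --capabilities CAPABILITY_IAM
# Cheap pre-flight: the template is plain JSON, so a parse check catches a
# truncated download before CloudFormation does:
python3 -m json.tool OpenEMR-Standard.json > /dev/null && echo "template parses"
```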


Okay, Standard’s in the hopper. Now for Express…


Alright, so.

Express has a valid image getting scanned at Marketplace now, but that’s only the first step of the process and I don’t know if it’ll be done tonight, so I expect to send Express in for review tomorrow.

Finally got some automation in place for Express image building, though. Not /all/ of it, but at least it’s a script now, without the manual steps it used to need.


Special late night update–

In the years since CloudFormation became a thing, AWS has finally cracked the problem of CF not being able to load templates from S3 buckets in different regions – which is why I’d had to keep as many buckets as there /were/ regions, which was very annoying, thanks.

I’ve revised the Express Plus readme to be one link and a template URL instead of enumerating every region; Express Plus is now finished (though Jason’s not getting what he wants yet).


Okay, some back and forth on submission issues, but I cleared those hurdles; Express has now been submitted to AWS for posting.

Current standing:

Lightsail (shared hosting): 6.1.0 OK
Express: In process at Marketplace
Express Plus: 6.1.0 OK
Standard: In process at Marketplace

Next steps:
Express Plus recovery pathing via specified parent’s S3/KMS
Graviton support for ARM (just as a proof of concept)

Is it finally time to retire Full Stack? I’m not sorry I built it – it was the foundation for everything else – but nobody uses it, I never got a request to update it to 6.0.0, and I’m not sure I would’ve honored one if I had, because Standard does 95% of what it does, better – basically everything but the clustering.


Quick update:

  • Express is now up in Marketplace.
  • Marketplace staff have questions about Standard that I’ll need to answer, but I should straighten those out tonight.
  • The first brushes of a recovery path for Express Plus are in progress.

Alright, Standard 6.1.0 is up in Marketplace. I don’t think I’ll get much on Express Plus recovery done today or tomorrow but I’d at least like to do some stack testing this weekend.


Okay, so: a lot of things are coming, some of which nobody but me will care about, some of which nobody but me will care about in a very different way. (None of these are committed; they’re in a dev branch I’m working with.)

  • Lightsail can be launched from and point to a devops branch for testing that doesn’t involve pushing live and finding out.
  • Lightsail has an “empty” launch mode now that will skip autoconfiguration.
  • Lightsail uses better technology for waitloops and won’t exit the launcher script before system configuration is done, which will mean no more dumb “wait for it and it’ll be done eventually” issues: if the script is still running, installation isn’t done yet.
  • Express Plus now uses gp3 volumes, not gp2.
  • Express Plus (and everything that leans on Lightsail) should inherit the same trick: the stack won’t finish until /everything’s done/.
  • Express Plus UTF16 problems from PowerShell and Python conflicts are now understood and worked around.
  • Express Plus developer mode will now ask for a devops branch to launch from.
  • Express Plus recovery mode is coming – just like Standard, you should be able to specify an EP parent stack key and bucket, and have recovery mode load those backups into the instance. The recovery stack has its own, separate key and bucket, and only read access to both parent resources, so the backups themselves are inviolate and can’t be injured by the recovery instance. As long as your instance doesn’t call out it should be safe to spawn and examine and delete – or to migrate to and leave the old instance behind, if you’re trying to (say) migrate regions.
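To make that “read access only” claim concrete, the recovery instance’s grant on the parent’s resources looks roughly like this. This is a sketch: the bucket name, account ID, and key ID are placeholders, and the actual policy is whatever the recovery template generates.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::parent-backup-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::parent-backup-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Decrypt", "kms:DescribeKey"],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000"
    }
  ]
}
```

Note there’s no s3:PutObject or s3:DeleteObject anywhere: the recovery instance can read and decrypt the backups but never modify them.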

Oh my goodness but I can’t believe it all hooked together.

Express Plus recovery is in – seamless recovery from backups via child stack. I’ll do a longer writeup later; for now, you can verify by:

  • Launch the regular Express Plus stack.
  • Make changes (create a patient, add a document to the patient, change the password on the admin user).
  • Manually connect to the instance and force backups to run (sudo bash and then /etc/cron.daily/duplicity-backups).
  • Verify backups have been created in the S3 bucket (probably from the AWS console; it’s what I did).
  • Launch the recovery template from the devops repo and answer the questions it asks with your stack’s KMS key ARN and your stack’s bucket name.
  • When the new stack comes up, verify you can log in with your new password and retrieve the document you uploaded to the patient.

Current project queue, let’s see how far we get this weekend:

  • Starting work on scripted ingestion of OpenEMR backups (the tarball) directly into Lightsail because it can’t be /that/ hard to get right if I’ve got the docker-compose file with the answers right there.
  • You know what, I’ve never messed with Amazon’s SSM Agent and maybe it’d be nice if people could one-click their way into their instances. (But will that cause HIPAA concerns? I may learn how to do it just to decide that no, I’m not going to allow it.)
  • Proof-of-concept Lightsail fork for ARM instances (no xtradb backups, sorry) for AWS Graviton and Raspberry Pi.

Well, fancy that.

The Ubuntu 20.04 AMI I use for Lightsail supports EC2 Instance Connect out of the box – buuuut there’s a catch about security groups! If you’re properly careful, your ssh port is only open to your personal IP, and EIC sessions launched from the AWS Console don’t come from your browser; they come from IPs at the mothership. You need to identify the IP range the requests will come from* if you want the connection to go through, or you need to download the EC2 Instance Connect CLI for your desktop.

You could also use the SSM stuff, but that’ll take an assigned IAM role and if you’re using Express Plus or Standard I’ve already got such a role in place, so you’ll need to add policies.

(*): Troubleshoot connecting with EC2 Instance Connect, or just curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region=="us-east-1") | select(.service=="EC2_INSTANCE_CONNECT") | .ip_prefix' if you want to get on with things.
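If you want to see what that filter actually returns without hitting the network, here it is run against a trimmed local sample of ip-ranges.json (the two entries below are copied shape-for-shape from the real file, but treat the exact prefixes as illustrative):

```shell
# Trimmed stand-in for https://ip-ranges.amazonaws.com/ip-ranges.json:
cat > /tmp/ip-ranges-sample.json <<'EOF'
{"prefixes":[
 {"ip_prefix":"18.206.107.24/29","region":"us-east-1","service":"EC2_INSTANCE_CONNECT"},
 {"ip_prefix":"3.16.146.0/29","region":"us-east-2","service":"EC2_INSTANCE_CONNECT"}
]}
EOF
# Same filter as above, pointed at the sample file:
jq -r '.prefixes[] | select(.region=="us-east-1") | select(.service=="EC2_INSTANCE_CONNECT") | .ip_prefix' /tmp/ip-ranges-sample.json
```

The surviving prefix is what you’d punch into the security group’s ssh ingress rule.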


Oh god, I can’t believe this is a thing, I cannot believe this is a thing

mkdir /tmp/backup-ingestion
cd /tmp/backup-ingestion
tar -xf ~/emr_backup.tar --no-same-owner

# restore site
mkdir webroot
tar -zxf openemr.tar.gz -C webroot
rm openemr.tar.gz
# still in progress

# restore database
gzip -d openemr.sql.gz
echo 'USE openemr;' | cat - openemr.sql | docker exec -i $(docker ps | grep _openemr | cut -f 1 -d " ") /bin/sh -c 'mysql -p"$MYSQL_ROOT_PASS"'
rm openemr.sql

Still a WIP, but I was 100% not aware that “piping a SQL log into a container” was a workable decision. I figured I’d end up doing a buncha file copies, but no.
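The reason it works is mundane once you see it: docker exec -i keeps the host’s stdin attached to the exec’d process, so a host-side pipe is indistinguishable from typing inside the container. The same plumbing, demonstrated docker-free (sh -c 'cat' stands in for the container-side mysql):

```shell
# Real thing: cat dump.sql | docker exec -i <container> sh -c 'mysql -p"$PASS"'
# Same stdin inheritance through an exec'd shell, no docker required:
printf 'SELECT 1;\n' | sh -c 'cat'    # prints: SELECT 1;
```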


Okay! First draft of a single-site OpenEMR backup file import into Lightsail.

This still can’t be used (hardcoded file names, bleah) but it’s what I’m thinking, if anybody finds it useful to read or correct. Cleanup and final testing are ongoing.


# Some notes.
# One: We do not try to handle multisite, and we assume your openemr DB is named openemr. Will make that a parameter later.
# Two: Currently I target only the sites directory, which may leave customization behind. You'll want to extend this recovery to
#      pick up those changes (and maybe rerun composer?) but I'm not familiar with that part. I just know copying node_modules
#      *has* to be the wrong decision.
# Three: You'll want a /lot/ of room to unpack. An eight-gig instance won't cut it.

mkdir /tmp/backup-ingestion
cd /tmp/backup-ingestion
tar -xf ~/emr_backup.tar --no-same-owner

# retrieve site
mkdir webroot
tar -zxf openemr.tar.gz -C webroot
rm openemr.tar.gz
docker cp $(docker ps | grep _openemr | cut -f 1 -d " "):/var/www/localhost/htdocs/openemr/sites/default/sqlconf.php webroot/sites/default
docker cp webroot/sites/default $(docker ps | grep _openemr | cut -f 1 -d " "):/var/www/localhost/htdocs/openemr/sites/default-recovery

# straighten out internal permissions
docker exec -i $(docker ps | grep _openemr | cut -f 1 -d " ") /bin/sh -s << "EOF"
cd /var/www/localhost/htdocs/openemr/sites
chown -R apache:root default-recovery
chmod -R 400 default-recovery
chmod 500 default-recovery
chmod -R 500 default-recovery/LBF default-recovery/images
chmod -R 700 default-recovery/documents
mv default /root/default-old
mv default-recovery default
EOF

# restore database
gzip -d openemr.sql.gz
echo 'USE openemr;' | cat - openemr.sql | docker exec -i $(docker ps | grep _openemr | cut -f 1 -d " ") /bin/sh -c 'mysql -p"$MYSQL_ROOT_PASS"'
rm openemr.sql

# swift kick to PHP
docker restart $(docker ps | grep _openemr | cut -f 1 -d " ")

cd /
rm -rf /tmp/backup-ingestion

echo Restore operation complete!