Alright, tomorrow I’ll stop what I’m doing and switch to testing Lightsail installation with 6.1.0 – I think it’s fine but I have to check. After that I’ll update Express Plus (which works but needs feedback from AWS because of weird CFN deployment hitches).
After both of those are done (and while I’m waiting for feedback) I’m going to tackle Standard, since it’s broken right now, and get it updated to use t3 instances and Ubuntu 20. Express will be last.
Got Standard candidate built but won’t have time to test it tonight. Python is as usual the worst thing that ever happened to system administration.
I did realize something, though – all that trouble we went through to get the docker containers cross-compiling for ARMv8 and the Raspberry Pi? That does actually mean our containers are compatible with Amazon’s custom Graviton2 ARM instances, which are the lowest cost-per-hour they’ve got available.
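For anyone curious, the build side of that is nothing exotic: a multi-arch buildx invocation, roughly like this (the image tag is a placeholder, not our real repo, and it assumes a buildx builder is already configured):

# Rough shape of the multi-arch build; assumes a buildx builder exists.
# "example/openemr:multiarch" is a placeholder tag.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t example/openemr:multiarch \
  --push .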
I should do all the actual work first but I’m sorta tempted to put something together for it.
Alright! OpenEMR 6.1.0 Standard is now ready… toooooo be submitted to AWS Marketplace in several pieces which will require human review and oversight before release, which I’ll do today or tomorrow. In the meantime, however, I’ll make available this prerelease template that’s using an AMI served up from our own account instead of Marketplace, if anybody just absolutely has to get moving ASAP.
Just hand this template to CloudFormation (in us-east-1 only – sorry, them’s the breaks) and you should get going.
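If you'd rather script it than click, something like this should do it; the stack name and template URL are placeholders, and the IAM capability flag is my assumption about what the template will ask for:

aws cloudformation create-stack \
  --region us-east-1 \
  --stack-name openemr-prerelease \
  --template-url https://example-bucket.s3.amazonaws.com/openemr-standard.json \
  --capabilities CAPABILITY_IAM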
Express has a valid image getting scanned at Marketplace now, but that's only the first step of the process and I don't know if it'll be done tonight, so I expect to send Express in for review tomorrow.
Finally got some automation in place for Express image building, though. Not /all/ of it, but at least it's a script now, with none of the human steps it used to take.
In the years since CloudFormation became a thing, AWS has finally cracked the problem of CF not being able to load templates from S3 buckets in other regions; that limitation is why I used to need as many buckets as there /were/ regions, which was very annoying, thanks.
I’ve revised the Express Plus readme to be one link and a template URL instead of enumerating every region; Express Plus is now finished (though Jason’s not getting what he wants yet).
Okay, back and forth on some submission issues but I cleared those hurdles, Express has now been submitted to AWS for posting.
Current standing:
Lightsail (shared hosting): 6.1.0 OK
Express: In process at Marketplace
Express Plus: 6.1.0 OK
Standard: In process at Marketplace
Next steps:
Express Plus recovery pathing via a specified parent's S3/KMS (policy sketch below)
Graviton support for ARM (just as a proof of concept)
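For that recovery-pathing item, the grant I have in mind is read-only on the parent's bucket and key, something like this sketch (role name, bucket, and key ARN are all placeholders):

aws iam put-role-policy \
  --role-name RecoveryInstanceRole \
  --policy-name ReadOnlyParentBackups \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      { "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::parent-backup-bucket",
                     "arn:aws:s3:::parent-backup-bucket/*"] },
      { "Effect": "Allow",
        "Action": "kms:Decrypt",
        "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID" }
    ]
  }'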
Is it finally time to retire Full Stack? I'm not sorry I built it (it was the foundation for everything else), but nobody uses it, I never got a request to update it to 6.0.0, and I'm not sure I would've honored one if I had, because Standard does 95% of what Full Stack does, and does it better; everything but the clustering, basically.
Alright, Standard 6.1.0 is up in Marketplace. I don’t think I’ll get much on Express Plus recovery done today or tomorrow but I’d at least like to do some stack testing this weekend.
Okay, so, a lot of things coming: some of which nobody but me will care about, some of which nobody but me will care about in a very different way. (None of these are committed; they're in a dev branch I'm working with.)
Lightsail can now be launched from, and pointed at, a devops branch, for testing that doesn't involve pushing live and finding out.
Lightsail has an “empty” launch mode now that will skip autoconfiguration.
Lightsail uses better technology for waitloops and won't exit the launcher script before system configuration is done, which will mean no more dumb "wait for it and it'll be done eventually" issues: if the script is running, installation isn't done yet. (Sketch after this list.)
Express Plus now uses gp3 volumes, not gp2.
Express Plus (and everything that leans on Lightsail) should have the same trick: the stack won't finish until /everything's done/.
Express Plus's UTF-16 problems, born of PowerShell and Python conflicts, are now understood and worked around.
Express Plus developer mode will now ask for a devops branch to launch from.
Express Plus recovery mode is coming – just like Standard, you should be able to specify an EP parent stack key and bucket, and have recovery mode load those backups into the instance. The recovery stack has its own, separate key and bucket, and only read access to the parent's two resources, so the backups themselves are inviolate and can't be injured by the recovery instance. As long as your instance doesn't call out, it should be safe to spawn and examine and delete – or to migrate to and leave the old instance behind, if you're trying to (say) migrate regions.
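Here's the waitloop sketch promised above; nothing fancy, just the launcher refusing to exit until the installer drops a completion flag. The flag path is made up for illustration:

# Launcher-side waitloop: block until system configuration signals done.
# /root/.openemr-config-complete is a placeholder path, not the real flag.
until [ -f /root/.openemr-config-complete ]; do
    sleep 15
done
echo "Configuration finished; safe to exit the launcher."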
Current project queue; let's see how far we get this weekend:
Starting work on scripted ingestion of OpenEMR backups (the tarball) directly into Lightsail because it can’t be /that/ hard to get right if I’ve got the docker-compose file with the answers right there.
You know what, I've never messed with Amazon's SSM Agent, and maybe it'd be nice if people could one-click their way into their instances; there's a CLI sketch after this list. (But will that cause HIPAA concerns? I may learn how to do it just to decide that no, I'm not going to allow it.)
Proof-of-concept Lightsail fork for ARM instances (no xtradb backups, sorry) for AWS Graviton and Raspberry Pi.
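And the CLI sketch for the SSM item: once the agent and role are squared away, connecting is a one-liner (instance ID is a placeholder, and you need Amazon's session-manager-plugin installed locally):

aws ssm start-session --target i-0123456789abcdef0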
The Ubuntu 20.04 AMI I use for Lightsail supports EC2 Instance Connect out of the box – buuuut there's a catch about security groups! If you're properly careful, your ssh port is only open to your personal IP, and EIC sessions launched from the AWS Console don't come from your browser; they come from IPs at the mothership. You need to identify the IP range the requests will come from* if you want the connection to go through, or you need to download the console Instance Connect application for your desktop.
You could also use the SSM stuff, but that'll take an assigned IAM role; if you're using Express Plus or Standard I've already got such a role in place, so you'll just need to add the SSM policies to it.
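If you do go the SSM route on Express Plus or Standard, attaching Amazon's managed policy to the existing instance role should cover it; the role name here is a placeholder for whatever your stack actually created:

aws iam attach-role-policy \
  --role-name YourStackInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore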
(*): See "Troubleshoot connecting with EC2 Instance Connect" in the AWS docs, or just run curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region=="us-east-1") | select(.service=="EC2_INSTANCE_CONNECT") | .ip_prefix' if you want to get on with things.
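Gluing that one-liner straight into the fix, for the impatient (the security group ID is a placeholder; us-east-1 publishes what is usually a single EIC prefix, but the loop doesn't care):

# Open ssh to every EC2_INSTANCE_CONNECT prefix published for us-east-1.
for cidr in $(curl -s https://ip-ranges.amazonaws.com/ip-ranges.json |
    jq -r '.prefixes[] | select(.region=="us-east-1" and .service=="EC2_INSTANCE_CONNECT") | .ip_prefix'); do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr "$cidr"
done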
Still a WIP, but I was 100% not aware that “piping a SQL log into a container” was a workable decision. I figured I’d end up doing a buncha file copies, but no.
Okay! First draft of a single-site OpenEMR backup file import into Lightsail.
This still can’t be used (hardcoded file names, bleah) but it’s what I’m thinking, if anybody finds it useful to read or correct. Cleanup and final testing are ongoing.
#!/bin/bash
# Some notes.
# One: We do not try to handle multisite, and we assume your openemr DB is named openemr. Will make that a parameter later.
# Two: Currently I target only the sites directory, which may leave customization behind. You'll want to extend this recovery to
# pick up those changes (and maybe rerun composer?) but I'm not familiar with that part. I just know copying node_modules
# *has* to be the wrong decision.
# Three: You'll want a /lot/ of room to unpack. An eight-gig instance won't cut it.
# Find the OpenEMR container ID once; every later step reuses it.
OE_CONTAINER=$(docker ps | grep _openemr | cut -f 1 -d " ")
mkdir -p /tmp/backup-ingestion
cd /tmp/backup-ingestion || exit 1
tar -xf ~/emr_backup.tar --no-same-owner
# retrieve site
mkdir webroot
tar -zxf openemr.tar.gz -C webroot
rm openemr.tar.gz
# Carry the live container's DB credentials into the recovered site, then push the whole site in.
docker cp "$OE_CONTAINER":/var/www/localhost/htdocs/openemr/sites/default/sqlconf.php webroot/sites/default
docker cp webroot/sites/default "$OE_CONTAINER":/var/www/localhost/htdocs/openemr/sites/default-recovery
# straighten out internal permissions
docker exec -i "$OE_CONTAINER" /bin/sh -s << "EOF"
cd /var/www/localhost/htdocs/openemr/sites
chown -R apache:root default-recovery
chmod -R 400 default-recovery
chmod 500 default-recovery
chmod -R 500 default-recovery/LBF default-recovery/images
chmod -R 700 default-recovery/documents
mv default /root/default-old
mv default-recovery default
EOF
# restore database
gzip -d openemr.sql.gz
echo 'USE openemr;' | cat - openemr.sql | docker exec -i "$OE_CONTAINER" /bin/sh -c 'mysql -p"$MYSQL_ROOT_PASS"'
rm openemr.sql
# swift kick to PHP
docker restart "$OE_CONTAINER"
cd /
rm -rf /tmp/backup-ingestion
echo "Restore operation complete!"
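To try the draft, copy your emr_backup.tar into root's home directory on the instance, make sure the OpenEMR container is up, and run the script as root. Test data only, obviously; the hardcoded names above are exactly why this isn't usable yet.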