How to set up AWS Standard-equivalent on Google Cloud?

I have taken a look at the AWS Standard installation and want to try something similar on Google Cloud using Cloud SQL and Cloud Storage. I muddled around until I almost got it, but it only worked for a second.

Steps I took:

  1. Set up the Cloud SQL instance and enabled private networking. Copied the private IP.
  2. Set up the VM on Debian with 2 persistent disks: logvolume01 and sitevolume
    a. Also set up the custom hostname to be openemr.{mydomain}.com and pointed my domain to the public IP of the VM by adding an A record at my nameserver.
  3. Set up Docker and Docker Compose. Used this guide
  4. Mounted the disks and edited fstab.
    a. Encountered errors with hydrating the Docker volumes when I tried to bind them to the mount locations of the disks. They seemed to only hydrate when mounted as named volumes, so I mounted the disks where Docker mounts named volumes, /var/lib/docker/volumes/{VOLUME NAME}/, instead of at /mnt/disks/{VOLUME NAME}. Some related documentation of this issue
  5. Copied the following docker-compose.yaml onto my VM and spun up my container.
version: '3.1'
services:
  openemr:
    restart: always
    image: openemr/openemr:7.0.0
    ports:
      - 80:80
      - 443:443
    volumes:
      - logvolume01:/var/log
      - sitevolume:/var/www/localhost/htdocs/openemr/sites
    environment:
      MYSQL_HOST: ${private ip copied from step 1}
      MYSQL_ROOT_PASS: root
      MYSQL_USER: openemr
      MYSQL_PASS: openemr
      OE_USER: admin
      OE_PASS: pass
volumes:
  logvolume01: {}
  sitevolume: {}
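For reference, the disk setup in steps 2 and 4 can be sketched as shell. The device name and the volume path are assumptions (verify with `lsblk` on your VM), and the destructive format command is left commented out:

```shell
# Assumed device name; verify with `lsblk` before formatting anything.
DEV=/dev/sdb
# Mounting at Docker's named-volume path so the image hydrates it on first start.
MNT=/var/lib/docker/volumes/sitevolume
# One-time, destructive format and initial mount (real run only):
#   sudo mkfs.ext4 -m 0 "$DEV"
#   sudo mkdir -p "$MNT" && sudo mount -o discard,defaults "$DEV" "$MNT"
# Persist the mount across reboots by UUID; nofail keeps the VM booting
# even if the disk is detached.
uuid=$(blkid -s UUID -o value "$DEV" 2>/dev/null || echo EXAMPLE-UUID)
fstab_line="UUID=$uuid $MNT ext4 discard,defaults,nofail 0 2"
echo "$fstab_line"   # append this to /etc/fstab, then `sudo mount -a` to verify
```

Mounting by UUID rather than by device name matters on GCE, where /dev/sdX ordering can change across reboots.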

OpenEMR Version
I’m using OpenEMR version 7.0.0

Operating System
Google Cloud VM - Debian GNU/Linux 11 (bullseye)

What happened/Issues

  1. I was able to get it up and running. Logged in once. And then pow! Website complained about a missing site_id the second time I tried to log in :cry:
  2. Also… I’m not sure what the AWS buckets were used for in the AWS Standard installation… backups? But I wanted to use a Cloud Storage bucket for the patient documents because our practice has a ton of scanned documents. That makes more sense to me given the large file sizes of some medical …
  3. How would I get SSL encryption ???
  4. I’m lost as to how I would upgrade in the future without deleting my database tables. Whenever I tried to create a new Docker container, it would complain that my tables were already created :frowning:

Sorry for the long post and many questions.

Sometimes I get the site_id thing when I’m building a bunch of machines and trying new things, and I still don’t get why: one login, then womp-womp, just as you describe. I don’t see it when I log in with incognito, and nobody else gripes, so I assume there’s just something awful going on with my browser that cookie-clearing would help with.

AWS buckets are used for daily backups. Work to use S3 for first-class patient document storage is ongoing.

You can get SSL in one of four ways:

  • The OpenEMR container is an Apache container. Add SSL certs and configuration to it and bounce it.
  • If your DNS is already correctly configured, add DOMAIN and EMAIL environment variables to your docker-compose file and re-up the stack to ask Let’s Encrypt to acquire and install a cert for you.
  • Add a new container to the stack just to handle SSL and reverse-proxy it.
  • Use a Google loadbalancer that supports SSL (something akin to Amazon’s Application Load Balancer).
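For the Let’s Encrypt route, the change is just two environment variables on the openemr service; a sketch with placeholder values (your real domain and a contact email go here):

```yaml
services:
  openemr:
    environment:
      DOMAIN: openemr.example.com   # must already resolve to the VM's public IP
      EMAIL: admin@example.com      # contact address for certificate expiry notices
```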

You won’t be able to create a new docker container without a fresh DB, that’s correct. However, changing the version of the container in your docker-compose file (from 7.0.0 to 7.0.1, say, when the time comes) and rerunning your up will load in a new version of our container, and our containers handle database upgrades as part of the patching process.
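A minimal sketch of that upgrade flow, assuming your compose file pins the image tag as in the post above (the file path and tags are placeholders, and the final `docker compose up` is shown rather than run):

```shell
# Work on a scratch copy so this sketch is safe to run anywhere.
cp_file=/tmp/docker-compose.yaml
cat > "$cp_file" <<'EOF'
services:
  openemr:
    image: openemr/openemr:7.0.0
EOF
# Bump the pinned tag in place. The new container runs its database
# migrations on first boot, so the existing volumes are kept as-is.
sed -i 's|openemr/openemr:7.0.0|openemr/openemr:7.0.1|' "$cp_file"
grep 'image:' "$cp_file"
# then, against your real file: docker compose up -d
```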

Finally, a warning: OpenEMR Standard has spent a lot of time on the concept of HIPAA eligibility, including making sure we’ve got audit trails, using a unique, managed encryption key as part of that audit, and going to a lot of trouble to ensure that all of our data is encrypted at rest and in motion. If I had to guess, you’re most likely to run into trouble with your SQL connection not being secure because it’s not using the SSL certificate it could be using. Please consider carefully reviewing all the parts of openemr-devops/packages/standard at master · openemr/openemr-devops · GitHub, especially the scripts and the Python stackbuilder file. It’s a long list of all the steps I thought were necessary for HIPAA eligibility in AWS, and you may find it helpful to think about the kinds of problems I was trying to solve (and the hooks we developed to help solve them, like where you should put a SQL server’s SSL certificate for it to get found).
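One quick way to check the encryption-in-motion piece is to ask the server whether the session negotiated a cipher. A sketch assuming the mysql client is installed and MYSQL_HOST is the private IP from step 1 (the command is printed here rather than executed):

```shell
MYSQL_HOST=10.0.0.3   # placeholder: your Cloud SQL private IP
q="SHOW STATUS LIKE 'Ssl_cipher';"
# --ssl-mode=REQUIRED makes the MySQL client refuse an unencrypted
# session (the MariaDB client spells this --ssl instead).
cmd="mysql -h $MYSQL_HOST -u openemr -p --ssl-mode=REQUIRED -e \"$q\""
echo "$cmd"   # a non-empty Ssl_cipher in the output means TLS is in use
```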

Thank you so much for the reply! GCP has snapshotting built in, so I’ll be using that instead of duplicity + a bucket.

I was able to relaunch everything and it works. One thing I changed to match the Standard install is that I mounted one disk as my Docker volume and moved Docker onto it, so the snapshot may be wasteful. I will probably go back to mounting a disk as the sites volume.

For HIPAA eligibility: I am using private networking access for Cloud SQL, which I should confirm is HIPAA compliant, but I presume it is because I executed a Google Cloud BAA.

I will look into mounting a bucket with FUSE where documents are stored! Because most documents aren’t edited, only viewed and uploaded, I think it should be a more cost-efficient storage method.

Happy to help!

Standard mounts a disk like that because the AMI that you spawn from the Marketplace has no useful drive encryption at rest, because of how things come from the Marketplace. The volume I provide does, though, so I splash Docker onto it during first boot. If you’ve got that covered another way you should be fine.

I wasn’t convinced private networking (in my case, an AWS private VPC) met encryption-in-motion requirements when the SQL server could be in a whole 'nother building, but I’m not your compliance officer.

Fuse… well, Good Luck With That. Just looking at the reviews for it turned me off, and so far I know of zero OpenEMR success stories that employ it. I agree with you that it sounds like it oughta be a good fit, and I agree that it’s not going to be random access, just blind writes and buffered reads, but nobody’s come back to me and told me how it was a big hit for them, and I have heard from people who had to back off it.

Whew what a journey. Updates:

  • Enabled SSL encryption for the cloud database and am using cloud-sql-proxy to connect to the database securely
  • Mounted a bucket successfully for the /sites/default/documents folder via fuse
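For anyone following along, those two updates can be sketched as commands. The instance connection name, bucket, and documents path are placeholders (the commands are printed rather than executed, since both tools need GCP credentials):

```shell
# Cloud SQL Auth Proxy (v2 syntax): a local listener on 3306 that
# carries SQL traffic to the instance over an encrypted tunnel.
proxy_cmd="cloud-sql-proxy --port 3306 my-project:us-central1:my-instance"
# gcsfuse: mount the bucket at the documents folder. --implicit-dirs
# makes folder-like object prefixes show up as directories.
fuse_cmd="gcsfuse --implicit-dirs my-docs-bucket /var/lib/docker/volumes/sitevolume/default/documents"
echo "$proxy_cmd"
echo "$fuse_cmd"
```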

Questions I have:
Are all the files/folders within the sites/default/documents folder safe to store in bucket storage without slowing down the site? Seems like some encryption/decryption keys are stored there and would need to be accessed often…

They’re not safe, for exactly the reason you think: some of those files have to be read on every pageload. S3 is suited for patient documents (created once, read several times over a patient’s lifetime), and it also works great for images and compiled framework packages read by a user once per session (and still very suited for caching beyond even that timeframe).

Do you know how I could change the $GLOBALS['oer_config']['documents']['repository'] value?

Currently it is set to $GLOBALS['OE_SITE_DIR'] . "/documents/";

Am I correct in assuming this is what controls where patient docs end up? I’m not sure how else to isolate patient documents from other files in the documents folder.

You’re further into OpenEMR’s internals than I’ve gone, sorry, so I don’t know how to answer this, but I’m very interested in your results.