I first played with OpenEMR back during the pandemic and want to experiment further - I’m not ready to deploy it just yet.
I have followed the directions for the Docker container solution; however, since I am rolling it out on an AWS EC2 instance, I wanted to store the …/sites directory on an S3 drive. I am able to mount the bucket as /mnt/openemrbucket, and there is a folder called sites within it. I have updated the .yaml file to try to bind the /var/www/html/openemr/sites/ folder and its subdirectories to the S3 mount on the EC2 instance, but the container still seems to be running off its internal filesystem.
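For reference, the relevant part of my compose file looks roughly like this (simplified; the host path is the S3 mount described above, and the container path is where my image keeps sites):

```yaml
services:
  openemr:
    # ... rest of the stock service definition ...
    volumes:
      # bind the sites folder from the S3 bucket mounted on the EC2 host
      - /mnt/openemrbucket/sites:/var/www/html/openemr/sites
```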
I am happy to share the full .yaml file if it is useful - it is currently packed with commented-out lines from the defaults and various experimental configurations, so it might require some sanitisation first.
Has anyone else had experience doing this? Or opinions about the wisdom of this approach? My main concern is persistence of the documents and setup/configurations, which I believe is the downside of Docker.
S3 doesn’t work like a “drive” – it’s key-value object storage that lives somewhere else. EFS is a drive; S3 is not. There are tools that claim to bridge that gap and present S3 like a drive, but I’ve never been impressed with their performance or stability.
You do have some options, though.
One, consider narrowing the scope of the shared partition. While trying to serve OpenEMR from S3 will never be performant, one of these “drives” that just served the patient documents (and not OpenEMR’s templates)… I dunno, might work? Any solution that puts something like sqlconf.php anywhere other than the local machine is just a catastrophe waiting to happen.
Two, if document storage is your primary concern, consider spinning up a CouchDB instance and pointing OpenEMR at it for documents (rough sketch after this list).
Three, consider Amazon EFS, which is a networked file share that’ll do what you’re wanting to see done here.
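If you do go the EFS route, Compose can mount it as an NFS volume directly, so you don’t even need to mount it on the host first. A rough sketch – the filesystem ID and region are placeholders, the mount options are the usual EFS ones, and you’d adjust the container path to wherever your image actually keeps sites:

```yaml
services:
  openemr:
    # ... rest of your existing service definition ...
    volumes:
      - sitesdata:/var/www/html/openemr/sites

volumes:
  sitesdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=fs-0123456789abcdef0.efs.us-east-1.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
      device: ":/"
```

And if the CouchDB route appeals, the service itself is easy to stand up next to OpenEMR – something like the sketch below (placeholder credentials), then point OpenEMR at it in the document-storage globals so patient documents land in CouchDB instead of the sites folder:

```yaml
services:
  couchdb:
    image: couchdb:3
    restart: always
    environment:
      COUCHDB_USER: admin          # placeholder credentials - change these
      COUCHDB_PASSWORD: changeme
    volumes:
      - couchdbdata:/opt/couchdb/data

volumes:
  couchdbdata: {}
```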
As covered by Asher above, you really don’t want to do that. In addition to the likely performance issues, you may be unhappy when you realize that you’re being billed for every single file operation that happens in your sites folder (documents, keys, logs, portal documents, EDI/ERA data, labs, etc.). AWS is going to make you pay for disk I/O regardless, but S3 makes it even worse: if I remember correctly, depending on your S3 storage class, you can also end up with extra charges for too many requests during a billing period.