I'm trying to migrate from one hosting company to AWS and trying to figure out which AWS cloud instance will support patient documents greater than one GB.
I have looked through all of the documentation and cannot find any mention of data storage in the Docker volumes. We need to move 1.5GB of data, but the micro is too small, and there is nothing that tells us which size to pick.
We increased the size of the host storage, but that did nothing for the container storage, which is still at 1.1GB.
There is no easy way that I have found to increase that storage without breaking the image and rebuilding it.
So, does anyone know where to find the storage sizes of the AWS cloud instances?
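For reference, checking the host size versus the container size can be done with something like this (a rough sketch; the container name `openemr` is just a placeholder, use whatever `docker ps` shows on your instance):

```
# On the EC2 host: how big is the host filesystem and how full is it?
df -h /

# What Docker itself is using for images, containers, and volumes
docker system df

# Find the running container, then check the filesystem the
# container actually sees from the inside
docker ps
docker exec -it openemr df -h
```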
The GiB figure is how much memory the EC2 instance has (dynamic, for computing, not permanent storage).
The database and its storage are a separate animal … you can choose how big you want that to be (they give you several options) when you set up your virtual instance.
Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes – all while paying a low price for only what you provision.
Amazon EBS is designed for application workloads that benefit from fine tuning for performance, cost and capacity. Typical use cases include Big Data analytics engines (like the Hadoop/HDFS ecosystem and Amazon EMR clusters), relational and NoSQL databases (like Microsoft SQL Server and MySQL or Cassandra and MongoDB), stream and log processing applications (like Kafka and Splunk), and data warehousing applications (like Vertica and Teradata).
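To make the "scale your usage up or down within minutes" part concrete, growing the EBS volume behind an Ubuntu instance and then growing the filesystem into it might look roughly like this (a sketch only; the volume ID, device name, partition number, and filesystem type are placeholders that vary by AMI and instance, and older setups may still need the stop-and-scale route discussed further down):

```
# Grow the EBS volume that backs the instance
# (volume ID and target size are placeholders)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 30

# On the instance: confirm the block device shows the new size,
# then grow the partition and the filesystem into the new space
lsblk
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
```

Note that this only grows the host filesystem; as the rest of the thread discusses, the Docker container does not necessarily see that new space on its own.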
Hi Sherwin,
I guess my question to you is: why do you feel the need to use Docker? I'll catch flak for this, but I just never understood the fascination with Docker for live sites.
Think about it: you will be running an OS layer running another OS to run your application, both sharing resources. On top of that, for any maintenance on the application it becomes a pain to move files in and out of the container. Even getting a terminal inside the container is a pain.
To me, Docker was mainly meant to be a development tool. It is perfect for Brady's demo farm or for quickly setting up versions of testing environments, but for running live sites it makes no sense to me. Your question is a case in point.
Against my advice, I helped someone set up a large practice on AWS using Docker, and from that experience we have seen constant instance upgrades with rising costs. More than anything, I found it to be a hassle to work on.
If you insist on running on AWS, then why not just set up a T2 (sized depending on site traffic) Ubuntu machine and leave Docker out of it? Sorry, Docker fans, but you will never convince me that Docker is appropriate in this use case.
I share your view. However, like you, I get help requests. I really dug in this time to look into Docker, and it is good to a point, but as you stated, the box is a box. I am not fond of that either.
So, my questions come from trying to help out with an AWS micro setup that was brought to me.
My whole point in asking the question is to try and elicit a response about documentation that I can't find, so I can make a good decision/selection on which Docker container to choose.
Sometimes I post conversations not for me but for those that may be thinking like me but won’t write a post.
I think you missed my point. I'm saying just run a machine in the cloud without Docker. I would like to know what you deem a plus of running Docker.
For production purposes, one big benefit of the OpenEMR Docker image is that it has allowed us to offer an assortment of cloud packages that are all based on the same modular infrastructure. So rebuilding them for patches, new releases, etc. is much easier (much easier than, for example, the resources required to support the OpenEMR Ubuntu package).
A dockerized OpenEMR is running in a known state, blindly reproducible on a thousand machines and configured per best practices. Anybody who pulls an image can expect it to run, anybody who tries to fix an image knows exactly what they’re going to find, and anybody who installs a second image as part of a recovery process will have the exact same image they had the first time, even if details of the host machines don’t precisely match up.
Docker brings versioned, testable reproducibility to software installations of all stripes, and we lean on that when we're offering automated installs via Lightsail, the AWS Marketplace options, the virtualized appliance, and our solution in Google Cloud. Without Docker most of those wouldn't exist.
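As a concrete illustration of that reproducibility, the same couple of commands produce the same known state on any machine (a minimal sketch only; the image name is the one published on Docker Hub, but the tag, port mapping, and the database/environment configuration the image expects are deliberately omitted here, so check the image documentation before using this for anything real):

```
# Pull the published image and start a container from it; every host
# that runs these commands gets the same image in the same state
docker pull openemr/openemr
docker run -d --name openemr -p 80:80 openemr/openemr
```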
This is a prologue: after writing all the stuff below, what I did not want to get lost is the fact that there is missing information. What size is each Docker container? Right now I have to load each one and test it to see what it is, when I would be better served by having the information listed for me.
The information that I am trying to find is the capacity limits of the different Docker containers. As an IT pro, I need to know what I am getting before I suggest or load up an image. All I have been trying to say is that this information is nowhere to be found. I will have to load each and every instance of Dockerized OEMR and test it to find out. That should not be necessary.
I know you can increase the size of the instance by stopping and scaling it. But again, there is no documentation as to what size the Docker container will be when that is done. AWS documents what you will be scaling the host machine to, but we don't document what the Docker container size is going to be once the scaling is over.
So, I have a client who has 30GB of patient data files from the documents folder that I need to import into the Docker container. Which Docker image do I use to hold those 30GB of data files?
That was the situation I was in. We stopped the instance, increased the hard drive/storage from 8GB to 30GB on the host machine, and then tried to import the files, and still got a "not enough space" error. So the host drive size increased but the Docker container did not scale; the Docker container stayed at 1.0GB.
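One thing worth checking in that situation is how Docker is storing the container's data, because that determines whether the container can ever see the host's new space (a diagnostic sketch; the container name `openemr` is a guess, use the name `docker ps` reports):

```
# Which storage driver is in use, and where does Docker keep its data on the host?
docker info --format '{{ .Driver }} {{ .DockerRootDir }}'

# What, if anything, is mounted into the container from the host?
docker inspect --format '{{ json .Mounts }}' openemr
```

If the driver turns out to be devicemapper, each container's filesystem is a fixed-size thin device controlled by the daemon's `dm.basesize` storage option rather than by the host disk, which would match a container stuck at the same size after the host grows; with overlay2, the container writes straight onto the host filesystem and should see the extra space.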
@jesdynf, no, that was not done.
In all my installs on AWS where I have increased Ubuntu or Windows drive capacity, I have never had to do that separately. I have resized drives on the fly and not had to go back and separately grow the partition. I will keep this in mind next time.
But still, everyone keeps talking about the host machine and the size of the host drive, which, to my knowledge (which is really limited), does not automatically increase the size of the Docker container.
Please, somebody, talk about resizing the Docker container and not the host machine.
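For what it's worth, one common way to take the container's own size out of the equation is to keep the document data on the host and bind-mount it into the container, so that growing the host volume is the only resize that ever matters. A rough sketch under those assumptions (the host path, the in-container sites path, and the container name are all guesses for illustration, and the database/environment settings are omitted; check the image's documentation for the real paths before doing this on live data):

```
# Copy the existing sites/documents data out of the running container
# onto the (now larger) host disk
sudo mkdir -p /srv/openemr
docker cp openemr:/var/www/localhost/htdocs/openemr/sites /srv/openemr/

# Recreate the container with that host directory bind-mounted in,
# so everything written under sites/ lands on the host filesystem
docker stop openemr && docker rm openemr
docker run -d --name openemr -p 80:80 \
  -v /srv/openemr/sites:/var/www/localhost/htdocs/openemr/sites \
  openemr/openemr
```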