Clarification on AWS cloud storage sizes


(Sherwin Gaddis) #1

Trying to migrate from one hosting company to AWS and trying to figure out which AWS cloud instance will support patient documents greater than one GB.

I have looked through all of the documentation and cannot find any mention of data storage in the docker volumes. We need to move 1.5GB of data, but the micro is too small, and nothing tells us which size to pick.

We increased the size of the host storage, but that did nothing for the container storage, which is still at 1.1GB.
I have found no easy way to increase the storage of the AWS cloud instance without breaking the image and rebuilding it.
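One way to see the mismatch described here is to compare free space on the host with free space inside the container. The container name `openemr` below is an assumption; substitute whatever name `docker ps` shows:

```shell
# Free space on the host's root filesystem:
df -h /

# Free space as seen from inside the running container
# ("openemr" is a hypothetical container name):
#   docker exec openemr df -h /
```

If the two numbers differ after growing the EBS volume, the host filesystem (or Docker's storage) has not picked up the extra space yet.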

So, does anyone know where to find the storage sizes of the AWS cloud instances?

(Ralf Lukner MD PhD) #2
| Model | vCPU* | CPU Credits / hour | Mem (GiB) | Storage | Network Performance |
|---|---|---|---|---|---|
| t2.nano | 1 | 3 | 0.5 | EBS-Only | Low |
| t2.micro | 1 | 6 | 1 | EBS-Only | Low to Moderate |
| t2.small | 1 | 12 | 2 | EBS-Only | Low to Moderate |
| t2.medium | 2 | 24 | 4 | EBS-Only | Low to Moderate |
| t2.large | 2 | 36 | 8 | EBS-Only | Low to Moderate |
| t2.xlarge | 4 | 54 | 16 | EBS-Only | Moderate |
| t2.2xlarge | 8 | 81 | 32 | EBS-Only | Moderate |

(Sherwin Gaddis) #3

Is Mem (GiB) the same as data storage?
Also, is that the same as the container size for data storage?
Thanks for the chart!

The chart tells me about the host and not the docker container.

(Ralf Lukner MD PhD) #4

Mem (GiB) is how much memory (dynamic, volatile RAM, not permanent storage) the EC2 instance has.

The database and its storage are a separate animal … you can choose how big you want that to be (they give you several options) when you set up your virtual instance.

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes – all while paying a low price for only what you provision.

Amazon EBS is designed for application workloads that benefit from fine tuning for performance, cost and capacity. Typical use cases include Big Data analytics engines (like the Hadoop/HDFS ecosystem and Amazon EMR clusters), relational and NoSQL databases (like Microsoft SQL Server and MySQL or Cassandra and MongoDB), stream and log processing applications (like Kafka and Splunk), and data warehousing applications (like Vertica and Teradata).

(Sherwin Gaddis) #5

Again, thank you for the host machine information.
This is all information about the host machine, which does not directly impact the docker container.

@MatthewVita @brady.miller

(Jerry P) #6

Hi Sherwin,
I guess my question to you is: why do you feel the need to use docker? I’ll catch flak for this, but I just never understood the fascination with docker for live sites.
Think about it: you are running an OS layer running another OS to run your application, with both sharing resources. Also, to do any maintenance on the application, it becomes a pain to move files in and out of the container. Even getting a terminal inside the container is a pain.
To me, docker was mainly meant to be a development tool. It is perfect for Brady’s demo farm or for quickly setting up versions of testing environments, but for running live sites it makes no sense to me. Your question is a case in point.
Against my advice I helped someone set up a large practice on AWS using docker, and from that experience we have seen constant instance upgrades with rising costs. More than anything, I found it to be just a hassle to work on.
If you insist on running on AWS, then why not just set up a T2 (sized for site traffic) Ubuntu machine and leave docker out of it? Sorry, docker fans, but you will never convince me that docker is appropriate in this use case.

(Sherwin Gaddis) #7


I share your view. However, like you, I get help requests. I really dug in this time to look into Docker, and it is good to a point, but as you stated, the box is a box. I am not fond of that either.

So, my questions come from trying to help out with an AWS micro setup that was brought to me.

My whole point of asking the question is to try to elicit a response pointing to documentation that I can’t find, so I can make a good decision on which docker container to choose.

Sometimes I post conversations not for me but for those that may be thinking like me but won’t write a post.


I received your point.

(Jerry P) #8

I think you missed my point. I’m saying just run a machine in the cloud without docker. I would like to know what you deem a plus with running docker.

(Brady Miller) #9


@sjpadgett , just give it time and docker will win you over :slight_smile:

@juggernautsei, on AWS, you can both upgrade the instance and increase the drive size (you just need to shut down the instance first).
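As a rough sketch of that flow with the AWS CLI (the instance and volume IDs below are placeholders, and this assumes credentials are already configured):

```shell
# Stop the instance before resizing (placeholder IDs throughout):
#   aws ec2 stop-instances --instance-ids i-0123456789abcdef0
#
# Grow the attached EBS volume, e.g. to 30 GiB:
#   aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 30
#
# Start the instance back up:
#   aws ec2 start-instances --instance-ids i-0123456789abcdef0

echo "After resizing, grow the partition and filesystem too (growpart/resize2fs)."
```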


(Brady Miller) #10

Also, I recommend backing up the AMI of the instance before doing this stuff.

(Jerry P) #11

Not running a production site it won’t. What exactly is the overriding benefit for running OpenEMR inside docker on an AWS machine?

(Sherwin Gaddis) #12

Brady, that is not the case. We increased the size of the host drive, but that did nothing for the docker container.

(Brady Miller) #13

hi @juggernautsei ,
What drive free space are you seeing when you run ‘df -h’ on the host?
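For reference, here is a quick way to grab the free-space figure that `df -h` reports (the `awk` field position assumes the POSIX single-line output that `-P` forces):

```shell
# Human-readable usage for the root filesystem:
df -Ph /

# Extract just the "Avail" column for scripting:
avail=$(df -Ph / | awk 'NR==2 {print $4}')
echo "Available on /: $avail"
```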

(Brady Miller) #14

hi @sjpadgett ,

For production purposes, one big benefit of the OpenEMR docker is that it allows us to offer an assortment of cloud packages that are all based on the same modular infrastructure. So rebuilding them for patches, new releases, etc. is much easier (much easier than, for example, the resources required to support the OpenEMR Ubuntu package).


(Asher Densmore-Lynn) #15

Hi, Jerry.

A dockerized OpenEMR is running in a known state, blindly reproducible on a thousand machines and configured per best practices. Anybody who pulls an image can expect it to run, anybody who tries to fix an image knows exactly what they’re going to find, and anybody who installs a second image as part of a recovery process will have the exact same image they had the first time, even if details of the host machines don’t precisely match up.

Docker brings versioned, testable reproducibility to software installations of all stripes, and we lean on that when we’re offering automated installs via Lightsail, the AWS Marketplace options, the virtualized appliance, and our solution in Google Cloud. Without Docker, most of those wouldn’t exist.

(Sherwin Gaddis) #16


This is a prologue: after writing all the stuff below, what I did not want to get lost is the fact that there is missing information. What size is each docker container? Right now I have to load each one and test it to see what it is, when I would be better served by having that information listed for me.

The information that I am trying to find is the capacity limits of the different docker containers. As an IT pro, I need to know, before I suggest or load up an image, what I am getting. All I have been trying to say is that this information is nowhere to be found. I will have to load each and every instance of Dockerized OpenEMR and test it to find out. That should not be.

I know you can increase the size of the instance by stopping and scaling. But again, there is no documentation on what size the docker container will be when that is done. AWS documents what you will be scaling the host machine to, but we don’t document what the docker container size is going to be once the scaling is over.

So, I have a client that has 30GB of patient data files from the documents folder that I need to import into the docker container. Which docker image do I use to hold that 30GB of data files?

That was the situation I was in. We stopped the instance and increased the hard drive/storage from 8GB to 30GB on the host machine, then tried to import the files and still got a “not enough space” error. So the host drive size increased, but the docker container did not scale; it stayed at 1.0GB.

When I run the `docker info` command, it does not return the desired information:

```
ubuntu@ip-172-31-25-253:~$ sudo docker info
Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 2
Server Version: 17.05.0-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 27
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
 Profile: default
Kernel Version: 4.4.0-1067-aws
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 990.7MiB
Name: ip-172-31-25-253
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Experimental: false
Insecure Registries:
Live Restore Enabled: false

WARNING: No swap limit support
```
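The `Docker Root Dir` line in that output is the key detail: with the `aufs` storage driver, every container’s writable layer lives under `/var/lib/docker` on the host, so a container can only grow once the filesystem holding that path has actually been resized. A couple of commands to check this (the `docker` calls assume a running daemon, so they are shown commented):

```shell
# Which filesystem backs Docker's data, and how full is it?
#   df -h /var/lib/docker
#
# Per-image / per-container / per-volume disk usage (Docker 1.25+):
#   docker system df
#
# On any Linux host, start by confirming what the root filesystem reports:
df -h /
```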

I get your point

(Sherwin Gaddis) #17

I do realize that part of this is a learning curve for us all.

This is the image I was looking for.

So right now, to find out this information, I would have to go through and load each docker image.

(Jerry P) #18

Thanks, all good to know.

(Asher Densmore-Lynn) #19

After you increased the EBS volume size, did you use resize2fs?
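For anyone following along, the usual sequence after growing an EBS volume is roughly the following. The device and partition names are assumptions (check yours with `lsblk`), and the grow commands need root, so they are shown commented:

```shell
# 1. Confirm the kernel sees the larger device and find its partition:
#      lsblk
#
# 2. Grow the partition to fill the device (growpart is in cloud-utils):
#      sudo growpart /dev/xvda 1
#
# 3. Grow the ext4 filesystem to fill the partition:
#      sudo resize2fs /dev/xvda1
#
# 4. Verify the new size is visible:
df -h /
```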

(Sherwin Gaddis) #20

@jesdynf, no, that was not done.
In all my installs on AWS where I increased Ubuntu or Windows drive capacity, I have never had to do that separately. I have resized drives on the fly and not had to go back and separately grow the partition. I will keep this in mind next time.

But still, everyone keeps talking about the host machine. The size of the host drive, to my knowledge (which is really limited), does not automatically increase the size of the docker container.

Please, somebody, talk about resizing the docker container and not the host machine.
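One way to sidestep container sizing entirely is to keep the documents directory on the host and bind-mount it into the container. The paths and image name below are assumptions for illustration, not a statement of the official OpenEMR layout:

```shell
# Hypothetical host directory on the (resizable) EBS-backed filesystem:
DOCS_DIR=/srv/openemr/documents

# Create it and bind-mount it over the container's documents path
# (image name and in-container path are assumptions):
#   sudo mkdir -p "$DOCS_DIR"
#   docker run -d --name openemr \
#     -v "$DOCS_DIR":/var/www/localhost/htdocs/openemr/sites/default/documents \
#     openemr/openemr

echo "Documents would live on the host at: $DOCS_DIR"
```

With a bind mount, document storage grows with the host volume, so resizing the EBS volume (plus `resize2fs`) immediately gives the container more room; nothing inside the container needs to be resized.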