Advice on new server build for OpenEMR

Hello,

I was wondering if anyone had some advice for us on hardware specs for a new server.

In short, we would like to move OpenEMR to a dedicated server running Linux. Are there any minimum requirements? Any recommendations for one office with about 10 active users (office staff, dentists, and lab techs) and a very large patient database?

More detailed information:

A friend of mine runs a dental office in the Netherlands. Up until now we have been running OpenEMR on a Windows 2012 R2 server that also hosts some other business applications. I have been slowly making modifications to fit OpenEMR to a dental practice (still not done yet); however, as we have added more patients and the practice has grown quite large, we have noticed a decrease in performance. This could be due to many factors, including the other applications running on the server. We have decided to build a dedicated OpenEMR server. I was leaning toward Ubuntu and spinning up OpenEMR from Docker. Can anyone give me advice on hardware? Specs? Any help/advice would be greatly appreciated.
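For reference, here is roughly the Docker layout I had in mind (a sketch only; the openemr/openemr image, its tag, the mounted paths, and the environment variable names are my assumptions from memory of the Docker Hub documentation, so they would need to be verified before use):

```
# Sketch only: check image tags, paths, and env vars against the official OpenEMR Docker docs.
docker network create openemr-net

# MariaDB container holding the OpenEMR database
docker run -d --name openemr-db --network openemr-net \
  -v /srv/openemr/db:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=change-me \
  mariadb:10.5

# OpenEMR container pointed at the database container
docker run -d --name openemr --network openemr-net \
  -p 80:80 -p 443:443 \
  -v /srv/openemr/sites:/var/www/localhost/htdocs/openemr/sites \
  -e MYSQL_HOST=openemr-db \
  -e MYSQL_ROOT_PASS=change-me \
  -e OE_USER=admin \
  -e OE_PASS=change-me \
  openemr/openemr:5.0.2
```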

Thank you for your time.

Julie

Hello and congratulations…

Although the Linux requirements are lower than Windows, don't go below an i5 or its equivalent, or less than 8 GB of RAM.
And use, at a minimum, a software RAID 1 configuration.
We've seen an average of 0.5 MB of storage per patient, so size accordingly (for example, 100,000 patients would come to roughly 50 GB).
Your mileage may vary…

Thank you very much for your reply.

This is what I was thinking of… RAID 1 on the normal hard drives, and the new M.2 NVMe SSD for good database read/write speed. I think this should give us some room to grow.

PCPartPicker Part List

Hi @Julie_b, it looks like your RAID 1 would be using the twin Toshiba drives? That would probably hurt performance, since SSDs are usually 10 to 20 times faster.

@stephenwaite
I was thinking about using those only for the backup that runs every 12 hours. The database and Docker would run on the new SSDs. Should I also do RAID on those?

You'll want RAID 1 on the database, which is usually accomplished with 2 separate disks, along the lines of the sketch below.
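A minimal sketch of how that might look with Linux software RAID (mdadm) on Ubuntu; the NVMe device names and mount point here are hypothetical, so substitute your own:

```
# Create a RAID 1 mirror from the two NVMe drives (device names are examples)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Put a filesystem on the array and mount it where the database/Docker data will live
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/openemr
sudo mount /dev/md0 /srv/openemr

# Persist the array and the mount across reboots
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /srv/openemr ext4 defaults 0 2' | sudo tee -a /etc/fstab
```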

Alright, I'll get two M.2s and do RAID on them. Thank you!

A few thoughts on your setup:

  1. Originally I had planned my setup around the AMD Ryzen 3000 series. That was back in the Nov / Dec time-frame of last year. I don't remember the exact combo of AMD CPU (with built-in graphics) / mobo part numbers at this point. I had no luck reliably running Ubuntu 18.04 server. After several failed tries I learned that the AMD drivers were not as refined and stable as one might wish. So after building your system on AMD technology, test its stability thoroughly. BTW, I switched to an Intel setup.

  2. In general, the biggest adversary of stable, around-the-clock computing is heat generated within the system enclosure, and your CPU (and often the video card) contributes to it to the largest extent. When building a departmental server (i.e. for a low number of users) that will have to accommodate a fairly small amount of traffic, I would go with a rather "under-powered" CPU. An 8-core, 4 GHz beast is overkill here. Even a lowly Celeron / i3 (on the Intel side) or Athlon / Ryzen 3 (on the AMD side) would suffice for Linux server duty. Removing 35 W of heat is fairly easy; it is not so easy to quietly cool a 100+ W space heater.

  3. In general, your specs are closer to what I would consider a mid- to higher-end gaming setup. I would tone down the gaming-oriented hardware and put the $1000 into a middle-of-the-road hardware-based RAID 5 subsystem. Maybe even self-encrypting drives…


Thanks @jerry for your response.

  1. I was concerned about this too. However, they say the problem is fixed. I will do a lot of testing, and if it doesn't live up to expectations it will be switched out for Intel. One of the things that swayed me toward AMD was the speed, and the ability to connect the M.2 SSD drives directly to the CPU with no middleman.
  2. I have never had any issues with heat, but I’ll keep an eye on it.
  3. I'm not sure what would be considered gaming-oriented hardware. The motherboard was chosen because of its IOMMU groups, which (from what I understand) allow faster data transfer, etc. Link for reference: http://vfio.blogspot.com/2014/08/iommu-groups-inside-and-out.html . Other than that, I'm not sure what you mean, sorry. Can you explain why RAID 5 over, say, RAID 1? Also, great idea about self-encrypting drives; I'll have to investigate that.

A. “Gaming Oriented”
The use scenario you described (10 users + a very large patient DB) could easily be handled by 10-year-old technology, something in the range of first-gen Intel Core or AMD A-series processors. Having newer chips, especially with a smaller die size, is great, as it lowers heat output at a given performance level. Still… what you have picked for a CPU is a 16-thread beast. Your OpenEMR will never require that much muscle. The same goes for 64 GB of memory, and for your storage subsystem's bells and whistles. The truth is that a typical LAMP installation needs only a fraction of what you are showing on your list.
With all that unused computing power, you might as well run bitcoin mining on the side.

The point I am trying to make would be underscored even more if you decided to run your server in CLI mode only (text only, no GUI / no windowing interface), or perhaps completely headless (connection to the server via SSH only; no local console, no monitor, not even a video card).
However, at least initially, it is still "comforting" to have access via a GUI…
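If you do go that route later, the switch is small. A minimal sketch for Ubuntu, assuming systemd and openssh-server are already installed:

```
# Boot to a text-only console instead of a graphical session
sudo systemctl set-default multi-user.target

# Make sure SSH is enabled so the box can be managed headless
sudo systemctl enable --now ssh

# To go back to the GUI later
sudo systemctl set-default graphical.target
```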

B. RAID Subsystem
On a production system like this you will want RAID for your peace of mind: if something goes wrong with one of the drives, you can recover from the remaining drive(s).
The way I am reading your hardware specs, you will employ BIOS RAID / software RAID. That's better than no RAID. But what happens if your motherboard goes? What happens if your OS goes?
With hardware-based RAID (using a discrete RAID card), if either of those things happens I can just move the whole setup to a new computer and have it running again within a reasonable amount of time. It may not be as easy with BIOS / software RAID.
Another thing I like about hardware-based RAID is that oftentimes I can rebuild a failed array without booting the OS, as many cards can do that from within their own firmware.
Lastly, the RAID 1 vs. RAID 5 issue: mostly preference here.
Bottom line: in your case, you will end up with 1 TB on RAIDed NVMes and 1 TB on RAIDed Toshibas. I can imagine that you will store scanned documents / imaging on the Toshibas; that 1 TB might be on the skimpy side…
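For comparison, this is roughly what recovering from a failed member looks like with Linux software RAID (a sketch with mdadm; the array and device names are hypothetical):

```
# Check array health
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# Mark the failed disk, remove it, and add the replacement (example device names)
sudo mdadm --manage /dev/md0 --fail /dev/nvme1n1
sudo mdadm --manage /dev/md0 --remove /dev/nvme1n1
sudo mdadm --manage /dev/md0 --add /dev/nvme2n1

# Watch the rebuild progress
watch cat /proc/mdstat
```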

Yeah, the consensus is that consumer-grade hardware, like gaming mobos, non-ECC memory, or consumer hard drives, is to be avoided in business or enterprise systems.

One thing I've learned over the years is that Linux distros are all lean systems without a lot of bloatware. That said, you want to make sure that you minimize points of failure…

Go with enterprise hard drives with a 5+ year warranty.
Go with ECC memory. Most server memory is of this type.
As far as RAID goes, you'd probably want RAID 5 or 10, especially for the database (see the sketch below).
Go with a RAID card that is actually hardware RAID. Many are still the software type.
With RAID 5 you need at least 3 hard drives to make the array work, and RAID 10 needs at least 4.
All the same size, speed, model, etc… The best thing I've found is that a hot spare can come in handy when you need your system to keep running and stay accessible in the event of degradation.
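If you end up doing it in software rather than with a hardware card, a RAID 5 array with a hot spare might look roughly like this (a sketch with mdadm; device names are hypothetical):

```
# Three active members plus one hot spare (device names are examples)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
  --spare-devices=1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# The spare is pulled in automatically if a member fails; check status with:
sudo mdadm --detail /dev/md0
```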

M.2 technology is interesting, especially if you can get enterprise M.2 SSD modules.
But you can probably save some money, and get more space, with traditional hard drives.

This is just a start. Man, there's a lot to think about.