Update Your Amazon RDS SSL/TLS Certificates by February 5, 2020

Hello,

I’m pretty sure the fix in openemr/openemr-devops@9ea0a9f (“fix Standard MySQL cert error” on GitHub) picks up those changes. If you’re using a version of Standard that predates it and you need to pull in a certificate file manually, that patch will show you where to put it. See https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html for more information about where to get a valid certificate file.

Hello @jesdynf,

So if I understand you correctly, all Standard (AWS) “unpatched” versions of OpenEMR 5.0.2 or earlier will stop communicating with their database on or soon after February 5, 2020!

Maybe I’m easily impressed, but it seems to me that this is a critical repair of the highest possible degree: all production versions of OpenEMR should be upgraded/patched to 5.0.2(1) (or later, once more patches are released). Correct me if I’m wrong, but it is my understanding that without the ability to communicate with the database, OpenEMR Standard (AWS) is non-functional.

Also, I do not see the “cert error” fix explicitly listed in the description of the 5.0.2 patch. Could someone a lot more knowledgeable than I am about what is in the releases/patches confirm that the “cert error” fix is in 5.0.2(1)?

Thank you,
–Ralf

I mean, you’re not wrong, but it’s also not life-or-death panic time; keeping up with a moving target is just life in the cloud. I’m expecting a new version of OpenEMR to come out this week or next; I’ll be building new GCP and Marketplace packages and can guarantee they’ll be ready. But making this fix on your own system? Pretty simple.

  1. Grab the cert file with curl from the AWS document page I linked.
  2. Copy the container’s existing /var/www/localhost/htdocs/openemr/sites/default/documents/certificates/mysql-ca somewhere safe.
  3. Replace the container’s cert file with the new one (and chmod it readable and chown it to the local context.)
  4. Restart the container to pick it up.

Alternately, the text of the patch will give you a hint about how to do it without connecting to the container at all by manipulating the volume manually; be sure to chmod the file properly if you do.
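For anyone who wants a concrete starting point for those steps, here’s a rough sketch of the in-container approach. It assumes the container is named standard_openemr_1 (as it is later in this thread) and uses the combined CA bundle URL from the AWS docs; the uid-1000 ownership matches what the stock file uses, but verify the name, path, and permissions against your own system before running any of this.

certdir=/var/www/localhost/htdocs/openemr/sites/default/documents/certificates
# 1. Grab the new combined CA bundle from AWS
curl -sS https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem -o /tmp/rds-ca.pem
# 2. Keep a copy of the container's existing certificate somewhere safe
sudo docker exec standard_openemr_1 cp $certdir/mysql-ca $certdir/mysql-ca.old
# 3. Replace the container's cert file and restore the expected ownership/permissions
sudo docker cp /tmp/rds-ca.pem standard_openemr_1:$certdir/mysql-ca
sudo docker exec standard_openemr_1 chown 1000 $certdir/mysql-ca
sudo docker exec standard_openemr_1 chmod 700 $certdir/mysql-ca
# 4. Restart the container to pick it up
sudo docker restart standard_openemr_1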

@jesdynf,

Thank you! That’s an easy fix. I’ll implement it this weekend when the system is in lighter use.

Sure. Over these past 2 years that I have been using OpenEMR, upgrades/patches have always been security enhancements, new features, repairs of less impactful issues, etc. … never something where skipping the patch would leave OpenEMR (AWS) completely unable to work. I do not recall even one case where I had to upgrade or patch or else my version of OpenEMR would stop working completely, but it sounds like I was spoiled :joy:.

Thank you again!
–Ralf

@jesdynf & @brady.miller
I have applied the 5.0.2 patch 1 … and restarted my AWS EC2 instance.
How do I know if OpenEMR is using the “new” certificates?
I suspect that my newly patched 5.0.2(1) instance is not using a “new” certificate, because I can still connect to the database with OpenEMR. If there were some kind of “new” certificate for the SSL/TLS encryption, the “old” database, still using the “old” certificate, should reject a connection from OpenEMR using the “new” certificate … correct?

Do I still need to “manually” copy (or “grab” with “curl” somehow) or create a “new” certificate and implement it into OpenEMR somehow? How would I have a secure connection to the mysql database with the “copied” certificate (is there something private that is not copied)?
–Ralf

I’m actually not 100% sure the new cert wouldn’t work with the old server. If you want to check, I’d just compare the new certificate to the one you have installed.

As for how you have a secure connection, this is a public SSL certificate issued by Amazon, capable of authenticating a connection to a holder of a key issued by Amazon.
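One quick way to do that comparison, assuming you pull down the current bundle and that your installed mysql-ca lives at the volume path shown later in this thread (yours may differ):

# Download the current combined RDS CA bundle for comparison
curl -sS https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem -o /tmp/rds-ca.pem
# If the checksums match, the installed mysql-ca is already the new bundle
sudo sha256sum /tmp/rds-ca.pem /mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca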

I have figured out that patching 5.0.2 with patch 1 (the one and only at this point) does NOT update the certificate file mysql-ca. The certificate file that is there is from 2018 … it is not new and was not changed by applying the patch. It’s also not the same size as the new certificate file. Thus, the one and only patch for 5.0.2 at this point does not correct the issue.

sudo ls -l /mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca
-rwx------ 1 ubuntu root 21672 Jun 3 2018 /mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca

@jesdynf,
I created the following update-rds-ca.sh script to help update the SSL/TLS certificate based on the ami-configure.sh script in the master branch and your helpful advice above:

(NOTE: THIS PROCEDURE WILL DISCONNECT OPENEMR STANDARD on AWS FROM THE MySQL DATABASE (hopefully only temporarily) … make sure you are either using a test system or have backup images of your instance and database before starting this procedure)

#!/bin/sh
# update-rds-ca.sh: replace OpenEMR's mysql-ca with the 2019 combined RDS CA bundle
# Certificates directory on the Docker volume used by OpenEMR Standard
mydir=/mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/
# Keep a copy of the current certificate in case a rollback is needed
cp -i ${mydir}mysql-ca ${mydir}mysql-ca.old
# Download the combined RDS CA bundle over the old certificate file
curl -sS "https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem" > ${mydir}mysql-ca
# Restore the ownership the container expects (uid 1000)
chown 1000 ${mydir}mysql-ca

I run the above script with the following …

sudo bash ./update-rds-ca.sh

Then I restart the docker with the following (standard_openemr_1 references my docker) …

sudo docker restart standard_openemr_1
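If anything looks off after the restart, the container logs are the first place to check (standard Docker commands; the container name is the same one used above):

# Tail the OpenEMR container's logs to confirm it came back up cleanly
sudo docker logs --tail 100 -f standard_openemr_1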

After the docker restarts, if I try to use OpenEMR STANDARD, I get the following error in the web browser:

Check that you can ping the server xxxxxxxx.yyyyyyyyyy.rds.amazonaws.com.

Thus, I now suspect that the new certificate file has been loaded, and OpenEMR STANDARD cannot communicate with the database, which is still using the old certificate.

Now I go into AWS RDS to update the certificate on the database instance.


I choose “Update now” since I want to correct the issue immediately and return to a working OpenEMR system.

I then check the "I understand … " box that pops up and then apply the change. Status changes to “Pending” while the change is being applied.
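(If you prefer the CLI to the console, something like the following should do the same thing; this is just a sketch, and the DB instance identifier is a placeholder you would replace with your own.)

# Apply the rds-ca-2019 certificate to the RDS instance immediately
aws rds modify-db-instance \
    --db-instance-identifier openemr-db \
    --ca-certificate-identifier rds-ca-2019 \
    --apply-immediately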

After this, make sure you can log back into OpenEMR (worked for me). Thank you @jesdynf for helping guide me through this!!
–Ralf

I just tried to do all of this with my OpenEMR Standard install, and while the Amazon side of the cert upgrade worked, I don’t think the OpenEMR side did, as I cannot access OpenEMR via browser. Is there a way to check if I did it properly? I followed your instructions.

To piggyback on @hcrossen’s post: it seems like the OpenEMR Docker container is running out of memory on restart? I can SSH into the EC2 instance for a brief window after stopping and starting it again, but after that both SSH and the website become unresponsive, even though the EC2 dashboard still shows the instance as reachable. Is there a way to turn off the auto-start of the Docker container somehow at the instance level? ETA: the CPU utilization seems to hover at around 20% now after this … what on earth is happening inside the Docker container?

Hi, Henry.

There are two possibilities here, as I see it.

  • There’s a problem with the certificate such that OpenEMR can’t find it or use it (bad permissions, bad contents). Open the new certificate file (with less) to make sure that it looks reasonable, and verify that the permissions and ownership are a perfect match; a couple of quick checks are sketched after this list.

  • There’s a problem with the MySQL server such that it’s not willing to use the new certificate. Have you performed the certificate upgrade on your MySQL instance?
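For the first possibility, here are quick checks along those lines, using the volume path from Ralf’s post above (adjust if your volume name differs):

# Eyeball the bundle: it should be a series of PEM "BEGIN CERTIFICATE" blocks
sudo less /mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca
# Confirm permissions and ownership match the original file
sudo ls -l /mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/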

Hello,

This sounds like a question that needs its own forum post, unless you believe this has to do with the MySQL certificate upgrade from Amazon RDS.

  1. What is the time stamp on your mysql-ca file?
  2. When you go into Amazon RDS, does it show that the database certificate needs to be updated?
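For that second question, the same information can also be pulled from the AWS CLI if that’s handier than the console (a sketch; CACertificateIdentifier is the standard RDS field for this):

# List each RDS instance and the CA certificate it is currently using
aws rds describe-db-instances \
    --query 'DBInstances[].[DBInstanceIdentifier,CACertificateIdentifier]' \
    --output table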

Hi, I’ve been helping @hcrossen out with his AWS setup. The new certificate is larger than the older one and seems to have more data in it.

Since doing this, starting the Docker container leads to system unusability … where would I look for logs to see what is going on? Alternately, should I just revert to using the backed-up file to see if that fixes the Docker container seemingly locking up?

Try reverting everything back to a working 5.0.2. Make sure you applied patch 1 for 5.0.2. You can revert the database certificate from the 2019 version back to the 2015 version with a database modify from the console. Make sure everything works (log in to OpenEMR).

Then use my script/procedure above to update the EC2 instance (OpenEMR). At this point, you should not be able to log into OpenEMR. Then update the database instance certificate from the AWS console.

Now everything should work (verify that you can log into OpenEMR).

Also, the permissions on your mysql-ca file don’t look like mine … mine are …

-rwx------ 1 ubuntu root 65484 Jan 27 01:33 mysql-ca
-rwx------ 1 root   root 21672 Jan 25 22:18 mysql-ca.old
-rwx------ 1 ubuntu root  1344 Apr 28  2018 README.md
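If yours don’t match, something like this should bring them in line (same volume path as my script; uid 1000 shows up as “ubuntu” on the host):

sudo chown 1000:root /mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca
sudo chmod 700 /mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca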

—Ralf

I ended up reverting the changes but we’re still having issues. Now attempting the Automated Recovery thing through new stack creation. I tried copying the old mysql-ca back in place and doing the chmods specified, but we’re still getting the memory/CPU spikes. I’m a wee bit out of my depth here, so I’m hoping the Recovery thing works.

Oh, we were also on 5.0.1-3…crap.

Oh my. For that you need to apply the latest patches for 5.0.1 first, then upgrade to 5.0.2, then apply its patch.

Usually before I do anything risky, I create images of my EC2 instance and a snapshot of my database. By default the system probably creates a few snapshots of your database as backup … if set up correctly. You can always restore these images.
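If you want to script those backups, this is roughly what it looks like from the AWS CLI (just a sketch; the instance ID, image name, and DB identifiers are placeholders):

# Create an AMI of the EC2 instance before making risky changes
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "openemr-pre-cert-update" --reboot
# Take a manual snapshot of the RDS database
aws rds create-db-snapshot --db-instance-identifier openemr-db --db-snapshot-identifier openemr-pre-cert-update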

Ralf

I’m curious what will happen with this stack creation that’s using the 5.0.2 recovery script against our 5.0.1-3 data. Unknown?

Unless the system is in a working state to begin with, I don’t know how well the recovery will work (crosses fingers). I haven’t had much luck with the recovery feature in the past, but that was a while back (circa 5.0.0 or so). I always use the AWS images and database snapshots as backups because that way I know the instance (EC2 or database) is fully and properly backed up as a working entity — very robust.

If you are restoring an image or snapshot (“stack”) … it should work very well.

Ralf
