Update Your Amazon RDS SSL/TLS Certificates by February 5, 2020

I received the following message from Amazon (AWS). What does this update involve with respect to OpenEMR Standard Edition?
Thank you,

Important Reminder: Update Your Amazon RDS SSL/TLS Certificates by February 5, 2020 [AWS Account: ZZZZZZZZZZZZ]

Amazon Web Services, Inc.

Dec 11, 2019, 2:46 AM

We previously sent a communication in early October to update your RDS SSL/TLS certificates by October 31, 2019. We have extended the dates and now request that you act before February 5, 2020 to avoid interruption of your applications that use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) to connect to your RDS and Aurora database instances. Note that this new date is only 4 weeks before the actual Certificate Authority (CA) expiration on March 5, 2020. Because our own deployments, testing, and scanning to validate all RDS instances are ready for the expiry must take place during the final 4 weeks, the February 5th date cannot be further extended.

You are receiving this message because you have an Amazon RDS database instance(s) that requires action in the XXXXXXXX Region, listed at the end of the email.

To protect your communications with RDS database instances, a CA generates time-bound certificates that are checked by your client applications that connect via SSL/TLS to authenticate RDS databases before exchanging information. AWS renews the CA and creates new root certificates every five years to ensure RDS customer connections are properly protected for years to come.

The current CA expires on March 5, 2020, requiring updates to client applications and database instances that have certificates referencing the current CA. Client applications must add new CA certificates (root and intermediate where necessary) to their trust stores, and RDS database instances must separately use new server certificates before this hard expiration date. However, we strongly recommend you complete these changes before February 5, 2020. After February 5, 2020, we will begin scheduling certificate rotations for your RDS database instances prior to the March 5, 2020 deadline. The automatic update(s) will be scheduled within your maintenance window.

Additionally, any new RDS database instances created after January 14, 2020 (previously November 1, 2019) will default to using the new certificates. If your client applications have not been updated to add the new certificates to their trust stores, these applications will fail to connect to any new instances created after this date. If you wish to temporarily modify new instances to use the old certificates, you can do so using the AWS console, the RDS API, and the AWS CLI. Any instances created prior to January 14, 2020 will have the old certificates until you update them to the rds-ca-2019 version.

If your applications connect to RDS database instances using the SSL/TLS protocol, please follow the detailed instructions in the links below. Based on your feedback, we have provided, per database engine, further instructions on 1.) how to determine whether your client applications are connecting to your RDS databases via SSL/TLS and 2.) how to update your client application trust stores to include the new CA certificates.

If your applications do not use SSL/TLS to connect, there are no required actions that you need to take. However, using SSL/TLS is a security best practice so we recommend all customers perform this upgrade so that your applications can start using SSL seamlessly. Before March 5, 2020, RDS will schedule and perform pending maintenance actions which you can view in the RDS console to ensure you have valid certificates after the current certificates expire. The automatic update(s) will be scheduled within your maintenance window.

For RDS:
For Aurora:

We encourage you to test these steps in a development or staging environment before implementing them in your production environments. If not completed, your applications using SSL/TLS will fail to connect to your existing database instances as soon as RDS rotates your certificates on the database side prior to March 5, 2020.

  • Bump
    Does the AWS version(s) of OpenEMR use Amazon RDS SSL/TLS certificates that need to be updated to avoid interruption in service?

Hi @Ralf_Lukner, pretty sure @jesdynf is looking into this.

I’m pretty sure the “fix Standard MySQL cert error” commit (openemr/openemr-devops@9ea0a9f on GitHub) picks up those changes. If you’re using a version of Standard that predates it and you need to pull in a certificate file manually, that patch will show you where you need to put it. See the AWS documentation for more information about where to get a valid certificate file.

Hello @jesdynf,

So if I understand you correctly, all Standard (AWS) “unpatched” versions of OpenEMR 5.0.2 or earlier will stop communicating with their database on or soon after February 5, 2020!

Maybe I’m easily impressed, but this seems to me to be a critical repair of the highest possible degree: all production versions of OpenEMR should be upgraded/patched to 5.0.2(1) (or later, once more patches are released). Correct me if I’m wrong, but it is my understanding that without the ability to communicate with the database, OpenEMR Standard (AWS) is non-functional.

Also, I do not see the “cert error” fix explicitly listed in the description of the 5.0.2 patch. Could someone a lot more knowledgeable about what is in the releases/patches than me confirm that the “cert error” patch is in 5.0.2(1)?

Thank you,

I mean, you’re not wrong, but it’s also not life-or-death panic time; keeping up with a moving target is just life in the cloud. I’m expecting a new version of OpenEMR to come out this week or next, and will be building new GCP and Marketplace packages, and will guarantee that they’ll be ready, but making this fix on your own system? Pretty simple.

  1. Grab the cert file with curl from the AWS document page I linked.
  2. Copy the container’s existing /var/www/localhost/htdocs/openemr/sites/default/documents/certificates/mysql-ca somewhere safe.
  3. Replace the container’s cert file with the new one (and chmod it readable and chown it to the local context.)
  4. Restart the container to pick it up.

Alternately, the text of the patch will give you a hint about how to do it without connecting to the container at all by manipulating the volume manually; be sure to chmod the file properly if you do.
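For reference, the four steps above might look like the following shell session. The container name and in-container certificate path come from elsewhere in this thread, and the bundle URL is the one AWS documented for the 2019 CA update at the time; treat this as a sketch and verify each value against your own deployment.

```shell
# Assumed container name and certificate path (from elsewhere in this thread).
CONTAINER=standard_openemr_1
CERT=/var/www/localhost/htdocs/openemr/sites/default/documents/certificates/mysql-ca

# 1. Grab the combined CA bundle from AWS.
curl -sS https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem -o mysql-ca.new

# 2. Copy the container's existing certificate somewhere safe.
docker cp "$CONTAINER:$CERT" ./mysql-ca.backup

# 3. Replace it with the new bundle, then restore ownership and permissions
#    to match the original file (uid 1000, mode 700, as shown in this thread).
docker cp ./mysql-ca.new "$CONTAINER:$CERT"
docker exec "$CONTAINER" chown 1000 "$CERT"
docker exec "$CONTAINER" chmod 700 "$CERT"

# 4. Restart the container to pick up the new certificate.
docker restart "$CONTAINER"
```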

Thank you! That’s an easy fix. I’ll implement it this weekend when the system is in lighter use.

Sure. Over the past two years that I have been using OpenEMR, upgrades and patches were always security enhancements, new features, or repairs of less impactful issues; none of them involved OpenEMR AWS completely stopping working. I do not recall even one case where I had to upgrade or patch or my version of OpenEMR would stop working entirely, but it sounds like I was spoiled :joy:.

Thank you again!

@jesdynf & @brady.miller
I have applied the 5.0.2 patch 1 … and restarted my AWS EC2 instance.
How do I know if OpenEMR is using the “new” certificates?
I suspect that my newly patched 5.0.2(1) instance is not using a “new” certificate, because I can still connect to the database from OpenEMR. If there were some kind of “new” certificate in play with SSL/TLS encryption, the “old” database, still using the “old” certificate, should reject a connection from an OpenEMR using the “new” certificate … correct?

Do I still need to “manually” copy (or “grab” with “curl” somehow) or create a “new” certificate and implement it into OpenEMR somehow? How would I have a secure connection to the mysql database with the “copied” certificate (is there something private that is not copied)?

I’m actually not 100% sure the new cert wouldn’t work with the old server. If you want to check, I’d just compare the new certificate to the one you have installed.

As for how you have a secure connection, this is a public SSL certificate issued by Amazon, capable of authenticating a connection to a holder of a key issued by Amazon.
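If you do want to compare them, openssl can print the issuer and expiry of the certificate on disk. The path below is the host volume path used elsewhere in this thread; note that a CA bundle may contain several certificates, and this prints only the first.

```shell
CERT=/mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca

# Print the issuer and expiry of the first certificate in the file.
# Per the AWS notice above, the old CA expires March 5, 2020; the new
# rds-ca-2019 bundle should show a later notAfter date.
sudo openssl x509 -in "$CERT" -noout -issuer -enddate
```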

I have figured out that patching 5.0.2 with patch 1 (the one and only at this point) does NOT update the certificate file mysql-ca. The certificate file that is there is from 2018; it is not new and was not changed by applying the patch. It is also not the same size as the new certificate file. Thus, the one and only patch for 5.0.2 at this point does not correct the issue.

sudo ls -l /mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca
-rwx------ 1 ubuntu root 21672 Jun 3 2018 /mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca

I created the following script to help update the SSL/TLS certificate based on the script in the master branch and your helpful advice above:

(NOTE: THIS PROCEDURE WILL DISCONNECT OPENEMR STANDARD on AWS FROM THE MySQL DATABASE (hopefully only temporarily) … make sure you are either using a test system or have backup images of your instance, database before starting this procedure)

# mydir is the certificates directory on the sitevolume, e.g.
# mydir=/mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/
cp -i ${mydir}mysql-ca ${mydir}mysql-ca.old
curl -sS "" > ${mydir}mysql-ca   # CA-bundle URL omitted in the original post
chown 1000 ${mydir}mysql-ca

I run the above script with the following …

sudo bash ./

Then I restart the docker with the following (standard_openemr_1 references my docker) …

sudo docker restart standard_openemr_1

After the docker restarts, if I try to use OpenEMR STANDARD, I get the following error in the web browser:

Check that you can ping the server

Thus, I now suspect that the new certificate file has been loaded, and OpenEMR STANDARD cannot communicate with the database, which is still using the old certificate.

Now I go into AWS RDS to update the certificate on the database instance.

I choose “Update now” since I want to correct the issue immediately and return to a working OpenEMR system.

I then check the "I understand … " box that pops up and then apply the change. Status changes to “Pending” while the change is being applied.

After this, make sure you can log back into OpenEMR (worked for me). Thank you @jesdynf for helping guide me through this!!


I just tried to do all of this with my OpenEMR Standard install, and while the Amazon side of the cert upgrade worked, I don’t think the OpenEMR side did, as I cannot access OpenEMR via browser. Is there a way to check whether I did it properly? I followed your instructions.

To piggyback on @hcrossen’s post: it seems like the OpenEMR docker container is running out of memory on restart? I can SSH into the EC2 instance for a brief window after stopping and starting it again, but after that both SSH and the website become unresponsive, even though the EC2 dashboard still shows the instance as reachable. Is there a way to turn off the auto-start of the docker container at the instance level? ETA: the CPU utilization seems to hover at around 20% now after this … what on earth is happening inside the Docker container?

Hi, Henry.

There are two possibilities here, as I see it.

  • There’s a problem with the certificate such that OpenEMR can’t find it or use it (bad permissions, bad contents). Open the new certificate file (with less) to make sure that it looks reasonable, and verify that the permissions and ownership are a perfect match.

  • There’s a problem with the MySQL server such that it’s not willing to use the new certificate. Have you performed the certificate upgrade on your MySQL instance?
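Both checks can be scripted; here’s a sketch using the host volume path from earlier in this thread (adjust the path to your own setup):

```shell
CERT=/mnt/docker/volumes/standard_sitevolume/_data/default/documents/certificates/mysql-ca

# Check permissions and ownership against a known-good file
# (the working example in this thread shows -rwx------ owned by uid 1000).
sudo ls -l "$CERT"

# Check that the file parses as a PEM certificate at all.
sudo openssl x509 -in "$CERT" -noout && echo "certificate parses OK"
```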


This sounds like a question that needs its own forum post, unless you believe this has to do with the MySQL certificate upgrade from Amazon RDS.
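In the meantime, the standard Docker diagnostics may help narrow it down; nothing here is OpenEMR-specific, and the container name is assumed from earlier in the thread:

```shell
CONTAINER=standard_openemr_1

# Live CPU and memory usage for every running container.
docker stats --no-stream

# The last 200 lines of the container's own log output.
docker logs --tail 200 "$CONTAINER"

# Stop the container from auto-starting with the instance
# by changing its restart policy.
docker update --restart=no "$CONTAINER"
```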

  1. What is the time stamp on your mysql-ca file?
  2. When you go into Amazon RDS, does it show that the database certificate needs to be updated?
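If you have the AWS CLI handy, the second question can also be answered from the command line; this lists each RDS instance with the CA it is currently using (no OpenEMR-specific assumptions here):

```shell
# Shows rds-ca-2015 (old) or rds-ca-2019 (new) for each instance.
aws rds describe-db-instances \
  --query 'DBInstances[].[DBInstanceIdentifier,CACertificateIdentifier]' \
  --output table
```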

Hi, I’ve been helping @hcrossen out with his AWS setup. The new certificate is larger than the older one and seems to have more data in it.

Since doing this, starting the docker container makes the system unusable … where would I look for logs to see what is going on? Alternately, should I just revert to the backed-up file to see if that fixes the docker seemingly locking up?

Try reverting to 5.0.2. Make sure you applied patch 1 for 5.0.2. You can revert the database certificate from the 2019 version back to the 2015 version with a database modify from the console. Make sure everything works (log in to OpenEMR).

Then use my script/procedure to update the EC2 instance (OpenEMR). At this point, you should not be able to log into OpenEMR. Then update the database instance certificate from the AWS console.

Now everything should work (verify that you can log into OpenEMR).
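For reference, the console “modify” in this procedure also has an AWS CLI equivalent (the instance identifier below is a placeholder):

```shell
# Switch the RDS instance to the new CA immediately rather than
# waiting for the maintenance window; use rds-ca-2015 to revert.
aws rds modify-db-instance \
  --db-instance-identifier my-openemr-db \
  --ca-certificate-identifier rds-ca-2019 \
  --apply-immediately
```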

Also, the permissions on your mysql-ca file don’t look like mine … mine are …

-rwx------ 1 ubuntu root 65484 Jan 27 01:33 mysql-ca
-rwx------ 1 root   root 21672 Jan 25 22:18 mysql-ca.old
-rwx------ 1 ubuntu root  1344 Apr 28  2018



I ended up reverting the changes but we’re still having issues. Now attempting the Automated Recovery thing through new stack creation. I tried copying the old mysql-ca one back in place, and doing the chmods specified, but we’re still getting the memory/cpu spikes. I’m a wee bit out of my depth here, so hoping the Recovery thing works.


Oh, we were also on 5.0.1-3…crap.
