Hi,
It would be interesting to upgrade jQuery on the patient summary page from 1.3.2 to 1.6.4 and see whether it improves the performance of the ajax requests (especially the issue of them all seeming to finish at the same time). Reading online, it sounds like jQuery's management of ajax calls has improved drastically since the older 1.3 versions.
-brady
1. I ran vmstat from the Linux command line every 3 seconds for a few minutes while I was using OpenEMR. This keeps track of whether swap memory is being used. I found that memory was not being swapped out to the swap partition.
2. I changed the clinical rules for diagnoses on the diabetes and hypertension sections to use only the very few typical diagnosis codes we actually use. I also deleted all but the exact match for the pneumococcal and influenza vaccine codes we actually use. With this, performance is still slow, but not terrible.
3. I set one of the Internet Explorer browsers to display a notification about every script error. Here is a typical display I get:
Webpage error details
User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
Timestamp: Thu, 29 Sep 2011 12:51:12 UTC
Check out the following wiki page section for the MySQL indexes to add. (There are about 10 of them, and they appear to drastically speed things up; note that upgraders to 4.0 should make sure to also add the one on form_encounters): http://www.open-emr.org/wiki/index.php/CDR_Performace#MySQL_Optimization
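If you haven't added an index before, each one on that wiki page is just a single MySQL statement run from the mysql client or phpMyAdmin. As a generic illustration only (the table, index, and column names here are placeholders, so use the exact statements from the wiki page):

ALTER TABLE some_table ADD INDEX some_index (some_column);

Adding an index does not change any of the data; it only builds a lookup structure the database can use to avoid scanning the whole table.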
I am not quite sure how to add indexes to tables, but I am interested in getting the spinning circles to go away. If this works in testing, could it possibly become part of a patch?
Up until now, the standard patches have not been able to modify the database. I think it’s time to change this for several reasons:
1. Have a mechanism for laypeople to add these keys
2. Allow the addition of new rows/tables to OpenEMR 4.1 (after good testing, of course), since the certification is for this version. (I don't know the answer here, so perhaps somebody can weigh in: will a future version 4.2 need to be recertified?)
In order to get the new keys in, another function needs to be built into the sql_upgrade.php script:
1. Check whether a key exists
2. If the key doesn't exist, add it
Once we have this mechanism, we can make a script that can be used in a patch mechanism to deal with database upgrades (the first being the addition of the keys).
Do any developers want to take a stab at adding the mechanism to the sql_upgrade.php script? I'm guessing it will involve the MySQL SHOW INDEX and CREATE INDEX commands.
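As a rough sketch of the two statements such a check would wrap (the table, key, and column names below are placeholders only):

SHOW INDEX FROM some_table;
CREATE INDEX some_key ON some_table (some_column);

The new function would read the Key_name column of the SHOW INDEX result and only issue the CREATE INDEX when the key is not already listed, so re-running the upgrade script stays harmless.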
Adding the ten indexes sped things up considerably, but it was still a problem, so we got a new $1300 server with 8 GB of memory and a really fast CPU. We got one with a heavy-duty battery backup system and four hard drives. (We put the first three into a RAID configuration and saved the last one for conventional use.) Now things are so fast we don't notice any delay at all.
As a matter of practicality, I don’t think the memory was a problem, as I was able to show before that we were not swapping to and from disk. Also, the RAID drive is just icing on the cake - the real difference was in picking a CPU of 2-4 GHz rather than 300 MHz or so.
MySQL by default may not be configured to optimally use your available memory. Although it wasn’t swapping, it might be hitting the hard disk to read data when it could just read from memory instead.
I think you are right that the memory wasn’t the fundamental problem, but I think you may be able to better take advantage of the memory you’ve got.
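One quick way to see how much memory MySQL is actually allowed to use is to look at its buffer settings, for example:

SHOW VARIABLES LIKE '%buffer%';

On many installs key_buffer_size (used by MyISAM) and innodb_buffer_pool_size (used by InnoDB) are left quite small by default; raising the one that matches your table types in my.cnf lets MySQL keep far more data in memory instead of rereading it from disk. The right values depend on how much RAM you can spare, so treat this as a starting point rather than a recommendation.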
Ever since I upgraded my people to 4.0, the calendar has gotten slower and slower. I have read through this post and done everything in this posting and then some, except the Firebug part; I have not understood how to install it yet. I have set up eAccelerator for PHP, indexed all the tables, installed the latest patch 5, and updated the database. It still takes up to a full 20-30 seconds to load the calendar at every change. I am getting screamed at to make something happen faster, and I don't know what to do. I have 4 GB of RAM in the system and 90 GB of free space on the hard drive. It is a Windows system running Server 2003. I am perplexed as to how to make it go any faster.
Sherwin,
I suspect the bottleneck is the size of your openemr_postcalendar_* tables. A stopgap may be to delete historical records. However, given that amount of time, I'll bet the database is table scanning rather than using the indices.
The best way to figure out the exact problem is to profile the code, as I've mentioned in a couple of other threads. In addition, if you can look at the query plans for the calendar activity, you can see whether it is indeed table scanning as I suspect, or something else.
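For the query plan part, prefixing one of the calendar queries with EXPLAIN will show whether MySQL is scanning the whole table. The query below is only an illustration (the real calendar queries and column names will differ):

EXPLAIN SELECT * FROM openemr_postcalendar_events WHERE pc_eventDate BETWEEN '2011-09-01' AND '2011-09-30';

If the type column in the EXPLAIN output says ALL and the key column is NULL, the table is being read row by row; with a usable index you would expect type to be range or ref instead.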
-Kevin Yeh kevin.y@integralemr.com
Thanks for the reply.
I am trying to grasp what you are saying and get on board. The openemr_postcalendar_events table has 6,588 records in it. Is this what you suspect is slowing down the loading of the calendar? The rest of the postcalendar tables have nothing in them, so this one alone has the most data.
You mentioned “profile the code”.
Could you point me to the thread where this is discussed? Is that what FireBug is used for?
Should I delete the records that are over a year old in the postcalendar table?
Hmm… 6000 entries doesn’t really seem like that many, but if my suspicion is correct that the database is table scanning instead of using a good index, then the performance is going to be related to the size of that table, so the more rows you can delete, the faster I think it will run.
Firebug can give you info on the “browser side” about how long things take. Profiling with xdebug breaks down the code running on the server side and shows you how long each operation takes.
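If it helps, with xdebug 2.x profiling is typically switched on with a couple of php.ini settings along these lines (the output directory is just an example):

xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp

Each request then writes a cachegrind.out.* file to that directory, which can be opened in a viewer such as KCachegrind or WinCacheGrind to see where the server-side time is going.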
I took some time to work on this issue again. I finally got Firebug loaded into Firefox, ran the profiler on loading the patient demographics page, and got this report: