At our site, we're experiencing a 12-second pause on the login page load. The problem stems from the feature that shows the last login date and time for a given user on a given computer. Specifically, this query is causing the slowdown:
$query = "select user, date, comments from log where event like 'login' and comments like '%".$ip."' order by date desc limit 1";
$statement = sqlStatement($query);
and is located in openemr/interface/login/login.php. Our log table currently has 23,253,377 records. Is there something the developers recommend to correct this extra load time? Also, with so many records being generated, is there a strategy in place for log maintenance?
I think Brady can answer with a bit more certainty, but I believe that SQL statement was added to prevent excessive login attempts from a single machine. As the log table grows, the 'like' comparison will become slower and slower, which is exactly what you're seeing.
In your specific case, you could remove that query, since you know where to find it.
In a more general sense, we ought to come up with a different solution to prevent excessive login attempts from a single IP address.
One idea is to store invalid login attempts in the SESSION variable. After a defined number of attempts, refuse access to the login screen until some future time, or until the user closes and reopens the browser or clears their cookie cache.
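A minimal sketch of that idea in PHP; the counter name, the limit of five attempts, and the message text are all illustrative assumptions, not existing OpenEMR code:

session_start();
// Track failed attempts in the session; note the known weakness that
// clearing cookies or restarting the browser starts a fresh session.
if (!isset($_SESSION['login_failures'])) {
    $_SESSION['login_failures'] = 0;
}
// Before rendering the login form, refuse access once the limit is hit.
if ($_SESSION['login_failures'] >= 5) {
    die("Too many failed login attempts; please try again later.");
}
// ...and wherever login.php detects a failed authentication:
$_SESSION['login_failures']++;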
hey,
I think Mark put that there. It would be nice to have several options here. For now, control it with globals.php, but some day these options will be placed into an admin->ConfigLogin section.
Having the following login-checking options would probably be good (a configuration sketch follows the list):
-nothing (no checks)
-warning if the previous login was bad (like Mark's current code; we'd probably need to optimize this to avoid the long query)
-mechanism to lock out an IP after so many bad logins (like Jason's proposal)
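A hypothetical globals.php entry for those three modes; the variable name and values are assumptions for illustration, not existing settings:

// 0 = no checks
// 1 = warn if the previous login from this machine failed (current behavior)
// 2 = lock out an IP address after repeated bad logins
$GLOBALS['login_check_mode'] = 1;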
The log mechanisms are likely going to get a large overhaul with the CCHIT projects going on. I forgot, who’s working on that stuff?
The login routine is a security issue and that has been assigned to Sena Palanasami and Visolve.com. Selvi is the lead developer.
Logging will likely need to be part of the sql.inc class. Since we are planning to force all database connections through sql.inc, this will be a logical point to ensure that all the appropriate database changes are logged correctly.
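A hypothetical illustration of that idea: once every query funnels through one wrapper in sql.inc, logging can be added in a single place. The wrapper name and the columns used are assumptions based on the query earlier in this thread:

function sqlStatementLogged($query) {
    // Record the statement in the log table, then execute it. The inner
    // call bypasses this wrapper, so the audit insert is not itself logged.
    sqlStatement("insert into log (date, event, comments) values " .
        "(NOW(), 'sql', '" . addslashes($query) . "')");
    return sqlStatement($query);
}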
A couple of quick things may help:
1) As CFAPress points out, LIKE kills speed. How about changing "event like 'login'" to "event = 'login'"? I can understand how comments can be unstructured, but not event.
2) How about indexing the table on a timestamp field and ignoring all events older than a couple of years, maybe less? A sketch combining both ideas follows.
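A sketch of both suggestions together, assuming MySQL and the column names from the query at the top of the thread; the index name and the two-year cutoff are illustrative:

// One-time schema change. With an (event, date) index, MySQL can read
// 'login' rows in date order and stop at the first row that also
// matches the comments filter, instead of scanning all 23 million rows.
sqlStatement("alter table log add index idx_event_date (event, date)");

// The lookup rewritten with an equality test and a date cutoff; the
// leading-wildcard LIKE on comments still can't use an index, but far
// fewer rows are examined.
$query = "select user, date, comments from log " .
    "where event = 'login' " .
    "and date > DATE_SUB(NOW(), INTERVAL 2 YEAR) " .
    "and comments like '%" . $ip . "' " .
    "order by date desc limit 1";
$statement = sqlStatement($query);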
Is it necessary to keep this huge number of records in the database? Can we plan for some kind of log maintenance which purges old, unused records from the tables?
The query as written searches the entire log for the last login attempt. I don't think it is going to work as it is currently written.
This is just the log, not the actual patient records themselves. The logging volume is only going to grow as we improve the logging. It is likely possible to archive the data on a regular basis, but yes, we will have to preserve all this data. The rule I was taught was 7 years for adults; pediatrics is 23 years.
It might be wise to have a monthly archive process that moves records from the log table into a table named 'log_archive' or something. This would preserve the data while keeping the log table reasonably unburdened. The log_archive table could be searched alongside the log table for reporting and auditing purposes, but day-to-day actions would touch only the log table itself.
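A minimal sketch of that monthly job, assuming MySQL and the existing sqlStatement() helper; the archive table name and the one-month cutoff are assumptions:

// One-time setup: an archive table with the same structure as log.
sqlStatement("create table if not exists log_archive like log");

// Monthly (e.g. from cron): copy old rows, then remove them from log.
$cutoff = "DATE_SUB(NOW(), INTERVAL 1 MONTH)";
sqlStatement("insert into log_archive select * from log where date < " . $cutoff);
sqlStatement("delete from log where date < " . $cutoff);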