This is more of a report than a request for help; it is to document something I have run into again. I have just upgraded from 4.2.2, and the patient_data table has 25113 records. When they try to create a new patient with the native code, the database throws an error saying there is a duplicate record 10000.
To fix this issue, I changed the code from this:
$result = sqlQuery("SELECT MAX(pid)+1 AS pid FROM patient_data");
$newpid = 1;
if ($result['pid'] > 1) $newpid = $result['pid'];
To this:
$result = sqlQuery("SELECT id AS pid FROM patient_data ORDER BY id DESC LIMIT 1");
$newpid = 1;
if ($result['pid'] > 1) $newpid = $result['pid'] + 1; //Changed by Sherwin
setpid($newpid);
There seems to be a limit on how many records MAX(pid)+1 can read; it always stops at 10000.
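For anyone trying to reproduce this, a rough side-by-side check of the two queries (assuming the same sqlQuery helper and direct access to the same patient_data table) might look like:
// hypothetical comparison of the two approaches from this post, for debugging only
$a = sqlQuery("SELECT MAX(pid)+1 AS pid FROM patient_data");
$b = sqlQuery("SELECT id AS pid FROM patient_data ORDER BY id DESC LIMIT 1");
error_log("MAX(pid)+1 gives " . $a['pid'] . ", last id + 1 gives " . ($b['pid'] + 1));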
Hi,
I was unable to find a MAX limit in the mysql/mariadb docs. Super odd. The fix seems reasonable (although it will break when there is only 1 patient). It should likely look something like this (note the change from 1 to 0):
$result = sqlQuery("SELECT id AS pid FROM patient_data ORDER BY id DESC LIMIT 1");
$newpid = 1;
if ($result['pid'] > 0) $newpid = $result['pid'] + 1; //Changed by Sherwin
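Pulled out into a small helper, the corrected flow might look roughly like this (the function name is just for illustration; it assumes OpenEMR's sqlQuery helper returns an empty result when the table has no rows):
// sketch only: hypothetical helper, not the actual codebase function
function getNextPid()
{
    // default to 1 when patient_data is empty and no row comes back
    $newpid = 1;
    $result = sqlQuery("SELECT id AS pid FROM patient_data ORDER BY id DESC LIMIT 1");
    if (!empty($result['pid']) && $result['pid'] > 0) {
        $newpid = $result['pid'] + 1; // one more than the highest existing id
    }
    return $newpid;
}
setpid(getNextPid());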
Can you place this fix on github?
-brady
Sure, I will work something up for 1 patient. This is an example of replying without reading and comprehending what was in the previous message. 
Note my example above should fix the 1 patient issue.
Here is the new pull request.
I want to see if I can replicate this before the fix gets into the codebase.
Nice @robert.down,
This will be a good test case for your demo data generator.
-brady
I tried, but apparently there's a problem with the demo data generator - I can't generate that many accounts at one time. For whatever reason it errors out at around 500 iterations.
This error only happens if there are more than 10000 records in the patient_data table. The user is a multi-doctor practice, so most solo practices and practices with fewer patients than that won't experience this issue.
It looks like MAX() is one of the aggregate (GROUP BY) functions, not a summing tool:
https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_max
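For what it's worth, that page documents MAX() as an aggregate that returns the single largest value in the group rather than a running total; a quick, untested way to see the difference against the live table:
// MAX() should return the largest pid; SUM() is what an actual summing tool would give
$row = sqlQuery("SELECT MAX(pid) AS max_pid, SUM(pid) AS sum_pid FROM patient_data");
error_log("max=" . $row['max_pid'] . " sum=" . $row['sum_pid']);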
I posted this question on stackoverflow.com
http://stackoverflow.com/questions/44085332/does-maxexpr-have-a-limit
Hi,
I would be ok bringing this fix into the codebase (rather than waiting too long for somebody else to confirm this). @juggernautsei is definitely having issues with this, and I don't think the fixed code solution is inferior to the original code solution (they are both a bit awkward).
-brady
I agree; they both seem unnecessary since the database could take care of this itself with auto increment. However, I received responses to my Stack Overflow question, and the respondent posted this:
Please be careful. You are trying to use a method for assigning id values that are notorious for working correctly during development and test, then failing – with user data loss – in production. Because: race conditions. Please edit your question to present the output of SHOW CREATE TABLE patient_data, then pay attention to the answers you get. – O. Jones 22 hours ago
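In case it helps anyone follow up on that, a rough way to grab the output the commenter asked for from within the code (assuming sqlQuery will run SHOW statements; MySQL returns the DDL in a column literally named "Create Table"):
// untested sketch: dump the table definition requested in the Stack Overflow comment
$row = sqlQuery("SHOW CREATE TABLE patient_data");
error_log($row['Create Table']);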
This could be one of those things that get refactored in the code base modernization project.
@juggernautsei ,
The id field (note: not the pid) is using autoincrement, and a table can't have more than one of these (see the 2nd answer on http://stackoverflow.com/questions/13642915/mysql-table-with-more-than-one-auto-incremented-column).
It's currently a bit messy since, in theory, if 2 users tried to insert new patients at the exact same time, 1 may error out. This is likely why the following commented-out code block was there:
// here, we lock the patient data table while we find the most recent max PID
// other interfaces can still read the data during this lock, however
// sqlStatement("lock tables patient_data read");
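If we ever wanted to actually close that race, a rough, untested sketch might hold a lock across both the read and the insert - note that covering the insert would need a WRITE lock rather than the READ lock in the commented-out code, which also means other sessions could not read patient_data while the lock is held:
// rough sketch only, not tested; assumes the existing sqlStatement/sqlQuery helpers
sqlStatement("LOCK TABLES patient_data WRITE");
$result = sqlQuery("SELECT IFNULL(MAX(id), 0) + 1 AS pid FROM patient_data");
$newpid = $result['pid'];
// ... insert the new patient_data row here, while the lock is still held ...
sqlStatement("UNLOCK TABLES");
setpid($newpid);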
Would be interested to hear if others have any opinions on this issue.
thanks,
-brady