The problem is that you are recreating the dialog content every time instead of just doing it once. Move the .dialog() creation out of the click events and only call .dialog('open') in the click event. Then the .dialog will be formatted well before it needs to be displayed.
Thanks for the comment. The dialog content is unique for each link, so the html in the <div id='tbcsvhist'></div> needs to be replaced before the dialog is opened. The user will be entering encounter numbers in the input to see what is in the tables for that encounter, or looking at a table of claim responses.
For example, suppose a user is looking at the results from uploading and processing a new 277CA file and there are 8 accepts and 3 rejects. The html will have a link for the status of each claim (or maybe only rejects are shown), which calls the function that reads the file, gets the status for that claim, and generates the html. In this case the handler is $('#processfiles').click(function() ... ), which sets up the "bindlink" function for all <a class='clmstatus' href='url'> elements in the "process new files" response. When a status link is clicked, the dialog with the status information is opened. When a different status link is clicked, the last response has to be replaced with the response for the different claim, and the dialog is replaced with a new dialog with new content.
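A rough sketch of that flow (the $.get call, the dialog title, and the width here are illustrative only, not the actual project code):

// sketch: the dialog is recreated every time a status link is clicked
$('#processfiles').click(function () {
    // ... the "process new files" response html is loaded, then the status links are bound
    $('a.clmstatus').click(function (e) {
        e.preventDefault();
        $.get($(this).attr('href'), function (data) {
            $('#tbcsvhist').html(data);   // replace the previous claim's response
            $('#tbcsvhist').dialog({      // and rebuild the dialog around it
                title: 'Claim Status', width: 600, modal: true
            });
        });
    });
});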
I was testing to see if I could see this issue, but it would not happen this time. I guess my computer is faster today.
Once Brady decides things are good enough to put into the actual development/testing code and people can try the code, it will be much clearer. Your comment indicates to me that this type of issue can be dealt with rather easily, once one knows which line to change. Appearance is so important, and I really appreciate your input.
The dialog content is unique for each link, so the html in the <div id='tbcsvhist'></div> needs to be replaced before the dialog is opened
You still don’t necessarily have to use the full .dialog() creation function each time though. You could add another sub-DIV to tbcsvhist and update just that part. Or create separate .dialogs for each link type once at document creation time.
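For example, something along these lines (a minimal sketch, assuming a hypothetical #tbcsvhist_body sub-div inside tbcsvhist that holds the per-claim html):

// create the dialog once at document ready, hidden until needed
$(function () {
    $('#tbcsvhist').dialog({ autoOpen: false, width: 600, modal: true });
});

// in the click handler, only swap the content and open the existing dialog
$('a.clmstatus').click(function (e) {
    e.preventDefault();
    $.get($(this).attr('href'), function (data) {
        $('#tbcsvhist_body').html(data);   // hypothetical sub-div for the per-claim response
        $('#tbcsvhist').dialog('open');
    });
});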
Kevin,
I did some jquery stuff that was based on replies from the database for the accordion object in the external data load code. Might be worth a peek.
That is in the Snomed contribution, /interface/code_systems/ ? JavaScript has quite a syntax IMHO, and it is hard to see what is going on. PHP is so straightforward comparatively. In dataloads_ajax.php it appears the jQuery .each() function is applied to the db_list array to create <div> elements and then a dialog is attached to each <div>. Looks pretty clever.
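If I am reading it right, the pattern is roughly this (a sketch only; db_list is from the description above, while the id, title, and container names are stand-ins, not the actual dataloads_ajax.php identifiers):

// build a <div> for each database reply and attach a jQuery UI dialog to it
$.each(db_list, function (index, item) {
    var $div = $('<div/>', { id: 'db_dialog_' + index, title: String(item) })
        .text('details for ' + item)
        .appendTo('#container');
    $div.dialog({ autoOpen: false, modal: true });
});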
Brady suggested that I use the OpenEMR date selection methods instead of jquery-ui datepicker. I have been looking through the code to see how it is done (various ways it seems). Apparently the /library/dynarch_calendar.js is the main library used. This is a bit of an unexpected research effort, so it will be next week before I try the revisions.
(a quick aside, note this code was done before the xlt() and xla() functions existed, which are basically short for htmlspecialchars( xl(''), ENT_NOQUOTES) and htmlspecialchars( xl(''), ENT_QUOTES) respectively)
From your “main stuff” quote:
onkeyup='datekeyup(this,mypcc)' onblur='dateblur(this,mypcc)'
I found these functions in /library/textformat.js, I believe, but I did not figure out where the 'mypcc' argument comes from. These functions format keyboard input.
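From what I can tell so far, the wiring is: the text input gets the onkeyup/onblur handlers quoted above, and then the dynarch popup is attached to it with Calendar.setup, something like this (a sketch from my reading of the code; the element ids are examples of mine, and I am assuming mypcc is a page-level variable set elsewhere):

// attach the dynarch popup calendar to an existing date input and its calendar icon
Calendar.setup({
    inputField: 'form_date',      // id of the date text input
    ifFormat:   '%Y-%m-%d',       // format written back into the input
    button:     'img_form_date'   // id of the clickable calendar image/button
});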
I have a draft of the revisions. It just takes time to study the code and debug.
I just pushed revisions for using the dynarch calendar to edihistory_2.
Note: my testing OpenEMR is the development version and I have little or nothing in the database. It does not seem to be all working - white screens, etc. But the stuff I am working on seems to work, except that the xlt() and xla() functions return empty strings. I think I could spend a lot of time (that I don’t have) getting a test OpenEMR up to satisfactory performance. Maybe it would be better to install the 4.1.1 working version and patches. Then the github remote would have to change.
The invisible xla and xlt stuff is my fault. I forgot to tell you to also use echo in the statements. I fixed this (and a couple of other minor things) and committed your code to sourceforge. Check out my code review here for things to work on and an explanation of the minor changes I made: http://github.com/bradymiller/openemr/commit/4e485a7bde35193f1cb021f66dd70b987eebce4a
Thank you for this awesome contribution; since it’s now in OpenEMR, it will be easier for you to continue working on it.
If developing, it is always best to develop on the development version (suggest not using the 4.1.1/patches etc). Please note your branch runs fine in my testing, so I’m guessing it’s not the code.
I definitely should have caught the missing “echo”. It just did not click.
I will read some more on git and github and on keeping up with everything, since other developers can now edit this code and submit their revisions if they so choose, and I want to keep my copy of the development version in sync.
OK, now I need some pointers on git. Since you edited the edih_view.php script, those edits are not in my branch. Actually, there have been a number of revisions to the OpenEMR development version that I probably do not have. I should probably replace my ‘repo’ with a new one and create a new branch - replacing all my old stuff. I could also get rid of the stand-alone edihistory repository since it no longer serves a purpose.
In your case, since there are only a few branches, it will be best to just drop them all (and master) rather than redo the repo, because you will then keep all your commits, if you ever want to use them. Try the following:
#drop your remote branches (it appears you have already done this)
git push origin :<branchname>
#now let's drop your local master branch
#(you will need to create a new temporary branch and switch into it for this to work, since you can't delete a branch you are in)
git checkout master
git checkout -b temp_branch
git branch -D master
#now let's get the master from upstream (the official github openemr mirror)
git checkout upstream/master
#(ignore the 'detached HEAD' warning)
#now let's create your local master branch
git checkout -b master
#now let's push it to your github account (note the + before master, which will drop and then re-push your github master branch)
git push origin +master
Hi Kevin,
Here are additional instructions to remove your edihistory_2 and edihistory branches from your local repo and your github repo:
#Remove from github (note the colon before the branch names)
git push origin :edihistory
git push origin :edihistory_2
#Remove them in your local repo
git branch -D edihistory
git branch -D edihistory_2
First, I would like to tell you that I am a developer and I do not have much idea about these 277, 997, ACK files, etc.
When I upload an x12 837 batch file, it uploads correctly. But when I try to search for the encounter number, it says “Failed to find the batch file for 4457”.
Also, is there any documentation for users on how we should use these forms, so that I am aware of where I am going wrong?
Thanks for trying it. There is a Readme; access it under the ‘Notes’ tab.
The edihistory project is entirely dependent on the actual uploaded files and does not access the OpenEMR database (as of this time). Therefore, the encounter will only be found if it is in one of the batch files that have been processed using the New Files tab (batch file information is saved into the csv tables ‘files_batch.csv’ and ‘claims_batch.csv’). In the New Files tab, click the Process button, which runs the new-files search and process functions. If you have a lot of new files, you may want to turn off ‘HTML Output’, because the new-files process will give a table-format output for each type of file and for claims that indicate some error (it may be too much to bother with, but no harm). This is a once-through visual, shown only for new files. Use the CSV Tables tab to locate particular claim responses, using the ordering and search capabilities of the jquery dataTables.
To deal with new files, go to the New Files tab, click the Browse button, select files, and click the submit button; repeat as needed. Then click the Process button; the new files are parsed, rows are appended to the csv files, and you see the new-file output.
The CSV Tables select button is populated only after the csv tables have some data, determined by reading whether the csv files have a minimum length. Reload the frame to get a new read.
The whole thing is built around the process of generating batch files, submitting them to the clearinghouse, and then downloading the responses. I assume the batch file exists if there is a response. First-time reading of a large number of files works the same way, but the new-file output may be a lot more than you want. Zip archives can be uploaded as a way to load a large number of files, but this may not work if your files are in a subdirectory within the zip.
Thanks a lot for your quick response. Yes, that was really helpful. But I still have a few more doubts:
1) As you wrote, “The whole thing is built around the process of generating batch files, submitting them to the clearinghouse”. But OpenEMR is already generating an x12 837 batch file that we submit to the clearinghouse.
Later the clearinghouse provides us different file formats like 277, 835, 997, TA1, etc.
So can you please clarify what you meant by your statement above?
2) What is the purpose of the other tabs, like “ERA Files” and “X12 Text”?
3) How is the ERA going to be effective if I upload an ERA file through OpenEMR?
Thanks Kevin, as I really need your help to get things fixed.
The “process” here is actually submitting claims and getting responses. As you say, OpenEMR has generated the batch file. If you do billing then you know that finding errors and rejected claims, and researching payments, is a pain. The project parses the response files and lets you know if there is an error or rejected claim. Click the links in the output and see the reject response, the batch claim segments, or the file.
The ERA Files tab lets you view an ERA file in an RA format. The detail is comparable to what I get in detailed paper RAs. The OpenEMR handling of ERA files in “Fees | Payments” is a one-time view, and after that there are only the database entries, which can be accessed by searching payments or the EOB page. There may be a lot more information in the ERA file. This does not replace the OpenEMR Payments functions at all - it is only for your information, so you can see your ERA response whenever you want to. The ERA files are stored by OpenEMR, but they have a naming scheme that I just did not want to deal with, it depends on user uploads, and there is not a good way to see what is new. I have downloaded ERA files and then missed processing them in the Payments screen, and there are no clues about that possibility. That is something that is a little better with this project, but I would like to add a feature to check whether an ERA file is shown in the OpenEMR database.
The X12 Text tab is for viewing the x12 files. They are read and formatted to make looking at them a little easier.
Remember that all these files are in the user’s directory tree at some point. The only ones that you really know are in OpenEMR are the batch files. Giving the user some tools for dealing with these files is what the project is all about.