High-quality manual test cases need to be built up and maintained going forward. A group of testers can run through them before releases ship, and developers can also use them as a reference/sanity check during the big codebase refactor project.
@Brady: Historically, do we use a finite set of browsers when testing before releases? If not, it would be great to identify a minimal spec (IE 10/11, Edge v25, Firefox v40, Chromium v40, etc.):
EDIT: by no means would we have to test every single upgrade from Firefox v40 and up (the same applies to the Chromium example). I am trying to take a snapshot of a sufficiently modern browser version that we can reasonably assume will behave similarly on all subsequent releases (until we need to “bump up the window” of versions to match the reality of quick updates from Firefox/Chromium)… for example, OpenEMR will behave the same in Firefox v40 as it does in v47. Though a bit risky, this approach worked great for a recent team I was on. It is not safe to assume folks are using the latest browser version. For instance, I am a software engineer who should know better, and my Firefox is at v43…
Looks like a good starting point, and we could always bring this into the main codebase (I would recommend placing the Manual_Tests directory inside the Tests directory, though); if it lives in the main codebase, it would be easier for others to contribute. It will be good for you to do 1 or 2 of them so folks can then follow your example in formatting, etc. (it is important to note that it is tough to predict whether folks will collaborate on specific projects or in the release; but I always try to remain optimistic on this front).
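As an example of the kind of formatting folks could follow, here is a minimal sketch of what one manual test case file might look like (the test ID, steps, and layout below are all hypothetical, not an existing OpenEMR convention):

```markdown
# Suite: Patient_Demographics
## Test Case PD-001: Create a new patient (hypothetical example)

**Preconditions:** Logged in as a user with patient-creation rights.

**Steps:**
1. Navigate to Patient/Client > New/Search.
2. Enter a first name, last name, and date of birth.
3. Click "Create New Patient".

**Expected result:** The new patient's demographics summary page loads
with the entered values displayed.

**Tested on:** Firefox (the project's officially supported browser)
```

One file per test case with explicit preconditions and a single expected result tends to keep manual runs consistent across different testers.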
At this point, the project officially supports Firefox (most developers use this, I think), so we could simplify by just focusing on Firefox for now. Regarding the version of Firefox, it may be tough to expect testers to use specific versions (for example, in Ubuntu 16.04, there are basically only two Firefox versions to choose from among the standard packages). Interestingly, in my 16.04 environment, I downgraded to the lower Firefox version in order to work with Selenium.
Let me get the “Patient_Demographics” test suite finished and then I’ll send a pull request with all the other suite folders, each containing a .gitkeep just so they’re there.
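Since Git does not track empty directories, the usual trick is exactly this: an empty .gitkeep file in each placeholder folder. A minimal sketch of what that PR prep could look like (suite names other than Patient_Demographics are hypothetical examples):

```shell
# Create the suite folders inside Tests/Manual_Tests; git ignores empty
# directories, so drop an empty .gitkeep into each placeholder so it is
# committed. Suite names besides Patient_Demographics are hypothetical.
for suite in Patient_Demographics Scheduling Billing Reports; do
  mkdir -p "Tests/Manual_Tests/$suite"
  touch "Tests/Manual_Tests/$suite/.gitkeep"
done
ls Tests/Manual_Tests
```

The .gitkeep name has no special meaning to Git; it is just a community convention for "keep this directory in the repository."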
Here is what I am working on at the moment (almost done with the first one!):
At this point, the project officially supports Firefox
Cool, I will note this. Have you ever run into or heard of a facility that uses OpenEMR via Internet Explorer? In my experience, Internet Explorer has a larger market share in hospitals, for instance.
Willing to bet some folks use OpenEMR from Internet Explorer. If they were to report bugs on the forums, though, the first advice they would likely get would be to use Firefox. OpenEMR still has a ways to go before it captures large enterprise settings in the US. Note OpenEMR is also not ideal for the inpatient setting.
Sounds good! Noted the target Firefox browser on the Manual Testing project page.
Not to derail the thread too much but… since we are focused on outpatient and not inpatient, should this be reflected on our new homepage? Should we mention that inpatient is on our roadmap?
I wouldn’t go into outpatient vs. inpatient specifics, since it depends on the specific use case. Note that the lack of out-of-the-box inpatient support applies to the US (I think this is because inpatient use in the US brings with it automatic enterprise needs and different priorities, such as billing mechanisms and orders). There are places outside the US that have customized OpenEMR and use it for inpatient care: http://www.open-emr.org/wiki/index.php/OpenEMR_Success_Stories#Siaya_District_Hospital_in_Kenya_Goes_Live_With_OpenEMR_in_April_2012
@Brady: sounds good. I’m going to put that PR in soon… before I do, is there any HL7 testing (inbound or outbound) to be done with respect to Patient Demographics?
There is a mechanism to import/export patient info via CCDA in the Care Coordination module, but I wouldn’t go there yet in your testing scripts (that’s a bit more complicated, since you need to install/configure the modules, and credentials are required to use ZH Healthcare’s Mirth server, which is required for CCDA processing).
There is also an XML import/export in popups (bottom of the left_nav frame), but I wouldn’t incorporate this into your test scripts, since it is complicated and I don’t know if it’s even used by users.
Sounds good. When we get to the HL7 part, I believe I can be helpful in getting Mirth configured, documented (if it isn’t already), and tested appropriately (I recently did a deep dive into Mirth).
I’m currently putting together patient and provider scheduling test cases. The calendar is a pretty big feature, so the PR may take a week or so.
Before I forget to ask… are there any manual tests that have been created over the years for Meaningful Use QA? I’m happy to recreate them; I was just wondering if this was ever captured.
I clicked on a few PDFs and am wondering where the actual test steps are. The documents only appear to have basic test plan information and version history.
Is there a matrix (on our wiki or otherwise) of what test cases OpenEMR is required to run? I noticed that certain tests are marked as optional.