Let’s get a small group of 3-4 people to QA and test any patch before it’s released to the public.
Currently the code is merged to master and Brady posts the patch. There is no real QA process behind this, and we see “this is broken” posts from users in the following days. Let’s help curb that with a formal process.
Under my suggestion, when Brady is preparing a release, he would email this group, give them a few days to test it thoroughly, and have them post any issues or bugs they encounter so they can be ironed out. Rinse/repeat until it’s solid, and only release once all parties have signed off through some audit trail. I believe this will be beneficial in providing more stable releases.
I will be more than happy to lead this and make this an actual process. Brady/Tony, any opposition or thoughts? If not, anyone else wanting to join me in this?
My suggestion is to read that stuff, see how a patch is made, and learn how to make a patch yourself. Then place your proposal on how to improve the process on the wiki there. I’d be very happy to absolve myself of this patching responsibility (I can dream…). My initial instinct is that attempting to create a committee of folks that don’t know the nuts and bolts of the patching process will simply impede/stop the patch process at this stage in OpenEMR’s evolving community; but I could be wrong.
Also, I can only think of one patch that needed a patch (i.e. a bug in the patch); all the other “this is broken” stuff has been incorrect use/install of the patch.
That was a great suggestion. I would like to add that the group consist of people that are “experts” in sections of the program. No one person could possibly test every part of the program; it is too vast at this point. But we would need each major section tested to make sure it is not broken or malfunctioning in some way after patches or versioning has been done.
Of course, since this is a nation of volunteers, we need some experts to set aside time to be on this team, and there should be a set duration of time that the experts will serve on it.
I was in no way insinuating the patching process was broken, merely suggesting a way to improve its current testing process before release. I can think of a few patch/release issues that have come up over the past year: NewCrop, Billing Module, Appt/Enc Report.
I’m not a developer by any means, I am just a very thorough (borderline OCD) user, and I would be more than happy to run through the interface and certain aspects of it, just to verify the integrity/functionality and add a fifth/testing step to the release process before we release it as “official”, treating potential pre-release patches as Release Candidates. I understand this is an open source, non-profit project; however, testing still needs to be done on everything we release. I have never seen any formal testing done before we release patches or official releases. I also see on the wiki that the Automated Testing has been broken/not used for some time.
It also might help the adoption of the software if an interested party sees it is a thoroughly tested piece of software before release. I know it would help ease my mind. I talk to and demo for IT folks, Doctors, and Office Managers that are interested, and how much this project forks sometimes scares them off, so establishing another level of uniformity and organization would benefit the project as a whole as well as its current community. Like I said, I’m not a developer, but I would be more than happy to lead this campaign and oversee it from a project standpoint.
To correct misinformation, only one of the 4.1.1 patches had a bug (because a bug fix in the patch unmasked another bug), which was noted and promptly addressed here: http://sourceforge.net/p/openemr/discussion/202506/thread/4854b2b1/#60c0
(This is one of the many reasons I keep things like this in a thread and documented)
If you want to help out in any way, I am all for it. As the person who gets to spend the time making the patches, here is my main, honest concern:
When I email out the patch to the party of verifiers, they will then have all sorts of questions like “what got changed”, “what should I test”, etc., thus creating more work for the patch creator. This is why it makes sense for this group to really understand the patch process (i.e. they can look at the commits that went into the patch to know what exactly needs to be tested, in addition to some general-use sanity testing; this is how I test it before it goes).
I suggest placing your proposed process on that wiki page, and then I will attempt to go through whatever steps are there on the next patch.
That we already have Patch 1 a mere 3 days after the 4.1.2 release suggests to me that the project could benefit from additional quality assurance practices.
Both Brad’s attempt to recruit assistance and Brady’s pleas to “TEST, TEST, TEST” seem to be met by, to borrow Tony’s phrase, “crickets chirping.”
One of the issues with this problem, and many others encountered in the project, is that many a “good idea” is suggested, but the proposals are too vague to act on. The devil really is in the details, and as always resources are limited.
Brady’s concerns regarding dealing with questions like “what got changed” and “what should I test” could at least in part be addressed by devising a checklist of test cases. It’s clear from the Vital Signs bug that EXISTING functionality needs attention too. More formal verification of even the basics would have value.
The project would also benefit from more detailed descriptions of testing done for code contributions. However, as was brought up in the documentation discussion, there is the potential that more stringent requirements may discourage people from contributing.
My “dream” solution would be an automated integration/test platform that could handle basic testing with an ever expanding set of test cases that could be used to evaluate the status of the code base without a lot of manual intervention. However, again time and resources are limited.
Hopefully I haven’t wasted my time in drafting this message, but somehow I get the feeling these sorts of statements aren’t reaching enough people willing to assist.
One tool that might be of use (assuming someone out there has the time and inclination) is Selenium http://docs.seleniumhq.org/. This is an open-source web testing tool. It allows you to record use of a web page and play it back to check the results. This is a very effective way to test basic functionality. Not a simple activity to take on, but it might be more fun than manually banging away at a list of “things to test”.
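The record-and-playback workflow is what the Selenium IDE gives you out of the box; the same checks can also be scripted. As a very rough sketch (not anything that exists in the project today), a scripted check using the php-webdriver bindings might look like this. The Selenium server address, the OpenEMR URL, and the title check are placeholders for illustration only:

```php
<?php
// Sketch: drive a browser to the login page and confirm it renders at all.
// Assumes a Selenium server on localhost:4444 and an OpenEMR install at
// http://localhost/openemr (both hypothetical), with php-webdriver installed
// via Composer.
require 'vendor/autoload.php';

use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\Remote\DesiredCapabilities;

$driver = RemoteWebDriver::create('http://localhost:4444/wd/hub', DesiredCapabilities::firefox());
$driver->get('http://localhost/openemr/interface/login/login.php');

// Trivial sanity check: the page title should mention OpenEMR.
if (strpos($driver->getTitle(), 'OpenEMR') === false) {
    echo "FAIL: login page did not load as expected\n";
} else {
    echo "PASS: login page loaded\n";
}

$driver->quit();
```

Even a handful of scripts at this level would catch the “page is completely broken after the patch” class of problems without anyone clicking through by hand.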
Would a tool like Selenium have been able to detect a problem like the incorrect date in Vitals that was fixed in the V4.1.2 patch 1?
One of the problems was that it had been in the Development Demo for some time, but was not mentioned as a problem. Why? Because most users did not have a problem; they just got the correct date and continued.
But any improvement in pre-release testing might bring some relief for the end testers.
Selenium is OK for user interface changes. As MU pushes EMRs toward additional functionality, the EMR will break not just due to changes in its own code but also due to interface changes pushed by third parties. So a better approach would be to use a testing tool like Codeception, which uses Selenium as just one of its modules.
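To make that concrete, a minimal Codeception acceptance test could look roughly like the sketch below. The page path, field names, credentials, and post-login text are placeholders, not actual OpenEMR markup; the suite would be configured to run through either the PhpBrowser module or the WebDriver/Selenium module:

```php
<?php
// LoginCest.php -- hypothetical Codeception acceptance test sketch.
// The same scenario runs against whichever backend the suite is configured
// with (PhpBrowser for fast checks, WebDriver/Selenium for a real browser).
class LoginCest
{
    public function logInAndReachCalendar(AcceptanceTester $I)
    {
        $I->wantTo('log in and land on the calendar');
        $I->amOnPage('/interface/login/login.php'); // path is an assumption
        $I->fillField('authUser', 'admin');         // field names are assumptions
        $I->fillField('clearPass', 'pass');
        $I->click('Login');
        $I->see('Calendar');                        // expected text after login
    }
}
```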
Back to the original message for this thread: the testing setup will probably need 1 developer and 2-3 non-developer (super)users with a keen eye for noticing anything unusual. They will need to work on creating scripts for the complete process, from appointment to payment.
The developer of any new development submission will need to include minimal unit test script(s) to make life a bit easier for Tony and the other gatekeepers.
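As a sketch of what a “minimal unit test script” could mean in practice (calculateAge() is an invented stand-in for whatever function a submission actually adds, defined inline only to keep the example self-contained):

```php
<?php
// Hypothetical minimal PHPUnit test a contribution could ship with.
// calculateAge() stands in for the new code under test; in a real
// submission it would live in the patch itself, not in the test file.
use PHPUnit\Framework\TestCase;

function calculateAge(string $dob, string $asOf): int
{
    return (new DateTime($dob))->diff(new DateTime($asOf))->y;
}

class CalculateAgeTest extends TestCase
{
    public function testAgeIsComputedFromDateOfBirth(): void
    {
        $this->assertSame(33, calculateAge('1980-01-15', '2013-08-01'));
    }
}
```

Running phpunit over a handful of files like this would at least give the gatekeepers a smoke test to lean on before merging.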
I also suggest using a QA system like the Demo with these tools configured, since many small developers are unlikely to have the correct setup.
Unfortunately we are talking about a serious amount of time commitment here. Do any of the corporate partners have anything they can contribute in this area for a head start?
I share your concern in regards to time; my hesitation is spending a good portion of my time writing test procedures on the wiki only to receive no volunteers. I think discussion is great, but it’s also this project’s downside. One thing I see happening frequently is someone proposes an idea, then it gets overly critiqued or, for lack of a better jumble of words, armchair software developed until all of the energy has been exhausted into a conversation with more Segways* than Silicon Valley in 2003.
We would certainly need a developer or two to be a part of that process; to run the automation, fixes, and code stuff, as well as one other interface and functionality tester. This project needs more than Brady, Kevin, and Tony. That’s not to say there isn’t more out there, I’m not trying to ruffle feathers, I just view them as being the most active.
Personally, I think Tony’s proposal on modifying how/what code is checked in with is great. I think the project’s current cluttered state of contribs is scary, because there is no real uniformity or clear dependencies listed. If we start making everything standard and uniform, everything becomes easier for everyone at every level.
He even had a build test server that checked the tests daily. It was a cool couple weeks. So, perhaps, an option is to pick that up again if anybody is interested.
A lot of this is a matter of infrastructure. For example, the development demos have done a great job avoiding installation errors since these do an installation “build” daily. And offering that code up for “easy” testing helps (although, as noted with the vitals bug, it is definitely not even close to perfect). Having a build testing engine would also build on that infrastructure, and it appears that it essentially runs on autopilot after setup (just like the development demos do).
Regarding the project proposals, in the end these types of decisions are basically up to the person/company willing to do the work. For example, whether you want to use phpunit, selenium, codeception, or hack up your own testing scripts is really up to you in the end (or whoever is willing to do the work). My suggestion is to just make a decision and go with it.
After I finish my current crowdsourced project (the CMS 1500 changes) I might try another round to raise money to pay for development and maintenance of an automated testing suite. The problem here is that the recent projects were largely funded by the same people. I have doubts about the community’s willingness to support such an effort at an adequate level.
While the initial implementer will have the most impact regarding the choice of tools, it is important that there be some buy-in from other developers, especially if we want new code contributions to come with new test cases. I wrote some test scripts using Watir in the past. Mainly they could create new patients and appointments. The problem was that I kept confusing myself switching between PHP and Ruby syntax when writing test cases.
MD Support, thanks for pointing out codeception. It just may fit the bill.