I want to work on the OpenEMR source code and run it in a Docker container. So far the only way I can make this work is by developing inside the container over bash, which is not an ideal way to develop. I would like to work in PhpStorm on my host and mount the source as a volume inside the container, but I am having trouble doing that.
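For concreteness, the kind of thing I'm after is a bind mount along these lines (a sketch only; the host path is a placeholder and the webroot path is my assumption about the image layout):

# mount my host checkout over the container's webroot so PhpStorm edits show up live
docker run -d -p 80:80 \
  -v /path/to/my/openemr:/var/www/localhost/htdocs/openemr \
  openemr/openemr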
When I work from the OpenEMR Dockerfile, shouldn't I be able to pull down this code?
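For example, something like this (assuming "this code" means the main OpenEMR repository on GitHub):

git clone https://github.com/openemr/openemr.git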
Note that this solution is not in the rel-500 tag, but I can show you how to get that working if you wish. With this solution, you’ll be using the latest development version of OpenEMR.
Thank you both. I have been able to get this running using docker-compose up, but I am seeing some issues.
I notice that this version of OpenEMR will sometimes log me out automatically. I am not sure yet how to reproduce it, since it seems quite random.
After I do a docker-compose down -v, I can't seem to connect to the database when I run docker-compose up again.
I get this error: "Check that you can ping the server mysql."
$GLOBALS['dbh'] is false
$database->_connectionID is false
and my MySQL connection variables are correct:
host mysql, user root, password root, database openemr, port 3306
I can also ping the mysql container from inside my openemr_local_development container.
I can also get into the mysql container from the openemr_local_development container with a mysql client command.
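Something along these lines works (a sketch; it assumes the mysql client binary is present in the openemr container and uses the credentials above):

mysql -h mysql -u root -proot openemr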
I had this exact issue. Apparently, the first time the openemr docker image spins up, it makes necessary configuration changes to the SQL server and to files in the openemr container. Here is a trimmed git status output showing the files that are removed during the initial setup:
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        deleted:    ../../acl_setup.php
        deleted:    ../../acl_upgrade.php
        deleted:    ../../gacl/setup.php

Changes not staged for commit:
  (use "git add/rm <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        deleted:    ../../ippf_upgrade.php
        deleted:    ../../setup.php
        modified:   ../../sites/default/sqlconf.php
        deleted:    ../../sql_upgrade.php

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        ../../dir
My workflow is:
make changes that will be the starting point for development for the session
commit those changes
docker-compose up -d
make a small change to the code in the OpenEMR container (actually editing the file on my computer)
get things working in that one file I am developing
create a copy of the file elsewhere
now I want to commit that file to git more permanently, but I also want to reset the openemr container:
docker-compose down -v
docker rm
docker rmi
git reset --hard HEAD # returns the tree to its pre-setup state, restoring the files deleted by setup
now put the modified file back where it needs to go
git add and git commit
now spin up the container and repeat (the reset steps are collected below).
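Collected in one place, the reset part of that cycle looks roughly like this (a sketch; openemr_openemr is the image name docker-compose built on my machine, so check yours with docker images):

docker-compose down -v   # stop and remove the compose containers, discarding the mysql volume
docker rmi openemr_openemr   # remove the locally built image so the next up rebuilds it
git reset --hard HEAD   # destructive: restores the files setup deleted, and wipes uncommitted work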
It would be easier if openemr handled setup more gracefully, but this is working for me, if very kludgy.
Also, I just messed something up with the git reset command, so BE CAREFUL and back up your work.
This is excellent feedback. Let’s work together on this.
Basically, I'm thinking that we can document a good .gitignore strategy for local development, but the key is to not throw away the files on disk, just ignore them:
Obviously adding the _setup.php files to .gitignore is easy, so I’m focusing on the directories first (“worst first!”). I appended the following to .gitignore:
interface/main/calendar
sites/default/
I then studied up on applying the new rules, because git status still shows the files. I think this is how to do it:
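If I have it right, the standard way to apply new .gitignore rules to already-tracked paths is to untrack them without touching the working tree (a sketch):

git rm -r --cached interface/main/calendar sites/default
git commit -m "stop tracking paths modified by setup"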
I am willing to help in any way I can, but I am very new to Git, Docker, PHP and OpenEMR. I began this journey late last year, but have been on it pretty intensely in the past month.
I don't think that .gitignore will help with the create/run/destroy/recreate cycle. It seems that the first time the container runs, it makes changes to the openemr folder (which is mapped through VirtualBox to a folder on my Windows computer). These changes delete things and set flags indicating that the system successfully configured MySQL. When the container is destroyed (docker-compose down -v), those changes persist in the file system on my computer, outside the container. The next time a container is created, it sees those flags, thinks MySQL is configured correctly (which it is not, because the mysql container is fresh and new too), and then fails with the error given because it can't see MySQL. I have my own fork and am working in a branch from it.
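If my reading is correct, the flag in question lives in sites/default/sqlconf.php, which the git status above shows as modified; my understanding is that setup flips $config from 0 to 1 there once the site is configured. A quick way to inspect it:

grep -n 'config' sites/default/sqlconf.php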
I have no idea if this is optimal at all, but I have a new workflow that is a bit less destructive:
docker-compose up -d
make code changes and test
back up changes to another location
docker-compose down -v
docker rmi openemr_openemr
git checkout . <---- make sure the dot is there; this returns my file system to the un-set-up version
add back the changes that were backed up
git commit
git push
repeat
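Put together, one cycle of this gentler workflow looks like this (a sketch; the image name is from my machine):

docker-compose up -d   # start fresh openemr and mysql containers
# ...make code changes, test, back up the changed files elsewhere...
docker-compose down -v   # tear down, discarding volumes
docker rmi openemr_openemr   # force a rebuild on the next up
git checkout .   # discard setup's deletions and modifications in the working tree
# restore the backed-up changes, then:
git commit -am "describe the change"
git push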
I will post some other things that were helpful but took me a long time to figure out, such as networking and getting into the running container, in another post.
I generally keep one degree of separation between my code repo and my testing area. For example, my repo is not where my testing happens. When I make an edit in PhpStorm (before PhpStorm, I used to run a script to do this), it automatically updates the place where the testing code lives. Then anything that happens in the testing area after running setup, tests, etc. won't affect my repo. Is there a way to work this kind of strategy into the docker dev methodology?
(and agree .gitignore will likely cause more issues than it solves)
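One way to approximate that separation in the docker workflow might be to sync the repo into a scratch directory and bind-mount only the scratch copy, so setup never mutates the repo itself (a sketch; paths are placeholders):

rsync -a --delete ~/code/openemr/ ~/scratch/openemr/   # repo -> disposable testing copy
# point the docker-compose volume at ~/scratch/openemr instead of the repo, then:
docker-compose up -d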