I’ve been using Docker in my current role for over six months now to provide me with an Oracle Database instance on a Mac. Oracle is the database of choice in our other environments, from CI to Production.
As Oracle cannot be installed natively on a Mac, we previously had to make do with MySQL, which meant the annoyance of maintaining two sets of DDL scripts for the database schema. Having a real Oracle instance rather than the MySQL one has certainly been useful.
I have been wondering for a while now how flexible this setup could be during development. If I need to stop what I’m working on to swap tasks, it’s easy enough to git stash, swap branches and so on — but what about the database?
Any changes I had made to the database, and any test data I had loaded, would be lost once I reset the schema to match a different branch. When I eventually found time to investigate this further, it turned out to be easy.
Starting another container instance is as easy as running the docker run command:
$ docker run -d -p 49161:1521 wnameless/oracle-xe-11g
This would provide me with a new container holding an empty Oracle database instance. However, we have DDL/DML scripts for each application version which, if I could execute them in this new Docker container, would allow me to provision a database at the schema version of the current Git branch.
As you can see above, I am using the wnameless/oracle-xe-11g Docker image to provide the Oracle containers. It works perfectly out of the box, but I wanted to add the ability to execute a script when starting a new container. So, first things first: my first Dockerfile.
After some tinkering, I discovered the ENTRYPOINT Dockerfile instruction. This allowed me to create a standard *nix shell script to act as a wrapper for starting the Oracle instance in the new container. By adding parameters to the docker run command you can instruct the container to execute a SQL script during container creation; if no such parameters are provided, you get an empty Oracle instance to play with.
#!/bin/bash
set -e

echo "Starting Oracle..."
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=XE
sh -c "/usr/sbin/startup.sh"
echo "Oracle startup complete"

if [ "$1" = 'runscript' ]; then
    echo "Running SQL script $2..."
    cd $(dirname "$2")
    sqlplus -s system/oracle << EOF
whenever sqlerror exit sql.sqlcode;
set heading off
@$2
exit;
EOF
fi

echo "Oracle instance is ready for action"
exec /usr/sbin/sshd -D
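For reference, the Dockerfile that wires this wrapper into the image could look something like the following sketch (the script name entrypoint.sh and its path inside the image are my assumptions; the base image is the one mentioned above):

```dockerfile
# Sketch only: build on the existing Oracle XE image and install the
# wrapper script as the container's entry point.
FROM wnameless/oracle-xe-11g

# entrypoint.sh is the wrapper shell script shown in this post.
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh

# Arguments given to `docker run` (e.g. "runscript foo.sql") are
# forwarded to the wrapper as "$1", "$2", etc.
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```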
The docker run command then becomes:
$ docker run -d -p 49161:1521 iancollington/oracle-xe-11g runscript create_schema.sql
Of course, the SQL file that is executed has to be available to the container. This can be achieved by simply mounting a directory from the host:
$ docker run -d -p 49161:1521 -v /path/to/dbscripts:/dbscripts iancollington/oracle-xe-11g runscript /dbscripts/create_schema.sql
I was then able to mount the directory containing the SQL scripts and create a wrapper SQL script to execute each of them in turn:
@ddl/schema_v0.0.1.sql
@dml/schema_dml_v0.0.1.sql
@ddl/schema_v0.0.2.sql
@ddl/schema_v0.0.3.sql
@ddl/schema_v0.0.4.sql
@dml/schema_dml_v0.0.4.sql
This gives me my provisioned Oracle database in a nice shiny new container.
At this point, I thought: job done. But after stopping and starting the container, I soon realised that the runscript executes every time the container is started. Doh!
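Incidentally, one way to stop the script re-running would be a guard inside the wrapper itself. A minimal sketch, using a hypothetical marker file (the path and function name are arbitrary, and the sqlplus call is elided):

```shell
#!/bin/bash
# Sketch of a first-run guard: drop a marker file after the first
# execution so that container restarts skip the SQL script.
MARKER="${MARKER:-/tmp/.schema_initialised.$$}"

run_once() {
  if [ -f "$MARKER" ]; then
    echo "already initialised"
  else
    echo "running script $1"
    # sqlplus invocation would go here
    touch "$MARKER"
  fi
}

run_once create_schema.sql   # first start: the script runs
run_once create_schema.sql   # restart: skipped
```

I went a different way, though, since committing the container as a new image also gives me reusable tagged snapshots of each schema version.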
After some investigation, I found that the CMD value could be overridden when committing the container as a new image:
$ docker commit --change "CMD echo" e27cbfd5ac7a myproject/myapp_db:v0.0.2
This allows me to have a number of tagged images for the schema at different versions.
$ docker images
REPOSITORY                    TAG      IMAGE ID       CREATED      VIRTUAL SIZE
myproject/myapp_db            v0.0.2   8abca37a48b5   1 day ago    4.848 GB
myproject/myapp_db            v0.0.1   57414119b31d   3 days ago   4.848 GB
iancollington/oracle-xe-11g   latest   10b052eeff9f   2 days ago   3.537 GB
This works quite well as I can then create containers as and when I need them.
For instance, when I start work on a new feature, update the DML scripts and require a new database to work from, I simply do:
$ docker run -d -p 49161:1521 -v /path/to/dbscripts:/dbscripts iancollington/oracle-xe-11g runscript /dbscripts/create_schema.sql
$ docker commit --change "CMD echo" e27cbfd5ac7a myproject/myapp_db:v0.0.3
$ docker run -d -p 49161:1521 --name myapp_db myproject/myapp_db:v0.0.3
When I need to stop and work on an urgent bug that uses the previous schema version, I simply stop the current container and start a new one from the image tag for the required schema version:
$ docker stop myapp_db
$ docker run -d -p 49161:1521 --name bug_fix_52 myproject/myapp_db:v0.0.1
Of course, if I used different ports between the containers then I wouldn’t need to stop any containers as I could run multiple Oracle containers concurrently.
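For example, mapping a different host port for each container (the second port here is an arbitrary choice) would let both schema versions run side by side:

```shell
$ docker run -d -p 49161:1521 --name myapp_db myproject/myapp_db:v0.0.2
$ docker run -d -p 49162:1521 --name bug_fix_52 myproject/myapp_db:v0.0.1
```

Each container maps its own host port to Oracle's listener port 1521, so the only thing that changes on the client side is the port in the connection string.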