Docker on Bellman
I migrated services into Docker containers on Bellman, including UniFi, ownCloud, MariaDB, and a media server.
Docker ArcGIS Enterprise
The Docker project I am working on right now is putting ArcGIS Enterprise into containers. You can find out more in this GitHub project.
General information
Docker is a container platform.
About a month ago you could not find it on docker.com, but that has changed. I used to say "go to GitHub", but now when you check GitHub you will find information about a new project called Moby, which I know very little about, so go to the docker.com site instead and look for the Community Edition. For Debian, try https://www.docker.com/docker-debian
Don't set up PostgreSQL in a Docker container for production; it's not recommended. But it works for what I want to do: allow one or two people to look at and edit spatial data.
I want to try out Solr (and Tomcat) but don't want to go through the whole installation process; I just want it all running right now. So I am going to cut to the chase and load a pre-built image.
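For instance, a minimal sketch using the official Solr image from Docker Hub (8983 is Solr's standard port; the container name is arbitrary):

docker run -d -p 8983:8983 --name my_solr solr

Then browse to http://localhost:8983 to reach the Solr admin console.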
Running Java servlets can be a pain because of the requirement to set up Tomcat, and often the apps require specific versions of Tomcat and Java, so running the whole thing in Docker should be easier.
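The same shortcut works for Tomcat: the official image bundles a matching Java, so pinning an image tag pins both versions (the tag here is just an illustration):

docker run -d -p 8080:8080 tomcat:8.0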
I left my notes on ownCloud down below; I wanted to put it on the Synology NAS. It's not a priority right now since I have switched to using Synology Cloud Station.
I also want to set up containers for Geonode and Geoserver. I know how fiddly Geoserver can be (it runs in Tomcat, too), so I want to isolate it from the Debian host it will run on.
So, in preparation for using Docker, I removed PostgreSQL and Tomcat from my host server. Take that!
Development on Mac
Install Docker for Mac (unless your Mac is too old, boo hoo, like Stellar) or Kitematic. Kitematic installs VirtualBox; Docker for Mac is lighter weight. If you already use Vagrant (for MySQL, for example), then you already need VirtualBox, so Kitematic is fine.
Asking to download Kitematic takes me to the Docker Toolbox download page; the Docker Toolbox includes Kitematic, which installs VirtualBox if you don't already have it.
After installing the prerequisites, the Docker Toolbox installer took me to the Docker warehouse, where I selected the "official" Solr container and started it, all in one go, without actually needing to know anything at all about Docker or VirtualBox. I now have a running Solr instance.
Deployment on Linux
For Debian, make sure you are using the docker.com docker-ce package, not docker-engine.
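A quick way to check which package is actually installed on a Debian/Ubuntu box:

dpkg -l | grep -i docker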
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-storage-driver-option
I am using btrfs as a compromise between speed and stability.
My /etc/docker/daemon.json file on laysan contains the following; it works on Ubuntu but not on Debian Jessie:
{ "storage-driver":"btrfs", "graph":"/volumes/ssd_machines/docker", "dns":["192.168.123.2"], "dns-search":["arcgis.net"] }
See https://docs.docker.com/v1.11/engine/reference/commandline/daemon/#daemon-configuration-file
Just a quick test
You should now be able to download and run a pre-built image from the Docker hub, get a shell prompt, and quit. After you exit from the shell, you can see that the container was running with the "docker ps -a" command.
# Download the Ubuntu Server LTS (Long Term Support) release
docker pull ubuntu:16.04
# Run it interactively with a shell prompt, so we can look around
docker run -it ubuntu /bin/bash
root@a6f99cc58685:/# exit
# Back on the host, confirm the container ran
docker ps -a
CONTAINER ID   IMAGE    COMMAND       CREATED          STATUS                      PORTS   NAMES
a6f99cc58685   ubuntu   "/bin/bash"   21 seconds ago   Exited (0) 10 seconds ago           furious_kowalevski
Building my Geoserver Docker images
PostGIS
- Create a directory: cd ~/Projects/docker-postgis
- Put a Dockerfile in it: emacs Dockerfile (a sketch appears after this list)
- Use a VOLUME directive to tell it we want to persist the data on the host, not in the container.
- Put any other files needed in there, e.g.: emacs initdb.sh
- Build the image: docker build -t wildsong/postgis .
- Run it: docker run --name=postgis -p 5432:5432 --net=pg -d wildsong/postgis
- Connect to it to test it: psql -U postgres -h localhost
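For reference, here is a minimal sketch of the kind of Dockerfile involved; the base image tag and the PostGIS package name are assumptions, not my exact file:

# Build on the official postgres image, which is Debian-based
FROM postgres:9.6
# Add PostGIS on top (package name assumed; the PGDG apt repo is preconfigured in the base image)
RUN apt-get update && \
    apt-get install -y --no-install-recommends postgresql-9.6-postgis-2.3 && \
    rm -rf /var/lib/apt/lists/*
# Scripts dropped in this directory run automatically on first startup
COPY initdb.sh /docker-entrypoint-initdb.d/initdb.sh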
In this case I have mapped the standard Postgres port to the same port on the host; I can get away with this because there is no instance of Postgres already running here. Then I can connect to it as if it were running locally with the psql command.
From the psql prompt, I can check how the PostGIS setup went with:
\c geonode_imports
select postgis_full_version();
POSTGIS="2.3.1 r15264" GEOS="3.4.2-CAPI-1.8.2 r3921" PROJ="Rel. 4.8.0, 6 March 2012" GDAL="GDAL 1.10.1, released 2013/08/26" LIBXML="2.9.1" LIBJSON="0.11.99" RASTER
This looks reasonably complete and reasonably up to date. (Hmph, GDAL is a bit out of date, version 2 is out.)
You can run an interactive shell and look around.
docker exec -it postgis /bin/bash
Geoserver
I marched through the same procedure to create a Geoserver docker, and when I start it I tell it to publish port 8080.
docker run -t -p 8080:8080 geoceg/geoserver
Geonode
Plumbing the network
Create a network to hook together the geoserver and postgis containers:
docker network create pg
docker run --name postgis --net=pg -p 5432 -d wildsong/postgis
docker run --name geoserver --net=pg -p 8888:8080 -d wildsong/geoserver
docker network inspect pg
The two containers can see each other now with the names 'postgis' and 'geoserver' resolving correctly. The 'inspect' command shows the two containers are connected to the pg network.
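A quick way to confirm the name resolution from inside a container (assuming the image has getent, which Debian-based images like the official postgres image do):

docker exec postgis getent hosts geoserver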
Persisting the data
Add to the Dockerfile for PostGIS, at the end:
ENV PGDATA /pgdata
VOLUME ${PGDATA}
The ENV statement tells Postgres to keep all its files in a place called /pgdata. The VOLUME command allows you to tie that place, /pgdata, to a spot on your host computer when you run the container. Change the run command by adding a mapping between the host and /pgdata, for example,
docker run --name postgis -v /home/myfavoriteplace/pgdata:/pgdata/ etc
will cause the files to be written to /home/myfavoriteplace/pgdata. This gives you the flexibility to use the same Docker image on different computers, keeping the data stored in the local filesystem, but in a different place on each computer.
Dealing with passwords
I don't want passwords to be coded into my Docker images, so how do I get them out of there? I know I am supposed to put them into environment settings, but then how do I load the environment?
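One approach, sketched here on the assumption that the image reads its password from an environment variable (the official postgres image, for example, reads POSTGRES_PASSWORD; the filename is made up): keep the values in a file that stays on the host and pass it at run time with --env-file.

# secrets.env lives on the host and never gets baked into the image
echo "POSTGRES_PASSWORD=changeme" > secrets.env
docker run --name postgis --env-file secrets.env -p 5432:5432 -d wildsong/postgis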
Getting a custom docker onto a Synology
I can do this by pushing my custom-built Docker image to Docker Hub and then pulling it onto the Synology. I have set up both an organizational account and a personal account; I log in with my personal credentials and then push to the organizational account.
docker login -u=brian32768 -p=password
docker push wildsong/postgis
Once the image is pushed, I should be able to pull it onto the Synology. Passing everything through a repository some thousands of miles away might not work for you; you could set up your own registry. The command line was easiest for me:
sudo docker pull wildsong/postgis
I used the web interface to create a container and launch it once the image was copied to the server.
Docker Hub
apt-get install docker-ce
Shortcuts
See all images available on this machine:
docker images
See all running containers and get an id:
docker ps
See all containers, running or not:
docker ps -a
Look at a container in detail:
docker inspect id
See also /var/lib/docker/containers/id
Docker networking
Check ifconfig docker0 to see what the bridge address is on your Docker server, then as needed create a route on your router so that traffic can flow from your local computers into the Docker instances running on the Docker server.
If you have more than one Docker server, you have to make sure they don't both pick the same network IP range. If routes are already set up in the router, Docker should be able to find an unused subnet.
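To pin each server's bridge to a known, non-overlapping range instead of relying on autodetection, daemon.json accepts a "bip" (bridge IP) setting; the address here is a made-up example, and this can sit alongside the storage-driver and dns settings shown earlier:

{
  "bip": "172.26.0.1/16"
}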
docker pull nginx
cat <<EOF > nginx.docker
FROM nginx
COPY static-html-directory /usr/share/nginx/html
EOF
Place this file in the same directory as your directory of content ("static-html-directory"), build it with docker build -f nginx.docker -t some-content-nginx ., then start your container:
docker run --name some-nginx -d some-content-nginx
Docker Compose
My big problem is that the path for a volume has a space in it, and that blows Docker Compose out of the water.
/c/Program Files/
It sees two arguments, with the space as a delimiter.
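For reference, a reconstruction of the kind of entry that fails; the service name comes from the error output below, but the image and the mount target are guesses:

version: "2"
services:
  server:
    image: arcgis-server   # hypothetical image name
    volumes:
      # the space in "Program Files" is what gets split
      - "/c/Program Files/ESRI/License10.5/sysgen:/sysgen"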
I am trying to find out where it's happening in Compose; I tried all the variations I could think of to quote it. Here is the debug output:
compose.parallel.feed_queue: Pending: set([])
compose.parallel.parallel_execute_iter: Finished processing: <Service: datastore>
compose.parallel.feed_queue: Pending: set([])
ERROR: for server  Cannot create container for service server: Invalid bind mount spec "39aab1cb131c79756827d7d4c41fb05dde57bd451011592106e0a0db0aeeb708:Files/ESRI/License10.5/sysgen]:rw": Invalid volume destination path: 'Files/ESRI/License10.5/sysgen]' mount path must be absolute.
ERROR: compose.cli.main.main: Encountered errors while bringing up the project.