Docker
Docker is a container platform.
It's open source, but you would not know it from the website, which is mostly about pricing. After looking at docker.com, go to https://github.com/docker/docker and look for installation instructions there.
I am following their instructions to install on a Synology NAS and on two Debian systems, one co-located and one at home.
Don't do this for production. Setting up a database in a Docker container is not recommended, but it works for what I want to do: allow one or two people to look at and edit spatial data.
I want to try out Solr (and Tomcat) but don't want to go through the whole installation thing, I just want it all running right now. So I am going to cut to the chase and load a pre-built docker.
Running Java servlets can be a pain because of the requirement to set up Tomcat, and often the apps require a specific version of Tomcat and Java. So running the whole thing in Docker should be easier.
I left my notes on ownCloud down below; I wanted to put it on the Synology NAS. It's not a priority right now since I have switched to using Synology Cloud Station.
I also want to set up containers for Geonode and Geoserver. I know how fiddly Geoserver can be (it runs in Tomcat, too) and so I want to isolate it from the Debian host it will run on.
I already have PostgreSQL 9.5 running directly on the host, but my only intended use is as a backend for PostGIS/Geoserver, so I will probably follow along with the instructions here and containerize it too.
So in preparation for using Docker, I removed PostgreSQL and Tomcat from my host server. Take that!
For starters with Geoserver, see Orchestrating Geoserver with Docker and Fig. I have no idea what Fig is yet. Sounds tasty.
Development on Mac
Install Docker for Mac (unless your Mac is too old, boo hoo, like Stellar) or Kitematic. Kitematic installs VirtualBox; Docker for Mac is lighter weight. If you already use Vagrant (for MySQL, for example) then you already need VirtualBox, so Kitematic is fine.
Asking to download Kitematic takes me to the Docker Toolbox download page. The Docker Toolbox includes Kitematic. Kitematic installs VirtualBox if you don't already have it.
After installing the prerequisites, the Docker Toolbox installer took me to the Docker warehouse, where I selected the "official" Solr container and started it, all in one go, without actually needing to know anything at all about Docker or VirtualBox. I now have a running Solr instance.
Building my Geoserver Docker images
PostGIS
- Create a directory: cd ~/Projects/docker-postgis
- Put a Dockerfile in it: emacs Dockerfile (a rough sketch of what it might contain follows this list)
- Use a VOLUME directive to tell it we want to persist the data on the host, not in the container.
- Put other files needed in there, e.g. emacs initdb.sh
- Build the Docker image: docker build -t wildsong/postgis .
- Run it: docker run --name=postgis -p 5432:5432 --net=pg -d wildsong/postgis
- Connect to it to test it: psql -U postgres -h localhost
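I have not copied my actual Dockerfile into these notes. As a rough sketch, assuming it builds on the official postgres image (which runs any script placed in /docker-entrypoint-initdb.d the first time the data directory is created), it might look something like this; the exact PostGIS package names are an assumption:
FROM postgres:9.5
# PostGIS packages from the base image's apt sources; adjust names to match your base image
RUN apt-get update && apt-get install -y --no-install-recommends \
        postgis postgresql-9.5-postgis-2.3 \
    && rm -rf /var/lib/apt/lists/*
# the official postgres image runs this on first startup to create/enable the extensions
COPY initdb.sh /docker-entrypoint-initdb.d/initdb.sh
The ENV and VOLUME lines described under "Persisting the data" below would go at the end of the same file.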
In this case I have mapped the standard postgres port to the same port on the host; I can get away with this because there is no instance of Postgres already running here. Then I can connect to it as if it were running locally with the psql command.
From the psql prompt, I can check how the PostGIS setup went with:
\c geonode_imports
select postgis_full_version();
POSTGIS="2.3.1 r15264" GEOS="3.4.2-CAPI-1.8.2 r3921" PROJ="Rel. 4.8.0, 6 March 2012" GDAL="GDAL 1.10.1, released 2013/08/26" LIBXML="2.9.1" LIBJSON="0.11.99" RASTER
This looks reasonably complete and reasonably up to date.
Where does PostgreSQL keep its data in the Docker container? By default it's putting everything in /var/lib/postgresql/data
You can run an interactive shell and look around.
docker exec -it container /bin/bash
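For example, assuming the container named postgis from above, either of these should show where the data lives (SHOW data_directory asks Postgres itself):
docker exec -it postgis ls /var/lib/postgresql/data
docker exec -it postgis psql -U postgres -c 'SHOW data_directory;'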
I marched through the same procedure to create a Geoserver docker, and when I start it I tell it to export port 8080.
docker run -t -p 8080 wildsong/geoserver
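Since no host port is given there, Docker picks an ephemeral one; docker port reports which one it chose, using the container id from docker ps:
docker ps
docker port id 8080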
Plumbing the network
Create a network to hook together the geoserver and postgis containers:
docker network create pg
docker run --name postgis --net=pg -p 5432 -d wildsong/postgis
docker run --name geoserver --net=pg -p 8888:8080 -d wildsong/geoserver
docker network inspect pg
The two containers can see each other now with the names 'postgis' and 'geoserver' resolving correctly. The 'inspect' command shows the two containers are connected to the pg network.
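A quick way to confirm the name resolution from inside a container, assuming the container names above (getent is usually present in Debian-based images, which is an assumption about these two; ping often is not installed):
docker exec -it geoserver getent hosts postgis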
Persisting the data
Add to the Dockerfile for PostGIS, at the end:
ENV PGDATA /pgdata
VOLUME ${PGDATA}
The ENV statement will tell Postgres you want to keep all its files in a place called /pgdata. The VOLUME command allows you to tie that place "/pgdata" to a spot on your host computer when you run the container. Change the run command by adding a mapping between the host and /pgdata/, for example,
docker run --name postgis -v /home/myfavoriteplace/pgdata:/pgdata/ etc
will cause the files to be written to /home/myfavoriteplace/pgdata. This gives you the flexibility to use the same Docker image on different computers, keeping the data stored in the local filesystem, but in a different place on each computer.
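To convince myself the data really survives the container, I can throw the container away and start a fresh one against the same host directory; a sketch using the example path above:
docker rm -f postgis
docker run --name postgis --net=pg -p 5432:5432 -v /home/myfavoriteplace/pgdata:/pgdata -d wildsong/postgis
psql -U postgres -h localhost -c '\l'
Any databases created before the rm should still show up in the list.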
Dealing with passwords
I don't want passwords to be coded into my Docker images, so how do I get them out of there? I know I am supposed to put them into environment settings but then how do I load the environment?
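One approach (untested here) is to keep the secrets in a file on the host that never gets baked into the image and hand it to docker run at startup. POSTGRES_PASSWORD is the variable the official postgres image looks at; the file name and location are just an example:
cat > ~/postgis.env <<EOF
POSTGRES_PASSWORD=supersecret
EOF
chmod 600 ~/postgis.env
docker run --name=postgis --net=pg -p 5432:5432 --env-file ~/postgis.env -d wildsong/postgis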
Getting a custom docker onto a Synology
I can do this by pushing my custom built Docker image into the Docker Hub and then pull it onto the Synology. I have set up both an organizational account and a personal account. I log in with my personal credentials and then push to the organizational account.
docker login -u=brian32768 -p=password
docker push wildsong/postgis
Once the image is pushed I should be able to pull it onto the Synology. Passing everything through a repository thousands of miles away might not work for you; you could set up your own registry instead. Doing the pull from the command line was easiest for me:
sudo docker pull wildsong/postgis
I used the web interface to create a container and launch it once the image was copied to the server.
Docker Hub
apt-get install docker-engine
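A quick way to verify the engine is installed and can reach Docker Hub is the usual smoke test:
sudo docker run hello-world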
Metadata
Get an id:
docker ps
Look at a container in detail:
docker inspect id
See also /var/lib/docker/containers/id
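inspect dumps a big blob of JSON; a Go template pulls out a single field, for example the container's address on the default bridge network (substitute a real id from docker ps):
docker inspect -f '{{ .NetworkSettings.IPAddress }}' id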
Docker networking
Check ifconfig docker0 to see what the bridge address is on your Docker server, then as needed create a route on your router so that traffic can flow from your local computers into the Docker instances running on the Docker server.
If you have more than one Docker server, you have to make sure they don't both pick the same IP range. If routes are already set up in the router, Docker should be able to find an unused subnet.
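If you would rather control the range yourself than let Docker pick one, a user-defined network can be created with an explicit subnet; the range here is only an example, pick one that does not collide with anything on your LAN:
docker network create --subnet=172.25.0.0/16 pg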
docker pull nginx
cd static-html-directory
cat <<EOF > nginx.docker
FROM nginx
COPY static-html-directory /usr/share/nginx/html
EOF
Place this file in the same directory as your directory of content ("static-html-directory"), run docker build -t some-content-nginx -f nginx.docker ., then start your container:
docker run --name some-nginx -d some-content-nginx
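As written the container publishes no ports, so it is only reachable from other containers; to poke at it from the host I would also publish one, something like:
docker run --name some-nginx -d -p 8080:80 some-content-nginx
curl http://localhost:8080/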
Owncloud
Set up a Vagrant machine to run an instance of MySQL. Make sure it has port 3306 open and set a root password.
I want to run Owncloud, so I need these components
- Nginx
- php 7 vinaocruz/php7-fpm-mysql
- Redis redis
- Owncloud (Owncloud is just some files so it does not need a separate docker container)
- MySQL (which runs on the host not in Docker)
Nginx + PHP
I need to pick an approach and then have to pick some images. I could use the generic nginx image and a separate php image, but then I'd have to get them to work together. Instead I am trying to find a good combined nginx + php image. Many images on Docker Hub have no useful documentation, which makes them hard to evaluate. Look for ones with stars; if other people like them they are probably better. maxexcloo/nginx-php seems to have a pretty descriptive doc page.
Set up a web server, copying the files I want to serve into it and mapping it to port 80 on the host. This works as long as there is not already a web server on the host.
Basic server:
docker run --name="debian" -it maxexcloo/debian bash docker run --name="nginx" -d -p 80:80 maxexcloo/nginx-php
mkdir owncloud-server
cd owncloud-server
unzip Downloads/owncloud*.zip
cat >Dockerfile <<EOF2
FROM virtualgarden/nginx-php7
COPY owncloud /app
EOF2
Building this image will copy all the owncloud files into the container. You have to set up the owncloud configuration before doing this step.
docker build --label owncloud_server --tag owncloud_server .
An instance started from this image will already have all the files.
docker run -d --name owncloud -p 80:80 -p 443:443 owncloud_server
Then open up http://localhost/ and learn about all the missing PHP modules, which include:
- zip
- Dom
- XMLReader
- libxml
- ctype
- zlib
Redis
Used by owncloud (via PHP) and by nginx. It needs some file space. Make sure you get a new enough Redis to work with owncloud.
docker run --name redis-owncloud -d redis redis-server --appendonly yes
I need to learn more about persistent data.
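My guess at the persistent version, based on the official redis image keeping its files under /data (the host path is just an example):
docker run --name redis-owncloud -v /srv/redis-owncloud:/data -d redis redis-server --appendonly yes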
Linking it all together
Server: MySQL. Create an owncloud database, with a user owncloud allowed full access to the database. Make sure that the Nginx server can connect to it through the private LAN. (I run MySQL on the Docker host.)
Source: Redis. Start the Redis docker as a source.
Source: PHP7. Start the FPM PHP7 docker as a source.
Receiver: Nginx. Start the Nginx docker as a receiver for the services from Redis and PHP7. Add environment settings to tell it how to find MySQL. (A rough sketch of the commands follows.)
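A sketch of how I expect the wiring to look, reusing the user-defined network trick from the PostGIS/Geoserver section. The mysql grant and the DB_* environment variable names are placeholders; the real variable names depend on what the nginx-php image actually expects:
mysql -u root -p -e "CREATE DATABASE owncloud; GRANT ALL ON owncloud.* TO 'owncloud'@'%' IDENTIFIED BY 'secret';"
docker network create owncloud
docker run --name redis-owncloud --net=owncloud -d redis redis-server --appendonly yes
docker run --name nginx --net=owncloud -p 80:80 -e DB_HOST=<docker host LAN address> -e DB_NAME=owncloud -e DB_USER=owncloud -e DB_PASSWORD=secret -d maxexcloo/nginx-php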