How to deal with persistent storage (e.g. databases) in docker

juwalter Source

How do people deal with persistent storage for their Docker containers? I am currently using this approach: build the image, e.g. for Postgres, and then start the container with

docker run --volumes-from c0dbc34fd631 -d app_name/postgres

IMHO, that has the drawback that I must never (even by accident) delete container "c0dbc34fd631".

Another idea would be to mount host volumes with "-v" into the container; however, the user id within the container does not necessarily match the user id on the host, and then permissions might get messed up.

Note: Instead of --volumes-from 'cryptic_id' you can also use --volumes-from my-data-container where my-data-container is a name you assigned to a data-only container, e.g. docker run --name my-data-container ... (see accepted answer)



answered 5 years ago Tim Dorr #1

While this is still a part of Docker that needs some work, you should put the volume in the Dockerfile with the VOLUME instruction so you don't need to copy the volumes from another container. That will make your containers less interdependent, and you won't have to worry about the deletion of one container affecting another.
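As a minimal sketch of that suggestion (the base image and data path here are illustrative, assuming a Postgres image):

```dockerfile
# Hypothetical Dockerfile sketch: declaring the data directory as a
# volume means Docker keeps it outside the container's writable layer
FROM postgres:9.4
VOLUME /var/lib/postgresql/data
```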

answered 4 years ago ben schwartz #2

Depends on your scenario (this isn't really suitable for a prod environment), but here is one way:

The gist of it is: use a directory on your host for data persistence.
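A minimal sketch of that approach, with an illustrative host path and image tag:

```shell
# Bind-mount a host directory into the container's data path
# (the host path /srv/docker/postgres-data is an assumption)
docker run -d \
  -v /srv/docker/postgres-data:/var/lib/postgresql/data \
  postgres:9.4
```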

answered 4 years ago tommasop #3

Docker 1.9.0 and above

Use the volume API

docker volume create --name hello
docker run -d -v hello:/container/path/for/volume container_image my_command

This means that the data-only container pattern must be abandoned in favour of the new volumes.

Actually, the volume API is just a better way to achieve what was the data-container pattern.

If you create a container with a -v volume_name:/container/fs/path, Docker will automatically create a named volume for you that can:

  1. Be listed through docker volume ls
  2. Be identified through docker volume inspect volume_name
  3. Be backed up as a normal directory
  4. Be backed up as before through a --volumes-from connection
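For point 3, a named volume can be backed up like a normal directory by mounting it into a throwaway container (the volume and archive names here are illustrative):

```shell
# Mount the named volume and the current directory,
# then tar the volume's contents out to the host
docker run --rm \
  -v hello:/data \
  -v $(pwd):/backup \
  busybox tar cvf /backup/hello-backup.tar /data
```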

The new volume API adds a useful command that lets you identify dangling volumes:

docker volume ls -f dangling=true

And then remove a volume through its name:

docker volume rm <volume name>

As @mpugach underlines in the comments, you can get rid of all the dangling volumes with a nice one-liner:

docker volume rm $(docker volume ls -f dangling=true -q)
# or using 1.13.x
docker volume prune

Docker 1.8.x and below

The approach that seems to work best for production is to use a data-only container.

The data-only container is run on a bare-bones image and actually does nothing except expose a data volume.
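Creating such a data-only container is a one-liner; the names here are illustrative:

```shell
# The container exits immediately (true), but its /data volume
# remains available to other containers via --volumes-from
docker run --name data-container -v /data busybox true
```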

Then you can run any other container to have access to the data container volumes:

docker run --volumes-from data-container some-other-container command-to-execute
  • Here you can get a good picture of how to arrange the different containers
  • Here there is a good insight on how volumes work

In this blog post there is a good description of the so-called container-as-volume pattern, which clarifies the main point of having data-only containers.

The Docker documentation now has the DEFINITIVE description of the container-as-volume(s) pattern.

The following is the backup/restore procedure for Docker 1.8.x and below.


sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
  • --rm: remove the container when it exits
  • --volumes-from DATA: attach to the volumes shared by the DATA container
  • -v $(pwd):/backup: bind-mount the current directory into the container, to write the tar file to
  • busybox: a small, simple image - good for quick maintenance
  • tar cvf /backup/backup.tar /data: creates an uncompressed tar file of all the files in the /data directory


# create a new data container
$ sudo docker run -v /data --name DATA2 busybox true
# untar the backup files into the new container's data volume
$ sudo docker run --rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
# compare to the original container
$ sudo docker run --rm --volumes-from DATA -v `pwd`:/backup busybox ls /data

Here is a nice article from the excellent Brian Goff explaining why it is good to use the same image for a container and a data container.

answered 3 years ago amitmula #4

In Docker release v1.0, binding a mount of a file or directory on the host machine can be done with the following command:

$ docker run -v /host:/container ...

The above volume can be used as persistent storage on the host running Docker.

answered 3 years ago Raman #5

@tommasop's answer is good, and explains some of the mechanics of using data-only containers. But as someone who initially thought that data containers were silly when one could just bind-mount a volume to the host (as suggested by several other answers), and who now realizes that data-only containers are in fact pretty neat, I can suggest my own blog post on this topic:

See also: my answer to the question "What is the (best) way to manage permissions for docker shared volumes" for an example of how to use data containers to avoid problems like permissions and uid/gid mapping with the host.

To address one of the OP's original concerns: that the data container must not be deleted. Even if the data container is deleted, the data itself will not be lost as long as any container has a reference to that volume, i.e. any container that mounted the volume via --volumes-from. So unless all the related containers are stopped and deleted (one could consider this the equivalent of an accidental rm -fr /), the data is safe. You can always recreate the data container by doing --volumes-from from any container that has a reference to that volume.
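As a sketch of that recovery step (the container names here are illustrative):

```shell
# DATA was accidentally deleted, but app-container still references
# its volumes; recreate a data container from the surviving reference
docker run --name DATA2 --volumes-from app-container busybox true
```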

As always, make backups though!

UPDATE: Docker now has volumes that can be managed independently of containers, which further makes this easier to manage.

answered 3 years ago Johann Romefort #6

If you want to move your volumes around you should also look at Flocker.

From the README:

Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux. This means that you can run your databases, queues and key-value stores in Docker and move them around as easily as the rest of your app.

answered 3 years ago slth #7

I recently wrote about a potential solution and an application demonstrating the technique. I find it to be pretty efficient during development and in production. Hope it helps or sparks some ideas.


answered 2 years ago Lanti #8

My solution is to make use of the new docker cp, which is now able to copy data out of containers, no matter whether they are running or not, and to share a host volume at the exact same location where the database application creates its database files inside the container. This double solution works without a data-only container, straight from the original database container.

So my systemd init script takes care of backing up the database into an archive on the host. I placed a timestamp in the filename so a file is never overwritten.

It does this on ExecStartPre:

ExecStartPre=-/usr/bin/docker cp lanti-debian-mariadb:/var/lib/mysql /home/core/sql
ExecStartPre=-/bin/bash -c '/usr/bin/tar -zcvf /home/core/sql/sqlbackup_$$(date +%%Y-%%m-%%d_%%H-%%M-%%S)_ExecStartPre.tar.gz /home/core/sql/mysql --remove-files'

And it does the same thing on ExecStopPost too:

ExecStopPost=-/usr/bin/docker cp lanti-debian-mariadb:/var/lib/mysql /home/core/sql
ExecStopPost=-/bin/bash -c 'tar -zcvf /home/core/sql/sqlbackup_$$(date +%%Y-%%m-%%d_%%H-%%M-%%S)_ExecStopPost.tar.gz /home/core/sql/mysql --remove-files'

Plus I exposed a folder from the host as a volume at the exact same location where the database is stored:

  build: ./mariadb
  volumes:
    - $HOME/server/mysql/:/var/lib/mysql/:rw

It works great on my VM (I'm building a LEMP stack for myself):

But I just don't know whether it is a "bulletproof" solution when your life actually depends on it (for example, a webshop with transactions at any possible millisecond)?

At 20:20 in this official Docker keynote video, the presenter does the same thing with the db:

"For the database we have a volume, so we can make sure that, as the database goes up and down, we don't lose data when the database container is stopped."

answered 2 years ago ben_frankly #9

In case it is not clear from Update 5 of the selected answer, as of Docker 1.9, you can create volumes that can exist without being associated with a specific container, thus making the "data-only container" pattern obsolete.


I think the Docker maintainers realized the data-only container pattern was a bit of a design smell and decided to make volumes a separate entity that can exist without an associated container.

answered 2 years ago Alen Komljen #10

I'm just using a predefined directory on the host to persist data for Postgres. Also, this way it is possible to migrate existing Postgres installations to Docker containers easily:
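A hedged sketch of what that can look like (the host path, container name, and image tag are assumptions):

```shell
# Point the container at a pre-existing Postgres data directory
# on the host, so existing data is reused as-is
docker run -d --name postgres \
  -v /data/postgres:/var/lib/postgresql/data \
  postgres:9.4
```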

answered 2 years ago toast38coza #11

As of docker-compose 1.6, there is now improved support for data volumes in Docker Compose. The following compose file will create a data volume which will persist between restarts (or even removal) of the parent containers:

Here is the blog announcement:

Here's an example compose file:

version: "2"

services:
  db:
    restart: on-failure:10
    image: postgres:9.4
    volumes:
      - "db-data:/var/lib/postgresql/data"
  web:
    restart: on-failure:10
    build: .
    command: gunicorn mypythonapp.wsgi:application -b :8000 --reload
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db

volumes:
  db-data:

As far as I can understand: this will create a data volume (db-data) which will persist between restarts.

If you run docker volume ls you should see your volume listed:

DRIVER              VOLUME NAME
local               mypthonapp_db-data

You can get some more details about the data volume:

docker volume inspect mypthonapp_db-data
[
    {
        "Name": "mypthonapp_db-data",
        "Driver": "local",
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/mypthonapp_db-data/_data"
    }
]

Some testing:

# start the containers
docker-compose up -d
# .. input some data into the database
docker-compose run --rm web python migrate
docker-compose run --rm web python createsuperuser
# stop and remove the containers:
docker-compose stop
docker-compose rm -f

#start it back up again
docker-compose up -d

# verify the data is still there
(it is)

# stop and remove with the -v (volumes) tag:

docker-compose stop
docker-compose rm -f -v

# up again .. 
docker-compose up -d

# check the data is still there:
(it is). 


  • You can also specify various drivers in the volumes block. For example, you could specify the flocker driver for db-data:

        volumes:
          db-data:
            driver: flocker
  • As they improve the integration between Docker Swarm and Docker Compose (and possibly start integrating Flocker into the Docker ecosystem; I heard a rumor that Docker has bought Flocker), I think this approach should become increasingly powerful.

Disclaimer: This approach is promising, and I'm using it successfully in a development environment. I would be apprehensive about using this in production just yet!

answered 1 year ago Santanu Dey #12

Use a persistent volume claim from Kubernetes, which is a Docker container management and scheduling tool.

The advantages of using Kubernetes for this purpose are that:

  • You can use any storage, like NFS or others, and even when the node is down, the storage need not be.
  • Moreover, the data in such volumes can be configured to be retained even after the container itself is destroyed, so that it can be reclaimed, if necessary, by another container.
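A minimal PersistentVolumeClaim sketch (the claim name and size here are illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

A pod can then reference the claim by name in its volumes section, and the backing storage outlives the pod.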

answered 1 year ago Czar Pino #13

When using docker-compose, simply attach a named volume, e.g.

version: '2'
services:
  db:
    image: mysql:5.6
    volumes:
      - db_data:/var/lib/mysql:rw
volumes:
  db_data:

answered 1 year ago Will Stern #14

There are several levels of managing persistent data, depending on your needs:

  • Store it on your host
    • use the flag -v host-path:container-path to persist container directory data to a host directory
    • backups/restores happen by running a backup/restore container (such as tutumcloud/dockup) mounted to the same directory
  • Create a data container and mount its volumes to your app container
    • create a container that exports a data volume, use --volumes-from to mount that data into your app container
    • backup/restore the same as the above solution
  • Use a docker volume plugin that backs an external/third-party service
    • Docker volume plugins allow your datasource to come from anywhere - NFS, AWS (S3, EFS, EBS)
    • Depending on the plugin/service, you can attach single or multiple containers to a single volume
    • Depending on the service, backups/restores may be automated for you
    • While this can be cumbersome to do manually, some orchestration solutions - such as Rancher - have it baked in and simple to use
    • Convoy is the easiest solution for doing this manually
