Docker runs on Linux, Windows, or macOS, sometimes inside a VM, and the location of the log files that the system generates depends on which operating system you use. You can read through log files manually, use the log viewer that is built into the Docker environment, use a third-party tool to access and read the logs, or feed them into a SIEM or threat hunting package.
The Docker environment, or daemon, creates its own logs as well. So, to be clear, this guide is specifically concerned with the logs that relate to container activity.
Where are Docker container logs on Linux?
If you run Docker on a Linux machine, the Docker system sets up its directories under the /var directory. The exact name of the log file depends on which log driver you are using. By default, the Docker system uses json-file, which stores log messages in JavaScript Object Notation (JSON) format.
Each of the containers that Docker runs is given a unique ID and that identifier forms part of the log file name. If you haven’t changed the default driver, you will find the Docker logs in:
/var/lib/docker/containers/<containerID>/<containerID>-json.log
The name will be different if you change the log driver in your Docker system configuration because the “json” part will change.
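If you want to watch records arrive in the raw file, you can read it like any other text file. The line below is a minimal sketch that assumes the default json-file driver and root access to the Docker data directory; substitute a real ID for the placeholder:

sudo tail -f /var/lib/docker/containers/<containerID>/<containerID>-json.log

Each record in the file is a small JSON object holding the log message, the stream it was written to (stdout or stderr), and a timestamp.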
Where are Docker container logs on Windows?
Docker is written for Linux, but there is a built-in feature within the Windows operating system that lets Linux-based systems, such as Docker, communicate with the operating system. This is called the Windows Subsystem for Linux (WSL); the current version is version 2. Under this system, at the command prompt, you can type \\wsl$ to get to the root directory of the Windows Subsystem for Linux environment.
Docker container logs will be located under \\wsl$. The full path is:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\containers\<containerID>\<containerID>-json.log
Where are Docker container logs on macOS?
The macOS system is a derivative of Unix, the same as Linux, so at least with this operating system, you don’t have to switch the direction of the slashes in any path. The location of Docker container logs is the same as it is for Linux:
/var/lib/docker/containers/<containerID>/<containerID>-json.log
However, there is a catch. Docker automatically runs within a VM on macOS, so it is virtualization within virtualization. The setup of that VM is performed automatically, so you probably won’t be aware that it exists. The upshot of this configuration is that you can’t see the Docker container logs from the macOS operating system itself; you have to get inside the Docker for Mac VM first. Type:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
After that command, you will be able to access the Docker container logs with cat, cp, or whatever action you want to perform, by working in the container log directory explained above.
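As a sketch of that workflow, assuming the default json-file driver and a Docker Desktop installation that exposes the VM’s terminal at the path shown above, you could read the last few records of a container’s log like this (press Enter after the screen command if you don’t see a prompt straight away):

screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
tail -n 20 /var/lib/docker/containers/<containerID>/<containerID>-json.log

To leave the screen session without shutting it down, press Ctrl-A and then D.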
Docker container log driver details
While the location of the Docker container logs is always the same, the name will be slightly different if you change the log driver. So, for example, the “json” part of the log file name will be different if you don’t use the JSON File logging driver.
You can change the log driver in the system settings of your Docker implementation. To discover the current logging driver on Linux and macOS, or on Windows through PowerShell, use:
docker info --format '{{.LoggingDriver}}'
The above command will show you the current logging driver set in the configuration. Changing this value won’t alter the logging driver for containers that already exist. It will only influence the creation of new containers. This can make tracking your containers a little difficult because you can end up with containers having different logging standards. The only way around that is to drop and then recreate all existing containers after you change the logging standard.
You can check on the allocated logging driver for each container with the following command:
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <containerID>
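If you want to check several containers at once, you can feed the output of docker ps into the same inspect command. This is just a sketch that assumes a Unix-style shell with xargs available; it prints each running container’s name alongside its log driver:

docker ps -q | xargs -I{} docker inspect -f '{{.Name}}: {{.HostConfig.LogConfig.Type}}' {}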
It might be that your logging strategy requires different treatments for groups of containers. In such a scenario, it could be useful to change the default log driver before creating a new group of containers. It is also possible to create a new container with a log driver other than the default. This topic is covered further down in this guide.
Changing the default Docker container log driver
The Docker environment includes a library of log drivers. You can easily allocate a new standard by selecting a driver, specifying that you don’t want logging, or creating a customized standard. Your options are:
- none: Do not store logs
- local: A custom format
- json-file: JSON format, which is the default
- syslog: Syslog format, which is a good option for feeding into SIEMs
- journald: Uses the journald system (remember to set up the daemon)
- gelf: Graylog Extended Log Format, used to feed into Graylog or Logstash
- fluentd: Forward input fluentd format (be sure to start the fluentd daemon)
- awslogs: For Amazon CloudWatch Logs
- splunk: Feeds into Splunk via the HTTP Event Collector
- etwlogs: Event Tracing for Windows (Windows platform only)
- gcplogs: Google Cloud Platform format
- logentries: Rapid7 proprietary format
You can find the official definitions of these drivers at the Docker documentation site.
Set the log driver by writing a line in the Docker daemon configuration file. If you just want to stick with the JSON format, don’t do anything.
On Linux, you will find the daemon configuration file at /etc/docker/daemon.json.
On macOS, use the screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty command to get inside the Docker VM and access the daemon configuration file, which is in the same path as that used on Linux.
On Windows, the file is at C:\ProgramData\Docker\config\daemon.json.
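Note that edits to daemon.json only take effect after the Docker daemon restarts. On a Linux host where Docker is managed by systemd, the restart would look like the line below; with Docker Desktop on Windows or macOS, restarting the Docker Desktop application does the same job:

sudo systemctl restart docker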
The standard configuration file for the Docker daemon has options that are written in alphabetical order, which makes the lines relating to container log files easy to find. A typical set of logging settings looks like this:
"log-driver": "json-file", "log-level": "", "log-opts": { "cache-disabled": "false", "cache-max-file": "5", "cache-max-size": "20m", "cache-compress": "true", "compress": "true", "env": "os,customer", "labels": "somelabel", "max-file": "5", "max-size": "10m" },
The available log options, written in the log-opts section, are not the same for all log drivers. See the Docker documentation page for your chosen log driver for more details. You don’t have to include the log-opts section.
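Bear in mind that the snippet above is only a fragment: daemon.json is a single JSON object, so the logging keys sit alongside whatever other daemon settings you use. A minimal sketch of a complete file that keeps the default json-file driver and switches on log rotation (covered in the next section) might look like this; the values are examples, not requirements:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}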
Docker container log rotation
You don’t want Docker to write logs to one big never-ending file, so you need to set up log rotation. This will also enable you to move files elsewhere, perhaps to a cloud repository, and clear space on your server.
The log-opts section of the daemon.json file lets you specify how big a log file can get and how many files can be current. In the configuration list shown in the previous section, you will see these lines:
"max-file": "5", "max-size": "10m"
These two lines tell Docker to start a new file when the current file gets to 10MB in size. If writing out a new message will take the file over that limit, the daemon will write it in the current file anyway, so the file size could be a little larger than that size.
The above example allows Docker to create five files. Remember, these settings refer to each container, so that is five log files per container.
The older log files are renamed so that the current file is always <containerID>-json.log (when the json-file log driver is active). The next newest log file has the name <containerID>-json.log.1, the next newest is <containerID>-json.log.2, then <containerID>-json.log.3, and then <containerID>-json.log.4.
As the plain .log file is always present, the highest number on the end of a rotated log file will be one less than the maximum number of log files you specified in daemon.json. The numeric extensions get bumped, with the oldest stored file getting overwritten every time the log rotates.
The example above shows a value of 10m for the maximum log file size. That “m” stands for MB. You can also use “k”, which means KB, and “g”, which means GB. The following values are all valid:
"max-size": "20m"
"max-size": "500k"
"max-size": "1g"
If you leave out the max-size option, you will get the default, which is “-1”. That value signals to the daemon that the log file will be never-ending. In that case, it doesn’t matter what value you specify for max-file because the current file will never hit a maximum size; you will only get one log file, which will have the name <containerID>-json.log. So, if there is no max-size line in the daemon.json file, you don’t need a max-file line.
The default value for max-file is “1”, so if you don’t write a max-file specification, you will only have one log file present per container. That doesn’t mean that file will grow to infinite size. If you have a max-size value set and no max-file value, the daemon will wipe the log file and start again once the file reaches the specified size.
There are several other values possible in the log-opts section, but we will just look at one more, which is compress. This specifies whether the older logs (that is, the files with the extensions .log.1, .log.2, .log.3, and so on) should be compressed.
The compression never applies to the currently open .log file. You can specify “true” or “false” for this value. However, the default is to disable compression, so there is no point in putting in a compress statement with a false value; just leave the compress option out.
So, if this line exists, its value will always be:
"compress": "true"
It is probably better not to bother with compression. If space is a problem, then you should just reduce the number of log files possible and move files into backup storage as soon as they are rotated.
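Whichever rotation options you settle on, you can confirm what a given container actually received, because the full log configuration is recorded in the container’s metadata. The docker inspect format string below prints it as JSON; the container ID is a placeholder:

docker inspect -f '{{json .HostConfig.LogConfig}}' <containerID>

The output shows the driver under Type and any max-size, max-file, or compress settings under Config.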
Specifying log conditions at container creation
All of the above tips about specifying a default log strategy in the daemon.json file can be overridden by specifying log requirements when you create a container. A container gets created with the docker container create or docker run command. The format for the log driver specification is the same in both cases. In this context, you don’t put quotes around values. From here on, we will refer to the docker run command, but all tips apply equally to the docker container create command.
You can add conditions and options to a docker run command, and these have a double-dash in front of them. The log driver specification and any of the log options allowed for your chosen log driver can be added to the docker run command. In this case, those logging requirements will only apply to the container that is being set up. They will not change the log conditions of previously created containers that are currently active, and they won’t apply to any of the containers that you subsequently create.
The precedence of log driver selection has the following order:
- The log driver you specify in docker run, together with the log options that you also write into the command.
- The log driver you specify in docker run, with that driver’s default settings, if you don’t add any log option specifications.
- The log driver specified in daemon.json with the options that you specify in that file.
- The log driver specified in daemon.json with that driver’s default settings if you don’t specify any log driver options in that file.
- json-file with its default settings.
Both in the configuration file and at the command line, the default settings for each log option will apply if you don’t specify it.
You specify a log driver other than the global default at runtime by including the --log-driver <drivername> setting in the docker run command.
Log options follow on from the --log-driver specification and each one needs to be preceded by --log-opt (this is singular, instead of the plural log-opts tag used in the daemon.json file). You need the --log-opt tag in front of each option. So, you would have --log-opt max-size=20m --log-opt max-file=5 --log-opt compress=true and so on.
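Putting that together, a complete command might look like the sketch below. The image and container names are placeholders used purely for illustration; the logging flags are the part that matters:

docker run -d --name my-web --log-driver json-file --log-opt max-size=20m --log-opt max-file=5 --log-opt compress=true nginx

Because the driver is set explicitly here, this container keeps these settings even if the defaults in daemon.json change later.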
You can see these options in context in the Docker Run Reference Guide.
Accessing Docker container log messages
You can view the records in the current log file at the command line. To get a list of all running containers, use:
docker ps
This will show a table with the container ID in the first column. Use that ID to list the contents of the current log file with the command:
docker logs <containerID>
However, this command only works with the json-file, local, and journald drivers. There are several options that you can add to this command. To get new records shown at the command line in a constant stream, use:
docker logs <containerID> -f
or
docker logs <containerID> --follow
To get a list of the last few lines in the log file, use:
docker logs <containerID> --tail N
or
docker logs <containerID> -n N
In the above examples, N represents the number of lines to show.
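The docker logs command has a few more filters that are worth knowing about. For example, the line below, which assumes the container has been running for a while, prints only the records from the last ten minutes and prefixes each one with its timestamp:

docker logs --since 10m --timestamps <containerID>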
Storing Docker container log files
It is better to move log files somewhere else as soon as they are rotated. Leave the current log file that is being written in its place, but move it as soon as it has been rotated to .log.1.
You can set up a docker container that is dedicated to collecting logs from all of your active containers and sending them to other locations. The way those logs are dealt with is then the responsibility of the log server that is active at the location to which you send the files.
The exact settings that you need for your Docker container logs and the way you set up the shipper container depend on the log server system that you intend to use. For example, you have seen that it is possible to generate log files in a format that is suitable for Splunk, Graylog, and Logstash. Other log management services might prefer to use the json-file log specification. So, it is better to choose a log processor and read the specific requirements of that tool before implementing either log specifications or a shipper.
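As an illustration of that point, a container whose logs should go straight to a Graylog or Logstash collector could be started with the gelf driver, as in the sketch below. The hostname, port, and image are placeholders; gelf-address is the one option the driver needs:

docker run -d --log-driver gelf --log-opt gelf-address=udp://logs.example.com:12201 nginx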
Consider the following log management and analysis tools:
- Datadog Log Management: A cloud-based service that can store and analyze a range of log types.
- Splunk: A widely-used data analysis tool that has a free version. Docker allows container logs to be produced in Splunk’s native format.
- Graylog: A log manager and analyzer that can be adapted to provide a range of functions, such as a SIEM. This tool’s preferred log format is an option within the Docker setup.
- Logstash: A free log manager that is part of the ELK Stack of data analysis tools.
- Sematext Logs: A cloud-hosted implementation of the ELK Stack.
You can read more about log handling tools in my guide, The Best Log Management & Analysis Tools.