13. Managing CoreOS with Systemd and Other Tools
This chapter covers using systemctl and other debugging commands and services for diagnosing problems on a CoreOS system.
CoreOS uses systemd as both a system and service manager and as an init system. The systemctl tool has many commands that allow a user to inspect and control the state of systemd.
This is by no means an exhaustive list or description of the potential of any of the tools described here, merely an overview of the tools and their most useful commands. See the links provided within this chapter for more information. For more debugging information relevant to DIMS, see dimsdockerfiles:debuggingcoreos.
13.1. State of systemd
There are a few ways to check on the state of systemd as a whole.
To check all running units and their state on a node at once, run systemctl with no arguments:
core@core-01 ~ $ systemctl
UNIT LOAD ACTIVE SUB DESCRIPTIO
boot.automount loaded active waiting Boot parti
sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-
sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-
sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-
sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-
sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-
sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-
sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-
sys-devices-pci0000:00-0000:00:01.1-ata1-host0-target0:0:0-0:0:0:0-
sys-devices-pci0000:00-0000:00:03.0-virtio0-net-eth0.device loaded
sys-devices-pci0000:00-0000:00:08.0-virtio1-net-eth1.device loaded
sys-devices-platform-serial8250-tty-ttyS0.device loaded active
sys-devices-platform-serial8250-tty-ttyS1.device loaded active
sys-devices-platform-serial8250-tty-ttyS2.device loaded active
sys-devices-platform-serial8250-tty-ttyS3.device loaded active
sys-devices-virtual-net-docker0.device loaded active plugged
sys-devices-virtual-net-vethcbb3671.device loaded active plugge
sys-devices-virtual-tty-ttyprintk.device loaded active plugged
sys-subsystem-net-devices-docker0.device loaded active plugged
sys-subsystem-net-devices-eth0.device loaded active plugged
sys-subsystem-net-devices-eth1.device loaded active plugged
sys-subsystem-net-devices-vethcbb3671.device loaded active plug
-.mount loaded active mounted /
boot.mount loaded active mounted Boot parti
dev-hugepages.mount loaded active mounted Huge Pages
dev-mqueue.mount loaded active mounted POSIX Mess
media.mount loaded active mounted External M
sys-kernel-debug.mount loaded active mounted Debug File
tmp.mount loaded active mounted Temporary
usr-share-oem.mount loaded active mounted /usr/share
usr.mount loaded active mounted /usr
coreos-cloudinit-vagrant-user.path loaded active running c
motdgen.path loaded active waiting Watch for
systemd-ask-password-console.path loaded active waiting Di
systemd-ask-password-wall.path loaded active waiting Forwa
user-cloudinit@var-lib-coreos\x2dinstall-user_data.path loaded acti
user-configdrive.path loaded active waiting Watch for
docker-201c7bd05ea49b654aa8b02a92dbb739a06dd3e8a4cc7813dcdc15aa4282
docker-5f41c7d23012a856462d3a7876d7165715164d2b2c6edf3f94449c21d594
docker-8323ab8192308e5a65102dffb109466c6a7c7f43ff28f356ea154a668b5f
app-overlay.service loaded activating auto-restart App overla
audit-rules.service loaded active exited Load Secur
consul.service loaded active running Consul boo
coreos-setup-environment.service loaded active exited Mod
data-overlay.service loaded activating auto-restart Data overl
dbus.service loaded active running D-Bus Syst
docker.service loaded active running Docker App
etcd2.service loaded active running etcd2
fleet.service loaded active running fleet daem
getty@tty1.service loaded active running Getty on t
kmod-static-nodes.service loaded active exited Create lis
locksmithd.service loaded active running Cluster re
settimezone.service loaded active exited Set the ti
sshd-keygen.service loaded active exited Generate s
sshd@2-10.0.2.15:22-10.0.2.2:33932.service loaded active runnin
swarm-agent.service loaded active running Swarm agen
swarm-manager.service loaded active running Swarm mana
system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service loaded a
system-cloudinit@var-tmp-hostname.yml.service loaded active exi
system-cloudinit@var-tmp-networks.yml.service loaded active exi
systemd-journal-flush.service loaded active exited Flush
systemd-journald.service loaded active running Journal Se
systemd-logind.service loaded active running Login Serv
systemd-networkd.service loaded active running Network Se
systemd-random-seed.service loaded active exited Load/Sav
systemd-resolved.service loaded active running Network Na
systemd-sysctl.service loaded active exited Apply Kern
systemd-timesyncd.service loaded active running Network Ti
systemd-tmpfiles-setup-dev.service loaded active exited C
...skipping...
systemd-udev-trigger.service loaded active exited udev Co
systemd-udevd.service loaded active running udev Kerne
systemd-update-utmp.service loaded active exited Update U
systemd-vconsole-setup.service loaded active exited Setup
update-engine.service loaded active running Update Eng
user-cloudinit@var-lib-coreos\x2dvagrant-vagrantfile\x2duser\x2ddat
-.slice loaded active active Root Slice
system-addon\x2dconfig.slice loaded active active system-
system-addon\x2drun.slice loaded active active system-add
system-getty.slice loaded active active system-get
system-sshd.slice loaded active active system-ssh
system-system\x2dcloudinit.slice loaded active active sys
system-user\x2dcloudinit.slice loaded active active syste
system.slice loaded active active System Sli
user.slice loaded active active User and S
dbus.socket loaded active running D-Bus Syst
docker-tcp.socket loaded active running Docker Soc
docker.socket loaded active running Docker Soc
fleet.socket loaded active running Fleet API
rkt-metadata.socket loaded active listening rkt metada
sshd.socket loaded active listening OpenSSH Se
systemd-initctl.socket loaded active listening /dev/initc
systemd-journald-audit.socket loaded active running Journa
systemd-journald-dev-log.socket loaded active running Jour
systemd-journald.socket loaded active running Journal So
systemd-networkd.socket loaded active running networkd r
systemd-udevd-control.socket loaded active running udev Co
systemd-udevd-kernel.socket loaded active running udev Ker
basic.target loaded active active Basic Syst
cryptsetup.target loaded active active Encrypted
getty.target loaded active active Login Prom
local-fs-pre.target loaded active active Local File
local-fs.target loaded active active Local File
multi-user.target loaded active active Multi-User
network.target loaded active active Network
paths.target loaded active active Paths
remote-fs.target loaded active active Remote Fil
slices.target loaded active active Slices
sockets.target loaded active active Sockets
swap.target loaded active active Swap
sysinit.target loaded active active System Ini
system-config.target loaded active active Load syste
time-sync.target loaded active active System Tim
timers.target loaded active active Timers
user-config.target loaded active active Load user-
logrotate.timer loaded active waiting Daily Log
rkt-gc.timer loaded active waiting Periodic G
systemd-tmpfiles-clean.timer loaded active waiting Daily C

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization
SUB    = The low-level unit activation state, values depend on unit

119 loaded units listed. Pass --all to see loaded but inactive unit
To show all installed unit files use 'systemctl list-unit-files'.
This shows all loaded units and their state, as well as a brief description of the units.
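The full listing can also be filtered. For example, to see only service units, or only the service units currently in a given state, the standard systemctl list-units filters should behave the same on CoreOS:

systemctl list-units --type=service
systemctl list-units --type=service --state=running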
For a slightly more organized look at the state of a node, along with a count of failed units, queued jobs, and a process tree based on the CGroup hierarchy, run systemctl status:
[dimsenv] mboggess@dimsdev2:~/core-local () $ vagrant ssh core-03
VM name: core-03 - IP: 172.17.8.103
Last login: Tue Jan 26 15:49:34 2016 from 10.0.2.2
CoreOS beta (877.1.0)
core@core-03 ~ $ systemctl status
● core-03
    State: starting
     Jobs: 4 queued
   Failed: 0 units
    Since: Wed 2016-01-27 12:40:52 EST; 1min 0s ago
   CGroup: /
           ├─1 /usr/lib/systemd/systemd --switched-root --system --
           └─system.slice
             ├─dbus.service
             │ └─509 /usr/bin/dbus-daemon --system --address=system
             ├─update-engine.service
             │ └─502 /usr/sbin/update_engine -foreground -logtostde
             ├─system-sshd.slice
             │ └─sshd@2-10.0.2.15:22-10.0.2.2:58499.service
             │   ├─869 sshd: core [priv]
             │   ├─871 sshd: core@pts/0
             │   ├─872 -bash
             │   ├─878 systemctl status
             │   └─879 systemctl status
             ├─systemd-journald.service
             │ └─387 /usr/lib/systemd/systemd-journald
             ├─systemd-resolved.service
             │ └─543 /usr/lib/systemd/systemd-resolved
             ├─systemd-timesyncd.service
             │ └─476 /usr/lib/systemd/systemd-timesyncd
             ├─systemd-logind.service
             │ └─505 /usr/lib/systemd/systemd-logind
             ├─systemd-networkd.service
             │ └─837 /usr/lib/systemd/systemd-networkd
             ├─system-getty.slice
             │ └─getty@tty1.service
             │   └─507 /sbin/agetty --noclear tty1 linux
             ├─system-user\x2dcloudinit.slice
             │ └─user-cloudinit@var-lib-coreos\x2dvagrant-vagrantfi
             │   └─658 /usr/bin/coreos-cloudinit --from-file=/var/l
             ├─systemd-udevd.service
             │ └─414 /usr/lib/systemd/systemd-udevd
             ├─locksmithd.service
             │ └─504 /usr/lib/locksmith/locksmithd
             └─docker.service
               ├─547 docker daemon --dns 172.18.0.1 --dns 8.8.8.8 -
               └─control
                 └─742 /usr/bin/systemctl stop docker
This shows the state of the node (line 7), how many jobs are queued (line 8), and any failed units (line 9). It also shows which services have started and what command each was running when this status “snapshot” was taken.
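To look directly at the two counters reported on lines 8 and 9, you can list the queued jobs and the failed units themselves; both are standard systemctl commands:

systemctl list-jobs
systemctl --failed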
core@core-01 ~ $ systemctl status
● core-01
    State: running
     Jobs: 2 queued
   Failed: 0 units
    Since: Wed 2016-01-27 12:40:13 EST; 3min 28s ago
   CGroup: /
           ├─1 /usr/lib/systemd/systemd --switched-root --system --
           └─system.slice
             ├─docker-5f41c7d23012a856462d3a7876d7165715164d2b2c6ed
             │ └─1475 /swarm join --addr=172.17.8.101:2376 consul:/
             ├─dbus.service
             │ └─508 /usr/bin/dbus-daemon --system --address=system
             ├─update-engine.service
             │ └─517 /usr/sbin/update_engine -foreground -logtostde
             ├─system-sshd.slice
             │ └─sshd@2-10.0.2.15:22-10.0.2.2:33932.service
             │   ├─ 860 sshd: core [priv]
             │   ├─ 862 sshd: core@pts/0
             │   ├─ 863 -bash
             │   ├─1499 systemctl status
             │   └─1500 systemctl status
             ├─docker-201c7bd05ea49b654aa8b02a92dbb739a06dd3e8a4cc7
             │ └─1461 /swarm manage -H tcp://172.17.8.101:8333 cons
             ├─swarm-agent.service
             │ ├─1437 /bin/bash /home/core/runswarmagent.sh 172.17.
             │ └─1449 /usr/bin/docker run --name swarm-agent --net=
             ├─systemd-journald.service
             │ └─398 /usr/lib/systemd/systemd-journald
             ├─fleet.service
             │ └─918 /usr/bin/fleetd
             ├─systemd-resolved.service
             │ └─554 /usr/lib/systemd/systemd-resolved
             ├─systemd-timesyncd.service
             │ └─476 /usr/lib/systemd/systemd-timesyncd
             ├─swarm-manager.service
             │ ├─1405 /bin/bash /home/core/runswarmmanager.sh 172.1
             │ └─1421 /usr/bin/docker run --name swarm-manager --ne
             ├─systemd-logind.service
             │ └─505 /usr/lib/systemd/systemd-logind
             ├─systemd-networkd.service
             │ └─829 /usr/lib/systemd/systemd-networkd
             ├─system-getty.slice
             │ └─getty@tty1.service
             │   └─498 /sbin/agetty --noclear tty1 linux
             ├─systemd-udevd.service
             │ └─425 /usr/lib/systemd/systemd-udevd
             ├─consul.service
             │ ├─940 /bin/sh -c NUM_SERVERS=$(fleetctl list-machine
             │ └─973 /usr/bin/docker run --name=consul-core-01 -v /
             ├─docker-8323ab8192308e5a65102dffb109466c6a7c7f43ff28f
             │ └─1371 /bin/consul agent -config-dir=/config -node c
             ├─locksmithd.service
             │ └─1125 /usr/lib/locksmith/locksmithd
             ├─docker.service
             │ ├─ 877 docker daemon --dns 172.18.0.1 --dns 8.8.8.8
             │ ├─1004 docker-proxy -proto tcp -host-ip 172.17.8.101
             │ ├─1011 docker-proxy -proto tcp -host-ip 172.17.8.101
             │ ├─1027 docker-proxy -proto tcp -host-ip 172.17.8.101
             │ ├─1036 docker-proxy -proto tcp -host-ip 172.17.8.101
             │ ├─1057 docker-proxy -proto udp -host-ip 172.17.8.101
             │ ├─1071 docker-proxy -proto tcp -host-ip 172.17.8.101
             │ ├─1089 docker-proxy -proto udp -host-ip 172.17.8.101
             │ ├─1108 docker-proxy -proto tcp -host-ip 172.17.8.101
             │ └─1117 docker-proxy -proto udp -host-ip 172.18.0.1 -
             └─etcd2.service
               └─912 /usr/bin/etcd2 -name core-01 -initial-advertis
core@core-01 ~ $ docker ps
CONTAINER ID  IMAGE            COMMAND                 CREATED             STATUS             PORTS  NAMES
5f41c7d23012  swarm:latest     "/swarm join --addr=1"  About a minute ago  Up About a minute         swarm-agent
201c7bd05ea4  swarm:latest     "/swarm manage -H tcp"  About a minute ago  Up About a minute         swarm-manager
8323ab819230  progrium/consul  "/bin/start -node cor"  2 minutes ago       Up 2 minutes       172.17.8.101:8300-8302->8300-8302/tcp, 172.17.8.101:8400->8400/tcp, 172.17.8.101:8500->8500/tcp, 172.18.0.1:53->53/udp, 172.17.8.101:8600->8600/tcp, 172.17.8.101:8301-8302->8301-8302/udp, 53/tcp  consul-core-01
This shows the status of another node in the cluster at a different point in the startup process. It still shows the node's state, the number of queued jobs, and the failed units, but there are many more services in the process tree. Finally, at line 68, you can see how to check the status of active, running Docker containers with docker ps.
Note

If docker ps seems to “hang”, this generally means one or more Docker containers are still trying to start. Just be patient, and they should show up. To check that the Docker daemon is indeed running, try docker info. It might also hang until whatever activating container starts up, but as long as it doesn't return immediately with “Cannot connect to the Docker daemon. Is the docker daemon running on this host?”, Docker is working; just be patient.

If docker ps doesn't hang but shows only headings and no containers when you are expecting containers, run docker ps -a. This shows all Docker containers, even ones that have exited or failed for some reason.

systemd logs its output to the journal, which is queried by a tool called journalctl. To see all journal output of all systemd processes since the node was created, run

journalctl

This is a lot of output, so it won't be shown here. Use this tool to see the output of everything in one gigantic set. It is particularly useful if you're trying to see how different services might be affecting each other.
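To narrow that flood to the interaction you care about, journalctl accepts multiple -u flags and shows a single time-ordered stream of just those units. This is standard journalctl behavior and should work the same on CoreOS:

journalctl -u docker.service -u consul.service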
To only see journal output for the last boot, run
journalctl -b
Same type of output as journalctl, but only since the last boot.
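If the journal persists across reboots (standard systemd persists it when /var/log/journal exists; this sketch assumes that holds on your node), you can also list the recorded boots and read the logs of an earlier one:

journalctl --list-boots
journalctl -b -1

The -b -1 form shows the journal from the previous boot.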
13.2. State of systemd units
All services run on a node by systemd are referred to as units, and you can check the state of these units individually.
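For quick, script-friendly checks of a single unit, systemctl also has predicate commands that print a one-word answer and set their exit code accordingly (standard systemctl, shown here against the consul.service unit used throughout this chapter):

systemctl is-active consul.service
systemctl is-enabled consul.service
systemctl is-failed consul.service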
To check the status of a unit and get the tail of its log output, run systemctl status with the -l flag:
core@core-01 ~ $ systemctl status consul.service -l
● consul.service - Consul bootstrap
   Loaded: loaded (/run/systemd/system/consul.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-01-27 12:41:56 EST; 37min ago
  Process: 941 ExecStartPost=/bin/sh -c /usr/bin/etcdctl set "/services/consul/bootstrap/servers/$COREOS_PUBLIC_IPV4" "$COREOS_PUBLIC_IPV4" (code=exited, status=0/SUCCESS)
  Process: 932 ExecStartPre=/bin/sh -c /usr/bin/etcdctl mk /services/consul/bootstrap/host $COREOS_PUBLIC_IPV4 || sleep 10 (code=exited, status=0/SUCCESS)
  Process: 926 ExecStartPre=/usr/bin/docker rm consul-%H (code=exited, status=0/SUCCESS)
  Process: 921 ExecStartPre=/usr/bin/docker kill consul-%H (code=exited, status=1/FAILURE)
 Main PID: 940 (sh)
   Memory: 28.0M
      CPU: 117ms
   CGroup: /system.slice/consul.service
           ├─940 /bin/sh -c NUM_SERVERS=$(fleetctl list-machines | grep -v "MACHINE" |wc -l) && EXPECT=$(if [ $NUM_SERVERS -lt 3 ] ; then echo 1; else echo 3; fi) && JOIN_IP=$(etcdctl ls /services/consul/bootstrap/servers | grep -v $COREOS_PUBLIC_IPV4 | cut -d '/' -f 6 | head -n 1) && JOIN=$(if [ "$JOIN_IP" != "" ] ; then sleep 10; echo "-join $JOIN_IP"; else echo "-bootstrap-expect $EXPECT"; fi) && /usr/bin/docker run --name=consul-core-01 -v /mnt:/data -p 172.17.8.101:8300:8300 -p 172.17.8.101:8301:8301 -p 172.17.8.101:8301:8301/udp -p 172.17.8.101:8302:8302 -p 172.17.8.101:8302:8302/udp -p 172.17.8.101:8400:8400 -p 172.17.8.101:8500:8500 -p 172.17.8.101:8600:8600 -p 172.18.0.1:53:53/udp progrium/consul -node core-01 -server -dc=local -advertise 172.17.8.101 $JOIN
           └─973 /usr/bin/docker run --name=consul-core-01 -v /mnt:/data -p 172.17.8.101:8300:8300 -p 172.17.8.101:8301:8301 -p 172.17.8.101:8301:8301/udp -p 172.17.8.101:8302:8302 -p 172.17.8.101:8302:8302/udp -p 172.17.8.101:8400:8400 -p 172.17.8.101:8500:8500 -p 172.17.8.101:8600:8600 -p 172.18.0.1:53:53/udp progrium/consul -node core-01 -server -dc=local -advertise 172.17.8.101 -bootstrap-expect 1

Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since our last term is greater (43, 1)
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [WARN] raft: Heartbeat timeout reached, starting election
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: Node at 172.17.8.101:8300 [Candidate] entering Candidate state
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: Election won. Tally: 2
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: Node at 172.17.8.101:8300 [Leader] entering Leader state
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] consul: cluster leadership acquired
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] consul: New leader elected: core-01
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [WARN] raft: AppendEntries to 172.17.8.103:8300 rejected, sending older logs (next: 479)
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: pipelining replication to peer 172.17.8.102:8300
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: pipelining replication to peer 172.17.8.103:8300
The -l flag is important: without it, long lines in the output are truncated.

This command shows a multitude of things. It gives you a unit's state, as well as the location of the unit file from which the unit is run. Unit files can be placed in multiple locations and are chosen according to a hierarchy, but the file shown here (line 3) is the one that systemd actually runs.

This command also shows the status of any commands used in stopping or starting the service (i.e., all the ExecStart* and ExecStop* directives in the unit file); see the Process lines at lines 5-8. This is particularly useful if you have Exec* directives that could be the cause of a unit failure.

The command run from the ExecStart directive is shown in the CGroup tree at line 13.

Finally, this command gives essentially the tail of the service's journal output. As you can see at line 22, a Consul leader was elected!
To see the unit file that systemd runs, use systemctl cat:
core@core-01 ~ $ systemctl cat consul.service
# /run/systemd/system/consul.service
[Unit]
Description=Consul bootstrap
Requires=docker.service fleet.service
After=docker.service fleet.service

[Service]
EnvironmentFile=/etc/environment
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill consul-%H
ExecStartPre=-/usr/bin/docker rm consul-%H
ExecStartPre=/bin/sh -c "/usr/bin/etcdctl mk /services/consul/boots
ExecStart=/bin/sh -c "NUM_SERVERS=$(fleetctl list-machines | grep -
    && EXPECT=$(if [ $NUM_SERVERS -lt 3 ] ; then echo 1; else echo
    && JOIN_IP=$(etcdctl ls /services/consul/bootstrap/servers \
         | grep -v $COREOS_PUBLIC_IPV4 \
         | cut -d '/' -f 6 \
         | head -n 1) \
    && JOIN=$(if [ \"$JOIN_IP\" != \"\" ] ; then sleep 10; echo \"
    && /usr/bin/docker run --name=consul-%H -v /mnt:/data \
         -p ${COREOS_PUBLIC_IPV4}:8300:8300 \
         -p ${COREOS_PUBLIC_IPV4}:8301:8301 \
         -p ${COREOS_PUBLIC_IPV4}:8301:8301/udp \
         -p ${COREOS_PUBLIC_IPV4}:8302:8302 \
         -p ${COREOS_PUBLIC_IPV4}:8302:8302/udp \
         -p ${COREOS_PUBLIC_IPV4}:8400:8400 \
         -p ${COREOS_PUBLIC_IPV4}:8500:8500 \
         -p ${COREOS_PUBLIC_IPV4}:8600:8600 \
         -p 172.18.0.1:53:53/udp \
         progrium/consul -node %H -server -dc=local -advertise ${C
ExecStartPost=/bin/sh -c "/usr/bin/etcdctl set \"/services/consul/b
ExecStop=/bin/sh -c "/usr/bin/etcdctl rm \"/services/consul/bootstr
ExecStop=/bin/sh -c "/usr/bin/etcdctl rm /services/consul/bootstrap
ExecStop=/usr/bin/docker stop consul-%H
Restart=always
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
This command shows the service’s unit file directives. It also shows at the top (line 2) the location of the file. In this unit file, there are directives under three headings, “Unit”, “Service”, and “Install”. To learn more about what can go in each of these sections of a unit file, see freedesktop.org’s page on systemd unit files.
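As a point of reference, a minimal unit that runs a Docker container needs surprisingly little. The following is a sketch only (the hello.service name, container name, and busybox image are arbitrary choices for illustration), patterned after the ExecStartPre/ExecStart/ExecStop structure of the consul.service unit above:

# /etc/systemd/system/hello.service (hypothetical example)
[Unit]
Description=Hello World container
Requires=docker.service
After=docker.service

[Service]
# The "-" prefix tells systemd to ignore failure (e.g., no old container to remove)
ExecStartPre=-/usr/bin/docker rm -f hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello
Restart=always

[Install]
WantedBy=multi-user.target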
To make changes to a unit file, run
systemctl edit consul.service
This actually creates a brand-new drop-in file in which you can add directives to the unit definition or override existing ones. For slightly more information, see DigitalOcean's How to Use Systemctl to Manage Systemd Services and Units.
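For example, running systemctl edit consul.service and entering only the lines below would change the restart delay while leaving the shipped unit file untouched; systemd merges the drop-in over the original (the RestartSec value here is arbitrary):

# /etc/systemd/system/consul.service.d/override.conf
[Service]
RestartSec=30s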
You can also edit the actual unit file, rather than just creating an override file, by running
systemctl edit --full consul.service
systemd unit files have many directives used to configure the units. Some of these are set, or have defaults, that you may not be aware of. To see a list of the directives for a given unit and what each is set to, run systemctl show:
core@core-01 ~ $ systemctl show consul.service
Type=simple
Restart=always
NotifyAccess=none
RestartUSec=10s
TimeoutStartUSec=0
TimeoutStopUSec=1min 30s
WatchdogUSec=0
WatchdogTimestamp=Wed 2016-01-27 12:41:56 EST
WatchdogTimestampMonotonic=102810100
StartLimitInterval=10000000
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=940
ControlPID=0
FileDescriptorStoreMax=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Wed 2016-01-27 12:41:56 EST
ExecMainStartTimestampMonotonic=102810054
ExecMainExitTimestampMonotonic=0
ExecMainPID=940
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/usr/bin/docker ; argv[]=/usr/bin/docker kill c
ExecStartPre={ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm con
ExecStartPre={ path=/bin/sh ; argv[]=/bin/sh -c /usr/bin/etcdctl mk
ExecStart={ path=/bin/sh ; argv[]=/bin/sh -c NUM_SERVERS=$(fleetctl
ExecStartPost={ path=/bin/sh ; argv[]=/bin/sh -c /usr/bin/etcdctl s
ExecStop={ path=/bin/sh ; argv[]=/bin/sh -c /usr/bin/etcdctl rm "/s
ExecStop={ path=/bin/sh ; argv[]=/bin/sh -c /usr/bin/etcdctl rm /se
ExecStop={ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop consu
Slice=system.slice
ControlGroup=/system.slice/consul.service
MemoryCurrent=29401088
CPUUsageNSec=141291138
Delegate=no
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
EnvironmentFile=/etc/environment (ignore_errors=no)
UMask=0022
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=40000
LimitAS=18446744073709551615
LimitNPROC=3873
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=3873
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SecureBits=0
CapabilityBoundingSet=18446744073709551615
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=consul.service
Names=consul.service
Requires=basic.target docker.service fleet.service
Wants=system.slice
RequiredBy=swarm-manager.service
Conflicts=shutdown.target
Before=shutdown.target swarm-manager.service
After=system.slice systemd-journald.socket fleet.service docker.ser
Description=Consul bootstrap
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/run/systemd/system/consul.service
UnitFileState=disabled
UnitFilePreset=disabled
InactiveExitTimestamp=Wed 2016-01-27 12:41:55 EST
InactiveExitTimestampMonotonic=102215240
ActiveEnterTimestamp=Wed 2016-01-27 12:41:56 EST
ActiveEnterTimestampMonotonic=102891180
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
IgnoreOnSnapshot=no
NeedDaemonReload=no
JobTimeoutUSec=0
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Wed 2016-01-27 12:41:55 EST
ConditionTimestampMonotonic=102214129
AssertTimestamp=Wed 2016-01-27 12:41:55 EST
AssertTimestampMonotonic=102214129
Transient=no
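Rather than scanning the full listing, you can ask systemctl show for specific properties with -p. For example, to confirm the restart policy and the open-file limit set by the unit file above:

systemctl show consul.service -p Restart -p RestartSec -p LimitNOFILE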
To see all logs of a given unit since the node was created, run
journalctl -u consul.service
To see the logs of a given unit since the last boot, run
journalctl -b -u consul.service
To follow the tail of the logs of a unit, run
journalctl -fu consul.service
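journalctl can also restrict a unit's output to a time window, which helps when correlating its logs with an event; the standard --since and --until flags accept both absolute timestamps and relative phrases:

journalctl -u consul.service --since "2016-01-27 12:40:00" --until "2016-01-27 13:00:00"
journalctl -u consul.service --since "1 hour ago"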
To see logs with explanation texts attached to significant entries (the -x flag), run
core@core-01 ~ $ journalctl -b -x -u consul.service
-- Logs begin at Tue 2016-01-26 15:47:27 EST, end at Wed 2016-01-27 13:50:21 EST. --
Jan 27 12:41:55 core-01 systemd[1]: Starting Consul bootstrap...
-- Subject: Unit consul.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit consul.service has begun starting up.
Jan 27 12:41:56 core-01 docker[921]: Error response from daemon: Cannot kill container consul-core-01: notrunning: Container cb7c6
Jan 27 12:41:56 core-01 docker[921]: Error: failed to kill containers: [consul-core-01]
Jan 27 12:41:56 core-01 docker[926]: consul-core-01
Jan 27 12:41:56 core-01 sh[932]: 172.17.8.101
Jan 27 12:41:56 core-01 sh[940]: Error retrieving list of active machines: googleapi: Error 503: fleet server unable to communicat
Jan 27 12:41:56 core-01 sh[941]: 172.17.8.101
Jan 27 12:41:56 core-01 systemd[1]: Started Consul bootstrap.
-- Subject: Unit consul.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit consul.service has finished starting up.
--
-- The start-up result is done.
Jan 27 12:42:39 core-01 sh[940]: ==> WARNING: BootstrapExpect Mode is specified as 1; this is the same as Bootstrap mode.
Jan 27 12:42:39 core-01 sh[940]: ==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
Jan 27 12:42:39 core-01 sh[940]: ==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
Jan 27 12:42:39 core-01 sh[940]: ==> Starting raft data migration...
Jan 27 12:42:39 core-01 sh[940]: ==> Starting Consul agent...
Jan 27 12:42:39 core-01 sh[940]: ==> Starting Consul agent RPC...
Jan 27 12:42:39 core-01 sh[940]: ==> Consul agent running!
Jan 27 12:42:39 core-01 sh[940]: Node name: 'core-01'
Jan 27 12:42:39 core-01 sh[940]: Datacenter: 'local'
Jan 27 12:42:39 core-01 sh[940]: Server: true (bootstrap: true)
Jan 27 12:42:39 core-01 sh[940]: Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
Jan 27 12:42:39 core-01 sh[940]: Cluster Addr: 172.17.8.101 (LAN: 8301, WAN: 8302)
Jan 27 12:42:39 core-01 sh[940]: Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Jan 27 12:42:39 core-01 sh[940]: Atlas: <disabled>
Jan 27 12:42:39 core-01 sh[940]: ==> Log data will now stream in as it occurs:
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [INFO] serf: EventMemberJoin: core-01 172.17.8.101
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [INFO] serf: EventMemberJoin: core-01.local 172.17.8.101
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [INFO] raft: Node at 172.17.8.101:8300 [Follower] entering Follower state
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [WARN] serf: Failed to re-join any previously known node
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [WARN] serf: Failed to re-join any previously known node
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [INFO] consul: adding server core-01 (Addr: 172.17.8.101:8300) (DC: local)
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [INFO] consul: adding server core-01.local (Addr: 172.17.8.101:8300) (DC: loc
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [ERR] agent: failed to sync remote state: No cluster leader
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [ERR] http: Request /v1/kv/docker/nodes/172.19.0.1:2376, error: No cluster le
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [ERR] http: Request /v1/kv/docker/nodes/172.19.0.1:2376, error: No cluster le
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [INFO] serf: EventMemberJoin: core-02 172.17.8.102
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [INFO] consul: adding server core-02 (Addr: 172.17.8.102:8300) (DC: local)
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [ERR] http: Request /v1/kv/docker/nodes/172.19.0.1:2376, error: No cluster le
Jan 27 12:42:39 core-01 sh[940]: 2016/01/27 17:42:39 [ERR] http: Request /v1/kv/docker/nodes/172.19.0.1:2376, error: No cluster le
Jan 27 12:42:40 core-01 sh[940]: 2016/01/27 17:42:40 [WARN] raft: Heartbeat timeout reached, starting election
Jan 27 12:42:40 core-01 sh[940]: 2016/01/27 17:42:40 [INFO] raft: Node at 172.17.8.101:8300 [Candidate] entering Candidate state
Jan 27 12:42:40 core-01 sh[940]: 2016/01/27 17:42:40 [ERR] raft: Failed to make RequestVote RPC to 172.17.8.103:8300: dial tcp 172
Jan 27 12:42:40 core-01 sh[940]: 2016/01/27 17:42:40 [INFO] raft: Election won. Tally: 2
Jan 27 12:42:40 core-01 sh[940]: 2016/01/27 17:42:40 [INFO] raft: Node at 172.17.8.101:8300 [Leader] entering Leader state
...skipping...
Jan 27 12:42:41 core-01 sh[940]: 2016/01/27 17:42:41 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:42:41 core-01 sh[940]: 2016/01/27 17:42:41 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:42:41 core-01 sh[940]: 2016/01/27 17:42:41 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:42:41 core-01 sh[940]: 2016/01/27 17:42:41 [WARN] raft: Failed to contact 172.17.8.103:8300 in 509.786599ms
Jan 27 12:42:41 core-01 sh[940]: 2016/01/27 17:42:41 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:42:41 core-01 sh[940]: 2016/01/27 17:42:41 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:42:41 core-01 sh[940]: 2016/01/27 17:42:41 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:42:41 core-01 sh[940]: 2016/01/27 17:42:41 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:42:41 core-01 sh[940]: 2016/01/27 17:42:41 [WARN] raft: Failed to contact 172.17.8.103:8300 in 981.100031ms
Jan 27 12:42:42 core-01 sh[940]: 2016/01/27 17:42:42 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:42:42 core-01 sh[940]: 2016/01/27 17:42:42 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:42:42 core-01 sh[940]: 2016/01/27 17:42:42 [WARN] raft: Failed to contact 172.17.8.103:8300 in 1.480625817s
Jan 27 12:42:42 core-01 sh[940]: 2016/01/27 17:42:42 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:42:42 core-01 sh[940]: 2016/01/27 17:42:42 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:42:43 core-01 sh[940]: 2016/01/27 17:42:43 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:42:44 core-01 sh[940]: 2016/01/27 17:42:44 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:42:44 core-01 sh[940]: 2016/01/27 17:42:44 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:42:46 core-01 sh[940]: 2016/01/27 17:42:46 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:42:47 core-01 sh[940]: 2016/01/27 17:42:47 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:42:51 core-01 sh[940]: 2016/01/27 17:42:51 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:42:52 core-01 sh[940]: 2016/01/27 17:42:52 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:43:02 core-01 sh[940]: 2016/01/27 17:43:02 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:43:05 core-01 sh[940]: 2016/01/27 17:43:05 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:43:14 core-01 sh[940]: 2016/01/27 17:43:14 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.1
Jan 27 12:43:17 core-01 sh[940]: 2016/01/27 17:43:17 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8
Jan 27 12:43:23 core-01 sh[940]: 2016/01/27 17:43:23 [INFO] serf: EventMemberJoin: core-03 172.17.8.103
Jan 27 12:43:23 core-01 sh[940]: 2016/01/27 17:43:23 [INFO] consul: adding server core-03 (Addr: 172.17.8.103:8300) (DC: local)
Jan 27 12:43:23 core-01 sh[940]: 2016/01/27 17:43:23 [INFO] consul: member 'core-03' joined, marking health alive
Jan 27 12:43:24 core-01 sh[940]: 2016/01/27 17:43:24 [WARN] raft: AppendEntries to 172.17.8.103:8300 rejected, sending older logs
Jan 27 12:43:24 core-01 sh[940]: 2016/01/27 17:43:24 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 17
Jan 27 12:43:24 core-01 sh[940]: 2016/01/27 17:43:24 [WARN] raft: Failed to contact 172.17.8.103:8300 in 500.297851ms
Jan 27 12:43:25 core-01 sh[940]: 2016/01/27 17:43:25 [WARN] raft: Failed to contact 172.17.8.103:8300 in 938.153601ms
Jan 27 12:43:25 core-01 sh[940]: 2016/01/27 17:43:25 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 17
Jan 27 12:43:25 core-01 sh[940]: 2016/01/27 17:43:25 [WARN] raft: Failed to contact 172.17.8.103:8300 in 1.424666193s
Jan 27 12:43:27 core-01 sh[940]: 2016/01/27 17:43:27 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 17
Jan 27 12:43:28 core-01 sh[940]: 2016/01/27 17:43:28 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 17
Jan 27 12:43:30 core-01 sh[940]: 2016/01/27 17:43:30 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 17
Jan 27 12:43:31 core-01 sh[940]: 2016/01/27 17:43:31 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 17
Jan 27 12:43:33 core-01 sh[940]: 2016/01/27 17:43:33 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 17
Jan 27 12:43:34 core-01 sh[940]: 2016/01/27 17:43:34 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 17
Jan 27 12:43:34 core-01 sh[940]: 2016/01/27 17:43:34 [ERR] raft: peer 172.17.8.103:8300 has newer term, stopping replication
Jan 27 12:43:34 core-01 sh[940]: 2016/01/27 17:43:34 [INFO] raft: Node at 172.17.8.101:8300 [Follower] entering Follower state
Jan 27 12:43:34 core-01 sh[940]: 2016/01/27 17:43:34 [INFO] consul: cluster leadership lost
Jan 27 12:43:34 core-01 sh[940]: 2016/01/27 17:43:34 [INFO] raft: aborting pipeline replication to peer 172.17.8.102:8300
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since our last term is gre
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [WARN] raft: Heartbeat timeout reached, starting election
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: Node at 172.17.8.101:8300 [Candidate] entering Candidate state
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: Election won. Tally: 2
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: Node at 172.17.8.101:8300 [Leader] entering Leader state
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] consul: cluster leadership acquired
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] consul: New leader elected: core-01
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [WARN] raft: AppendEntries to 172.17.8.103:8300 rejected, sending older logs
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: pipelining replication to peer 172.17.8.102:8300
Jan 27 12:43:35 core-01 sh[940]: 2016/01/27 17:43:35 [INFO] raft: pipelining replication to peer 172.17.8.103:8300
Jan 27 13:30:47 core-01 sh[940]: 2016/01/27 18:30:47 [INFO] agent.rpc: Accepted client: 127.0.0.1:44510
Line 2 gives the date/time range of logs that exist, but as you can see on line 3, the first entry in this set is not from Jan 26, as line 2 might suggest, but from Jan 27, which is the last time this node was booted (the effect of the -b flag).
This service started up just fine, so there are no failures to point out, but this is where you would find them, along with any possible explanation for those failures.
If the unit is running a Docker container, all relevant and helpful information may not be available to you via journalctl. To see logs from the Docker container itself, run docker logs:
core@core-01 ~ $ docker logs consul-core-01
==> WARNING: BootstrapExpect Mode is specified as 1; this is the same as Bootstrap mode.
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting raft data migration...
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'core-01'
        Datacenter: 'local'
            Server: true (bootstrap: true)
       Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
      Cluster Addr: 172.17.8.101 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>
==> Log data will now stream in as it occurs:
    2016/01/27 17:42:39 [INFO] serf: EventMemberJoin: core-01 172.17.8.101
    2016/01/27 17:42:39 [INFO] serf: EventMemberJoin: core-01.local 172.17.8.101
    2016/01/27 17:42:39 [INFO] raft: Node at 172.17.8.101:8300 [Follower] entering Follower state
    2016/01/27 17:42:39 [WARN] serf: Failed to re-join any previously known node
    2016/01/27 17:42:39 [WARN] serf: Failed to re-join any previously known node
    2016/01/27 17:42:39 [INFO] consul: adding server core-01 (Addr: 172.17.8.101:8300) (DC: local)
    2016/01/27 17:42:39 [INFO] consul: adding server core-01.local (Addr: 172.17.8.101:8300) (DC: local)
    2016/01/27 17:42:39 [ERR] agent: failed to sync remote state: No cluster leader
    2016/01/27 17:42:39 [ERR] http: Request /v1/kv/docker/nodes/172.19.0.1:2376, error: No cluster leader
    2016/01/27 17:42:39 [ERR] http: Request /v1/kv/docker/nodes/172.19.0.1:2376, error: No cluster leader
    2016/01/27 17:42:39 [INFO] serf: EventMemberJoin: core-02 172.17.8.102
    2016/01/27 17:42:39 [INFO] consul: adding server core-02 (Addr: 172.17.8.102:8300) (DC: local)
    2016/01/27 17:42:39 [ERR] http: Request /v1/kv/docker/nodes/172.19.0.1:2376, error: No cluster leader
    2016/01/27 17:42:39 [ERR] http: Request /v1/kv/docker/nodes/172.19.0.1:2376, error: No cluster leader
    2016/01/27 17:42:40 [WARN] raft: Heartbeat timeout reached, starting election
    2016/01/27 17:42:40 [INFO] raft: Node at 172.17.8.101:8300 [Candidate] entering Candidate state
    2016/01/27 17:42:40 [ERR] raft: Failed to make RequestVote RPC to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:40 [INFO] raft: Election won. Tally: 2
    2016/01/27 17:42:40 [INFO] raft: Node at 172.17.8.101:8300 [Leader] entering Leader state
    2016/01/27 17:42:40 [INFO] consul: cluster leadership acquired
    2016/01/27 17:42:40 [INFO] consul: New leader elected: core-01
    2016/01/27 17:42:40 [INFO] raft: Disabling EnableSingleNode (bootstrap)
    2016/01/27 17:42:40 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:40 [INFO] raft: pipelining replication to peer 172.17.8.102:8300
    2016/01/27 17:42:40 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:40 [INFO] consul: member 'core-03' reaped, deregistering
    2016/01/27 17:42:41 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:41 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:41 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:41 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:41 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:41 [WARN] raft: Failed to contact 172.17.8.103:8300 in 509.786599ms
    2016/01/27 17:42:41 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:41 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:41 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:41 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:41 [WARN] raft: Failed to contact 172.17.8.103:8300 in 981.100031ms
    2016/01/27 17:42:42 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:42 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:42 [WARN] raft: Failed to contact 172.17.8.103:8300 in 1.480625817s
    2016/01/27 17:42:42 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:42 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:43 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:44 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:44 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:46 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:47 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:51 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:42:52 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: connection refused
    2016/01/27 17:43:02 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: no route to host
    2016/01/27 17:43:05 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: no route to host
    2016/01/27 17:43:14 [ERR] raft: Failed to AppendEntries to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: no route to host
    2016/01/27 17:43:17 [ERR] raft: Failed to heartbeat to 172.17.8.103:8300: dial tcp 172.17.8.103:8300: no route to host
    2016/01/27 17:43:23 [INFO] serf: EventMemberJoin: core-03 172.17.8.103
    2016/01/27 17:43:23 [INFO] consul: adding server core-03 (Addr: 172.17.8.103:8300) (DC: local)
    2016/01/27 17:43:23 [INFO] consul: member 'core-03' joined, marking health alive
    2016/01/27 17:43:24 [WARN] raft: AppendEntries to 172.17.8.103:8300 rejected, sending older logs (next: 479)
    2016/01/27 17:43:24 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 172.17.8.101:8300
    2016/01/27 17:43:24 [WARN] raft: Failed to contact 172.17.8.103:8300 in 500.297851ms
    2016/01/27 17:43:25 [WARN] raft: Failed to contact 172.17.8.103:8300 in 938.153601ms
    2016/01/27 17:43:25 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 172.17.8.101:8300
    2016/01/27 17:43:25 [WARN] raft: Failed to contact 172.17.8.103:8300 in 1.424666193s
    2016/01/27 17:43:27 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 172.17.8.101:8300
    2016/01/27 17:43:28 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 172.17.8.101:8300
    2016/01/27 17:43:30 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 172.17.8.101:8300
    2016/01/27 17:43:31 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 172.17.8.101:8300
    2016/01/27 17:43:33 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 172.17.8.101:8300
    2016/01/27 17:43:34 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since we have a leader: 172.17.8.101:8300
    2016/01/27 17:43:34 [ERR] raft: peer 172.17.8.103:8300 has newer term, stopping replication
    2016/01/27 17:43:34 [INFO] raft: Node at 172.17.8.101:8300 [Follower] entering Follower state
    2016/01/27 17:43:34 [INFO] consul: cluster leadership lost
    2016/01/27 17:43:34 [INFO] raft: aborting pipeline replication to peer 172.17.8.102:8300
    2016/01/27 17:43:35 [WARN] raft: Rejecting vote from 172.17.8.103:8300 since our last term is greater (43, 1)
    2016/01/27 17:43:35 [WARN] raft: Heartbeat timeout reached, starting election
    2016/01/27 17:43:35 [INFO] raft: Node at 172.17.8.101:8300 [Candidate] entering Candidate state
    2016/01/27 17:43:35 [INFO] raft: Election won. Tally: 2
    2016/01/27 17:43:35 [INFO] raft: Node at 172.17.8.101:8300 [Leader] entering Leader state
    2016/01/27 17:43:35 [INFO] consul: cluster leadership acquired
    2016/01/27 17:43:35 [INFO] consul: New leader elected: core-01
    2016/01/27 17:43:35 [WARN] raft: AppendEntries to 172.17.8.103:8300 rejected, sending older logs (next: 479)
    2016/01/27 17:43:35 [INFO] raft: pipelining replication to peer 172.17.8.102:8300
    2016/01/27 17:43:35 [INFO] raft: pipelining replication to peer 172.17.8.103:8300
    2016/01/27 18:30:47 [INFO] agent.rpc: Accepted client: 127.0.0.1:44510
This is generally the same output you can get from journalctl, but the Docker logs sometimes contain information that journalctl by itself does not show.

Note
The name of the systemd service and the name of the Docker container might NOT be the same. They can be the same, but if, as in this example, you name your service “foo” (so the unit is “foo.service”) and you name your Docker container “foo-$hostname”, then running docker logs foo.service or docker logs foo will not work. Don't get upset with Docker when it tells you there is no such container “foo.service” when you named the container “foo-$hostname”. :)

To follow the logs in real time, run
docker logs -f consul-core-01
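If you only want the most recent activity rather than the whole history, docker logs can limit output with the standard --tail option, optionally combined with -f:

docker logs --tail=50 consul-core-01
docker logs -f --tail=10 consul-core-01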
13.3. Managing systemd units
You can start, stop, restart, and reload units with
sudo systemctl {start|stop|reload|restart} consul.service
You must run with sudo.
The “reload” option works for units which can reload their configurations without restarting.
When you make changes to a unit file and are going to restart that unit, you must first let systemd know that its configuration has changed:
sudo systemctl daemon-reload
Warning
This may seem obvious, but it is a good thing to remember: if a systemd unit runs a Docker container, restarting the unit does not necessarily mean the Docker container is removed and recreated. Unless the unit itself removes the old container (as consul.service above does with its ExecStartPre directives), you may get the same container back when the unit restarts.
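If you want a genuinely fresh container on every restart, the unit itself has to remove the old one, following the same pattern as the consul.service unit shown earlier. A minimal sketch of that pattern (the myapp names and busybox image are placeholders):

# In the [Service] section of a hypothetical myapp.service:
# The "-" prefix means "don't fail the unit if this command fails"
# (e.g., when there is no old container to kill or remove).
ExecStartPre=-/usr/bin/docker kill myapp
ExecStartPre=-/usr/bin/docker rm myapp
ExecStart=/usr/bin/docker run --name myapp busybox sleep 86400
ExecStop=/usr/bin/docker stop myapp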