
I run multiple CoreOS instances on Google Compute Engine (GCE). CoreOS uses systemd's journal for logging. How can I push all logs to a remote destination? As I understand it, the systemd journal doesn't come with remote logging abilities. My current work-around looks like this:

journalctl -o short -f | ncat <addr> <port>

With https://logentries.com using their Token-based input via TCP:

journalctl -o short -f | awk '{ print "<token>", $0; fflush(); }' | ncat data.logentries.com 10000

Are there better ways?

EDIT: https://medium.com/coreos-linux-for-massive-server-deployments/defb984185c5

mikemaccana
mattes

5 Answers


systemd version 216 and later includes remote logging capabilities, via a client/server process pair.

http://www.freedesktop.org/software/systemd/man/systemd-journal-remote.html
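With that pair, systemd-journal-upload runs on each sender and systemd-journal-remote on the collector. A minimal sketch, assuming the collector is reachable at 192.0.2.10 and listens on the default port 19532:

```ini
# Sender side: /etc/systemd/journal-upload.conf
# (the collector address is an assumption for illustration)
[Upload]
URL=http://192.0.2.10:19532
```

Then enable systemd-journal-upload.service on the senders and systemd-journal-remote.socket on the collector; received journals end up under /var/log/journal/remote/, where journalctl --directory can read them.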

fche

  • it's not `systemd-journal-remote` you're looking for, it's `systemd-journal-upload`, and that's available since version 216 as well. – mattes Jan 06 '18 at 19:16
  • One is the receiver, the other is the transmitter. – fche Jan 08 '18 at 02:37

A downside to using -o short is that the format is hard to parse; short-iso is better. If you're using an ELK stack, exporting as JSON is even better. A systemd service like the following will ship JSON-formatted logs to a remote host quite well.

[Unit]
Description=Send Journalctl to Syslog

[Service]
TimeoutStartSec=0
ExecStart=/bin/sh -c '/usr/bin/journalctl -o json -f | /usr/bin/ncat syslog 515'

Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target

On the far side, logstash.conf for me includes:

input {
  tcp {
    port  => 1515
    codec => json_lines
    type  => "systemd"
  }
}

filter {
  if [type] == "systemd" {
    mutate { rename => [ "MESSAGE", "message" ] }
    mutate { rename => [ "_SYSTEMD_UNIT", "program" ] }
  }
}

This results in the whole journalctl data structure being available to Kibana/Elasticsearch.
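The two renames in that filter are easy to sanity-check offline. A minimal sketch in Python, using a made-up sample line in the shape `journalctl -o json` emits:

```python
import json

# One line of `journalctl -o json -f` output (sample data, made up here)
line = '{"MESSAGE": "Started nginx.", "_SYSTEMD_UNIT": "nginx.service", "PRIORITY": "6"}'

event = json.loads(line)
# The same two renames the logstash mutate filters perform
event["message"] = event.pop("MESSAGE")
event["program"] = event.pop("_SYSTEMD_UNIT")

print(event["program"])  # nginx.service
```

All other journal fields (PRIORITY, _HOSTNAME, and so on) pass through untouched, which is what makes the full structure searchable in Kibana.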

J.C.

Kelsey Hightower's journal-2-logentries has worked pretty well for us: https://logentries.com/doc/coreos/

If you want to drop in and enable the units without Fleet:

#!/bin/bash
#
# Requires the Logentries Token as Parameter

if [ -z "$1" ]; then echo "You need to provide the Logentries Token!"; exit 1; fi

cat << "EOU1" > /etc/systemd/system/systemd-journal-gatewayd.socket
[Unit]
Description=Journal Gateway Service Socket
[Socket]
ListenStream=/run/journald.sock
Service=systemd-journal-gatewayd.service
[Install]
WantedBy=sockets.target
EOU1

cat << EOU2 > /etc/systemd/system/journal-2-logentries.service
[Unit]
Description=Forward Systemd Journal to logentries.com
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
Restart=on-failure
RestartSec=5
ExecStartPre=-/usr/bin/docker kill journal-2-logentries
ExecStartPre=-/usr/bin/docker rm journal-2-logentries
ExecStartPre=/usr/bin/docker pull quay.io/kelseyhightower/journal-2-logentries
ExecStart=/usr/bin/bash -c \
"/usr/bin/docker run --name journal-2-logentries \
-v /run/journald.sock:/run/journald.sock \
-e LOGENTRIES_TOKEN=$1 \
quay.io/kelseyhightower/journal-2-logentries"
[Install]
WantedBy=multi-user.target
EOU2

systemctl enable systemd-journal-gatewayd.socket
systemctl start systemd-journal-gatewayd.socket
systemctl start journal-2-logentries.service

rm -f $0
andylukem

A relatively recent Python package may be useful: journalpump.

It supports Elasticsearch, Kafka, and logplex outputs.

user22866

You can also use the rsyslog-kafka module (omkafka) inside rsyslog.

rsyslog with the modules:
 - imfile – file input
 - omkafka – output to Kafka

Define a JSON template and push the logs to Apache Kafka. Once the logs are in Kafka, any downstream consumer can pick them up.
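A minimal omkafka sketch in rsyslog's RainerScript; the broker address, topic name, and the fields in the template are assumptions for illustration:

```
module(load="omkafka")

# JSON-lines template (field set chosen for illustration)
template(name="json_fmt" type="string"
         string="{\"time\":\"%timereported:::date-rfc3339%\",\"host\":\"%hostname%\",\"tag\":\"%syslogtag%\",\"msg\":\"%msg:::json%\"}")

# Ship everything to a Kafka topic
action(type="omkafka"
       broker=["localhost:9092"]
       topic="syslog"
       template="json_fmt")
```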

Shubhitgarg
Sebs