
I am using Amazon ECS, and the Docker image runs a PHP application. Everything is running fine.

In the entrypoint I run supervisord in the foreground, and its logs are currently sent to CloudWatch Logs.

In my Docker image, logs are written to the following files:

/var/log/apache2/error.log
/var/log/apache2/access.log
/var/app/logs/dev.log
/var/app/logs/prod.log

Now I want to send those logs to AWS CloudWatch. What's the best way to do that? Also, I have multiple containers for a single app, so for example all four containers will have these log files.

Initially I thought of installing the CloudWatch Logs agent in the container itself, but I have to use the same Docker image for local, CI, and non-prod environments, and I don't want to use CloudWatch Logs there.

Is there any other way to do this?

Mirage

4 Answers


In your task definition, specify the logging configuration as follows:

"logConfiguration": {
  "logDriver": "awslogs", 
  "options": {
    "awslogs-group": "LogGroup",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "Prefix"
  }
}
  • awslogs-stream-prefix is optional for EC2 launch type but required for Fargate

In the UserData section when you launch a new instance, register the instance with the cluster and make sure you allow the awslogs logging driver as well:

#!/bin/bash
echo 'ECS_CLUSTER=ClusterName' > /etc/ecs/ecs.config
echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]' >> /etc/ecs/ecs.config
start ecs

More Info:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html

Muhammad Soliman

You have to do two things:

  1. Configure the ECS Task Definition to take logs from the container output and pipe them into a CloudWatch logs group/stream. To do this, you add a LogConfiguration property to each ContainerDefinition property in your ECS task definition. You can see the docs for this here, here, and here.
  2. Instead of writing logs to files in the container, write them to /dev/stdout and /dev/stderr. You can use these paths directly in your Apache configuration, and the Apache log messages will show up in the container's log, for example as sketched below.
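For instance, a minimal sketch of the Apache side (assuming a stock Debian/Ubuntu Apache layout; adjust to wherever your image defines its vhost/log directives):

    # hypothetical Apache config snippet: send logs to the container's
    # stdout/stderr instead of files under /var/log/apache2
    ErrorLog /dev/stderr
    CustomLog /dev/stdout combined

I believe the official httpd and php:apache images do something similar by pointing their logs at /proc/self/fd/1 and /proc/self/fd/2.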
John Nicely
    The problem with the 2nd option is that logs from all files will go to a single stream, whereas I want the logs from 10 files to go to separate streams – Mirage Jun 06 '18 at 23:30
  • @Mirage probably not what you want to hear, but in my experience this isn’t really a problem. You can use awslogs (https://github.com/jorgebastida/awslogs) to get logs from the log group. Optionally, you can change your log format to include the host name (see http://httpd.apache.org/docs/current/mod/mod_log_config.html), which makes it much easier to filter for traffic from specific hosts. I’m not aware of any way to handle logs in the way you want to. – John Nicely Jun 07 '18 at 04:52

You can use the awslogs logging driver of Docker.

Refer to the documentation on how to set it up: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
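For plain Docker outside of ECS, the same driver can be set per container; a rough sketch, where the region, group, stream, and image names are placeholders:

    docker run -d \
      --log-driver=awslogs \
      --log-opt awslogs-region=us-east-1 \
      --log-opt awslogs-group=my-php-app \
      --log-opt awslogs-stream=web-1 \
      my-php-image

On ECS the equivalent is the logConfiguration block in the task definition shown in the first answer.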

Nune Isabekyan

Given your defined use case:

  • Collect logs from 4 different files from within a container
  • Apply the Docker log driver awslogs for the task

In the previous answers you have already seen that awslogs uses stdout as the logging mechanism. Further, it has been stated that awslogs is applied per container, which means one CloudWatch log stream per running container.

To fulfill your goal when switching to stdout for all logging is not an option for you:

  • You run a separate container as the logging mechanism (remember: one log stream per container) for the main container
  • this leads to a separate container, which applies the awslogs driver, reads the files from the other container sequentially (async is also possible, but more complex), and pushes them into a separate CloudWatch log stream of your choice
  • this way, you have separate log streams (or groups, if you like) for every file

Prerequisites:

  1. The main container and a separate logging container with access to a volume of the main container or the host

See this question for how shared volumes between containers are set up via Docker Compose: Docker Compose - Share named volume between multiple containers
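A minimal docker-compose sketch of that shared-volume idea (service, image, and volume names are made up for illustration):

    version: "3"
    services:
      app:
        image: my-php-app              # main container, writes to /var/app/logs
        volumes:
          - applogs:/var/app/logs
      log-forwarder:
        image: my-logging-image        # separate logging container, reads the same files
        volumes:
          - applogs:/var/app/logs:ro
    volumes:
      applogs: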

  2. The logging container needs to talk to the host Docker daemon. Running Docker inside Docker is not recommended and also not needed here!

Here is a link that shows how you can make the logging container talk to the host Docker daemon: https://itnext.io/docker-in-docker-521958d34efd
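In practice that usually just means mounting the host's Docker socket into the logging container (the image name is a placeholder), for example:

    docker run -d \
      -v /var/run/docker.sock:/var/run/docker.sock \
      my-logging-controller

Any docker commands run inside that container are then executed by the host daemon, so the containers it starts are ordinary siblings on the host rather than nested containers.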

  3. Create the logging Docker container with a Dockerfile like this:

    FROM ubuntu
    ...
    ENTRYPOINT ["cat"]
    CMD ["loggingfile.txt"]

  4. You can apply this container like a function with the input parameter logging_file_name to write to stdout and directly into AWS CloudWatch:

    docker run -it --log-driver=awslogs \
      --log-opt awslogs-region=<region> \
      --log-opt awslogs-group=<your defined group name> \
      --log-opt awslogs-stream=<your defined stream name> \
      --log-opt awslogs-create-group=true \
      <Logging_Docker_Image> <logging_file_name>



With this setup you have a separate Docker logging container, which talks to the Docker host and spins up another Docker container to read the log files of the main container and push them to AWS CloudWatch, fully customized by you.
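Putting the pieces together, a rough sketch of what the per-file invocations could look like (file, group, stream, and volume names are illustrative, and the log files are assumed to live on a volume shared with the main container):

    # one reader container (and therefore one CloudWatch stream) per log file
    for f in error.log access.log dev.log prod.log; do
      docker run -d \
        --log-driver=awslogs \
        --log-opt awslogs-region=us-east-1 \
        --log-opt awslogs-group=my-php-app \
        --log-opt awslogs-stream="${f%.log}" \
        --log-opt awslogs-create-group=true \
        -v applogs:/logs:ro \
        <Logging_Docker_Image> "/logs/$f"
    done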

Jwf