
I have a problem with Logstash. I start the process with

bin/logstash -f logstash.conf

and it runs properly, but it does not ship any logs from the files until I press Ctrl+C to terminate the process; only then does it send the data to Elasticsearch.

My question is: why do I have to terminate the process before it starts sending all the collected data to Elasticsearch?

logstash.conf:

input {
  file {
    type => logs
    path => "/home/admin/logs/*"
    start_position => beginning
    sincedb_path => "/home/admin/sincedb"
    ignore_older => 0
    codec => multiline {
      pattern => "^[0-2][0-3]:[0-5][0-9].*"
      negate => "true"
      what => "previous"
    }
  }
}
filter {
grok {
    match => { 
        message => "%{NOTSPACE:date}\t+%{INT:done}\t+%{INT:idnumber}\t+SiteID=%{INT:SiteID};DateFrom=%{NOTSPACE:DateFrom};DateTo=%{NOTSPACE:DateTo};RoomCode=%{INT:RoomCode};RatePlanRoomID=%{INT:RatePlanRoomID};DaySetupIDs:%{NOTSPACE:DaySetupIDs};RatePlanID=%{INT:RatePlanID};RatePlanCode=%{INT:RatePlanCode};Calculation=%{WORD:Calculation};IsClosed=%{INT:IsClosed};BaseOccupancy=%{INT:BaseOccupancy};MaxOccupancy=%{INT:MaxOccupancy};MinLOS=%{INT:MinLOS};IsDirtyRate=%{INT:IsDirtyRate};IsDirtyAvail=%{INT:IsDirtyAvail};BasePrice=%{NOTSPACE:BasePrice};ExtraPriceAdult=%{NOTSPACE:ExtraPrice};Currency=%{WORD:Currency};Inventory=%{INT:Inventory};Reservations=%{INT:Reservations};\n+%{GREEDYDATA:Request}\n+%{GREEDYDATA:Response}"
        }
}
grok {
    match => { 
        path => "%{GREEDYDATA:pp}/%{INT:filedate}_%{INT:fileid}_%{INT:ChannelID}_%{GREEDYDATA:action}_%{INT:isdone}\.bin"
        }
}
    mutate {
        lowercase => [ "action" ]   
    }   
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
      hosts => ["localhost:9200"]
      index => "crs-%{SiteID}"
   }
 }

Logstash:

15:25:10.496 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
15:25:10.632 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

then I wait and wait, and nothing happens until I kill the process:

^C15:28:33.530 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
15:28:33.544 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}
{
              "date" => "11:48:49",
                "pp" => "/home/admin/logs",
         "BasePrice" => "270.00",
          "filedate" => "115151",
          "idnumber" => "106274275",
          "IsClosed" => "0",
              "type" => "logs",
              "path" => "/home/admin/logs/115151_00_3_SavePrice_1.bin",
        "RatePlanID" => "13078",
      "MaxOccupancy" => "0",
          "Currency" => "PLN",
          "@version" => "1",
              "host" => "xxxx",
      "Reservations" => "0",
            "action" => "saveprice",
       "Calculation" => "N",
     "BaseOccupancy" => "0",
            "isdone" => "1",
      "RatePlanCode" => "975669",
            "fileid" => "00",
            "MinLOS" => "1",
            "SiteID" => "1709",
    "RatePlanRoomID" => "61840",
        "ExtraPrice" => ";ExtraPriceChild=",
           "Request" => "<?xml version=\"1.0\"?><request>xxx",
         "ChannelID" => "3",
              "done" => "1",
       "DaySetupIDs" => "0=39355996",
       "IsDirtyRate" => "1",
              "tags" => [
        [0] "multiline"
    ],
          "Response" => "<ok></ok>",
      "IsDirtyAvail" => "1",
        "@timestamp" => 2016-12-29T15:28:33.801Z,
          "RoomCode" => "28114102",
          "DateFrom" => "2016-12-05",
         "Inventory" => "3",
            "DateTo" => "2016-12-05"
}
Mikolaj

2 Answers

What if you change your sincedb_path as follows:

sincedb_path => "/dev/null"

Apart from that, the properties for the file plugin in your conf seem spot on, so hopefully this helps. As far as I understand from the official docs, sincedb_path just needs to point to a location where Logstash has write permission for its registry; by default Logstash keeps those records in $HOME/.sincedb*.

If you don't want to go with the approach above, you could always clear the .sincedb files in that default directory and then re-run Logstash with your conf. Most likely Logstash would then pick the files up again automatically.

EDIT:

I guess I found your issue: you're missing the quotes around start_position, which should look like this:

start_position => "beginning" 
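
For reference, here is a minimal sketch of the file input with both of those suggestions applied; every other setting is kept exactly as in the question's conf:

input {
  file {
    type => "logs"
    path => "/home/admin/logs/*"
    # quotes make the value an explicit string
    start_position => "beginning"
    # /dev/null discards the registry, so read offsets are forgotten
    # between runs and the files are re-read from the start every time
    sincedb_path => "/dev/null"
    ignore_older => 0
    codec => multiline {
      pattern => "^[0-2][0-3]:[0-5][0-9].*"
      negate => "true"
      what => "previous"
    }
  }
}

The sincedb_path => "/dev/null" part is mainly useful while testing; for production you would normally keep a persistent sincedb so files are not re-ingested after every restart.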
Kulasangar
  • Still the same. My problem is that Logstash parses the logs but only sends them to Elasticsearch when I terminate the process; it does not send them right after it discovers new data. – Mikolaj Dec 29 '16 at 20:59
  • @Mikolaj I've updated the answer. I guess you're missing the quotes for `start_position`. Can you check it and let me know? – Kulasangar Dec 30 '16 at 11:46

Adding auto_flush_interval to the multiline codec helped me:

codec => multiline {
  auto_flush_interval => 1
  pattern => "^[0-2][0-3]:[0-5][0-9].*"
  negate => "true"
  what => "previous"
}
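
That makes sense, because the multiline codec buffers the lines of the current event and only emits it once it sees the next line matching the pattern, so the last event of a file sits in the buffer until the pipeline shuts down. auto_flush_interval makes the codec flush that pending event after the given number of seconds without new data. A rough sketch of the question's input with the option in place, everything else unchanged:

input {
  file {
    type => "logs"
    path => "/home/admin/logs/*"
    start_position => "beginning"
    sincedb_path => "/home/admin/sincedb"
    ignore_older => 0
    codec => multiline {
      # flush a buffered event if no new line arrives within 1 second
      auto_flush_interval => 1
      pattern => "^[0-2][0-3]:[0-5][0-9].*"
      negate => "true"
      what => "previous"
    }
  }
}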
Mikolaj