
For the past couple of months, we have been logging to Loggly incorrectly. Historically, our contexts have been a numerically indexed array of strings:

['message1', 'message2', 'message3', ...]

Moving forward, we are looking to send Loggly an associative array instead, which should use fewer unique keys.

Example new loggly payload:

['orderId' => 123, 'logId' => 456, 'info' => json_encode(SOMEARRAY)]
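
For reference, this is roughly how we would pass the new context (a minimal sketch assuming Monolog and its built-in LogglyHandler; our actual library isn't shown here, and 'LOGGLY_TOKEN' and $someArray are placeholders):

<?php

require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\LogglyHandler;

// 'LOGGLY_TOKEN' is a placeholder for the account's customer token.
$logger = new Logger('orders');
$logger->pushHandler(new LogglyHandler('LOGGLY_TOKEN', Logger::INFO));

// Placeholder payload for the 'info' field.
$someArray = ['status' => 'shipped', 'items' => 3];

// A fixed set of named context keys, so Loggly indexes
// json.context.orderId, json.context.logId and json.context.info
// rather than one new field per positional message.
$logger->info('Order processed', [
    'orderId' => 123,
    'logId'   => 456,
    'info'    => json_encode($someArray),
]);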

While testing the new, cleaner logging format, Loggly returned the following message:

2 out of 9 sent in this event were not indexed due to max allowed (100) unique fieldnames being exceeded for this account. The following were the affected fields: [json.context.queue, json.context.demandId]

We are on a 30-day plan. Does this mean that, for our contexts to be indexed correctly, we need to wait 30 days for the old indexed logs to expire? Is there a way of rebuilding the index to accommodate the new-format logs?

Gravy

1 Answer


You do not need to wait 30 days. As long as you stop sending logs in the old format, the unique field names usually free up within a few hours, or at most a couple of days, after which you will be able to send data with the new fields. You can also reach out to support@loggly.com.

MauricioRoman