
I have a semi-large project that has been using NLog, and throughout it I re-used a lot of field names for different data types. I started sending my logs (including all log properties/fields) to Elasticsearch, and now it's starting to haunt me. I noticed that if Elasticsearch can't convert a field's value to the field's mapped data type, it drops the log entry entirely, and the dynamic index mapping decides each field's data type based on whichever value it sees first.
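To illustrate, here is roughly what I think is happening, written as Kibana Dev Tools console requests against a throwaway index (test-index is just a made-up name, not my real setup):

# First document seen: dynamic mapping pins "count" to a numeric type
POST /test-index/_doc
{
  "count": 5
}

# Later document where "count" is a string: rejected with a
# mapper_parsing_exception, so the entry never shows up in the index
POST /test-index/_doc
{
  "count": "5 dogs"
}

As far as I can tell, the second request comes back with a 400 and, from my point of view, the log is simply lost.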

Is there any way I can tell the index to use a default data type, such as string?
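What I have in mind is something like the dynamic template below, which (if I understand the docs correctly) should map any field that isn't explicitly mapped to keyword. This is only an untested sketch using my index name, and I don't know whether it is the right approach or how it behaves for fields that arrive as JSON objects:

PUT /indexName
{
  "mappings": {
    "dynamic_templates": [
      {
        "unknown_fields_as_keyword": {
          "match_mapping_type": "*",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  }
}

I assume this would have to be combined with the explicit mapping shown in the edit below and applied when the index is created (or via a reindex), since existing field mappings can't be changed in place.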

P.S. The instance is cloud hosted, I access it through Kibana, and I have no idea where to find a log that would tell me if/when entries are dropped because of parsing errors.

Edit

Index Mapping

PUT /indexName
{
  "mappings": {
      "properties": {
        "@domain": {
          "type": "keyword"
        },
        "@logTarget": {
          "type": "keyword"
        },
        "@logger": {
          "type": "keyword"
        },
        "@memUsage": {
          "type": "long"
        },
        "@processID": {
          "type": "integer"
        },
        "@serviceGUID": {
          "type": "keyword"
        },
        "@timestamp": {
          "type": "date"
        },
        "level": {
          "type": "keyword"
        },
        "message": {
          "type": "text"
        }
      }
  }
}

Unfortunately I don't know the API used to push the logs to Elasticsearch, but here are some examples of what the documents might look like. These aren't the best examples, but they show the overlapping/re-use of field names.

Example 1:

//NLog structured log
logger.LogInfo("That Lady has {count} cats", 5);
//JSON Object
{
    "@domain": "Service1",
    "@logTarget": "None",
    "@logger": "AppName.Program.Main",
    "@memUsage": 100,
    "@processID": 17000,
    "@serviceGUID": 0,
    
    "level": "INFO",
    "message": "That Lady has 5 cats",
    
    "count": 5,
}

Example 2:

//NLog structured log
int count = 3;
logger.LogInfo("Loaded {count}", count + " dogs");
//JSON Object
{
    "@domain": "Service1",
    "@logTarget": "None",
    "@logger": "AppName.Program.Main",
    "@memUsage": 100,
    "@processID": 17000,
    "@serviceGUID": 0,
    
    "level": "INFO",
    "message": "Loaded "5 dogs"",
    
    "count": 5,
}

Example 3:

//NLog structured log
object count = null;
logger.Info("Value is {count}", count);
//JSON Object
{
    "@domain": "Service1",
    "@logTarget": "None",
    "@logger": "AppName.Program.Main",
    "@memUsage": 100,
    "@processID": 17000,
    "@serviceGUID": 0,
    
    "level": "INFO",
    "message": "Loaded "5 dogs"",
    
    "count": 5,
}
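If Example 1 arrives first, I assume the dynamic mapping pins count to a numeric type, and Example 2 (where count is a string) then gets rejected with an error roughly like this (reconstructed from memory, not copied from my cluster):

{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "failed to parse field [count] of type [long] in document with id '...'"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "failed to parse field [count] of type [long] in document with id '...'"
  },
  "status": 400
}

Example 3 (count as null) should, as far as I know, be accepted for any field type, so it's the string/number clashes that are the real problem.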
  • ES will determine the default mapping after X documents if you don't set one yourself. You have to reindex your data after specifying your own mapping; take a look at: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html – LeBigCat Jan 03 '21 at 02:20
  • I've read through that before, and nothing in it jumps out at me as solving the problem. When I created the index I set up static mapping, but I will still need dynamic mapping. My problem is that my field names overlap due to lack of foresight. Ex: a log comes in with "banana" as a string, so a dynamic mapping is created for "banana"; a different log then comes in with "banana" as an int: Elastic error, log dropped. – ZZT Jan 04 '21 at 17:19
  • Could you share how you set the mappings (using a POST, the NEST C# client, whatever?), and also please share 2-3 sample documents. – LeBigCat Jan 04 '21 at 17:22
  • Edited to provide examples and the mapping – ZZT Jan 06 '21 at 21:36
  • @LeBigCat any updates on this? I think the question is really "how to control structured logs when I don't really know what my app produces". – Shadow Jul 13 '22 at 22:35
  • Hi, a little late I think, but according to the message field, ES must reject examples 2 and 3: "message": "Loaded "5 dogs"" is not valid JSON. – LeBigCat Jul 18 '22 at 14:58

0 Answers