
I have configured a flow as follows:

  1. GetFile
  2. SplitText -> splits the file into one flowfile per record
  3. ExtractText -> adds attributes for the two keys
  4. PutDistributedMapCache -> Cache Entry Identifier is ${Key1}_${Key2}

Then I configured a sample GenerateFlowFile which generates a sample record and routes it to a LookupRecord (key: concat(/Key1,'_',/Key2)), which looks up the same key in the cache.
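For a lookup like this to succeed, the key string written by PutDistributedMapCache (the attribute expression `${Key1}_${Key2}`) and the key computed by LookupRecord (the RecordPath `concat(/Key1,'_',/Key2)`) must be byte-for-byte identical. A minimal sketch of that invariant (plain Python, not NiFi code; the dict-based attribute/record shapes are assumptions for illustration):

```python
# Sketch: both sides of the flow must compute the exact same key string.

def put_side_key(attributes):
    # PutDistributedMapCache: Cache Entry Identifier = ${Key1}_${Key2}
    return f"{attributes['Key1']}_{attributes['Key2']}"

def lookup_side_key(record):
    # LookupRecord: key = concat(/Key1, '_', /Key2)
    return f"{record['Key1']}_{record['Key2']}"

attrs = {"Key1": "9", "Key2": "9"}
rec = {"Key1": "9", "Key2": "9"}
assert put_side_key(attrs) == lookup_side_key(rec)  # "9_9" on both sides
```

Any stray whitespace or type difference (e.g. `9` serialized as `9.0`) on either side breaks the match.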

I see a problem in my caching flow: when I configure a GenerateFlowFile to cache the same records, I am able to do the lookup.

This flow, however, is not able to look anything up. Please help.

Flow: (screenshot)

PutDistributedMapCache configuration: (screenshot)

ExtractText configuration: (screenshot)

Lookup flow: (screenshot)

LookupRecord configuration: (screenshot)

I have added four keys in total because that is my business use case.

I have a CSV file with 53 records. I use SplitText to split it into one flowfile per record, add attributes that act as my key, and store them with PutDistributedMapCache. I then have a different flow that starts with a GenerateFlowFile which generates a record like this:

(screenshot)

So I expect my LookupRecord, which uses a JSON reader and writer, to read this record, look up the key in the distributed cache, and populate the /Feedback field in my record.

This fails to look up the records, and they are routed to unmatched.
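The intended enrichment step can be sketched like this (plain Python standing in for LookupRecord; the four-key record shape and the `/Feedback` field follow the question, while the concrete key and cached value are made-up examples):

```python
# Sketch of the intended LookupRecord behaviour: build the composite key
# from the record's key fields, look it up in the cache, and populate
# the Feedback field when a value is found.

cache = {"1_2_3_4": "good"}  # populated by the PutDistributedMapCache flow

def enrich(record):
    key = "_".join(record[k] for k in ("Key1", "Key2", "Key3", "Key4"))
    value = cache.get(key)
    if value is None:
        return record, "unmatched"   # routed to the unmatched relationship
    record["Feedback"] = value
    return record, "matched"
```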

Now the catch: if I remove GetFile and instead use a GenerateFlowFile with this configuration to cache:

(screenshot)

then my lookup works with the key 9_9_9_9. But the moment I cache another set of records with different keys, the lookup fails.

Aviral Kumar
  • what is this: `concat(/Key1,'_',/Key2)` ? Could you edit your question and provide all parameters of the LookupRecord and PutDistributedMapCache processors – daggett Sep 02 '19 at 10:21
  • I have added the configs – Aviral Kumar Sep 02 '19 at 11:43
  • @daggett Can you help me with this problem? – Aviral Kumar Sep 02 '19 at 14:28
  • Now describe your problem: provide an example of the JSON plus an Avro schema for it. Why do you have to use LookupRecord instead of PutDistributedMapCache? The point that I can see: according to the [documentation](https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-lookup-services-nar/1.9.2/org.apache.nifi.lookup.DistributedMapCacheLookupService/index.html), your Record Path must contain the key `'key'`, so it should look like `/key[concat(...)]/...` — but to provide a full answer, an example of the JSON + format is required. – daggett Sep 02 '19 at 14:38
  • I have added the details. – Aviral Kumar Sep 02 '19 at 15:00
  • Let me know if you need more info – Aviral Kumar Sep 02 '19 at 15:47
  • Waiting for help :) – Aviral Kumar Sep 03 '19 at 01:10

1 Answer


I figured it out: my DistributedMapCache server had a default configuration of Max Cache Entries set to 1. I increased it and it is working now :)
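That explains the symptoms exactly: with a maximum of one entry, every PutDistributedMapCache write evicts the previous entry, so only the last of the 53 keys is ever present at lookup time — which is also why a single repeated key (9_9_9_9) appeared to work. A minimal sketch of a size-bounded cache with oldest-first eviction (illustrative only; the real DistributedMapCacheServer's eviction strategy is configurable):

```python
from collections import OrderedDict

class BoundedCache:
    """Toy cache that keeps at most max_entries items, evicting the oldest."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.data = OrderedDict()

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)           # mark as most recently written
        while len(self.data) > self.max_entries:
            self.data.popitem(last=False)    # evict the oldest entry

    def get(self, key):
        return self.data.get(key)

# With Max Cache Entries = 1, only the last-written key survives:
cache = BoundedCache(max_entries=1)
cache.put("1_1_1_1", "a")
cache.put("2_2_2_2", "b")
assert cache.get("1_1_1_1") is None   # evicted by the second put
assert cache.get("2_2_2_2") == "b"    # only the last entry remains
```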

Aviral Kumar