
I'm getting good results with llama_index after indexing PDFs, but I'm having trouble finding out which PDF the results came from so I can base answers on it. result.node_sources uses a Doc id that the library seems to generate internally. How can I get a reference back to the original document?

edencorbin

2 Answers


Got this answer directly from the LlamaIndex team:

Thanks for the questions and for your support of LlamaIndex. There are a few general approaches you can take:

  • Inject metadata into the extra_info of each Document, such as file name, link, etc. A lot of LlamaHub loaders should already automatically add metadata into the extra_info, but you can add/remove extra_info yourself if you'd like. This extra_info gets injected into each Node. When you get a response from a query engine, you can do response.source_nodes to fetch the relevant sources.

These sources will contain both the original text as well as the metadata. Take a look at this doc: https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_documents.html#customizing-documents
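A minimal sketch of that flow, assuming the mid-2023 LlamaIndex API where documents carry an `extra_info` dict (newer versions call it `metadata`). The query-engine calls are shown as comments because they need a configured install and an LLM; `format_sources` itself is plain Python, defined here for illustration, and works on any node-like objects exposing `node.extra_info`:

```python
def format_sources(source_nodes):
    """Render 'file_name (page ...)' strings from a response's source nodes.

    Each element is expected to look like an item of response.source_nodes:
    an object with a .node attribute whose .extra_info is a metadata dict.
    """
    lines = []
    for source in source_nodes:
        info = source.node.extra_info or {}
        lines.append(f"{info.get('file_name', '?')} (page {info.get('page_label', '?')})")
    return lines

# Assumed usage with LlamaIndex (not runnable without an index and an LLM):
# from llama_index import Document, GPTVectorStoreIndex
# docs = [Document(text="...", extra_info={"file_name": "report.pdf"})]
# index = GPTVectorStoreIndex.from_documents(docs)
# response = index.as_query_engine().query("What does the report say?")
# print(format_sources(response.source_nodes))
```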

  • Assuming you add the appropriate metadata to the extra_info field, you can either modify the query string or customize the QA/refine prompts, adding something like "Please cite sources along with your answer" to either.

The query string you can just append to; for customizing prompts, take a look at https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_prompts.html
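The "just append to the query string" option can be sketched in a few lines; the exact wording of the citation request below is illustrative, not prescribed by LlamaIndex:

```python
# Augment the user's question with a citation instruction before
# passing it to the query engine.
question = "What were the key findings of the study?"
query_str = question + " Please cite the file name and page of your sources along with your answer."

# query_engine.query(query_str) would then receive the augmented question.
print(query_str)
```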

jveritas

It seems that they changed 'extra_info' to 'metadata'.

I used this code and it works perfectly:

    import re

    # 'response' comes from a LlamaIndex query engine; response.metadata
    # maps node ids to each source node's metadata dict.
    if hasattr(response, 'metadata'):
        document_info = str(response.metadata)
        # Pull page_label/file_name pairs out of the stringified dict.
        find = re.findall(r"'page_label': '[^']*', 'file_name': '[^']*'", document_info)

        print('\n' + '=' * 60 + '\n')
        print('Context Information')
        print(str(find))
        print('\n' + '=' * 60 + '\n')
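Since response.metadata is already a dict, you can also skip the regex and iterate it directly. A minimal sketch, assuming metadata is shaped like LlamaIndex's response.metadata (node id → metadata dict with the usual 'page_label' and 'file_name' keys from the PDF loader); the sample dict and helper name are illustrative:

```python
def cite_sources(metadata):
    """Collect unique (file_name, page_label) pairs from response.metadata."""
    return sorted({
        (info.get("file_name"), info.get("page_label"))
        for info in metadata.values()
    })

# Hypothetical dict shaped like a response.metadata value:
response_metadata = {
    "node-id-1": {"page_label": "3", "file_name": "report.pdf"},
    "node-id-2": {"page_label": "7", "file_name": "manual.pdf"},
}

print(cite_sources(response_metadata))
```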
Nils