I want to make a chatbot that answers questions from a given context, in my case a vector database, and it does that perfectly. But I also want it to answer questions that are not in the vector database, and it is unable to do so: it only ever answers from the context.

This is the prompt template I have for this:

template = """Answer the question in your own words from the
context given to you.
If questions are asked where there is no relevant context available,
please answer from what you know.

Context: {context}
Chat history: {chat_history}

Human: {question}
Assistant:"""

My prompt is as follows:

prompt = PromptTemplate(
    input_variables=["context", "chat_history", "question"],
    template=template,
)

For the memory, I provided an initial question:

memory.save_context({"input": "Who is the founder of India?"},
                {"output": "Gandhi"})

For the QA Retrieval, I am using the following code:

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    chain_type_kwargs={'prompt': prompt},
)

But when I ask a question:

question= "What did I ask about India?"
result = qa({"query": question})

The chain doesn't have any answer for that question, even though the exchange is stored in the chat history. It is only able to answer questions from the vector database. I would greatly appreciate help with this.
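
As a sanity check, the saved exchange can be inspected directly before calling the chain (a minimal sketch using the standard memory API):

# Show exactly what the memory would inject as {chat_history}
print(memory.load_memory_variables({}))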

  • Ideally it should work if the conversation history is getting passed to the model. 1. Which model are you using: 3.5 or 4? 2. Could you please change the question to "In the earlier part of the conversation, what did I ask about India?" 3. Also, instead of asking GPT to answer from context, ask it to answer from context + conversational history. Here I am assuming that langchain portion of the code is working as expected. – Arnab Biswas Sep 01 '23 at 04:45
  • I am using GPT 3.5 Turbo from Azure OpenAI. – Usman Afridi Sep 01 '23 at 05:02
  • Can you try with GPT-4 and that too with the latest version (0613)? Also, enable tracing and debugging at langchain to ensure what you are expecting is actually happening: https://python.langchain.com/docs/guides/debugging – Arnab Biswas Sep 01 '23 at 05:17
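
For reference, global debugging in LangChain of this era is enabled with a module-level flag (a minimal sketch; the output format may vary across 0.0.x releases):

import langchain

# Log every chain/LLM call, including the fully rendered prompt, so you can
# verify whether the chat history actually reaches the model.
langchain.debug = True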

1 Answer


Below is code that stores history by default; if there is no answer in the document store, it will fetch the result from the LLM.

    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import Chroma
    from langchain.text_splitter import CharacterTextSplitter
    from langchain.llms import OpenAI
    from langchain.chains import ConversationalRetrievalChain,RetrievalQA
    from langchain.document_loaders import TextLoader
    from langchain.memory import ConversationBufferMemory
    from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
    from langchain.prompts import PromptTemplate

    loader = TextLoader("fatherofnation.txt")
    documents = loader.load()

    template = """Answer the question in your own words from the
    context given to you.
    If questions are asked where there is no relevant context available,
    please answer from what you know.

    Context: {context}

    Human: {question}
    Assistant:"""

    prompt = PromptTemplate(
        input_variables=["context", "question"], template=template
    )

    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    documents = text_splitter.split_documents(documents)

    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

    vectorstore = Chroma.from_documents(documents, embedding_function)

    llm = "your llm model here"

    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    memory.save_context({"input": "Who is the founder of India?"},
                    {"output": "Gandhi"})

    qa = RetrievalQA.from_chain_type(
        llm,
        retriever=vectorstore.as_retriever(),
        memory=memory,
        chain_type_kwargs={'prompt': prompt},
    )

    # question = "Who is the father of India nation?"
    # result = qa({"query": question})
    # print(result)

    question1= "What did I ask about India?"
    result1 = qa({"query": question1})
    print(result1)

    question1= "Tell me about google in short ?"
    result1 = qa({"query": question1})
    print(result1)
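
Note that ConversationalRetrievalChain is imported above but never used. If the goal is to answer follow-up questions about earlier turns, that dedicated chain is the usual tool; a rough sketch wiring it to the same memory and retriever (untested against the Azure deployment discussed in the comments):

    qa_chat = ConversationalRetrievalChain.from_llm(
        llm, retriever=vectorstore.as_retriever(), memory=memory
    )

    # ConversationalRetrievalChain expects the "question" key rather than "query"
    result = qa_chat({"question": "What did I ask about India?"})
    print(result["answer"])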
  • Which LLM did you use? I am using Azure GPT 3.5 Turbo and it is still giving me the same error. Guess I need to upgrade it. – Usman Afridi Sep 01 '23 at 07:42
  • deployment_name = "text-davinci-003" llm = AzureOpenAI(deployment_name=deployment_name, temperature=0) – ZKS Sep 01 '23 at 09:21
  • Also, even after switching to GPT-4, it now gives answers outside the context, but it still cannot answer questions about the previous turns stored in memory. – Usman Afridi Sep 01 '23 at 12:47
  • Please go through the LangChain documentation to understand more about memory management and LLMs. Unless you understand those, you will not be able to resolve the issues you are facing. The code I have shared has everything; it is just a matter of understanding the concepts correctly. – ZKS Sep 01 '23 at 13:33