
I am trying to create a customer support system using LangChain. I am using text documents as an external knowledge source via TextLoader.
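Roughly, my setup looks like this (a simplified sketch, not the exact gist code; the file name, FAISS store, and OpenAIEmbeddings are stand-ins):

from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Load the support documents and split them into retrievable chunks
docs = TextLoader("support_docs.txt").load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
retriever = FAISS.from_documents(chunks, OpenAIEmbeddings()).as_retriever()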

In order to remember the chat, I am using ConversationalRetrievalChain with a list of chats.
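The chain and history are set up roughly like this (again a sketch; ChatOpenAI as the LLM is an assumption):

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

# verbose=True is what prints the "Entering new ... chain" lines mentioned below
conv_chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0), retriever, verbose=True
)
chat_history = []  # list of (question, answer) tuples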

My problem is that each time I execute conv_chain({"question": prompt, "chat_history": chat_history}), it creates a new ConversationalRetrievalChain; that is, in the log I get the > Entering new ConversationalRetrievalChain chain... message.

And the chat_history array looks like multiple nested arrays:

[[ "Hi I am Ragesh", "Hi Ragesh, How are your"] , ["What is my name?", "Sorry, As an AI....., " ]]

So it can't remember my previous chat.

Why is this happening?

I am very new to the AI field. Please help me.

My code:

https://gist.github.com/RageshAntony/79a9050b76e74f5ea868888cd57c6705

  • Please post a minimal reproducible example inline, in your actual post, instead of linking to GitHub. – andrew_reece May 16 '23 at 16:54
  • ["By default, Chains and Agents are stateless, meaning that they treat each incoming query independently"](https://python.langchain.com/en/latest/modules/memory.html) - the LangChain docs highlight that Chains are stateless by nature - they do not preserve memory. However there are a number of Memory objects that can be added to conversational chains to preserve state/chat history. Have a look at [this documentation on how to add memory to a ConversatoinalRetrievalChain](https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html). – andrew_reece May 17 '23 at 00:03

2 Answers


You may add something to this function:

def generate_response(support_qa: BaseConversationalRetrievalChain, prompt):
    # chat_history here is a module-level list of (question, answer) tuples
    response = support_qa({"question": prompt, "chat_history": chat_history})
    chat_history.append((prompt, response["answer"]))
    print(json.dumps(chat_history))
    return response["answer"]

so that it becomes the code below, for the times when you need your history (load_previous_chat_history() is a stand-in for however you persist the earlier turns):

def generate_response(support_qa: BaseConversationalRetrievalChain, prompt):
    # Restore the previous (question, answer) tuples before calling the chain;
    # load_previous_chat_history() is a placeholder for your own persistence
    chat_history = load_previous_chat_history()
    response = support_qa({"question": prompt, "chat_history": chat_history})
    chat_history.append((prompt, response["answer"]))
    print(json.dumps(chat_history))
    return response["answer"]

This will ensure that the chat_history passed to support_qa is the history you were looking for. Be careful with token-limit issues; you may need to use map-reduce to summarize your history.
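A rough sketch of that map-reduce idea (summarize_history is a hypothetical helper built on LangChain's load_summarize_chain, not part of the chain above; it collapses an overlong history into a single summary turn):

from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.schema import Document

def summarize_history(chat_history):
    # Wrap each (question, answer) turn as a Document for the summarize chain
    docs = [Document(page_content=f"Q: {q}\nA: {a}") for q, a in chat_history]
    chain = load_summarize_chain(ChatOpenAI(temperature=0), chain_type="map_reduce")
    summary = chain.run(docs)
    # Replace the long history with one condensed turn
    return [("Summarize our conversation so far", summary)]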


You need to update your LangChain version.
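For example:

pip install --upgrade langchain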