I'm attempting to modify an existing Colab example to combine LangChain conversation memory with context-document loading. In two separate tests, each feature works perfectly on its own. Now I'd like to combine the two (context loading and conversation memory) into one, so my chat bot can draw on previously indexed data while also keeping conversation history. The problem is that I don't know how to achieve this with 'ConversationChain', which expects only a single parameter, namely 'input'.
In the document-loading test, I'm able to pass both the retrieved documents and the question:
query = "What is the title of the document?"
docs = docsearch.similarity_search(query)
chain.run(input_documents=docs, question=query)
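For context, the call above essentially "stuffs" the retrieved documents into the prompt alongside the question. A framework-free sketch of that pattern (the helper name is illustrative, not LangChain's API):

```python
def stuff_documents_prompt(docs, question):
    """Mimic the 'stuff' QA pattern: concatenate the retrieved
    documents into a context block and append the question."""
    context = "\n\n".join(docs)
    return (f"Use the context below to answer the question.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}\nAnswer:")

# Stand-ins for docsearch.similarity_search() results:
docs = ["Title: Annual Review 2022", "Author: J. Smith"]
prompt = stuff_documents_prompt(docs, "What is the title of the document?")
```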
Could anyone point me in the right direction?
I'm using the memory example from here: https://www.pinecone.io/learn/langchain-conversational-memory/
My knowledge of Python and langchain is limited.
I tried:

import pickle

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.callbacks import get_openai_callback

with open('/content/gdrive/My Drive/ai-data/docsearch.pkl', 'rb') as f:
    docsearch = pickle.load(f)

llm = OpenAI(model_name="text-davinci-003", temperature=0.7, max_tokens=-1,
             top_p=1, frequency_penalty=0, presence_penalty=0.5, n=1, best_of=1)
def count_tokens(chain, query):
    with get_openai_callback() as cb:
        docs = docsearch.similarity_search(query)
        # working older version: result = chain.run(query)
        # this raises, because ConversationChain accepts only 'input':
        result = chain.run(input_documents=docs, question=query)
        print(f'Spent a total of {cb.total_tokens} tokens')
    return result
conversation_bufw = ConversationChain(
    llm=llm,
    memory=ConversationBufferWindowMemory(k=5)
)

count_tokens(
    conversation_bufw,
    "Good morning AI!"
)
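Conceptually, what I'm after is a prompt that contains both the retrieved context and the last k conversation turns. A framework-free sketch of that combination, mirroring ConversationBufferWindowMemory(k=5) with a plain deque (all names here are illustrative stand-ins, not LangChain classes):

```python
from collections import deque

class WindowedMemory:
    """Minimal stand-in for ConversationBufferWindowMemory(k=5):
    remembers only the last k exchanges."""
    def __init__(self, k=5):
        self.k = k
        self.exchanges = deque(maxlen=k)

    def save(self, user_msg, ai_msg):
        self.exchanges.append((user_msg, ai_msg))

    def as_text(self):
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.exchanges)

def build_prompt(memory, docs, question):
    """Combine retrieved context and windowed chat history into one prompt."""
    context = "\n".join(docs)  # stand-in for similarity_search results
    return (f"Context:\n{context}\n\n"
            f"Conversation so far:\n{memory.as_text()}\n\n"
            f"Human: {question}\nAI:")

memory = WindowedMemory(k=5)
memory.save("Good morning AI!", "Good morning! How can I help?")
docs = ["The document is titled 'Quarterly Report'."]
prompt = build_prompt(memory, docs, "What is the title of the document?")
```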