I thought RetrievalQAChain would only reply based on the documents returned from the database. My prompt explicitly says not to reply if it doesn't know the answer. Zero documents were returned from my vector database, and I was hoping it would say it doesn't know the answer, but instead I got the correct response. I think ChatGPT is using its own knowledge to answer the question even though zero documents were added to the context. Am I missing something here?
After seeing this, even if there are a few relevant documents in the PromptTemplate, how do we know ChatGPT used those to generate the response rather than replying from its own knowledge?
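The only way I can think of to check what the model was actually given is to have the chain return its source documents alongside the answer. This is just a minimal sketch, assuming RetrievalQAChain in my langchain version still accepts a returnSourceDocuments option (I haven't verified this):

    // Sketch: same chain as in my handler below, but asking it to echo back
    // the documents it retrieved so the answer can be compared to the context.
    const chainWithSources = new RetrievalQAChain({
      combineDocumentsChain: loadQAStuffChain(model, { prompt }),
      retriever: vectorStore.asRetriever(),
      returnSourceDocuments: true, // assumption: supported by my version
    });

    const res = await chainWithSources.call({ query: "What is NextJs" });
    console.log(res.text); // the model's answer
    console.log(res.sourceDocuments); // the documents that filled {context}

If sourceDocuments comes back empty while text is still a confident answer, that would at least confirm the model is answering from its own knowledge. My full route handler is below: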
import { NextRequest, NextResponse } from "next/server";
// Import paths follow the classic "langchain" package layout; newer releases
// split these into @langchain/* packages.
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { RetrievalQAChain, loadQAStuffChain } from "langchain/chains";

export async function POST(req: NextRequest) {
  try {
    const prompt = new PromptTemplate({
      template:
        "Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\nContext: {context}\n\nQuestion: {question}\nHelpful Answer:",
      inputVariables: ["context", "question"],
    });

    /* Rest of the code (where `vectorStore` and `message` are defined) */

    const vectorStoreRetriever = vectorStore.asRetriever(5);
    console.log(
      "vectorStoreRetriever",
      await vectorStoreRetriever.getRelevantDocuments(message)
    );
    const model = new OpenAI({
      streaming: true,
      temperature: 0,
      timeout: 60000,
    });

    // Create a chain that uses the OpenAI LLM and HNSWLib vector store.
    const chain = new RetrievalQAChain({
      combineDocumentsChain: loadQAStuffChain(model, { prompt }),
      retriever: vectorStore.asRetriever(),
    });

    const results = await chain.call({
      query: "What is NextJs",
    });

    // return NextResponse.json({ splitDocuments });
    return NextResponse.json({ results });
  } catch (e: any) {
    console.log(e);
    return NextResponse.json({ error: e.message }, { status: 500 });
  }
}
Results: