
I've been unable to solve this after multiple attempts. I'm using LLama 2 with LangChain for the first time, and the problem is extracting LLama's response as JSON or a list. I tried stating this requirement in the prompt, but it didn't produce the desired outcome. I also experimented with LangChain's output parsers, with no better results. Below is the code I've used.

from langchain.prompts import PromptTemplate
from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    template="Extract contextual keywords for this product title: {product_title}.\n\n{format_instructions}",
    input_variables=["product_title"],
    partial_variables={"format_instructions": format_instructions},
)
question = prompt.format(product_title="Trottinette électrique pure air pro 2ème gén")
output = llm(question)  # llm is my LLama 2 instance
keywords = output_parser.parse(output)
print(keywords)

Do you have any suggestions on how to retrieve only the answer itself, without extraneous generated sentences such as "Sure, ..." or "Your keywords are..."? Essentially, I want to capture solely the list of items I asked for.
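To illustrate the kind of output I'm after, here is a minimal post-processing sketch I could fall back on (the `extract_list` helper is hypothetical, not part of LangChain): it strips any leading chatter before the comma-separated list and parses the rest into a Python list.

```python
def extract_list(raw: str) -> list[str]:
    # Drop a leading sentence such as "Sure, your keywords are:" by
    # keeping only the text after the last colon, if one is present.
    text = raw.rsplit(":", 1)[-1]
    # Split on commas and trim whitespace, skipping empty items.
    return [item.strip() for item in text.split(",") if item.strip()]

print(extract_list("Sure, your keywords are: scooter, electric, pure air pro"))
# → ['scooter', 'electric', 'pure air pro']
```

This feels fragile, though, so I'd prefer a way to make the model (or LangChain's parser) return only the list in the first place.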
