# ValyuContext
Valyu allows AI applications and agents to search the internet and proprietary data sources for relevant, LLM-ready information.

This notebook goes over how to use the Valyu context tool in LangChain.

First, get a Valyu API key and add it as an environment variable. Get $10 free credit by signing up here.
## Setup
The integration lives in the `langchain-valyu` package.

```python
%pip install -qU langchain-valyu
```

In order to use the package, you will also need to set the `VALYU_API_KEY` environment variable to your Valyu API key.
```python
import os

valyu_api_key = os.environ["VALYU_API_KEY"]
```
## Instantiation
Now we can instantiate our retriever. The `ValyuContextRetriever` can be configured with several parameters:

- `k: int = 5`: The number of top results to return for each query.
- `search_type: str = "all"`: The type of search to perform. Options may include `"all"`, `"web"`, `"proprietary"`, etc., depending on your use case.
- `similarity_threshold: float = 0.4`: The minimum similarity score (between 0 and 1) required for a document to be considered relevant.
- `query_rewrite: bool = False`: Whether to enable automatic rewriting of the query to improve search results.
- `max_price: float = 20.0`: The maximum price (in USD) you are willing to spend per query.
- `client: Optional[Valyu] = None`: An optional custom Valyu client instance. If not provided, a new client will be created internally.
- `valyu_api_key: Optional[str] = None`: Your Valyu API key. If not provided, the retriever will look for the `VALYU_API_KEY` environment variable.
```python
from langchain_valyu import ValyuContextRetriever

retriever = ValyuContextRetriever(
    k=5,
    search_type="all",
    similarity_threshold=0.4,
    query_rewrite=False,
    max_price=20.0,
    client=None,
    valyu_api_key=os.environ["VALYU_API_KEY"],
)
```
## Usage
```python
query = "What are the benefits of renewable energy?"
docs = retriever.invoke(query)

for doc in docs:
    print(doc.page_content)
    print(doc.metadata)
```
## Use within a chain

We can easily combine this retriever into a chain.
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    """Answer the question based only on the context provided.

Context: {context}

Question: {question}"""
)

llm = ChatOpenAI(model="gpt-4o-mini")


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```
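To see what the dict mapping at the top of the chain actually produces, the sketch below replays the same data flow with plain-Python stand-ins: a hypothetical `FakeDoc` class and `fake_retriever` function (both invented here for illustration, not part of `langchain-valyu`) take the place of the real retriever, so the shape of the prompt input can be inspected without network calls or API keys.

```python
# Hypothetical stand-ins for tracing the chain's data flow locally.
class FakeDoc:
    def __init__(self, page_content):
        self.page_content = page_content


def fake_retriever(query):
    # A real ValyuContextRetriever would return LangChain Documents here.
    return [
        FakeDoc("Solar power is renewable."),
        FakeDoc("Wind power emits no CO2 at generation time."),
    ]


def format_docs(docs):
    # Same helper as in the chain above: join document bodies with blank lines.
    return "\n\n".join(doc.page_content for doc in docs)


question = "What are the benefits of renewable energy?"
context = format_docs(fake_retriever(question))

# This dict is what the prompt template receives as {context} and {question}.
prompt_input = {"context": context, "question": question}
print(prompt_input["context"])
```

In the real chain, LCEL coerces `format_docs` into a runnable and pipes the retriever's documents through it, producing exactly this kind of `{"context": ..., "question": ...}` dict for the prompt.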
## API reference
For detailed documentation of all Valyu Context API features and configurations, head to the API reference: https://docs.valyu.network/overview
## Related
- Retriever conceptual guide
- Retriever how-to guides