# Prolog
LangChain tools that use Prolog rules to generate answers.
## Overview
The `PrologTool` class allows the creation of LangChain tools that use Prolog rules to generate answers.
## Setup
Let's use the following Prolog rules in the file `family.pl`:
```prolog
parent(john, bianca, mary).
parent(john, bianca, michael).
parent(peter, patricia, jennifer).
partner(X, Y) :- parent(X, Y, _).
```
```python
#!pip install langchain-prolog

from langchain_prolog import PrologConfig, PrologRunnable, PrologTool

TEST_SCRIPT = "family.pl"
```
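Before wiring the rules into a tool, you can sanity-check them directly. The sketch below assumes `PrologRunnable` accepts a raw query string and returns the solutions as a list of variable bindings; adjust to the actual API if it differs:

```python
# A minimal sanity check (a sketch): query family.pl directly through PrologRunnable.
# Assumes PrologRunnable can be built from a config with only a rules file and
# that invoke() takes a query string and returns the variable bindings.
prolog = PrologRunnable(prolog_config=PrologConfig(rules_file=TEST_SCRIPT))
print(prolog.invoke("parent(john, X, Y)"))
# Expected bindings: X = bianca, Y = mary and X = bianca, Y = michael
```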
## Instantiation
First create the Prolog tool:
```python
schema = PrologRunnable.create_schema("parent", ["men", "women", "child"])
config = PrologConfig(
    rules_file=TEST_SCRIPT,
    query_schema=schema,
)
prolog_tool = PrologTool(
    prolog_config=config,
    name="family_query",
    description="""
        Query family relationships using Prolog.
        parent(X, Y, Z) implies only that Z is a child of X and Y.
        Input can be a query string like 'parent(john, X, Y)' or 'john, X, Y'.
        You have to specify 3 parameters: men, women, child. Do not use quotes.
    """,
)
```
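You can also invoke the tool directly to test it before handing it to an LLM. This is a sketch that assumes LangChain's standard `BaseTool.invoke` with a dict of the schema fields, where `None` stands for an unbound Prolog variable:

```python
# Direct invocation (a sketch): pass the schema fields as a dict.
# None is assumed to map to an unbound Prolog variable, as in the
# tool-call arguments shown later in this walkthrough.
print(prolog_tool.invoke({"men": "john", "women": None, "child": None}))
```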
## Invocation
### Using a Prolog tool with an LLM and function calling
```python
#!pip install python-dotenv

from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv(), override=True)
```

```
True
```
```python
#!pip install langchain-openai

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
```
To use the tool, bind it to the LLM and query the model:
```python
llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools([prolog_tool])
messages = [HumanMessage("Who are John's children?")]
response = llm_with_tools.invoke(messages)
```
The LLM will respond with a tool call request:
```python
messages.append(response)
response.tool_calls[0]
```

```
{'name': 'family_query',
 'args': {'men': 'john', 'women': None, 'child': None},
 'id': 'call_3VazWUstCGlY8zczi05TaduU',
 'type': 'tool_call'}
```
The tool takes this request and queries the Prolog database:
```python
tool_msg = prolog_tool.invoke(response.tool_calls[0])
```
The tool returns a list with all the solutions for the query:
```python
messages.append(tool_msg)
tool_msg
```

```
ToolMessage(content='[{"Women": "bianca", "Child": "mary"}, {"Women": "bianca", "Child": "michael"}]', name='family_query', tool_call_id='call_3VazWUstCGlY8zczi05TaduU')
```
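If you want to post-process the solutions yourself, note that the tool message content above is a JSON-encoded list of variable bindings, one dict per solution. A minimal sketch:

```python
import json

# The ToolMessage content is a JSON list of bindings, one dict per solution.
solutions = json.loads(tool_msg.content)
children = [s["Child"] for s in solutions]  # ['mary', 'michael']
```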
We then pass the tool message back to the LLM, and the LLM answers the original query using the tool response:
```python
answer = llm_with_tools.invoke(messages)
print(answer.content)
```

```
John has two children: Mary and Michael. Their mother is Bianca.
```
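A model may request more than one tool call in a single response. A general pattern (a sketch of the standard LangChain tool-calling loop, not specific to langchain-prolog) is to answer every requested call before asking the model again:

```python
# General pattern (a sketch): execute every tool call the model requested,
# append each resulting ToolMessage, then let the model produce the final answer.
for tool_call in response.tool_calls:
    messages.append(prolog_tool.invoke(tool_call))
answer = llm_with_tools.invoke(messages)
```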
## Chaining
### Using a Prolog tool with an agent
To use the Prolog tool with an agent, pass it to the agent's constructor:
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)
tools = [prolog_tool]
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
```
The agent takes the query and uses the Prolog tool if needed:
```python
answer = agent_executor.invoke({"input": "Who are John's children?"})
```
The agent then receives the tool response as part of the `agent_scratchpad` placeholder and generates the answer:
print(answer["output"])
John has two children: Mary and Michael, with Bianca as their mother.
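To watch the agent's intermediate tool calls, you can enable verbose logging or stream the steps; a sketch using standard `AgentExecutor` options:

```python
# Optional (a sketch): log intermediate steps and stream them as they happen.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
for step in agent_executor.stream({"input": "Who are Jennifer's parents?"}):
    print(step)
```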
## API reference
(TODO: update with API reference once built.)
See https://github.com/apisani1/langchain-prolog/tree/main for details.
## Related
- Tool conceptual guide
- Tool how-to guides