ChatGoodfire
This will help you get started with Goodfire chat models. For detailed documentation of all ChatGoodfire features and configurations, head to the PyPI project page, or go directly to the Goodfire SDK docs. All of the Goodfire-specific functionality (e.g. SAE features, variants, etc.) is available via the main goodfire package. This integration is a wrapper around the Goodfire SDK.
Overview
Integration details
Class | Package | Local | Serializable | JS support |
---|---|---|---|---|
ChatGoodfire | langchain-goodfire | ❌ | ❌ | ❌ |
Model features
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|---|
❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
Setup
To access Goodfire models you'll need to create a Goodfire account, get an API key, and install the langchain-goodfire integration package.
Credentials
Head to Goodfire Settings to sign up for Goodfire and generate an API key. Once you've done this, set the GOODFIRE_API_KEY environment variable.
import getpass
import os

if not os.getenv("GOODFIRE_API_KEY"):
    os.environ["GOODFIRE_API_KEY"] = getpass.getpass("Enter your Goodfire API key: ")
If you want to get automated tracing of your model calls, you can also set your LangSmith API key by uncommenting below:
# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
Installation
The LangChain Goodfire integration lives in the langchain-goodfire package:
%pip install -qU langchain-goodfire
Note: you may need to restart the kernel to use updated packages.
Instantiation
Now we can instantiate our model object and generate chat completions:
import goodfire
from langchain_goodfire import ChatGoodfire
base_variant = goodfire.Variant("meta-llama/Llama-3.3-70B-Instruct")
llm = ChatGoodfire(
    model=base_variant,
    temperature=0,
    max_completion_tokens=1000,
    seed=42,
)
Invocation
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = await llm.ainvoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", additional_kwargs={}, response_metadata={}, id='run-8d43cf35-bce8-4827-8935-c64f8fb78cd0-0', usage_metadata={'input_tokens': 51, 'output_tokens': 39, 'total_tokens': 90})
print(ai_msg.content)
J'adore la programmation.
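ChatGoodfire supports native async and token-level streaming (see the feature table above). As a minimal sketch, you can stream tokens as they arrive using LangChain's standard astream method:
async for chunk in llm.astream(messages):
    print(chunk.content, end="")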
Chaining
We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)
chain = prompt | llm
await chain.ainvoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
AIMessage(content='Ich liebe das Programmieren. How can I help you with programming today?', additional_kwargs={}, response_metadata={}, id='run-03d1a585-8234-46f1-a8df-bf9143fe3309-0', usage_metadata={'input_tokens': 46, 'output_tokens': 46, 'total_tokens': 92})
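Because the chain is a standard LangChain Runnable, you can also fan out over several inputs at once. A minimal sketch using the generic abatch method (not Goodfire-specific):
await chain.abatch(
    [
        {"input_language": "English", "output_language": "German", "input": "I love programming."},
        {"input_language": "English", "output_language": "French", "input": "I love programming."},
    ]
)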
Goodfire-specific functionality
To use Goodfire-specific functionality such as SAE features and variants, you can use the goodfire package directly.
client = goodfire.Client(api_key=os.environ["GOODFIRE_API_KEY"])
pirate_features = client.features.search(
    "assistant should roleplay as a pirate", base_variant
)
pirate_features
FeatureGroup([
    0: "The assistant should adopt the persona of a pirate",
    1: "The assistant should roleplay as a pirate",
    2: "The assistant should engage with pirate-themed content or roleplay as a pirate",
    3: "The assistant should roleplay as a character",
    4: "The assistant should roleplay as a specific character",
    5: "The assistant should roleplay as a game character or NPC",
    6: "The assistant should roleplay as a human character",
    7: "Requests for the assistant to roleplay or pretend to be something else",
    8: "Requests for the assistant to roleplay or pretend to be something",
    9: "The assistant is being assigned a role or persona to roleplay"
])
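Next, create a fresh variant and steer it by setting weights on the most relevant features; positive weights strengthen a feature's influence on the model (the modest values below are illustrative, not prescribed):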
pirate_variant = goodfire.Variant("meta-llama/Llama-3.3-70B-Instruct")
pirate_variant.set(pirate_features[0], 0.4)
pirate_variant.set(pirate_features[1], 0.3)
await llm.ainvoke("Tell me a joke", model=pirate_variant)
AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field! Arrr! Hope that made ye laugh, matey!', additional_kwargs={}, response_metadata={}, id='run-7d8bd30f-7f80-41cb-bdb6-25c29c22a7ce-0', usage_metadata={'input_tokens': 35, 'output_tokens': 60, 'total_tokens': 95})
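Note that passing model=pirate_variant overrides the variant for this call only; the llm object keeps the base_variant it was instantiated with, so a plain await llm.ainvoke("Tell me a joke") would use the unsteered model.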
API reference
For detailed documentation of all ChatGoodfire features and configurations, head to the API reference.
Related
- Chat model conceptual guide
- Chat model how-to guides