
ChatDeepSeek

This will help you get started with DeepSeek's hosted chat models. For detailed documentation of all ChatDeepSeek features and configurations, head to the API reference.

tip

DeepSeek's models are open source and can also be run locally (e.g., with Ollama) or through other inference providers (e.g., Fireworks, Together). A local setup might look like the sketch below.
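As a minimal sketch of the local route, this assumes the langchain-ollama package is installed and a DeepSeek model (e.g. deepseek-r1) has already been pulled with `ollama pull deepseek-r1`:

from langchain_ollama import ChatOllama

# Assumes Ollama is running locally and `ollama pull deepseek-r1` has been run
local_llm = ChatOllama(model="deepseek-r1", temperature=0)
local_llm.invoke("Hello!")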

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatDeepSeek | langchain-deepseek-official | ❌ | beta | ✅ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |

Setup

To access DeepSeek models you'll need to create a DeepSeek account, get an API key, and install the langchain-deepseek-official integration package.

Credentials

Head to DeepSeek's API Key page to sign up for DeepSeek and generate an API key. Once you've done this, set the DEEPSEEK_API_KEY environment variable:

import getpass
import os

if not os.getenv("DEEPSEEK_API_KEY"):
    os.environ["DEEPSEEK_API_KEY"] = getpass.getpass("Enter your DeepSeek API key: ")

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain DeepSeek integration lives in the langchain-deepseek-official package:

%pip install -qU langchain-deepseek-official

Instantiation

Now we can instantiate our model object and generate chat completions:

from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(
    model="deepseek-chat",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)
API Reference: ChatDeepSeek

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg.content
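Because the feature table above lists token-level streaming, native async, and token usage as supported, you can also consume the response incrementally, call the model from async code, or inspect usage metadata. A minimal sketch, reusing the llm and messages defined above:

# Stream the reply token by token
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)

# Inside an async function, the same call can be awaited:
# ai_msg = await llm.ainvoke(messages)

# Token usage is reported on the response message
print(ai_msg.usage_metadata)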

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: ChatPromptTemplate
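The feature table above also lists tool calling and structured output as supported. The sketch below is illustrative rather than part of this page's original examples; the get_weather tool and Translation schema are hypothetical names invented here:

from pydantic import BaseModel, Field
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return the weather for a city."""
    # Hypothetical stub for illustration only
    return f"Sunny in {city}"

# Tool calling: the model decides when to call the bound tool
llm_with_tools = llm.bind_tools([get_weather])
llm_with_tools.invoke("What's the weather in Paris?").tool_calls

class Translation(BaseModel):
    """A translation of the input text."""
    text: str = Field(description="The translated text")
    language: str = Field(description="The target language")

# Structured output: responses are coerced into the Translation schema
structured_llm = llm.with_structured_output(Translation)
structured_llm.invoke("Translate 'I love programming.' to German.")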

API reference

For detailed documentation of all ChatDeepSeek features and configurations, head to the API Reference.

