ChatPipeshift

This will help you get started with Pipeshift chat models. For detailed documentation of all ChatPipeshift features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatPipeshift | langchain-pipeshift | ❌ | - | ❌ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | - |

Setup

To access Pipeshift models, you'll need to create an account on Pipeshift, get an API key, and install the langchain-pipeshift integration package.

Credentials

Head to Pipeshift to sign up and generate an API key. Once you've done this, set the PIPESHIFT_API_KEY environment variable:

import getpass
import os

if not os.getenv("PIPESHIFT_API_KEY"):
    os.environ["PIPESHIFT_API_KEY"] = getpass.getpass("Enter your Pipeshift API key: ")

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain Pipeshift integration lives in the langchain-pipeshift package:

%pip install -qU langchain-pipeshift
Note: you may need to restart the kernel to use updated packages.

Instantiation

Now we can instantiate our model object and generate chat completions:

from langchain_pipeshift import ChatPipeshift

llm = ChatPipeshift(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    temperature=0,
    max_tokens=512,
    # other params...
)

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content='Here is the translation:\n\nJe suis amoureux du programme. \n\nHowever, a more common translation would be:\n\nJ\'aime programmer.\n\nNote that "Je suis amoureux" typically implies romantic love, whereas "J\'aime" is a more casual way to express affection or enjoyment for an activity, in this case, programming.', additional_kwargs={}, response_metadata={}, id='run-5cad8e5c-d089-44a8-8dcd-22736cde7d7b-0')
print(ai_msg.content)
Here is the translation:

Je suis amoureux du programme.

However, a more common translation would be:

J'aime programmer.

Note that "Je suis amoureux" typically implies romantic love, whereas "J'aime" is a more casual way to express affection or enjoyment for an activity, in this case, programming.

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: ChatPromptTemplate
AIMessage(content="Das ist schรถn! Du liebst Programmieren! (That's great! You love programming!)\n\nWould you like to know the German translation of a specific programming-related term or phrase, or would you like me to help you with something else?", additional_kwargs={}, response_metadata={}, id='run-8a4b7d56-23d9-43a7-8fb2-e05f556d94bd-0')

API reference

For detailed documentation of all ChatPipeshift features and configurations, head to the API reference: https://dashboard.pipeshift.com/docs
