
ClovaXEmbeddings

This notebook covers how to get started with embedding models provided by CLOVA Studio. For detailed documentation on ClovaXEmbeddings features and configuration options, please refer to the API reference.

Overview

Integration details

| Provider | Package |
| :--- | :--- |
| Naver | langchain-naver |

Setup

Before using embedding models provided by CLOVA Studio, you must complete the four steps below.

  1. Create a NAVER Cloud Platform account
  2. Apply to use CLOVA Studio
  3. Create a CLOVA Studio Test App or Service App for the model you want to use (see here)
  4. Issue a Test or Service API key (see here)

Credentials

Set the CLOVASTUDIO_API_KEY environment variable with your API key.

import getpass
import os

if not os.getenv("CLOVASTUDIO_API_KEY"):
    os.environ["CLOVASTUDIO_API_KEY"] = getpass.getpass("Enter CLOVA Studio API Key: ")

Installation

The ClovaXEmbeddings integration lives in the langchain-naver package:

# install package
%pip install -qU langchain-naver

Instantiation

Now we can instantiate our embeddings object and embed a query or document:

  • There are several embedding models available in CLOVA Studio. Please refer here for further details.
  • Note that you might need to normalize the embeddings depending on your specific use case.
from langchain_naver import ClovaXEmbeddings

embeddings = ClovaXEmbeddings(
    model="clir-emb-dolphin"  # set to the model name of the corresponding test/service app; defaults to `clir-emb-dolphin`
)
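As the note above mentions, some use cases (e.g. cosine-similarity search) expect unit-length vectors. A minimal L2-normalization sketch in pure Python; the sample values are illustrative stand-ins for a real embedding returned by `embeddings.embed_query(...)`:

```python
import math


def l2_normalize(vector: list[float]) -> list[float]:
    """Scale a vector so its L2 norm is 1.0 (leave zero vectors unchanged)."""
    norm = math.sqrt(sum(x * x for x in vector))
    if norm == 0.0:
        return vector
    return [x / norm for x in vector]


# Illustrative raw values; a real CLOVA Studio embedding has many more dimensions.
raw = [-0.0947, -0.4077, -0.5513, 1.6024]
unit = l2_normalize(raw)
print(math.sqrt(sum(x * x for x in unit)))  # norm of the normalized vector
```

You can apply the same function to each vector returned by `embed_documents` before storing them, if your vector store or similarity metric assumes normalized inputs.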

Indexing and Retrieval

Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our RAG tutorials.

Below, see how to index and retrieve data using the embeddings object we initialized above. In this example, we will index and retrieve a sample document in the InMemoryVectorStore.

# Create a vector store with a sample text
from langchain_core.vectorstores import InMemoryVectorStore

text = "CLOVA Studio is an AI development tool that allows you to customize your own HyperCLOVA X models."

vectorstore = InMemoryVectorStore.from_texts(
    [text],
    embedding=embeddings,
)

# Use the vectorstore as a retriever
retriever = vectorstore.as_retriever()

# Retrieve the most similar text
retrieved_documents = retriever.invoke("What is CLOVA Studio?")

# show the retrieved document's content
retrieved_documents[0].page_content
API Reference: InMemoryVectorStore
'CLOVA Studio is an AI development tool that allows you to customize your own HyperCLOVA X models.'

Direct Usage

Under the hood, the vectorstore and retriever implementations are calling embeddings.embed_documents(...) and embeddings.embed_query(...) to create embeddings for the text(s) used in from_texts and retrieval invoke operations, respectively.

You can directly call these methods to get embeddings for your own use cases.

Embed single texts

You can embed single texts or documents with embed_query:

single_vector = embeddings.embed_query(text)
print(str(single_vector)[:100]) # Show the first 100 characters of the vector
[-0.094717406, -0.4077411, -0.5513184, 1.6024436, -1.3235079, -1.0720996, -0.44471845, 1.3665184, 0.

Embed multiple texts

You can embed multiple texts with embed_documents:

text2 = "LangChain is a framework for building context-aware reasoning applications"
two_vectors = embeddings.embed_documents([text, text2])
for vector in two_vectors:
    print(str(vector)[:100])  # Show the first 100 characters of the vector
[-0.094717406, -0.4077411, -0.5513184, 1.6024436, -1.3235079, -1.0720996, -0.44471845, 1.3665184, 0.
[-0.25525448, -0.84877056, -0.6928286, 1.5867524, -1.2930486, -0.8166254, -0.17934391, 1.4236152, 0.
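Once you have vectors from `embed_documents`, you can compare texts directly. A small cosine-similarity sketch in pure Python; the short sample vectors below are illustrative stand-ins for real embeddings:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Illustrative vectors; in practice pass the full vectors returned by
# `embeddings.embed_documents([text, text2])`.
vec1 = [-0.0947, -0.4077, -0.5513, 1.6024]
vec2 = [-0.2552, -0.8487, -0.6928, 1.5867]
print(cosine_similarity(vec1, vec2))
```

This is the same metric most vector stores use under the hood when ranking retrieved documents by similarity.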

API Reference

For detailed documentation on ClovaXEmbeddings features and configuration options, please refer to the API reference.

