NVIDIA NIMs
The langchain-nvidia-ai-endpoints package contains LangChain integrations for building applications with models on the NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking, from the community as well as from NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA accelerated infrastructure and are deployed as NIMs: easy-to-use, prebuilt containers that deploy anywhere with a single command on NVIDIA accelerated infrastructure.
NVIDIA hosted deployments of NIMs are available to test on the NVIDIA API catalog. After testing, NIMs can be exported from NVIDIA's API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, giving enterprises ownership and full control of their IP and AI applications.
NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
This example goes over how to use LangChain to interact with NVIDIA supported models via the ChatNVIDIA class.
For more information on accessing the chat models through this API, check out the ChatNVIDIA documentation.
Installation
%pip install --upgrade --quiet langchain-nvidia-ai-endpoints
Setup
To get started:
1. Create a free account with NVIDIA, which hosts NVIDIA AI Foundation models.
2. Click on your model of choice.
3. Under Input, select the Python tab, and click Get API Key. Then click Generate Key.
4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
import getpass
import os
# del os.environ['NVIDIA_API_KEY'] ## delete key and reset
if os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    print("Valid NVIDIA_API_KEY already in environment. Delete to reset")
else:
    nvapi_key = getpass.getpass("NVAPI Key (starts with nvapi-): ")
    assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvapi_key
Working with NVIDIA API Catalog
## Core LC Chat Interface
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="mistralai/mixtral-8x7b-instruct-v0.1")
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
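The constructor also accepts common sampling parameters such as temperature, max_tokens, and top_p (they appear again in the chat-history example later in this notebook). A minimal sketch, with illustrative values rather than recommendations:
llm = ChatNVIDIA(
    model="mistralai/mixtral-8x7b-instruct-v0.1",
    temperature=0.2,  # lower temperature for more deterministic output
    max_tokens=256,  # cap the response length
    top_p=0.9,  # nucleus sampling threshold
)
print(llm.invoke("Summarize LangChain in one sentence.").content)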
Working with NVIDIA NIMs
When ready to deploy, you can self-host models with NVIDIA NIMβwhich is included with the NVIDIA AI Enterprise software licenseβand run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.
from langchain_nvidia_ai_endpoints import ChatNVIDIA
# connect to a chat NIM running at localhost:8000, specifying a specific model
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct")
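Once the NIM is up, the local endpoint behaves like the hosted one, so the usual calls work unchanged. A minimal sketch, assuming a NIM serving meta/llama3-8b-instruct is actually running at localhost:8000:
# Assumes a NIM serving meta/llama3-8b-instruct is reachable at localhost:8000
print(llm.invoke("What is a NIM in one sentence?").content)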
Stream, Batch, and Async
These models natively support streaming, and, as is the case with all LangChain LLMs, they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples.
print(llm.batch(["What's 2*3?", "What's 2*6?"]))
# Or via the async API
# await llm.abatch(["What's 2*3?", "What's 2*6?"])
for chunk in llm.stream("How far can a seagull fly in one day?"):
    # Show the token separations
    print(chunk.content, end="|")

async for chunk in llm.astream(
    "How long does it take for monarch butterflies to migrate?"
):
    print(chunk.content, end="|")
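The async counterparts of invoke and batch follow the same pattern; top-level await works here just like with astream above:
# Async counterparts of invoke and batch (top-level await, as in a notebook)
result = await llm.ainvoke("What's 2*3?")
print(result.content)
results = await llm.abatch(["What's 2*3?", "What's 2*6?"])
print([r.content for r in results])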
Supported models
Querying available_models will still give you all of the other models offered by your API credentials. The playground_ prefix is optional.
ChatNVIDIA.get_available_models()
# llm.get_available_models()
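Each returned entry is a model record; for example, you can filter the listing by id. The id attribute is an assumption here and may differ across package versions:
# Filter the listing by model id (attribute name assumed; may vary by version)
chat_ids = [m.id for m in ChatNVIDIA.get_available_models() if "instruct" in m.id]
print(chat_ids[:10])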
Model types
All of the models above are supported and can be accessed via ChatNVIDIA.
Some model types support unique prompting techniques and chat messages. We will review a few important ones below.
To find out more about a specific model, please navigate to the API section of an AI Foundation model as linked here.
General Chat
Models such as meta/llama3-8b-instruct and mistralai/mixtral-8x22b-instruct-v0.1 are good all-around models that you can use with any LangChain chat messages. Example below.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA
prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful AI assistant named Fred."), ("user", "{input}")]
)
chain = prompt | ChatNVIDIA(model="meta/llama3-8b-instruct") | StrOutputParser()
for txt in chain.stream({"input": "What's your name?"}):
    print(txt, end="")
Code Generation
These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-generation and structured code tasks. An example of this is meta/codellama-70b.
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an expert coding AI. Respond only in valid python; no narration whatsoever.",
        ),
        ("user", "{input}"),
    ]
)
chain = prompt | ChatNVIDIA(model="meta/codellama-70b") | StrOutputParser()
for txt in chain.stream({"input": "How do I solve this fizz buzz problem?"}):
    print(txt, end="")
Multimodal
NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is nvidia/neva-22b.
These models accept LangChain's standard image formats, and they accept labels, similar to the Steering LLMs above. In addition to creativity, complexity, and verbosity, these models support a quality toggle.
Below is an example use:
import IPython
import requests
image_url = "https://www.nvidia.com/content/dam/en-zz/Solutions/research/ai-playground/nvidia-picasso-3c33-p@2x.jpg" ## Large Image
image_content = requests.get(image_url).content
IPython.display.Image(image_content)
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="nvidia/neva-22b")
Passing an image as a URL
from langchain_core.messages import HumanMessage
llm.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "Describe this image:"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ]
        )
    ]
)
Passing an image as a base64 encoded string
At the moment, some extra processing happens client-side to support larger images like the one above. But for smaller images (and to better illustrate the process going on under the hood), we can directly pass in the image as shown below:
import IPython
import requests
image_url = "https://picsum.photos/seed/kitten/300/200"
image_content = requests.get(image_url).content
IPython.display.Image(image_content)
import base64
from langchain_core.messages import HumanMessage
## Works for simpler images. For larger images, see actual implementation
b64_string = base64.b64encode(image_content).decode("utf-8")
llm.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "Describe this image:"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{b64_string}"},
                },
            ]
        )
    ]
)
Directly within the string
The NVIDIA API uniquely accepts images as base64-encoded strings inlined within <img/> HTML tags. While this isn't interoperable with other LLMs, you can directly prompt the model accordingly.
base64_with_mime_type = f"data:image/png;base64,{b64_string}"
llm.invoke(f'What\'s in this image?\n<img src="{base64_with_mime_type}" />')
Advanced Use Case: Forcing Payload
You may notice that some newer models may have strong parameter expectations that the LangChain connector may not support by default. For example, we cannot invoke the Kosmos model at the time of this notebook's latest release due to the lack of a streaming argument on the server side:
from langchain_nvidia_ai_endpoints import ChatNVIDIA
kosmos = ChatNVIDIA(model="microsoft/kosmos-2")
from langchain_core.messages import HumanMessage
# kosmos.invoke(
# [
# HumanMessage(
# content=[
# {"type": "text", "text": "Describe this image:"},
# {"type": "image_url", "image_url": {"url": image_url}},
# ]
# )
# ]
# )
# Exception: [422] Unprocessable Entity
# body -> stream
# Extra inputs are not permitted (type=extra_forbidden)
# RequestID: 35538c9a-4b45-4616-8b75-7ef816fccf38
For a simple use case like this, we can actually try to force the payload argument of our underlying client by specifying the payload_fn
function as follows:
def drop_streaming_key(d):
    """Takes in payload dictionary, outputs new payload dictionary"""
    if "stream" in d:
        d.pop("stream")
    return d
## Override the payload passthrough. Default is to pass through the payload as is.
kosmos = ChatNVIDIA(model="microsoft/kosmos-2")
kosmos.client.payload_fn = drop_streaming_key
kosmos.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "Describe this image:"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ]
        )
    ]
)
For more advanced or custom use cases (e.g. supporting diffusion models), you may be interested in leveraging the NVEModel client as a requests backbone. The NVIDIAEmbeddings class is a good source of inspiration for this.
Example usage within RunnableWithMessageHistory
Like any other integration, ChatNVIDIA supports chat utilities like RunnableWithMessageHistory, which is analogous to using ConversationChain. Below, we show the LangChain RunnableWithMessageHistory example applied to the mistralai/mixtral-8x22b-instruct-v0.1 model.
%pip install --upgrade --quiet langchain
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
# store is a dictionary that maps session IDs to their corresponding chat histories.
store = {} # memory is maintained outside the chain
# A function that returns the chat history for a given session ID.
def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]
chat = ChatNVIDIA(
    model="mistralai/mixtral-8x22b-instruct-v0.1",
    temperature=0.1,
    max_tokens=100,
    top_p=1.0,
)
# Define a RunnableConfig object, with a `configurable` key. session_id determines thread
config = {"configurable": {"session_id": "1"}}
conversation = RunnableWithMessageHistory(
    chat,
    get_session_history,
)
conversation.invoke(
    "Hi I'm Srijan Dubey.",  # input or query
    config=config,
)
conversation.invoke(
    "I'm doing well! Just having a conversation with an AI.",
    config=config,
)
conversation.invoke(
    "Tell me about yourself.",
    config=config,
)
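Since the chat history lives in the store dictionary defined above, you can inspect what was recorded for the session directly:
# Inspect the recorded history for session "1" (created by get_session_history)
for message in store["1"].messages:
    print(f"{message.type}: {message.content}")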