Prompt Templates
Another key abstraction in the LangChain ecosystem is the Prompt Template, which lets you modify prompts dynamically at run time using pre-written templates.
Let's perform the standard task of translating text from Chinese to English. We normally prompt in this manner:
Translate 人工智能 to English
If we want to translate another text, we would need to retype the entire prompt, which is inefficient. Instead, we can leverage LangChain's PromptTemplate to reuse prompts across similar use cases.
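The underlying idea can be sketched in plain Python with str.format before we bring in LangChain (the variable name text is our own choice here, not anything LangChain requires):

```python
# A reusable prompt "template" using plain Python string formatting
prompt_template = 'Translate {text} to English'

# Fill in the dynamic part at run time
prompt_1 = prompt_template.format(text='人工智能')
prompt_2 = prompt_template.format(text='如梦幻泡影')

print(prompt_1)  # Translate 人工智能 to English
print(prompt_2)  # Translate 如梦幻泡影 to English
```

LangChain's PromptTemplate builds on this same substitution idea, adding validation and composition on top.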
Using Prompt Template Abstraction
First, we will import the necessary component and create a template.
The ChatPromptTemplate class accepts a list of tuples, where each tuple contains a role and its content. These are converted into SystemMessage and HumanMessage objects respectively.
from langchain_core.prompts import ChatPromptTemplate
# Instantiate Template
template = ChatPromptTemplate([
('system', 'You are a language translator specialising in translating Chinese text to English'),
('human', 'Translate {text} to English')
])
To fill in the dynamic parts of the prompt at run time, use the .invoke() method and pass a dictionary mapping each input variable name to its value:
print(template.invoke({'text': '人工智能'}))
Output:
messages=[SystemMessage(content='You are a language translator specialising in translating Chinese text to English', additional_kwargs={}, response_metadata={}),
HumanMessage(content='Translate 人工智能 to english', additional_kwargs={}, response_metadata={})]
The invoked template generates a list of messages that can be used with the LangChain Chat Model integration.
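Conceptually, filling the template amounts to substituting the input variables into each (role, content) tuple. A minimal stdlib-only sketch of that behaviour (this is an illustration of the idea, not LangChain's actual implementation):

```python
# Minimal sketch of a chat prompt template: substitute variables into
# each (role, content) tuple and emit role-tagged messages.
def fill_template(messages, variables):
    return [(role, content.format(**variables)) for role, content in messages]

template_msgs = [
    ('system', 'You are a language translator specialising in translating Chinese text to English'),
    ('human', 'Translate {text} to English'),
]

filled = fill_template(template_msgs, {'text': '人工智能'})
print(filled[1])  # ('human', 'Translate 人工智能 to English')
```

LangChain does the same substitution, but wraps each filled tuple in the corresponding message class (SystemMessage, HumanMessage) so downstream chat models know each message's role.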
Prompt Templates with LLM
Let's instantiate our LLM and use the | operator to pipe the output of the filled template into the LLM as input:
from langchain_openai import ChatOpenAI

# Instantiate LLM
llm = ChatOpenAI(model='gpt-4o-mini', temperature=0)
# Create Template
template = ChatPromptTemplate([
('system', 'You are a language translator specialising in translating Chinese text to English'),
('human', 'Translate {text} to English')
])
# Create Chain
chain = template | llm
# Invoke Chain with Inputs
translation_1 = chain.invoke({'text': '人工智能'})
translation_2 = chain.invoke({'text': '如梦幻泡影'})
# The chain returns an AIMessage; .content holds the translated text
print(translation_1.content)
print(translation_2.content)
Invoking & Runnables:
When we run chain.invoke(), it applies the .invoke() method to every component of the chain separated by |. The output of each component is passed as input to the next.
Thus, with chain = template | llm, calling chain.invoke({'text': '人工智能'}) is equivalent to llm.invoke(template.invoke({'text': '人工智能'})).
The .invoke() method belongs to the Runnable interface, and you can read more about it here.
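The pipe composition itself is easy to sketch in plain Python: an object implementing __or__ that chains .invoke() calls. This is a toy illustration of the mechanism, not LangChain's actual Runnable implementation:

```python
class Runnable:
    """Toy runnable: wraps a function and supports | composition."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # The output of self becomes the input of other
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Stand-ins for the template and the LLM
template = Runnable(lambda inputs: 'Translate {text} to English'.format(**inputs))
llm = Runnable(lambda prompt: f'[LLM saw: {prompt}]')

chain = template | llm
result = chain.invoke({'text': '人工智能'})
print(result)  # [LLM saw: Translate 人工智能 to English]

# The chain is equivalent to invoking each component in sequence
assert result == llm.invoke(template.invoke({'text': '人工智能'}))
```

Because every LangChain component implements this shared interface, templates, models, and output parsers can all be composed with | in the same way.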