Prompt Templates

Another key abstraction in the LangChain ecosystem is the Prompt Template, which lets you dynamically fill in prompts at run time from pre-written templates.

Let's perform the standard task of translating text from Chinese to English. We normally prompt in this manner:

text
Translate 人工智能 to English

If we want to translate another text, we need to retype the entire prompt, which is inefficient. Instead, we can leverage LangChain's PromptTemplate to reuse prompts across similar use cases.

Using Prompt Template Abstraction

We will import the ChatPromptTemplate class and instantiate it with a list of tuples, each containing a role and content. Each tuple evaluates to a SystemMessage or HumanMessage (e.g. ('system', 'You are a helpful assistant')).

python
from langchain_core.prompts import ChatPromptTemplate
# Instantiate Template
template = ChatPromptTemplate([
    ('system', 'You are a language translator specialising in translating Chinese text to English'),
    ('human', 'Translate {text} to english')
])

To fill in the dynamic parts of the prompt at run time, we simply call the .invoke() method and pass each input variable as a key in a dictionary:

python
print(template.invoke({'text': '人工智能'}))

Output:

python
messages=[SystemMessage(content='You are a language translator specialising in translating Chinese text to English', additional_kwargs={}, response_metadata={}), 
HumanMessage(content='Translate 人工智能 to english', additional_kwargs={}, response_metadata={})]

Notice that we get back a list of messages, which is a perfect input for an LLM.

Prompt Templates with LLM

Let's instantiate our LLM and use the | operator to pipe the output of the filled template into the LLM as input.

python
from langchain_openai import ChatOpenAI
# Instantiating LLM
llm = ChatOpenAI(model='gpt-4o-mini', temperature=0)
# Create Template
template = ChatPromptTemplate([
    ('system', 'You are a language translator specialising in translating Chinese text to English'),
    ('human', 'Translate {text} to English')
])
# Create Chain
chain = template | llm
# Invoke Chain with Inputs
translation_1 = chain.invoke({'text': '人工智能'})
translation_2 = chain.invoke({'text': '如梦幻泡影'})
print(translation_1)
print(translation_2)

Invoking & Runnables:

When we run chain.invoke(), LangChain calls the .invoke() method on every component of the chain separated by |.

The output of the first component is passed as input to the second component.

Thus, with chain = template | llm, chain.invoke({'text': '人工智能'}) is analogous to llm.invoke(template.invoke({'text': '人工智能'})).

The .invoke() method belongs to the Runnable interface, and you can read more about it here.