ConversationBufferMemory¶
In [1]:
from langchain.memory import CassandraChatMessageHistory
from langchain.memory import ConversationBufferMemory
In [2]:
from cqlsession import getCQLSession, getCQLKeyspace
astraSession = getCQLSession()
astraKeyspace = getCQLKeyspace()
In [3]:
message_history = CassandraChatMessageHistory(
    session_id='conversation-0123',
    session=astraSession,
    keyspace='langchain',
    ttl_seconds=3600,
)
message_history.clear()
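The `ttl_seconds` argument makes stored messages expire automatically, here after one hour. As a rough sketch of the semantics only (a hypothetical toy class, not the actual Cassandra-backed implementation): a row written with a TTL simply stops being returned once its time is up.

```python
import time

class TTLMessageStore:
    """Toy in-memory stand-in illustrating TTL expiry semantics."""
    def __init__(self, ttl_seconds):
        self.ttl_seconds = ttl_seconds
        self._rows = []  # list of (written_at, message) pairs

    def add(self, message, now=None):
        now = time.time() if now is None else now
        self._rows.append((now, message))

    def messages(self, now=None):
        now = time.time() if now is None else now
        # Rows older than the TTL behave as if they were never written
        return [m for t, m in self._rows if now - t < self.ttl_seconds]

store = TTLMessageStore(ttl_seconds=3600)
store.add("Hello, how can I roast an apple?", now=0)
print(store.messages(now=10))    # still visible
print(store.messages(now=4000))  # expired, empty list
```

With `ttl_seconds=3600`, old conversation turns silently drop out of the history after an hour, so long-idle sessions start fresh without any explicit cleanup.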
Use in a ConversationChain¶
Create a Memory¶
The memory is created around the Cassandra-backed message history; the conversation will be injected into prompts through the memory's template variable (here left at its default name, history):
In [8]:
cassBuffMemory = ConversationBufferMemory(
    chat_memory=message_history,
)
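A buffer memory replays the whole stored exchange, verbatim, into the prompt. A minimal sketch of that rendering step, under simplified assumptions (the real class also handles message objects, configurable prefixes, and return formats):

```python
def render_buffer(messages, human_prefix="Human", ai_prefix="AI"):
    # Join the stored (speaker, text) pairs into the single string
    # that gets substituted into the prompt's history variable.
    lines = []
    for speaker, text in messages:
        prefix = human_prefix if speaker == "human" else ai_prefix
        lines.append(f"{prefix}: {text}")
    return "\n".join(lines)

history = [("human", "Hello, how can I roast an apple?"),
           ("ai", "Preheat your oven to 375 F...")]
print(render_buffer(history))
# Human: Hello, how can I roast an apple?
# AI: Preheat your oven to 375 F...
```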
Language model¶
We need an LLM to generate the responses:
In [9]:
from langchain.llms import OpenAI
llm = OpenAI()
Create a chain¶
In [10]:
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=cassBuffMemory,
)
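Conceptually, each predict call follows the same loop: render the buffered history, format the prompt, call the LLM, then save the new exchange back into the (here, Cassandra-backed) history so the next turn sees it. A self-contained sketch with a stubbed LLM in place of the real OpenAI call:

```python
PROMPT = ("The following is a friendly conversation between a human and an AI.\n"
          "Current conversation:\n{history}\nHuman: {input}\nAI:")

def fake_llm(prompt):
    # Stand-in for the real model call, for illustration only
    return " (model reply)"

def predict(history, user_input, llm=fake_llm):
    # 1. Render the buffered history into the prompt
    rendered = "\n".join(f"{who}: {text}" for who, text in history)
    # 2. Call the LLM on the formatted prompt
    reply = llm(PROMPT.format(history=rendered, input=user_input)).strip()
    # 3. Persist both sides of the exchange for the next turn
    history.append(("Human", user_input))
    history.append(("AI", reply))
    return reply

history = []
predict(history, "Hello, how can I roast an apple?")
predict(history, "Can I do it on a bonfire?")
print(len(history))  # 4: two turns, each with a Human and an AI message
```

Because step 3 writes to the shared history, the two `predict` calls below accumulate context: the second question is answered with the first exchange already in the prompt.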
In [12]:
conversation.predict(input="Hello, how can I roast an apple?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hello, how can I roast an apple?
AI:

> Finished chain.
Out[12]:
" Hi there! To roast an apple, you'll need to preheat your oven to 375 degrees Fahrenheit. Cut the apple into slices or wedges and spread them out on a baking sheet. Sprinkle with cinnamon and brown sugar. Bake for 30 minutes, or until the apple slices are tender. Enjoy your roasted apples!"
In [13]:
conversation.predict(input="Can I do it on a bonfire?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hello, how can I roast an apple?
AI: Hi there! To roast an apple, you'll need to preheat your oven to 375 degrees Fahrenheit. Cut the apple into slices or wedges and spread them out on a baking sheet. Sprinkle with cinnamon and brown sugar. Bake for 30 minutes, or until the apple slices are tender. Enjoy your roasted apples!
Human: Can I do it on a bonfire?
AI:

> Finished chain.
Out[13]:
" I'm sorry, I don't know the answer to that question. Roasting an apple requires an oven."
In [15]:
conversation.predict(input="What about a microwave, would the apple taste good?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hello, how can I roast an apple?
AI: Hi there! To roast an apple, you'll need to preheat your oven to 375 degrees Fahrenheit. Cut the apple into slices or wedges and spread them out on a baking sheet. Sprinkle with cinnamon and brown sugar. Bake for 30 minutes, or until the apple slices are tender. Enjoy your roasted apples!
Human: Can I do it on a bonfire?
AI: I'm sorry, I don't know the answer to that question. Roasting an apple requires an oven.
Human: What about a microwave, would the apple taste good?
AI:

> Finished chain.
Out[15]:
' No, microwaving an apple is not recommended. The result would be a soggy apple with no texture. Roasting is the best way to get a delicious, caramelized apple.'
In [16]:
message_history.messages
Out[16]:
[HumanMessage(content='Hello, how can I roast an apple?', additional_kwargs={}, example=False),
 AIMessage(content=" Hi there! To roast an apple, you'll need to preheat your oven to 375 degrees Fahrenheit. Cut the apple into slices or wedges and spread them out on a baking sheet. Sprinkle with cinnamon and brown sugar. Bake for 30 minutes, or until the apple slices are tender. Enjoy your roasted apples!", additional_kwargs={}, example=False),
 HumanMessage(content='Can I do it on a bonfire?', additional_kwargs={}, example=False),
 AIMessage(content=" I'm sorry, I don't know the answer to that question. Roasting an apple requires an oven.", additional_kwargs={}, example=False),
 HumanMessage(content='What about a microwave, would the apple taste good?', additional_kwargs={}, example=False),
 AIMessage(content=' No, microwaving an apple is not recommended. The result would be a soggy apple with no texture. Roasting is the best way to get a delicious, caramelized apple.', additional_kwargs={}, example=False)]
Manually tinkering with the prompt¶
In [17]:
from langchain import LLMChain, PromptTemplate
In [18]:
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
Chatbot:"""
prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"],
    template=template,
)
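Note that the declared input variables must match the placeholders in the template string: the memory's rendered conversation fills {chat_history}, and the user's turn fills {human_input}. The substitution itself is equivalent to plain Python string formatting:

```python
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
Chatbot:"""

# Equivalent to what prompt.format(...) produces for this template
filled = template.format(
    chat_history="Human: Tell me about springs\nAI: Springs are full of surprises!",
    human_input="Er ... I mean the other type actually.",
)
print(filled)
```

This is why the memory below is created with memory_key="chat_history": the key names the template variable the buffered conversation is substituted into.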
In [20]:
f_message_history = CassandraChatMessageHistory(
    session_id='conversation-funny-a001',
    session=astraSession,
    keyspace='langchain',
)
f_message_history.clear()
In [21]:
f_memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=f_message_history,
)
In [22]:
llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=f_memory,
)
In [23]:
llm_chain.predict(human_input="Tell me about springs")
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
Chatbot:

> Finished chain.
Out[23]:
' Springs are full of surprises! They bounce back with lots of energy and are always ready to take on a new challenge. Plus, they provide a great way to relax and have fun!'
In [24]:
llm_chain.predict(human_input='Er ... I mean the other type actually.')
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
AI: Springs are full of surprises! They bounce back with lots of energy and are always ready to take on a new challenge. Plus, they provide a great way to relax and have fun!
Human: Er ... I mean the other type actually.
Chatbot:

> Finished chain.
Out[24]:
" Oh, you mean the season? Well, spring is a season of renewal and growth. Everything is coming back to life after a cold winter and the world starts to bloom and blossom. It's definitely a time of joy and celebration!"