To build artificial intelligence (AI) systems that can interact with people in smarter, safer and more useful ways, we need to teach them to adapt to our needs. Today, we’re releasing BlenderBot 3, our state-of-the-art conversational agent that can converse naturally with people, who can then provide feedback to the model on how to improve its responses. We will be sharing data from these interactions, and we’ve shared the BlenderBot 3 model and model cards with the scientific community to help advance research in conversational AI.
The BlenderBot series has made progress in combining conversational skills — like personality, empathy and knowledge — incorporating long-term memory, and searching the internet to carry out meaningful conversations. BlenderBot 3 inherits these skills and delivers superior performance because it’s built from Meta AI’s publicly available OPT-175B language model — approximately 58 times the size of BlenderBot 2.
Because conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.
Allowing an AI system to interact with people in the real world leads to longer, more diverse conversations, as well as more varied feedback. For example, you can react to each chat message in our BlenderBot 3 demo by clicking either the thumbs-up or thumbs-down icons. Choosing a thumbs-down lets you explain why you disliked the message — whether it was off-topic, nonsensical, rude, spam-like or something else. You can also submit feedback in the chat itself.
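As a rough illustration, feedback like this could be captured in a small structured record. The class and field names below are hypothetical, not the demo's actual schema; they simply mirror the reactions and dislike reasons described above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Reaction(Enum):
    THUMBS_UP = "thumbs_up"
    THUMBS_DOWN = "thumbs_down"

class DislikeReason(Enum):
    # Reason categories offered when a user gives a thumbs-down
    OFF_TOPIC = "off_topic"
    NONSENSICAL = "nonsensical"
    RUDE = "rude"
    SPAM = "spam"
    OTHER = "other"

@dataclass
class MessageFeedback:
    message_id: str
    reaction: Reaction
    reason: Optional[DislikeReason] = None  # only set for thumbs-down
    free_text: Optional[str] = None         # optional in-chat feedback

# Example: a user dislikes a message for being off-topic
fb = MessageFeedback("msg-001", Reaction.THUMBS_DOWN, DislikeReason.OFF_TOPIC)
```

Keeping the reason as an enum rather than free text makes the aggregated feedback easy to filter and count when deciding which failure modes to prioritize.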
To improve BlenderBot 3’s ability to engage with people, we trained it with a large amount of publicly available language data. Many of the datasets used were collected by our own team, including one new dataset consisting of more than 20,000 conversations with people, covering more than 1,000 topics of conversation. We trained BlenderBot 3 to learn from conversations to improve upon the skills people find most important — from talking about healthy recipes to finding child-friendly amenities in the city.
When the chatbot’s response is unsatisfactory, we collect feedback on it. Using this data, we can improve the model so that it doesn’t repeat its mistakes.
We understand that not everyone who uses chatbots has good intentions, so we also developed new learning algorithms to distinguish between helpful responses and harmful examples. Over time, we will use this technique to make our models more responsible and safe for all users.
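One simple way to down-weight bad-faith feedback, sketched below under our own assumptions rather than as the actual algorithm used, is to score each user by how often their flags agree with the majority of other users, then trust low-scoring (likely adversarial) users less. All names and the toy data are illustrative.

```python
from collections import defaultdict

def user_trust_scores(flags):
    """Score users by agreement with the per-message majority vote.

    flags: list of (user_id, message_id, flagged_bad) tuples, where
    flagged_bad is True if the user marked the message as harmful.
    Returns {user_id: fraction of that user's flags matching the majority}.
    """
    # Collect all votes for each message
    votes = defaultdict(list)
    for _, msg, bad in flags:
        votes[msg].append(bad)
    # Strict-majority verdict per message
    majority = {msg: sum(v) * 2 > len(v) for msg, v in votes.items()}
    # Trust = share of a user's flags that match the majority verdict
    agree, total = defaultdict(int), defaultdict(int)
    for user, msg, bad in flags:
        total[user] += 1
        agree[user] += int(bad == majority[msg])
    return {u: agree[u] / total[u] for u in total}

# Toy example: "troll" consistently votes against the consensus
flags = [
    ("alice", "m1", True),  ("bob", "m1", True),  ("troll", "m1", False),
    ("alice", "m2", False), ("bob", "m2", False), ("troll", "m2", True),
]
scores = user_trust_scores(flags)
```

In practice a score like this could gate which feedback examples enter the training set, so a small group of trolls cannot steer the model toward harmful behavior.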
We found that BlenderBot 3 improved by 31% on conversational tasks compared with its predecessors. It’s also twice as knowledgeable, while being factually incorrect 47% less often. We also found that only 0.16% of BlenderBot’s responses to people were flagged as rude or inappropriate.
The goal of our research is to collect and release feedback data that we and the broader AI research community can leverage over time. That way, we can find new ways for AI systems to be safer and more engaging for people who use them.
Progress in the field of AI heavily depends on the opportunity for the wider AI research community to build on the best available technology. Therefore, releasing chatbot models and datasets is key to gaining complete, reliable insights into how and why they work, the potential they hold and their limitations.
While BlenderBot 3 significantly advances publicly available chatbots, it’s certainly not at a human level. It’s occasionally incorrect, inconsistent and off-topic. As more people interact with our demo, we’ll improve our models using their feedback and release data to benefit the wider AI community.