We thought Peak Chatbot was a few years ago, but the idea is having a ChatGPT-fuelled rebirth in 2023. The latest launch could be one of the most mainstream yet in terms of its potential reach.
Snap has added a chatbot called ‘My AI’ to Snapchat, using OpenAI’s GPT technology to power what it’s describing as an “experimental feature”. For now it’s only available to the 2.5 million people paying for a Snapchat+ subscription.
“My AI can recommend birthday gift ideas for your BFF, plan a hiking trip for a long weekend, suggest a recipe for dinner, or even write a haiku about cheese for your cheddar-obsessed pal,” announced Snap.
Now comes the caveat. “As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance!”
Let’s just hope it doesn’t go down the traditional chatbot-meltdown route and turn Nazi…
Update: We’ve rethought that last sentence. It’s a flippant reference to a serious point: past examples of public-facing chatbots that have been led (often by the humans interacting with them) down some unpleasant paths, with Microsoft’s Tay in 2016 and Meta’s BlenderBot 3 in 2022 being two of the most famous examples.
The important thing is for anyone launching a chatbot or conversational AI to learn from those past examples, and put measures in place to try to avoid those kinds of problems.
In the case of Snapchat’s new ‘My AI’ feature, those measures include programming it to avoid responses that are violent, hateful, sexually explicit or otherwise offensive, and training it on texts – work also done by OpenAI – to avoid spouting harmful biases.
‘My AI’ has also been trained on Snapchat’s own safety practices, including its existing automatic language-detection terms. If users see anything they think is inappropriate or offensive, they can report it from within the app.