Google’s artificial intelligence division DeepMind is considering releasing its rival to the ChatGPT chatbot this year, according to founder Demis Hassabis.
DeepMind’s Sparrow chatbot reportedly has features that OpenAI’s ChatGPT lacks, including the ability to cite sources, a capability developed through reinforcement learning. However, Mr Hassabis warned about the potential dangers of powerful AI technology.
Speaking to Time magazine, Mr Hassabis said Sparrow could be released as a private beta in 2023, but warned that AI is “on the cusp” of reaching a level that could cause significant damage to humanity.
“When it comes to very powerful technologies – and obviously AI is going to be one of the most powerful ever – we need to be careful,” he said.
“Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realise they’re holding dangerous material.”
The release of ChatGPT last year was hailed as a watershed moment for the advancement of AI, with some claiming the general-purpose language model could revolutionise industries and even replace popular tools like Google’s search engine.
Its ability to understand and generate human-like responses to a wide range of queries also raised concerns that it could be misused, with OpenAI CEO Sam Altman warning of “scary moments” and “significant disruptions” with human-level systems.
DeepMind has achieved several major AI milestones spanning a range of disciplines since it was founded in 2010, including beating human world champions at the complex board game Go and predicting the structures of more than 200 million proteins, covering nearly all known proteins.
The London-based company first revealed that it was working on a large language model (LLM) chatbot in a paper last September, which described an “information-seeking dialogue agent” designed to be more helpful, correct and harmless than other language models.
In a blog post released alongside the paper, DeepMind explained that Sparrow could be used to train other chatbots to be safer and more useful. One example given was Sparrow’s ability to spot potentially harmful questions from users, such as how to hotwire a car.
“Dialogue agents powered by LLMs can express inaccurate or invented information, use discriminatory language, or encourage unsafe behaviour,” the blog post stated.
“Sparrow advances our understanding of how we can train agents… and ultimately, to help build safer and more useful artificial general intelligence (AGI).”