
GCHQ warns that ChatGPT and rival chatbots are a security threat


ChatGPT – REUTERS/Florence Lo/Illustration/File Photo

The spy agency GCHQ has warned that ChatGPT and other AI-powered chatbots are an emerging security threat.

In an advisory note published on Tuesday, the National Cyber Security Centre warned that companies such as ChatGPT maker OpenAI and its investor Microsoft “are able to read queries” typed into AI-powered chatbots.

GCHQ’s cyber security arm said: “The query will be visible to the organisation providing the [chatbot], so in the case of ChatGPT, to OpenAI.”

Microsoft’s February launch of a chatbot service, Bing Chat, took the world by storm thanks to the software’s ability to hold a human-like conversation with its users.

The NCSC’s warning on Tuesday cautions that curious office workers experimenting with chatbot technology could reveal sensitive information through their search queries.

Cyber security experts from the GCHQ agency said, referring to the large language model [LLM] technology that powers AI chatbots: “Those queries are stored and will almost certainly be used for developing the LLM service or model at some point.

“This could mean that the LLM provider (or its partners/contractors) are able to read queries, and may incorporate them in some way into future versions.

“As such, the terms of use and privacy policy need to be robustly understood before asking sensitive questions.”

Microsoft disclosed in February that its staff read users’ conversations with Bing Chat, monitoring them to detect “inappropriate behaviour”.

Immanuel Chavoya, a senior security manager at cyber security company Sectigo, said: “While LLM operators should have measures in place to secure data, the possibility of unauthorized access cannot be entirely ruled out.

“As a result, businesses need to ensure they have strict policies in place backed by technology to control and monitor the use of LLMs to minimize the risk of data exposure.”

The NCSC also warned that AI-powered chatbots can “contain some serious flaws”, as both Microsoft and its arch-rival Google have learnt.

GCHQ – GCHQ/PA

An error generated by Google’s Bard AI chatbot wiped $120bn (£98.4bn) from its market valuation after the software gave a wrong answer about scientific discoveries made with the James Webb Space Telescope.

The error was prominently featured in Google promotional material used to launch the Bard service.

City firm Mishcon de Reya has banned its lawyers from typing client data into ChatGPT for fear that legally privileged material might leak or be compromised.

Accenture has also warned its 700,000 staff worldwide against using ChatGPT for similar reasons, as nervous bosses fear customers’ confidential data will end up in the wrong hands.

Other companies around the world have become increasingly wary of chatbot technology.

SoftBank, the Japanese tech conglomerate that owns computer chip company Arm, has warned its staff not to enter “company identifiable information or confidential data” into AI chatbots.

Other businesses have been quick to embrace AI chatbot technology.

City law firm Allen & Overy has deployed a chatbot tool called Harvey, built in partnership with ChatGPT maker OpenAI.

Harvey is designed to automate some legal drafting work, although the firm says humans will continue to review its output before using it for real.

Microsoft is reportedly working on a new release of ChatGPT capable of turning text queries into videos, in the vein of OpenAI’s DALL-E image generation technology, which is built on software similar to ChatGPT.

Meanwhile, the government is also concerned that Britain may be falling behind in the global AI race and is launching a new task force to encourage the development of AI chatbot technology in the UK.

Technology Secretary Michelle Donelan said on Monday: “Establishing a task force that brings together the very best in the sector will allow us to create a gold-standard global framework for the use of AI and drive the adoption of foundation models in a way that benefits our society and economy.”

Matt Clifford, chief executive of the government’s Advanced Research and Invention Agency, has been appointed to lead the task force.
