
We need to create guardrails for AI

A group of Harvard academics and artificial intelligence experts has just released a report aimed at putting ethical guardrails around the development of potentially dystopian technologies such as Microsoft-backed OpenAI’s seemingly sentient chatbot, which debuted last week in a new and “improved” (depending on your point of view) version, GPT-4. The group, which includes Glen Weyl, a Microsoft economist and researcher, Danielle Allen, a Harvard philosopher and director of the Safra Center for Ethics, and many other industry notables, is sounding alarm bells about “the plethora of experiments with decentralised social technologies”.
