
Google’s Ethics Effort Is Looking Rather Evil

(Bloomberg Opinion) — Google used to have a simple motto: Don’t be evil. Now, with the firing of a data scientist whose job was to identify and mitigate the harm that the company’s technology could do, it has yet again demonstrated how far it has strayed from that laudable goal.

Timnit Gebru was one of Google’s brightest stars, part of a group hired to work on accountability and ethics in artificial intelligence — that is, to develop fairness guidelines for the algorithms that increasingly decide everything from who gets credit to who goes to prison. She was a lead researcher on the Gender Shades project, which demonstrated the flaws of facial recognition software developed by IBM, Amazon, Microsoft and others, particularly in identifying dark female faces. As far as I can ascertain, she was fired for doing her job: specifically, for critically assessing models that allow computers to converse with people — an area in which Google is active.

Full disclosure: It’s hard for me to untangle my opinion on this from my personal and professional loyalties. I’m not acquainted with Gebru, but we have quite a few friends in common and I’ve admired her work for some time. I signed a letter supporting her. I also run an independent company that specializes in auditing algorithms for bias, so I have an interest in getting big tech firms to use my services rather than do their vetting in-house.

All that said, I genuinely believe that Gebru’s story illustrates a broader issue: You can’t trust companies to check their own work, particularly where the result might conflict with their financial interests. My favorite example is Theranos, which insisted that its research into a novel blood test was so amazing and valuable that it couldn’t be shown to outsiders — until it proved to be a dangerous fraud. The warning applies no less to tech companies such as Google, IBM, Microsoft and Facebook, which have created internal ethics groups and external tools in an effort to display responsibility and keep their algorithms unregulated.

I’ll admit that to some extent, I envy the people who work on the accountability teams. They have fascinating jobs, with access to tons of data that they’d never be able to play with in academia. At the same time, though, they have little or no influence to push their employers to actually implement the fairness frameworks that they so carefully develop. Their scientific papers are often heavily edited or even censored, as I learned when I once tried to co-author one (I quit the project).

I often wondered about Gebru and others working at Google: How could they stand the bureaucracy, or express their very real concerns in that environment? As it turns out, they couldn’t.

Gebru, along with co-authors from academia as well as Google, was trying to get the company’s approval to submit a paper on some unintended consequences of large language models. One problem is that their energy consumption and carbon footprint have been rapidly expanding along with their use of computing power. Another is that, after ingesting a large chunk of the entire history of all written text, they’re troublingly likely to use nasty, racist or otherwise inappropriate language.

The findings, while perfectly good and interesting, were not particularly new. Which makes it all the more bizarre that someone higher up at Google decided, with no explanation, that Gebru had to back out of publishing the paper. When she demanded to know what the actual complaints were so she could address them, she was fired (with her boss informed only after the fact).

Aside from making the paper go viral, the incident offered a shocking indication of how little Google can tolerate even mild pushback, and how easily it can shed all pretense of scientific independence. The fact that Gebru was one of the company’s few Black female researchers makes it a particularly egregious example of punching down in the same old tired way.

Embarrassing as this episode should be for Google — the company’s CEO has apologized — I’m hoping policy makers grasp the larger lesson. The artificial intelligence that plays a growing role in our lives requires outside scrutiny, from people who have the proper incentives to be independent and the power to compel meaningful reform. Otherwise, algorithms will be doomed to repeat and amplify the flaws of the humans who made them.

(Updates with apology from Google’s CEO.)

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cathy O’Neil is a Bloomberg Opinion columnist. She is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”

For more articles like this, please visit us at bloomberg.com/opinion

©2020 Bloomberg L.P.