
Microsoft’s Chief Scientific Officer, one of the world’s leading A.I. experts, doesn’t think a six-month pause will fix A.I.—but he has some ideas for how to safeguard it

In a rare interview, Eric Horvitz talks about A.I., humanity, and how the two can coexist.

Eric Horvitz, Microsoft’s first Chief Scientific Officer and one of the leading voices in the rapidly evolving field of artificial intelligence, has spent a lot of time thinking about what it means to be human.


Now, perhaps more than ever, underlying philosophical questions rarely raised in the workplace are bubbling up to the C-suite: What sets humans apart from machines? What is intelligence, and how do you define it? Large language models are getting smarter, more creative, and more powerful faster than we can blink. And, of course, they are getting more dangerous.

“There will always be bad actors and competitors and adversaries harnessing [A.I.] as weapons, because it’s a stunningly powerful new set of capabilities,” Horvitz says, adding: “I live in this, knowing this is coming. And it’s going faster than we thought.”

Horvitz speaks much more like an academic than an executive: He is candid and visibly excited about the possibilities of new technology, and he welcomes questions many other executives might prefer to dodge. Horvitz is one of Microsoft’s senior leaders in its ongoing, multibillion-dollar A.I. efforts: He has led key ethics and trustworthiness initiatives to guide how the company will deploy the technology, and he spearheads research on its potential and ultimate impact. He is also one of the more than two dozen individuals who advise President Joe Biden as members of the President’s Council of Advisors on Science and Technology, which met most recently in early April. It’s not lost on Horvitz where A.I. could go off the rails, and in some cases, where it is doing exactly that already.

Just last month, more than 20,000 people—including Elon Musk and Apple cofounder Steve Wozniak—signed an open letter urging companies like Microsoft, which earlier this year started rolling out an OpenAI-powered search engine to the public on a limited basis, to take a six-month pause. Horvitz sat down with me for a wide-ranging discussion where we talked about everything from the letter, to Microsoft laying off one of its A.I. ethics teams, to whether large language models will be the foundation for what’s known as “AGI,” or artificial general intelligence. (Some portions of this interview have been edited or rearranged for brevity and clarity.)

Fortune: I feel like now, more than ever, it is really important that we can define terms like intelligence. Do you have your own definition of intelligence that you are working off of at Microsoft?

Horvitz: We don’t have a single definition… I do think that Microsoft [has] views about the likely beneficial uses of A.I. technologies to extend people and to empower them in different ways, and then we’re exploring that in different application types… It takes a whole bunch of creativity and design to figure out how to basically harness what we’re considering to be these [sparks] of more general intelligence…

That also gets into the whole idea of what we call responsible A.I., which is, well, how can this go off the rails?… The Kevin Roose article in The New York Times—I heard it was a very widely read article. Well, what happened there exactly? And can we understand that? In some ways, when we field complex technologies like this, we do the best we can in advance in-house. We red-team it. We have people doing all sorts of tests and try different things out to try to understand the technology… We characterize it deeply in terms of the rough edges, as well as the power for helping people out and achieving their goals, to empower people. But we know that one of the best tests we can do is to put it out in limited preview and actually have it in the open world of complexity, and watch carefully without having it be widely distributed to understand that better. We learned quite a bit from that as well. And some of the early users, I have to say, some were quite intensive testers, pushing the system in ways that we didn’t necessarily all push the system internally—like staying with a chat for, I don’t know how many hours, to try to get it to go off the rails, and so on. These kinds of things happened in limited preview. So we learn a lot in the open world as well.

Let me ask you something about that: Some people have pushed back against Microsoft and Google’s approach of going ahead and rolling this out. And there was that open letter that was signed by more than 20,000 people—asking companies to sort of take a step back, take a six-month pause. I noticed that a few Microsoft engineers signed their names on that letter. And I’m curious about your opinion on that—and if you think these large language models could be existentially dangerous, or become a threat to society?

I really actually respect [those who signed the letter]. And I think it’s reasonable that people are concerned… To me, I would prefer to see more knowledge, and even an acceleration of research and development, rather than a pause for six months, which I am not sure would even be feasible. It’s a very ill-defined request in some ways… At the Partnership on A.I. (PAI), we spent time thinking about what the actual issues are. If you were going to pause something, what specific aspects should be paused and why? And what are the costs and benefits of stopping versus investigating more deeply and coming up with solutions that might address concerns?…

In a larger sense, six months doesn’t really mean very much for a pause. We need to really just invest more in understanding and guiding and even regulating this technology—jump in, as opposed to pause… I do think that it’s more of a distraction, but I like the idea that it’s a call for expressing anxiety and discomfort with the speed. And that’s clear to everybody.

What concerns you most about these models? And what concerns you least?

I’m least concerned with science-fiction-centric notions that scare people about A.I. taking over—of us being in a state where humans are somehow outsmarted by these machines in a way that we can’t escape, which is one of the visions that some of the people who signed that letter dwell on. I’m perhaps most concerned about the use of these tools for disinformation, manipulation, and impersonation. Basically, they’re being used by bad actors, by bad human actors, right now.

Can we talk a little bit more about the disinformation? Something that comes to mind that really shocked me and made me think about things differently was that A.I.-generated image of the Pope that went viral of him in the white puffer jacket. It really made me take a step back and reassess how even more prevalent misinformation could become—more so than it already is now. What do you see coming down the pipeline when it comes to misinformation, and how can companies, how can the government, how can people get ahead of that?

These A.I. technologies are here with us to stay. They’ll only get more sophisticated, and we won’t be able to easily control them by saying companies should stop doing X, Y, or Z—because they’re now open-source technologies. Soon after DALL-E 2, which generates imagery of the form you’re talking about, was made available, two or three open-sourced versions of it came to be—some of them even better in certain ways, producing even more realistic imagery.

In 2016, or 2017 or so, I saw my first deep fake… I gave a talk at South by Southwest on this and I said: Look what’s happening… I said this is a big deal, and I told the audience this is going to be a game-changer, a big challenge for everybody. We need to think more deeply about this as a society. Things have gone from there into—we see all sorts of uses of these technologies by nation states that are trying to foment unrest or dissatisfaction or polarization all the way to satire.

So what do we do about this? I put a lot of my time and attention into this, because I think it really threatens to erode democracies, because democracies really depend on an informed citizenry to function well. And if you have systems that can really misinform and manipulate, it’s not clear that you’ll have effective democracy. I think this is a really critical issue, not just for the United States, but for other countries, and it needs to be addressed.

In January 2019, I met with [Tony Hall, the former director-general of the BBC] at the World Economic Forum. We had a one-on-one meeting, and I showed him some of the breaking deep fakes and he had to sit down—he was beside himself… And that led to a major effort at Microsoft that we pulled together across several teams to create what we call the authentication of media provenance: to know that nobody has manipulated or faked the media anywhere from the camera, through production by a trusted news source like the BBC, for example, or the New York Times, all the way to your display… Across [three] groups now, there are over 1,000 members participating and coming up with standards for authenticating the provenance of media. So someday soon, when you look at a video, you’ll see a sign that you can hover over, certifying that it is coming from a trusted source that you know, and that there has been no manipulation along the way.
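
The general pattern behind provenance schemes like the one Horvitz describes (the specifics of Microsoft’s implementation are not detailed here) is to hash the media, have the trusted source sign that hash along with some metadata, and have the viewer’s side verify the signature before showing any “certified” indicator. Below is a minimal, illustrative sketch in Python using the widely available cryptography package; the media contents, source name, and verify function are hypothetical stand-ins, not part of any actual standard.

    import datetime
    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side: hash the media bytes and sign a small provenance manifest.
    publisher_key = Ed25519PrivateKey.generate()      # stands in for the news source's signing key
    media_bytes = b"...raw image or video bytes..."   # placeholder content
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "source": "Example News",                     # hypothetical trusted source
        "published": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    manifest_bytes = json.dumps(manifest, sort_keys=True).encode()
    signature = publisher_key.sign(manifest_bytes)

    # Viewer side: re-hash what arrived and check the signature against the
    # publisher's public key before showing an "authenticated media" badge.
    def verify(received_media, received_manifest, received_sig, public_key):
        if hashlib.sha256(received_media).hexdigest() != received_manifest["sha256"]:
            return False  # the media itself was altered after signing
        try:
            public_key.verify(received_sig, json.dumps(received_manifest, sort_keys=True).encode())
            return True
        except InvalidSignature:
            return False  # the manifest or signature was tampered with

    print(verify(media_bytes, manifest, signature, publisher_key.public_key()))  # True

Real provenance standards go further, chaining attestations through each editing step and tying keys to certificates, but the verify-before-trust flow is the same end-to-end check the interview describes.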

But my view is there’s no one silver bullet. We’re going to need to do all those things. And we’re also probably going to need regulations.

I want to ask you about the layoffs at Microsoft. In mid-March Platformer reported that Microsoft had laid off its ethics and society team, which was focused on how to design A.I. tools responsibly. And this seems to me like the time when that is needed most. I wanted to hear your perspective on that.

Just like A.I. systems can manipulate minds and distort reality, so can our attention-centric news economy now. And here’s an example. Any layoff makes us very sad at Microsoft. It’s something that is really a challenge when it happens. In this case, the layoff was of a very small number of people who were on a design team and, from my point of view, quite peripheral to our major responsible and ethical and trustworthy A.I. efforts.

I wish we would talk more publicly about our engineering efforts that went into several different work streams—all coordinated on safety, trustworthiness, and broader considerations of responsibility in shipping the Bing chat and the other technologies out to the world—incredible amounts of red-teaming. I’d say, if I had to estimate, over 120 people altogether have been involved in a significant set of work streams, with daily check-ins. That small number of people were not central in that work, although we respect them and I like their design work over the years. They’re part of a larger team. And it was poor timing, and the reporting about that being the ethics team was kind of amplified, but it was not that by any means. So I don’t mean to say that it is all fake news, but it was certainly amplified and distorted.

I’ve been on this ride, [part of] leading this effort of responsible A.I. at Microsoft since 2016 when it really took off. It is central at Microsoft, so you can imagine we were kind of heartbroken with those articles… It was unfortunate that those people at that time were laid off. They did happen to have ethics in their title. It’s unfortunate timing.

[A spokeswoman later said that fewer than ten team members were impacted and said that some of the former members now hold key positions within other teams. “We have hundreds of people working on these issues across the company, including dedicated responsible A.I. teams that continue to grow, including the Office of Responsible A.I., and a responsible A.I. team known as RAIL that is embedded in the engineering team responsible for our Azure OpenAI Service.”]

I want to circle back to the paper you published at the end of March. It talks about how you’re seeing sparks of AGI from GPT-4. You also mention in the paper that there are still a lot of shortfalls and, overall, it’s not very human-like. Do you believe that large language models like GPT, which are trained to predict the next word in a sentence, are laying the groundwork for artificial general intelligence—or would that be something else entirely?

A.I. in my mind has always been about general intelligence. The phrase “AGI” only came into vogue, in wide use by people outside the field of A.I., when they saw the current versions of A.I. successes being quite narrow. But from the earliest days of A.I., it’s always been about how we can understand general principles of intelligence that might apply to humans and machines, sort of an aerodynamics of intelligence. And that’s been a long-term pursuit. Various projects along the way, from the 1950s to now, have shown different aspects of what you might call general principles of intelligence.

It’s not clear to me that the current approach with large language models is going to be the answer to the dreams of artificial intelligence research and aspirations that people may have about where A.I. is going to build intelligence that might be more human-like or that might be complementary to human-like competencies. But we did observe sparks of what I would call magic, or unexpected magic, in the system’s abilities that we go through in the paper and list point by point. For example, we did not expect a system that was not trained on visual information to know how to draw or to recognize imagery…

And so, the idea that a system can do these things with very simple, short questions, without any kind of pre-training or fancy prompt engineering, as it’s called—it’s pretty remarkable… These kinds of powerful, subtle, unexpected abilities, whether it be in medicine, or in education, chemistry, physics, general mathematics and problem solving, drawing, and recognizing images—I would view them as bright little sparks that we didn’t expect, and that have raised interesting questions about the ultimate power of these kinds of models as they scale to be more sophisticated. At the same time, there are specific limitations we described in the paper. The system doesn’t do well at backtracking, and certain kinds of problems really confound it… And the fact that it’s fabulously brilliant… and embarrassingly stupid in other places means that this is not really human-like. To have a system that does advanced math, integrals and notation… and then it can’t do arithmetic… It can’t multiply, but it can do this incredible proof of the infinitude of primes, and do poetry about it, and do it in a Shakespearean pattern.
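
For readers who want the “trained to predict the next word” framing made concrete, here is a deliberately tiny sketch: a word-level bigram counter over an invented corpus. It is nothing like GPT-4’s architecture or training scale, and the corpus and function names are made up for illustration, but it shows mechanically what next-word prediction means.

    from collections import Counter, defaultdict

    # Toy corpus; real models train on vastly larger collections of text.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the continuation seen most often after `word` in the toy corpus."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # "cat" ("cat" follows "the" twice, more than "mat" or "fish")
    print(predict_next("cat"))  # "sat" ("sat" and "ate" tie; insertion order breaks the tie)

Large language models replace the counting table with a neural network that assigns probabilities over an entire vocabulary of tokens, but the underlying training objective the question refers to, predicting what comes next, is the same.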

Just taking a step back, to make sure I understand clearly how you’re answering the first part of my question. Are you saying that large language models could be the foundation of these aspirations people have for creating human intelligence, but you’re not sure?

I’d say I am uncertain, but when you see a spark of something that’s interesting, a scientist will follow that spark and try to understand it more deeply. And here’s my sense: What we’re seeing is raising questions and pointers and directions for research that would help us to better understand how to get there. It’s not clear that when you see little sparks of flint, you have the ability to really do something more sustained or deeper, but it certainly is a way… We can investigate, as we are now and as the rest of the computer science community is now…

So I guess, to be clear, the current large language models have given us some evidence of interesting things happening. We’re not sure enough if you need the gigantic, large language models to do that, but we’re certainly learning from what we’re seeing about what it might take moving forward.

You don’t have access to OpenAI’s training data for its models. Do you feel like you have a comprehensive understanding of how the A.I. models work and how they come to the conclusions that they do?

I think it’s pretty clear that we have general ideas about how they work and general ideas and knowledge about the kinds of data the system was trained on. And depending on what your relationship is with OpenAI and our research agreements… There are some understandings of the training data and so on.

That doesn’t mean that there’s a deep understanding of every aspect… We don’t understand everything about what’s happening in these models. No one does yet. And I think to be fair to the people that are asking for a slowdown—there’s anxiety, and some fear about not understanding everything about what we’re seeing. And so I understand that, and as I say, my approach to it is we want to both study it more intensively and work extra hard to not only understand the phenomenon but also understand how we can get more transparency into these processes, how we can have these systems become better explainers to us about what they’re doing. And also understand any potential social or societal implication of this…

I think today there are lots of questions about how these systems work, at the details, even when, broadly, we have good understandings of the power of scale and the fact that these systems are generalizing and have the ability to synthesize.

On that thread—do you think that the models should be open source so that people can study them and understand how they work? Or is that too dangerous?

I’m a strong supporter of the need to have these models shared out for academic research. I think it’s not the greatest thing to have these models cloistered within companies in a proprietary way when having more eyes, more scientific effort more broadly on the models could be very helpful. If you look at what’s called the Turing Academic Program, we’ve been a big supporter of taking some of our biggest models from Microsoft and making them available to university-based researchers…

I know how much work that OpenAI did and that Microsoft did and we did together on working to make these models safer and more accurate, more fair, and more reliable. And that work, which includes the colloquial phrase “alignment,” aligning the models with human values, was very effortful. So I’m concerned with these models being out in their raw form in open source, because I know how much effort went into polishing these systems for consumers and for our product line. And these were major, major efforts to grapple with what you call hallucination, inaccuracy, to grapple with reliability—to grapple with the possibility that they would stereotype or generate toxic language. And so I and others share the sense that open sourcing them without those kinds of controls and guardrails wouldn’t be the greatest thing at this point in time.

In your position serving on PCAST, how is the U.S. government already involved in the oversight of A.I. and in what ways do you think that it should be?

There’s been regulation of various kinds of technologies, including A.I. and automation, for a very long time. The National Highway Traffic Safety Administration, the Fair Housing Act, the Civil Rights Act of 1964—these all speak to what the responsibilities of organizations are. The Equal Employment Opportunity Commission oversees employment and makes it illegal to discriminate against a person, and there’s another one for housing. So systems that will have influences—there is opportunity to regulate them through various agencies that already exist in different sectors…

My overall sense is that it will be the healthiest to think about actual use cases and applications and to regulate those the way they have been for decades, and to bring A.I. as another form of automation that’s already being looked at very carefully by government regulations.

These A.I. models are so powerful that they’re making us ask ourselves some really important underlying questions about what it means to be human, and what distinguishes us from machines as they get more and more capable. You’ve spoken before about music, and one of my colleagues pointed out to me a paper that you wrote about captions for New Yorker cartoons a few years ago. Throughout all of the research and time you’ve spent digging into artificial intelligence and the impact it could have on society, have you come to any personal realizations of what it is that distinctly makes us human, and what things could never be replaced by a machine?

My reaction is that almost everything about humanity won’t be replaced by machines. I mean, the way we feel and think, our consciousness, our need for one another—the need for human touch, and the presence of people in our lives. I think, to date, these systems are very good at synthesizing and taking what they’ve learned from humanity. They learn and they have become bright because they’re learning from human achievements. And while they could do amazing things, I haven’t seen the incredible bursts of true genius that come from humanity.

I just think that the way to look at these systems is as ways to understand ourselves better. In some ways we look at these systems and we think: Okay, what about my intellect, and its evolution on the planet that makes me who I am—what might we learn from these systems to tell us more about some aspects of our own minds? They can light up our celebration of the more magical intellects that we are in some ways by seeing these systems huff and puff to do things that are sparking creativity once in a while.

Think about this: These models are trained for many months, with many machines, and using all of the digitized content they can get their hands on. And we watch a baby learning about the world, learning to walk, and learning to talk without all that machinery, without all that training data. And we know that there’s something very deeply mysterious about human minds. And I think we’re way off from understanding that. Thank goodness. I think we will be very distinct and different forever than the systems we create—as smart as they might become.

Jeremy Kahn contributed research for this story.

This story was originally featured on Fortune.com
