

NASA’s Cambridge speaker says AI has ‘potential and peril’




A former NASA computer scientist will advise business leaders on the technological revolution “that society has yet to fully grasp” at an event hosted by the Institute of Directors (IoD) at the Bradfield Centre this month.

Peter Scott, who has worked for NASA's famed Jet Propulsion Laboratory for more than 30 years, helping advance exploration of the solar system, touches down on Friday, November 15 at the Bradfield Centre on Cambridge Science Park as the guest speaker for an IoD Cambridgeshire event titled ‘The Future of Work in the AI Revolution’.

Peter’s first academic paper, “Classification Schemas for Artificial Intelligence Failures”, was published in July. He also authored Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race in 2017, and founded the Next Wave Institute, an educational not-for-profit "formed to educate generations on the peril and promise of exponential technological progress".

As one of the leading authorities on IT infrastructure, Peter has made the exploration of technology and its impact on human development his life’s work. He was born, perhaps implausibly, in Southend, and studied Computer Science at the University of Cambridge.

“I'm looking forward to going back,” Peter told me over video link from his current home in British Columbia. “It's my favourite city in the whole of the country.”

After Cambridge he went to work for NASA’s Jet Propulsion Laboratory.

“I'm a zero-generation immigrant,” he deadpans. “I went to work as an employee for 16 years, then left, but I'm still a remote contractor. But there was not enough for me at JPL so I did a lot of things outside, including coaching, teaching - soft practices that involved working with people. So that's two wildly different worlds that don't interact at all.”

His interest in artificial intelligence (AI) began in earnest when he became a father.

“I thought these worlds would never coincide, but then I had a revelation that the trajectory with AI will get us to the point where unless we have transcended our own psychology and become more awake and more aware and understand some of the conflicts that are driving us in this world, then AI would become dystopian or worse, and that would be something that affected my daughters' world. When you look at the possibility that your children's lives may not have the same opportunities as your own, that focuses you very much.”

Artificial intelligence has a dystopian undertow which must be considered, says Peter Scott

Is AI only a threat to jobs?

"The interesting thing is that AI holds so much potential and peril at the same time in so many dimensions. In the short term it has the potential to automate so many jobs, and we ought to consider the perils and that isn't being discussed. With jobs that's up to us, but we also have Artificial General Intelligence [the intelligence of a machine that can understand or learn any intellectual task that a human being can] and a lot of money is being thrown at that - and Artificial Superintelligence is not very far behind and we have to work hard to avoid that."

Artificial Superintelligence (ASI) refers to machine intelligence that surpasses human capability. We don't have that yet, of course - we don't even have Artificial General Intelligence. But once AGI arrives, ASI won't be far behind, Peter suggests.

“We're accelerating in ways that bring more unpredictability, partly because there are areas of overlap between different fields of technology. For instance, GPS has matured to a certain level, AI has reached a certain level, and there is machine vision and there are advances in semiconductors - the overlap of these factors has made autonomous vehicles possible, and that's a game-changer for the whole world. It all spells a whole lot of volatility.”

As a species we're slow to respond to incoming danger, only taking it seriously when it's... serious.

“We knew about climate change 100 years ago,” Peter says. “Even 30 years ago we could have stopped it but we had to wait until we're staring down the barrel of catastrophic climate change and now we don't know if we can change it in time. It's like that with AI. No one knows what the level of disruption is going to be like.”

Hollywood hasn't helped.

“Most scenarios known to people come from Hollywood which is pretty bad in this area. The things we're talking about don't fit easily into movie plots.”

Peter Scott giving a TEDx talk

Fortunately, a sense of direction is being provided by organisations focused on the existential risks involved: Peter points to Cambridge's Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, along with equivalents at Stanford and in Boston, Prague and Oxford. But for AI it's still all a bit Wild West.

“It's already a free-for-all,” he says. “If you look at the money Russia and China are pouring into this... well, if they've got it right don't expect what comes out of there to have a lot of fuses.”

Peter believes governments have to sort out the AI conundrum.

“An appropriate response would involve the government, the military, academics, industry leaders, the education system and social programmes, and only the government has the ability to bring all the players to the table and move the needle at the same time,” he concludes.

Simone Robinson, regional director for IoD East of England, said: “This is an incredible opportunity to hear from one of the world’s greatest authorities and explore the possibilities and challenges that technology brings. We are proud to be hosting this event in Cambridge and look forward to welcoming business leaders.”

Event details are on the IoD website; non-members' tickets cost £5.


