
Cambridge AI Social probes ChatGPT and ‘the illusion of intelligence’ it offers





Cambridge AI Social held its inaugural meet-up at The Bradfield Centre last week, with keynote speaker Prof Lawrence Paulson outlining the challenges – and opportunities – as AI and AGI (artificial general intelligence) develops.

Prof Lawrence Paulson, professor of computational logic at the University of Cambridge and director of research of the Computer Laboratory at the William Gates Building, speaking at the inaugural Cambridge AI Social event, March 2023. Picture: Mike Scialom

The new networking group was established by Aaron Turner in 2022. Aaron, an independent AGI researcher since 1985, welcomed guests who had made it through the blizzard outside.

“It was in the mid-1960s that Joseph Weizenbaum created Eliza, the first-ever chatbot,” Aaron told the audience of around 50 in the auditorium at The Bradfield. “At the time most people perceived Eliza to be far more intelligent than it actually was, which is today known as the Eliza Effect.”
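Eliza’s trick was shallow pattern matching: spot a phrase, then reflect the user’s own words back inside a canned template. A rough Python sketch – with rules invented purely for illustration, not Weizenbaum’s original script – shows how little machinery is needed to produce the effect:

```python
import re

# Hypothetical Eliza-style rules, invented for illustration (not Weizenbaum's
# original script): a regex pattern plus a canned template that reflects the
# user's own words straight back at them.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # fallback when nothing matches

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands me?
```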

It turned out that the Eliza Effect is still in play in Cambridge in 2023: one of the key themes of the evening was an analysis and discussion of ChatGPT, the chatbot launched by OpenAI in November last year. So is ChatGPT intelligent?

“ChatGPT has had more coverage than any other AI topic in the last 40 years,” Aaron said in his introduction, “but under the hood it’s a large language model. It’s like a dictionary, where all the words are connected to each other – and in that circularity there is no genuine meaning, as none of the words are connected to real-world experiences.

Cambridge AI Social’s next event is on April 14 at the West Hub

“So with large language models there can’t be any real intelligence or cognition – it’s essentially just the illusion of intelligence.

“These models provide some utility but from an AGI perspective they have no IQ. There are three components to intelligence: induction, deduction and abduction. Deduction has a 2,500-year history and is best understood as automated theorem proving, or ATP. ATP is connected to AI. If a computer-based model achieves sufficient performance, it could perform genuine deductive reasoning…

“So allow me to introduce, alive and in colour, the one and only, the fabulous Professor Lawrence Paulson.”
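Aaron’s “dictionary where all the words are connected to each other” picture can be caricatured in a few lines of Python. The toy below simply records which word has followed which in a tiny made-up corpus and generates text by walking those links – a real large language model is vastly more sophisticated, so this illustrates only the circularity point, not how ChatGPT is actually built:

```python
import random
from collections import defaultdict

# A toy "words connected to words" model: record which word has followed
# which in a tiny made-up corpus, then generate text by walking those links.
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()

links = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    links[current].append(following)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in links:
            break  # dead end: the final word in the corpus has no successor
        word = random.choice(links[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the cat sat on the rug and the dog" - fluent-looking, yet the model
# has no idea what a cat, a dog or a rug actually is
```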

Prof Paulson is professor of computational logic at the University of Cambridge and director of research of the Computer Laboratory at the William Gates Building.

“Logic, the technique of logic, and some of its implications for AI are what we’ll be looking at in the next hour,” he tells the audience.

Charlie Gao, of Hibiki AI, left, with Harry Little of the CAIS team at The Bradfield Centre. Picture: Mike Scialom

A few slides later – “to show there’s been a lot of phases” in AI – Prof Paulson says: “So what about intelligence? As Aaron has said it means induction, deduction or abduction and I’m not going to mention abduction at all. So, induction: I observe stars moving across the night sky night after night, and there’s other objects and I try and work out where they’ll be next – that’s inductive logic. Even plants grow towards the light – that’s inductive reasoning: ‘There was light here yesterday so this is where the light will be next’.”
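Induction of that kind is easy to caricature in code: observe a pattern, infer a rule, extrapolate. The figures below are invented purely for illustration:

```python
# Induction in miniature: observe a star's position night after night,
# infer the nightly drift, then extrapolate. The numbers are made up.
observed = [(1, 10.0), (2, 12.5), (3, 15.0), (4, 17.5)]  # (night, degrees)

steps = [b[1] - a[1] for a, b in zip(observed, observed[1:])]
drift_per_night = sum(steps) / len(steps)

last_night, last_position = observed[-1]
prediction = last_position + drift_per_night
print(f"Night {last_night + 1}: expect the star near {prediction:.1f} degrees")
# -> Night 5: expect the star near 20.0 degrees
# Nothing guarantees this: induction generalises from past observations,
# it never proves anything.
```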

It seems, though, that people can get by without much deductive reasoning.

“I don’t think that even very smart people are using deductive reasoning very often,” Prof Paulson suggests. “Even Warren Buffett reads all the business figures and makes various calculations based at least partly on intuition, which is inductive reasoning. For some of the stocks he’ll just say: ‘That doesn’t look good’.”

Prof Paulson’s advice is “don’t try to prove a whole theorem at once”. Prove it one bit at a time. AI gets it wrong more often than you might suspect.
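That piece-by-piece approach is how proof assistants are used in practice. A trivial Lean sketch – a made-up example, not one from the talk – proves two small lemmas separately and then assembles the main result from them:

```lean
-- Two small lemmas, each proved on its own...
theorem step1 (n : Nat) : n + 0 = n := Nat.add_zero n
theorem step2 (n : Nat) : 0 + n = n := Nat.zero_add n

-- ...then the main goal falls out by combining them.
theorem main_goal (n : Nat) : n + 0 = 0 + n := by
  rw [step1, step2]
```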

Guests networking at first Cambridge AI Social event

“As we know ChatGPT is often wrong,” Prof Paulson continued. “I asked about myself and it gave me all sorts of activities and things I’ve never actually done but still, someone has done them, so that’s a good start.”

After the laughs fade, it’s time for the Q&A. ChatGPT, someone from the audience suggests, is “not so much like a monkey trying to write Shakespeare but it is like a 10-year-old trying to type out a story – it’s better than the monkey is all”.

“Yes,” Prof Paulson responds. “It’s a question of to brute force or not to brute force?”

The brute-force theory of computation says that you just have to keep doing exhaustive searches to get results. The alternative is to look for some way of reducing the amount of computation necessary to reach a result. Either way, Prof Paulson recommended that programmers avoid C++, as the language “is certainly not a way of building a system that’s trustworthy”.

The networking part of the event proved very popular

He continued: “This is early work [on ChatGPT] and if you look at computer chess, on the whole they were beaten by computers that did gigantic searches so brute force has some merit.”
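The trade-off he describes can be sketched with a toy game tree in Python: plain minimax examines every position, while alpha-beta pruning – one standard way of reducing the computation – skips branches that cannot change the answer. The tree and scores below are invented for illustration and have nothing to do with any real chess engine:

```python
import math

# A made-up game tree purely for illustration: internal nodes are lists,
# leaves are scores for the maximising player.
tree = [[3, 5], [2, 9], [0, 7], [4, 1]]

nodes_visited = 0

def brute_force(node, maximising):
    """Exhaustive minimax: visit every position regardless."""
    global nodes_visited
    nodes_visited += 1
    if isinstance(node, int):
        return node
    values = [brute_force(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

def pruned(node, maximising, alpha=-math.inf, beta=math.inf):
    """Alpha-beta pruning: skip branches that cannot change the answer."""
    global nodes_visited
    nodes_visited += 1
    if isinstance(node, int):
        return node
    for child in node:
        value = pruned(child, not maximising, alpha, beta)
        if maximising:
            alpha = max(alpha, value)
        else:
            beta = min(beta, value)
        if beta <= alpha:
            break  # this branch can never be reached in best play
    return alpha if maximising else beta

nodes_visited = 0
print("brute force:", brute_force(tree, True), "-", nodes_visited, "nodes")
nodes_visited = 0
print("with pruning:", pruned(tree, True), "-", nodes_visited, "nodes")
# Both find the same best score (3); pruning just visits fewer positions.
```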

Is ChatGPT some kind of threat to academic work, asks another member of the audience.

“As for the idea that people will use ChatGPT to write essays, well what is the point because it needs to be thoroughly checked anyway, so you might as well do it yourself. ChatGPT is usually very generic and very bland – though you can ask it to write in the style of Jane Austen or HP Lovecraft which is quite cute.”

Next up was pizza, and I found myself saying hello to Paul Crane, the new CEO of CW, among others.

During the networking I did a quick vox pop on C++. Is it really as bad as Prof Paulson claimed?

Charlie Gao, of Hibiki AI on Cambridge Science Park, said: “It does the job. I prefer R, and C. You can compare it with learning French.

Cambridge AI Social guest Jacobo Vargas, an engineer at Focal Point Positioning. Picture: Mike Scialom

“If you’re good at French it doesn’t mean you’ll be good at Russian. Fortran is definitely the fastest.”

Harry Little from the CAIS team, standing next to Charlie, added: “A lot of the people who use C said that C++ was just a moneyspinner.”

Jacobo Vargas, an engineer at Focal Point Positioning, said: “I never liked C++, I like C, but he [Prof Paulson] wasn’t saying C++ is rubbish, he was saying that anything programmed in a standard language can’t be trusted because it was programmed by a human and humans can make mistakes, which is problematic.”

I tried to imagine what a programming language untouched by human hands – unvisualised by a human mind – would resemble, but these are complex calculations.

Aaron said after the event: “I believe we had 52 people there in total, which is a respectable result for a first event, and certainly sufficient for a decent party in the socialisation part of the evening.

Prof Lawrence Paulson, director of research, Computer Laboratory

“Professor Paulson’s talk was perfectly pitched, and expertly delivered – he really is a superb educator. The audience asked a large number of questions at the end of the talk, which is a good indicator that they were genuinely engaged, and therefore that they enjoyed it.

“There seemed to me to be a real buzz during the socialisation phase, with many people staying past the full hour – again, a good indicator that the audience had a genuinely good time, which is ultimately what it’s all about.

“A significant number of people came and thanked me profusely at the end, and promised to attend subsequent events.

“Our next event is on April 14 at the West Hub: Professor Manuela Veloso, Herbert A Simon Professor Emerita at Carnegie Mellon University, and head of AI research at JP Morgan Chase, will be delivering a talk on robotics.”


