‘Outsourcing intellect is here to stay’, says expert Henry Ajder of AI-enabled chatbots at inaugural Cambridge Tech Week
The massive upsurge in the use of AI-enabled chatbots such as ChatGPT is presenting humanity with unique problems – but they are still solvable, said deep learning expert Henry Ajder at Cambridge Tech Week.
Ajder was speaking exclusively to the Cambridge Independent at a Jesus College networking event following two Big Tech Debates on the topic at the inaugural showcase for the Cambridge ecosystem. He holds a master of philosophy degree from Queens’ College, Cambridge, and is a speaker and advisor on synthetic media, deep tech, deepfakes and the evolving AI ecosystem.
I asked whether he agreed with the suggestion that organisations should state where they use content involving ChatGPT and other such AI-enabled tools.
“It’s a really interesting dilemma because ChatGPT and other large language models already saturate us in ways we’re not aware of,” Ajder replied. “But what’s happened is that ChatGPT has added 100m users in two months this year. It’s the fastest growing application ever. At the same time Microsoft has Bing Chat and Google has Bard. The reality is that most of us are already engaging with AI content in some way – text mainly, but visually also. AI is fundamentally a tool, so for instance Grammarly autospells content and gives you a smile emoji or a sad face depending on what you write.
“The real test is if I write an article for the Cambridge Independent and maybe it’s a bit clumsy and if I want to smooth the tone I could ask ChatGPT or one of the other language models to reinterpret the text…”
How would you do that?
“I could say for instance ‘Make this article for a local news site more concise and readable for a general readership’. And it will translate your article into a distinct format that is similar enough to your work, but it’s been sculpted.”
Ajder has been working in this sector for six years. He differentiates between ‘sculpting’ an existing article and generating an entirely new one – between “generating an entirely new piece of work and re-editing the new raw output”.
“There are reasons why publishing raw output is a bad idea,” he says. “Firstly, there is a tendency right now to essentially overstate the capability of these algorithms. They often make mistakes. They get facts wrong. There are instances where journalists have been asked to discuss books they haven’t written but ChatGPT says they have. They can and often do make mistakes, so if there’s no human oversight you leave yourself vulnerable to these hallucinations, as they’re called.
“These tools are generating text from just one prompt. Obviously it’s outsourcing intellect, but personally I have no problem if someone comes up with an article and uses ChatGPT to augment their style. But if you ask ‘why is cutting tax a good idea?’ it’s not an intellectual exercise on your part – the hard work is being done for you, and it’s not necessarily factual or accurate.
“So there are concerns about generative AI being used in text creation or generation.
“Having said that, I don’t want people to think these tools can’t be used responsibly. The problems develop when they’re deployed without human oversight, with no conceptual thinking or where large volumes of disinformation are involved, such as fake reviews on Amazon, or students creating entirely generated essays which they very lightly moderate.
“Again, there’s nothing wrong with using these tools as a collaborative or feedback mechanism but if a student is running late for an assignment and they use it without having any knowledge, it’s wrong.”
So if it’s wrong, what’s the solution?
“The answer is not, ‘don’t let students use this’,” Ajder says. “We have to understand this technique is here to stay. It requires innovation and reflection in the way it’s used. We need to reassess how we assess students and how we think of AI when it comes to content.”
So should students – and indeed marketers, journalists, script writers etc – declare where ChatGPT is used?
“When it comes to disclosure mechanisms it’s challenging, because obviously copy and paste exists – even if there’s a disclaimer, there’s nothing to stop people removing it. So if a student writes an essay on the Amazon rainforest with one of these tools and gets an ‘A’, it’s not a good outcome.”
Since ChatGPT arrived in November 2022, some universities have banned the use of AI-generated text, including Cambridge, whose spokesperson said: “We recognise that artificially intelligent chatbots, such as ChatGPT, are new tools being used across the world. The University has strict guidelines on student conduct and academic integrity. These stress that students must be the authors of their own work. Content produced by AI platforms, such as ChatGPT, does not represent the student’s own original work so would be considered a form of academic misconduct to be dealt with under the University’s disciplinary procedures. The University has issued guidance to departments to help address concerns about risks to the integrity of assessments.”
Meanwhile, other universities are merely reviewing their positions. So can a ban on students using ChatGPT work without some sort of wider consensus?
“There are some educational institutions which have said ‘no’ to a ban, but ‘yes’ to denoting when you’re using it. But there are other ways of assessing students, for instance face-to-face assessments, and I think that will become an increasingly powerful argument as these tools become more prevalent. But it’s very, very hard to retain that disclosure, especially with text.
“But I’m an optimist. I’ve seen lots of new technologies emerge and they’re not always as apocalyptic as they seem at the time. You have to cut through the hype and the panic and ask ‘what infrastructure do we need for a new age?’.”
The inaugural Cambridge Tech Week has been a huge success, says Caroline Hyde, head of ecosystem initiatives and partnerships at Cambridge Enterprise, who is on the event’s steering committee.
“This week has been really, really good,” she said of the week-long programme of events and discussions. “When you launch something like this you don’t know how it’s going to land, but the response has been phenomenal, especially in its reach beyond Cambridge.

“The intention now is to take what we’ve learned and to come back bigger and better next year.
“Our ambition is for this event to become a staple, not just of the Cambridge ecosystem, but also the global technology calendar.”