AI and the rise of the ‘Intention Economy’ - University of Cambridge researchers warn of ‘troubling new marketplace’
Among the many implications of advances in artificial intelligence is the potential for a new commercial frontier, called the ‘intention economy’.
AI ethicists at the University of Cambridge say it could be powered by conversational AI agents, developed to influence our intentions covertly.
They warn that in the near future, AI assistants could forecast and influence our decision-making at an early stage, then sell these developing “intentions” in real-time to companies that can meet our needs and desires – even before we have made up our minds.
They say we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”.
An explosion in generative AI, and our increasing familiarity with chatbots, means new “persuasive technologies” - hinted at in recent corporate announcements by tech giants - could influence everything from how we buy movie tickets to how we vote for candidates.
Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) says “anthropomorphic” AI agents - such as chatbot assistants, digital tutors and girlfriends - will have access to great quantities of intimate psychological and behavioural data, which can often be gleaned via informal and conversational spoken dialogue.
AI will be used to combine knowledge of our online habits with an uncanny ability to communicate with us in comforting ways – perhaps mimicking personalities or anticipating desired responses. This will build levels of trust and understanding that, the researchers warn, will allow social manipulation on an industrial scale.
“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve”, said LCFI visiting scholar Dr Yaqub Chaudhary.
“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions.
“We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes.”
Dr Jonnie Penn, an historian of technology from Cambridge’s LCFI, added: “For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.
“Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer, and sell human intentions.
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”
Dr Penn and Dr Chaudhary write, in a new Harvard Data Science Review paper, that the intention economy will be the attention economy “plotted in time”: profiling how user attention and communicative style connect to patterns of behaviour and the choices we make.
“While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers,” noted Dr Chaudhary.
Tech companies could use large language models (LLMs) to target a user’s cadence, politics, vocabulary, age, gender, online history and even preferences for flattery and ingratiation - all at low cost.
The information-gathering would be linked with brokered bidding networks, increasing the chances of achieving a desired outcome - such as selling some cinema tickets.
An AI agent might ask us: “You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”
The researchers suggest it could steer conversations towards particular platforms, advertisers, businesses or even political organisations.
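The pipeline described above - an assistant infers a persistent intent from conversation, then brokers it to the highest bidder, who gets to steer the dialogue - can be sketched in a few lines. This is purely illustrative: the intent labels, the keyword-based `classify_intent` heuristic, and the bidder names are all invented for this sketch; a real system would use an LLM for inference and an ad exchange for bidding.

```python
from dataclasses import dataclass

@dataclass
class IntentSignal:
    """A hypothetical 'digital signal of intent' extracted from dialogue."""
    label: str        # e.g. "buy_movie_ticket"
    confidence: float  # how persistent/actionable the intent seems

def classify_intent(utterance: str) -> IntentSignal:
    """Toy stand-in for LLM-based intent inference:
    a keyword heuristic mapping conversation to a purchase intent."""
    text = utterance.lower()
    if "movie" in text or "cinema" in text:
        return IntentSignal("buy_movie_ticket", 0.8)
    return IntentSignal("unknown", 0.0)

def auction(signal: IntentSignal, bids: dict) -> str:
    """Sketch of the brokered bidding step: the highest bidder 'wins'
    the right to steer the conversation toward its offer."""
    if signal.confidence < 0.5 or not bids:
        return ""  # intent too fleeting to sell
    return max(bids, key=bids.get)

# A user remark becomes a sellable signal, then a winning bidder.
signal = classify_intent("I'm overworked - maybe a movie tonight?")
winner = auction(signal, {"CinemaCo": 0.12, "StreamCo": 0.09})
```

Here the assistant’s follow-up (“shall I book you that ticket?”) would be shaped by whichever bidder won, which is exactly the steering the researchers warn about.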
So far, the intention economy is an aspiration for the tech giants, rather than a reality.
But there are enough hints - and indeed, published research - indicating that it is on its way.
There was an open call for “data that expresses human intention… across any language, topic, and format” in a 2023 OpenAI blogpost.
And the director of product at Shopify – an OpenAI partner – spoke at a conference in the same year of chatbots coming in “to explicitly get the user’s intent”.
Meanwhile, Nvidia’s CEO has spoken publicly of using LLMs to figure out intention and desire, while Facebook owner Meta released ‘Intentonomy’ research - a dataset for human intent understanding - in 2021.
Apple’s ‘App Intents’ developer framework, released last year for connecting apps to its voice-controlled personal assistant, Siri, includes protocols to “predict actions someone might take in future” and “to suggest the app intent to someone in the future using predictions you [the developer] provide”.
Dr Chaudhary said: “AI agents such as Meta’s CICERO are said to achieve human-level play in the game Diplomacy, which is dependent on inferring and predicting intent, and using persuasive dialogue to advance one’s position.
“These companies already sell our attention. To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are.”
Dr Penn noted that these developments are not necessarily negative, but do have the potential to be destructive.
“Public awareness of what is coming is the key to ensuring we don’t go down the wrong path,” he said.