Cambridge study calls for ‘guardrails’ to protect children in digital and AI worlds
The cognitive functioning of children is being affected by their exposure to digital media, says a new Cambridge Judge Business School study – and AI-enabled toys have created a new set of challenges, prompting calls for regulatory guidelines.
The report, titled ‘How digital media impacts child development’, is co-authored by Professor Lucia Reisch, El-Erian Professor of Behavioural Economics and Public Policy at the Judge.
It concludes that extensive smartphone and/or internet exposure, combined with low exposure to computers for entertainment and education and medium television exposure, was associated with higher impulsivity and cognitive inflexibility scores – especially in girls.
The research assessed the connection between cognitive functioning and exposure to multiple sources of media through digital multitasking – such as checking emails while watching videos – and involved thousands of children and adolescents in nine European countries.
“Children require protection against the likely adverse impact of [the] digital environment,” concludes the research published in Scientific Reports, part of the Nature portfolio of journals.
“Exposure to smartphones and media multitasking were positively associated with impulsivity and cognitive inflexibility while being inversely associated with decision-making ability.”
Prof Reisch advocates ‘guardrails’ for children’s online activity, rather than blocking their access outright.
“Governments should ensure legal guardrails for safe and beneficial use of digital media,” she says. “This is not about blocking kids from access per se, but about making it a child-safe environment. This may include very different things, including limits on hours of use of digital media within schools and limiting potentially damaging content.
“Platforms such as those owned by Meta – including Facebook and Instagram – where children and teens spend most of their time, have a huge responsibility to curate the platforms to minimise the potential dangers. Examples include age certification and filters; monitoring and feedback on social media hours; soft defaults limiting social media hours (by parents or teens themselves as a kind of self-nudge); and limiting or banning picture filters that seem particularly impactful on girls regarding body image.”
The ‘guardrails’ approach for children is also relevant to the emerging challenges posed by AI, says Prof Reisch.
AI-enabled toys are already on sale. Robots, puzzles, Lego, teddy bears – the list is long and growing by the week. One game, Pictionary, uses AI to guess what a child’s drawing shows. On the box, it says the game was “designed with children’s privacy in mind”.
Pictionary’s new AI element was trained using “millions of user-submitted drawings”, says maker Mattel, and it integrates Google’s ‘Quick, Draw!’ software, which is available online. In addition, there is an online version called Pictionary Air.
So as AI is increasingly integrated into the toys and games our children play, how cautious should we be?
Prof Reisch told the Cambridge Independent: “While the UK generally has adopted a ‘light guardrails’ approach regarding AI regulation, the EU has developed much stricter AI regulations based on the level of societal risk involved.
“AI potentially harming children is seen as high risk. Toys are part of broader legislative efforts, such as the European AI Act that has just been largely agreed on and will soon come into effect.
“In general, the continent uses a more conservative, precautionary approach to AI, and AI-augmented toys such as AI-based speaking dolls have been banned in some countries since they carry clear (cyber) security risks, even beyond the ethical challenges they pose, eg parents spying on their children’s play.”
A spokesperson for the Department for Science, Innovation and Technology said: “The UK is a leader in AI safety, from hosting the world’s first AI Safety Summit in November last year, to our recent commitment of £10 million to ensure our regulators have the skills and expertise to address AI risks in their respective areas. Work is also well under way to better understand and mitigate the cyber security risks associated with AI.
“From 29 April, a raft of world-first measures will also be introduced to boost the security of products with internet connectivity, including toys, and to offer new protections for consumers. This includes a ban on default passwords, better transparency over how long a product will receive security updates, and a process for reporting bugs in devices.”
On 14 March, the EU approved the Artificial Intelligence Act, the world’s first recognised set of rules designed to regulate this technology. The new rules ban AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.
The AI Act has clout too: transgressors can be fined up to 35 million euros or 7 per cent of global annual revenue, whichever is higher.