
AI Ethics course at Cambridge University is a first




The University of Cambridge has launched the UK’s first master’s degree in managing the risks of artificial intelligence (AI).

The master’s in AI Ethics is led by the Leverhulme Centre for the Future of Intelligence (CFI), an interdisciplinary research centre based at the University of Cambridge. Set up in 2016, the CFI has established itself at the forefront of AI ethics research worldwide, working in partnership with the University of Oxford, Imperial College London, and UC Berkeley.

The CFI is now partnering with the university’s Institute of Continuing Education to deliver the two-year, part-time master’s degree. The first course starts in October 2021.

Dr Stephen Cave, executive director of the CFI, said: “People are using AI in different ways across every industry, and they are asking themselves, ‘How can we do this in a way that broadly benefits society?’

“We have brought together cutting-edge knowledge on the responsible and beneficial use of AI, and want to impart that to the developers, policymakers, businesspeople and others who are making decisions right now about how to use these technologies.”

The Covid-19 pandemic has seen artificial intelligence rushed into experimental use at scale, bringing the importance of ethical AI competence into even sharper relief: AI has been deployed against the pandemic in vaccine development, early diagnosis and contact tracing.

Yet artificial intelligence is one of the biggest issues of modern times. Popularised in science fiction by killer robots such as those in The Terminator and Westworld, ‘thinking machines’ have huge potential to greatly enhance life for billions of people. But the technology, already part of everyday life in forms such as Amazon’s virtual assistant Alexa, facial recognition and Google Maps, has downsides too.

[Image: Sophia UN AI]

It can embed sexism, as when an Amazon algorithm for ranking job applicants automatically downgraded women, or enable intrusive surveillance, as with facial recognition algorithms that decide who is a ‘potential criminal’. The algorithm that initially marked down A-level grades for pupils from disadvantaged backgrounds while benefitting pupils at private schools is another example of bias baked into technology, and University of Cambridge researchers have found that such bias affects people of colour too.

CFI researchers have concluded that “the overwhelming ‘whiteness’ of artificial intelligence – from stock images and cinematic robots to the dialects of virtual assistants – removes people of colour from humanity’s visions of its high-tech future”.

Some of the confusion about AI arises because these systems are not, in fact, ‘thinking machines’: they are programmed machines, and their output reflects choices made not by the machine but by its programmers. That makes an AI Ethics course a welcome aid to understanding how they are used.

The course will cover privacy, surveillance, justice, fairness, algorithmic bias, misinformation and microtargeting, Big Data, responsible innovation and data governance. The curriculum spans a wide range of academic areas, including philosophy, machine learning, policy, race theory, design, computer science, engineering and law. Because the course is run by a specialist research centre, successful applicants will study the latest research in the field, taught by world-leading experts.

Applications for the first year must be submitted by March 31.



