

AI: 'It's not just a playground'




Professor Stephen Hawking with Newton’s copy of Principia Mathematica. Picture by Graham CopeKoga

Is Prof Stephen Hawking right to say AI is a danger to humanity? Dr Stella Pachidi says machine intelligence needs closer scrutiny

Computers, if they link up in an AI-enabled framework, will change the world forever.

Professor Stephen Hawking has stepped up his warnings that artificial intelligence (AI) robots could outperform their human designers, going so far as to suggest that robots could replace human life completely.

In a new interview published in Wired magazine this month, the Cambridge-based theoretical physicist, cosmologist and author said: “I fear that AI may replace humans altogether.

“If people design computer viruses, someone will design AI that improves and replicates itself.

“This will be a new form of life that outperforms humans.”

Dr Stella Pachidi, lecturer in information systems at Cambridge Judge Business School, says an ethical framework needs to be established to ensure AI remains beneficial to humankind.

The world-renowned scientist also suggested in the interview that more interest needs to be taken in science or there will be “serious consequences”.

So how should we engage with science and the AI issues Prof Hawking describes?

Dr Stella Pachidi, lecturer in information systems at Cambridge Judge Business School, says that the public is only just starting to wake up to the incredible ways that algorithms are used. For the uninitiated, an algorithm is a set of rules that a computer follows in calculations and other problem-solving operations, whether it’s curating your Facebook feed, giving you a product recommendation on Amazon or instructing a robot to spray-paint a car.
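To make the idea concrete, here is a minimal, entirely hypothetical sketch in Python of what such a set of rules might look like – a toy recommender that suggests items from categories a shopper has already bought from. The catalogue and rule are invented for illustration; real systems at Amazon or Facebook are far more sophisticated and are not public.

# Toy illustration only: an "algorithm" as an explicit set of rules.
# Hypothetical throughout -- this is no real platform's code or data.

catalogue = {
    "kettle": "kitchen",
    "toaster": "kitchen",
    "novel": "books",
    "cookbook": "books",
}

def recommend(purchased):
    """Suggest catalogue items sharing a category with past purchases."""
    liked = {catalogue[item] for item in purchased if item in catalogue}
    return [item for item, category in catalogue.items()
            if category in liked and item not in purchased]

print(recommend(["kettle"]))  # ['toaster'] -- a rule, mechanically applied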

“With AI there will certainly be consequences, as there are with all technology,” Dr Pachidi told the Cambridge Independent. “There is a lot of fuss going on and I think it’s a good thing. We have started talking about it, and we are indeed imbuing not just AI but any technology with decision criteria that reflect the programmers’ own values when deciding what data to include or exclude. It is becoming prevalent in more profound ways – the Facebook algorithm, for instance, regarding what information comes into our news feed, or Google’s algorithm with TripAdvisor.

Artificial intelligence and human brains do not offer like-for-like pathways.

“The differences with artificial intelligence are that the technologies are self-developing, so they are learning from the data they are fed with. The algorithms are based on data input and the bias isn’t necessarily in how we design the technology but how we feed the algorithms with data, and this is something the public is less concerned about. The consequences are not realised, and the conversations are starting, but we need to train ourselves how to use data and see how the data leads to certain actions.”
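Her point can be made concrete with a tiny, entirely hypothetical sketch in Python: the learning rule below is neutral by design, yet any skew in the historical data it is fed flows straight through into its decisions. The scenario and data are invented for illustration.

from collections import Counter

def train_majority_model(past_outcomes):
    """'Learn' by memorising the most frequent past outcome."""
    most_common = Counter(past_outcomes).most_common(1)[0][0]
    return lambda applicant: most_common

# Nothing in the code above mentions any group or attribute, but a model
# trained on skewed history simply repeats that history.
model = train_majority_model(["reject", "reject", "reject", "accept"])
print(model({"name": "anyone"}))  # 'reject'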

AI is already in use, of course, but the sector lacks an ethical framework, partly because corporations are reluctant to share their data with the public, since that would mean sharing it with competitors. This lack of transparency in more public arenas – social media, for instance – has the potential to cause much more profound problems when it comes to AI. Some job applications, for example, are already handled by connecting to applicants’ LinkedIn profiles or by ranking people’s qualifications on Upwork.

“There are start-ups developing very advanced technologies – in a video interview, for example, an algorithm is evaluating your qualifications based on how you performed. Those technologies are just around the corner. You also need to think about the consequences when different types of data are combined through big data technologies: what if your LinkedIn data is analysed with other sorts of data? Information from any digital footprint could be used – Twitter or Facebook likes, for instance.”
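As a purely illustrative sketch – the field names, weights and sources below are all invented, and no real LinkedIn, Upwork or Twitter interface is used – combining such footprints into a single ranking could be as crude as this:

def candidate_score(linkedin, upwork, twitter):
    """Naively blend signals from separate profiles into one number."""
    return (2.0 * linkedin.get("endorsements", 0)
            + 1.5 * upwork.get("job_success_rate", 0.0) * 100
            + 0.1 * twitter.get("followers", 0))

# Each source looks innocuous on its own; merged, they rank a person by
# a formula the applicant never sees and cannot contest.
print(candidate_score({"endorsements": 12},
                      {"job_success_rate": 0.9},
                      {"followers": 300}))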

Dr Pachidi – and indeed Prof Hawking – asserts that, as a society, we are faced with a future that’s going to look very different from what’s gone before.

“What will happen to truck drivers once driverless trucks take over, or call centre and customer service employees once algorithmic solutions take their place?” she asks.

“This is an issue that we as a society are faced with – what is our economy going to look like, and how will we live with this technology in the future? It’s not a case of ‘there’s nothing we can do about it’. We need to have conversations about how technology will augment us, and to look for ways to rechannel human resources and develop different kinds of skills so people can work together with the technology in a different kind of way.

“The consequences of technology are quickly leading to the point where we need to start talking about these issues on a policy level – ‘this is what a picture of new technologies looks like’, ‘this is how corporations are getting more and more power’ and ‘should corporations be more transparent about how they develop algorithms?’ and ‘what ethical frameworks should be used?’”

The answer could lie in self-education.

“You should not underestimate the agency of the user, so choose what information you share, who you follow and unfollow, and which digital profiles you link together and how. People need to be aware that the digital environment is consequential. It’s not just a playground – use it wisely. It takes time for humans to adjust, and we don’t have time.”

One of the first steps could be to oblige Google, Facebook, TripAdvisor and other tech giants to open up their algorithms to public scrutiny, especially since fake news so clearly influenced the US election a year ago.

“I don’t think they’re doing enough,” says Dr Pachidi of the digital age’s tech giants. “That could be because their algorithms offer a competitive advantage. But events are making institutions realise that they might need to put more pressure on these companies to act – and to think about how, at a citizen level, we inform people of the issues. Why do we expect users to be informed by Facebook and Twitter?”

It seems being informed by social media too often means being misinformed, or at least misguided. So if we let technology companies continue to develop their algorithms outside of any proper scrutiny, what chance have we got with AI-enabled robots?


