A deep dive into AI tech at CW
PUBLISHED: 18:31 18 October 2017 | UPDATED: 18:31 18 October 2017
Iliffe Media Ltd
Will AI be operated via the cloud or on the gadget, and other key questions...
CW (formerly Cambridge Wireless) is well known for hosting tech-related events, with its annual conference being one of the highlights of the Cambridge calendar, but it schedules events at other points of the year too – including a one-day conference on artificial intelligence (AI) at the William Gates Building, which took place earlier this month.
The full title for the occasion was the CW Technology & Engineering Conference (CW-TEC) on Artificial Intelligence, subtitled ‘Underlying technologies – how they work and how they are applied’. Hosted by the Computer Laboratory and with sponsors including Magna, Innovate UK and Prowler.io, you’d be right in thinking this CW-TEC event was aimed at the engineering end of AI rather than the more philosophical musings which have graced much recent media output on the topic. But the fact is, everyone’s talking about AI, so it makes sense to get down and dirty and find out what’s going on at a more granular level. CW CEO Bob Driver describes the CW-TEC day as “more of a deep dive into a deep-tech area that engineers and executives might want to have a look at”.
With speakers from Google, Amazon, Apple and Arm, the day could only be counted a success, and it came in two parts. The morning’s proceedings included talks from Apple, Google and Prowler.io, the latter fresh from its successful £13 million funding round for its probabilistic modelling platform.
Professor Steve Young was first up: he may be listed as a professor of information engineering at the University of Cambridge but his bio in the CW blurb gives the game away when it states that “he is now a senior member of technical staff in the Apple Siri development team based in Cambridge”.
Prof Young talked about the advances being made in the sector: much of the underlying technology on which AI is based has been around for some time – for instance, the mathematical algorithms that have been in use since the 1950s and 60s.
The big difference today, of course, is the sheer computing power available, and the huge data sets available in the cloud, of which more later.
Next up was Theophane Weber, a senior research scientist with Google DeepMind, who got stuck in on deep learning and neural networks. Neural nets are information-processing systems loosely modelled on the way the brain makes decisions, and AI will have to be able to make similar decisions to be effective. They involve relational reasoning, so when I watch Sherlock I’m watching Benedict Cumberbatch cast his neural nets ever wider to bring a crime and its perpetrator within his relational reasoning.
We all do something similar when we’re at the supermarket and try and work out whether we can afford to buy an organic or a non-organic product! Bob Driver put it best when he said that the various models under discussion are “about getting computers to think, not just crunch data”.
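For the technically minded, that supermarket decision can be sketched as a single artificial neuron – the basic building block of a neural net. This is a toy illustration rather than anything presented at the event, and the inputs, weights and threshold are entirely invented:

```python
import math

def sigmoid(x):
    # Squash any number into the range 0..1
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # A neuron takes a weighted sum of its inputs, then squashes
    # the result into a confidence score between 0 and 1
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Invented inputs: [money left in the weekly budget (in pounds),
# price difference between organic and non-organic (negative = dearer)]
score = neuron([10.0, -2.5], [0.3, 1.0], -0.2)
decision = "organic" if score > 0.5 else "non-organic"
print(decision, round(score, 3))
```

A real deep network stacks many thousands of these neurons in layers and learns the weights from data, rather than having them set by hand as here.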
Next up was Professor Carl Edward Rasmussen, who is professor of machine learning at the University of Cambridge as well as being chairman of Prowler.io. Prof Rasmussen put up a slide of Bent Larsen playing chess against Deep Thought in the late 1980s. The computer did the calculation and someone moved the piece on the board according to its diktat. The point Prof Rasmussen made was that to compare the human against the computer in this situation was unfair because the computer wasn’t capable of moving its own chess pieces. Quite often with AI it’s the simple things we take for granted that a computer can’t achieve. For instance, when I wake up of a morning I go into the kitchen, turn the radio on, grab some milk from the fridge and some coffee from the jar, fire up the coffee maker and check my phone. AI can potentially complete all these individual tasks, but doing them all in under a minute is a long way off.
Just before lunch came Dr Tony Robinson, founder and CTO of Cambridge-based Speechmatics, who spoke about the revolution in speech recognition technology. “All well and good,” said Bob Driver, “but something really cool happened – as he spoke his words were simultaneously and automatically transcribed into text and displayed on the screen behind him, beneath his presentation slides.”
The programme changed tack after lunch. “The afternoon was really about the tools that people use for AI,” explained Driver. “A lot of toolkits are open-source now, so you can use them to develop an AI application far more easily than you could in the past.”
Speakers included Alison Lowndes of NVIDIA, Simon Knowles of Graphcore and Dr Peter Baldwin and Dr David Page of Myrtle Software.
After the mid-afternoon tea break it was the turn of Anton Lokhmotov of dividiti. “We’re now racing towards creating AI systems and solutions,” he told the audience. “Both software and hardware, all the way from IoT to supercomputing. What is getting in the way is the massive number of choices. If you’re one of those who wants to introduce AI into your systems, you’ve got to make some decisions. How do you select what’s important? Some people joke: ‘Why don’t we get AI to design AI?’. But that is exactly what we need to do. What I’d like to encourage is for this community to collaborate more, then, gradually, we’ll design a better system. And finally we can create truly self-optimising, self-learning systems.”
After this, most events would have gone into slow decline mode, but CW-TEC kept two key speakers until the end: Jem Davies, fellow and general manager of machine learning at Arm, and Cyrus Vahid, principal solutions architect at Amazon Web Services (AWS).
Jem started off with the Arm soundbite: 50 billion chips sold to partners so far, 100 billion more expected to be sold in the next four years. “Wherever computing happens, that’s where we’ll aim to be.” Chips for drones, for Amazon’s Alexa, for the Huawei Mate 9 ‘phablet’. Jem reflected on how much has changed. Is predictive text machine learning? Ten years ago no, but “today it certainly is”.
He typed the name of one of Arm’s projects, an almost unknown Norse goddess, into his phone. “Only one or two people here” – there were 200 people in the audience – “would have ever heard that word, and it got it right first time.”
Also, “a phone manufacturer can predict the sex of the user with 90 per cent accuracy just by the way you walk – good or bad, it’s foundational technology”.
Cyrus’ mission is “about making technology an extension of humanity”. A lot of what Cyrus is trying to achieve is “trying to change the multiplication so the number of operations goes down”. If the computer has to do fewer operations to perform a function, it will complete the job faster. Cyrus set out four ways of doing this:
1. Pruning – removing the less effective connections;
2. Quantization – using fewer bits to express the same information;
3. Huffman coding – a data compression algorithm; and
4. Reduced architecture: with 50 times fewer parameters you could potentially still do the job with the same accuracy, just faster.
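As a rough illustration of point 2, quantization, here is a minimal Python sketch – my own, not from the talk, with the weight values invented for the example. It stores a set of floating-point weights as 8-bit integer codes and then recovers close approximations:

```python
def quantize(weights, bits=8):
    """Map each float onto one of 2**bits evenly spaced levels."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    codes = [round((w - lo) / scale) for w in weights]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Recover approximate floats from the integer codes."""
    return [lo + c * scale for c in codes]

# Invented example weights, as a stand-in for a layer of a network
weights = [0.12, -0.48, 0.90, 0.03, -0.77]
codes, lo, scale = quantize(weights)
approx = dequantize(codes, lo, scale)

# Each code fits in 8 bits instead of a 32-bit float, at the cost
# of a small rounding error per weight
max_err = max(abs(w, ) if False else abs(w - a) for w, a in zip(weights, approx))
```

Each weight now occupies 8 bits rather than 32, and the worst-case rounding error stays below half a quantization step – which is why, as Cyrus argued, the computer can do fewer (and cheaper) operations yet express much the same information.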
The two speakers outlined their versions of the road AI should be going down: Arm seems to be saying that the gadget is where the action should take place, pointing out that uploading data to the cloud and waiting for it to come back to the gadget you’re using renders real-time activities such as driving a car impossible. Amazon is saying upload it all to the cloud and speeds will pick up. What happens next is anyone’s guess. I bet even AI couldn’t tell you how it’ll play out...
To join CW or find out more about its events, visit cambridgewireless.co.uk.