
Audio Analytic has just made smartphones smarter

The embarrassment caused by your phone going off at maximum volume in a quiet space could soon be a thing of the past.

Audio Analytic CEO Chris Mitchell. Picture: Keith Heppell

Audio Analytic has teamed up with Qualcomm to make our smartphones smarter and sense the environment around them.

The Cambridge company’s acoustic scene recognition technology can now be embedded in mobiles so that they are aware of whether the user is in a ‘chaotic’, ‘lively’, ‘calm’ or ‘boring’ environment.

This can be used to adapt how phones behave when calls or notifications arrive, reducing their volume in quieter environments, or preventing you from missing an important call because your phone is still on vibrate when you’re in a busy bar.
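In simplified terms, the scene-aware behaviour described above amounts to mapping the detected environment to a notification volume. A minimal sketch of that idea (the function name and volume values are hypothetical, not Audio Analytic's actual API, which the article does not describe):

```python
# Illustrative sketch only -- the article does not detail Audio Analytic's
# API, so the function and volume values below are hypothetical.

def ringer_volume_for(scene: str) -> int:
    """Map a detected acoustic scene to a ringer volume (0-100)."""
    scene_volume = {
        "calm": 20,      # e.g. a quiet office: keep alerts discreet
        "boring": 40,
        "lively": 70,
        "chaotic": 100,  # e.g. a busy bar: ring at full volume
    }
    return scene_volume.get(scene, 60)  # fall back to a middling default
```

In a real handset this decision would run continuously in the low-power Sensing Hub rather than on the main processor, which is what makes the always-on mode described below practical.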

Audio Analytic CEO Dr Chris Mitchell said: “Sound recognition is the most exciting branch of artificial intelligence right now. As humans, we make sense of the world around us through sound, and by empowering machines with a human-like sense of hearing we’re enabling the next wave of innovation on smartphones.”

The company, based in Quayside, confirmed that its ai3-nano software and Acoustic Scene Recognition AI technology have been pre-validated and optimised to run in an always-on, low-power mode on the Qualcomm Sensing Hub.

The second generation of this sensing hub - part of the Qualcomm Snapdragon 888 5G Mobile Platform - was unveiled at the Snapdragon Tech Summit Digital 2020.

The technology also has important applications for hearing-impaired users, as it can recognise important sounds around them. This means it can alert them to signs of danger, such as smoke or carbon monoxide alarms, or warn them of a knock at the door.

And the innovation means that, for the first time, smartphone users will not have to choose between a voice assistant and sound recognition.

Audio Analytic’s ai3-nano software takes up less than 40kB of space on the chipset, meaning it can run concurrently with the low-power audio subsystem within the Qualcomm AI Engine and the Qualcomm Aqstic audio codec, which is part of the technology supporting voice assistants like Amazon Alexa, Google Assistant and others.

The software also enables tagging of audio content in videos and photos to enable creative editing, social sharing or easy retrieval - meaning you could find the moment a child laughs while on holiday, or share content via social media that takes advantage of sound-related effects and filters applied when a guitar is played.

The technology can also trigger funny video filters and effects “based on the sounds you make for spontaneous silliness in video chats or games with friends and family”. Make a cow sound, for example, and your phone will put a cow filter over your face on a chat.

The applications run on-device, as with all of Audio Analytic’s technology, meaning no information is sent to the cloud for analysis, avoiding privacy concerns.

The ultra-compact code footprint also means it can be used in multiple ways without draining the battery.

The technology relies on Audio Analytic’s machine learning models, which have been trained on huge amounts of diverse data. Its Alexandria dataset, for example, contains 30 million labelled recordings across 1,000 sound classes.

