Track 2 – AI & Sound

Introduction
AI, and not least deep AI, is making its way into sound applications and is changing the sound application landscape dramatically, including the products and solutions of sound-based companies. Sound is becoming computationally intensive: rather than merely offering high-quality sound reproduction, deep AI makes it possible, for example, to improve poor original sound almost beyond imagination, or to put sound to other sophisticated uses. The session focuses on three interesting applications, already in use or expected to enter use in the foreseeable future in Danish sound-based companies, presented by researchers and engineers already deeply involved in employing AI techniques in emerging products and solutions for future markets.

Talks:

“Augmented Hearing: Using a deep neural network separation algorithm for improved speech segregation in hearing-impaired listeners”
by Lars Bramsløw, Eriksholm Research Center, M.Sc.E, Ph.D., Senior Scientist

Deep AI may be employed in hearing aids to, for example, separate the simultaneous speech of two other persons, even in a noisy environment, and present each talker in a separate ear, allowing a listener with impaired hearing to focus on one speaker or the other. Even today's most sophisticated hearing aids cannot offer that option. People with normal hearing are able to separate the voices of two people talking to them simultaneously and to focus on the preferred one; deep AI promises to offer similar capabilities to hearing-impaired people. The presentation will focus on this and other uses of deep AI.
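The abstract does not spell out the algorithm, but as a rough illustration of how mask-based neural talker separation typically works, consider the minimal PyTorch sketch below. All names and sizes are hypothetical, not Eriksholm's actual system: a recurrent network estimates one time-frequency mask per talker from the mixture spectrogram, and each masked output could then be routed to a different ear.

    import torch
    import torch.nn as nn

    class TwoTalkerMaskNet(nn.Module):
        # Hypothetical mask-based separator: a bidirectional LSTM predicts
        # one time-frequency mask per talker from the mixture spectrogram.
        def __init__(self, n_freq=257, hidden=256, n_talkers=2):
            super().__init__()
            self.blstm = nn.LSTM(n_freq, hidden, num_layers=2,
                                 batch_first=True, bidirectional=True)
            self.masks = nn.Linear(2 * hidden, n_freq * n_talkers)
            self.n_freq, self.n_talkers = n_freq, n_talkers

        def forward(self, mag):                    # mag: (batch, frames, n_freq)
            h, _ = self.blstm(mag)
            m = torch.sigmoid(self.masks(h))       # masks in [0, 1]
            m = m.view(*mag.shape[:2], self.n_talkers, self.n_freq)
            return m * mag.unsqueeze(2)            # masked magnitude per talker

    # Dummy 1 s mixture at 16 kHz; in practice this is the hearing-aid input.
    wave = torch.randn(1, 16000)
    spec = torch.stft(wave, n_fft=512, hop_length=128,
                      window=torch.hann_window(512), return_complex=True)
    mag = spec.abs().transpose(1, 2)               # (batch, frames, 257)
    left_ear, right_ear = TwoTalkerMaskNet()(mag).unbind(dim=2)

To hear the result, each masked magnitude would be recombined with the mixture phase and inverted with torch.istft; a real hearing-aid version would additionally have to run causally at millisecond-scale latency.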

“Towards end-to-end Speech Enhancement with Neural Networks”
by Clément Laroche, GN Audio A/S (Jabra), Audio Research, Senior Research Scientist

Already today, companies like Google and Amazon deliver products and solutions that rely heavily on deep AI. Portable products and solutions, however, where battery efficiency is key and communication must be seamless, face challenges of their own: computing power is still limited compared to wired solutions. The promises, and the challenges, of deep AI as seen from this portable perspective are what R&D at a provider of conference and headset solutions focuses on.
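To make the footprint concern concrete, here is a minimal sketch (an assumed architecture, not Jabra's) of a small causal convolutional network that enhances the waveform directly, end to end. On battery-powered devices, the parameter and operation count of such a model is as much a design constraint as its output quality.

    import torch
    import torch.nn as nn

    class TinyWaveEnhancer(nn.Module):
        # Hypothetical end-to-end enhancer: a stack of causal 1-D convolutions
        # maps the noisy waveform directly to an enhanced waveform.
        def __init__(self, channels=32, kernel=9, layers=4):
            super().__init__()
            blocks, in_ch = [], 1
            for _ in range(layers):
                blocks += [nn.ConstantPad1d((kernel - 1, 0), 0.0),  # left pad only
                           nn.Conv1d(in_ch, channels, kernel),      # -> causal conv
                           nn.ReLU()]
                in_ch = channels
            self.body = nn.Sequential(*blocks)
            self.head = nn.Conv1d(channels, 1, 1)

        def forward(self, x):                      # x: (batch, 1, samples)
            return self.head(self.body(x))

    net = TinyWaveEnhancer()
    print(sum(p.numel() for p in net.parameters()), "parameters")
    enhanced = net(torch.randn(1, 1, 16000))       # 1 s of noisy 16 kHz audio (dummy)

The left-only padding is what makes each convolution causal: every output sample depends only on past and present input, a prerequisite for seamless real-time communication.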

“How machine learning could help to extract relevant information from acoustic and vibration data”
by Karim Haddad, Brüel & Kjær Sound and Vibration, Research Engineer at the Innovation Group

Acoustic and vibration recordings from microphones and accelerometers generally have a very rich content, which can hide the very information we are looking for. We could, for instance, be interested in detecting a defect in a machine from the noise or vibrations it generates, or we may want to extract relevant acoustic signals buried in background noise, such as aircraft noise recorded by a sound level meter in a city. Those are examples where AI can assist. The number of available databases with audio data is increasing, making it possible to train machine learning systems, in particular deep learning networks, with sufficient quantities of input. The presentation will focus on how machine learning can be employed for acoustic and vibration signals, balanced against the challenges; a simple sketch of the defect-detection idea follows below.
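As a rough illustration of the machine-defect example (not Brüel & Kjær's method; the feature choices and sizes below are arbitrary), one could summarize each vibration record by its energy in a set of frequency bands and train an anomaly detector on records from healthy machines only:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    def band_energies(record, n_bands=32):
        # Summarize one vibration record as log energies in frequency bands.
        spectrum = np.abs(np.fft.rfft(record)) ** 2
        return np.log([band.sum() + 1e-12
                       for band in np.array_split(spectrum, n_bands)])

    # Fit on records from healthy machines only (dummy data stands in here),
    # then flag new records whose spectral signature deviates from them.
    rng = np.random.default_rng(0)
    healthy = np.stack([band_energies(rng.normal(size=51200))
                        for _ in range(200)])
    detector = IsolationForest(random_state=0).fit(healthy)

    new_record = rng.normal(size=51200)            # e.g. one second at 51.2 kHz
    verdict = detector.predict([band_energies(new_record)])  # +1 normal, -1 anomaly

A practical detector would use features tuned to the machine (order spectra, envelope analysis) and curated healthy-machine data, which is where the growing audio databases mentioned above come in.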

11.00-12.00

Seminar Room