
Interacting with technology! The newest technological innovations should be able to interact with us, predict our needs, and adapt to our surroundings. This makes the combination of audio technologies and artificial intelligence essential for creating more intelligent solutions based on users' needs, e.g. IoT units (wearables and hearables), interactive robot technology, and personalized speech/sound systems. In this track you will be presented with some of the newest innovations and discussions of the opportunities and risks connected to the advancements in AI & Sound technologies.

Registration here: https://www.hightechsummit.dk/


  • 14:05 – 14:15: Introduction: “The increasing importance of AI & Sound” and a brief introduction to the “Sound & Health” project, Kira Vibe Jespersen, Postdoc, AU Center for Music in the Brain
  • 14:15 – 14:35: Business Talk: “AI and headsets/speakerphones”, Rasmus Kongsgaard Olsson, Senior Research Scientist, GN Audio – Jabra
  • 14:35 – 14:55: Business Talk: “AI and hearing aids, the newest applications”, Jens Brehm Bagger Nielsen, Architect, Data Science and Machine Learning, Widex
  • 14:55 – 15:15: Business Talk: “AI and hearing aids, the newest applications”, Niels H. Pontoppidan, Research Area Manager, Eriksholm Research Centre, Oticon
  • 15:15 – 15:35: Business Talk: “AI and auditory digital assistants”, Andreas Cleve, CEO, Corti
  • 15:35 – 16:05: Debate: “Opportunities of the newest advancements in AI & Sound technologies”, Participants: Marco Scirea, Tommy Sonne Alstrøm, Christian Munk Scheuer.

Reception at the “Sound Zone” exhibition stand afterwards.

Facilitator: Anders Høeg Nissen, tech journalist, podcaster, speaker, and moderator.


Kira Vibe Jespersen: Kira Vibe Jespersen holds a PhD in health sciences from Aarhus University. She is now a postdoctoral researcher at the Danish National Research Foundation’s Center of Excellence for Music in the Brain. Her research focuses on the use of music technologies to promote health, as well as the neurobiological mechanisms underlying the potential effects of music in health care. In particular, she has been interested in disentangling the effects of music as a sleep aid.


Rasmus Kongsgaard Olsson: Rasmus Kongsgaard Olsson has worked at GN Audio A/S for the last 11 years, focusing on signal processing and machine learning to help make headsets and speakerphones sound better. Prior to this, Rasmus completed his Master’s and Ph.D. degrees at the Intelligent Signal Processing group at DTU, where his work centered on applying machine learning methods to the cocktail party problem.


Jens Brehm Bagger Nielsen: Jens Brehm Bagger Nielsen currently holds a position as Architect, Machine Learning & Data Science, at Widex A/S. He holds an M.Sc.E.E. from the Technical University of Denmark (DTU) with a focus on acoustics, digital signal processing and machine learning. Additionally, he holds a Ph.D. degree from DTU Compute, completed in collaboration with Widex A/S, in the area of machine learning with a focus on Bayesian non-parametric models, reinforcement learning and Bayesian optimization. This work has now been commercialized with the launch of Widex Evoke and the SoundSense Learn feature, which enables end users to personalize their hearing aid settings on their own in the real world. He has authored several scientific publications related to machine learning and audiology.


Niels H. Pontoppidan: Dr. Pontoppidan heads the Augmented Hearing research group at Eriksholm Research Centre, Oticon. The group investigates how future hearing devices will use artificial intelligence, machine learning, and advanced analytics, embedded in hearing devices and connected to the cloud, to solve problems for people with hearing loss. It looks at new ways to compensate for segregation problems and new ways for audiologists to learn how people with hearing loss use their hearing devices. The group asks itself three questions: first, what knowledge about the hearing problem is missing; second, how can that knowledge be obtained in real life or in the lab; and finally, how can algorithms – simple or complex – compensate for this problem? Since November 2016, Dr. Pontoppidan has coordinated EVOTION: “Big Data Supporting Hearing Health Care Policies”, which obtained EUR 5 million in funding from the European Commission’s Horizon 2020 programme.


Andreas Cleve: More information to come


Marco Scirea: Marco Scirea is currently Assistant Professor at the University of Southern Denmark (SDU). He completed a PhD on affective music generation and its effect on player experience at the Center for Computer Games Research at the IT University of Copenhagen. His research investigates the expression of moods in music and the affective effect this mood-expressive music can have on the listener, applying this research to games. The final objective is to create a system that, by using a cognitive model of the player, can identify the player’s emotional state and reinforce or manipulate it through mood-expressive music to improve user experience. This research aims to create better immersive experiences (reinforcing the current emotional state and playstyle) and to help designers create experiences that put players in a specific emotional state.


Christian Munk Scheuer: Christian Scheuer is an internationally acclaimed sound designer, composer, software developer and serial entrepreneur based in Copenhagen and Los Angeles. He is the founder of the SoundFlow platform, which launched in April 2016 and has clients in all major film capitals around the world. Scheuer graduated from the National Film School of Denmark in 2015 and holds a BA in Film and Media from the University of Copenhagen as well as a music degree from Humboldt-Universität in Berlin. Christian has worked on numerous award-winning feature films, TV shows and short films. His films have been screened at more than 50 festivals around the world.
Alongside his film career, Scheuer has developed software and created platforms for SMBs in Denmark and abroad for more than two decades.


Tommy Sonne Alstrøm: Tommy Sonne Alstrøm holds a PhD in machine learning for signal processing and currently holds a position as a senior researcher in the Cognitive Systems section at DTU Compute. He has 10 years of research experience in machine learning and, prior to that, 10 years of industry experience as a software engineer. His primary research interests include Bayesian inference, active learning and deep learning, with application to sensory data.


Anders Høeg Nissen: Anders Høeg Nissen has been working as a broadcast journalist and radio presenter for more than twenty years, covering science and technology for the Danish Broadcasting Corporation (DR).
Since 2017, he has been working as an independent podcaster, journalist, consultant and public speaker – still with a focus on the benefits and challenges of tech.