ISCA Special Interest Group: Security and Privacy in Speech Communication (SPSC)

A new community for researchers

Smart speech technology driven by AI has recently become commonplace: we have voice assistants in our phones, TVs, smart home appliances, and more. The rapid introduction of this technology, however, risks backfiring, because people are becoming increasingly worried about their privacy, and rightly so. Poorly protected privacy in smart speech interfaces can open the door to invasive and unethical exploitation of users.

Image by Tom Bäckström
  • Banks, insurance companies and healthcare providers could covertly monitor users’ health to gain unethical leverage: insurance, loans and treatments could be denied based on such information.
  • When smart devices are used by multiple people, family members could secretly collect information about each other. For example, a jealous ex-partner could use smart devices for stalking.
  • When responding to speech commands, smart devices could reveal sensitive personal information even in a public place. For example, if your phone loudly responded “Did you ask about sexually transmitted diseases?” while you were sitting on a crowded bus, you would likely feel uncomfortable.
  • Government actors, criminals and hackers could use speech technology for criminal and unethical purposes, such as spying, impersonation and identity theft.
At the same time, speech technology can also be used to improve privacy and security:

Image by Sneha Das
  • Access management can benefit from speaker recognition: speaker verification could grant access to your phone, bank services or buildings (a minimal sketch follows this list).
  • Acoustic scene classification could automatically trigger appropriate privacy and security settings on mobile devices.
  • Mobile devices could monitor your health status without infringing on your privacy.
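To illustrate the speaker-verification idea, here is a minimal sketch of the common embed-compare-threshold approach: an utterance is mapped to a fixed-length speaker embedding, which is compared with the enrolled speaker’s embedding by cosine similarity. The extract_embedding placeholder and the 0.7 threshold are illustrative assumptions only; a real system would use a trained neural speaker encoder and a calibrated decision threshold.

    import numpy as np

    def extract_embedding(audio: np.ndarray) -> np.ndarray:
        # Placeholder for a trained speaker encoder (an assumption for
        # this sketch, not a real model): a fixed random projection of
        # simple waveform statistics, just so the example runs end to end.
        rng = np.random.default_rng(seed=0)  # fixed seed: deterministic "model"
        stats = np.array([audio.mean(), audio.std(), np.abs(audio).max()])
        return rng.standard_normal((16, 3)) @ stats

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify_speaker(enrolled_utterance: np.ndarray,
                       new_utterance: np.ndarray,
                       threshold: float = 0.7) -> bool:
        # Accept the access attempt only if the new utterance's embedding
        # is close enough to the enrolled speaker's embedding.
        enrolled = extract_embedding(enrolled_utterance)
        attempt = extract_embedding(new_utterance)
        return cosine_similarity(enrolled, attempt) >= threshold

A call such as verify_speaker(enrollment_audio, login_audio) would then gate access to a device or service. The privacy question is what else such embeddings reveal about the speaker beyond identity, which is exactly the kind of trade-off this community studies.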

An excited researcher of life and everything. Associate Professor in Speech and Language Technology at Aalto University, Finland.