ISCA Special Interest Group: Security and Privacy in Speech Communication (SPSC)
A new community for researchers
Smart speech technology driven by AI has recently become commonplace: we have voice assistants in our phones, TVs, smart home applications and more. The rapid introduction of this new technology, however, risks backfiring, since people are becoming increasingly worried about their privacy, and rightly so. Poorly protected privacy in smart speech interfaces can lead to invasive and unethical exploitation of users.
To encourage research in this area, we are proud to announce the founding of the ISCA Special Interest Group “Security and Privacy in Speech Communication” (SPSC). The International Speech Communication Association (ISCA) is a non-profit organization and one of the two main professional associations for speech communication science and technology, the other being the IEEE Signal Processing Society. It is therefore natural that this group is hosted by ISCA.
Privacy and security form an interdisciplinary topic which spans, among other fields, signal processing, computer science (especially cryptography), linguistics, phonetics, acoustics, law, cognitive sciences and medical sciences. With the special interest group, we offer a central hub for all research and researchers in these areas, where like-minded researchers can find new connections, discuss, exchange ideas and develop a joint interdisciplinary understanding of matters in security and privacy. To this end, we intend to organize events such as workshops, special sessions at conferences, meetings and special issues in journals. The group is a self-organizing community, and we are more than happy to embrace events you are organizing.
Research in this area is urgently needed. The problems are so widespread that they are constantly visible in the news. A sampling of recent news articles includes:
- James Vlahos, ‘Smart talking: are our devices threatening our privacy?’ (26 March 2019, The Guardian);
- Roisin Kiberd, ‘Hey, Siri! Stop recording and sharing my private conversations’ (30 July 2019, The Guardian);
- Adam Clark Estes, ‘The bright side of humans eavesdropping on your Alexa recordings’ (17 August 2019, Gizmodo);
- Dorian Lynskey, ‘Alexa, are you invading my privacy? The dark side of our voice assistants’ (9 October 2019, The Guardian);
- Lily Hay Newman, ‘How to keep your smart assistant voice recordings private’ (29 October 2019, Wired).
The range of potential privacy and security problems in speech communication is wide. For example:
- Advertisers and employers could monitor users and employees to gain unethical leverage. This is especially problematic with vulnerable populations such as children, people with disabilities and the elderly.
- Banks, insurance companies and healthcare providers could covertly monitor users’ health to gain unethical leverage: insurance, loans and treatment could be denied based on such information.
- Family members could secretly collect information about each other when smart devices are shared by multiple people. For example, a jealous ex-partner could use smart devices for stalking.
- When responding to speech commands, smart devices could reveal sensitive personal information even in a public place. For example, if your phone responded in a loud voice with “Did you ask about sexually transmitted diseases?” while you were sitting on a crowded bus, many would feel uncomfortable.
- Governmental actors, criminals and hackers could use speech technology for criminal and unethical purposes, such as spying, impersonation and identity theft.
In addition, we are certain that there are further, not-yet-identified unethical or criminal uses of speech technology, or ways in which its weaknesses can be exploited.
On the other hand, when properly applied, speech technology can improve security, privacy and usability. For example:
- Acoustic monitoring can identify dangerous situations (calls for help, sounds of a break-in or fire), and trigger rescue operations.
- Access management can benefit from speaker and voice recognition; speaker verification could grant access to your phone, bank services or buildings (a minimal sketch follows this list).
- Acoustic scene classification could trigger privacy and security settings of mobile devices.
- Mobile devices could monitor your health status without infringing on your privacy.
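To make the access-management example above concrete, below is a minimal sketch of cosine-scoring speaker verification over speaker embeddings. The random vectors, the helper names and the threshold are illustrative placeholders only; a real system would use embeddings from a trained speaker encoder and a calibrated decision threshold.

```python
# Sketch of cosine-scoring speaker verification over speaker embeddings
# (e.g. x-vectors or d-vectors). Embeddings and threshold are placeholders.
import numpy as np

def cosine_score(enrolled: np.ndarray, test: np.ndarray) -> float:
    """Cosine similarity between an enrolled voiceprint and a test embedding."""
    return float(np.dot(enrolled, test) /
                 (np.linalg.norm(enrolled) * np.linalg.norm(test)))

def verify(enrolled: np.ndarray, test: np.ndarray, threshold: float = 0.7) -> bool:
    """Accept the claimed identity if the similarity exceeds the threshold."""
    return cosine_score(enrolled, test) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    voiceprint = rng.standard_normal(256)                        # enrolled speaker (placeholder)
    same_speaker = voiceprint + 0.1 * rng.standard_normal(256)   # slightly perturbed embedding
    other_speaker = rng.standard_normal(256)                     # unrelated embedding
    print(verify(voiceprint, same_speaker))   # likely True: grant access
    print(verify(voiceprint, other_speaker))  # likely False: deny access
```

In deployed systems this kind of check is combined with anti-spoofing countermeasures, since a bare similarity score can be fooled by replayed or synthesized speech, which is precisely the kind of security question this group is concerned with.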
With this background, we are excited about the future. This area offers a plethora of important research tasks whose results can help users protect their security and privacy and improve their experience with speech technology.
Join our community now! (See our web page for instructions.)