Voice assistants are increasingly used for internal communication within companies as well as in customer service. However, alongside the many benefits and improvements, the security of voice assistants is a concern for many companies.
There is often uncertainty about what data voice assistants capture, how it is processed and whether its security is guaranteed. In this article, you will learn what to consider when using voice assistants in your company.
Before we dig deeper into the security of voice assistants and the concerns companies have, it is important to know what data voice assistants collect.
- Voice recording data: Voice assistants usually only record conversations once they have been activated. Depending on the assistant, activation is triggered by a wake word such as “Hey Alexa” or “Okay Google”. As soon as the voice assistant recognizes this wake word, it records the audio signals in its environment. This means that all questions, voice commands or conversations that take place during this period are recorded, including data about your search queries, purchases or created calendar events.
- User profile data: Less obvious, but not to be neglected, is the storage of personal user data. When talking to a voice assistant, users are usually logged in with a personal profile. This means that the user has previously created an account in which data such as their name, e-mail address, postal address or even bank details are stored.
- Internet connection data: In addition, data on the internet connection is also collected. This includes, for example, the IP address.
The data is sent to a server and processed in the cloud. Usually, it is not deleted immediately but stored at least temporarily.
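The wake-word mechanism described above can be sketched in a few lines. This is a simplified, hypothetical illustration (the function and data names are invented, not a real assistant's API): audio is ignored until a wake word is recognized, and everything captured afterwards is the data that leaves the device.

```python
# Hypothetical sketch of a wake-word-gated capture pipeline.
# Chunks of transcribed audio are ignored until a wake word appears;
# everything after it is buffered, and in a real assistant would be
# sent to a cloud server for processing and (at least temporary) storage.

WAKE_WORDS = ("hey alexa", "okay google")

def capture_after_wake_word(transcribed_chunks):
    """Return only the chunks heard after a wake word was recognized."""
    recording = False
    captured = []
    for chunk in transcribed_chunks:
        if not recording and chunk.lower().strip() in WAKE_WORDS:
            recording = True           # activation: start capturing
            continue
        if recording:
            captured.append(chunk)     # this is the data that leaves the device
    return captured
```

For example, `capture_after_wake_word(["background chatter", "Hey Alexa", "schedule a meeting"])` returns only `["schedule a meeting"]`: the ambient conversation before activation is never captured, which is exactly why the activation step matters for privacy.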
Since voice assistants collect, process and store a large amount of data, questions regarding their security arise. In general, two use cases must be distinguished: internal and external use.
Regarding internal use, voice assistants are used to organize and simplify internal communication. This can be, for example, searching for information or scheduling meetings.
Storing these requests on external servers is a concern for many organizations. Processing and storing data in the cloud could create security gaps and put sensitive company data in the wrong hands.
Furthermore, the recordings could be used to create so-called fake recordings: artificially generated voice clips that sound authentic and could thus be used to fabricate false statements.
For external use, voice assistants are deployed in customer service, for example for making appointments or for on-site services. A voice assistant in a hotel, say, can provide information about the hotel or the surrounding area upon request.
When used externally, GDPR plays an important role. If companies collect, process or use personal data, they are obliged to comply with data protection standards.
Since the data is usually processed by a third party and not by the company itself, it is all the more important to guarantee compliance with data protection guidelines.
By the way: Onlim’s products are GDPR compliant.
It is mainly data processing and storage that raise concerns in many companies, and for these there are already several promising solutions.
For example, a joint project of the Technical University of Darmstadt and the Rosenheim University of Applied Sciences is working to contain the security risks of voice assistants. Its “Voice Guard” software architecture makes it possible to completely isolate voice processing and thus protect all data, even in real time.
Mycroft Mark II offers another interesting approach. In contrast to most competing models, this voice assistant relies on open-source software, which allows users to configure it so that data is processed locally rather than sent to the cloud.
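The design choice behind this approach can be illustrated with a short sketch. This is not Mycroft's actual API; it is a hypothetical configuration switch (invented names throughout) showing how an open-source assistant can route speech-to-text either to an on-device engine or to a remote service.

```python
# Hypothetical sketch: choosing between local and cloud speech-to-text.
# With a "local" backend the audio never leaves the device; with a
# "cloud" backend it is sent to a third-party server, which is where
# the data protection concerns discussed above arise.

def local_stt(audio):
    # Placeholder for an on-device recognition engine (e.g. a bundled model).
    return f"local:{audio}"

def cloud_stt(audio):
    # Placeholder for an HTTP call to a hosted speech-to-text API.
    return f"cloud:{audio}"

def transcribe(audio, config):
    """Dispatch transcription according to the configured backend."""
    if config.get("stt_backend") == "local":
        return local_stt(audio)    # data stays on the device
    return cloud_stt(audio)        # data is sent to an external server
```

Because the software is open source, users can inspect and change this routing themselves instead of trusting a vendor's opaque default.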