
What Amazon Did Wrong By Listening to Your Alexa Voice Commands

It’s not what you think

Bloomberg’s exclusive report detailing how Amazon workers (both employees and contractors) were listening in on users’ voice commands as part of a program designed to improve Alexa’s performance as a voice-activated assistant was a bombshell of an article. It triggered outrage, concern, and paranoia, and since then new reports have shown that Apple, Google, Microsoft, and Facebook were doing essentially the same thing with their voice AI products and services.

Multiple outlets have since written about the discovery, tying it to the broader tension between tech companies and privacy, and arguing that new technologies such as these smart, voice-activated devices are intrusive and are accelerating the demise of privacy.

We have a bit of a different take. 

Why this isn’t as big a deal as everyone makes it out to be

It’s important to provide a more nuanced take on what’s worth screaming about, what we should have expected, and what is actually fine and reasonable behavior. We should call out companies for their bad data or privacy practices and behavior, but we don’t believe that every use of data or every kind of data collection is cause for concern. 

Let’s take stock of some facts, as reported by the Bloomberg article. 

  • According to a person familiar with the program’s design, the device “pulls a random, small sampling of customer voice recordings and sends the audio files” to the individuals responsible for this QA (we sketch what such a sampling step could look like right after this list).
  • This sampling is considered “extremely small” though the only number provided was that workers listen to about 1,000 recordings per shift (it’s unknown how long each recording is).
  • Employees have access to a user’s first name, device serial number, and account number.
  • The employees and contractors listening to the voice commands are based across multiple locations in the world and have strict directives to ensure privacy on the users’ side.
  • For example, they cannot take any action based on what they hear from the devices (say, some kind of distressing scenario or issue).
  • However, the article does report that the workers share experiences of what they heard in their internal chatroom.
  • The purpose of listening in to the recordings is to improve accuracy and verify certain voice commands and words – this is part of the process for improving Alexa’s algorithm.
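
To make the mechanics concrete, here is a minimal sketch of what a “pull a random, small sampling” step could look like. It is purely illustrative, based only on the description above: the field names, sample rate, and file names are our assumptions, not Amazon’s actual schema or code.

```python
import random

# Hypothetical recording metadata; the fields mirror what the Bloomberg
# article says reviewers can see, but the schema itself is an assumption.
recordings = [
    {"audio_file": f"utterance_{i}.wav", "first_name": "Alice",
     "account_number": f"ACCT-{i:06d}", "device_serial": f"SN-{i:08d}"}
    for i in range(100_000)
]

SAMPLE_RATE = 0.01  # assumed 1%; Amazon only says the sample is "extremely small"

def sample_for_review(items, rate, seed=None):
    """Pull a random subset of recordings to route to human reviewers."""
    rng = random.Random(seed)
    k = max(1, int(len(items) * rate))
    return rng.sample(items, k)

review_batch = sample_for_review(recordings, SAMPLE_RATE, seed=42)
print(f"Routing {len(review_batch)} of {len(recordings)} recordings for human review")
```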

Now, we do have to say that many of these facts and statements come from Amazon themselves, so if there’s a natural distrust of Amazon, we can’t blame you. That being said, whether or not we take Amazon’s word at face value, the most important point we can make with this article is:

We’re not surprised and you shouldn’t be either.

There are two reasons we think the Amazon revelations are less problematic than the broad and rather hysterical reaction makes them out to be. First, new technologies require human input to improve. As a new technology, especially one reliant on AI and machine learning, some human review is necessary to confirm the system is interpreting speech accurately as it develops. Even basic quality assurance would require humans to listen to recordings to help Alexa account for differences in dialects, accents, phrasing, and so on.
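
To make that concrete, a standard way human listening feeds back into a speech system is by comparing what the model transcribed against what a human reviewer says was actually said, typically via word error rate. The sketch below is a generic illustration of that kind of quality check, not Amazon’s actual tooling; the example utterance is made up.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate between a human transcript (reference) and the
    model's transcript (hypothesis), via word-level edit distance."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(1, len(ref))

# A human reviewer's transcript vs. what the assistant thought it heard
# (made-up example utterance).
human_transcript = "play the latest album by the national"
model_transcript = "play the latest album by the nationals"
print(f"WER: {word_error_rate(human_transcript, model_transcript):.2f}")
```

Recordings with high error rates are exactly the ones where an accent, dialect, or unusual phrasing tripped the model up, which is why some amount of human listening is hard to engineer away.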

Second, and equally importantly, anyone using a voice-activated device should probably assume they cannot expect strict data privacy unless they were actually promised strict data privacy. We’d be much more surprised if it turned out Amazon never listened in on our conversations. This is a company that lives and breathes data: remember that the reason the device exists is so it can sell you more Amazon products.

Amazon’s far from innocent – here’s what they should’ve done

While we don’t put the Alexa ListenGate in the privacy violation hall of fame, that doesn’t mean Amazon didn’t act improperly. There are a lot of better choices Amazon could have made, and some specific things they should address now that this program is out in the public eye.

To start, Amazon should have been more transparent about the program. There is still a lot of speculation floating around because we don’t have a clear picture of how the listening program works. 

Here’s what we would like to know:

  • How small is the sampling of conversations collected for this program? The scale of this matters. Are they listening to 1% of Alexa voice commands? 5%? 25%? ‘Small’ is a vague descriptor.
  • Amazon should make it very clear that users’ voice commands could be listened to as part of this program. Right now, users would need to do more digging to understand that Alexa is always improving because workers are listening in to conversations. 
  • Amazon should have made this program opt-in, instead of opt-out by default. In conjunction with the fact that they weren’t clear about this program being in place to begin with, a lot of people were unwittingly giving Amazon permission to listen to their conversations.
  • They should provide more detail on how this program works and how often conversations are sampled.
  • Do Amazon employees and contractors need to know users’ first names and account numbers? Why? This feels like a no-brainer: recordings could be pseudonymized before they ever reach a reviewer, as we sketch below.
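
On that last point, stripping or tokenizing identifiers before a recording ever reaches a reviewer is a standard, fairly cheap safeguard. The sketch below shows one way to do it; the field names and the choice of a keyed hash are our assumptions, not a description of Amazon’s pipeline.

```python
import hashlib
import hmac

# Secret key held by the pipeline, never shared with reviewers (assumption:
# in practice this would live in a secrets manager, hard-coded here only
# for illustration).
PSEUDONYM_KEY = b"rotate-me-regularly"

def pseudonymize(record: dict) -> dict:
    """Return a copy of a recording's metadata with direct identifiers removed
    and the account number replaced by a keyed, irreversible token."""
    token = hmac.new(PSEUDONYM_KEY, record["account_number"].encode(),
                     hashlib.sha256).hexdigest()[:16]
    return {
        "audio_file": record["audio_file"],
        "account_token": token,  # groups recordings by account without exposing it
        # first_name and device_serial are simply dropped
    }

record = {"audio_file": "utterance_123.wav", "first_name": "Alice",
          "account_number": "ACCT-000123", "device_serial": "SN-00000123"}
print(pseudonymize(record))
```

Reviewers could still group recordings from the same account when debugging a recurring problem, without ever seeing a name, serial number, or account number.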

If Amazon changes nothing about this program, that’s bad. Acting on any of the above suggestions is a step in the right direction. There should also be more safeguards in place to ensure that workers can’t easily share private recordings outside of the workplace. Even the fact that they share recordings amongst each other is a bit troubling, given that employees know users’ first names and account details. If this data fell into the wrong hands, it would be pretty easy to identify someone.

Amazon should also be clear and ensure that this data isn’t being misused or sold to third parties or advertisers without our knowledge and consent. As with most data practices and policies, an opt-in approach to data collection, sharing, and selling is the kind of policy we should expect.

Unfortunately, we can’t rely on Amazon to take an opt-in approach or even be fully transparent. They weren’t upfront about this program, so we can’t expect them to be upfront about everything else. We have to support policy, regulations, and government initiatives that push for more privacy-minded and customer-centric behavior and hold companies accountable.

And remember, you’re not powerless in this scenario.

What about the other companies?

So far we’ve only discussed Amazon – what about the other companies?

For the most part, what we’ve said about Amazon applies to nearly all the other companies and you can see that they’ve all reacted differently to these reports. Here’s a quick summary:

  • Amazon increased communication, transparency, and control. 
  • Apple stopped the program and announced that it would review it. Recently, however, Apple restarted the program with Apple employees only and made it opt-in, not something users are subjected to by default.
  • Google was ordered by German authorities to stop the human review process and paused the program worldwide.
  • Microsoft updated its privacy policy to be clearer about the review program in place but has not paused it.
  • Facebook also paused its program after the report that it was listening to Messenger voice audio was released.

We’ll talk about the responsibility we all bear in the next section, but we do want to call out Microsoft and Facebook here. Microsoft and Facebook were listening to recordings that were captured as a result of communication between two or more people as they used a private messaging service.

This is more of a privacy invasion than the case of users talking to voice-activated devices. We believe these companies should have been extremely clear that these private communications were being recorded, saved, and listened to. However, we’re not powerless here.

The Only Way To De-Risk is to De-Own

Keeping a firearm in your home elevates the odds that you will be shot. Keeping a voice-activated listening device in your home elevates the odds that you will be listened to. If you aren’t okay with that risk, get rid of the device. It’s a foolproof solution.

More practically, we all understand that our decisions often have tradeoffs. When we use Google instead of DuckDuckGo, we’re making a tradeoff in the form of giving Google our data. When we share news with friends via Facebook, we’re making a choice to feed them our personal data instead of keeping it ‘private’ via email. The same is true when we choose to use an Amazon Echo, Google Home, or other voice-activated alternative.

It’s important to be frank with ourselves and understand that we have agency here. If you place a high value on privacy, then you probably shouldn’t use an Amazon Echo. Or maybe you’re okay with the fact that Amazon might listen in once in a while, because it’s only to improve their product.

If you’re still uncomfortable but really want to use your Echo, here’s a good step-by-step guide to making sure you’ve opted out of the ‘improvement’ program. And in case you want to delete all the voice data associated with your Alexa device, CNET published a guide in 2018 that shows you how to wipe your voice data.

Are you safe if you don’t keep an Echo at home? Not entirely, as some of your friends likely have these devices. You might come across them in your Airbnb, and even some hotels are using them. As with nearly everything else, we have to stay aware of our surroundings, adjust our assumptions, and get used to a new world in which total privacy is very hard to maintain.
