Last week, something really strange happened. I was talking to my friend, laptop in my bag, about how I wanted to buy fairy lights for my room. The very next day, a pop-up ad for fairy lights from Amazon appeared on my Facebook feed. I was creeped out by this spooky coincidence: Facebook says its ads are based on general interests, yet this one seemed to reflect my offline conversation. Apparently, this has also happened to many people at Woodstock and around the world. So, does this mean that our smartphones and laptops are listening to our conversations?
Maybe they are. As technology advances, new smartphones, laptops, and smart TVs are released onto the market; their high-tech features and artificial intelligence (AI) assistants give us a surreal experience of the digital world around us. And one of the features we love about these smart devices is that we can have conversations with the AI assistants, such as Siri and Google Assistant. We trigger these talks by saying “Hey Siri” or “Okay Google.” But even when we haven’t triggered them, these AI assistants can still process what we say.
According to Dr. Peter Henways, a senior cybersecurity consultant at Asterix, our smartphones themselves don’t use this “non-triggered data,” but the data can be accessed by third-party applications such as Facebook and Google. Although Facebook denies using such data, users have seen far too many coincidences of personalized pop-up ads reflecting their offline conversations to believe that the ads are based only on people’s profiles, ages, locations, and interests.
In fact, to find out the truth, I talked about buying a dress in front of my laptop to see whether this “non-triggered data” would be used by Facebook. I said phrases like “I need some cheap dresses” and “I want to buy a new dress.” Later, when I logged in to Facebook to monitor any changes, I was surprised to see pop-ups recommending cheap dresses from Shein and other stores; these had never appeared on my feed before, yet they were exactly what I had been looking for. It was utterly terrifying and eye-opening. I became absolutely afraid of my devices. Because they could hear me … And so can yours. And what’s even worse is that they can also send conversations you thought were private … to someone else.
Amazed by the technology, Danielle installed Echo Dots, Amazon’s Alexa devices, in every room of her house, believing Amazon’s claim that they do not invade privacy. But when a private conversation with her husband was sent to a random contact without her permission, Danielle became convinced that her smart home devices were always listening to her, “invading [her] privacy.” But are the Echo Dots really at fault here? Or is it we who give our smart devices permission to listen in?
Before creating an account on social media or downloading apps onto our devices, there is a dreadfully long (really long) agreement that we have to accept. If you’re not sure what I’m talking about, it’s because you, like most people around the world, have never bothered to read the privacy policy and have simply clicked “I agree,” with no thought of consequences, only convenience. That means your conversations can be used to improve the service. Google’s policy, for instance, mentions that it is allowed to use “third-party apps and sites, like ads,” and to “share personal information outside of Google” for legal reasons. By accepting, you give your device consent to use your microphone. Hence, the AI can hear you.
Samsung’s smart TV users are upset when they discover, in their device’s history, private conversations that the AI has recorded without their knowledge. But what are they upset about? Samsung has warned us that these smart TVs can capture and transmit our conversations at all times: our “spoken words will be among the data captured and transmitted to a third party through your use of Voice Recognition.” Samsung has made this clear in its privacy policy. So has Apple. And Google. Facebook. Amazon Alexa.
So, yes, the AI in our devices is listening to us and using our private conversations as data. But. It only ever does so because we allow it to. So, technically, we are at fault. In some “extremely rare” incidents, though, it is the AI’s fault. In Danielle’s case, for instance, the Echo Dot had misinterpreted words in her conversation as “Alexa, send a message.”
But still, in most cases, we allow these AIs to hear us. We can, however, make our digital experience safer and keep the AIs from listening. We can start by deleting stored audio recordings and turning off microphone access and automatic AI-assistant options on our smart devices: anything that is capable of listening and recording without our permission.
And next time, stop turning a blind eye to the privacy policies. Of course, it’s up to you whether to follow this advice. But if you don’t, you will not be able to stop. them. from hearing. you.