I’m learning a language, and I speak it in public with other people who do. I don’t research the language online, because I have some old textbooks on it. My partner doesn’t speak it and doesn’t research it on their devices. I don’t normally have my phone on me in public, but my partner does. It took about 4 months of publicly speaking the language before they got ads for it.
What do you think this means?
::edit::
It was a Reddit ad, and my city has embraced those AI smart cameras, so I assume some of them are Google-owned, which makes sense given Google’s recent alliance with Reddit. This assumes our devices aren’t listening to us without our permission and that the AI cameras are mining data on passersby.
Other theories are that, since cellphones are involved, it doesn’t matter whether my partner or I ever searched for the language: at some point one of our phones was near someone who spoke it, and the data brokers/ad sellers inferred from there.
Seems like the consensus is that I must have posted in the language on some social media, used Google to research it, or made some new friends who speak it, and that’s why.
Yeah, that is my problem with the “always listening” theory. I’m sure they’re capable of it, but I don’t think they’re doing it, because they can get more data at a fraction of the cost through more “traditional” tracking.
In a way, it is scarier than listening - because listening is far easier to understand than the multitude of ways the data is collected and combined.
Exactly. They definitely could, but there’d also be potential legal issues, and it’d just be much more expensive to analyze sound data.
If it’s done on each device, then battery life would suck and performance would decline. Sure, they could do that, but I imagine most phone manufacturers would rather sell more phones and make money from app companies (Meta, Google) who pay to have their apps pre-installed on the phone. Or Samsung and Apple, who have their own ecosystems for mining data like Google does.
If they were instead uploading audio to central servers (which could mitigate the legal issues by “anonymizing” the data), then they’d be paying for the computational power to analyze all that audio.
Again, completely possible, and likely in use with things like Alexa and Google Home. But on our phones (and laptops for that matter), they have so many other cheaper ways to get probably the same quality of information.
There is near-ultrasonic audio that can be used to transmit data, and it seems the microphone is always listening for that, so full audio isn’t that far off. But yeah, I reckon you’re right: it’s too risky when they have plenty of alternative ways to get the same data.
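For what it’s worth, the basic idea behind those near-ultrasonic beacons is simple: encode bits as short tones just above the range most adults can hear, and have an app’s microphone listen for them. Here’s a minimal sketch of that kind of frequency-shift keying; the exact frequencies, symbol length, and sample rate are my assumptions for illustration, not what any real ad-beacon SDK uses.

```python
import numpy as np

# Assumed parameters (hypothetical, for illustration only):
# bit 0 -> 18.5 kHz tone, bit 1 -> 19.5 kHz tone; both are near-ultrasonic,
# i.e. inaudible to most adults but within a phone speaker/mic's range.
FS = 48_000          # sample rate in Hz
SYMBOL_SEC = 0.1     # duration of each bit's tone
FREQS = {0: 18_500, 1: 19_500}

def encode(bits):
    """Turn a bit sequence into a concatenated series of sine-tone symbols."""
    t = np.arange(int(FS * SYMBOL_SEC)) / FS
    return np.concatenate([np.sin(2 * np.pi * FREQS[b] * t) for b in bits])

def decode(signal):
    """Recover bits by finding the dominant frequency in each symbol window."""
    n = int(FS * SYMBOL_SEC)
    bits = []
    for i in range(0, len(signal), n):
        chunk = signal[i:i + n]
        spectrum = np.abs(np.fft.rfft(chunk))
        peak_hz = np.argmax(spectrum) * FS / len(chunk)
        # Pick whichever symbol frequency is closest to the spectral peak.
        bits.append(min(FREQS, key=lambda b: abs(FREQS[b] - peak_hz)))
    return bits

payload = [1, 0, 1, 1, 0]
assert decode(encode(payload)) == payload
```

The point is that detecting a known tone like this is vastly cheaper than speech recognition: a short FFT per window, no language model, no audio upload. That’s consistent with the thread’s argument that always-on *beacon* listening is plausible while always-on *speech* listening is expensive.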
It is not about “too risky”. It is about “costs much more in processing power while providing a fraction of the info”.