The Dystopian World of Facial Recognition Software and Cookie Tracking
Is Anonymity a Human Right?
Do people have the right to privacy as we go about our daily lives? Can we, for instance, walk down the street safe in the knowledge that the strangers we pass will have no way of knowing who we are, where we live, how much we paid for our homes, our relationship status, our children’s names, and where we dined out last night?
Or let’s say we’re weighing the benefits of a home equity loan or a more fuel-efficient vehicle. Should countless marketing teams catch wind of it so they can push us in a direction that benefits them and their business goals, but might not be in our own best interest? Do we not have the right to consider important decisions without, literally, thousands of people knowing about it and trying to control our behavior?
What happens when we lose our privacy? The US National Library of Medicine National Institutes of Health notes that “Scholars consider privacy a central ingredient in the positive human functioning.” Specifically, privacy serves two main functions in promoting our wellbeing. First, it’s essential for maintaining homeostasis, or equilibrium, when we need it, such as when we’re considering our options or are in the throes of decision-making. Second, it supports the self-conscious processes that are vital for personal growth.
In other words, humans have an innate need for privacy, and without it, we are hard-pressed to be fully functional human beings. When we are robbed of our much-needed anonymity we make poorer decisions and our stress levels rise (but don’t expect data marketers to care).
Our Dystopian World
Over the past decade and a half, we have entered a dystopian world, though few of us may realize the extent to which we are observed and controlled. Some countries are worse off than others. For instance, US citizens suffer from an appalling lack of privacy.
Take, for instance, the case of Clearview AI, a company no one had even heard of until January 2020, when the New York Times broke a shocking story about its app. According to the Times, Clearview, “devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared.”
Any Clearview user can snap a photo of a person going about his or her business, upload it to the app, and instantly get links to his or her Facebook page, Twitter feed, and so on. To wit: billionaire grocer John Catsimatidis used Clearview to identify who his daughter was dining with when they both happened to be at the same restaurant.
Clearview can provide this level of detailed personal information to its users because it has scraped 3 billion photos from Facebook, YouTube, Venmo, and many other sites. Not one person in Clearview’s database agreed to have his or her personal information captured or sold. Consumers don’t even know to whom their data is sold or how those buyers intend to use it.
Cookie tracking is just as outrageous and has led to some spectacularly awful situations. Take Sean Lane, just a regular guy who wanted to surprise his girlfriend with an engagement ring he purchased from Overstock.com. But Facebook’s Beacon ruined the surprise when, without his knowledge or permission, it posted in his feed: “Sean Lane bought 14k White Gold 1/5 ct Diamond Eternity Flower Ring from overstock.com.” He learned of this breach of privacy when his friends began congratulating the happy couple, even though he had yet to pop the question to his girlfriend.
About a year ago, Harvard Business School professor Shoshana Zuboff published an important book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. This book lays bare just how profoundly our private interactions have become the raw materials for countless companies to turn a profit.
Zuboff explains that surveillance capitalism originated with Google and its goal of optimizing search results. Google tracked which terms users searched for, which ads they clicked, and which sites they visited so that it could continuously improve its algorithms. As Zuboff explains, Google’s intent was to improve the user experience, and users were well served by its actions.
But then things changed. All along, Google had captured significantly more behavioral data than it needed to serve up more accurate search results — a dataset Zuboff refers to as “behavioral surplus.” Rather than delete that surplus, Google stored it just in case it would be useful one day. It didn’t take long for Google to realize just how useful that data was in predicting human behavior. Suddenly the company possessed what marketers wanted most of all: the ability to identify which users are most likely to click on an ad. That’s when Google got serious about data mining. In the process, it turned violating privacy into a phenomenally profitable market.
Zuboff explains that the dynamics of the relationship changed to the detriment of the world’s citizenry. Google didn’t ask users’ permission to collect and sell their behavioral data, and users’ interests weren’t particularly well served. In fact, Zuboff maintains that quite the opposite occurred: Google violated consumer privacy so that other companies could control our behavior in order to turn a profit.
That’s why today, users who want to consider whether it’s a smart move to, say, take out a home equity loan will be bombarded with ads, many of them deploying dark patterns, such as creating a false sense of urgency (“Interest rates at historic lows, apply now!”) and prompting them to make decisions that might not be right for them.
We at Digiseg believe that Zuboff is absolutely correct in decrying these privacy-busting tactics. Online behavior is a proxy for our inner thoughts, and tracking our searches and site visits violates our privacy in truly dystopian ways.
/ Søren H. Dinesen, CEO Digiseg – Shd@digiseg.io