Thursday, March 28, 2024

Facebook Wants to Read Your Brain

As Shoshana Zuboff has warned in her book “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power,” the great tech giants have built a system people have become dependent on for their social relationships. But this “social” system has been built to serve a fundamentally antisocial purpose. The customer is the product. And, thanks to advances in technology, the product can increasingly be remodeled to maximize its commercial value.

Sam di Bella, writing for the LSE Review of Books, sums up Zuboff’s notion of “surveillance capitalism.” She sees it as a specific form of capitalism with its own set of rules. It aims not at designing a product or service that people are free to buy, but at profiting “from the capture, rendering and analysis of behavioural data through ‘instrumentarian’ methods that are designed to cultivate ‘radical indifference […] a form of observation without witness.’” If traditional capitalism aimed at creating customer loyalty, surveillance capitalism aims at customer capture. While agreeing with her thesis, di Bella critiques Zuboff for failing to imagine and promote viable solutions. He believes that the problem may be deeper and more substantial than even its best-informed critics maintain.


An article by Puja Das on the website Analytics Insight offers evidence that the problem has already deepened to a level that may put the prospect of solving it out of reach. Facebook has already gleaned so much data about each one of us that it presumably knows and understands more about us than we could ever realize on our own. Little does it matter that our online activities may be as varied as sincere expression of interest, random curiosity, inebriated joviality or sarcasm. The algorithms that track us have been designed to cut to the essential. It’s all about determining which products we might be willing to buy.

The social platform is now moving into another dimension, probing even further into our souls. Here is what the Analytics Insight article tells us: “After disclosing the idea at its 2017 developer conference, it seems like Facebook is closer to making its brain-reading computer a reality with the help of artificial intelligence (AI).”

Today’s Daily Devil’s Dictionary definition:

Contextual Note

If anything, the idea of a “brain-reading computer” evokes a notional comparison with things like a “man-eating shark” or a “flesh-eating fungus” — in other words, not terribly reassuring. Even “plastic-devouring bacteria,” which offers an increasingly polluted world some ecological hope, doesn’t sound all that appetizing. Brain-reading may also be the first step toward brainwashing.

Puja Das tells us that “the company now claims to be working on a technology that can pick up thoughts directly from neurons and translate them into words.” That sounds like an admirably modest ambition. After all, sticks and stones will break our bones, but words will never hurt us. But is that all Facebook will seek to do?

Stated in those terms, there appears to be a problem with Facebook’s conception of thought. The premise Facebook’s scientists and AI experts are putting forward is either dangerously simplistic or (more likely) deceptively reassuring. Rather than pointing to the kind of mind control and behavioral programming the CIA once sought to achieve in the notorious MK-Ultra project, when its scientists experimented on unwitting American citizens with LSD, hypnosis and a number of other drugs and behavioral techniques, Facebook wants us to believe that its innocent business is limited to simple pragmatic operations intended to make life easier, especially for the manually challenged. As Facebook describes it, brain-reading is about little more than replacing the mouse and keyboard gestures needed to navigate Facebook and related software. In its modesty, the company is looking to “help users holding a virtual object, typing, and controlling a character in a video game.” Far be it from Facebook to seek to transform the results of this charitable effort at thought processing into commodifiable and marketable data.

Under mounting pressure from their critics, especially in government, Facebook and other tech giants like Google, Twitter and Apple have been doing their damnedest to demonstrate their ability to combat the evil forces in society that have sought refuge on their social platforms. That has led them to claim the regal power of deciding what is evil. AI will play a big role there, since no sane person would trust, say, Mark Zuckerberg’s judgment alone about what is or isn’t evil. Das tells us that “Facebook is even using artificial intelligence for several matters, from curbing the spread of misinformation on its social media platforms to removing hate speech, along with scanning political content.”

So we are left wondering: Is it AI or some committee of Facebook sages that is responsible, as Dan Tynan reports in The Guardian, for “the removal of 800 pages and accounts” suspected of promoting “coordinated inauthentic behavior”? At the same moment, YouTube has been demonetizing channels whose opinions or beliefs appear too distant from what it deems mainstream. Wielding their monopolies, the tech giants have taken it upon themselves to decide which thoughts and ideas are “real” and whose discourse is “inauthentic.” As ACLU attorney Vera Eidelman, quoted by The Guardian, observes, this could “put everything from important political parody to genuine but outlandish views on the chopping block. It could also chill individuals who only feel safe speaking out anonymously or pseudonymously.”

Facebook, Google, Microsoft and the other suppliers of our common social environment will tell us that it’s all about using the best technology to make society a better, safer place. “The social media behemoth is pegging AI as the panacea for all the problems that it has faced and will likely deal with shortly,” Das writes. Mike Schroepfer, Facebook’s chief technology officer, promises: “We are paving the way for breakthrough new experiences that without hyperbole will improve billions’ lives.” Who could doubt that once Facebook learns to read our thoughts, it will be for the sole purpose of improving our lives?

Historical Note

In an article she penned for the Financial Times, Zuboff claims that the methods of surveillance capitalism “were invented at Google, traveled to Facebook, engulfed Silicon Valley and have since spread through every economic sector.” In other words, this is a deep historical trend the implications of which go far beyond simple commercial strategies or marketing innovations. Historical trends always have the potential to become a permanent force in the social landscape, with a significant impact on how societies and their cultures are structured. The implications extend to every domain of social and economic interaction.

Much has been made recently of China’s social credit system, which employs a wide range of technology to track its citizens’ behaviors and mold them to achieve the Chinese ideal of “harmony.” In 2018, Business Insider called it “creepy” and quoted the Chinese government’s justification: “keeping trust is glorious and breaking trust is disgraceful.” But the Chinese do not hide what they are doing or try to sell it as something that will “improve” billions of lives. The Chinese government has transparently announced what it is doing. Though it is clearly bad news for anyone who shares Western individualistic values, Chinese culture actually invites this kind of social control in the name of harmony and respect for authority.

The problem we are faced with in the West is more complex. Individualism dictates that none of us wants to feel spied upon or have our thoughts controlled by others, especially by governments. But the economic system people now spontaneously adhere to has established another rule that the population largely accepts: Like a dog waiting for a treat, we will sit up and beg for a service, especially an online service offered to us for free. And we will accept the consequences, especially if we are kept in the dark about what those consequences might be.
