On Artificial Intelligence and Surveillance Capitalism

Intro: From Super-Elixir to Dangerous Dark Art

Big Tech, which by now always means AI-fueled Big Tech, has made shopping, searching and connecting with friends on your preferred platforms astonishingly easy. That is why the masses, you and me included, have given it our blessing without further ado. When, next to that, we read that AI defeats humans at chess, poker and Go, and very recently also reached grandmaster status in StarCraft II, we humble humans can only bow our heads in admiration of such wonders.

Very recently, Google Health claimed breakthroughs in AI-assisted breast cancer screening: AI discovers more tumors and makes fewer mistakes in those discoveries than humans do. Even more recently, in the corona era: AI discovered an antibiotic to fight previously untreatable infections. It nudges our reverence toward semi-religious worship, especially as we know that similar uplifting messages will be brought to us every week.

Admittedly, from time to time we still have annoying experiences with how AI works. I am sixty and an experienced traveler. I have done the world's main tourist destinations. So now I don't book Barcelona but lesser-known Valencia. I am considering skipping Venice (been there) for Trieste. And exchanging Berlin for Leipzig. Annoyingly enough, the somewhat stupid AI of Booking.com misunderstands my new appetite for second-tier holiday destinations and stubbornly tries to send me back to Barcelona. Of course, this is a minor disillusion in AI. It mainly reinforces my hope and expectation that AI will soon help me better in exploring my new taste in holiday destinations. In short: in spite of tiny disappointments, AI's shining royal road leads inexorably upward. Until recently.

Because, all of a sudden, the tide turned. Within three years the benevolent image of Big Tech, with AI always at its center, gave way to a much more critical opinion of Big Tech (BT):

  • BT has become too omnipotent and too powerful. After all, there are now only two kinds of companies: those that have already gone digital and those that soon will. In her memoir of living in Silicon Valley, ‘Uncanny Valley’, Anna Wiener describes how, at one SV start-up where she worked, you could select any person you were interested in and then scrape together from heaps of big data all kinds of information about that person. They called it ‘going God-mode’. It’s omnipotence in action.
  • BT is too manipulative to be embraced, let alone trusted. In the early internet years the freedom-drenched saying was that on the Net, nobody knows you’re a dog. Now Big Tech, once again with AI as the driving force, even knows your dog’s favorite pet food brand. And when you are running out of stock.
  • BT is too divisive and is attacking the heart of democracy. High-tech autocracies are on the rise. Social platforms are a powerful blessing to them. Social media can be ‘weaponized’, as Amnesty International accuses Vietnam of doing. It all goes against the grain of democracy.
  • With the impending rise of the Internet of Things and the sensor society, worries will intensify. They threaten to end public anonymity. Think of the Chinese social-credit system: ignoring a red traffic light will cost you points. A recent algorithm developed at Stanford University can distinguish between pictures of gay and straight men with an accuracy of over 80%. Not a big deal in Silicon Valley, probably. But Silicon Valley does not stand for all the valleys on this planet where the real people live.

AI technologists, accustomed to basking in the prestige and glory of working for benevolent AI, didn’t see the tsunami of criticism coming. Who can blame them? Aren’t they just devoted specialists focusing on their own square centimeters of AI development, doing their laser-focused jobs? Isn’t that challenging and all-consuming enough? It is. Until it also causes neglect and blindness, mixed with disregard, when it comes to the broader picture: a society increasingly ill at ease with the machinations AI is capable of.

Silicon Valley’s tech geniuses are probably the people on this planet least equipped to foresee and understand the changing tide. (Remember Zuckerberg dismissing the first reports that Facebook had distorted the US elections as “a pretty crazy idea”? That was as recent as the end of 2016.)

Many SV AI technologists are stuck in a post-hippie ‘do good’ attitude, which should be appreciated (I wonder whether Chinese AI technologists operate on similarly high moral ground). But these attitudes have proven pretty naive, to put it mildly. Combine this ‘do good’ naivety with the astounding international successes of the industry, plus more than a pinch of machismo, and general naivety turns into closed-minded narcissism: ‘I am on top of the world, so my vision and views must be the best’.

Being on top in Silicon Valley easily obstructs the view of what’s happening in the real valleys of life, where real people live, with real desires and growing worries about who is watching them with every click they make and every step they take.

On top of the world, unimaginative short-sightedness easily sets in when it comes to understanding what society in general considers wholesome and benevolent. (In the nineties my then Silicon Valley hero, Nicholas Negroponte, declared that children growing up in that decade of the rising internet “were not going to know what nationalism is”. The ultimate SV visionary got it utterly wrong.)

Maybe we should cherish those SV AI developers who perceive human nature as good, themselves in the first place. I am sure, though, that Myanmar’s Rohingya people, victims of some of the biggest massacres of recent years, will put a critical question mark behind all that goodness.

In 2011 Myanmar’s internet usage was below 1%. Half a decade later the nation had one of the fastest-growing internet populations in the world. Then hell broke loose for the Rohingya. It would be biased to claim that the malicious incitements on Facebook’s platform to kill Rohingya caused the massacres. People kill people. But technologies facilitate. Their developers should embrace responsibility for what they produce, the more so as their technologies grow ubiquitous and omnipotent. If not, the gods of Silicon Valley et cetera should abandon any claim to superiority, moral or otherwise. In fact, they run the danger of losing their souls.

Anyone not blinded by the naive sense of superiority in force at Silicon Valley and related places could have seen, earlier and more easily than these idealist puppies, how their wondrous social platforms help autocrats stay in the saddle, endanger democratic free speech and facilitate massacres.

Is super-rich and powerful Big Tech not going to make the world a better place after all? Is it instead just making itself ever richer and more powerful? Here are the keywords around which the new critical discourse is spinning with regard to AI’s dangerous blind spots and society’s discontents.

How AI Creates Surveillance Capitalism 

Ever clicked on an interesting dinner plate, designer shawl or holiday destination? Sure you did. Then, equally surely, you have experienced how the pics you perused, hovered over or, even more entrapping, clicked on follow you across all the other online channels you visit. Yes, this can be labeled helpful and facilitating.

Nevertheless, due to its intrusive, pervasive perseverance, people increasingly perceive it as manipulative and creepy as well. May I give one example, out of hundreds, from my own country, the Netherlands?

When you buy an airplane ticket you provide the online data brokers with your full name, address, date of birth, credit card data, preferred seating arrangements and so on. Even such an innocent detail as your seat preference can be used against you. In the Netherlands, indignation revolved around the rumor that some airlines deliberately assign family members to dispersed seats in order to make them pay extra on the spot to be seated together.

The point is less whether this is actual practice or a rumor. The point is that Big Tech’s image has become creepy and manipulative enough for lots of people to believe the rumor, and as a consequence to feel used by, and alienated from, how AI works.

A much broader example: in February 2020 the Official Monetary and Financial Institutions Forum revealed that, according to a poll it conducted in 13 advanced economies, trust in big tech companies to issue digital money was “strikingly low”.

On the long stretch from the price manipulation of airplane tickets, through general distrust of Big Tech’s digital money initiatives, to the rise of the Internet of Things (IoT), disturbing examples abound of how AI works.

Take tech wearables. Benevolently used at fitness centers to monitor your performance, they are now crossing over to workplaces: factory floors first, then all the places where we work. Wearable tech will be able to monitor every employee’s moves, mapping each employee’s actual performance on whatever dimension is considered relevant. It will do so more meticulously and intrusively than ever thought possible before.

US company Percolata, for instance, claims that its app successfully combines footfall sensors in stores with data on sales per employee, allowing managers to rank the productivity of each of them on all the dimensions AI considers relevant. “Marketing optimization using machine learning” is part of the company’s sales pitch. Percolata convincingly demonstrates how its AI-led capacity can assess who the high performers and the low performers are. It can also calculate which high performers should work extra shifts during peak hours and which employees had better be discharged.
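To make the mechanism concrete, here is a minimal sketch of how such a footfall-versus-sales ranking could work. Percolata’s actual system is proprietary and far more elaborate; the field names, numbers and the simple sales-per-visitor metric below are all invented for illustration.

```python
# Hypothetical sketch: rank store employees by sales captured per visitor.
# All names and numbers are invented; this is not Percolata's real system.

def rank_employees(shifts):
    """shifts: list of dicts with 'employee', 'sales', 'footfall' per shift."""
    totals = {}
    for s in shifts:
        sales, foot = totals.get(s["employee"], (0.0, 0))
        totals[s["employee"]] = (sales + s["sales"], foot + s["footfall"])
    # Score = revenue per visitor walking through the door during the shift.
    scores = {e: sales / foot for e, (sales, foot) in totals.items() if foot}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

shifts = [
    {"employee": "ana", "sales": 900.0, "footfall": 300},
    {"employee": "ben", "sales": 450.0, "footfall": 300},
    {"employee": "ana", "sales": 600.0, "footfall": 200},
]
ranking = rank_employees(shifts)
print(ranking)  # ana first: 1500/500 = 3.0 per visitor, ben: 1.5
```

Even this toy version shows the worker’s-eye problem: once the score exists, scheduling extra shifts for the top of the list and discharging the bottom is a one-line decision.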

For sure, corporate capitalism can engineer its coming profit leaps from here, with AI as its beastly motor. From a worker’s perspective, though, it will appear as capitalism on steroids: super-charged exploitation stripping workers of their humanness and dignity, turning them into tightly controlled robotic machines.

Rob Hart is the author of ‘The Warehouse’, a novel undoubtedly inspired by Amazon’s warehouse practices.

In the warehouse all workers have to wear watches that are monitored 24/7/365 by the company, which is named Cloud. The results determine their work-performance score. Here are some of the penalty dimensions: 1. Arriving at work late more than twice. 2. Not meeting quotas as set by your manager. 3. Neglecting personal health care. 4. Going over your allotment of sick days. 5. Losing or breaking your watch. Additional credits can be earned as well: 1. Meeting your monthly quotas three months in a row. 2. Using no sick days for six months or more. 3. Receiving a health checkup every six months. 4. Receiving a teeth cleaning once a year.
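The novel’s fictional score can be sketched as a simple tally of penalties and credits. The event names, the starting score and the one-point weights below are my own invention; Hart’s novel does not specify how Cloud weighs the dimensions.

```python
# Sketch of the novel's fictional work-performance score as a tally.
# Event names, starting score and unit weights are invented for illustration.

PENALTIES = {"late_3rd_time", "quota_missed", "health_neglect",
             "sick_days_exceeded", "watch_lost_or_broken"}
CREDITS = {"quota_streak_3_months", "no_sick_days_6_months",
           "health_checkup", "teeth_cleaning"}

def performance_score(events, start=5):
    """Subtract a point per penalty event, add a point per credit event."""
    score = start
    for e in events:
        if e in PENALTIES:
            score -= 1
        elif e in CREDITS:
            score += 1
    return score

print(performance_score(["quota_missed", "teeth_cleaning", "health_checkup"]))  # 6
```

The chilling part is not the arithmetic but the input stream: a watch that never stops feeding events into it.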

As said, this comes from a novel. But it nevertheless shows how AI can transform our working practices, more intimately and meticulously than ever before. Of course, you can label it all quite positively, as the novel’s Jeff Bezos of the warehouse does: “Here you can be master of your own destiny”. Lots of warehouse employees, though, beg to differ.

This will rebound on the collective image of AI: from admired for its super capacities to hated and feared for its power to robotize humans and disrupt the social fabric of work and society. AI technologists who are blind to this bigger picture will feed the image of AI as the ultimate manipulator, with the power to turn the rest of us into panoptical prisoners of the digital age. They run the risk of being hated in the near future the way bankers were during the financial crisis a decade ago. Up to the point where politicians see no other option than giving in to the grudges of electorates who resent living in the Age of Surveillance Capitalism.

‘The Age of Surveillance Capitalism’ as a characterization stems from Shoshana Zuboff’s thorough book of the same title. Zuboff demonstrates that AI-fueled Big Tech “accumulates vast domains of new knowledge from us, not for us”. She also demonstrates that AI-fueled Big Tech, with its huge capacity to predict our futures and to nudge and manipulate us in directions it decides on, does so “for the sake of others’ gain, not ours”. This is the heart of Big Tech’s business model: the industrial-scale monetizing of ‘our’ personal data, reaped from us without ever seriously asking permission.

Zuboff draws even deeper conclusions. The Industrial Revolution that started over two centuries ago exploited the planet’s nature. It has brought us to the brink of worldwide ecological collapse. Now, under surveillance capitalism, the ultimate exploitation of human nature is at stake. That is the heart of the new AI revolution, with its immense capacity to monitor and manipulate. It explains the book’s subtitle: ‘The Fight for a Human Future at the New Frontier of Power’.

It is difficult to imagine this impending exploitation of human nature in all its shapes and sizes, let alone to describe it. We can be sure, though, that nowhere will society consider AI technologists the best equipped to concoct these descriptions.

Can you expect anyone to pull their own heart out of their body?

AI technologists might be naive, even downright blind, when it comes to understanding AI’s place in society, but they cannot fail to sense the changing tide. They must adapt. And they do, to some extent at least. Facebook now has 35,000 employees worldwide working in safety and security teams to track hate speech, check facts and remove fake accounts. Sorting hate speech out from all the nuances of human speech is a devilishly difficult task. It necessitates the construction of algorithms fed with enormous amounts of data. Sometimes those amounts simply don’t exist, in Myanmar’s Burmese language for instance. Recently Facebook also equipped London police with body cameras during anti-terrorist trainings, in order to generate heaps of relevant footage that can help recognize terrorist threats anywhere. And Facebook-owned Instagram is experimenting with its ‘likes’ strategy, hiding like counts in order to assuage the ‘like’ addiction suffered by many social media fanatics who consider acquiring huge amounts of likes their main goal in life.

Sympathetic and helpful as these new measures are, they are regularly dismissed as cosmetic and insubstantial. In spite of their differences, all AI-fueled Big Tech business models focus on two driving desires. A: reaping each person’s data, repackaging it and selling it to advertisers. And, aligned with this, B: doing anything to keep people on their platforms and/or using their online services as long as possible.

Corporates will do anything to keep their core business undiminished and safe at all times. That’s capitalism in action. (Chinese ‘socialism’ doesn’t behave differently.) Hate speech, fake news, alternative facts and conspiracy theories keep many people on Big Tech’s platforms longer. Big Tech’s underlying algorithms are designed to keep us maximally glued to them. This is the heart of Big Tech’s business model. And what company on our capitalist planet is capable of ripping its own heart to pieces? Certainly not Big Tech. As a consequence, the efforts to soothe society’s discontent will be intrinsically half-hearted.
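The logic of those glue-us-to-the-platform algorithms can be illustrated with a toy feed ranker. The engagement weights below are invented, but the structure captures the critique: if outrage reliably draws shares and comments, a ranker optimizing for engagement will surface it, with no notion of truth anywhere in the code.

```python
# Toy sketch of engagement-ranked feed ordering. The weights are invented;
# real platform rankers are vastly more complex, but optimize similarly.

def engagement(post):
    # Each interaction keeps a user on the platform a little longer;
    # comments (heated arguments included) are weighted heaviest here.
    return post["clicks"] + 2 * post["shares"] + 3 * post["comments"]

def rank_feed(posts):
    """Order posts by predicted engagement, highest first."""
    return sorted(posts, key=engagement, reverse=True)

posts = [
    {"id": "cat_photo", "clicks": 50, "shares": 5, "comments": 2},
    {"id": "outrage_bait", "clicks": 40, "shares": 20, "comments": 30},
]
print([p["id"] for p in rank_feed(posts)])  # outrage_bait first (170 vs 66)
```

Nothing in this objective rewards accuracy or civility, which is exactly why half-hearted moderation layered on top of it changes so little.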

Masking a greedy business heartbeat with 35,000 employees striving for the good might look impressive, but for a Big Tech giant like Facebook it is very doable. Seriously and fundamentally breaking its own heart is something else, let alone doing so of its own free will. It would be a very unnatural thing for any company to do, let alone for the most powerful industry on our planet. They will be more inclined to resort to massive lobbying to defend their business models. Fortunately, in our democracies people can speak out their hearts and worries, can’t they? But are we (still) living in democracies? Or do we live in lobby-cracies?

DEMOCRACY DESTROYERS

Many sense that the 2020 US presidential elections will be meaner and more divisive than anything we’ve seen before. Put people with different opinions together at a pub table and during the discussion they often abandon radical positions. Being able to look each other in the eye helps to find common ground and meet in the middle. The reverse often happens when people discuss things on online forums. High degrees of anonymity and the lack of full-sensory human contact generally radicalize opinions. Vicious verbal exchanges are nothing new, but social media platforms amplify and intensify them. They create ‘natural’ habitats where extremism and hate speech can blossom and go viral. The AI-fueled algorithms that keep you on the platforms exacerbate these dynamics. It is less about finding truth together. It is about ‘right or wrong: my tribe, my party, my people’.

This explains why the recent impeachment proceedings against President Trump have run a different course than the impeachment of President Nixon in 1974. Since then our society has become much more atomized, characterized by footloose, free-floating hyper-individualism. Consequently we crave a sense of belonging more than ever before, especially now that the religious frames for collective sense-making and belonging have weakened, even evaporated. This results in an intensified, even fanatical resort to a ‘right or wrong: my tribe, my party, my people’ attitude, and the inversely proportional rejection of the other, the apostate. Their truths must be fake. There is no better place to create, spread and intensify these beliefs than on social media.

During the Nixon impeachment proceedings of the seventies, many Republicans, confronted with the Nixon tapes, were convinced by the misdemeanors they revealed. You can’t expect the same to happen now, in our super-divisive age of social media, fake news, alternative facts and, consequently, the demise of a common truth. ‘Right or wrong: my tribe, my party, my people’ has taken over as the decisive principle. Bye bye, common truth. And trust.

This means democracy is on a slippery slope. And in its wake: the rise of populism. It is too simple to blame Big Tech and its underlying algorithms for both. People’s turn to populism is mainly driven by their sense of having lost agency over their own lives in a globalized world. But Big Tech’s algorithms play a substantial role nevertheless, introducing into society more division and confusion than could be imagined before their rise to ubiquitous power. Filleting society’s common stock of knowledge, and threatening to destroy its common truths, up to the point where high-tech autocratic regimes win ground in many regions of the world, is a serious indictment.

The situation deteriorates even more as AI confronts us with its ‘black box’ predicament: the inability not only of laymen but also of AI technologists themselves to fully understand, let alone explain, how AI moves and reaches its conclusions. That might be okay when AI manages to discover illnesses and create medicines more effectively than humans can on their own. But when it comes to governing societies it goes against the grain of how democracies work: with an agreed-upon, believable degree of transparency. AI’s black boxes don’t deliver here. “I showed a black box some data and it learned from it and then it did what it did. I don’t know why” isn’t very convincing reasoning to the general public. It induces unease and discontent in democratic societies, and probably in all societies.

Big Tech, Facebook and Google in the first place, Apple and Microsoft more quietly after them, has the power to separate us like grains of sand on a stormy beach. This happens economically when different people see different offerings and price tags online. It happens socially when they receive different parcels of political messages: customized to their traditional preferences for sure, but without a clue anymore what information other citizens receive. People in democratic societies, and probably all people, increasingly dislike this state-of-the-art AI manufacturing. It erodes society’s common stock of knowledge, and with it society’s solidity and the liveliness of its fabric. Google’s Eric Schmidt once bragged that his company can predict and determine each individual’s behavior. People don’t like that. It violates their sense of agency. For now, awareness and public grudges mainly simmer under the surface. But will ‘taking back control’ become the recycled rallying cry to take on Big Tech somewhere soon?

Democracy works best with a decent amount of transparency, offered by a free press and a public judicial system, and when people have a sense of agency. AI-fueled Big Tech threatens both. As the engine of surveillance capitalism it is perceived as the ultimate manipulator in all realms of life, including the political one. There its algorithms help destroy democracy, not because Big Tech loves soft dictatorships but because its algorithms serve its business models first.

The Netflix original documentary ‘The Great Hack’ gives a thorough overview. One of its directors asks himself whether it is still possible to have fair and free elections without reforming Big Tech’s workings. ‘The Great Hack’ itself quotes Cambridge Analytica proclaiming it holds 5,000 data points on each American. The discussion should not revolve around whether this is a braggy sales pitch or the disturbing truth. It should revolve around how many of those 5,000 data points you have handed over consciously and voluntarily. Yes, you probably permitted data brokers to use your data, because refusing would bar your entrance to the website of your desire. Surely you did not read the many pages of the relevant privacy policies. It is the passive consent of the semi-conscious, including, I immediately confess, myself. How long will society consider these mechanics fair?

AI-fueled Big Tech has lost its innocence and its ravishing imagination in only a few years. It is now increasingly perceived as the new industry that mines our private worlds for financial and political influence. It has the potential to destroy democracy from within. On top of that, it has enormous power to lobby our governments.

In her book ‘Don’t Be Evil: The Case Against Big Tech’, Rana Foroohar demonstrates how fiercely Big Tech protects its own interests, even when that hurts the fabric of society and the people living in it. “They have defended their right to pay taxes in overseas low-taxation jurisdictions, rather than in the markets where they make their money.” “They are lobbying hard not to be responsible for the content and the actions of their users on their all-powerful platforms.”

As a recent extension of this, it turns out that as part of the new trade deal with the US, Canada and Mexico had to agree never to define and treat the Big Tech platforms as publishers, which would make them responsible for the content they deliver. This is excellent lobbying, and in the interest of neither the people nor democracies. Big Tech is now, after Big Pharma, the biggest lobbying party in Washington. And at company level Alphabet is the biggest of all. Big Tech is not only destroying democracy from the inside. It is also replacing it with a lobby-cracy.

INNOVATION SMOTHERERS

Foroohar has another complaint about Big Tech: it has lobbied fiercely and successfully to make it harder for start-up competitors to secure patents. Under Obama the American patent system was reshaped to favor the interests of Big Tech over those of smaller companies and start-ups. Big Tech now has more room to infringe the patents of small up-and-coming players, while those players have less recourse. This threatens fair entrepreneurship, as it distorts the level playing field between Goliaths and Davids. Even more so because Big Tech by far has the deepest pockets, able to pay indecent numbers of lawyers to prolong every court case until its designated start-up victim is fully exhausted.

The Big Tech Goliaths have revolutionized our ways of life. But you can’t expect them to have all the best ideas themselves forever. The road to success is paved with the gravestones of once-great companies that lost their innovative cool, whereupon others took the lead. Think of a young Microsoft taking on incumbent IBM. Think of a young Apple taking on incumbent Microsoft. Think of a young Instagram taking on Facebook. (Now owned by Facebook!) Contemporary Big Tech’s attempts to entrench itself, through laws constructed with its lobbying power, against attacks from new generations of innovators are bad for healthy business competition, as they hold back new waves of exciting innovation. Those innovations often arise at the fringes of the business landscape, less so within the incumbents. The fringes are the more natural habitat of the fresh dreamers and tinkerers who will bring us the next generation of Cool Tech.

Our collective imagination often situates these dreamers and tinkerers not in corporate environments but, quite nostalgically, in romantic garages. These days not many of those garages survive. Big Tech buys them up prematurely as a precaution. This is being unmasked as Big Tech’s ‘buy-and-kill’ tactic. In spite of all the lobbying, the US Federal Trade Commission is investigating it right now.

Protecting your own business models is normal. But preventing the dreamers and tinkerers at the fringes from creating the next generation of disruptive inventions means polluting a fair business climate and hampering prosperity. Big Tech should not be allowed to create ‘kill zones’ around its powerful perks in order to shut out fair competition from new start-ups.

Increasingly, business takes place in the clouds. Those clouds are owned and dominated by Big Tech. This enables them to build deep and detailed profiles of us all, based on the data we leave them for free, often unaware of the consequences. ‘Taking back control’ is a popular slogan in politics. Will the same rallying cry pop up in the realm of social platforms and clouds? Is it imaginable that Davids everywhere unite in demanding our data back, in demanding transparency, in demanding local, cloud-based services independent of Big Tech? The ideas generally receive passive sympathy. But as long as the Davids are not united, as laws hamper them and people prefer to stay uneducated, Big Tech is capable of nipping many initiatives in the bud.

One of the best books about Big Tech as innovation smotherer comes from Thomas Philippon, proudly wearing its title: ‘The Great Reversal: How America Gave Up on Free Markets’. Philippon documents the decrease in the number of US start-ups and the corresponding increase in Big Tech’s power to suffocate fresh innovations by promising start-ups.

Meanwhile, it is interesting that the EU is stirring and starting to consider building its own local clouds for specific goals, away from the monopolistic sway of America’s Big Tech. Angela Merkel’s plea for Europe to develop its own platforms to manage data and reduce its reliance on services from America’s Google, Amazon and Microsoft points in the same direction: digital sovereignty, both for Europe as a whole and, ideally, for all of its inhabitants. Europe is not a Big Tech Goliath. But neither does it resemble a tiny David. The continent leads in high value-added industries like pharma, automotive and sustainability. That gives leverage. This week the EU aims to loosen Big Tech’s grip by forcing them to share data. Last week Zuckerberg came over to the EU to discuss what the criticisms are. One was that companies like Facebook must do more to help our democracies. Zuckerberg at least listens to the arguments. In this fresh decade we will discover whether that is enough.


Notes:

  • ‘Make. Think. Imagine. Engineering the Future of Civilization.’ By John Browne. Top read!
  • ‘Uncanny Valley. A Memoir.’ By Anna Wiener.
  • ‘The Hidden History of Burma: Race, Capitalism, and the Crisis of Democracy in the 21st Century.’ By Thant Myint-U. Side read.
  • ‘The Warehouse.’ By Rob Hart. Cool novel.
  • ‘The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.’ By Shoshana Zuboff. Top read!
  • ‘The Great Reversal: How America Gave Up on Free Markets.’ By Thomas Philippon. Good read.
  • ‘Don’t Be Evil: The Case Against Big Tech.’ By Rana Foroohar. Top read, pretty detailed.
Prof. Dr. Carl Rohde writes for DDI on the New Tech Forces and their cultural-sociological impact and meaning for contemporary and future culture and society. Over the last ten years Rohde has held professorial chairs in ‘Future Forecasting & Innovation’ in Shanghai, Barcelona and the Netherlands. Rohde also leads scienceofthetime.com, a worldwide virtual network of trend spotters and market researchers.
