The convenience of the smart home may be worth the price; that’s for each of us to decide. But to do so with open eyes, one has to understand what the price is. After all, you don’t pay a monthly fee for Alexa, or Google Home. The cost, then, is a subtle one: a slight psychological adjustment in which we are tipped a bit further into passivity and dependence.
The Sleep Number Bed is typical of smart home devices, as Harvard Business School professor Shoshana Zuboff describes in The Age of Surveillance Capitalism. It comes with an app, of course, which you’ll need to install to get the full benefits. Benefits for whom? Well, to know that you would need to spend some time with the 16-page privacy policy that comes with the bed. There you’ll read about third-party sharing, analytics partners, targeted advertising, and much else. Meanwhile, the User Agreement specifies that the company can share or exploit your personal information even “after you deactivate or cancel … your Sleep Number account.” You are unilaterally informed that the firm does not honor “Do Not Track” notifications. By the way, the bed also transmits the audio signals in your bedroom. (I am not making this up.)
The business rationale for the smart home is to bring the intimate patterns of life into the fold of the surveillance economy, which has a one-way mirror quality. Increasingly, every aspect of our lives — our voices, our facial expressions, our political affiliations and intellectual predilections — is laid bare as data to be collected by companies which, for their part, guard with military-grade secrecy the algorithms by which they use this information to determine the world that is presented to us, for example when we enter a search term, or in our news feeds. They are also in a position to determine our standing in the reputational economy. The credit rating agencies and insurance companies would like to know us more intimately; I suppose Alexa can help with that.
Allow me to offer a point of reference that comes from outside the tech debates, but can be brought to bear on them. Conservative legal scholars have long criticized a shift of power from Congress to the administrative state, which seeks to bypass legislation and rule by executive fiat, through administrative rulings. The appeal of this move is that it saves one the effort of persuading others, that is, the inconvenience of democratic politics.
All of the arguments that conservatives make about the administrative state apply as well to this new thing, call it algorithmic governance, that operates through artificial intelligence developed in the private sector. It too is a form of power that is not required to give an account of itself, and is therefore insulated from democratic pressures.
In machine learning, an array of variables is fed into deeply layered “neural nets” that simulate the binary, fire/don’t-fire synaptic connections of an animal brain. Vast amounts of data are used in a massively iterated (and, in some versions, unsupervised) training regimen. Because the strength of connections between logical nodes is highly plastic, just like neural pathways, the machine gets trained by trial and error and is able to arrive at something resembling knowledge of the world. The logic by which an AI reaches its conclusions is impossible to reconstruct, even for those who built the underlying algorithms. We need to consider the significance of this in the light of our political traditions.
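For the technically curious, what follows is a minimal sketch of such a training regimen, in plain Python with numpy. It is illustrative only (a toy network learning the XOR function, with sizes and learning rate chosen arbitrarily), not a description of any vendor’s system. The point to notice is that the trained connection strengths are just arrays of numbers; no human-readable rule can be extracted from them.

```python
# A toy "neural net" trained by massively iterated trial and error.
# Illustrative sketch only: two inputs, one hidden layer, one output.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    # A smooth stand-in for the fire/don't-fire behavior of a synapse.
    return 1.0 / (1.0 + np.exp(-z))

# "Plastic" connection strengths, randomly initialised.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 0.5
for step in range(20000):                 # the iterated training regimen
    h = sigmoid(X @ W1 + b1)              # hidden-layer activations
    out = sigmoid(h @ W2 + b2)            # the network's current answer
    err = out - y                         # trial-and-error signal
    # Nudge every connection strength slightly against its error gradient.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

# Typically prints ~[0, 1, 1, 0]: correct answers, yet W1 and W2 remain
# opaque arrays from which no one can reconstruct a "logic".
print(out.round(2).ravel())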
Join the discussion
Excellent article. I wish democratic governments well in resisting the growth in power of big tech. Historically this has succeeded by following the money: identify how big tech makes its money and, where that rests on a public good like vast databases on individuals, tax it, regulate it, nationalise it, split it. For a start, compel publication of algorithms to a regulatory authority.
The EU has proposed a new law governing the use of AI, which I believe is the first in the world:
EUR-Lex – 52021PC0206 – EN – EUR-Lex (europa.eu)
And?? So what should we do?
I think it is time for Constitutional Amendment Number 28!
One which says that, as this is a nation Of, For, and By humans, no digitally created laws may be binding.
The article above says that AI, social media and tech are overlapping with government, and THUS we need laws to ensure that any law, in the broadest sense, MUST be created and enforced by duly appointed or elected human officials. By digital laws I mean algorithms which generate and manage any content or metadata that is private by law or custom, and which control whatever influences human behavior and choices, manipulates citizens, or otherwise causes them to behave in ways they would not have without these artificial algorithms. Such algorithms should be limited.
Humans controlling Humans should be the law.
Great to find you back, Mr Artzen. I usually disagree with you, but not on this occasion!
The combination of “algorithmic governance” and “surveillance capitalism” seems truly terrifying. But I’m not sure if “rivalry” between government and big tech is the real problem we are dealing with. I would argue that the deeper, more troubling question is what class of politician will be in bed with what class of technocrat. In a recent episode of The Economist Asks, I heard Amy Klobuchar make a big speech about breaking up these mega-companies, but when asked very directly what she thought about Twitter banning Trump from its platform, she unequivocally endorsed the move. So it’s all about what’s convenient and who gets to benefit. And I really wonder if citizens will have any choice but to play along (privacy policies be damned) …
Yes, indeed. To put it in Biblical terms, it all depends on whose ox is being gored, with no thought of a general prohibition on the goring of oxen.
Smart phone providers, together with social media companies, have the majority of the NOT-so-smart people under their spell. Addiction is now complete.
I have no idea where this is heading, but at the ground level it’s irritating and sometimes dangerous, while on the level of freedom it’s disturbing.
Would the repeal of Section 230, making platforms into publishers, make any difference to this situation? I know Trump was keen on this, but maybe political expediency is in action here, as Big Tech definitely seems blue in its leanings. And of course Facebook has just had the antitrust case against it dismissed, the case which was attempting to rein in its reach.
Big Tech is not a rival to the US government. It is a partner, which showed its worth when hiding the exposure of Biden corruption and pedophilia.
Love Crawford’s no-nonsense appraisals of modern life. His background in mechanics and a hands-on mentality shines through. I read Foucault on the surveillance society some years back, and though Foucault was a nutter and quite unlikable, his thoughts on the subject have been pretty spot on. The Swamp keeps evolving and shifting. There is a fightback from independent thinkers underway right now, with Bret Weinstein at the forefront. David and Goliath all over again.
Let’s hope it IS a fight between David and Goliath, considering who won the last time.
In asking why these giants collect your data, we find it has value to advertisers who want to target ads, and to other organizations who can now sell a telephone number for a personal contact. We can fight back to a degree by not purchasing any product we see in web adverts. The appeal to advertisers is better targeting of consumers, and the web-advert money proves its effectiveness. Sadly, given a finite ability to advertise, that has led to a decline in the printed adverts which once sustained local newspapers.
Perhaps the hazard of the data lies in the ability to influence behavior by tailoring messages targeted to specific groups. This manipulation of emotion likely has huge political consequences in terms of greater fragmentation of society. Propaganda has always been effective, particularly on a less informed public. The answer is a well-informed public, not likely anytime soon.
I wonder what George Washington would have thought of this. He didn’t agree with political parties, with good reason. This is what he said in his farewell address, and it is even more valid today:
“The alternate domination of one faction over another, sharpened by the spirit of revenge, natural to party dissension, which in different ages and countries has perpetrated the most horrid enormities, is itself a frightful despotism. But this leads at length to a more formal and permanent despotism. The disorders and miseries which result gradually incline the minds of men to seek security and repose in the absolute power of an individual; and sooner or later the chief of some prevailing faction, more able or more fortunate than his competitors, turns this disposition to the purposes of his own elevation, on the ruins of public liberty.”
I noticed Apple was missing from the list of bad tech companies. Was this an oversight, or do they get a pass?
“…The logic by which an AI reaches its conclusions is impossible to reconstruct even for those who built the underlying algorithms…”
An aside from the politics of the discussion:
Neural networks have a peculiar parallel to human expert decision making and knowledge, which is little discussed. Certainly no non-techies I have discussed this with have ever fully understood the point; they will look at you blankly if you raise it. However, techies who think about the nature of knowledge, about algorithmic vs human decision making, and about the question of whether all human processing is ultimately algorithmic, will know instantly what I am talking about. Like neural nets, top-end human expertise equally cannot tell you exactly what it does to reach the conclusions it does. An illustration: there are a small number of traders who will make money in all circumstances and in all kinds of markets. Yet when their professed methods are turned into algorithms, the algorithms significantly underperform the trader. This happens even if the trader instigated the process of turning the methodologies into trading-system algorithms. Investigating the point at which the trader and the algorithm diverge always yields an extra layer of “rules” (‘oh yeah, if a is happening and b or c is looking like the case, then in that circumstance you should look at x and do y and z instead’), and there is *always* another layer; it’s like pulling on a string that keeps coming.
This says several interesting things. One of them is that human experts seemingly not only cannot fully articulate the deeper levels of their expertise; they are in effect confabulating stories about what they do, even to themselves.
Neural nets appear to be a kind of multi-dimensional function approximator, bypassing mass-scale number crunching and instead arriving at sophisticated heuristic answers. In this at least they appear to be doing something similar to what human expertise does.
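To make that concrete, here is a minimal sketch in plain Python with numpy (my own illustration; the network size, learning rate and target function are arbitrary choices, not anyone’s production system). A generic one-hidden-layer net, told nothing about sine waves, recovers sin(x) from samples alone:

```python
# A generic net used as a function approximator: it fits sin(x)
# without ever being given the formula. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(1)

# The target is presented only as sampled points, never symbolically.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of tanh units: the classic universal-approximation setup.
W1 = rng.normal(scale=1.0, size=(1, 30)); b1 = np.zeros(30)
W2 = rng.normal(scale=0.1, size=(30, 1)); b2 = np.zeros(1)

lr = 0.01
for step in range(50000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Plain gradient descent on mean squared error.
    d_pred = 2 * err / len(x)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_pred;  b2 -= lr * d_pred.sum(axis=0)
    W1 -= lr * x.T @ d_h;     b1 -= lr * d_h.sum(axis=0)

# Worst-case error over the samples: typically small (well under 0.1).
print(float(np.abs(pred - y).max()))
```

The fitted weights encode the curve, but nowhere in them is a formula anyone could read off; that is the confabulation problem in miniature.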
Algorithmic technologies are not human-scale, and lawmakers without intimate knowledge of tech haven’t got a hope of drafting law quickly enough in reaction. By the time they react the landscape has altered and the tech has moved on, so they are mechanically set to remain behind the curve.
The problem stems from the unspoken human-scale assumptions that have underpinned all human governance prior to the rise of ubiquitous computation. These assumptions are being undermined, ever faster, by algorithmic technologies which don’t face the same biological and physical limits as humans and human geography: speed of processing, unlimited scaling potential (both up and down), perfect replication, high-speed transmission, unbreakable encryption. Traditional legislative mechanisms (short of coercion) cannot hope to compete, because algorithmic engineering solutions to get round, or even subvert, any framework of governance legislators come up with can always be created.
The first step towards any successful model of governance of the digital world is to recognise and acknowledge this. Thereafter, you are faced with the much bigger truth, unpalatable as it may be at this moment of humanity: technology will not alter to accommodate human societies; human societies must decide either to alter themselves to fit around technological advance, or to junk modernity and revert to antiquity. It’s a binary; there isn’t really a middle ground, no matter how much you may wish for it.
I suspect it is this inevitable fork in the road that inspired Frank Herbert to imagine the adoption of a feudal system reflective of historical structures, and that establishes the rationale for the rejection of “thinking machines” made in the image of the human mind. In the “Dune” universe, humanity chooses to expand the potential of the human mind to fill the void left by its refusal to use machine intelligence. Failing to foster, or suppressing, the co-evolution of human thinking alongside its machine equivalent may lead to a similar inflection point in humanity’s future.