
‘Science Finds, Industry Applies, Man Conforms.’ But should it?

(Credit Image: Armin Weigel/DPA/PA Images)


December 11, 2017   5 mins

While our politics seems more divided than we can remember, when it comes to technology there doesn’t seem much scope for individuality. iPhones and Androids are becoming indistinguishable, just like the way we cradle them, study them, let them inform us, entertain us, and intrude into social and business situations. It’s quite an irony: the vast power these little devices have put in our hands has led us to unmatched levels of conformity.

At the back of that, there’s an insidious, creeping uniformity in how we think – both about technology and about what it means to us. The startlingly candid motto of the 1933 World’s Fair in Chicago, one of the 20th century’s great expositions of human achievement, sums it up. ‘Science Finds, Industry Applies, Man Conforms.’1 Is this really what’s going on? First the lab makes the discovery; then the corporation comes up with the product; and then ‘man’ – all of us – do our duty and ‘conform’ by buying the product. If this seems like an exaggeration, just remember that there are now two billion people using Facebook at least once a month. That’s a number more than six times the population of the United States; 30 times that of the UK. It’s close to half the adults on the planet outside China (where it’s banned).2


The vision set out in the World’s Fair motto is, frankly, chilling. And the tendency of the digital revolution to encourage conformity, not least through the growth of vast corporations with a narrow product focus and enormous – sometimes monopolistic – profitability, both captures the World’s Fair vision and actually makes it creepier. Because the two billion who are ‘conforming’ to the Facebook product would never dream of thinking of themselves as naïve conformists. They are making personal choices, and they have chosen the product because they like it. Just as they do when they check their mobile phones during family dinners, or business meetings, or first dates.


It’s worth noting, though, that since they don’t pay cash for it, few Facebook users think of it as a product at all. The largely invisible barter system by which they trade data for service makes everyone’s thinking a little foggy.

Machines ‘r’ us?

Behind our consumer engagement with digital technology another debate is in progress. It’s many years since inventor and futurist Ray Kurzweil came up with his model of the ‘singularity’ – that future point when machines become smarter than we are and rapidly ascend an exponential curve of smartness that will leave us humans in the dust. Kurzweil has proposed various dates for this extraordinary event, though 2045 is the most recent.3

The Chicago World’s Fair 1933-34 celebrated a century of technological innovation. Credit: Weimer Pursell & Neely Printing Co. via Wikimedia Commons

More important than the prediction itself, though, is the fundamental idea behind it: that machine intelligence is superior to human intelligence, and that humans should be looking forward to living digital lives. Years back the term ‘wetware’ was coined to describe brain matter – by analogy with software and hardware. Of course, as we know, the human brain is not only smart, it is extraordinarily complicated. But machines have beaten us at chess, and at the even more complex game Go. Should we accept the superiority of the machine and aspire to a digital future? Or, to rephrase that: while the machine is plainly better at some things than we are (that’s why we have machines, from the adze and the plough onwards!), does it follow that we should all aspire to be like machines?

When we are looking at our digital devices, it may be helpful to think adze. Obviously, machines are better at doing what they do than a human would be. But can a machine be better than we are at what we do as a whole? Can they be better humans than we are?


That is a very different question, and it’s one that people taking Kurzweil’s view can answer only by defining what we humans do – defining it down – in machine terms. For example, of course machines can have sex with humans (and indeed with each other, which is maybe an even weirder idea), but can they in any sense make love? In logical terms this approach is called a petitio principii – an argument that assumes its conclusion from the start. If we abstract the things that humans do in machine-like terms, it should scarcely surprise us that machines can do what we do. Whether they can be entranced by the beauty of a sunset, enjoy a children’s birthday party, or read Keats and weep is a different question altogether. And while, per Moore’s Law, ever more circuits certainly keep being crammed onto chips – and quantum computing suggests that even when the cramming becomes impossible, the ramping up of machine intelligence won’t stop – not everyone agrees that replicating the complexity of the human brain amounts to producing people.4

Hans Holbein’s famous portrait of Erasmus, the 16th-century Renaissance humanist after whom this column is named. Credit: Hans Holbein & The National Gallery, London via Wikimedia Commons

The Question of Us

As technology continues to explode around us there’s no more significant question to address than this one: Can we develop and manage our technologies in a manner that will enable and enhance our capacity to live human lives? Behind that, can we think about them in a manner that will keep the human vision – of a human community sustained and empowered by the latest adzes and ploughs – at the forefront? Or is it inevitable that our human story will ultimately be superseded by the story of machines that are ever smarter and faster?

It’s plain that there’s no value in a return to ‘Luddism’. But while progress may be inevitable, what we make of it – in our personal lives and our families and our businesses and our communities and on our planet – is up to us. Does the future offer us privacy – and do we care? Will these vast new technologies drive wealth for all or exacerbate inequity and division? Do we all need to train in STEM or do we suspect that the future will hold a new golden age for the humanities (the stuff that machines will find it hardest to do)? In other words, are we up for a long campaign for a human future?



  1.  ‘The World’s Fair’: Ron Grossman, May 26, 2013
  2.  Report from Anita Balakrishnan, June 27, 2017
  3.  Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking Books, 2005), which offers a substantial manifesto for his thesis that Moore’s Law will lead to a machine takeover. For a critique, see Microsoft co-founder Paul Allen in MIT Technology Review; a summary of criticisms appears in the tech online newspaper Pando Daily.
  4.  On Moore’s Law see here. Quantum computing: Abigail Beall, March 23, 2017, summarizes the latest in Wired Magazine.

Nigel Cameron writes about technology, society, and the future. In 2007 he founded the Washington think tank The Center for Policy on Emerging Technologies. His most recent book is Will Robots Take Your Job?

