Humans Need Not Apply
William Myers, Curator & Researcher
Is Everything Actually Awesome?

Can we prosper in a world in which robots and artificial intelligence can do every job we have today? Alarming studies about vanishing employment appear weekly, like that by researchers at Oxford University estimating that nearly half of all jobs are vulnerable to automation over the next twenty years1. News media continually report on these predictions, often presenting a dark vision of the future in which jobs are scarce, hordes of people are idle and destitute, and wealth is concentrated in few hands. According to other future forecasters, we are hurtling towards a leisure-filled utopia in which we can all pursue artistic endeavors, collect a universal income, and observe elaborate ceremonies around socialising, dining and dress, akin to the aristocratic life depicted in Downton Abbey. Which vision is more likely to materialise?
The answer probably lies in the past. Imagine you could travel back in time to visit Dublin in 1858, the breakthrough year in which an undersea telegraph cable first connected both sides of the Atlantic. Communication across an ocean was suddenly possible, while electrification was delivering indoor lighting and mechanical conveniences. Cities swelled with migrants as back-breaking farm work was giving way to tractors, and higher wages in the cities’ factories beckoned. Imagine your conversation with an assembly line worker of this time, trying to explain to her what sort of jobs people have in 2017: cybersecurity expert, banner ad marketing manager, flight attendant, or video game designer. She’d be incredulous. After all, to her: How could there be so many jobs involving such inessential activity?
From a 19th century perspective, necessity has long been satisfied in the rich world. A strong majority of those reading this will have no memory of going a day in want of food, or a week without looking at a screen. For the factory worker in 1858 Dublin, worries were far more fundamental: children had just a two in three chance of reaching five years of age, while tuberculosis, cholera, fires, and hazardous factory work plagued adults. No antibiotics were on hand, so a common infection, from a splinter in the toe for example, could be fatal. In contrast, we now fear heart disease, Alzheimer’s, and cancer — afflictions that might be equally difficult to explain to a person whose life expectancy was only forty years.
Apart from marveling at our public health achievements, the young Dubliner of 1858 might conclude that our working lives were mere entertainment, pastimes invented to keep us busy, comfortable, and safe. She might quip, “It seems many of you live in a poetry-based economy in 2017!” Would she be right? Is the production and consumption many of us perform any more meaningful or concrete than a few amusing lines of verse? Perhaps we have unwittingly moved into a post-capitalist dreamland with our virtual assistants, currencies, and avatars.
Perhaps not. The (evil?) genius of the modern economy is in its capacity to generate infinite wants and then create new work to satisfy them. Part of this stems from our conditioning: beginning in childhood, media and advertising urge us to continually seek consumption upgrades like nicer vacations, cars, gadgets, clothes and dining experiences. If automation suddenly made basic transportation, energy, and food cost nothing, for example, it would certainly put millions out of work. However, it would also lay the groundwork for jobs we have not yet imagined, just as electricity or the internet did, while freeing up wealth previously spent on those basic commodities to be spent on other goods. This, essentially, is how we moved from agricultural to factory to service jobs in the last 150 years as capitalism relentlessly advanced.
This time, it is argued, is different because the rate of change is much faster than in previous technological revolutions that reshaped the global economy and eliminated jobs2. Adaptive neural networks are being integrated into processes, like social media feeds and language translation, as well as legal decisions, medical diagnoses and journalism. They are also becoming part of things, like cars, thermostats and robots. If you’re a taxi driver, loan officer, legal clerk, retail floor worker or radiologist, for example, your job is in peril, since its processes, though complex, are relatively formulaic and repetitive.
On the other hand, if your work is creative, variable, and relies on social connectivity, or what some call ‘emotional labor,’ like that done by a therapist, executive assistant, comedian or member of the clergy, your job is unlikely to be replaced by an artificial intelligence anytime soon.
What of art? Today, machines can already produce paintings, sculptures, music, or even screenplays. As the exhibition HUMANS NEED NOT APPLY demonstrates, this is not so much a threat to art as a technological challenge, a moving of the goalposts for creative expression, and such shifts should be familiar. Many believe that a computer producing content that mimics artistic expression marks a turning point, since art is held up as the apex of human expression and cultural value; but they forget that there is no ‘final frontier’ to art. From the time of the first daguerreotypes in the 19th century, people have decried the end of painting, and yet it is alive and well. New media for art and the motivations behind its creation have proven limitless and ever-changing3.
The same can be said, ultimately, of human desires for products and services. We eventually demand more and different things in the wake of technological changes, which, after a time, have been shown to generate more employment and improved wages4. By 2067, the economy will likely look poetry-based from our current, limited perspective; the important question then becomes how to ease through the transition. Again, the answer probably lies in the past, in understanding the successful rise of phenomena like labor union organisation, mandatory basic education, and corporate taxation.
1. Carl Benedikt Frey and Michael A. Osborne, ‘The Future of Employment: How Susceptible Are Jobs to Computerisation?’ Oxford Martin School, University of Oxford: September 2013.
2. Michael Chui, James Manyika, and Mehdi Miremadi, ‘Where Machines Can Replace Humans — And Where They Can’t (Yet).’ McKinsey Quarterly, July 2016.
3. For more on the evolution of the arts in the context of machine learning, see Blaise Agüera y Arcas, ‘Art in the Age of Machine Intelligence.’ Medium, February 2016.
4. See Robert C. Allen, ‘Engels’ Pause: Technical Change, Capital Accumulation, and Inequality in the British Industrial Revolution’, Explorations in Economic History 46, no. 4 (2009): 418-435.
Profile
William Myers is a curator, writer, and teacher based in Amsterdam. His first book Biodesign (2012), published by MoMA, identifies the emerging practice of designers and architects integrating living processes in their work. His next book Bio Art: Altered Realities (2015), published by Thames & Hudson, profiles art that uses biology in new ways or responds to recent research in the life sciences that disrupts our notions of identity, nature, and the definition of life.
William’s writing and exhibitions have been profiled in the journal Science, The New York Times, The Wall Street Journal, New York Magazine, Smithsonian Magazine, Volkskrant and Folha de São Paulo, among others. William has delivered lectures at Harvard University, the Tate Modern, Universitário Belas Artes de São Paulo, International University of Catalunya, Leiden University, and the Royal College of Art. He has previously worked for MoMA, the Guggenheim Museum, the Smithsonian Cooper-Hewitt National Design Museum, Vitra, TU Delft, and The New Institute in Rotterdam.
william-myers.com @WMyersdesign

Creativity incognito

Pan Fubin, 40, lives and works in what is known as the Oil Painting Village of Dafen in Shenzhen, China. He has a wife, two daughters, an expensive mortgage, and a longing for more free time. He also has become, to his surprise, the first artist in the world to paint a detailed portrait of a person whose every wrinkle and eyelash was developed using artificial intelligence.
Although Pan exhibited an early talent for drawing and a commitment to learning painting, his academic performance was insufficient to gain him entry into art school. At 16, he began working on the family farm. A career in agriculture seemed likely, until a new opportunity presented itself — an apprenticeship at a company producing high-quality copies of famous oil paintings. Over the next few years, his work developed rapidly, and he studied the 19th century French academic painter William-Adolphe Bouguereau.
By 24, Fubin was married and turning out copies of famous paintings for clients in the United Kingdom, Australia, and Hong Kong. Sometimes, these were direct replicas of known works; other commissions were for portraits of living or recently deceased people, done in the style of Bouguereau or others. In time, he learned new techniques and grew fond of other old masters, like Russian-Ukrainian realist painter Ilya Yefimovich Repin, whose work he had seen at an exhibition in Shenzhen. Pan Fubin’s practice developed as the village around him boomed. In the early 1990s, the village had just twenty practicing artists making copies of famous works by Van Gogh, Dalí, or Warhol for export; today, several thousand painters are employed doing such work, as well as an ecosystem of framers, canvas stretchers, paint suppliers, and shippers.
About 9,000 kilometers from Dafen, a Dutch advertising executive named Bas Korsten began a project in Amsterdam in 2013 that would win his agency many accolades and intense media attention, while indirectly producing a commission for Pan Fubin. He masterminded the launch of a collaboration between ING Bank and Microsoft to see if an artificial intelligence could be developed and trained to produce, with the help of 3D printers, a never-before-seen painting that could look convincingly like the work of Rembrandt.
The two-year project ended with results that are stunning and could fool most people, yet the process of its making remains murky. Machine learning experts and even partners who collaborated on the project have expressed skepticism. The slick documentary video about the painting’s development is not supported by any academic publication, or by the sharing of source code or details of the algorithms used to produce or paint the image. An art reproductions researcher who contributed data to the project saw little value in it apart from its power to generate attention. Indeed, this aspect is most impressive; Bas Korsten’s agency measures its success in billions of (free) media impressions for collaborators like ING and Microsoft1.
The documentary, entitled The Next Rembrandt, explains that custom-designed, artificially intelligent systems learned from the known works of Rembrandt in order to devise the most likely way the artist would produce another painting. It suggests a probabilistic modeling, finding averages on which to rest assumptions about subject and format, as well as features like brush strokes and color selection in a new work. As such, the process raises questions about authorship and originality, prompting the viewer to question whether the painting ought to be credited to the genius of a dead painter, a team of engineers and marketers, or a series of computer algorithms. Furthermore, who can claim to own such an image, with all of Rembrandt’s work in the public domain?
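The averaging the documentary describes can be caricatured in a few lines of code. This is a deliberately minimal sketch, not the project’s actual method (which was never published): the corpus, feature names, and values below are all invented for illustration.

```python
from collections import Counter
from statistics import mean

# Invented miniature 'corpus' of known works, each reduced to a few features.
corpus = [
    {"subject": "male portrait", "width_cm": 70, "stroke_mm": 2.1},
    {"subject": "male portrait", "width_cm": 65, "stroke_mm": 2.4},
    {"subject": "self-portrait", "width_cm": 60, "stroke_mm": 2.0},
]

def most_likely_next(corpus):
    """Pick the modal subject and average the numeric features:
    the 'finding averages' step, in miniature."""
    subject = Counter(p["subject"] for p in corpus).most_common(1)[0][0]
    return {
        "subject": subject,
        "width_cm": mean(p["width_cm"] for p in corpus),
        "stroke_mm": mean(p["stroke_mm"] for p in corpus),
    }

print(most_likely_next(corpus))  # the 'most probable' spec for a new work
```

Even this toy version makes the authorship question vivid: the output is a statistical composite, attributable to no single hand.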
This provocative artifact has a place in an exhibition like HUMANS NEED NOT APPLY. As it happened, the painting was not available for loan in the time frame of the exhibition, leading to the idea to produce a human-made reproduction of the supposed machine-made work, a creative double negative only now possible: a fake of a fake. I found Pan Fubin with the help of a curator from London’s Victoria and Albert Museum who recently toured Dafen. Pan accepted the commission and proceeded to create a portrait of a man who never existed, but had been dreamt up by a machine and a staff of AI experts and art-historian consultants, in the style of a painter who died 348 years ago.
In this context, Pan’s work can be seen as a critique of the breathless hype that accompanies discussion of artificial intelligence. It is a work that required many hours of one man toiling alone using ancient technology, drawing on thousands of hours of training and practice. He was surprised this image could be produced by a computer, and — as if scripted — joked that he will be “laid off” if such a trend continues. After more thought, he insisted that the computer “cannot create emotional value” which, in part, arises from the little flaws you see, even in the works of the masters, such as “errors in the structure or perspective.” A machine, he surmised, cannot be perfect and creative simultaneously, echoing the notion put forth by John Ruskin in 1853 in The Stones of Venice that imprecise execution of ornament, often visible in gothic architecture, signaled freedom and dignity in the social conditions of workers:
“You must either make a tool of the creature, or a man of him. You cannot make both. Men were not intended to work with the accuracy of tools, to be precise and perfect in all their actions. If you will have that precision out of them, and make their fingers measure degrees like cog-wheels, and their arms strike curves like compasses, you must unhumanise him… On the other hand, if you will make a man of the working creature… let him begin to imagine, to think… Out come all his roughness, all his dullness, all his incapacity; shame upon shame, failure upon failure, pause after pause: but out comes the whole majesty of him also…”2
Pan Fubin’s portrait of a machine’s dream is not a surrender to technology but a celebration of the need for the human touch to achieve real creativity, and of our ability to reflect on lived experience, something a computer cannot do, as a prerequisite of art. Such a position was argued with nuance and passion by Harold Cohen, a pioneer in AI-assisted painting.3
This portrait is also a work by someone hungry for more commissions in order to dedicate more time to experimental painting and a solo exhibition. He admires artists like Lucian Freud, John Singer Sargent, Anders Zorn, van Gogh, John William Waterhouse, and Chinese artists like Leng Jun, Guo Runwen, and Ai Xuan. Perhaps you’d like a family portrait, or a copy of a famous 19th century masterwork? Pan’s English is quite good, and his email address is dz2006528@163.com. He goes by the working name “Dong Zi.”
That, as they say in the advertising industry, is your call to action.
1. For project description and results, see here
2. John Ruskin, The Stones of Venice, vol. 2 (1853; reprint, New York: E.P. Dutton & Co., 1907): 148–150. See also: Carma Gorman, The Industrial Design Reader. New York: Allworth Press, 2003. See here
3. Radio interview excerpt from Are Computers Creative? by Studio 360, published December 2011. See here
Artificial intelligence and natural stupidity

What do the following data sets about the United States have in common?
- Civilians killed in encounters with police or law enforcement agencies
- Sale prices in the art world (and relationships between artists and gallerists)
- People excluded from public housing because of criminal records
- Trans people killed or injured in instances of hate crimes
- Poverty and employment statistics that include incarcerated people
- Muslim mosques and communities surveilled by the FBI and CIA
- Undocumented immigrants currently incarcerated or illegally underpaid
The answer is: they are all missing1. These data may never have been collected at all, or perhaps they were hidden, misplaced, or destroyed. We don’t know. Given the many topics of discourse these data sets could influence, and the value they might add to efforts to achieve greater social justice, questioning their absence is worthy, even urgent, work. Brooklyn-based artist Mimi Onuoha is doing just that. She recently urged a gathering of engineers and guests at a Google conference on machine learning, with no small amount of bravery, to “identify the intentionality behind” sets of missing data. She argued, mercilessly and convincingly, that relying only on available data is a kind of irresponsible compromise, while being with people often reveals crucial, missing details.

Data, in other words, are never impartial. They exist in a context of the presence or absence of other available data, and in total they speak to our personal and societal glitches: our tendency to look for examples that reinforce our biases, or dysfunctional power dynamics in which collecting information about disenfranchised populations does not serve the interests of those deciding what research to fund. Crime statistics in the United States, for instance, are among the most detailed and widely reported data types. Communities demand evidence that they are being kept safe; yet there are still no national statistics on the number of civilians killed in encounters with police. It would seem some communities have more right to accountability than others.
When it comes to artificial intelligence, engineers necessarily rely heavily on available data. These are the training sets, or reference libraries, that a machine-learning system draws on to become useful. Sometimes these learning systems are then embedded within other systems, potentially amplifying the effects of the incompleteness of the data they ingested, like a rounding error finding exponential expression. In one example, Nikon’s camera software misread images of Asian people as blinking; in another, software used to assess the risk of convicted criminals reoffending was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes2. Gender disparity also appears: computer scientists at Carnegie Mellon recently found that women were less likely than men to be shown ads on Google for highly paid jobs3.
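How skewed training data becomes a blanket rule can be shown in a deliberately minimal sketch. The data below are invented toy records, not drawn from any real system: a naive model trained on historically biased decisions simply learns and reproduces the skew of its training set.

```python
from collections import defaultdict

# Invented toy history of past decisions: members of group "B" were
# flagged twice as often as members of group "A".
history = [("A", 0), ("A", 0), ("A", 1),
           ("B", 1), ("B", 1), ("B", 0)]

def train(records):
    """Learn per-group flag rates from historical decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += flagged
        counts[group][1] += 1
    return {g: f / t for g, (f, t) in counts.items()}

def predict(rates, group, threshold=0.5):
    # The 'model' sees nothing about the individual except group
    # membership, so the historical skew becomes a blanket rule.
    return rates[group] > threshold

rates = train(history)
print(predict(rates, "A"), predict(rates, "B"))  # False True
```

Nothing in the code is malicious; the bias arrives entirely through what the training data does and does not contain.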
The worry is that missing data and its effects are, to borrow a phrase from the tech world, “a feature and not a bug” of the technology; that they are aligned with an intention or agenda. Technology can only reflect the priorities, behaviours, and biases of its creators. It must, therefore, be embraced with caution, and it should give us pause to consider how our social progress consistently lags behind our technological prowess. Similarly, the types of problems that new technologies or services address tend to be geared towards solving the problems of small and influential groups. Consider, for example, how much recent technology appears to be designed with the intent of enabling socialising (if you can call it that) without the potentially uncomfortable experience of eye contact. You might guess that many of our tech visionaries are motivated by severe social anxiety. Another way to look at the narrowness of tech-driven problem solving comes from architecture, a field that has rapidly adopted computer modeling tools, like parametric design. From Christopher Alexander:
“The effort to state a problem in such a way that a computer can be used to solve it will distort your view of the problem. It will allow you to consider only those aspects of the problem which can be encoded — and in many cases these are the most trivial and the least relevant aspects.”4
When it comes to automation, the problem that most artificial intelligence is geared to solve is the high cost of employees. This focus is blind to the human costs or the community impacts of putting people out of work, or of pushing them into insecure, freelance, or part-time arrangements. These are very real costs to which governments must respond. In the past, as agricultural work was replaced by factory and service jobs in the Industrial Revolution, the government built schools and made primary education mandatory while beginning to subsidise higher education. Workers simultaneously built a labor movement and formed unions. But these models of support and power-sharing have proven insufficient in the 21st century.

New, more nimble systems are needed to address the scale and speed of current changes propelled by machine learning. Lifelong education initiatives can be of help, for example, in which people are funded to retool or relocate with new skills every few years, instead of relying on a single university experience; another reform could involve realigning incentives, so that universities receive no tuition up front and are instead paid a percentage of their graduates’ future earnings5. Broad protections for freelance workers are also overdue, in which companies might finally be obliged to contribute to the many costs, such as pensions, health care, and sick time, which those workers now bear alone. The emergent and so-called ‘gig economy’ demands 2.0 versions of unions, regulations, corporate taxes, and education. Whether most people will prosper in this new machine age will largely depend on how effectively we pursue their development.
Finally, artificial intelligence must be recognized for its power to exploit our mental and social vulnerabilities, particularly when used to select content we see on opinion-shaping platforms like Twitter and Facebook. Neural networks are mastering how to zero in on what content is most likely to get you more engaged, which means spending more time online — sharing, liking, posting, clicking more ads. This process is largely blind to the quality of the content, and so it often favors inflammatory posts, which measurably create more engagement but often carry with them negativity, stigma, or blatant falsities6.
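The objective described above can be caricatured in a few lines: rank items purely by predicted engagement, let anger raise the prediction (consistent with the sentiment research cited above), and leave accuracy out of the objective entirely. The posts, scores, and weights below are invented for illustration; real ranking systems are far more complex but share this basic shape.

```python
# Invented toy feed: the ranker's objective is predicted engagement only.
posts = [
    {"title": "Budget report published", "anger": 0.1, "accurate": True},
    {"title": "THEY are lying to you!",  "anger": 0.9, "accurate": False},
    {"title": "Local bridge reopens",    "anger": 0.2, "accurate": True},
]

def engagement_score(post, base=1.0, anger_boost=2.0):
    """Predicted clicks and shares; anger raises the score,
    and accuracy never enters the calculation."""
    return base + anger_boost * post["anger"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])  # the angriest post ranks first
```

The point of the sketch is what is absent: no term in the score rewards truth, so the inaccurate, inflammatory item wins by construction.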
An angry customer, it turns out, keeps coming back for more. Seasoned editors of newspapers, cable news, and radio programs have long known this, but they were always somewhat reined in by journalistic standards, maintaining reputation, or avoiding lawsuits. Algorithms know no such boundaries, and they work at speeds and on scales that exponentially strengthen the impact of, say, a fake news story about Brexit, Hillary Clinton, or climate change; stories that can be seen by millions, in a matter of minutes, with content mutating slightly with every share, to become even more enraging, and so, engaging.
The speed, openness, and reach of the internet, when combined with social media and machine learning, are clearly producing negative impacts along with all the benefits. Just as the automobile granted freedom of movement on breakthrough scales at the turn of the 20th century, it also started to create pollution and cause road deaths. Eventually, we designed seat belts and introduced emissions standards on car engines. So, too, we may need equivalent inventions for the digital world, being cautious not to censor speech, but to prevent the car wrecks and smog we face in the form of the widespread loss of our grip on facts. When we wield artificial intelligence, we ignore our natural stupidity at our peril.
1. List compiled and updated by Mimi Onuoha. See it here
2. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, ‘Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks’, ProPublica, May 2016.
3. Amit Datta, Michael Carl Tschantz, and Anupam Datta, ‘Automated Experiments on Ad Privacy Settings.’ Proceedings on Privacy Enhancing Technologies 2015, no. 1: 92-112.
4. Christopher Alexander, ‘A Much Asked Question about Computers and Design’. Architecture and the Computer, First Boston Architectural Center Conference, 1964.
5. See the article proposing such a system: ‘Graduate Stock’, The Economist, August 2015.
6. See research from Rui Fan, Jichang Zhao, Yan Chen, and Ke Xu, ‘Anger Is More Influential Than Joy: Sentiment Correlation in Weibo’, PLOS ONE 9, no. 10 (2014): e110184.