The voluntary celibacy and sexless love proclaimed by digital natives are spreading to the adult population. Sexual disinterest keeps growing.
Plenty of likes, retweets and thumbs-up, but the sex never arrives. Millennials, digital natives too busy honing their gift for online shopping, have lost interest in what happens off the network, which is their natural habitat. At work, for example, they prefer virtual communication to an in-person meeting. Their sex lives run along the same lines. “They chase the instant gratification the internet allows them,” says the novelist and Generation X figurehead Bret Easton Ellis, after accusing them of being faint-hearted hedonists. They do not even want soppy captions on the photos they upload to Instagram, and they loathe any gesture that might get them mistaken for saps.
But perhaps Easton Ellis has not realized that, although precise studies are lacking, there are plenty of signs warning that the sexual bubble is starting to deflate across every age group. “Those of us who see patients in medical sexology notice a very significant drop in how often stable couples have sex, something unthinkable a couple of decades ago. It is not unusual for a married couple to report, quite matter-of-factly, that they have one or two sexual encounters a month,” says Froilán Sánchez, a physician and coordinator of the sexology working group of SEMERGEN (the Spanish Society of Primary Care Physicians). Pathological? No. But without sexual frequency, or at least without sexual approaches and caresses, it is hard to overcome increasingly common dysfunctions such as ejaculatory or erectile disorders and problems with female arousal. “It is like trying to climb a mountain without having trained,” Sánchez adds.
In the first episode of Easy, the new Netflix series, the protagonists argue about exactly this. “There would be more sex if women didn’t have to deal with the laundry,” says one of them. Another replies that what is killing the female sexual appetite is women’s drive to earn money. And the third party in the dispute scolds them both for blaming women for the lack of sex. With the city of Chicago as its setting and sex, technology and love as its subject matter, the fiction reflects the reality sexologists are warning about: the libido is springing leaks.
Young couples have sex far less often than the previous generation did at their age, and this is starting to become a cause for concern
The paradox, according to Sánchez, is that there is now a greater desire to explore: “Social networks encourage new relationships and new ways of communicating sexually. That is where virtual sex, partner swapping and voyeurism come in.” But, as the psychologist Francisca Molero points out, relationships are one thing and sexual activity is another. “Practices have probably increased. We have more individual autonomy, more hedonism, more compensatory stimuli and more distractions.”
It is precisely this saturation that is causing sex to disappear as something everyday. “Turning sex into something exceptional and perfect means that exceptional moment rarely arrives. We promote a model of relationship that demands a great deal of dedication and effort: controlling ejaculation, keeping an erection, staying attentive to your partner and achieving maximum pleasure. All of that can be tiring, and even becomes a chore.”
Autoeroticism, the most convenient option
Faced with demanding sex, masturbation wins out. “It is quicker, less tiring and satisfying as well,” Molero adds. In her view, what matters is not frequency but that sexuality is not experienced with anguish. “There are couples who, even though they have intercourse less often, incorporate other sexual practices: kisses, caresses and touching.”
But when researchers ask people why the sex is missing, the reasons sound universal: lack of time, too much work, stress or routine. “They are less causes than excuses to justify their lack of sexual appetite,” Sánchez points out. He, like other health professionals, identifies falling testosterone as one of the keys to the bursting of the sexual bubble in both men and women, a decline he attributes to the rise in environmental and synthetic estrogens. “That is what is really undermining the urge for sexual release, something that had never happened before.”
The lack of sex, a matter of state
The decline in testosterone function hurts both female and male desire but also, the doctor notes, “causes erectile disorders in men and arousal disorders in women. And when they appear, they discourage any attempt at a sexual approach for fear of failure.”
Such is the concern that in Sweden the minister of health, Gabriel Wikström, has commissioned a study to uncover what lies behind the absence of sexual desire among his citizens. If the report, due to be completed in 2019, were to conclude that stress or other health problems are eroding the country’s sex life, the minister has announced that it would be treated as a political issue of the first order. As Sánchez recalls, a lack of sexual activity takes a toll on our health and causes insomnia, nervousness, a tendency toward low mood and depression, and wear on social relationships. It also has a social consequence: falling fertility and a low birth rate, something that will demand solutions in the short term.
The Swedish government has decided to investigate which factors are eroding its citizens’ sex lives. It fears this may be a health problem
In Japan, disinterest in sex already threatens to create demographic imbalances. Half of its inhabitants admit without embarrassment that their sexual activity is nil. When the authorities dig into what drives this apathy, marathon working days and the weariness the very idea of sex provokes come to the surface. Voluntary celibacy among the youngest has become an increasingly visible phenomenon. They are inactive both sexually and romantically.
A tax on the good-looking
Faced with such a collapse, solutions sprout up as outlandish as that of the economist Takuro Morinaga, who, with a degree of sarcasm, proposes a tax on handsome men. “The real root of the decline in intimate relationships in recent years is the inability of unattractive men to have successful dates with women,” he wrote in a column in the newspaper Asahi. In his view, a tax on the most attractive men would correct that imbalance and make it easier for the less favored to find love.
Beyond devising one-off fixes, research is more focused on finding the root of the problem. Most studies agree in sketching a profile of individuals who are highly hedonistic and at the same time excessively preoccupied with climbing the professional ladder; a study published in Harvard Business Review describes them as proudly “martyring themselves to work.” That, according to its conclusions, makes them heavily dependent on praise and on the opinions of others.
When Japan’s health ministry looked into the causes of this apathy, it quickly found that behind the more innocent explanations, such as a lack of privacy or time, what lies underneath is the virtual phenomenon. The internet and all the virtual resources within anyone’s reach favor instant enjoyment, with no commitment and plenty of variety. These forms of consumption and leisure serve as an umbrella sheltering a growing group in Japan: shy young men who neglect their appearance and have very little skill at seduction.
In China, for instance, young women are adopting an individualist lifestyle that leads them to sidestep their government’s pressure to marry. They resist marriage and dare to defend singlehood in a country that stigmatizes women who approach thirty without a husband. The country watches this female chastity with bewilderment, all the more so given estimates that by 2020 at least ten million Chinese men will be unable to marry because of the demographic imbalance created by the one-child policy. There will then be thousands of small towns like Laoya, in the east of the country, populated by “bare branches,” as unmarried men are known there. It is hardly surprising that only 0.5% of Chinese men report enjoying satisfying sexual experiences.
The last bastions
Any snapshot of sexuality these days yields similar results, regardless of each country’s idiosyncrasies. A study carried out at Wageningen University (the Netherlands) concludes that sexual withdrawal can be detected in the selfie habit. “The obsession with self-portraits is nothing more than a cry for help in the face of sexual scarcity,” says its author, Christyntjes Van Gallagher. Eighty-three percent of those surveyed admitted that they lack an active sex life and that they gauge their well-being by the image of themselves they construct on social networks.
Surveys by OK Cupid reveal more curious data. First, that cybersex is a valid option for more than half the population. And while in 2005 some 69% of participants said they would sleep with someone on a first date, by 2015 that figure had fallen to 50%. Meanwhile, the share of young people who would consider a friendship based on sex, with no romantic, loving or committed intent, rose from 50% to 61%. These results do not settle whether, as the French sociologist Eva Illouz of the Hebrew University of Jerusalem believes, sex precedes love, or whether it is the other way around, as in the premodern ideal of love, when sex was considered the culmination of that feeling.
What is clear is that, given the new ways of having sex (polyamory, partner swapping, free sex, virtual orgasms), the sexual slump is hard to square. Francisca Molero tries to make sense of these seemingly contradictory behaviors with one reflection: “It is predictable. We live in a highly controlling society, possibly one of the least free at the personal level, and sex is still an instrument of social control. That is why we name everything, label it and give it visibility.” Her point is not that we have failed to make progress on sexual respect and rights, but that society is like a black hole: “It swallows everything too fast.”
If we had to name one culture at the erotic vanguard, it would be gay culture. In a reading Illouz offered of the Fifty Shades of Grey phenomenon, the sociologist said: “Overcoming the prohibitions and norms that regulate relationships, the multiplication and brevity of sexual encounters, and the individual’s reaffirmation through erotic pleasure are social forms invented or perfected by homosexuals.” If that is so, this community at least remains a bastion of lust.
It is not the only one. Brazil showed itself to be the land of abandon. The Rio 2016 Olympic Games were the most sexual of all time: the more than 450,000 condoms handed out to keep the steamy nights safe nearly overwhelmed the sewers of the Olympic Village.
On a recent evening in San Francisco, Tristan Harris, a former product philosopher at Google, took a name tag from a man in pajamas called “Honey Bear” and wrote down his pseudonym for the night: “Presence.”
Harris had just arrived at Unplug SF, a “digital detox experiment” held in honor of the National Day of Unplugging, and the organizers had banned real names. Also outlawed: clocks, “w-talk” (work talk), and “WMDs” (the planners’ loaded shorthand for wireless mobile devices). Harris, a slight 32-year-old with copper hair and a tidy beard, surrendered his iPhone, a device he considers so addictive that he’s called it “a slot machine in my pocket.” He keeps the background set to an image of Scrabble tiles spelling out the words face down, a reminder of the device’s optimal position.
I followed him into a spacious venue packed with nearly 400 people painting faces, filling in coloring books, and wrapping yarn around chopsticks. Despite the cheerful summer-camp atmosphere, the event was a reminder of the binary choice facing smartphone owners, who, according to one study, consult their device 150 times a day: Leave the WMD on and deal with relentless prompts compelling them to check its screen, or else completely disconnect. “It doesn’t have to be the all-or-nothing choice,” Harris told me after taking in the arts-and-crafts scene. “That’s a design failure.”
Harris is the closest thing Silicon Valley has to a conscience. As the co‑founder of Time Well Spent, an advocacy group, he is trying to bring moral integrity to software design: essentially, to persuade the tech world to help us disengage more easily from its devices.
While some blame our collective tech addiction on personal failings, like weak willpower, Harris points a finger at the software itself. That itch to glance at our phone is a natural reaction to apps and websites engineered to get us scrolling as frequently as possible. The attention economy, which showers profits on companies that seize our focus, has kicked off what Harris calls a “race to the bottom of the brain stem.” “You could say that it’s my responsibility” to exert self-control when it comes to digital usage, he explains, “but that’s not acknowledging that there’s a thousand people on the other side of the screen whose job is to break down whatever responsibility I can maintain.” In short, we’ve lost control of our relationship with technology because technology has become better at controlling us.
Under the auspices of Time Well Spent, Harris is leading a movement to change the fundamentals of software design. He is rallying product designers to adopt a “Hippocratic oath” for software that, he explains, would check the practice of “exposing people’s psychological vulnerabilities” and restore “agency” to users. “There needs to be new ratings, new criteria, new design standards, new certification standards,” he says. “There is a way to design based not on addiction.”
Joe Edelman—who did much of the research informing Time Well Spent’s vision and is the co-director of a think tank advocating for more-respectful software design—likens Harris to a tech-focused Ralph Nader. Other people, including Adam Alter, a marketing professor at NYU, have championed theses similar to Harris’s; but according to Josh Elman, a Silicon Valley veteran with the venture-capital firm Greylock Partners, Harris is “the first putting it together in this way”—articulating the problem, its societal cost, and ideas for tackling it. Elman compares the tech industry to Big Tobacco before the link between cigarettes and cancer was established: keen to give customers more of what they want, yet simultaneously inflicting collateral damage on their lives. Harris, Elman says, is offering Silicon Valley a chance to reevaluate before more-immersive technology, like virtual reality, pushes us beyond a point of no return.
All this talk of hacking human psychology could sound paranoid, if Harris had not witnessed the manipulation firsthand. Raised in the Bay Area by a single mother employed as an advocate for injured workers, Harris spent his childhood creating simple software for Macintosh computers and writing fan mail to Steve Wozniak, a co-founder of Apple. He studied computer science at Stanford while interning at Apple, then embarked on a master’s degree at Stanford, where he joined the Persuasive Technology Lab. Run by the experimental psychologist B. J. Fogg, the lab has earned a cultlike following among entrepreneurs hoping to master Fogg’s principles of “behavior design”—a euphemism for what sometimes amounts to building software that nudges us toward the habits a company seeks to instill. (One of Instagram’s co-founders is an alumnus.) In Fogg’s course, Harris studied the psychology of behavior change, such as how clicker training for dogs, among other methods of conditioning, can inspire products for people. For example, rewarding someone with an instantaneous “like” after they post a photo can reinforce the action, and potentially shift it from an occasional to a daily activity.
Harris learned that the most-successful sites and apps hook us by tapping into deep-seated human needs. When LinkedIn launched, for instance, it created a hub-and-spoke icon to visually represent the size of each user’s network. That triggered people’s innate craving for social approval and, in turn, got them scrambling to connect. “Even though at the time there was nothing useful you could do with LinkedIn, that simple icon had a powerful effect in tapping into people’s desire not to look like losers,” Fogg told me. Harris began to see that technology is not, as so many engineers claim, a neutral tool; rather, it’s capable of coaxing us to act in certain ways. And he was troubled that out of 10 sessions in Fogg’s course, only one addressed the ethics of these persuasive tactics. (Fogg says that topic is “woven throughout” the curriculum.)
Harris dropped out of the master’s program to launch a start-up that installed explanatory pop-ups across thousands of sites, including The New York Times’. It was his first direct exposure to the war being waged for our time, and Harris felt torn between his company’s social mission, which was to spark curiosity by making facts easily accessible, and pressure from publishers to corral users into spending more and more minutes on their sites. Though Harris insists he steered clear of persuasive tactics, he grew more familiar with how they were applied. He came to conceive of them as “hijacking techniques”—the digital version of pumping sugar, salt, and fat into junk food in order to induce bingeing.
McDonald’s hooks us by appealing to our bodies’ craving for certain flavors; Facebook, Instagram, and Twitter hook us by delivering what psychologists call “variable rewards.” Messages, photos, and “likes” appear on no set schedule, so we check for them compulsively, never sure when we’ll receive that dopamine-activating prize. (Delivering rewards at random has been proved to quickly and strongly reinforce behavior.) Checking that Facebook friend request will take only a few seconds, we reason, though research shows that when interrupted, people take an average of 25 minutes to return to their original task.
Sites foster a sort of distracted lingering partly by lumping multiple services together. To answer the friend request, we’ll pass by the News Feed, where pictures and auto-play videos seduce us into scrolling through an infinite stream of posts—what Harris calls a “bottomless bowl,” referring to a study that found people eat 73 percent more soup out of self-refilling bowls than out of regular ones, without realizing they’ve consumed extra. The “friend request” tab will nudge us to add even more contacts by suggesting “people you may know,” and in a split second, our unconscious impulses cause the cycle to continue: Once we send the friend request, an alert appears on the recipient’s phone in bright red—a “trigger” color, Harris says, more likely than some other hues to make people click—and because seeing our name taps into a hardwired sense of social obligation, she will drop everything to answer. In the end, he says, companies “stand back watching as a billion people run around like chickens with their heads cut off, responding to each other and feeling indebted to each other.”
A Facebook spokesperson told me the social network focuses on maximizing the quality of the experience—not the time its users spend on the site—and surveys its users daily to gauge success. In response to this feedback, Facebook recently tweaked its News Feed algorithm to punish clickbait—stories with sensationalist headlines designed to attract readers. (LinkedIn and Instagram declined requests for comment. Twitter did not reply to multiple queries.)
Even so, a niche group of consultants has emerged to teach companies how to make their services irresistible. One such guru is Nir Eyal, the author of Hooked: How to Build Habit-Forming Products, who has lectured or consulted for firms such as LinkedIn and Instagram. A blog post he wrote touting the value of variable rewards is titled “Want to Hook Your Users? Drive Them Crazy.” While asserting that companies are morally obligated to help those genuinely addicted to their services, Eyal contends that social media merely satisfies our appetite for entertainment in the same way TV or novels do, and that the latest technology tends to get vilified simply because it’s new, but eventually people find balance. “Saying ‘Don’t use these techniques’ is essentially saying ‘Don’t make your products fun to use.’ That’s silly,” Eyal told me. “With every new technology, the older generation says ‘Kids these days are using too much of this and too much of that and it’s melting their brains.’ And it turns out that what we’ve always done is to adapt.”
Google acquired Harris’s company in 2011, and he ended up working on Gmail’s Inbox app. (He’s quick to note that while he was there, it was never an explicit goal to increase time spent on Gmail.) A year into his tenure, Harris grew concerned about the failure to consider how seemingly minor design choices, such as having phones buzz with each new email, would cascade into billions of interruptions. His team dedicated months to fine-tuning the aesthetics of the Gmail app with the aim of building a more “delightful” email experience. But to him that missed the bigger picture: Instead of trying to improve email, why not ask how email could improve our lives—or, for that matter, whether each design decision was making our lives worse? Six months after attending Burning Man in the Nevada desert, a trip Harris says helped him with “waking up and questioning my own beliefs,” he quietly released “A Call to Minimize Distraction & Respect Users’ Attention,” a 144-page Google Slides presentation. In it, he declared, “Never before in history have the decisions of a handful of designers (mostly men, white, living in SF, aged 25–35) working at 3 companies”—Google, Apple, and Facebook—“had so much impact on how millions of people around the world spend their attention … We should feel an enormous responsibility to get this right.” Although Harris sent the presentation to just 10 of his closest colleagues, it quickly spread to more than 5,000 Google employees, including then-CEO Larry Page, who discussed it with Harris in a meeting a year later. “It sparked something,” recalls Mamie Rheingold, a former Google staffer who organized an internal Q&A session with Harris at the company’s headquarters. “He did successfully create a dialogue and open conversation about this in the company.”
Harris left the company last December to push for change more widely, buoyed by a growing network of supporters that includes the MIT professor Sherry Turkle; Meetup’s CEO, Scott Heiferman; and Justin Rosenstein, a co-inventor of the “like” button; along with fed-up users and concerned employees across the industry. “Pretty much every big company that’s manipulating users has been very interested in our work,” says Joe Edelman, who has spent the past five years trading ideas and leading workshops with Harris.
Through Time Well Spent, his advocacy group, Harris hopes to mobilize support for what he likens to an organic-food movement, but for software: an alternative built around core values, chief of which is helping us spend our time well, instead of demanding more of it. Thus far, Time Well Spent is more a label for his crusade—and a vision he hopes others will embrace—than a full-blown organization. (Harris, its sole employee, self-funds it.) Yet he’s amassed a network of volunteers keen to get involved, thanks in part to his frequent cameos on the thought-leader speaker circuit, including talks at Harvard’s Berkman Klein Center for Internet & Society; the O’Reilly Design Conference; an internal meeting of Facebook designers; and a TEDx event, whose video has been viewed more than 1 million times online. Tim O’Reilly, the founder of O’Reilly Media and an early web pioneer, told me Harris’s ideas are “definitely something that people who are influential are listening to and thinking about.” Even Fogg, who stopped wearing his Apple Watch because its incessant notifications annoyed him, is a fan of Harris’s work: “It’s a brave thing to do and a hard thing to do.”
At Unplug SF, a burly man calling himself “Haus” enveloped Harris in a bear hug. “This is the antidote!,” Haus cheered. “This is the antivenom!” All evening, I watched people pull Harris aside to say hello, or ask to schedule a meeting. Someone cornered Harris to tell him about his internet “sabbatical,” but Harris cut him off. “For me this is w‑talk,” he protested.
Harris admits that researching the ways our time gets hijacked has made him slightly obsessive about evaluating what counts as “time well spent” in his own life. The hypnosis class Harris went to before meeting me—because he suspects the passive state we enter while scrolling through feeds is similar to being hypnotized—was not time well spent. The slow-moving course, he told me, was “low bit rate”—a technical term for data-transfer speeds. Attending the digital detox? Time very well spent. He was delighted to get swept up in a mass game of rock-paper-scissors, where a series of one-on-one elimination contests culminated in an onstage showdown between “Joe” and “Moonlight.” Harris has a tendency to immerse himself in a single activity at a time. In conversation, he rarely breaks eye contact and will occasionally rest a hand on his interlocutor’s arm, as if to keep both parties present in the moment. He got so wrapped up in our chat one afternoon that he attempted to get into an idling Uber that was not an Uber at all, but a car that had paused at a stop sign.
An accordion player and tango dancer in his spare time who pairs plaid shirts with a bracelet that has presence stamped into a silver charm, Harris gives off a preppy-hippie vibe that allows him to move comfortably between Palo Alto boardrooms and device-free retreats. In that sense, he had a great deal in common with the other Unplug SF attendees, many of whom belong to a new class of tech elites “waking up” to their industry’s unwelcome side effects. For many entrepreneurs, this epiphany has come with age, children, and the peace of mind of having several million in the bank, says Soren Gordhamer, the creator of Wisdom 2.0, a conference series about maintaining “presence and purpose” in the digital age. “They feel guilty,” Gordhamer says. “They are realizing they built this thing that’s so addictive.”
I asked Harris whether he felt guilty about having joined Google, which has inserted its technology into our pockets, glasses, watches, and cars. He didn’t. He acknowledged that some divisions, such as YouTube, benefit from coaxing us to stare at our screens. But he justified his decision to work there with the logic that since Google controls three interfaces through which millions engage with technology—Gmail, Android, and Chrome—the company was the “first line of defense.” Getting Google to rethink those products, as he’d attempted to do, had the potential to transform our online experience.
At a restaurant around the corner from Unplug SF, Harris demonstrated an alternative way of interacting with WMDs, based on his own self-defense tactics. Certain tips were intuitive: He’s “almost militaristic about turning off notifications” on his iPhone, and he set a custom vibration pattern for text messages, so he can feel the difference between an automated alert and a human’s words. Other tips drew on Harris’s study of psychology. Since merely glimpsing an app’s icon will “trigger this whole set of sensations and thoughts,” he pruned the first screen of his phone to include only apps, such as Uber and Google Maps, that perform a single function and thus run a low risk of “bottomless bowl–ing.” He tried to make his phone look minimalist: Taking a cue from a Google experiment that cut employees’ M&M snacking by moving the candy from clear to opaque containers, he buried colorful icons—along with time-sucking apps like Gmail and WhatsApp—inside folders on the second page of his iPhone. As a result, that screen was practically grayscale. Harris launches apps by using what he calls the phone’s “consciousness filter”—typing Instagram, say, into its search bar—which reduces impulsive tapping. For similar reasons, Harris keeps a Post-it on his laptop with this instruction: “Do not open without intention.”
His approach seems to have worked. I’m usually quick to be annoyed by friends reaching for their phones, but next to Harris, I felt like an addict. Wary of being judged, I made a point not to check my iPhone unless he checked his first, but he went so long without peeking that I started getting antsy. Harris assured me that I was far from an exception.
“Our generation relies on our phones for our moment-to-moment choices about who we’re hanging out with, what we should be thinking about, who we owe a response to, and what’s important in our lives,” he said. “And if that’s the thing that you’ll outsource your thoughts to, forget the brain implant. That is the brain implant. You refer to it all the time.”
Curious to hear more about Harris’s plan for tackling manipulative software, I tagged along one morning to his meeting with two entrepreneurs eager to incorporate Time Well Spent values into their start-up.
Harris, flushed from a yoga class, met me at a bakery not far from the “intentional community house” where he lives with a dozen or so housemates. We were joined by Micha Mikailian and Johnny Chan, the co-founders of an ad blocker, Intently, that replaces advertising with “intentions” reminding people to “Follow Your Bliss” or “Be Present.” Previously, they’d run a marketing and advertising agency.
“One day I was in a meditation practice. I just got the vision for Intently,” said Mikailian, who sported a chunky turquoise bracelet and a man bun.
“It fully aligned with my purpose,” said Chan.
They were interested in learning what it would take to integrate ethical design. Coordinating loosely with Joe Edelman, Harris is developing a code of conduct—the Hippocratic oath for software designers—and a playbook of best practices that can guide start-ups and corporations toward products that “treat people with respect.” Having companies rethink the metrics by which they measure success would be a start. “You have to imagine: What are the concrete benefits landed in space and in time in a person’s life?,” Harris said, coaching Mikailian and Chan.
At his speaking engagements, Harris has presented prototype products that embody other principles of ethical design. He argues that technology should help us set boundaries. This could be achieved by, for example, an inbox that asks how much time we want to dedicate to email, then gently reminds us when we’ve exceeded our quota. Technology should give us the ability to see where our time goes, so we can make informed decisions—imagine your phone alerting you when you’ve unlocked it for the 14th time in an hour. And technology should help us meet our goals, give us control over our relationships, and enable us to disengage without anxiety. Harris has demoed a hypothetical “focus mode” for Gmail that would pause incoming messages until someone has finished concentrating on a task, while allowing interruptions in case of an emergency. (Slack has implemented a similar feature.)
Harris hopes to create a Time Well Spent certification—akin to the LEED seal or an organic label—that would designate software made with those values in mind. He already has a shortlist of apps that he endorses as early exemplars of the ethos, such as Pocket, Calendly, and f.lux, which, respectively, saves articles for future reading, lets people book empty slots on an individual’s calendar to streamline the process of scheduling meetings, and aims to improve sleep quality by adding a pinkish cast to the circadian-rhythm-disrupting blue light of screens. Intently could potentially join this coalition, he volunteered.
As a first step toward identifying other services that could qualify, Harris has experimented with creating software that would capture how many hours someone devotes weekly to each app on her phone, then ask her which ones were worthwhile. The data could be compiled to create a leaderboard that shames apps that addict but fail to satisfy. Edelman has released a related tool for websites, called Hindsight. “We have to change what it means to win,” Harris says.
The biggest obstacle to incorporating ethical design and “agency” is not technical complexity. According to Harris, it’s a “will thing.” And on that front, even his supporters worry that the culture of Silicon Valley may be inherently at odds with anything that undermines engagement or growth. “This is not the place where people tend to want to slow down and be deliberate about their actions and how their actions impact others,” says Jason Fried, who has spent the past 12 years running Basecamp, a project-management tool. “They want to make things more sugary and more tasty, and pull you in, and justify billions of dollars of valuation and hundreds of millions of dollars [in] VC funds.”
Rather than dismantling the entire attention economy, Harris hopes that companies will, at the very least, create a healthier alternative to the current diet of tech junk food. He recognizes that this shift would require reevaluating entrenched business models so success no longer hinges on claiming attention and time. As with organic vegetables, it’s possible that the first generation of Time Well Spent software might be available at a premium price, to make up for lost advertising dollars. “Would you pay $7 a month for a version of Facebook that was built entirely to empower you to live your life?,” Harris says. “I think a lot of people would pay for that.”
Like splurging on grass-fed beef, paying for services that are available for free and disconnecting for days (even hours) at a time are luxuries that few but the reasonably well-off can afford. I asked Harris whether this risked stratifying tech consumption, such that the privileged escape the mental hijacking and everyone else remains subjected to it. “It creates a new inequality. It does,” Harris admitted. But he countered that if his movement gains steam, broader change could occur, much in the way Walmart now stocks organic produce.
Currently, though, the trend is toward deeper manipulation in ever more sophisticated forms. Harris fears that Snapchat’s tactics for hooking users make Facebook’s look quaint. Facebook automatically tells a message’s sender when the recipient reads the note—a design choice that, per Fogg’s logic, activates our hardwired sense of social reciprocity and encourages the recipient to respond. Snapchat ups the ante: Unless the default settings are changed, users are informed the instant a friend begins typing a message to them—which effectively makes it a faux pas not to finish a message you start. Harris worries that the app’s Snapstreak feature, which displays how many days in a row two friends have snapped each other and rewards their loyalty with an emoji, seems to have been pulled straight from Fogg’s inventory of persuasive tactics.
Research shared with Harris by Emily Weinstein, a Harvard doctoral candidate, shows that Snapstreak is driving some teenagers nuts—to the point that before going on vacation, they give friends their log-in information and beg them to snap in their stead. “To be honest, it made me sick to my stomach to hear these anecdotes,” Harris told me.
Harris thinks his best shot at improving the status quo is to get users riled up about the ways they’re being manipulated, then create a groundswell of support for technology that respects people’s agency—something akin to the privacy outcry that prodded companies to roll out personal-information protections.
While Harris’s experience at Google convinced him that users must demand change for it to happen, Edelman suggests that the incentive to adapt can originate within the industry, as engineers become reluctant to build products they view as unethical and companies face a brain drain. The more people recognize the repercussions of tech firms’ persuasive tactics, the more working there “becomes uncool,” he says, a view I heard echoed by others in his field. “You can really burn through engineers hard.”
There is arguably an element of hypocrisy to the enlightened image that Silicon Valley projects, especially with its recent embrace of “mindfulness.” Companies like Google and Facebook, which have offered mindfulness training and meditation spaces for their employees, position themselves as corporate leaders in this movement. Yet this emphasis on mindfulness and consciousness, which has extended far beyond the tech world, puts the burden on users to train their focus, without acknowledging that the devices in their hands are engineered to chip away at their concentration. It’s like telling people to get healthy by exercising more, then offering the choice between a Big Mac and a Quarter Pounder when they sit down for a meal.
And being aware of software’s seductive power does not mean being immune to its influence. One evening, just as we were about to part ways for the night, Harris stood talking by his car when his phone flashed with a new text message. He glanced down at the screen and interrupted himself mid-sentence. “Oh!” he announced, more to his phone than to me, and mumbled something about what a coincidence it was that the person texting him knew his friend. He looked back up sheepishly. “That’s a great example,” he said, waving his phone. “I had no control over the process.”
On or about February 24, 1848, a twenty-three-page pamphlet was published in London. Modern industry, it proclaimed, had revolutionized the world. It surpassed, in its accomplishments, all the great civilizations of the past—the Egyptian pyramids, the Roman aqueducts, the Gothic cathedrals. Its innovations—the railroad, the steamship, the telegraph—had unleashed fantastic productive forces. In the name of free trade, it had knocked down national boundaries, lowered prices, made the planet interdependent and cosmopolitan. Goods and ideas now circulated everywhere.
Just as important, it swept away all the old hierarchies and mystifications. People no longer believed that ancestry or religion determined their status in life. Everyone was the same as everyone else. For the first time in history, men and women could see, without illusions, where they stood in their relations with others.
The new modes of production, communication, and distribution had also created enormous wealth. But there was a problem. The wealth was not equally distributed. Ten per cent of the population possessed virtually all of the property; the other ninety per cent owned nothing. As cities and towns industrialized, as wealth became more concentrated, and as the rich got richer, the middle class began sinking to the level of the working class.
Soon, in fact, there would be just two types of people in the world: the people who owned property and the people who sold their labor to them. As ideologies disappeared which had once made inequality appear natural and ordained, it was inevitable that workers everywhere would see the system for what it was, and would rise up and overthrow it. The writer who made this prediction was, of course, Karl Marx, and the pamphlet was “The Communist Manifesto.” He is not wrong yet.
Considering his rather glaring relevance to contemporary politics, it’s striking that two important recent books about Marx are committed to returning him to his own century. “Marx was not our contemporary,” Jonathan Sperber insists, in “Karl Marx: A Nineteenth-Century Life” (Liveright), which came out in 2013; he is “more a figure of the past than a prophet of the present.” And Gareth Stedman Jones explains that the aim of his new book, “Karl Marx: Greatness and Illusion” (Harvard), is “to put Marx back in his nineteenth-century surroundings.”
The mission is worthy. Historicizing—correcting for the tendency to presentize the past—is what scholars do. Sperber, who teaches at the University of Missouri, and Stedman Jones, who teaches at Queen Mary University of London and co-directs the Centre for History and Economics at the University of Cambridge, both bring exceptional learning to the business of rooting Marx in the intellectual and political life of nineteenth-century Europe.
Marx was one of the great infighters of all time, and a lot of his writing was topical and ad hominem—no-holds-barred disputes with thinkers now obscure and intricate interpretations of events largely forgotten. Sperber and Stedman Jones both show that if you read Marx in that context, as a man engaged in endless internecine political and philosophical warfare, then the import of some familiar passages in his writings can shrink a little. The stakes seem more parochial. In the end, their Marx isn’t radically different from the received Marx, but he is more Victorian. Interestingly, given the similarity of their approaches, there is not much overlap.
Still, Marx was also what Michel Foucault called the founder of a discourse. An enormous body of thought is named after him. “I am not a Marxist,” Marx is said to have said, and it’s appropriate to distinguish what he intended from the uses other people made of his writings. But a lot of the significance of the work lies in its downstream effects. However he managed it, and despite the fact that, as Sperber and Stedman Jones demonstrate, he can look, on some level, like just one more nineteenth-century system-builder who was convinced he knew how it was all going to turn out, Marx produced works that retained their intellectual firepower over time. Even today, “The Communist Manifesto” is like a bomb about to go off in your hands.
And, unlike many nineteenth-century critics of industrial capitalism—and there were a lot of them—Marx was a true revolutionary. All of his work was written in the service of the revolution that he predicted in “The Communist Manifesto” and that he was certain would come to pass. After his death, communist revolutions did come to pass—not exactly where or how he imagined they would but, nevertheless, in his name. By the middle of the twentieth century, more than a third of the people in the world were living under regimes that called themselves, and genuinely believed themselves to be, Marxist.
This matters because one of Marx’s key principles was that theory must always be united with practice. That’s the point of the famous eleventh thesis on Feuerbach: “Philosophers have hitherto only interpreted the world in various ways; the point is to change it.” Marx was not saying that philosophy is irrelevant; he was saying that philosophical problems arise out of real-life conditions, and they can be solved only by changing those conditions—by remaking the world. And Marx’s ideas were used to remake the world, or a big portion of it. Although no one would hold him responsible, in a juridical sense, for the outcome, on Marx’s own principle the outcome tells us something about the ideas.
In short, you can put Marx back into the nineteenth century, but you can’t keep him there. He wasted a ridiculous amount of his time feuding with rivals and putting out sectarian brush fires, and he did not even come close to completing the work he intended as his magnum opus, “Capital.” But, for better or for worse, it just is not the case that his thought is obsolete. He saw that modern free-market economies, left to their own devices, produce gross inequalities, and he transformed a mode of analysis that goes all the way back to Socrates—turning concepts that we think we understand and take for granted inside out—into a resource for grasping the social and economic conditions of our own lives.
Apart from his loyal and lifelong collaborator, Friedrich Engels, almost no one would have guessed, in 1883, the year Marx died, at the age of sixty-four, how influential he would become. Eleven people showed up for the funeral. For most of his career, Marx was a star in a tiny constellation of radical exiles and failed revolutionaries (and the censors and police spies who monitored them) but almost unknown outside it. The books he is famous for today were not exactly best-sellers. “The Communist Manifesto” vanished almost as soon as it was published and remained largely out of print for twenty-four years; “Capital” was widely ignored when the first volume came out, in 1867. After four years, it had sold a thousand copies, and it was not translated into English until 1886.
The second and third volumes of “Capital” were published after Marx died, stitched together by Engels from hundreds of pages of scrawled-over drafts. (Marx had spectacularly bad handwriting; Engels was one of the few people outside the family who could decipher it.) The “Theses on Feuerbach,” which Marx wrote in 1845, were not discovered until 1888, when Engels published them, and some of the texts most important for twentieth-century Marxists—the cobbled-together volume known as “The German Ideology,” the so-called Paris manuscripts of 1844, and the book entitled the “Grundrisse” by its Soviet editors—were unknown until after 1920. The unfinished Paris manuscripts, a holy text in the nineteen-sixties, did not appear in English until 1959. Marx seems to have regarded none of that material as publishable.
In Marx’s own lifetime, the work that finally brought him attention outside his circle was a thirty-five-page item called “The Civil War in France,” published in 1871, in which he hailed the short-lived and violently suppressed Paris Commune as “the glorious harbinger of a new”—that is, communist—“society.” It’s not a text that is cited much today.
One reason for Marx’s relative obscurity is that only toward the end of his life did movements to improve conditions for workers begin making gains in Europe and the United States. To the extent that those movements were reformist rather than revolutionary, they were not Marxist (although Marx did, in later years, speculate about the possibility of a peaceful transition to communism). With the growth of the labor movement came excitement about socialist thought and, with that, an interest in Marx.
Still, as Alan Ryan writes in his characteristically lucid and concise introduction to Marx’s political thought, “Karl Marx: Revolutionary and Utopian” (Liveright), if Vladimir Lenin had not arrived in Petrograd in 1917 and taken charge of the Russian Revolution, Marx would probably be known today as “a not very important nineteenth-century philosopher, sociologist, economist, and political theorist.” The Russian Revolution made the world take Marx’s criticism of capitalism seriously. After 1917, communism was no longer a utopian fantasy.
Marx is a warning about what can happen when people defy their parents and get a Ph.D. Marx’s father, a lawyer in the small city of Trier, in western Germany, had tried to steer him into the law, but Marx chose philosophy. He studied at the Friedrich-Wilhelms-Universität, where Hegel once taught, and he became involved with a group of intellectuals known as the Young Hegelians. Hegel was cautious about criticizing religion and the Prussian state; the Young Hegelians were not, and, just as Marx was being awarded his degree, in 1841, there was an official crackdown. Marx’s mentor was fired, and the Young Hegelians became academic pariahs. So Marx did what many unemployed Ph.D.s do: he went into journalism.
Apart from a few small book advances, journalism was Marx’s only source of earned income. (There is a story, though Sperber considers it unsubstantiated, that once, in desperation, he applied for a job as a railway clerk and was turned down for bad handwriting.) In the eighteen-forties, Marx edited and contributed to political newspapers in Europe; from 1852 to 1862, he wrote a column for the New York Daily Tribune, the paper with the largest circulation in the world at the time.
When journalistic work dried up, he struggled. He depended frequently on support from Engels and advances on his inheritance. He was sometimes desperate for food; at one point, he couldn’t leave the house because he had pawned his only coat. The claim that the author of “Capital” was financially inept, and that he and his wife wasted what little money came their way on middle-class amenities like music and drawing lessons for the children, became a standard “irony” in Marx biographies. Sperber contests this. Marx had less money to waste than historians have assumed, and he accepted poverty as the price of his politics. He would gladly have lived in a slum himself, but he didn’t want his family to suffer. Three of the Marxes’ children died young and a fourth was stillborn; poverty and substandard living conditions may have been factors.
Marx’s journalism made him into a serial exile. He wrote and published articles offensive to the authorities, and, in 1843, he was kicked out of Cologne, where he was helping run a paper called Rheinische Zeitung. He went to Paris, which had a large German community, and that is where he and Engels became friends. An earlier encounter in Cologne had not gone well, but they met again at the Café de la Régence, in 1844, and ended up spending ten days together talking.
Engels, who was two years younger, had the same politics as Marx. Soon after they met, he wrote his classic study “The Condition of the Working Class in England,” which ends by predicting a communist revolution. Engels’s father was a German industrialist in the textile business, an owner of factories in Barmen and Bremen and in Manchester, England, and although he disapproved of his son’s politics and the company he kept, he gave him a position at the Manchester factory. Engels hated the work, but he was good at it, as he was at most things. He went fox hunting with the gentry he despised, and made fun of Marx’s attempts to ride a horse. Engels eventually became a partner, and the income helped him keep Marx alive.
In 1845, Marx was expelled from France. He moved to Brussels. Three years later, though, something happened that almost no one had foreseen: revolutions broke out across Europe, including in France, Italy, Germany, and the Austrian Empire. Marx wrote “The Communist Manifesto” just as those uprisings were getting under way. When unrest reached Brussels, he was suspected of arming insurgents and was evicted from Belgium, but he returned to Paris. Rioters there had broken into the Tuileries and set the French throne on fire.
By the year’s end, most of the revolutions had been crushed by monarchist forces. Many people who were or would become important figures in European art and literature—Wagner, Dostoyevsky, Baudelaire, Turgenev, Berlioz, Delacroix, Liszt, George Sand—had been caught up in the revolutionary excitement, and the outcome led to a crisis of faith in politics (the subject of Flaubert’s novel “Sentimental Education”). The failure of the 1848 revolutions is what Marx’s line “the first time as tragedy, the second time as farce” refers to. (He got the phrase from Engels.) The “tragedy” was the fate of the French Revolution under Napoleon; the “farce” was the election of Napoleon’s nephew, Louis-Napoleon Bonaparte, whom Marx considered a nonentity, to the Presidency of France, in December, 1848. Bonaparte eventually declared himself Emperor and ruled until 1870, when France lost a war with Prussia. The Paris Commune was a by-product of that war.
So in 1849 Marx was forced into exile once again. He fled with his family to London. He assumed that the stay would be temporary, but he lived there for the rest of his life. That is where, day after day in the Reading Room of the British Museum, he did the research for “Capital,” and it is where, in Highgate Cemetery, he is buried. The impressive bronze bust you see on his tombstone today was placed there, in 1956, by the Communist Party of Great Britain.
What was Marx like? The number of first-person reports is not large, but they tend to agree. He was, in some respects, a caricature of the German academic (which he had once expected to become): an imperious know-it-all with untamed hair in a misbuttoned frock coat. He once described himself to one of his children as “a machine condemned to devour books and then throw them, in a changed form, on the dunghill of history.” He wrote all night in clouds of tobacco smoke, books and papers piled around him. “They are my slaves,” he said, “and they must serve me as I please.”
In professional matters, he was forbidding. He was a cogent speaker but had a lisp and was a poor orator; he knew it, and rarely addressed a crowd. He was ruthless in print, made enemies of many friends and former allies, and did not suffer fools—a large subset of his acquaintance, in his view. One German exile referred to him as “an intellectual customs agent and border guard, appointed on his own authority.”
Still, he commanded respect. A colleague, recalling Marx at twenty-eight, described him as “a born leader of the people.” He was actually good at running the show—as an editor and, later on, as the dominant figure in the International Workingmen’s Association, known as the First International. His hair was black; his eyes were black; his complexion was swarthy. Engels called him the “black fellow from Trier”; his wife and children called him the Moor.
In private, he was modest and gracious. When he was not sick—he had a bad liver, suffered from bronchitis, and grew fist-size boils, which Sperber thinks were caused by an autoimmune disorder but which may have been a symptom of his liver disease—he was playful and affectionate. He loved Shakespeare, made up stories for his three daughters, and enjoyed cheap cigars and red wine. His wife and daughters adored him. A Prussian government spy who visited Marx at his home in 1852 was surprised to find him “the gentlest and mildest of men.”
He became engaged to Jenny von Westphalen, also from Trier, when he was eighteen and she was twenty-two. Sperber thinks that a fairy tale has grown up about the marriage, but Jenny is said to have been exceptionally beautiful, and she was devoted to Karl. He wrote passionate love poetry for her. The engagement lasted seven years, during which he finished his studies, and they rarely saw each other. The relationship was mainly epistolary. (Sperber believes that they had premarital sex. I certainly hope so.) In her letters, Jenny calls Karl her “little wild boar.”
The one possible flaw in the domestic idyll has to do with a child born to their servant, Helene Demuth. She was a “gift” to the Marxes from Jenny’s mother and lived with the family. (Almost all women in nineteenth-century Britain who could manage to retain a servant did so. Even Miss Bates, in Jane Austen’s “Emma,” who lives on the charity of her well-off neighbors, has a servant.) Helene’s child, named Frederick and called Freddy, was born in 1851 and was brought up by foster parents. Marx’s daughters didn’t meet him until after Marx’s death.
Engels claimed paternity. This was not implausible. Engels was unmarried and had a taste for working-class women; his longtime lover, Mary Burns, worked in a Manchester factory. On his deathbed, though, forty-four years later, he is supposed to have named Marx as Freddy’s real father, information that became known in Communist circles but was not made public until 1962. Sperber and Stedman Jones accept the story, as does the author of the standard English-language biography, David McLellan, although one of Engels’s biographers, Terrell Carver, thinks that the evidence is not conclusive. Demuth remained with the family; after Marx’s death, she went to work for Engels. And the Marxes’ marriage survived.
It is sympathy for Marx that leads Sperber and Stedman Jones to insist that we read him in his nineteenth-century context, because they hope to distance him from the interpretation of his work made after his death by people like Karl Kautsky, who was his chief German-language exponent; Georgi Plekhanov, his chief Russian exponent; and, most influentially, Engels. It was thanks mainly to those writers that people started to refer to Marxism as “scientific socialism,” a phrase that sums up what was most frightening about twentieth-century Communism: the idea that human beings can be reëngineered in accordance with a theory that presents itself as a law of history. The word the twentieth century coined for that was totalitarianism.
So, by 1939, when the British philosopher Isaiah Berlin published his widely read and not wholly unadmiring study “Karl Marx: His Life and Environment” (still in print), he could describe Marx as “among the great authoritarian founders of new faiths, ruthless subverters and innovators who interpret the world in terms of a single, clear, passionately held principle, denouncing and destroying all that conflicts with it. His faith . . . was of that boundless, absolute kind which puts an end to all questions and dissolves all difficulties.” This became the Cold War Marx.
It’s true that Marx was highly doctrinaire, something that did not wear well with his compatriots in the nineteenth century, and that certainly does not wear well today, after the experience of the regimes conceived in his name. It therefore sounds perverse to say that Marx’s philosophy was dedicated to human freedom. But it was. Marx was an Enlightenment thinker: he wanted a world that is rational and transparent, and in which human beings have been liberated from the control of external forces.
This was the essence of Marx’s Hegelianism. Hegel argued that history was the progress of humanity toward true freedom, by which he meant self-mastery and self-understanding, seeing the world without illusions—illusions that we ourselves have created. The Young Hegelians’ controversial example of this was the Christian God. (This is what Feuerbach wrote about.) We created God, and then pretended that God created us. We hypostatized our own concept and turned it into something “out there” whose commandments (which we made up) we struggle to understand and obey. We are supplicants to our own fiction.
Concepts like God are not errors. History is rational: we make the world the way we do for a reason. We invented God because God solved certain problems for us. But, once a concept begins impeding our progress toward self-mastery, it must be criticized and transcended, left behind. Otherwise, like the members of the Islamic State today, we become the tools of our Tool.
What makes it hard to discard the tools we have objectified is the persistence of the ideologies that justify them, and which make what is only a human invention seem like “the way things are.” Undoing ideologies is the task of philosophy. Marx was a philosopher. The subtitle of “Capital” is “Critique of Political Economy.” The uncompleted book was intended to be a criticism of the economic concepts that make social relations in a free-market economy seem natural and inevitable, in the same way that concepts like the great chain of being and the divine right of kings once made the social relations of feudalism seem natural and inevitable.
The reason that “Capital” looks more like a work of economics than like a work of philosophy—the reason that it is filled with tables and charts rather than with syllogisms—is the reason given in the eleventh thesis on Feuerbach: the purpose of philosophy is to understand conditions in order to change them. Marx liked to say that when he read Hegel he found philosophy standing on its head, so he turned it over and placed it on its feet. Life is doing, not thinking. It is not enough to be the masters of our armchairs.
Marx thought that industrial capitalism, too, was created for a good reason: to increase economic output—something that “The Communist Manifesto” celebrates. The cost, however, is a system in which one class of human beings, the property owners (in Marxian terms, the bourgeoisie), exploits another class, the workers (the proletariat).
Capitalists don’t do this because they are greedy or cruel (though one could describe their behavior that way, as Marx almost invariably did). They do it because competition demands it. That’s how the system operates. Industrial capitalism is a Frankenstein’s monster that threatens its own creators, a system that we constructed for our own purposes and is now controlling us.
Marx was a humanist. He believed that we are beings who transform the world around us in order to produce objects for the benefit of all. That is our essence as a species. A system that transforms this activity into “labor” that is bought and used to aggrandize others is an obstacle to the full realization of our humanity. Capitalism is fated to self-destruct, just as all previous economic systems have self-destructed. The working-class revolution will lead to the final stage of history: communism, which, Marx wrote, “is the solution to the riddle of history and knows itself as this solution.”
Marx was fanatically committed to finding empirical corroboration for his theory. That’s what it meant to put philosophy on its feet. And that’s why he spent all those hours alone in the British Museum, studying reports on factory conditions, data on industrial production, statistics about international trade. It was a heroic attempt to show that reality aligned with theory. No wonder he couldn’t finish his book.
Marx had very little to say about how the business of life would be conducted in a communist society, and this turned out to be a serious problem for regimes trying to put communism into practice. He had reasons for being vague. He thought that our concepts, values, and beliefs all arise out of the conditions of our own time, which means that it’s hard to know what lies on the other side of historical change. In theory, after the revolution, everything will be “up for grabs”—which has been the great dream of leftist radicalism ever since.
Marx was clearer about what a communist society would not have. There would be no class system, no private property, no individual rights (which Marx thought boil down to protecting the right of the owners of property to hang on to it), and no state (which he called “a committee for managing the common affairs of the whole bourgeoisie”). The state, in the form of the Party, proved to be one bourgeois concept that twentieth-century Communist regimes found impossible to transcend. Communism is not a religion; it truly is, as anti-Communists used to say about it, godless. But the Party functions in the way that Feuerbach said God functions in Christianity, as a mysterious and implacable external power.
Marx did not, however, provide much guidance for how a society would operate without property or classes or a state. A good example of the problem is his criticism of the division of labor. In the first chapter of “The Wealth of Nations,” in 1776, Adam Smith identified the division of labor—that is, specialization—as the key to economic growth. Smith’s case study was the manufacture of pins. Rather than have a single worker make one pin at a time, Smith argued, a pin factory can split the job into eighteen separate operations, starting with drawing out the wire and ending with the packaging, and increase production by a factor of thousands.
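As a rough illustration of the arithmetic behind that claim, here is a minimal sketch using the figures commonly attributed to Smith’s example (ten specialized workers producing on the order of forty-eight thousand pins a day, versus scarcely one pin a day for an untrained worker on his own). The numbers are assumptions for illustration, not data drawn from this article.

```python
# Back-of-the-envelope arithmetic for the "factor of thousands" claim, using the
# figures commonly attributed to Smith's pin-factory example. Treat these numbers
# as illustrative assumptions, not data drawn from this article.

workers = 10                      # a small specialized workshop
daily_output_together = 48_000    # pins per day with the job split into separate operations
daily_output_alone = 1            # an untrained worker could "scarce ... make one pin in a day"

output_per_specialized_worker = daily_output_together / workers   # 4,800 pins
productivity_multiple = output_per_specialized_worker / daily_output_alone

print(f"Pins per specialized worker per day: {output_per_specialized_worker:,.0f}")
print(f"Multiple over working alone: about {productivity_multiple:,.0f}x")
```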
To us, this seems an obviously efficient way to organize work, from automobile assembly lines to “knowledge production” in universities. But Marx considered the division of labor one of the evils of modern life. (So did Hegel.) It makes workers cogs in a machine and deprives them of any connection with the product of their labor. “Man’s own deed becomes an alien power opposed to him, which enslaves him instead of being controlled by him,” as Marx put it. In a communist society, he wrote, “nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes.” It will be possible “to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner . . . without ever becoming hunter, fisherman, herdsman, or critic.”
This often quoted passage sounds fanciful, but it is at the heart of Marx’s thought. Human beings are naturally creative and sociable. A system that treats them as mechanical monads is inhumane. But the question is, How would a society without a division of labor produce sufficient goods to survive? Nobody will want to rear the cattle (or clean the barn); everyone will want to be the critic. (Believe me.) As Marx conceded, capitalism, for all its evils, had created abundance. He seems to have imagined that, somehow, all the features of the capitalist mode of production could be thrown aside and abundance would magically persist.
In 1980, the philosopher Peter Singer published a short book on Marx in which he listed some of Marx’s predictions: the income gap between workers and owners would increase, independent producers would be forced down into the ranks of the proletariat, wages would remain at subsistence levels, the rate of profit would fall, capitalism would collapse, and there would be revolutions in the advanced countries. Singer thought that most of these predictions were “so plainly mistaken” that it was difficult to understand how anyone sympathetic to Marx could defend them. In 2016, it is harder to be dismissive.
“Economists today would do well to take inspiration from his example,” Thomas Piketty says about Marx, in the best-seller he published in 2013, “Capital in the Twenty-first Century.” The book did for many twenty-first-century readers what Marx hoped “Capital” might do for nineteenth-century ones. It uses data to show us the real nature of social relations and, by doing that, forces us to rethink concepts that have come to seem natural and inevitable. One of these is the concept of the market, which is often imagined as a self-optimizing mechanism it is a mistake to interfere with, but which in fact, left to itself, continually increases inequality. Another concept, closely related, is meritocracy, which is often imagined as a guarantor of social mobility but which, Piketty argues, serves mainly to make economic winners feel virtuous.
Piketty says that for thirty years after 1945 a high rate of growth in the advanced economies was accompanied by a rise in incomes that benefitted all classes. Severe wealth inequality came to seem a thing of the past (which is why, in 1980, people could quite reasonably call Marx’s predictions mistaken). It now appears that those thirty years were an anomaly. The Depression and the two world wars had effectively wiped out the owners of wealth, but the thirty years after 1945 rebooted the economic order.
“The very high level of private wealth that has been attained since the nineteen-eighties and nineteen-nineties in the wealthy countries of Europe and in Japan,” Piketty says, “directly reflects the Marxian logic.” Marx was correct that there is nothing naturally egalitarian about modern economies left to themselves. As Piketty puts it, “There is no natural, spontaneous process to prevent destabilizing, inegalitarian forces from prevailing permanently.”
The tendency of the system to increase inequality was certainly true in Marx’s own century. By 1900, the richest one per cent of the population in Britain and France owned more than fifty per cent of those nations’ wealth; the top ten per cent owned ninety per cent. We are approaching those levels again today. In the United States, according to the Federal Reserve, the top ten per cent of the population owns seventy-two per cent of the wealth, and the bottom fifty per cent has two per cent. About ten per cent of the national income goes to the top two hundred and forty-seven thousand adults (one-thousandth of the adult population).
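As a quick check of that parenthetical, here is a back-of-the-envelope calculation using only the figures quoted above; the implied adult-population total is simply what the stated ratio requires, not an independent statistic.

```python
# Sanity-check the parenthetical using only figures quoted in the passage above.

top_income_group = 247_000           # adults said to receive about ten per cent of national income
share_of_adult_population = 1 / 1000  # "one-thousandth of the adult population"

implied_adults = top_income_group / share_of_adult_population
print(f"Implied U.S. adult population: {implied_adults:,.0f}")   # about 247 million

# The Federal Reserve wealth shares cited above, expressed per share of population.
top_decile_wealth, top_decile_pop = 0.72, 0.10
bottom_half_wealth, bottom_half_pop = 0.02, 0.50

concentration_ratio = (top_decile_wealth / top_decile_pop) / (bottom_half_wealth / bottom_half_pop)
print(f"Wealth per capita, top tenth vs. bottom half: roughly {concentration_ratio:.0f}x")
```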
This is not a problem restricted to the rich nations. Global wealth is also unequally distributed, and by the same ratios or worse. Piketty does not predict a worldwide working-class revolution; he does remark that this level of inequality is “unsustainable.” He can foresee a time when most of the planet is owned by billionaires.
Marx was also not wrong about the tendency of workers’ wages to stagnate as income for the owners of capital rises. For the first sixty years of the nineteenth century—the period during which he began writing “Capital”—workers’ wages in Britain and France were stuck at close to subsistence levels. It can be difficult now to appreciate the degree of immiseration in the nineteenth-century industrial economy. In one period in 1862, the average workweek in a Manchester factory was eighty-four hours.
It appears that wage stagnation is back. After 1945, wages rose as national incomes rose, but the income of the lowest earners peaked in 1969, when the minimum hourly wage in the United States was $1.60. That is the equivalent of $10.49 today, when the national minimum wage is $7.25. And, as wages for service-sector jobs decline in earning power, the hours in the workweek increase, because people are forced to take more than one job.
The rhetoric of our time, the time of Bernie Sanders and Donald Trump, Brexit, and popular unrest in Europe, appears to have a Marxist cast. Sanders’s proposals to reduce inequality are straight out of Piketty: tax wealth and give more people access to knowledge. Trump, since he admires authoritarian personalities, might be pleased to know that Marx supported free trade on a “the worse things get” theory: by driving wages lower, free trade increases the impoverishment of the working class and leads more quickly to the revolution. In the terms used everywhere today, on the left, on the right, and in the press: the system is “rigged” to reward “the élites.” Marx called them “the ruling class.”
How useful is Marx for understanding this bubble of ferment in the advanced economies? I think we don’t yet know very well the precise demographic profile of Brexit voters and Trump and Sanders supporters—whether they are people who have been materially damaged by free trade and immigration or people who are hostile to the status quo for other reasons. That they are basically all the former may turn out to be a consoling belief of the better-off, who can more easily understand why people who have suffered economic damage would be angry than why people who have nothing to complain about financially might simply want to blow the whole thing up.
Still, in the political confusion, we may feel that we are seeing something that has not been seen in countries like Britain and the United States since before 1945: people debating what Marx would call the real nature of social relations. The political earth is being somewhat scorched. And, as politics continues to shed its traditional restraints, ugly as it is to watch, we may get a clearer understanding of what those relations are.
They may not be entirely economic. A main theme of Stedman Jones’s book is that Marx and Engels, in their obsession with class, ignored the power of other forms of identity. One of these is nationalism. For Marx and Engels, the working-class movement was international. But today we seem to be seeing, among the voters for Brexit, for example, a reversion to nationalism and, in the United States, what looks like a surge of nativism.
Stedman Jones also argues that Marx and Engels failed to appreciate the extent to which the goal of working-class agitation in nineteenth-century Britain was not ownership of the means of production but political inclusion, being allowed to vote. When that was achieved, unrest subsided.
Voting is no longer the test of inclusion. What is happening in the rich democracies may be not so much a war between the haves and the have-nots as a war between the socially advantaged and the left-out. Anyone who lives in poverty would trade that life for a better one, but what most people probably want is the life they have. They fear losing that more than they wish for a different life, although they probably also want their children to be able to lead a different life if they choose.
Of the features of modern society that exacerbate that fear and threaten that hope, the distribution of wealth may not be the most important. Money matters to people, but status matters more, and precisely because status is something you cannot buy. Status is related to identity as much as it is to income. It is also, unfortunately, a zero-sum game. The struggles over status are socially divisive, and they can resemble class warfare.
Ryan, in his book on Marx, makes an observation that Marx himself might have made. “The modern republic,” he says, “attempts to impose political equality on an economic inequality it has no way of alleviating.” This is a relatively recent problem, because the rise of modern capitalism coincided with the rise of modern democracies, making wealth inequality inconsistent with political equality. But the unequal distribution of social resources is not new. One of the most striking points Piketty makes is that, as he puts it, “in all known societies in all times, the least wealthy half of the population has owned virtually nothing,” and the top ten per cent has owned “most of what there is to own.”
This is probably not true of tribal societies, and it does not seem to have been true of the earliest known democratic state, Periclean Athens (at least, for the citizens). But inequality has been with us for a long time. Industrial capitalism didn’t reverse it in the nineteenth century, and finance capitalism is not reversing it in the twenty-first. The only thing that can reverse it is political action aimed at changing systems that seem to many people to be simply the way things have to be. We invented our social arrangements; we can alter them when they are working against us. There are no gods out there to strike us dead if we do. ♦