It’s a psychological truism that personal identity is fluid and that its continuity — whatever it is that links the you of today to older versions of yourself — is asserted against a backdrop of flux. Social platforms, however, have distorted and jumbled up the sense of which parts change and which don’t. The many means of expression they provide and the archives they maintain suggest that everything about who we are (and were) may be reimagined and exhibited in an endless array of new formats.
These formats may be app-specific, as are the strategies for maximizing visibility on them: Studies suggest that tweets that blend “general Twitter language” with “personal style” are more likely to go viral, while theories about the best days and times to post on Instagram abound. Beyond the strategies tailored to particular apps are more general approaches to standing out, including styles of writing that oscillate between sincerity and irony, or between grammatical precision and laxity. If these hairpin turns perplex some audiences, all the better: Hence the emergence of the post-brand account, which is meant to keep them guessing.
Earlier this year, in an Atlantic essay titled “The Personal Brand Is Dead,” Kaitlyn Tiffany argued that the standard features of a brandable web presence — including visibility, relatability, and cohesion — are seen as trite and boring among the Gen Z cohort, who find it “natural” to be “confusing and inscrutable” online. Such attempts at incalculability are innocent enough; they can even be life-affirming, offsetting the stultifying effects of routine. As account owners construct enigmatic, shape-shifting mashups of motifs and concepts with no clear unifying theme, they attempt to resist their own commodification. This resistance would seem to defy the ethos of digital capitalism, which is predicated on rational and predictable correlations between data and behavior.
WE LIVE IN UNDENIABLY UGLY TIMES. Architecture, industrial design, cinematography, probiotic soda branding — many of the defining features of the visual field aren’t sending their best. Despite manufacturing and design technologies more advanced than any in human history, our built environment tends overwhelmingly toward the insubstantial, the flat, and the gray, punctuated here and there by the occasional childish squiggle. This drab sublime unites flat-pack furniture and home electronics, municipal infrastructure and commercial graphic design: an ocean of stuff so homogenous and underthought that the world it has inundated can feel like a digital rendering — of a slightly duller, worse world.
If the Situationists drifted through Paris looking to get defamiliarized, today a scholar of the new ugliness can conduct their research in any contemporary American city — or upzoned American Main Street, or exurban American parking lot, or, if they’re really desperate, on the empty avenues of Meta’s Horizon Worlds. Our own walk begins across the street from our apartment, where, following the recent demolition of a perfectly serviceable hundred-year-old building, a monument to ugliness has besieged the block. Our new neighbor is a classic 5-over-1: retail on the ground floor, topped with several stories of apartments one wouldn’t want to be able to afford. The words THE JOSH have been appended to the canopy above the main entrance in a passionless font.
We spent the summer certain that the caution tape–yellow panels on The Josh’s south side were insulation, to be eventually supplanted by an actual facade. Alas, in its finished form The Josh really is yellow, and also burgundy, gray, and brown. Each of these colors corresponds to a different material — plastic, concrete, rolled-on brick, an obscure wood-like substance — and the overall effect is of an overactive spreadsheet. Trims, surfaces, and patterns compete for attention with shifty black windows, but there’s nothing bedazzling or flamboyant about all this chaos. Somehow the building’s plane feels flatter than it is, despite the profusion of arbitrary outcroppings and angular balconies. The lineage isn’t Bauhaus so much as a sketch of the Bauhaus that’s been xeroxed half a dozen times.
The Josh is aging rapidly for a 5-month-old. There are gaps between the panels, which have a taped-on look to them, and cracks in the concrete. Rust has bloomed on surfaces one would typically imagine to be rustproof. Every time it rains, The Josh gets conspicuously . . . wet. Attempts have been made to classify structures like this one and the ethos behind their appearance: SimCityist, McCentury Modern, fast-casual architecture. We prefer cardboard modernism, in part because The Josh looks like it might turn to pulp at the first sign of a hundred-year flood.
Writing a century ago, H. L. Mencken bemoaned America’s “libido for the ugly.” There exists, he wrote, a “love of ugliness for its own sake, the lust to make the world intolerable. Its habitat is the United States.” However mystical and psychosexual his era’s intolerability might have felt in its origins, by the 1940s the explanations were more prosaic. With the wartime rationing of steel and sudden dearth of skilled labor, concrete structural systems quickly gained appeal — as did buildings that could be made piecemeal in a factory, put on a trailer, and nailed together anywhere in the country. And as the postwar baby boom took hold, such buildings were soon in high demand, fulfilling modernism’s wildest dreams of standardization with little of the glamour. A few Levittowns later, the promise of salvation-by-mass-production would come to seem elusive: new manufacturing techniques were transforming buildings and builders alike. In Prisoners of the American Dream, Mike Davis describes how, in the 1970s, “the adoption of new building technologies involving extensive use of prefabricated structures, like precast concrete, eroded the boundaries of traditional skills and introduced a larger semi-skilled component into the labor force.” If it’s cheaper to assemble concrete panels than to hire bricklayers, cityscapes will eventually contain fewer bricks.
Justin Halpern has more reason to love Twitter than most of us. The 28-year-old had trouble finding a writing job in Hollywood, so he moved back in with his parents in 2009 and started @shitmydadsays, where he posted all the shit his dad said. The account quickly went viral. By 2010, he had a book and a TV series based on it. He’s now an executive producer on Harley Quinn and Abbott Elementary.
“Twitter basically jump-started my entire career,” Halpern told Recode.
But @shitmydadsays has been dormant for years, and Halpern doesn’t tweet much anymore from his personal account. He says he went from posting daily to posting weekly, and now mostly uses it to keep up with the news.
“I realized I felt much better the less I used the site,” he said.
Halpern’s move away from Twitter predates Elon Musk’s takeover by several years, and it’s indicative of some of the problems Twitter was facing before Musk came along. Twitter has always had an outsized impact as a major driver of news, thanks to who uses it and how. But the number of people who use it is a fraction of the number who use competing platforms. Like Halpern, some of Twitter’s power users had significantly reduced or even stopped their use of the platform, and user growth overall has slowed.
Now that Musk owns the site and is preparing to launch his “Twitter 2.0,” it almost certainly won’t be the same. So a lot of people will be looking for a Twitter replacement. Some possibilities have already emerged, like Mastodon and Post.
We may not need another Twitter, or even the one we have now. But it did show us what a digital town square could be, even if Twitter itself never actually was one. Whatever replaces that, if anything, may not look much like Twitter. It may not even be text-based.
GREENVILLE, N.C. — On that terrible day nine years ago, Ellie Laughinghouse Crout was running late. The memorial service for her half sister was starting in an hour and she still hadn’t left home.
The 5-week-old child, Lacy, just seven pounds, had been found facedown in her crib two days earlier, devastating her half siblings, who had been so eager to welcome the baby.
And now Ellie’s phone was ringing. Annoyed, she answered and snapped at her mother, whose tone signaled more calamity. Ellie’s youngest brother, Jackson, distraught over the baby’s death, had gone out with friends the night before. When his mother tried to rouse him from bed that morning, he was gray, with almost no pulse. Tests would show he had four kinds of anti-anxiety medication in his blood. Five days later, just before his 19th birthday, he was taken off life support.
“I hate the saying, ‘Everything happens for a reason,’ or ‘It’ll get easier,’ because it doesn’t,” Ellie said. “It doesn’t get easier. Grief and loss never do. I think they just get different. You learn where some days you’re an emotional wreck and others, you don’t think about them as much. Or you think about them with a smile.”
Oct. 2, 2013, was not the day the drug epidemic reached Greenville. But beginning with Jackson’s death that day, a group of at least 16 young men and women who grew up together in this small eastern North Carolina city would succumb to overdoses of opioids and other drugs over nine years. More of their peers became addicted or overdosed but managed to survive.
“It was almost like a generation that went to war didn’t come back,” said J.D. Fletcher, whose son died in 2019.
In a nation that suffered more than 107,000 drug overdose deaths in 2021 alone, there are many Greenvilles — places where the powerful opioid fentanyl and other drugs have produced clusters of overdose deaths, or picked off victims one at a time. Here, drugs worked their way inexorably through a group of friends, year after year, for nearly a decade. In one family, loss piled upon tragic loss until almost no one was left.
During one of my more desperate phases as a young novelist, I began to question whether I should actually be writing my own stories. I was deeply uninterested at the time in anything that resembled a plot, but I acknowledged that if I wanted to attain any sort of literary success I would need to tell a story that had a distinct beginning, middle, and end.
This was about twenty years ago. My graduate-school friends and I were obsessed with a Web site called the Postmodernism Generator that spat out nonsensical but hilarious critical-theory papers. The site, which was created by a coder named Andrew C. Bulhak, who built it on his own Dada Engine, is still up today, and generates fake scholarly writing that reads like, “In the works of Tarantino, a predominant concept is the distinction between creation and destruction. Marx’s essay on capitalist socialism holds that society has objective value. But an abundance of appropriations concerning not theory, but subtheory exist.”
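The trick behind generators like this one is conceptually simple: a grammar of templates, expanded recursively at random until only words remain. Here is a minimal sketch in Python — the toy grammar below is invented for illustration and is far cruder than the Dada Engine’s actual rule files:

```python
import random

# A tiny, invented grammar in the spirit of the Postmodernism
# Generator. Uppercase keys are symbols to expand; everything
# else is emitted verbatim.
GRAMMAR = {
    "SENTENCE": [
        ["In the works of", "AUTHOR", ", a predominant concept is", "CONCEPT", "."],
    ],
    "AUTHOR": [["Tarantino"], ["Proust"]],
    "CONCEPT": [
        ["the distinction between creation and destruction"],
        ["not theory, but subtheory"],
    ],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a string."""
    if symbol not in GRAMMAR:
        return symbol  # a terminal: plain text, emitted as-is
    production = rng.choice(GRAMMAR[symbol])
    return " ".join(expand(part, rng) for part in production)

print(expand("SENTENCE", random.Random(0)))
```

Each call walks the grammar top-down, so adding a few dozen rules multiplies the output space combinatorially — which is why such a small program can feel inexhaustible.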
I figured that, if a bit of code could spit out an academic paper, it could probably just tell me what to write about. Most plots, I knew, followed very simple rules, and, because I couldn’t quite figure out how to string one of these out, I began talking to some computer-science graduate students about the possibilities of creating a bot that could just tell me who should go where, and what should happen to them. What I imagined was a simple text box in which I could type in a beginning—something like “A man and his dog arrive in a small town in Indiana”—and then the bot would just tell me that, on page 3, after six paragraphs of my beautiful descriptions and taut prose, the dog would find a mysterious set of bones in the back yard of their boarding house.
After a couple of months of digging around, it became clear to me that I wasn’t going to find much backing for my plan. One of the computer-science students, as I recall, accused me of trying to strip everything good, original, and beautiful from the creative process. Bots, he argued, could imitate basic writing and would improve at that task, but A.I. could never tell you the way Karenin smiled, nor would it ever fixate on all the place names that filled Proust’s childhood. I understood why he felt that way, and agreed to a certain extent. But I didn’t see why a bot couldn’t just fill in all the parts where someone walks from point A to point B.
ChatGPT is the latest project released by OpenAI, a somewhat mysterious San Francisco company that is also responsible for dall-e, a program that generates art. Both have been viral sensations on social media, prompting people to share their creations and then immediately catastrophize about what A.I. technology means for the future. The chat version runs on GPT-3, short for “Generative Pre-trained Transformer,” a pattern-recognition artificial intelligence that “learns” from huge caches of Internet text to generate believable responses to queries. The interface is refreshingly simple: you write questions and statements to ChatGPT, and it spits back remarkably coherent, if occasionally hilariously wrong, answers.
The concepts behind GPT-3 have been around for more than half a century now. They derive from language models that assign probabilities to sequences of words. If, for example, the word “parsimonious” appears within a sentence, a language model will assess that word, and all the words before it, and try to guess what should come next. Patterns require input: if your corpus of words only extends to, say, Jane Austen, then everything your model produces will sound like a nineteenth-century British novel.
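The mechanics can be seen in miniature. The toy model below, sketched in Python, counts which word follows which in a single famous Austen sentence and “predicts” by picking the most frequent successor. GPT-3 conditions on far more context and vastly more data, but the underlying guessing game is the same:

```python
from collections import Counter, defaultdict

# The opening line of Pride and Prejudice serves as a comically
# small training corpus.
corpus = (
    "it is a truth universally acknowledged that a single man in "
    "possession of a good fortune must be in want of a wife"
).split()

# Count how often each word follows each preceding word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Guess the most frequent successor of `word`, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None
```

Ask this model what follows “of” and it answers “a,” because that is the only pattern it has ever seen — which is exactly why a model trained only on Austen can sound like nothing but Austen.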
What OpenAI did was feed the Internet through a language model; this then opened up the possibilities for imitation. “If you scale a language model to the Internet, you can regurgitate really interesting patterns,” Ben Recht, a friend of mine who is a professor of computer science at the University of California, Berkeley, said. “The Internet itself is just patterns—so much of what we do online is just knee-jerk, meme reactions to everything, which means that most of the responses to things on the Internet are fairly predictable. So this is just showing that.”
GPT-3 itself has been around since 2020, and a variety of people have already put it through its paces. (The recent hype around it comes from the new chat version.) Back in 2020, the Guardian had the program write an article about itself with a moderate, but not entirely disqualifying, series of prompts from a human and some reasonable, light editing. Gwern Branwen, a writer and researcher, asked GPT-3 to write everything from poems to dad jokes. In one particularly illustrative example, Branwen fed the machine the opening of Shel Silverstein’s “Where the Sidewalk Ends” and asked it to fill in the rest.