AI is often hailed (by me, no less!) as a powerful tool for augmenting human intelligence and creativity. But what if relying on AI actually makes us less capable of formulating revolutionary ideas and innovations over time? That’s the alarming argument put forward by a new research paper that went viral on Reddit and Hacker News this week.
The paper’s central claim is that our growing use of AI systems like language models and knowledge bases could lead to a civilization-level threat the author dubs “knowledge collapse.” As we come to depend on AIs trained on mainstream, conventional information sources, we risk losing touch with the wild, unorthodox ideas on the fringes of knowledge — the same ideas that often fuel transformative discoveries and inventions.
You can find my full analysis of the paper, some counterpoint questions, and the technical breakdown below. But first, let’s dig into what “knowledge collapse” really means and why it matters so much…
The paper, authored by Andrew J. Peterson at the University of Poitiers, introduces the concept of knowledge collapse as the “progressive narrowing over time of the set of information available to humans, along with a concomitant narrowing in the perceived availability and utility of different sets of information.”
In plain terms, knowledge collapse is what happens when AI makes conventional knowledge and common ideas so easy to access that unconventional, esoteric, “long-tail” knowledge gets neglected and forgotten. It’s not about making us dumber as individuals, but rather about eroding the healthy diversity of human thought.
Peterson argues this is an existential threat to innovation because interacting with a wide variety of ideas, especially non-mainstream ones, is how we make novel conceptual connections and mental leaps. The most impactful breakthroughs in science, technology, art, and culture often come from synthesizing wildly different concepts or applying frameworks from one domain to another. But if AI causes us to draw from an ever-narrower slice of “normal” knowledge, those creative sparks become increasingly unlikely. Our collective intelligence gets trapped in a conformist echo chamber and stagnates. In the long run, the scope of human imagination shrinks to fit the limited information diet optimized by our AI tools.
To illustrate this, imagine if all book suggestions came from an AI trained only on the most popular mainstream titles. Fringe genres and niche subject matter would disappear over time, and the literary world would be stuck in a cycle of derivative, repetitive works. No more revolutionary ideas from mashing up wildly different influences.
Or picture a scenario where scientists and inventors get all their knowledge from an AI trained on a corpus of existing research. The most conventional, well-trodden lines of inquiry get reinforced (being highly represented in the training data), while the unorthodox approaches that lead to real paradigm shifts wither away. Entire frontiers of discovery go unexplored because our AI blinders cause us to ignore them.
That’s the insidious risk Peterson sees in outsourcing more and more of our information supply and knowledge curation to AI systems that prize mainstream data. The very diversity of thought required for humanity to continue making big creative leaps gradually erodes away, swallowed by the gravitational pull of the conventional and the quantitatively popular.
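The narrowing dynamic Peterson describes is easy to see in a toy simulation (my own illustration, not a model from the paper): represent ideas as points drawn from a distribution, and at each "generation" let an AI retrain on only the most conventional, central slice of them. The spread of available knowledge shrinks every round.

```python
import random
import statistics

def simulate_collapse(generations=5, pop=10_000, keep_frac=0.8):
    """Toy model of knowledge collapse: each generation, an AI is
    'retrained' on only the most central keep_frac of ideas, and the
    next generation samples from that narrowed core."""
    random.seed(0)  # reproducible illustration
    ideas = [random.gauss(0, 1) for _ in range(pop)]
    spreads = [statistics.stdev(ideas)]
    for _ in range(generations):
        center = statistics.median(ideas)
        # keep only the most conventional ideas (closest to the center),
        # discarding the long tail of unorthodox ones
        ideas.sort(key=lambda x: abs(x - center))
        core = ideas[: int(pop * keep_frac)]
        # the next generation resamples (with a little noise) from that core
        ideas = [random.gauss(random.choice(core), 0.05) for _ in range(pop)]
        spreads.append(statistics.stdev(ideas))
    return spreads

spreads = simulate_collapse()
print(spreads)  # the standard deviation falls generation after generation
```

Even with a generous `keep_frac` of 80 percent, the tails compound away: after a handful of generations the diversity of the idea pool is a fraction of what it started as, which is exactly the "progressive narrowing over time" the paper warns about.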
Read the rest of this article at: Hackernoon
Wabe didn’t expect to see his friends’ faces in the shadows. But it happened after just a few weeks on the job.
He had recently signed on with Sama, a San Francisco-based tech company with a major hub in Kenya’s capital. The middleman company was providing the bulk of Facebook’s content moderation services for Africa. Wabe, whose name we’ve changed to protect his safety, had previously taught science courses to university students in his native Ethiopia.
Now, the 27-year-old was reviewing hundreds of Facebook photos and videos each day to decide if they violated the company’s rules on issues ranging from hate speech to child exploitation. He would get between 60 and 70 seconds to make a determination, sifting through hundreds of pieces of content over an eight-hour shift.
One day in January 2022, the system flagged a video for him to review. He opened up a Facebook livestream of a macabre scene from the civil war in his home country. What he saw next was dozens of Ethiopians being “slaughtered like sheep,” he said.
Then Wabe took a closer look at their faces and gasped. “They were people I grew up with,” he said quietly. People he knew from home. “My friends.”
Wabe leapt from his chair and stared at the screen in disbelief. He felt the room close in around him. Panic rising, he asked his supervisor for a five-minute break. “You don’t get five minutes,” she snapped. He turned off his computer, walked off the floor, and beelined to a quiet area outside of the building, where he spent 20 minutes crying by himself.
Wabe had been building a life for himself in Kenya while, back home, a civil war raged, claiming the lives of an estimated 600,000 people from 2020 to 2022. Now he was seeing it play out live on the screen before him.
That video was only the beginning. Over the next year, the job brought him into contact with videos he still can’t shake: recordings of people being beheaded, burned alive, eaten.
Read the rest of this article at: Coda
On a late December night many years ago, I was riding around midtown cheerfully stuffed into the back seat of a taxi with two of my kids. One was around seven, the other around four. We passed the skaters and the Christmas tree at Rockefeller Center. We passed by the twinkling displays in the windows of Saks Fifth Avenue, and the clusters of people clutching shopping bags and peering in. There were Santas tolling bells for the Salvation Army, venders hawking blistered chestnuts, flocks of pedicab drivers, tree hustlers, carollers, the whole frenetic birth-of-Jesus, half-off-at-Macy’s phantasmagoria.
My kids gazed out the window. A long silence set in. Finally, the four-year-old turned to me and said, “Daddy, why is there so much Christmas, not so much Hanukkah?” As I went about drafting an explanation in my head, the seven-year-old answered with absolute assurance: “Hitler.”
I admit, I felt some alarm, because it’s hard to take on board that your very young children already have an awareness of their own difference, much less an awareness of tyrants and of what tyrants have done to peoples of difference. But I was also filled with pride: one’s seven-year-old had just landed a one-word joke with the aplomb of a dues-paying member of the Friars Club.
Read the rest of this article at: The New Yorker
At the bottom right of my computer screen, just out of my direct line of vision, lurks an animated scold: a cartoon giraffe named Rafi. He is the playful icon of an app called Posture Pal, which works in concert with a wearer’s AirPods to warn against slumping while sitting at a computer. So long as I keep my line of vision trained on this text, Rafi stays discreetly out of sight. The minute I rest my chin in my hand in concentration, however—let alone sneak a glance at the iPhone that lies tantalizingly close to my keyboard—a baleful Rafi pops up, eyes wide, mouth down-turned. Sit up straight!
Rafi is actually less intrusive than the animated animal featured in another posture-correction desktop app, Nekoze. This one employs a computer’s camera to determine whether the user is slouching or slumping. If she is, an icon of a cat’s face pops up on her menu bar, accompanied by a surprisingly realistic meow. It’s a peculiar choice for a posture admonition: surely a meow could make a user look down at her ankles for a creature that wants feeding or petting, rather than stiffen her spine, eyes front? Then again, nobody would voluntarily install an icon of an angry drill sergeant on a personal computer.
The association of animals with posture correction goes beyond an accident of digital cuteness. As Beth Linker explains in her book “Slouch: Posture Panic in Modern America” (Princeton), a long history of anxiety about the proximity between human and bestial nature has played out in this area of social science. Linker, a historian of medicine at the University of Pennsylvania, argues that at the onset of the twentieth century the United States became gripped by what she characterizes as a poor-posture epidemic: a widespread social contagion of slumping that could, it was feared, have deleterious effects not just upon individual health but also upon the body politic. Sitting up straight would help remedy all kinds of failings, physical and moral, and Linker traces the history of this concern: from the exchanges of nineteenth-century scientists, who first identified the possible ancestral causes of contemporary back pain, to the late-twentieth-century popularity of the Alexander Technique, Pilates, and hatha yoga. The epidemic’s expression may have evolved, but even today it has hardly abated: on Goop, the wellness emporium, you can buy a foam roller to combat sitting-induced constriction of the waist and a plastic dome on which to therapeutically rock your pelvis. Sultry TikTok-ers demonstrate how to strap oneself into a corset-like garment that pins back the shoulders, while buff YouTube influencers explain how to appear inches taller by unfurling a tech-bent spine.
Linker makes no claim, she says, about the “realness of the epidemic or the degree to which poor posture is debilitating.” She’s not saying that Rafi and the Nekoze cat are wrong to harry me, or that your lower back doesn’t hurt. Rather, she sees the “past and present worries concerning posture as part of an enduring concern about so-called ‘diseases of civilization’ ”—grounded in a mythology of human ancestry that posits the hunter-gatherer as an ideal from which we have fallen.
Read the rest of this article at: The New Yorker
It was not a river. It was scarcely a stream. The Ruisseau des Quartes, Marcourt, Belgium. An unlovely and unremarkable tributary of the Ourthe, itself a tributary of the mighty Meuse, which thunders from France through Belgium and the Netherlands and on to the chilly oblivion of the North Sea. It was barely 2 metres wide, boggy in places, just 5cm deep in others. The parents dropping off their children at the United World Colleges summer camp on 10 July 2021 hopped over it as they lugged bags to the dormitories.
Fourteen-year-old Benjamin Van Bunderen Robberechts was nervous on the drive down. He would have to take a Covid test on arrival and he worried it would be positive. Belgium was beginning to relax restrictions and Benjamin was desperate to socialise with other teenagers. But the test was negative; soon, Benjamin was dropping off his things in his dorm and meeting his other campmates. And there was Rosa.
Rosa Reichel was 15, from Denmark and Germany by way of New York, but her family lived in Brussels. Dyed red hair and black eyeliner and chunky silver necklaces. She tapped Benjamin on the shoulder and told him a dirty joke. Just like that, they were friends.
The girl Benjamin met that day laughed a lot: loudly, happily. If you gathered Rosa’s friends in a room and asked them to describe one thing about Rosa, without question it would be her laugh. If you gave them more words, her friends would say how much fun she was, but also how caring: how Rosa was always the person who noticed when people were feeling low and tried to cheer them up. They felt that she was someone you could rely on, someone who brightened any room she was in. She stood up for her friends and for causes she believed in. At first, she might have been a little shy around you, but when she opened up, she would share her humour, her values, herself.
Benjamin was a little overwhelmed by her. “She was the greatest person I ever met,” he says.
Read the rest of this article at: The Guardian