If you’ve been enjoying these curated article summaries that dive into cultural, creative, and technological currents, you may find the discussions and analyses on our Substack worthwhile as well. There, we explore themes and ideas that often intersect with the subjects of the articles I come across during the curation process.
While this curation simply aims to surface compelling pieces, our Substack writings delve deeper into topics that have piqued our curiosity over time. From examining how language shapes our reality to probing the philosophical undercurrents of contemporary society, the Substack serves as an outlet for our perspectives on the notable trends reflected in these curated readings.
So if any of the articles here have stoked your interest, I invite you to carry that engagement over to our Substack, where we discuss related matters in more depth. Consider it an extension of the curation – a space to spend more time with the fascinating ideas these pieces have surfaced.
Many teens and adults use the word addictive when describing social-media sites, as if the apps themselves are laced with nicotine. The U.S. surgeon general, Vivek Murthy, wants to drive that point home as glaringly as possible: In an op-ed published by The New York Times yesterday, he writes that the country should start labeling such sites as if they’re cigarettes.
Murthy proposes putting an official surgeon general’s warning—the same type found on tobacco and alcohol products—on social-media websites to “regularly remind parents and adolescents that social media has not been proved safe.” Such a warning would require formal congressional approval. To make his case, Murthy cites a 2019 study that found that adolescents who spend more than three hours a day on social media may be at higher risk for certain mental-health problems; he also points to research in which teens reported that social media made them feel worse about their bodies. “The moral test of any society is how well it protects its children,” he writes. “Why is it that we have failed to respond to the harms of social media when they are no less urgent or widespread than those posed by unsafe cars, planes or food?”
It’s a radical idea, and one with a real basis in science: There is strong evidence that tobacco warnings work, David Hammond, a professor in the school of public-health sciences at Canada’s University of Waterloo, told me. Although no intervention is perfect, such labels reduce tobacco use by reaching the right audience at the moment of consumption, Hammond said, and they are particularly effective at deterring young people. But social media is not tobacco. Some platforms have no doubt caused real harm to many children, but research into the effects of social media on young people has been a mixed bag; even the studies cited by Murthy are not as straightforward as presented in the op-ed. A warning label on a pack of cigarettes is attention-grabbing and succinct: No one wants cancer or heart disease. Social media does not boil down as easily.
What would a social-media warning look like? Murthy doesn’t go into further detail in his article, and nothing would be decided until Congress authorized the label. (It’s unclear how likely it is to pass, but there has been bipartisan interest in the topic, broadly speaking; earlier this year, at a congressional hearing on kid safety on the internet, members from both parties expressed frustration with Big Tech CEOs.) It could be a persistent pop-up that a user has to click out of each time they open an app. Or it could be something that shows up only once, in the footer, when a person creates an account. Or it could be a banner that never goes away. To be effective, Hammond told me, the message must be “salient”—it should be noticeable and presented frequently.
Read the rest of this article at: The Atlantic
Elevated “deaths of despair” and declining birth rates in the West must be due to an array of factors, hard to tease apart. My hunch is that one of them is what the sociologist Richard Sennett called “the specter of uselessness.” He meant feeling redundant at work. But there is a deeper, existential version of this that may arise when the world feels already occupied, so there is no place for you to grow into and make your own.
In the normal course of human society, you are born into a culture that has prepared the way for you. It initiates you into its language and tells a story of where you came from. It is saturated with meaning due to a chain of begettings that reaches back in time, each generation of which started and grew through acts of love: at conception, and in the ongoing work of teaching, transmission and care. The world is welcoming, in other words. It was built by your ancestors, and they imagined you long before you arrived.[1] They wondered what sort of work you might do, before you knew there is such a thing as work. Your parents may have recognized the echo of a sibling or a parent in your face as you sought the nipple. They smiled at you.
[1] The “owned space” spoken of by our Nietzscheans is an inherited space, not a conquest of individual will.
This sense of a world handed down in love is interrupted when the basic contours and possibilities of life appear to be ordered by impersonal forces.
I was at a small dinner a few weeks ago in Grand Rapids, Michigan. Seated next to me was a man who related that his daughter had just gotten married. As the day approached, he had wanted to say some words at the reception, as is fitting for the father of the bride. It can be hard to come up with the right words for such an occasion, and he wanted to make a good showing. He said he gave a few prompts to ChatGPT, facts about her life, and sure enough it came back with a pretty good wedding toast. Maybe better than what he would have written. But in the end, he didn’t use it, and composed his own. This strikes me as telling, and the intuition that stopped him from deferring to AI is worth bringing to the surface.
To use the machine-generated speech would have been to absent himself from this significant moment in the life of his daughter, and in his own life. It would have been to not show up for her wedding, in some sense. I am reminded of a passage in Tocqueville where he noticed that America seemed to be on a trajectory that would have it erecting “an immense tutelary power” that wants only what is best for us, and is keen to “save [us] the trouble of living.”
In Aristotelian language, human “being” is an ergon, an activity or work that is distinctive of the peculiar sort of animals that we are, and in this the use of language is key. There have been rare cases of anatomically normal children who (whether by some monstrous crime or by circumstance) matured without human society, with no initiation into a language. They grew into feral creatures, resembling a human in form only.[2]
[2] “Just before dawn on January 9, 1800, a mysterious creature emerged from a forest in southern France. Although he was human in form and walked upright, his habits were those of a young male animal. He was wearing only a tattered shirt, but did not seem troubled by the cold. Showing no modesty about his nakedness, he ate greedily, seizing roasted potatoes from a hot fire. He seemed to have no language skills, only grunting occasionally.” From the jacket of The Forbidden Experiment by Roger Shattuck.
Read the rest of this article at: The Hedgehog Review
Looking for a fun way to commune with the spirits? If you’re thinking about grabbing a Ouija board for your next conversation with the other side, you might want to think again.
Despite their long history as hoax spiritualist devices turned hit toys turned tools of the devil, Ouija boards won’t actually put you in contact with demons or ghosts. Any scary firsthand reports you might hear or read of real-life Ouija board horror stories are exaggerations, false claims, or a misunderstanding of how Ouija boards actually work.
Not only are Ouija boards sadly powered by all-too-human methods, but there’s also a surprising link between this practice and an ongoing fake technique known as “facilitated communication.” Sometimes shortened to “FC” or given other names like “supported typing” to disguise its fraudulent history, facilitated communication is a dangerous and manipulative false medical practice that’s led to multiple lawsuits and even instances of sexual assault.
One of these cases, as detailed in the Netflix documentary series Tell Them You Love Me, involved the years-long victimization of a patient with cerebral palsy by a woman who falsely claimed to be interpreting his movements — using the same fake methods we use when we move a Ouija board pointer.
This all might be disappointing news if you’re hosting a Halloween sleepover, but it might also leave you asking, “How do Ouija boards work?” The answer is surprisingly simple.
Read the rest of this article at: Vox
Emerging science suggests that the effects of trauma—from war and genocide to abuse and environmental factors—could be genetically passed down from one generation to another.
Epigenetics is the study of how genes are turned off and on. This molecular process regulates gene expression, boosting the activity of some genes and quieting others by adding chemical tags—called methyl groups—to genes or removing them. Multiple research studies have suggested that this may be a mechanism through which a parent’s trauma could be imprinted in the genes of offspring, and that the epigenetic effects could be multi-generational.
The field “touches on all the questions that humanity has asked since it was walking on this planet,” says Moshe Szyf, a professor of pharmacology at McGill University. “How much of our destiny is predetermined? How much of it do we control?”
For some people, the concept that we can carry a legacy of trauma makes sense because it validates their sense that they are more than the sum of their experiences.
“If you feel you have been affected by a very traumatic, difficult, life-altering experience that your mother or father has had, there’s something to that,” says Rachel Yehuda, professor of psychiatry and neuroscience of trauma at Mount Sinai in New York. Her research points to a small epigenetic “signal” that a life-altering experience “doesn’t just die with you,” she says. “It has a life of its own afterwards in some form.”
To understand how emotional trauma can transcend generations, consider the distinction between the genome—the body’s full complement of DNA—and the epigenome. Isabelle Mansuy, professor in neuroepigenetics at the University of Zürich, likens it to the difference between hardware and software. You need the genome “hardware” to function. But it is epigenetic “software” that instructs how genes in the genome should behave.
“All the time, in every cell, every moment, the epigenome is changing,” Mansuy says. It responds to all sorts of environmental factors, from chemical exposures to nutritional deficiencies. The epigenome determines which genes will be activated at a given time and which will remain silent.
Yehuda uncovered an epigenetic mark in Holocaust survivors and their offspring, a group at greater risk for mental health challenges. She assessed 32 survivors and their adult children in 2015, examining the FKBP5 gene—which has been linked to anxiety and other mental health concerns.
By extracting DNA from blood samples, the team identified epigenetic changes in the same region of the gene in the survivors and their children; but those alterations were not present in the DNA of a small group of Jewish parents and their offspring who lived outside of Europe and didn’t experience the Holocaust.
In a subsequent study published in 2020, Yehuda examined a larger cohort of subjects, looking at variables such as the sex and age of the parent during the Holocaust. She examined DNA methylation, one of the methods the epigenome uses to activate or quiet genes. DNA methylation generally adds a chemical mark to DNA; demethylation removes it.
Read the rest of this article at: National Geographic
When did you encounter your first paradigm shift? Not the phenomenon itself, but the term? Perhaps at an airport bookstore, where bestselling authors of books with titles like Change Your Paradigm, Change Your Life and The 15 Commitments of Conscious Leadership: A New Paradigm for Sustainable Success use it more or less as a synonym for “game changer.” Or maybe on the taps of Paradigm Shift Brewery, a craft operation in Massillon, Ohio. The goods and services you can purchase with “paradigm” in their name include coffee and crypto, sneakers, and health care management. The corporate website for perhaps the best-known Paradigm, a high-end Canadian speaker company, explains that the founders “decided to, eh-hem, change the prevailing industry paradigm.”
The language of paradigms and paradigm shifts is ubiquitous except among the people most familiar with its source: historians and philosophers of science. Once upon a time—let’s say the late 1960s—a reference to “paradigm shifts” primarily signaled knowledge of Thomas Kuhn’s historicist approach to the philosophy of science. Kuhn’s 1962 classic, The Structure of Scientific Revolutions, transformed our understanding of scientific change and has become a foundational text in the history, philosophy, and social studies of science. It is nonetheless unusual these days for anyone who studies science professionally to invoke the term “paradigm shift.” The concept has become completely unmoored from the term.
The Structure of Scientific Revolutions, in other words, is one of those books that everybody knows but doesn’t read, or reads once and shelves. On rereading my copy, neglected since a first-year graduate seminar in the history of science over 25 years ago, I was struck by Kuhn’s insistence on the power of historical research to puncture idealized claims of scientific progress. Paradigms and normal science? Sure. But the truly radical idea here is that outsiders—in this case, historians—can offer better insight into the inner workings of a profession than the practitioners themselves.
What, exactly, is a paradigm shift? In Structure, Kuhn defines a scientific paradigm through its relation to what he calls “normal science.” A mature scientific community, one that is relatively secure in its methods, intellectual assumptions, and choice of problems, is operating in a period of “normal science.” Collectively, those rules and standards for scientific research constitute “shared paradigms.” These shared paradigms lay a path for scientific communities to work efficiently, allowing individual scientists to focus on the “mop-up work” of collecting data and solving puzzles suggested by the operating paradigm.
Read the rest of this article at: The New Republic