Dear readers,
We’re gradually migrating this curation feature to our Weekly Newsletter. If you enjoy these summaries, we think you’ll find our Substack equally worthwhile.
On Substack, we take a closer look at the themes from these curated articles, examine how language shapes reality, and explore societal trends. In addition to the curated content, we dig into many of the topics we cover at TIG in an expanded format—from shopping and travel tips to music, fashion, and lifestyle.
If you’ve been following TIG, this is a chance to support our work, which we greatly appreciate.
Thank you,
the TIG Team
In the 1980s, the futurologist Hans Moravec warned that, paradoxically, it would be the actions that are easiest for humans (such as holding a piece of sushi with two chopsticks) that would pose the greatest difficulties for robots and computers. On the other hand, very complex tasks such as finding errors in medical prescriptions, distinguishing when a space telescope has detected something interesting, or choosing Christmas presents for the whole family have ended up being enormously simple for algorithms.
“Artificial intelligence already does that,” we argue more and more often. But according to thousands of scientists and philosophers, the label is not entirely appropriate. “Both words (artificial and intelligent) are controversial and highly suspect. I prefer the term machine learning, as it is easier to see what we are talking about: systems that allow machines to learn patterns or correlations and apply them to new situations and data,” explains Justin Joque, who teaches at the University of Michigan and is the author of Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism.
“It is reasonable that there is some confusion among the general public, because these are concepts that are difficult to understand without a mathematical background. There is a lot of mysticism surrounding AI, as there is in any other scientific field: studies on cancer, astronomical observatories when we talk about UFOs… These are interesting issues and they are widely publicized, so there will always be those who stir up a morbid curiosity around them,” explains Celsa Pardo-Araujo, a mathematician at the Institute of Space Sciences whose research focuses on the application of machine learning to astrophysics. “What is also clear is that Google, DeepMind and Microsoft are creating algorithms that solve problems that could not be solved before,” she adds.
But here comes the part that affects us: in addition to solving certain problems and being very useful in scientific research, algorithms are also generating content and, above all, ordering and ranking everything that we ourselves have created. And this includes everything from the vast body of universal culture to the last photo we took at breakfast. What criteria do they use? What are these creations like? That is the most worrying part because, as Kyle Chayka shows in his 2024 book Filterworld: How Algorithms Flattened Culture, the map (that is, the algorithm that rewards some content over others) is already affecting the territory (that is, the form of the content itself and the reality in which we move, especially in cities).
Read the rest of this article at: El País
Maybe it’s because Paul Mescal in Gladiator II is younger than Russell Crowe was in the original (Mescal was 27 and Crowe was 35 during filming). Or maybe it’s because Mescal has a wry, sunny energy, and Crowe something more like the bright red character in Inside Out (what’s his name again? Oh yes, Anger)? It’s something, anyway, because the two films, both directed by Ridley Scott, give us two different versions of masculinity – not just in terms of the actors’ vibes, but how they deal with women, geopolitics and battle. And also how they see humour and sexuality. So what has changed for men in the 24 years between each release?
It’s impossible to overstate how much impact Crowe had when the first film was released in 2000. We used to talk about it at the level of the sentence. I lost an awesome amount of time arguing over a line in the opening sequence, where Crowe says, “At my signal, unleash hell.” Did he mean let loose the demons of soldiery? Or was his dog called Hell, and he wanted him off the lead so he could run about in a helpful way?
It was Sex and the City that got to the crux of the matter the following year, when Samantha, Miranda, Charlotte and Carrie are talking about who they fantasise about. “Russell Crowe.” “What did women do before Russell Crowe?” “George Clooney.” “Clooney’s like a Chanel suit.” “He’ll always be in style.” Crowe was not then, as the character Maximus Decimus Meridius, and never has been since, anything like a Chanel suit.
Hollywood’s idea of the male ideal swings, pendulum-style, from the urbane to the pre-verbal, from man-about-town to man-about-cave, from Cary Grant to Marlon Brando, George Clooney to Russell Crowe. Crowe’s pugnacity manifested on and off screen. He argued with Scott; with one of the producers, Branko Lustig; with the writers. The line of dialogue people would quote at each other in pubs, for years – “My name is Maximus Decimus Meridius … father to a murdered son, husband to a murdered wife. And I will have my vengeance, in this life or the next” – Crowe initially refused to utter, because he said it was terrible. Filming in Morocco, he was asked to leave the military-owned mansion he was staying in. David Franzoni, the producer who first pitched the idea of a gladiator film to Steven Spielberg, recalled later that he was told by a man in a military jeep that Crowe had “‘violated every tenet of the Qur’an’. I had no fucking idea what he was talking about! Drinking? Carousing? Cursing? I don’t know!” To be fair, the film didn’t create this monster. Crowe’s history included allegations of him punching a co-star in Blood Brothers in 1988 and, in 1998, claims about an altercation in a Sydney nightclub.
Read the rest of this article at: The Guardian
An alarming phenomenon has sprung up over the past few years: Many students are arriving at college unprepared to read entire books. That’s a broad statement to make, but I spoke with 33 professors at some of the country’s top universities, and over and over, they told me the same story. As I noted in my recent article on the topic, a Columbia professor said his students are overwhelmed at the thought of reading multiple books a semester; a professor at the University of Virginia told me that his students shut down when they’re confronted with ideas they don’t understand. Complaints about young people’s literacy stretch back centuries, but in the past decade, something seems to have noticeably shifted. Most of the professors I spoke with said they’ve seen a generational change in how their students engage with literature.
Why is this happening? The allure of smartphones and social media came up, and it appears that many middle and high schools are teaching fewer full books. (One student arrived at Columbia having read only poems, excerpts, and news articles in school.) But one possible cause that I nodded to in my article is a change in values, not ability. The problem does not appear to be that “kids these days” are incurious or uninterested in reading. Instead, young people might be responding to a cultural message: Books just aren’t that important.
Read the rest of this article at: The Atlantic
The world of work is changing fast. The rapid rise of artificial intelligence, from disembodied chatbots to humanoid robots, is driving a global wave of panic about job insecurity. Tech billionaires say that the nine-to-five will be extinct by 2034. UK think tanks believe almost 8 million jobs could be automated away by AI in coming years, while the investment bank Goldman Sachs has predicted that 300 million full-time roles could be replaced worldwide. The question is: do we even care? According to a recent poll, three-quarters of Gen Z don’t want to work a traditional job in their lives, one in ten never intend to enter the workforce, and almost half would rather be unemployed than unhappy. Tradwives dream about trading professional success for domestic bliss, while influencers pivot away from hustle culture, toward anti-work memes and calls to “seize the means of relaxation”.
Is it a coincidence that we’ve fallen out of love with work at the exact moment it seems destined to disappear from our lives, or is it a natural reaction to the coming “jobs apocalypse”? And, if we really can delegate the vast majority of our work to intelligent, self-supervising machines, then what comes next for humanity?
Before we try to answer any of these questions, let’s address an elephant in the room: experts (and billionaires like Kim Kardashian) have been claiming that “nobody wants to work these days” for a long time. Supposedly, people stopped wanting to work during the ‘Great Resignation’ of 2021. Or in 2014. Or 2006, or 1999. The 50s, 30s, 1922. As pointed out in a widely-shared X thread, critics have been complaining about a rising anti-work movement since at least 1894. 130 years later, though, these claims do have an added urgency. Thanks to rapid technological change, it seems increasingly likely that we actually could get along with a massively reduced workforce. And much of our work already feels expendable. Discourse about “fake email jobs” – what the late anthropologist David Graeber might have called “bullshit jobs” – is all over TikTok. University professors say they’ve been reduced to “human plagiarism detectors” by AI. And the coronavirus pandemic only stoked the suspicion that many of our jobs have been rendered basically meaningless, besides keeping us busy and paying us a wage.
Read the rest of this article at: Dazed
For anyone who teaches at a business school, the blog post was bad news. For Juliana Schroeder, it was catastrophic. She saw the allegations when they first went up, on a Saturday in early summer 2023. Schroeder teaches management and psychology at UC Berkeley’s Haas School of Business. One of her colleagues—a star professor at Harvard Business School named Francesca Gino—had just been accused of academic fraud. The authors of the blog post, a small team of business-school researchers, had found discrepancies in four of Gino’s published papers, and they suggested that the scandal was much larger. “We believe that many more Gino-authored papers contain fake data,” the blog post said. “Perhaps dozens.”
The story was soon picked up by the mainstream press. Reporters reveled in the irony that Gino, who had made her name as an expert on the psychology of breaking rules, may herself have broken them. (“Harvard Scholar Who Studies Honesty Is Accused of Fabricating Findings,” a New York Times headline read.) Harvard Business School had quietly placed Gino on administrative leave just before the blog post appeared. The school had conducted its own investigation; its nearly 1,300-page internal report, which was made public only in the course of related legal proceedings, concluded that Gino “committed research misconduct intentionally, knowingly, or recklessly” in the four papers. (Gino has steadfastly denied any wrongdoing.)
Schroeder’s interest in the scandal was more personal. Gino was one of her most consistent and important research partners. Their names appear together on seven peer-reviewed articles, as well as 26 conference talks. If Gino were indeed a serial cheat, then all of that shared work—and a large swath of Schroeder’s CV—was now at risk. When a senior academic is accused of fraud, the reputations of her honest, less established colleagues may get dragged down too. “Just think how horrible it is,” Katy Milkman, another of Gino’s research partners and a tenured professor at the University of Pennsylvania’s Wharton School, told me. “It could ruin your life.”
To head that off, Schroeder began her own audit of all the research papers that she’d ever done with Gino, seeking out raw data from each experiment and attempting to rerun the analyses. As that summer progressed, her efforts grew more ambitious. With the help of several colleagues, Schroeder pursued a plan to verify not just her own work with Gino, but a major portion of Gino’s scientific résumé. The group started reaching out to every other researcher who had put their name on one of Gino’s 138 co-authored studies. The Many Co-Authors Project, as the self-audit would be called, aimed to flag any additional work that might be tainted by allegations of misconduct and, more important, to absolve the rest—and Gino’s colleagues, by extension—of the wariness that now afflicted the entire field.
Read the rest of this article at: The Atlantic