News 28.08.23: Five Essential Articles from Around the Web

It’s 2035, and artificial intelligence is everywhere. AI systems run hospitals, operate airlines, and battle each other in the courtroom. Productivity has spiked to unprecedented levels, and countless previously unimaginable businesses have scaled at blistering speed, generating immense advances in well-being. New products, cures, and innovations hit the market daily, as science and technology kick into overdrive. And yet the world is growing both more unpredictable and more fragile, as terrorists find new ways to menace societies with intelligent, evolving cyberweapons and white-collar workers lose their jobs en masse.

Just a year ago, that scenario would have seemed purely fictional; today, it seems nearly inevitable. Generative AI systems can already write more clearly and persuasively than most humans and can produce original images, art, and even computer code based on simple language prompts. And generative AI is only the tip of the iceberg. Its arrival marks a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies.

Like past technological waves, AI will pair extraordinary growth and opportunity with immense disruption and risk. But unlike previous waves, it will also initiate a seismic shift in the structure and balance of global power as it threatens the status of nation-states as the world’s primary geopolitical actors. Whether they admit it or not, AI’s creators are themselves geopolitical actors, and their sovereignty over AI further entrenches the emerging “technopolar” order—one in which technology companies wield the kind of power in their domains once reserved for nation-states. For the past decade, big technology firms have effectively become independent, sovereign actors in the digital realms they have created. AI accelerates this trend and extends it far beyond the digital world. The technology’s complexity and the speed of its advancement will make it almost impossible for governments to make relevant rules at a reasonable pace. If governments do not catch up soon, it is possible they never will.

Read the rest of this article at: Foreign Affairs

By the time I was twenty-one, I had made two short films and was dead set on making a feature. I had gone to a distinguished school in Munich, where I had few friends, and which I hated so passionately that I imagined setting it on fire. There is such a thing as academic intelligence, and I didn’t have it. Intelligence is always a bundle of qualities: logical thought, articulacy, originality, memory, musicality, sensitivity, speed of association, and so on. In my case, the bundle seemed to be differently composed. I remember asking a fellow-student to write a term paper for me, which he did quite easily. In jest, he asked me what I would do for him in return, and I promised that I would make him immortal. His name was Hauke Stroszek. I gave his last name to the main character in my first film, “Signs of Life.” I called another film “Stroszek.”

But some of my studies I found utterly absorbing. For a class on medieval history, I wrote a paper on the Privilegium maius. This was a flagrant forgery, from 1358 or 1359, conceived by Rudolf IV, a scion of the Habsburgs, who wanted to define his family’s territory and install them as one of the powers of Europe. He produced a set of five clumsy documents, in the guise of royal charters, with a supplement purportedly issued by Julius Caesar. Despite being clearly fraudulent, the documents were ultimately accepted by the Holy Roman Emperor, confirming the Habsburgs’ claim to Austria. It was an early instance of fake news, and it inspired in me an obsession with questions of factuality, reality, and truth. In life, we are confronted by facts. Art draws on their power, as they have a normative force, but making purely factual films has never interested me. Truth, like history and memory, is not a fixed star but a search, an approximation. In my paper, I declared, even if it was illogical, that the Privilegium was a true account.

What seemed to me a natural approach became a method. Because I knew it would be hopeless to make a feature right away, I accepted a scholarship to go to the United States. I applied to Duquesne University, in Pittsburgh, which had cameras and a film studio. I chose Pittsburgh because I had the sentimental notion that I wouldn’t be tied up with academic nonsense; I’d be in a city with real, down-to-earth people. Pittsburgh was the Steel City, and I had worked in a steel factory myself.

Read the rest of this article at: The New Yorker

Is this the real life? Is this just fantasy?

Those aren’t just lyrics from the Queen song “Bohemian Rhapsody.” They’re also the questions that the brain must constantly answer while processing streams of visual signals from the eyes and purely mental pictures bubbling out of the imagination. Brain scan studies have repeatedly found that seeing something and imagining it evoke highly similar patterns of neural activity. Yet for most of us, the subjective experiences they produce are very different.

“I can look outside my window right now, and if I want to, I can imagine a unicorn walking down the street,” said Thomas Naselaris, an associate professor at the University of Minnesota. The street would seem real and the unicorn would not. “It’s very clear to me,” he said. The knowledge that unicorns are mythical barely plays into that: A simple imaginary white horse would seem just as unreal.

So “why are we not constantly hallucinating?” asked Nadine Dijkstra, a postdoctoral fellow at University College London. A study she led, recently published in Nature Communications, provides an intriguing answer: The brain evaluates the images it is processing against a “reality threshold.” If the signal passes the threshold, the brain thinks it’s real; if it doesn’t, the brain thinks it’s imagined.

Such a system works well most of the time because imagined signals are typically weak. But if an imagined signal is strong enough to cross the threshold, the brain takes it for reality.
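The study’s actual model is more sophisticated, but the core idea lends itself to a toy illustration. Below is a minimal Python sketch of that thresholding logic; the cutoff value and the signal strengths are illustrative assumptions, not figures from the paper.

```python
import random

# Toy sketch of the "reality threshold" idea described above, not the
# study's actual model. Both perception and imagery yield a signal; any
# signal whose strength crosses a fixed threshold is judged to be real.
# All numbers here (threshold, means, spreads) are illustrative assumptions.

REALITY_THRESHOLD = 0.5  # hypothetical cutoff between "imagined" and "real"

def judged_real(signal_strength: float) -> bool:
    """The hypothetical check: is the signal vivid enough to count as real?"""
    return signal_strength > REALITY_THRESHOLD

random.seed(0)
perceived = [random.gauss(0.8, 0.15) for _ in range(5)]  # typically strong
imagined = [random.gauss(0.3, 0.15) for _ in range(5)]   # typically weak

for label, signals in (("perceived", perceived), ("imagined", imagined)):
    for strength in signals:
        print(f"{label}: strength={strength:.2f} -> judged real: {judged_real(strength)}")
```

On this toy model, the rare imagined signal vivid enough to cross the cutoff would be misjudged as real, which is exactly the failure mode the paragraph above describes.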

Read the rest of this article at: Wired

The worst kinds of duels are the ones where everybody loses. Thus far, the debate over generative AI can be split into two camps. There are those who believe that tech like GPT-4 and Stable Diffusion is overrated, empty, a “blurry JPEG of the web” rich in simulated insight and low on substance. And there are those who claim it could (possibly literally) take charge of the planet—an all-knowing expert ready to rearrange every aspect of our lives. Every computer-generated Balenciaga promo, each machine-mastered SAT, is another bullet whistling through the air, glittering with algorithmic gunpowder.

Both views presuppose the same future: a world where “artificial intelligence” is a relatively singular thing. Perhaps it will be an Oracle, shaping wise sausages out of the lips and assholes of human experience. More likely we’ll have constructed a mechanical shill, a mindless parrot, or a “firehose of discourse,” as Andrew Dean put it in the Sydney Review of Books, spraying “university mission statements until the end of time.” We know some things for sure: the AI will be controlled by large companies. It will rely on closed datasets, dubious labor practices, and grievous energy consumption. And these hallowed, expurgated, optimized creations will be homogeneous and interchangeable, like microwaves or search engines or limping late-capitalist democracies.

The people making AI say they want to make something dependable. Critics claim the coders won’t succeed: that you can’t make a dumb script smart, a hollow thing whole. But the debate over whether AI can become a trustworthy or untrustworthy authority elides the question of whether authority is what we really want. Authority is what allows Google or Instagram to dictate the ways we see the internet; it’s what permits Big Tech’s algorithms to capture our attention, drawing our gaze like a magician with his silks. History has taught us that the more centralized an authority, the fewer guardrails there are on its behavior. Conformity goes up; negotiating power goes down; the scope of imagination narrows.

Much has gone wrong in the story of the internet, from the ascendancy of advertisers to the fascist expropriation of Pepe the Frog. But one of the starkest mistakes of recent years was the way the web’s wild chaos gave way to a series of private playgrounds run by billionaires of questionable taste. Quick as it may be changing, AI’s still in its infancy. The furniture’s not nailed down. It’s not yet one thing, or another—which means there’s a chance it could be both.

Read the rest of this article at: The Baffler

In the autumn of 2007, a container ship called the Cosco Busan was leaving the port of Oakland, having just refueled, when it sideswiped one of the towers of the Bay Bridge, puncturing the ship’s fuel tank. Inside was bunker fuel, a heavy oil repurposed for marine vessels from the remnants of petroleum production. Bunker fuel is so dense it has the consistency of tar.

That morning, over 53,000 gallons of bunker fuel spilled into San Francisco Bay. It spread quickly: northeast to Richmond, to the beaches of San Francisco, to the rugged coasts of the Marin Headlands and then out to the Pacific and up and down the coast. In an urban area known for its natural beauty, over 50 public beaches across multiple counties were soon closed. The oil killed thousands of shoreline birds, damaged fish populations and contaminated shellfish. It derailed local fisheries for years.

In San Francisco, a woman named Lisa Gaultier had been preparing for a disaster like this. Lisa is the founder of a nonprofit called Matter of Trust that promotes sustainable living through recycling, reuse and the repurposing of surplus. Since the early 2000s, she had partnered with a retired hairdresser from Alabama named Phil McCrory who had invented an unusual technique for getting oil out of water using discarded hair. The hair technically adsorbs oil: rather than soaking it in, the oil clings to the hair’s surface, drawn to it like a magnet. (This is why our hair gets oily if we don’t wash it.) Matter of Trust began collecting discarded hair from salons and dog groomers and felting it by machine into large mats that were stored in a warehouse next to the nonprofit’s headquarters in San Francisco. After the spill, people spontaneously showed up at the beach wanting to help clean up, and Lisa was there with the hair mats.

Paul Stamets, a successful businessman, author and spokesman for the expanding world of do-it-yourself mycology, happened to be in town just a few days after the spill to headline the Green Festival, an expo for “sustainable and green living.” Stamets promoted relatively accessible techniques for using mushrooms to clean up environmental damage, including oil spills. Lisa had heard about Stamets’s work and had already been in touch with him “about our hair project.” Lisa called him from the beach where, she recalled, “there were 80 surfers out there using our hair mats, trying to clean up the oil washing up onto the shore.” Stamets told her that if she could find a place to put the oily hair, he would donate $10,000 worth of mycelium.

Read the rest of this article at: Noema