Artificial intelligence is already making people rich. Jensen Huang, the co-founder and CEO of chip company Nvidia, which controls 80 percent of the data-center AI chip market, has seen his net worth explode from a mere $4 billion five years ago to a staggering $83.1 billion as of March 24 on the back of bottomless demand for his company’s product.
ChatGPT maker OpenAI is reportedly valued at $86 billion, with rivals Anthropic and Inflection at $15 billion and $4 billion as of their most recent funding rounds. While OpenAI CEO Sam Altman says he owns no shares in the company, it’s possible, even likely, that other AI founders and execs have joined the three commas club by now, at least on paper.
But some researchers think this is only the beginning — that AI won’t just make a few techies wildly rich, the way social networking, smartphones, and personal computers did before. Believers in a growth explosion argue that AI is set to make society much, much richer by causing economic growth at a scale it has never experienced before.
In 2020, the AI researcher Ajeya Cotra at grant maker Open Philanthropy released a report arguing that AI powerful enough to drive a surge in economic growth to 20 to 30 percent a year is coming, and will more likely than not emerge before 2100. The following year, her colleague Tom Davidson conducted a more in-depth investigation of the potential for AI to supercharge growth and concluded that per capita economic growth rates as high as 30 percent a year resulting from AI are plausible this century.
Read the rest of this article at: Vox
I spent the daytime during the summer of 2009 at an unpaid internship at a literary magazine, and I spent the nighttime, paid, behind the counter of the gelato stand at the Times Square location of Madame Tussauds wax museum. I was happy to have this job. The Great Recession gripped New York so tightly that all talk of “selling out” had been put on indefinite hold. There was no longer any shame in applying to work at McDonald’s or McKinsey among my friends who had, as recently as the previous summer, disavowed anything that didn’t come with an intellectual or moral gold star. Whatever you could get was fine. What I could get was the wax museum.
The job was easy, if physically exhausting. I wore an all-black outfit and scooped gelato for tourists who seemed to come exclusively from Indonesia or New Jersey. Snickers Bar was the most popular flavor. Everyone who worked there was striving to do something else—usually acting. I was the only one who wanted to be an editor, which set me apart from my coworkers mainly for my comparative lack of charisma. They wore newsboy caps and always seemed to be inviting each other to see LCD Soundsystem for free in Central Park.
Management kept the lobby pretty sparse as far as the actual product went. Michael Jackson died that summer, so they moved his wax figure to the hero spot outside the gelato stand, and the Incredible Hulk loomed over the lobby, but other than that, you had to pay to play. The museum’s curators cultivated a sense of scarcity that was in keeping with how it felt to be a twenty-one-year-old working person at the time. The economy had contracted, and with it, the budget to hire anyone my age. Everyone in my life had an internship that was off the books. We did countless hours in unpaid “trial shifts” for restaurants that never called us back. The subway was full of ads for off-brand-seeming medical schools in the Caribbean and fortune tellers. Everything available seemed like a compromise.
Read the rest of this article at: Esquire
Imagine you’re on a quest to understand the very nature of computation. You’re deep in the wilderness, far from any paths, and inscrutable messages are carved into the trunks of trees all around you — BPP, AC0[m], Σ2P, YACC, and hundreds of others. The glyphs are trying to tell you something, but where to begin? You can’t even keep them all straight.
Few researchers have done as much as Russell Impagliazzo to cut through this seeming chaos. For 40 years, Impagliazzo has worked at the forefront of computational complexity theory, the study of the intrinsic difficulty of different problems. The most famous open question in this field, called the P versus NP problem, asks whether many seemingly hard computational problems are actually easy — with the right algorithm. An answer would have far-reaching implications for science and the security of modern cryptography.
In the 1980s and 1990s, Impagliazzo played a leading role in unifying the theoretical foundations of cryptography. In 1995, he articulated the significance of these new developments in an iconic paper that reformulated possible solutions to P versus NP and a handful of related problems in the language of five hypothetical worlds we might inhabit, whimsically dubbed Algorithmica, Heuristica, Pessiland, Minicrypt and Cryptomania. Impagliazzo’s five worlds have inspired a generation of researchers, and they continue to guide research in the flourishing subfield of meta-complexity.
Read the rest of this article at: Quanta Magazine
Percival Everett’s novels seem to ward off the lazier hermeneutics of literary criticism, yet they also have a way of dangling the analytical ropes with which we critics hang ourselves. His latest novel follows the misadventures of a runaway named Jim and his young companion Huckleberry in the antebellum American South. As in another novel featuring those protagonists, Jim has fled enslavement in the state of Missouri, and Huckleberry, Huck for short, has faked his own death to escape his no-good abusive Pap. As in that other novel, the two are both bonded and divided by the circumstances of their respective fugitivity as they float together on a raft down the Mississippi River. As in that other novel, the narrator of Everett’s book is setting down his story as best he knows how, but—rather differently—the narrator here is not the boy but the man who has been deprived of the legal leave to be one. “With my pencil, I wrote myself into being,” Jim writes. The novel is titled, simply, “James,” the name Jim chooses for himself. In conferring interiority (and literacy) upon perhaps the most famous fictional emblem of American slavery after Uncle Tom, Everett seems to participate in the marketable trope of “writing back” from the margins, exorcizing old racial baggage to confront the perennial question of—to use another worn idiom—what “Huck Finn” means now. And yet, with small exceptions, “James” meanders away from the prefab idioms that await it.
What novel has borne the racial freight of American letters like “Adventures of Huckleberry Finn,” a book credited with gifting us a national literature (not to mention a sense of humor)? Norman Mailer, rereading the book on the occasion of its centennial, wrote of realizing “all over again that the near-burned-out, throttled, hate-filled dying affair between whites and blacks is still our great national love affair.” A decade later, as Americans fretted over the educational value of a book bursting with more than two hundred instances of the word “nigger,” Toni Morrison defended “Huckleberry Finn” ’s status as a classic. The novel’s brilliance, she observed, lies in how it formally reproduces the very racial dynamic it depicts. Jim enables Huck’s moral maturation; without him, Twain’s Roman has no Bildung. Jim’s freedom is “withheld,” Morrison writes, lest there be “no more story to tell.” “James” posits a converse narrative problem: from the perspective of Jim, a man undertaking a deadly quest for freedom, managing the needs of a pubescent boy amounts to nothing so much as an inconvenience. Jim’s worries for his own family, a wife and child he’s left behind in bondage, must be slotted into the spaces between the boy’s gabbing, his questions, his anxieties. Jim’s sentiment toward Huck is unruly in its ambivalence: he is simultaneously protective and resentful, both relieved and uneasy when the two are separated, which in Everett’s novel they often are. With the boy in tow, Jim is mobile but stuck. Writing himself into being means leaving Huck, and much of “Huck,” behind.
Read the rest of this article at: The New Yorker
An end to privacy
On March 13, 2022, 34-year-old English teacher Yulia Zhivtsova left her Moscow apartment to meet her friends at the mall. Bundled up against the freezing cold, she entered the metro at the CSKA station on the Bolshaya Koltsevaya line, passing through station barriers that let travelers pay by scanning their faces.
But when she went down to the platform, two police officers plucked her out of the crowd.
“Hey!” said one, and then addressed her by her full name, including the Russian patronymic. “Yulia Maksimovna. Come with us.”
The officers looked back and forth between Zhivtsova and an image on their smartphones. They seemed unsure if they had the right person. Catching a glimpse of the screen, Zhivtsova recognized a photo of herself taken the month before, when she was detained for protesting Russia’s war in Ukraine. Her hair looked different: In the photo it was faded blue, but that day it was back to a gleaming teal. “I do tend to change my hair color a lot,” Zhivtsova told Rest of World.
After a while, the officers decided to trust the image on their smartphones. Another anti-war demonstration was taking place in Moscow that day, and even though Zhivtsova didn’t plan to attend, they detained her preventively, holding her for a few hours.
Over the past decade, there has been a steep rise globally in law enforcement’s use of facial recognition technology. Data gathered by Steven Feldstein, a researcher with the Carnegie Endowment for International Peace, show that government agencies in 78 countries now use public facial recognition systems.
The public is often supportive of the use of such tech: 59% of U.K. adults told a survey they “somewhat” or “strongly” support police use of facial recognition technology in public spaces, and a Pew Research study found 46% of U.S. adults said they thought it was a good idea for society. In China, one study found that 51% of respondents approved of facial recognition tech in the public sphere, while in India, 69% of people said in a 2023 report that they supported its use by the police.
But while authorities generally pitch facial recognition as a tool to capture terrorists or wanted murderers, the technology has also emerged as a critical instrument in a very particular context: punishing protesters.
The last 20 years have shown that mass demonstrations can have real impacts. Starting in 2010, a wave of protests across the Middle East and North Africa, known as the Arab Spring, toppled regimes in Tunisia, Libya, Egypt, and Yemen, and spurred revolts in many other countries. In 2014, protesters in Hong Kong took to the streets for universal suffrage, sometimes called the Umbrella Revolution owing to protesters’ use of umbrellas to shield against pepper spray. While authorities did not make any concessions at the time, the protests drew global attention, and when Hong Kongers took to the streets with new demands in 2019, the government withdrew a controversial bill that would have allowed suspects to be extradited to mainland China. In 2020, the #EndSARS movement against police brutality in Nigeria resulted in the disbanding of the Special Anti-Robbery Squad, the police force at the center of the controversy, while mass protests in Chile led to a cabinet reshuffle and a referendum to rewrite the country’s constitution. So far, 2024 has been marked by several large-scale protest events, including farmers’ protests in India and Europe, and protests in many countries against the war in Gaza.
Read the rest of this article at: Rest Of World