How will Facebook celebrate its 20th birthday? Perhaps it will create one of those cute video montages it likes to generate at significant moments. Starting with a tinkling piano soundtrack, a couple of breathless friend requests, and some self-conscious, tentative writing of “hello!” on other users’ walls, it might then pass quickly through moments of chronic oversharing, passive-aggressive stalking of exes, and horrified untagging of yourself in unflattering photos.
In 2017, Simon McCarthy-Jones wrote an article about schizophrenia for The Conversation. The piece, he jokes, got read by more than two people, which, for an academic—he’s an associate professor of clinical psychology at Trinity College Dublin—was a thrill.
Artificial intelligence seems more powerful than ever, with chatbots like Bard and ChatGPT capable of producing uncannily humanlike text. But for all their talents, these bots still leave researchers wondering: Do such models actually understand what they are saying?
Like so many millennials, I entered the online world through AOL Instant Messenger. I created an account one unremarkable day in the late nineteen-nineties, sitting in the basement of my childhood home at our chunky white desktop computer, which connected to the Internet via a patchy dial-up modem.
You are currently logged on to the largest version of the internet that has ever existed. By clicking and scrolling, you’re one of the 5 billion–plus people contributing to an unfathomable array of networked information—quintillions of bytes produced each day.
IBM is one of the oldest technology companies in the world, with a raft of innovations to its credit, including mainframe computing, computer-programming languages, and AI-powered tools. But ask an ordinary person under the age of 40 what exactly IBM does (or did), and the responses will be vague at best.
A little more than a year ago, the world seemed to wake up to the promise and dangers of artificial intelligence when OpenAI released ChatGPT, an application that enables users to converse with a computer in a singularly human way.
During a reading project I undertook to better understand the “third wave of democracy” — the remarkable and rapid rise of democracies in Latin America, Asia, Europe and Africa in the 1970s and 80s — I came to realize that this ascendancy of democratic polities was not the result of some force propelling history toward its natural, final state, as some scholars have argued.
Many academic fields can be said to ‘study morality’. Of these, the philosophical sub-discipline of normative ethics studies morality in what is arguably the least alienated way. Rather than focusing on how people and societies think and talk about morality, normative ethicists try to figure out which things are, simply, morally good or bad, and why.
For about five minutes a few months ago, people seemed to genuinely believe that our culture was entering the age of “deinfluencing.” “Step aside, influencers,” wrote CNN.
The myth of The Writer looms large in our cultural consciousness. When most readers picture an author, they imagine an astigmatic, scholarly type who wakes at the crack of dawn in a monastic, book-filled, shockingly affordable house surrounded by nature.
In your brain, neurons are arranged in networks big and small. With every action, with every thought, the networks change: neurons are included or excluded, and the connections between them strengthen or fade.
I still love software as much today as I did when Paul Allen and I started Microsoft. But—even though it has improved a lot in the decades since then—in many ways, software is still pretty dumb.
YOU’RE probably well aware by now that AI is taking over the internet (and real life) and that it is becoming more and more difficult to believe your own eyes.
I do not think human beings are the last stage in the evolutionary process. Whatever comes next will be neither simply organic nor simply machinic but will be the result of the increasingly symbiotic relationship between human beings and technology.