Today’s generative AI systems like ChatGPT and Gemini are routinely described as heralding the imminent arrival of “superhuman” artificial intelligence.
But an ecosystem around text-based generative AI had evolved well before The Atlantic revealed the contents of key datasets. Large language models (LLMs) have been in development since 2017, and OpenAI’s GPT-3, the model that introduced generative AI to the mainstream, arrived back in 2020.
Some art forms welcome, even require, collaboration. After all, it is the exceptionally rare film or television show that gets made by a single person. Music, too, often literally demands the assistance of others.
Artificial intelligence seems more powerful than ever, with chatbots like Bard and ChatGPT capable of producing uncannily humanlike text. But for all their talents, these bots still leave researchers wondering: Do such models actually understand what they are saying?
IBM is one of the oldest technology companies in the world, with a raft of innovations to its credit, including mainframe computing, computer-programming languages, and AI-powered tools. But ask an ordinary person under the age of 40 what exactly IBM does (or did), and the responses will be vague at best.
Many academic fields can be said to ‘study morality’. Of these, the philosophical sub-discipline of normative ethics studies morality in what is arguably the least alienated way. Rather than focusing on how people and societies think and talk about morality, normative ethicists try to figure out which things are, simply, morally good or bad, and why.
In your brain, neurons are arranged in networks big and small. With every action, with every thought, the networks change: neurons are included or excluded, and the connections between them strengthen or fade.
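A loose way to picture that strengthening and fading is the classic Hebbian rule from computational neuroscience, often summarized as "neurons that fire together wire together." The toy sketch below is purely illustrative and not drawn from the article; every name and parameter in it (`weights`, `learning_rate`, `decay`) is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 5
# Connection strengths between neurons, starting near zero.
weights = rng.normal(scale=0.1, size=(n_neurons, n_neurons))

learning_rate = 0.05  # how quickly co-active connections strengthen
decay = 0.01          # how quickly unused connections fade

def hebbian_step(weights, activity):
    """One plasticity step: strengthen links between neurons that
    fire together, and let every connection decay slightly."""
    coactivation = np.outer(activity, activity)  # which pairs were active together
    weights += learning_rate * coactivation      # "wire together"
    weights *= (1.0 - decay)                     # fade with disuse
    np.fill_diagonal(weights, 0.0)               # no self-connections
    return weights

# Each "thought" is a pattern of activity; repeating patterns reshape the network.
for _ in range(100):
    activity = (rng.random(n_neurons) > 0.5).astype(float)
    weights = hebbian_step(weights, activity)
```

In this caricature, connections used often end up strong and neglected ones drift toward zero, which is the gist of the plasticity the passage describes.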
Many people have put forth theories about why, exactly, the internet is bad. The arguments go something like this: Social platforms encourage cruelty, snap reactions, and the spreading of disinformation, and they allow for all of this to take place without accountability, instantaneously and at scale.
Six or seven years ago, I realized I should learn about artificial intelligence. I’m a journalist, but in my spare time I’d been writing a speculative novel set in a world ruled by a corporate, AI-run government. The problem was, I didn’t really understand what a system like that would look like.
The first time I heard about Taylor Swift, I was in a Los Angeles County jail, waiting to be sent to prison for murder. Sheriffs would hand out precious copies of the Los Angeles Times, and they would be passed from one reader to the next.