A touchscreen hanging in the middle of the exhibition highlighted all the questions for everyone to see. Would you have a chip implanted in your brain to make you smarter? Would you leave your elderly mother or baby in a robot’s care? Should that robot have rights? Would you allow supposedly impartial artificial intelligence (AI) software to judge your legal case? Would you transfer your consciousness to the cloud in order to live forever?
Within minutes of my decision to hand my life over to AI, ChatGPT suggested that, if able, I should go outside and play with my dog instead of working. I had asked the chatbot to make the choice for me, and it had said that I should prioritize “valuable experiences” that contribute to my “overall well-being.”
When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales.
A few months ago, the writer Alice Sebold began to experience a kind of vertigo. She looked at a cup on the table, and it no longer appeared solid. Her vision fractured. Objects multiplied. Her awareness of depth shifted suddenly. Sometimes she glanced down and for a split second felt that there was no floor.
Increasingly, we’re surrounded by fake people. Sometimes we know it and sometimes we don’t. They offer us customer service on Web sites, target us in video games, and fill our social-media feeds; they trade stocks and, with the help of systems such as OpenAI’s ChatGPT, can write essays, articles, and e-mails. By no means are these A.I. systems up to all the tasks expected of a full-fledged person. But they excel in certain domains, and they’re branching out.
Well, that was fast. In November, the public was introduced to ChatGPT, and we began to imagine a world of abundance in which we all have a brilliant personal assistant, able to write everything from computer code to condolence cards for us. Then, in February, we learned that AI might soon want to kill us all.
Far out on the Arabian Sea one night in February, 2018, Sheikha Latifa bint Mohammed Al Maktoum, the fugitive daughter of Dubai’s ruling emir, marvelled at the stars. The voyage had been rough. Since setting out by dinghy and Jet Ski a few days before, she had been swamped by powerful waves, soaking the belongings she’d stowed in her backpack; after clambering aboard the yacht she’d secured for her escape, she’d spent days racked with nausea as it pitched on the swell. But tonight the sea was calmer, and she felt the stirring of an unfamiliar sensation. She was free.
Wind was the first thing I heard in the morning, along with a door opening and closing as someone got up first and went out to use the outhouse. Sounds reached into my awareness through the fog of sleep. Then: the lighter button of the propane heater being pressed, a metallic clang sounding at least twice until it caught. I heard the kettle being lit and muted footsteps on plywood. Someone was brewing coffee. The old, damp smell of socks and mold faded into the earthy scent of coffee.
What is “creative nonfiction,” exactly? Isn’t the term an oxymoron? Creative writers—playwrights, poets, novelists—are people who make stuff up. Which means that the basic definition of “nonfiction writer” is a writer who doesn’t make stuff up, or is not supposed to make stuff up. If nonfiction writers are “creative” in the sense that poets and novelists are creative, if what they write is partly make-believe, are they still writing nonfiction?