Since the 1950s, discussions about AI have largely revolved around a big, tantalizing question: What can machines do, and where might they hit a wall? Will they ever truly think, understand, or maybe even become conscious? Could they reach the so-called “heights of human intelligence”?
It’s common to meet the idea of intuition with an eye roll. We tend to value reason over everything else, using expressions like “think before you act,” “think twice,” and “look before you leap.” We don’t trust intuition. In fact, we believe it’s flawed: magical thinking, either vaguely crazy or downright stupid. After all, good decisions should always be reasoned.
Time is not to be trusted. This should come as news to no one. Yet recent times have left people feeling betrayed that the reliable metronome laying down the beat of their lives has, in a word, gone bonkers. Time sulked and slipped away, or slogged to a stop, rushing ahead or hanging back unaccountably; it no longer came in tidy lumps clearly clustered in well-defined categories: past, present, future.
Is this the real life? Is this just fantasy? Those aren’t just lyrics from the Queen song “Bohemian Rhapsody.” They’re also the questions that the brain must constantly answer while processing streams of visual signals from the eyes and purely mental pictures bubbling out of the imagination.
The brain-powered individual, which is variously called the self, the ego, the mind, or “me,” lies at the center of Western thought. In the worldview of the West, we herald the greatest thinkers as world-changers. There is no more concise example of this than philosopher René Descartes’ famous statement, “Cogito, ergo sum,” or, “I think, therefore I am.” But who is this “I”? Let’s take a closer look at the thinker, or the “me,” we all take for granted.