Christopher Hitchens and I weren’t close friends—I was a lesser planet in his orbit. Every so often I felt the rhetorical lash of his published words on my back, and then I tried to make him feel mine, and you can guess who got the better of those exchanges. They usually had to do with Iraq. We both supported the war, but I supported it in an ambivalent, liberal way, while Christopher supported it in a heroic, revolutionary way. The more I saw of the war, the deeper my despair became. Christopher made it a point of honor never to call retreat.
I know of many friendships that ended in those years, including a few of mine. But something strange happened between Christopher and me. For every time he called me a split-the-difference bien-pensant, and for every time I called him a pseudo–Lord Byron, we seemed to become better friends. We would say rude things about each other in print, and then we’d exchange tentatively regretful emails without yielding an inch, and then we’d meet for a drink and the whole thing would go unmentioned, and somehow there was more warmth between us than before. Exchanging barbs was a way of bonding with Christopher.
Read the rest of this article at: The Atlantic
It was early summer, and I was on the verge of turning 40. I found myself entertaining a recurring daydream of escaping from time. I would be hustling my son out the door to get him to school, or walking briskly to work on the day of a deadline, or castigating myself for being online when I should have been methodically and efficiently putting words on paper, and I would have this vision of myself as a character in a video game discovering a secret level. This vision was informed by the platform games I loved as a child – Super Mario Bros, Sonic the Hedgehog and so on – in which the character you controlled moved across the screen from left to right through a scrolling landscape, encountering obstacles and adversaries as you progressed to the end of the level. In this daydream, I would see myself pushing against a wall or lowering myself down the yawning mouth of a pipe, and thereby discovering this secret level, this hidden chamber where I could exist for a time outside of time, where the clock was not forever running down to zero.
My relationship with time had always been characterised by a certain baleful anxiety, but as I approached the start of the decade in which I would have no choice but to think of myself as middle-aged, this anxiety intensified. I was always in the middle of some calculation or quantification with respect to time, and such thoughts were always predicated on an understanding of it as a precious and limited resource. What time was it right now? How much time was left for me to do the thing I was doing, and when would I have to stop doing it to do the next thing?
This resource being as limited as it was, should I not be doing something better with it, something more urgent or interesting or authentic? At some point in my late 30s, I recognised the paradoxical source of this anxiety: that every single thing in life took much longer than I expected it to, except for life itself, which went much faster, and would be over before I knew where I was.
Read the rest of this article at: The Guardian
When you first meet someone, you “feel each other out” to see where your lives might connect—where are you from, what do you do, what music/art/books do you like, etc. You are looking for common ground on the basis of which your conversation might proceed. Call this the Basic Game; I’d like to contrast it with two more advanced games that can be played in its stead, or alongside it.
In the Importance Game, participants jockey for position. This usually works by way of casual references to wealth, talent, accomplishment or connections, but there are many variants. I can, for instance, play this game by pretending to eschew it: “Let’s get straight down to business” can telegraph my being much too important to waste time with such games; or your being so unimportant as to render the game otiose.
The other game is the Leveling Game, and it uses empathy to equalize the players. So I might performatively share feelings of stress, inadequacy or weakness; or express discontent with the Powers that Be; or home in on a source of communal outrage, frustration or oppression.
Read the rest of this article at: The Point
The sudden explosion of facial recognition into public consciousness can make it seem a force as unstoppable as the weather, but the technology didn’t come out of nowhere. The first facial-recognition technology was arguably created by the French police officer Alphonse Bertillon more than a century ago. In the late 1800s, Bertillon devised a method for identifying criminals based on their physical features. The index cards assigned to each person included 11 physical measurements plus standardized photographic portraits and a “verbal portrait.” This early biometric system allowed the body to be abstracted into information; it was “a transformation of the body’s signs into text,” wrote the late critic Allan Sekula. In theory, this text made the process of identifying people less prone to human error. Bertillon’s system became widely used in France and soon spread to the United States, where it gained brief popularity until it was replaced by a much quicker and more reliable method: fingerprinting.
Now, the human face is cataloged on a scale that Bertillon could not have imagined. A report by Georgetown Law’s Center on Privacy and Technology estimates that law-enforcement facial-recognition technology affects more than 117 million American adults. Within four years, the Department of Homeland Security aims to scan 97 percent of all passengers on outbound international flights. Since 2017, Facebook has used facial recognition to tag people in photos, and every month, it seems, another goofy photo-filter app that may or may not be a honey pot for a Russian data-mining operation goes viral.
Why has the use of facial recognition become such a hot-button issue now? The most obvious answer is that the technology has been improved, streamlined, and commercialized to the point that it has become widely accessible, available for purchase for as little as 40 cents an image if you opt for Amazon’s facial-recognition software plan. The earliest automated facial-recognition systems, developed in the 1960s, required human operators to manually enter facial features for a computer to learn, not unlike Bertillon’s first system. Computer scientists can now teach computers to teach themselves to recognize faces.
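To make the shift from Bertillon-style manual measurement to self-taught systems concrete, here is a minimal sketch of how embedding-based face matching looks from a developer’s point of view, using the open-source Python face_recognition library (a wrapper around dlib). The file names are placeholders, and this is an illustration of the general technique under those assumptions, not the workflow of Amazon’s service or of any system named in the article.

```python
# Minimal sketch of embedding-based face matching with the open-source
# face_recognition library. File paths are hypothetical placeholders.
import face_recognition

# Load a reference photo and an unknown photo.
known_image = face_recognition.load_image_file("reference_photo.jpg")
unknown_image = face_recognition.load_image_file("camera_frame.jpg")

# Each detected face is reduced to a 128-dimensional embedding learned by a
# neural network: a modern echo of Bertillon's idea of turning the body into data.
known_encodings = face_recognition.face_encodings(known_image)
unknown_encodings = face_recognition.face_encodings(unknown_image)

if known_encodings and unknown_encodings:
    # Compare embeddings by distance; below a tolerance, the faces "match."
    distance = face_recognition.face_distance([known_encodings[0]], unknown_encodings[0])[0]
    match = face_recognition.compare_faces([known_encodings[0]], unknown_encodings[0])[0]
    print(f"distance={distance:.3f}, match={bool(match)}")
else:
    print("No face detected in one of the images.")
```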
The 9/11 attacks were a sort of big-bang moment for automated facial recognition. The infamous surveillance-camera images of two of the hijackers, Mohamed Atta and Abdulaziz al-Omari, passing through airport security in Portland, Maine, suggested to many that facial recognition could have identified them and prevented the attack. In November 2001, the Senate held a hearing on biometrics at which Dianne Feinstein argued that “because these cameras didn’t use facial biometric systems, security was not alerted, and the hijackers remained free to carry out their bloody plans.” By December, stock in Visionics, an early face-recognition pioneer, had soared by more than 300 percent.
Like other efforts to secure the United States from foreign terrorists, facial-recognition technology had unexpected consequences. It was Visionics that developed the first real-time law-enforcement application of facial recognition, deployed in 2001 on civilians in Tampa, Florida, with cameras placed throughout the city’s downtown in the hope of finding criminals with outstanding warrants. As agencies like the FBI and Customs and Border Protection adopt the technology, it is becoming clear that there is no guarantee it will reduce human bias in identifying people. Joy Buolamwini, who studies the social implications of technology, has been raising awareness of the tendency of facial algorithms to misidentify black faces, which she suggests is the result of a stark lack of diversity in artificial-intelligence research. Then there is the issue of who is subjected to these technologies in the first place. When Detroit, which is about 80 percent black, installed a system of hundreds of cameras — some of which had facial-recognition technology — many residents saw it as a new chapter in the long history of black Americans being subjected to unique surveillance in the name of public safety.
Read the rest of this article at: The California Sunday Magazine
Gradually, Huawei’s success in the developing world opened doors in more advanced economies. Edward Zhou, who started as an engineer in 1996 and is now vice president of global public affairs, remembers sleeping in customers’ machine rooms to debug systems on the fly during the early days, when Huawei still had a lot to prove. Back then, Huawei’s telephone technology was not equal to that of its Western peers, but it was much cheaper to operate. “Our customers didn’t trust the technology, but they trusted the people, because we worked very hard,” Zhou says. He later went on to postings in Spain, Germany, and Japan.
In 2003, Huawei won a contract in Russia to build an optical-cable transmission project stretching across 1,800 kilometers in Siberia. The same year, it landed a contract to build a backbone transmission network for France. By 2005 more than half of the company’s revenue was coming from outside of China. Time named Ren to its 100 Most Influential People list. As the smartphone age arrived, Huawei began making not only hulking transmission equipment but also its own branded phones, some 600 million of which are now in circulation. Today, Huawei’s red logo and retail stores full of sleek furniture, pale wood, tablets, laptops, and smartphones seem to be as common on Chinese streets as Starbucks mermaids.
Even as Huawei has grown into a global player, its structure has remained unusual among major Chinese companies. It is not publicly traded like Alibaba, Tencent, and Baidu. Unlike ZTE, the other major Chinese telecom manufacturer, it does not have the state as its biggest shareholder. Instead, the company is employee-owned, with Ren holding just over 1 percent of its shares. Some 96,000 employees and retirees own the remainder of the company. To complicate matters further, the employees’ shares are legally held by a Chinese trade union, which is registered with the Shenzhen city government’s union federation.
Huawei’s financials are released every year and audited by the global accounting firm KPMG. This offers some insight into the company. But Huawei’s decision-making structure is shrouded in mystery. Huawei began publicly naming its executives only in 2011, and the company is run by a rotating set of chairs, with Ren named founder, director, and CEO. This structural opacity has long concerned US officials; as one senior law-enforcement official said to me, “It’s turtles all the way down.”
Ren’s relationship to China’s ruling Communist Party is similarly ambiguous. In 2018, China named Ren one of the country’s 100 “outstanding private entrepreneurs” who “firmly support the leadership of the Communist Party of China,” an honor bestowed as the country celebrated the 40th anniversary of its economic reforms. (Huawei executives might also point out that many US executives have military ties—and that their service is heralded.)
In 2007 the US confronted Huawei directly about its loyalties. That year, during a trip through New York, Ren met with FBI agents who were concerned about the company’s business dealings in Iran. At the time, Ren maintained that Huawei abided by the international sanctions the US had levied against the Islamic Republic.
Read the rest of this article at: Wired