News 02.19.21 : Today’s Articles of Interest from Around the Internets


Beverley Schottenstein was 93 years old when she decided to go to war with the biggest bank in the U.S.

It was a June day, and the Atlantic shimmered beyond the balcony of her Florida condominium. Beverley studied an independent review of her accounts as family and lawyers gathered around a table and listened in by phone. The document confirmed her worst fears: Her two financial advisers at JPMorgan Chase & Co., who oversaw more than $80 million for her, had run up big commissions putting her money in risky investments they weren’t telling her about. It was the latest red flag about the bankers. There had been missing account statements. Document shredding. Unexplained credit-card charges.

Although some relatives urged Beverley not to make waves, she was resolute. What the money managers did was wrong, she told the group. They needed to pay, she said. Even though they were her own grandsons.

And pay they did. With the help of her lawyers, Beverley dragged her grandsons and JPMorgan in front of arbitrators from the Financial Industry Regulatory Authority, or Finra. She sought as much as $69 million. After testimony that spread over months and ended in January, the panel issued a swift decision in Beverley’s favor.

Read the rest of this article at: Bloomberg


When Abdul Tokhi, a father of two small children, arrived in the United States from Afghanistan in 2017, a local church group helped him and his family find an apartment in Corona, a city in California’s Inland Empire — a 27,000-square-mile stretch of deserts, mountains, farmland and sprawling housing communities east of Los Angeles. The group collected furniture donations, bringing over sofas, tables and a bulky, outdated television. They also connected him to a hiring company, which landed him a job as a “picker” in an Amazon warehouse, a 12-mile drive from Corona in a town named Eastvale. He started at Amazon earning $12.25 an hour, while his wife stayed home to take care of the kids.

In Afghanistan, Tokhi worked in construction and shipping, which sometimes involved transporting money to the bank for a contractor. “They paid cash, so it was very dangerous,” he said. “You could get robbed.” He felt safe at Amazon, and the benefits were good. After a year, Tokhi got a raise to $15 an hour, along with the rest of the company’s starting-wage employees around the country. He also enrolled in computer-science classes at a local community college. By 2019, he was still pulling items off shelves and preparing them for shipment. Being a picker required a strong back and the ability to lift up to 50 pounds. His rent was $1,480 a month, and after that was paid, there was barely enough for gas, food and cellphone bills. “The work is hard,” he said at the time. “But I don’t care. I have a job. A good salary.” Tokhi made his first American friends in the warehouse lunchrooms.

Read the rest of this article at: The New York Times


Shortly before Joe Biden was inaugurated, Sam’s mother began stocking up on food in a panic. He didn’t know why, but he knew it probably had something to do with QAnon.

The 19-year-old started to notice changes in his mother’s behavior around the beginning of the coronavirus pandemic. She had always been a nervous woman: She stopped flying after 9/11 and had hovered close to Sam and his two younger siblings for their entire lives. But during the COVID-19 crisis, his mom’s paranoia spiraled from quirky to deranged. It has turned her into someone he hardly recognizes.

Though she didn’t use to be very political, she now fears the president is a pedophile who stole the election. She’s scared of radiation from the 5G towers in her neighborhood and, as a white woman, she told her son, she’s afraid of being harmed by Black Lives Matter protesters — a movement she once supported. She worries that Sam’s brother and sister are being “indoctrinated” at their public high school and wants to move them to a Catholic one. She’s also refusing to get them immunized against COVID-19 as false rumors swirl that the vaccine contains a secret location-tracking microchip. (She was initially terrified of the virus but now considers the lockdowns an affront to her freedoms.)

“She wasn’t always like this,” Sam said. “It just keeps getting worse.”

Read the rest of this article at: HuffPost


This story starts the way every story must start in this day, this age. Restaurant stories in particular, but really, every single story of any kind. It’s a disaster story. A love story. A food story. It’s about memory and family, fame and power, collard greens and potato salad. And it begins in a penthouse on Wall Street, Manhattan, USA, high above a site where, once upon a time, grain and oysters and Black people were bought and sold.

It starts in March. Early March 2020 — which, as we all know now, is a very different place than mid-March will be, or late March. It starts at the end of a certain kind of world.

Partnering with a company called Resident, chef Omar Tate was cooking a series of well-attended high-end dinners for groups of 10 or 20 at a time. They happened in luxury apartments on the Upper West Side, in Brooklyn, wherever. The series was called Honeysuckle Pop-Up, and he was getting $150 a head for “New York Oysters” with thyme and cream, smoked turkey neck over beans (a dish named after the MOVE bombings, dubbed “Smoked Turkey Necks in 1980s Philadelphia”), roasted yams inspired by that scene in Ralph Ellison’s Invisible Man, buttermilk-fried rabbit (which he called “Bre’r Rabbit”), and a course named “Remnants on a South Philly Stoop” — crab, sunflower seeds, charred lemon and garlic powder. There was Kool-Aid (his version) on every table, and ice cream for dessert, flavored with honeysuckle.

Omar had curated the art on the walls, the music coming out of the speakers. Every menu was deeply personal, deeply historic, and came “from a distinctly Black perspective.” He’d handwritten poems to guide people through the biographical experience of dining at Honeysuckle, in this pocket universe of Kool-Aid and collard greens he created in borrowed spaces. It was part dinner, part theater, and around the time he’d moved the Omar Tate Show to the Wall Street space — to an apartment overlooking the former location of one of Manhattan’s most notorious slave markets — he was publishing, speaking, meeting people. The New York Times was writing about him, and he was starting to work on a project with the National Museum of African American History and Culture in Washington, D.C. He was hustling. He was right on the verge of blowing up.

But instead, everything else did.

Read the rest of this article at: Philadelphia Magazine

In computer science, the main outlets for peer-reviewed research are not journals but conferences, where accepted papers are presented in the form of talks or posters. In June, 2019, at a large artificial-intelligence conference in Long Beach, California, called Computer Vision and Pattern Recognition, I stopped to look at a poster for a project called Speech2Face. Using machine learning, researchers had developed an algorithm that generated images of faces from recordings of speech. A neat idea, I thought, but one with unimpressive results: at best, the faces matched the speakers’ sex, age, and ethnicity—attributes that a casual listener might guess. That December, I saw a similar poster at another large A.I. conference, Neural Information Processing Systems (Neurips), in Vancouver, Canada. I didn’t pay it much mind, either.

Not long after, though, the research blew up on Twitter. “What is this hot garbage, #NeurIPS2019?” Alex Hanna, a trans woman and sociologist at Google who studies A.I. ethics, tweeted. “Computer scientists and machine learning people, please stop this awful transphobic shit.” Hanna objected to the way the research sought to tie identity to biology; a sprawling debate ensued. Some tweeters suggested that there could be useful applications for the software, such as helping to identify criminals. Others argued, incorrectly, that a voice revealed nothing about its speaker’s appearance. Some made jokes (“One fact that this should never have been approved: Rick Astley. There’s no way in hell that their [system] would have predicted his voice out of that head at the time”) or questioned whether the term “transphobic” was a fair characterization of the research. A number of people said that they were unsure of what exactly was wrong with the work. As Hanna argued that voice-to-face prediction was a line of research that “shouldn’t exist,” others asked whether science could or should be stopped. “It would be disappointing if we couldn’t investigate correlations—if done ethically,” one researcher wrote. “Difficult, yes. Impossible, why?”

Some of the conversation touched on the reviewing and publishing process in computer science. “Curious if there have been discussions around having ethics review boards at either conferences or with funding agencies (like IRB) to guide AI research,” one person wrote. (An organization’s institutional review board, or I.R.B., performs an ethics review of proposed scientific research.) Many commenters pointed out that the stakes in A.I. research aren’t purely academic. “When a company markets this to police do they tell them that it can be totally off?” a researcher asked. I wrote to Subbarao Kambhampati, a computer scientist at Arizona State University and a past president of the Association for the Advancement of Artificial Intelligence, to find out what he thought of the debate. “When the ‘top tier’ AI conferences accept these types of studies,” he wrote back, “we have much less credibility in pushing back against nonsensical deployed applications such as ‘evaluating interview candidates from their facial features using AI technology’ or ‘recognizing terrorists, etc., from their mug shots’—both actual applications being peddled by commercial enterprises.” Michael Kearns, a computer scientist at the University of Pennsylvania and a co-author of “The Ethical Algorithm,” told me that we are in “a little bit of a Manhattan Project moment” for A.I. and machine learning. “The academic research in the field has been deployed at massive scale on society,” he said. “With that comes this higher responsibility.”

As I followed the speech-to-face controversy on Twitter, I thought back to a different moment at the same Neurips conference. Traditionally, conference sponsors, including Facebook, Google, and JPMorgan Chase, set up booths in the expo hall, mostly to attract talent. But that year, during the conference’s “town hall,” a graduate student approached the microphone. “I couldn’t help but be a bit heartbroken when I noticed an N.S.A. booth,” he said, referring to the intelligence agency. “I’m having a hard time understanding how that fits in with our scientific ideals.” The event’s treasurer replied, saying, “At this moment we don’t have a policy for excluding any particular sponsors. We will bring that up in the next board meeting.”

Before leaving Vancouver, I sat down with Katherine Heller, a computer scientist at Duke University and a Neurips co-chair for diversity and inclusion. Looking back on the conference—which had accepted a little more than fourteen hundred papers that year—she couldn’t recall ever having faced comparable pushback on the subject of ethics. “It’s new territory,” she said. In the year since we spoke, the field has begun to respond, with some conferences implementing new review procedures. At Neurips 2020—held remotely, this past December—papers faced rejection if the research posed a threat to society. “I don’t think one specific paper served as a tipping point,” Iason Gabriel, a philosopher at the research lab DeepMind and the leader of the conference’s ethics-review process, told me. “It just seemed very likely that if we didn’t have a process in place, something challenging of that kind would pass through the system this year, and we wouldn’t make progress as a field.”

Read the rest of this article at: The New Yorker

P.S. previous articles & more by P.F.M.