Home/ TOK Friends/ Group items tagged job


peterconnelly

Where Will We Be in 20 Years? - The New York Times - 0 views

  • “Demographics are destiny.” It is a phrase, often attributed to the French philosopher Auguste Comte, that suggests much of the future is preordained by the very simple trend lines of populations. Want to understand how the power dynamic between the United States and China will change over the next 20 years? An economist would tell you to look at the demographics of both countries. (China’s economy is likely to overtake the U.S. economy by 2028, but remain smaller on a per capita basis.)
  • Predicting the future may be a fool’s errand. But using demographic data to assess the opportunities and challenges of the next two decades is something that business and political leaders don’t do enough. We’re all too swept up in the here and now, the next quarter and the next year.
  • More people around the world had more disposable income and increasingly chose to live closer to cities with greater access to airports. That, coupled with the human condition that people like to be around other people, makes forecasting certain elements of the future almost mathematical.
  • ...6 more annotations...
  • One aspect of the future that demographics can’t help predict is technological innovation.
  • About 70 percent of the world population is expected to live in urban areas by 2050, according to data from the United Nations.
  • The U.S. Energy Information Administration projects that the world will need about 28 percent more energy in 2040 than it did in 2015, based on population trends and consumption patterns; on our current trajectory, about 42 percent of electricity in the United States will come from renewable sources.
  • Technology has led us to expect that goods and services will be delivered at the push of a button, often within minutes.
  • Entrepreneurs, industry leaders and policymakers are already at work solving some of the problems that demographic data suggest are ahead of us, whether it’s figuring out how to incentivize farmers to sequester carbon, use insurance as a tool for reducing coal production, reinvent the motors that power heavy industry so they use less energy, or write laws that help govern code.
  • What about the metaverse? Or crypto technology? Or robots taking our jobs? Or A.I. taking over everything? Demographics can’t answer those questions. All of those things may happen, but life in 2041 may also look a lot like it does today — maybe with the exception of those flying cars.
peterconnelly

German Chancellor accused of comparing climate activists to Nazis - CNN - 0 views

  • German Chancellor Olaf Scholz was accused Monday of comparing climate activists to Nazis, in allegations that his spokesperson said were "completely absurd."
  • "I'll be honest: These black-clad displays at various events by the same people over and over again remind me of a time that is, thank God, long gone by," he said in an exchange captured on camera.
  • Scholz was speaking about the phase-out of coal-fired power generation and resulting job losses in open cast mining when he was interrupted.
  • ...4 more annotations...
  • Prominent German climate scientist Friederike Otto commented that "Scholz 'forgets' our worst history, dismisses every generation that comes after him as irrelevant & the audience just applauds."
  • "Where does one begin? In just one half-sentence, the Chancellor of the Federal Republic relativizes the Nazi regime and, in a paradoxical way, also the climate crisis," she wrote on Twitter. "He stylizes climate protection as an ideology with parallels to the Nazi regime. In 2022. Jesus. This is such a scandal."
  • "I have also been to events where five people sat dressed in the same way, each had a well-rehearsed stance, and then they do it again every time," he said. "And that's why I think that is not a discussion, that is not participation in a discussion, but an attempt to manipulate events for one's own purposes. One should not do this."
  • The chancellor leads a three-party coalition with partners the Greens and pro-business Free Democrats, and their pledge to improve climate change action was central to their campaign.
peterconnelly

Elon Musk Hates Ads. Twitter Needs Them. That May Be a Problem. - The New York Times - 0 views

  • Since he started pursuing his $44 billion purchase of Twitter — and for years before that — the world’s richest man has made clear that advertising is not a priority.
  • Ads account for roughly 90 percent of Twitter’s revenue.
  • They have cited a litany of complaints, including that the company cannot target ads nearly as well as competitors like Facebook, Google and Amazon.
  • ...10 more annotations...
  • Now, numerous advertising executives say they’re willing to move their money elsewhere, especially if Mr. Musk eliminates the safeguards that allowed Twitter to remove racist rants and conspiracy theories.
  • “At the end of the day, it’s not the brands who need to be concerned, because they’ll just spend their budgets elsewhere — it’s Twitter that needs to be concerned,” said David Jones of Brandtech.
  • On Wednesday night, at Twitter’s annual NewFronts presentation for advertisers at Pier 17 in New York, company representatives stressed Twitter’s value for marketers: as a top destination for people to gather and discuss major cultural moments like sporting events or the Met Gala, increasingly through video posts.
  • “Twitter’s done a better job than many platforms at building trust with advertisers — they’ve been more progressive, more responsive and more humble about initiating ways to learn,” said Mark Read.
  • Twitter earns the vast majority of its ad revenue from brand awareness campaigns, whose effectiveness is much harder to evaluate than ads that target users based on their interests or that push for a direct response, such as clicking through to a website.
  • Twitter’s reach is also narrower than that of many rivals, with 229 million users who see ads, compared with 830 million users on LinkedIn and 1.96 billion daily users on Facebook.
  • “Even the likes of LinkedIn have eclipsed the ability for us to target consumers beyond what Twitter is providing,” he said. “We’re going to go where the results are, and with a lot of our clients, we haven’t seen the performance on Twitter from an ad perspective that we have with other platforms.”
  • Twitter differs from Facebook, whose millions of small and midsize advertisers generate the bulk of the company’s revenue and depend on its enormous size and targeting abilities to reach customers. Twitter’s clientele is heavily weighted with large, mainstream companies, which tend to be wary of their ads appearing alongside problematic content.
  • On Twitter, he has criticized ads as “manipulating public opinion” and discussed his refusal to “pay famous people to fake endorse.”
  • “There’s a fork in the road, where Path A leads to an unfiltered place with the worst of human behavior and no brands want to go anywhere near it,” said Mr. Jones of Brandtech. “And Path B has one of the world’s genius entrepreneurs, who knows a lot about running companies, unleashing a wave of innovation that has people looking back in a few years and saying, ‘Remember when everyone was worried about Musk coming in?’”
criscimagnael

Living better with algorithms | MIT News | Massachusetts Institute of Technology - 0 views

  • At a talk on ethical artificial intelligence, the speaker brought up a variation on the famous trolley problem, which outlines a philosophical choice between two undesirable outcomes.
  • Say a self-driving car is traveling down a narrow alley with an elderly woman walking on one side and a small child on the other, and no way to thread between both without a fatality. Who should the car hit?
  • To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see?
  • ...12 more annotations...
  • a self-driving car could have avoided choosing between two bad outcomes by making a decision earlier on — the speaker pointed out that, when entering the alley, the car could have determined that the space was narrow and slowed to a speed that would keep everyone safe.
  • Auditors have to inspect the algorithm without accessing sensitive user data.
  • Other considerations come into play as well, such as balancing the removal of misinformation with the protection of free speech.
  • To meet these challenges, Cen and Shah developed an auditing procedure that does not need more than black-box access to the social media algorithm (which respects trade secrets), does not remove content (which avoids issues of censorship), and does not require access to users (which preserves users’ privacy).
  • which is known to help reduce the spread of misinformation
  • In labor markets, for example, workers learn their preferences about what kinds of jobs they want, and employers learn their preferences about the qualifications they seek from workers.
  • But learning can be disrupted by competition
  • it is indeed possible to get to a stable outcome (workers aren’t incentivized to leave the matching market), with low regret (workers are happy with their long-term outcomes), fairness (happiness is evenly distributed), and high social welfare.
  • For instance, when Covid-19 cases surged in the pandemic, many cities had to decide what restrictions to adopt, such as mask mandates, business closures, or stay-home orders. They had to act fast and balance public health with community and business needs, public spending, and a host of other considerations.
  • But of course, no county exists in a vacuum.
  • These complex interactions matter,
  • “Accountability, legitimacy, trust — these principles play crucial roles in society and, ultimately, will determine which systems endure with time.” 
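One annotation above describes reaching a stable outcome in which workers aren’t incentivized to leave the matching market. The classic baseline for that notion of stability is the Gale-Shapley deferred-acceptance algorithm; below is a minimal sketch. The worker and employer names and preference lists are hypothetical, and the MIT work layers learning under uncertainty on top of this kind of matching — this only illustrates the underlying stability concept:

```python
# Deferred acceptance (Gale-Shapley): workers propose in preference order,
# employers tentatively accept and trade up when a preferred worker proposes.
def stable_match(worker_prefs, employer_prefs):
    """Each argument: dict mapping a name to a ranked list of names."""
    rank = {e: {w: i for i, w in enumerate(prefs)}
            for e, prefs in employer_prefs.items()}
    free = list(worker_prefs)               # workers still unmatched
    next_choice = {w: 0 for w in worker_prefs}
    match = {}                              # employer -> worker
    while free:
        w = free.pop()
        e = worker_prefs[w][next_choice[w]]  # w's best employer not yet tried
        next_choice[w] += 1
        if e not in match:
            match[e] = w                     # employer was free: accept
        elif rank[e][w] < rank[e][match[e]]:
            free.append(match[e])            # employer prefers w: swap
            match[e] = w
        else:
            free.append(w)                   # rejected: w proposes again later
    return {w: e for e, w in match.items()}

workers = {"ana": ["acme", "byte"], "bo": ["acme", "byte"]}
employers = {"acme": ["bo", "ana"], "byte": ["ana", "bo"]}
print(stable_match(workers, employers))  # {'bo': 'acme', 'ana': 'byte'}
```

No matched pair here would rather defect together, which is exactly the "workers aren’t incentivized to leave" property the annotation mentions.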
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • ...31 more annotations...
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic the capabilities of the human mind
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misunderstand that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

How thinking hard makes the brain tired | The Economist - 0 views

  • Mental labour can also be exhausting. Even resisting that last glistening chocolate-chip cookie after a long day at a demanding desk job is difficult. Cognitive control, the umbrella term encompassing mental exertion, self-control and willpower, also fades with effort.
  • unlike the mechanism of physical fatigue, the cause of cognitive fatigue has been poorly understood.
  • It posits that exerting cognitive control uses up energy in the form of glucose. At the end of a day spent intensely cogitating, the brain is metaphorically running on fumes. The problem with this version of events is that the energy cost associated with thinking is minimal.
  • ...8 more annotations...
  • To induce cognitive fatigue, a group of participants were asked to perform just over six hours of various tasks that involve thinking.
  • In other words, cognitive work results in chemical changes in the brain, which present behaviourally as fatigue. This, therefore, is a signal to stop working in order to restore balance to the brain.
  • a neurometabolic point of view. They hypothesise that cognitive fatigue results from an accumulation of a certain chemical in the region of the brain underpinning control. That substance, glutamate, is an excitatory neurotransmitter
  • Periodically, throughout the experiment, participants were asked to make decisions that could reveal their cognitive fatigue.
  • The time it takes for the pupil to subsequently dilate reflects the amount of mental effort exerted. The pupil-dilation times of participants assigned hard tasks fell off significantly as the experiment progressed.
  • During the experiment the scientists used a technique called magnetic-resonance spectroscopy to measure biochemical changes in the brain. In particular, they focused on the lateral prefrontal cortex, a region of the brain associated with cognitive control. If their hypothesis was to hold, there would be a measurable chemical difference between the brains of hard- and easy-task participants
  • Their analysis indicated higher concentrations of glutamate in the synapses of a hard-task participant’s lateral prefrontal cortex, showing that cognitive fatigue is associated with increased glutamate in the prefrontal cortex
  • There may well be ways to reduce the glutamate levels, and no doubt some researchers will now be looking at potions that might hack the brain in a way to artificially speed up its recovery from fatigue. Meanwhile, the best solution is the natural one: sleep
Javier E

Avoidance, not anxiety, may be sabotaging your life - The Washington Post - 0 views

  • Anxiety, for many people, is like an unwelcome houseguest — a lingering presence that causes tension, clouds the mind with endless “what ifs” and shows up as various physical sensations.
  • About 12 percent of U.S. adults regularly felt worry, nervousness or anxiety, according to a National Health Interview Survey conducted between October and December 2022.
  • Anxiety, though, is not the puppeteer pulling the strings in many of our lives. There is a more subtle and insidious marionette, and it’s called psychological avoidance. When we avoid certain situations and decisions, it can lead to heightened anxiety and more problems.
  • ...23 more annotations...
  • Psychological avoidance is akin to an ostrich burying its head in the sand, choosing ignorance over confrontation, all while a storm brews in the background.
  • depression and anxiety disorders cost the global economy $1 trillion each year in lost productivity.
  • avoidance, a strategy that not only fails to solve problems but fuels them.
  • Psychological avoidance isn’t about the actions we take or don’t take, but the intentions behind them. If our actions aim to squash discomfort hastily, then we’re probably avoiding
  • the three ways people tend to practice psychological avoidance.
  • Reacting
  • It’s when we reply hastily to an email that upsets us or raise our voices without considering the consequences.
  • Reacting is any response that seeks to eliminate the source of discomfort
  • Retreating
  • Retreating is the act of moving away or pulling back from anxiety-inducing situations
  • For example, my client with the fear of public speaking took a different job to avoid it. Others may reach for a glass of wine to numb out
  • Remaining
  • Remaining is sticking to the status quo to avoid the discomfort of change.
  • Psychological avoidance is a powerful enemy, but there are three science-based skills to fight it.
  • Shifting involves checking in with your thoughts, especially when anxiety comes knocking. In those moments, we often have black-and-white, distorted thoughts, just like my client, who was worried about being in a romantic relationship, telling himself, “I will never be in a good relationship.”
  • Shifting is taking off dark, monochrome glasses and seeing the world in color again. Challenge your thoughts, clean out your lenses, by asking yourself, “Would I say this to my best friend in this scenario?
  • Approaching
  • taking a step that feels manageable.
  • The opposite of avoiding is approaching
  • Ask yourself: What is one small step I can take toward my fears and anxiety to overcome my avoidance?
  • Aligning
  • Aligning is living a values-driven life, where our daily actions are aligned with what matters the most to us: our values.
  • This is the opposite of what most of us do while anxious. In moments of intense anxiety, we tend to let our emotions, not our values, dictate our actions. To live a values-driven life, we need to first identify our values, whether that is health, family, work or something else. Then we need to dedicate time and effort to our values.
Javier E

A Commencement Address Too Honest to Deliver in Person - The Atlantic - 0 views

  • Use this hiatus to do something you would never have done if this emergency hadn’t hit. When the lockdown lifts, move to another state or country. Take some job that never would have made sense if you were worrying about building a career—bartender, handyman, AmeriCorps volunteer.
  • If you use the next two years as a random hiatus, you may not wind up richer, but you’ll wind up more interesting.
  • The biggest way most colleges fail is this: They don’t plant the intellectual and moral seeds students are going to need later, when they get hit by the vicissitudes of life.
  • ...13 more annotations...
  • If you didn’t study Jane Austen while you were here, you probably lack the capacity to think clearly about making a marriage decision. If you didn’t read George Eliot, then you missed a master class on how to judge people’s character. If you didn’t read Nietzsche, you are probably unprepared to handle the complexities of atheism—and if you didn’t read Augustine and Kierkegaard, you’re probably unprepared to handle the complexities of faith.
  • The list goes on. If you didn’t read de Tocqueville, you probably don’t understand your own country. If you didn’t study Gibbon, you probably lack the vocabulary to describe the rise and fall of cultures and nations.
  • The wisdom of the ages is your inheritance; it can make your life easier. These resources often fail to get shared because universities are too careerist, or because faculty members are more interested in their academic specialties or politics than in teaching undergraduates, or because of a host of other reasons
  • What are you putting into your mind? Our culture spends a lot less time worrying about this, and when it does, it goes about it all wrong.
  • my worry is that, especially now that you’re out of college, you won’t put enough really excellent stuff into your brain.
  • I worry that it’s possible to grow up now not even aware that those upper registers of human feeling and thought exist.
  • The theory of maximum taste says that each person’s mind is defined by its upper limit—the best that it habitually consumes and is capable of consuming.
  • After college, most of us resolve to keep doing this kind of thing, but we’re busy and our brains are tired at the end of the day. Months and years go by. We get caught up in stuff, settle for consuming Twitter and, frankly, journalism. Our maximum taste shrinks.
  • I’m worried about the future of your maximum taste. People in my and earlier generations, at least those lucky enough to get a college education, got some exposure to the classics, which lit a fire that gets rekindled every time we sit down to read something really excellent.
  • the “theory of maximum taste.” This theory is based on the idea that exposure to genius has the power to expand your consciousness. If you spend a lot of time with genius, your mind will end up bigger and broader than if you spend your time only with run-of-the-mill stuff.
  • the whole culture is eroding the skill the UCLA scholar Maryanne Wolf calls “deep literacy,” the ability to deeply engage in a dialectical way with a text or piece of philosophy, literature, or art.
  • Or as the neurologist Richard Cytowic put it to Adam Garfinkle, “To the extent that you cannot perceive the world in its fullness, to the same extent you will fall back into mindless, repetitive, self-reinforcing behavior, unable to escape.”
  • I can’t say that to you, because it sounds fussy and elitist and OK Boomer. And if you were in front of me, you’d roll your eyes.
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • ...17 more annotations...
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model.
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AIs. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape so it's not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
Javier E

DeepMind uncovers structure of 200m proteins in scientific leap forward | DeepMind | Th... - 0 views

  • Highlighter
  • Proteins are the building blocks of life. Formed of chains of amino acids, folded up into complex shapes, their 3D structure largely determines their function. Once you know how a protein folds up, you can start to understand how it works, and how to change its behaviour
  • Although DNA provides the instructions for making the chain of amino acids, predicting how they interact to form a 3D shape was more tricky and, until recently, scientists had only deciphered a fraction of the 200m or so proteins known to science
  • ...7 more annotations...
  • In November 2020, the AI group DeepMind announced it had developed a program called AlphaFold that could rapidly predict this information using an algorithm. Since then, it has been crunching through the genetic codes of every organism that has had its genome sequenced, and predicting the structures of the hundreds of millions of proteins they collectively contain.
  • Last year, DeepMind published the protein structures for 20 species – including nearly all 20,000 proteins expressed by humans – on an open database. Now it has finished the job, and released predicted structures for more than 200m proteins.
  • “Essentially, you can think of it as covering the entire protein universe. It includes predictive structures for plants, bacteria, animals, and many other organisms, opening up huge new opportunities for AlphaFold to have an impact on important issues, such as sustainability, food insecurity, and neglected diseases,”
  • In May, researchers led by Prof Matthew Higgins at the University of Oxford announced they had used AlphaFold’s models to help determine the structure of a key malaria parasite protein, and work out where antibodies that could block transmission of the parasite were likely to bind.
  • “Previously, we’d been using a technique called protein crystallography to work out what this molecule looks like, but because it’s quite dynamic and moves around, we just couldn’t get to grips with it,” Higgins said. “When we took the AlphaFold models and combined them with this experimental evidence, suddenly it all made sense. This insight will now be used to design improved vaccines which induce the most potent transmission-blocking antibodies.”
  • AlphaFold’s models are also being used by scientists at the University of Portsmouth’s Centre for Enzyme Innovation, to identify enzymes from the natural world that could be tweaked to digest and recycle plastics. “It took us quite a long time to go through this massive database of structures, but opened this whole array of new three-dimensional shapes we’d never seen before that could actually break down plastics,” said Prof John McGeehan, who is leading the work. “There’s a complete paradigm shift. We can really accelerate where we go from here
  • “AlphaFold protein structure predictions are already being used in a myriad of ways. I expect that this latest update will trigger an avalanche of new and exciting discoveries in the months and years ahead, and this is all thanks to the fact that the data are available openly for all to use.”
Javier E

Opinion | David Brooks: I Was Wrong About Capitalism - The New York Times - 0 views

  • sometimes I’m just slow. I suffer an intellectual lag.
  • Reality has changed, but my mental frameworks just sit there. Worse, they prevent me from even seeing the change that is already underway — what the experts call “conceptual blindness.”
  • For a while this bet on free-market economic dynamism seemed to be paying off. It was the late 1980s and 1990s
  • ...6 more annotations...
  • In the early 1990s, The Journal sent me on many reporting trips to the U.S.S.R.
  • I saw but did not see the enormous amount of corruption that was going on. I saw but did not see that property rights alone do not spontaneously make a decent society. The primary problem in all societies is order — moral, legal and social order.
  • By the time I came to this job, in 2003, I was having qualms about the free-market education I’d received — but not fast enough. It took me a while to see that the postindustrial capitalism machine — while innovative, dynamic and wonderful in many respects — had some fundamental flaws.
  • The most educated Americans were amassing more and more wealth, dominating the best living areas, pouring advantages into their kids. A highly unequal caste system was forming. Bit by bit it dawned on me that the government would have to get much more active if every child was going to have an open field and a fair chance.
  • the financial crisis hit, the flaws in modern capitalism were blindingly obvious, but my mental frames still didn’t shift fast enough.
  • Sometimes in life you should stick to your worldview and defend it against criticism. But sometimes the world is genuinely different than it was before. At those moments the crucial skills are the ones nobody teaches you: how to reorganize your mind, how to see with new eyes.
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee,
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continues to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

How Kevins Got a Bad Rap in France | The New Yorker - 0 views

  • Traditionally, the bourgeoisie dictated the fashions for names, which then percolated down the social scale to the middle and working classes. By looking out, rather than up, for inspiration, the parents of Kevins—along with Brandons, Ryans, Jordans, and other pop-culture-inspired names that took off in France in the nineteen-nineties—asserted the legitimacy of their tastes and their unwillingness to continue taking cues from their supposed superiors
  • I asked Coulmont if he could think of any other first name that provoked such strong feelings. “The name Mohamed, perhaps,” he replied. “But Kevin triggers reactions from people who reject the cultural autonomy of the popular classes, while Mohamed triggers the reactions of xenophobes.”
  • Kévin Fafournoux received some three hundred messages from Kevins around the country, testifying to their ordeals. Some told of being shunned after introducing themselves at bars, or zapped on dating apps as soon as their names popped up. One Kevin, a psychologist, said that he had agonized about whether to put his full name on the plaque outside his office building. “We find people who have a sense of malaise and who have real problems,” Fafournoux said. Coulmont found that students named Kevin perform proportionally worse on the baccalaureate exam, not because of a stigma surrounding the name but because Kevins tend to be, as he put it, of a “lower social origin.”
  • ...2 more annotations...
  • a watchdog group, claimed that a candidate named Kevin had a ten to thirty per cent lower chance of being hired for a job than a competitor named Arthur.
  • Kevins also have a hard time in such countries as Germany, where the practice of name discrimination is referred to as “Kevinismus,” and an app that purports to help parents avoid it is called the Kevinometer.
Javier E

Heather Cox Richardson Wants You to Study History - The New York Times - 0 views

  • “Anybody who studies history learns two things: They learn to do research and they learn to write,” Richardson said. “The reason that matters now is that most people who are in college now are going to end up switching jobs a number of times in their careers.”
  • What history will give you is the ability to pivot into the different ideas, the different fields, the different careers as they arise.”
  • A historian will also know how to metabolize confounding situations, distill them to their essence and communicate that information to others
  • ...1 more annotation...
  • “It makes sense to recognize that these skills provide a tool kit for moving into the future in a way that I’m not entirely sure we always recognize.”
Javier E

Among the Disrupted - The New York Times - 0 views

  • even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science.
  • The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university,
  • So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods
  • ...27 more annotations...
  • The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy.
  • Greif’s book is a prehistory of our predicament, of our own “crisis of man.” (The “man” is archaic, the “crisis” is not.) It recognizes that the intellectual history of modernity may be written in part as the epic tale of a series of rebellions against humanism
  • We are not becoming transhumanists, obviously. We are too singular for the Singularity. But are we becoming posthumanists?
  • In American culture right now, as I say, the worldview that is ascendant may be described as posthumanism.
  • The posthumanism of the 1970s and 1980s was more insular, an academic affair of “theory,” an insurgency of professors; our posthumanism is a way of life, a social fate.
  • In “The Age of the Crisis of Man: Thought and Fiction in America, 1933-1973,” the gifted essayist Mark Greif, who reveals himself to be also a skillful historian of ideas, charts the history of the 20th-century reckonings with the definition of “man.”
  • Here is his conclusion: “Anytime your inquiries lead you to say, ‘At this moment we must ask and decide who we fundamentally are, our solution and salvation must lie in a new picture of ourselves and humanity, this is our profound responsibility and a new opportunity’ — just stop.” Greif seems not to realize that his own book is a lasting monument to precisely such inquiry, and to its grandeur
  • “Answer, rather, the practical matters,” he counsels, in accordance with the current pragmatist orthodoxy. “Find the immediate actions necessary to achieve an aim.” But before an aim is achieved, should it not be justified? And the activity of justification may require a “picture of ourselves.” Don’t just stop. Think harder. Get it right.
  • — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.
  • Who has not felt superior to humanism? It is the cheapest target of all: Humanism is sentimental, flabby, bourgeois, hypocritical, complacent, middlebrow, liberal, sanctimonious, constricting and often an alibi for power
  • what is humanism? For a start, humanism is not the antithesis of religion, as Pope Francis is exquisitely demonstrating
  • The worldview takes many forms: a philosophical claim about the centrality of humankind to the universe, and about the irreducibility of the human difference to any aspect of our animality
  • Here is a humanist proposition for the age of Google: The processing of information is not the highest aim to which the human spirit can aspire, and neither is competitiveness in a global economy. The character of our society cannot be determined by engineers.
  • And posthumanism? It elects to understand the world in terms of impersonal forces and structures, and to deny the importance, and even the legitimacy, of human agency.
  • There have been humane posthumanists and there have been inhumane humanists. But the inhumanity of humanists may be refuted on the basis of their own worldview
  • the condemnation of cruelty toward “man the machine,” to borrow the old but enduring notion of an 18th-century French materialist, requires the importation of another framework of judgment. The same is true about universalism, which every critic of humanism has arraigned for its failure to live up to the promise of a perfect inclusiveness
  • there has never been a universalism that did not exclude. Yet the same is plainly the case about every particularism, which is nothing but a doctrine of exclusion; and the correction of particularism, the extension of its concept and its care, cannot be accomplished in its own name. It requires an idea from outside, an idea external to itself, a universalistic idea, a humanistic idea.
  • Asking universalism to keep faith with its own principles is a perennial activity of moral life. Asking particularism to keep faith with its own principles is asking for trouble.
  • there is no more urgent task for American intellectuals and writers than to think critically about the salience, even the tyranny, of technology in individual and collective life
  • a methodological claim about the most illuminating way to explain history and human affairs, and about the essential inability of the natural sciences to offer a satisfactory explanation; a moral claim about the priority, and the universal nature, of certain values, not least tolerance and compassion
  • “Our very mastery seems to escape our mastery,” Michel Serres has anxiously remarked. “How can we dominate our domination; how can we master our own mastery?”
  • universal accessibility is not the end of the story, it is the beginning. The humanistic methods that were practiced before digitalization will be even more urgent after digitalization, because we will need help in navigating the unprecedented welter
  • Searches for keywords will not provide contexts for keywords. Patterns that are revealed by searches will not identify their own causes and reasons
  • The new order will not relieve us of the old burdens, and the old pleasures, of erudition and interpretation.
  • Is all this — is humanism — sentimental? But sentimentality is not always a counterfeit emotion. Sometimes sentiment is warranted by reality.
  • The persistence of humanism through the centuries, in the face of formidable intellectual and social obstacles, has been owed to the truth of its representations of our complexly beating hearts, and to the guidance that it has offered, in its variegated and conflicting versions, for a soulful and sensitive existence
  • a complacent humanist is a humanist who has not read his books closely, since they teach disquiet and difficulty. In a society rife with theories and practices that flatten and shrink and chill the human subject, the humanist is the dissenter.
Javier E

Opinion | How to be Human - The New York Times - 0 views

  • I have learned something profound along the way. Being openhearted is a prerequisite for being a full, kind and wise human being. But it is not enough. People need social skills
  • The real process of, say, building a friendship or creating a community involves performing a series of small, concrete actions well: being curious about other people; disagreeing without poisoning relationships; revealing vulnerability at an appropriate pace; being a good listener; knowing how to ask for and offer forgiveness; knowing how to host a gathering where everyone feels embraced; knowing how to see things from another’s point of view.
  • People want to connect. Above almost any other need, human beings long to have another person look into their faces with love and acceptance
  • ...68 more annotations...
  • we lack practical knowledge about how to give one another the attention we crave
  • Some days it seems like we have intentionally built a society that gives people little guidance on how to perform the most important activities of life.
  • If I can shine positive attention on others, I can help them to blossom. If I see potential in others, they may come to see potential in themselves. True understanding is one of the most generous gifts any of us can give to another.
  • I see the results, too, in the epidemic of invisibility I encounter as a journalist. I often find myself interviewing people who tell me they feel unseen and disrespected
  • I’ve been working on a book called “How to Know a Person: The Art of Seeing Others Deeply and Being Deeply Seen.” I wanted it to be a practical book — so that I would learn these skills myself, and also, I hope, teach people how to understand others, how to make them feel respected, valued and understood.
  • I wanted to learn these skills for utilitarian reasons
  • If I’m going to work with someone, I don’t just want to see his superficial technical abilities. I want to understand him more deeply — to know whether he is calm in a crisis, comfortable with uncertainty or generous to colleagues.
  • I wanted to learn these skills for moral reasons
  • Many of the most productive researchers were in the habit of having breakfast or lunch with an electrical engineer named Harry Nyquist. Nyquist really listened to their challenges, got inside their heads, brought out the best in them. Nyquist, too, was an illuminator.
  • Finally, I wanted to learn these skills for reasons of national survival
  • We evolved to live with small bands of people like ourselves. Now we live in wonderfully diverse societies, but our social skills are inadequate for the divisions that exist. We live in a brutalizing time.
  • In any collection of humans, there are diminishers and there are illuminators. Diminishers are so into themselves, they make others feel insignificant
  • They stereotype and label. If they learn one thing about you, they proceed to make a series of assumptions about who you must be.
  • Illuminators, on the other hand, have a persistent curiosity about other people.
  • They have been trained or have trained themselves in the craft of understanding others. They know how to ask the right questions at the right times — so that they can see things, at least a bit, from another’s point of view. They shine the brightness of their care on people and make them feel bigger, respected, lit up.
  • A biographer of the novelist E.M. Forster wrote, “To speak with him was to be seduced by an inverse charisma, a sense of being listened to with such intensity that you had to be your most honest, sharpest, and best self.” Imagine how good it would be to offer people that kind of hospitality.
  • social clumsiness I encounter too frequently. I’ll be leaving a party or some gathering and I’ll realize: That whole time, nobody asked me a single question. I estimate that only 30 percent of the people in the world are good question askers. The rest are nice people, but they just don’t ask. I think it’s because they haven’t been taught to and so don’t display basic curiosity about others.
  • Many years ago, patent lawyers at Bell Labs were trying to figure out why some employees were much more productive than others.
  • Illuminators are a joy to be around
  • The gift of attention.
  • Each of us has a characteristic way of showing up in the world. A person who radiates warmth will bring out the glowing sides of the people he meets, while a person who conveys formality can meet the same people and find them stiff and detached. “Attention,” the psychiatrist Iain McGilchrist writes, “is a moral act: It creates, brings aspects of things into being.”
  • When Jimmy sees a person — any person — he is seeing a creature with infinite value and dignity, made in the image of God. He is seeing someone so important that Jesus was willing to die for that person.
  • Accompaniment.
  • Accompaniment is an other-centered way of being with people during the normal routines of life.
  • If we are going to accompany someone well, we need to abandon the efficiency mind-set. We need to take our time and simply delight in another person’s way of being
  • I know a couple who treasure friends who are what they call “lingerable.” These are the sorts of people who are just great company, who turn conversation into a form of play and encourage you to be yourself. It’s a great talent, to be lingerable.
  • Other times, a good accompanist does nothing more than practice the art of presence, just being there.
  • The art of conversation.
  • If you tell me something important and then I paraphrase it back to you, what psychologists call “looping,” we can correct any misimpressions that may exist between us.
  • Be a loud listener. When another person is talking, you want to be listening so actively you’re burning calories.
  • He’s continually responding to my comments with encouraging affirmations, with “amen,” “aha” and “yes!” I love talking to that guy.
  • I no longer ask people: What do you think about that? Instead, I ask: How did you come to believe that? That gets them talking about the people and experiences that shaped their values.
  • Storify whenever possible
  • People are much more revealing and personal when they are telling stories.
  • Do the looping, especially with adolescents
  • If you want to know how the people around you see the world, you have to ask them. Here are a few tips I’ve collected from experts on how to become a better conversationalist:
  • Turn your partner into a narrator
  • People don’t go into enough detail when they tell you a story. If you ask specific follow-up questions — Was your boss screaming or irritated when she said that to you? What was her tone of voice? — then they will revisit the moment in a more concrete way and tell a richer story
  • If somebody tells you he is having trouble with his teenager, don’t turn around and say: “I know exactly what you mean. I’m having incredible problems with my own Susan.” You may think you’re trying to build a shared connection, but what you are really doing is shifting attention back to yourself.
  • Don’t be a topper
  • Big questions.
  • The quality of your conversations will depend on the quality of your questions
  • As adults, we get more inhibited with our questions, if we even ask them at all. I’ve learned we’re generally too cautious. People are dying to tell you their stories. Very often, no one has ever asked about them.
  • So when I first meet people, I tend to ask them where they grew up. People are at their best when talking about their childhoods. Or I ask where they got their names. That gets them talking about their families and ethnic backgrounds.
  • After you’ve established trust with a person, it’s great to ask 30,000-foot questions, ones that lift people out of their daily vantage points and help them see themselves from above.
  • These are questions like: What crossroads are you at? Most people are in the middle of some life transition; this question encourages them to step back and describe theirs
  • I’ve learned it’s best to resist this temptation. My first job in any conversation across difference or inequality is to stand in other people’s standpoint and fully understand how the world looks to them. I’ve found it’s best to ask other people three separate times and in three different ways about what they have just said. “I want to understand as much as possible. What am I missing here?”
  • Can you be yourself where you are and still fit in? And: What would you do if you weren’t afraid? Or: If you died today, what would you regret not doing?
  • “What have you said yes to that you no longer really believe in?”
  • “What is the no, or refusal, you keep postponing?”
  • “What is the gift you currently hold in exile?,” meaning, what talent are you not using
  • “Why you?” Why was it you who started that business? Why was it you who ran for school board? She wants to understand why a person felt the call of responsibility. She wants to understand motivation.
  • “How do your ancestors show up in your life?” But it led to a great conversation in which each of us talked about how we’d been formed by our family heritages and cultures. I’ve come to think of questioning as a moral practice. When you’re asking good questions, you’re adopting a posture of humility, and you’re honoring the other person.
  • Stand in their standpoint
  • I used to feel the temptation to get defensive, to say: “You don’t know everything I’m dealing with. You don’t know that I’m one of the good guys here.”
  • If the next five years is a chapter in your life, what is the chapter about?
  • every conversation takes place on two levels
  • The official conversation is represented by the words we are saying on whatever topic we are talking about. The actual conversations occur amid the ebb and flow of emotions that get transmitted as we talk. With every comment I am showing you respect or disrespect, making you feel a little safer or a little more threatened.
  • If we let fear and a sense of threat build our conversation, then very quickly our motivations will deteriorate
  • If, on the other hand, I show persistent curiosity about your viewpoint, I show respect. And as the authors of “Crucial Conversations” observe, in any conversation, respect is like air. When it’s present nobody notices it, and when it’s absent it’s all anybody can think about.
  • the novelist and philosopher Iris Murdoch argued that the essential moral skill is being considerate to others in the complex circumstances of everyday life. Morality is about how we interact with each other minute by minute.
  • I used to think the wise person was a lofty sage who doled out life-altering advice in the manner of Yoda or Dumbledore or Solomon. But now I think the wise person’s essential gift is tender receptivity.
  • The illuminators offer the privilege of witness. They take the anecdotes, rationalizations and episodes we tell and see us in a noble struggle. They see the way we’re navigating the dialectics of life — intimacy versus independence, control versus freedom — and understand that our current selves are just where we are right now on our long continuum of growth.
  • The really good confidants — the people we go to when we are troubled — are more like coaches than philosopher kings.
  • They take in your story, accept it, but prod you to clarify what it is you really want, or to name the baggage you left out of your clean tale.
  • They’re not here to fix you; they are here simply to help you edit your story so that it’s more honest and accurate. They’re here to call you by name, as beloved
  • They see who you are becoming before you do and provide you with a reputation you can then go live into.
  • there has been a comprehensive shift in my posture. I think I’m more approachable, vulnerable. I know more about human psychology than I used to. I have a long way to go, but I’m evidence that people can change, sometimes dramatically, even in middle and older age.
Javier E

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App W... - 0 views

  • The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.
  • For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules.
  • “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants. “She said if the only thing that happens is they get blocked, why wouldn’t they?”
  • ...39 more annotations...
  • For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences
  • The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.
  • “I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working.
  • During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages. 
  • Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision.
  • Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them. 
  • Meta’s own statistics suggested that big problems didn’t exist. 
  • Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content—things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material.
  • While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules. 
  • The outperformance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed
  • “Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting his comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
  • Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group. 
  • “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded
  • Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
  • Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced.
  • According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. 
  • “There’s a grading-your-own-homework problem,”
  • Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
  • It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical
  • the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.”
  • A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should.
  • “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.”
  • they seemed particularly common among teens on Instagram.
  • Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity
  • More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days. 
  • The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued
  • To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw.
  • Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it
  • And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
  • One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”
  • But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working. 
  • After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data. 
  • “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.”
  • “Policy enforcement is analogous to the police,” he wrote in the email Oct. 5, 2021—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.
  • After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication.
  • Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem.
  • “I had to write about it as a hypothetical,” Bejar said. Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.
  • The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.”
  • If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short. 
Javier E

AI could end independent UK news, Mail owner warns - 0 views

  • Artificial intelligence could destroy independent news organisations in Britain and potentially is an “existential threat to democracy”, the executive chairman of DMGT has warned.
  • “They have basically taken all our data, without permission and without even a consideration of the consequences. They are using it to train their models and to start producing content. They’re commercialising it,
  • AI had the potential to destroy independent news organisations “by ripping off all our content and then repurposing it to people … without any responsibility for the efficacy of that content”
  • ...4 more annotations...
  • The risk was that the internet had become an echo chamber of stories produced by special interest groups and rogue states, he said.
  • The danger is that these huge platforms end up in an arms race with each other. They’re like elephants fighting and then everybody else is like mice that get stamped on without them even realising the consequences of their actions.”
  • there are huge consequences to this technology. And it’s not just the danger of ripping our industry apart, but also ripping other industries apart, all the creative industries. How many jobs are going to be lost? What’s the damage to the economy going to be if these rapacious organisations can continue to operate without any legal ramifications?
  • Rothermere revealed that DMGT had experimented with using AI to help journalists to publish stories faster, but that it then took longer “to check the accuracy of what it comes up” than it would have done to write the article.