
Home/ TOK Friends/ Group items tagged apocalypse


Javier E

Relax, the Apocalypse Isn't on Friday - It's Just a 'Reset' - Eugen Tomiuc - The Atlantic

  • To what extent is this reflecting people's need for balance between scientific fact and religious reassurance?
  • There is some sort of desire to make a synthesis of both spiritual ideas and also science. And you can trace that back to theosophy at the turn of the last century. So I think there is a synthesis here of both science and spirituality which in our age is something that people want. They want to somehow be able to cross that great divide -- what [British scientist and novelist] C.P. Snow called 'the two cultures' -- and try to find meaning in both worlds.
knudsenlu

You Are Already Living Inside a Computer - The Atlantic

  • Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers.
  • Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.
  • And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.
  • Computers already are predominant, human life already takes place mostly within them, and people are satisfied with the results.
  • These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system.
  • Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.
  • Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do.
  • But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior.
  • One such affection is the pleasure of connectivity. You don’t want to be offline. Why would you want your toaster or doorbell to suffer the same fate? Today, computational absorption is an ideal. The ultimate dream is to be online all the time, or at least connected to a computational machine of some kind.
  • “Being a computer” means something different today than in 1950, when Turing proposed the imitation game. Contra the technical prerequisites of artificial intelligence, acting like a computer often involves little more than moving bits of data around, or acting as a controller or actuator. Grill as computer, bike lock as computer, television as computer. An intermediary.
  • Or consider doorbells once more. Forget Ring; the doorbell has already retired in favor of the computer. When my kids’ friends visit, they just text a request to come open the door. The doorbell has become computerized without even being connected to an app or to the internet. Call it “disruption” if you must, but doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers, where they can produce new affections.
  • Why would anyone ever choose a solution that doesn’t involve computers, when computers are available? Propane tanks and bike locks are still edge cases, but ordinary digital services work similarly: The services people seek out are the ones that allow them to use computers to do things—from finding information to hailing a cab to ordering takeout. This is a feat of aesthetics as much as it is one of business. People choose computers as intermediaries for the sensual delight of using computers, not just as practical, efficient means for solving problems.
  • This is not where anyone thought computing would end up. Early dystopic scenarios cautioned that the computer could become a bureaucrat or a fascist, reducing human behavior to the predetermined capacities of a dumb machine. Or else, that obsessive computer use would be deadening, sucking humans into narcotic detachment. Those fears persist to some extent, partly because they have been somewhat realized. But they have also been inverted: now it is being away from computers that feels deadening, not endless attachment to them. And thus the actions computers take become self-referential: to turn more and more things into computers, to prolong that connection.
  • But the real present status of intelligent machines is both humdrum and more powerful than any future robot apocalypse. Turing is often called the father of AI, but he only implied that machines might become compelling enough to inspire interaction. That hardly counts as intelligence, artificial or real. It’s also far easier to achieve. Computers already have persuaded people to move their lives inside of them. The machines didn’t need to make people immortal, or promise to serve their every whim, or to threaten to destroy them absent assent. They just needed to become a sufficient part of everything human beings do such that they can’t—or won’t—imagine doing those things without them.
  • The real threat of computers isn’t that they might overtake and destroy humanity with their future power and intelligence. It’s that they might remain just as ordinary and impotent as they are today, and yet overtake us anyway.
katherineharron

Shouting into the apocalypse: The decade in climate change (opinion) - CNN

  • What's that worn-out phrase? Shouting into the wind? Well, after a decade of rising pollution, failed politics and worsening disasters, it seems the many, many of us who care about the climate crisis increasingly are shouting into the hurricane, if not the apocalypse.
  • On the cusp of 2020, the state of the planet is far more dire than in 2010. Preserving a safe and healthy ecological system is no longer a realistic possibility. Now, we're looking at less bad options, conceding that the virtual end of coral reefs, the drowning of some island nations, the worsening of already-devastating storms and the displacement of millions seem close to inevitable. The climate crisis is already costly, deadly and deeply unjust, putting at terrible risk the most vulnerable people in the world, often those who've done the least to cause it.
  • There are two numbers you need to understand to put this moment in perspective. The first is 1.5. The Paris Agreement -- the international treaty on climate change, which admittedly is in trouble, but also is the best thing we've got -- sets the goal of limiting warming to 1.5 or, at most, well below 2 degrees Celsius.
  • Worldwide fossil fuel emissions are expected to be up 0.6% in 2019 over 2018, according to projections from the Global Carbon Project. In the past decade, humans have put more than 350 metric gigatons of carbon dioxide into the atmosphere from burning fossil fuels and other industrial processes, according to calculations provided by the World Resources Institute.
  • Meanwhile, scientists are becoming even more concerned about tipping points in the climate system that could lead to rapid rise in sea levels, the deterioration of the Amazon and so on. One particularly frightening commentary last month in the journal Nature, by several notable climate scientists, says the odds we can avoid tipping points in the climate system "could already have shrunk towards zero." In non-science-speak: We're there now.
  • This was the decade when some people finally started to see the climate crisis as personal. Climate attribution science, which looks for human fingerprints on extreme weather events, made its way into the popular imagination. We're starting to realize there are no truly "natural" disasters anymore. We've warmed the climate, and we're already making storms riskier.
  • The news media is picking that up, using terms such as "climate emergency" and "climate crisis" instead of the blander "climate change." Increasingly, lots of people are making these critical connections, which should motivate the political, social and economic revolution necessary to fix things.
  • Only 52% of American adults say they are "very" or "extremely" sure global warming is happening, according to a report from the Yale Program on Climate Change Communication and the George Mason University Center for Climate Change Communication, which is based on a 1,303-person survey conducted in November 2019. Yale's been asking that question for a while now. Go back a decade, to 2009, and the rate is about the same: 51%.
  • The bright spot -- and it truly is a bright one -- is that young people are waking up. They are shouting, loudly and with purpose. Witness Greta Thunberg, the dynamic teenager who started a one-girl protest outside the Swedish Parliament last year, demanding that adults take seriously this emergency, which threatens young people and future generations disproportionately.
Javier E

Google's new media apocalypse: How the search giant wants to accelerate the end of the ...

  • Google is announcing that it wants to cut out the middleman—that is to say, other websites—and serve you content within its own lovely little walled garden. That sound you just heard was a bunch of media publishers rushing to book an extra appointment with their shrink.
  • Back when search, and not social media, ruled the internet, Google was the sun around which the news industry orbited. Getting to the top of Google’s results was the key that unlocked buckets of page views. Outlet after outlet spent countless hours trying to figure out how to game Google’s prized, secretive algorithm. Whole swaths of the industry were killed instantly if Google tweaked the algorithm.
  • Facebook is now the sun. Facebook is the company keeping everyone up at night. Facebook is the place shaping how stories get chosen, how they get written, how they are packaged and how they show up on its site. And Facebook does all of this with just as much secrecy and just as little accountability as Google did.
  • Facebook just opened up its Instant Articles feature to all publishers. The feature allows external outlets to publish their content directly onto Facebook’s platform, eliminating that pesky journey to their actual website. They can either place their own ads on the content or join a revenue-sharing program with Facebook. Facebook has touted this plan as one which provides a better user experience and has noted the ability for publishers to create ads on the platform as well.
  • The benefit to Facebook is obvious: It gets to keep people inside its house. They don’t have to leave for even a second. The publisher essentially has to accept this reality, sigh about the gradual death of websites and hope that everything works out on the financial side.
  • It’s all part of a much bigger story: that of how the internet, that supposed smasher of gates and leveler of playing fields, has coalesced around a mere handful of mega-giants in the space of just a couple of decades. The gates didn’t really come down. The identities of the gatekeepers just changed. Google, Facebook, Apple, Amazon.
maxwellokolo

NASA launched a superbug into space

  • Before you start to worry, this isn't a sign of an impending apocalypse. Working in conjunction with NASA, lead researcher Dr. Anita Goel hopes that by sending MRSA bacteria to a zero-gravity environment, we can better understand how superbugs mutate to become resistant to available antibiotics.
Javier E

But What Would the End of Humanity Mean for Me? - James Hamblin - The Atlantic

  • Tegmark is more worried about much more immediate threats, which he calls existential risks. That’s a term borrowed from physicist Nick Bostrom, director of Oxford University’s Future of Humanity Institute, a research collective modeling the potential range of human expansion into the cosmos.
  • "I am finding it increasingly plausible that existential risk is the biggest moral issue in the world, even if it hasn’t gone mainstream yet,"
  • Existential risks, as Tegmark describes them, are things that are “not just a little bit bad, like a parking ticket, but really bad. Things that could really mess up or wipe out human civilization.”
  • The single existential risk that Tegmark worries about most is unfriendly artificial intelligence. That is, when computers are able to start improving themselves, there will be a rapid increase in their capacities, and then, Tegmark says, it’s very difficult to predict what will happen.
  • Tegmark told Lex Berko at Motherboard earlier this year, "I would guess there’s about a 60 percent chance that I’m not going to die of old age, but from some kind of human-caused calamity. Which would suggest that I should spend a significant portion of my time actually worrying about this. We should in society, too."
  • "Longer term—and this might mean 10 years, it might mean 50 or 100 years, depending on who you ask—when computers can do everything we can do," Tegmark said, “after that they will probably very rapidly get vastly better than us at everything, and we’ll face this question we talked about in the Huffington Post article: whether there’s really a place for us after that, or not.”
  • "This is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”
  • Tegmark and his op-ed co-author Frank Wilczek, the Nobel laureate, draw examples of cold-war automated systems that assessed threats and resulted in false alarms and near misses. “In those instances some human intervened at the last moment and saved us from horrible consequences,” Wilczek told me earlier that day. “That might not happen in the future.”
  • there are still enough nuclear weapons in existence to incinerate all of Earth’s dense population centers, but that wouldn't kill everyone immediately. The smoldering cities would send sun-blocking soot into the stratosphere that would trigger a crop-killing climate shift, and that’s what would kill us all
  • “We are very reckless with this planet, with civilization,” Tegmark said. “We basically play Russian roulette.” The key is to think more long term, “not just about the next election cycle or the next Justin Bieber album.”
  • “There are several issues that arise, ranging from climate change to artificial intelligence to biological warfare to asteroids that might collide with the earth,” Wilczek said of the group’s launch. “They are very serious risks that don’t get much attention.”
  • a widely perceived issue is when intelligent entities start to take on a life of their own. They revolutionized the way we understand chess, for instance. That’s pretty harmless. But one can imagine if they revolutionized the way we think about warfare or finance, either those entities themselves or the people that control them. It could pose some disquieting perturbations on the rest of our lives.”
  • Wilczek’s particularly concerned about a subset of artificial intelligence: drone warriors. “Not necessarily robots,” Wilczek told me, “although robot warriors could be a big issue, too. It could just be superintelligence that’s in a cloud. It doesn’t have to be embodied in the usual sense.”
  • it’s important not to anthropomorphize artificial intelligence. It's best to think of it as a primordial force of nature—strong and indifferent. In the case of chess, an A.I. models chess moves, predicts outcomes, and moves accordingly. If winning at chess meant destroying humanity, it might do that.
  • Even if programmers tried to program an A.I. to be benevolent, it could destroy us inadvertently. Andersen’s example in Aeon is that an A.I. designed to try and maximize human happiness might think that flooding your bloodstream with heroin is the best way to do that.
  • “It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.”
  • Even within A.I. research, Tegmark admits, “There is absolutely not a consensus that we should be concerned about this.” But there is a lot of concern, and sense of lack of power. Because, concretely, what can you do? “The thing we should worry about is that we’re not worried.”
  • Tegmark brings it to Earth with a case-example about purchasing a stroller: If you could spend more for a good one or less for one that “sometimes collapses and crushes the baby, but nobody’s been able to prove that it is caused by any design flaw. But it’s 10 percent off! So which one are you going to buy?”
  • “There are seven billion of us on this little spinning ball in space. And we have so much opportunity," Tegmark said. "We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out.”
  • Ninety-nine percent of the species that have lived on Earth have gone extinct; why should we not? Seeing the biggest picture of humanity and the planet is the heart of this. It’s not meant to be about inspiring terror or doom. Sometimes that is what it takes to draw us out of the little things, where in the day-to-day we lose sight of enormous potentials.
Javier E

The Unrealized Horrors of Population Explosion - NYTimes.com

  • No one was more influential — or more terrifying, some would say — than Paul R. Ehrlich, a Stanford University biologist. His 1968 book, “The Population Bomb,” sold in the millions with a jeremiad that humankind stood on the brink of apocalypse because there were simply too many of us. Dr. Ehrlich’s opening statement was the verbal equivalent of a punch to the gut: “The battle to feed all of humanity is over.” He later went on to forecast that hundreds of millions would starve to death in the 1970s, that 65 million of them would be Americans, that crowded India was essentially doomed, that odds were fair “England will not exist in the year 2000.” Dr. Ehrlich was so sure of himself that he warned in 1970 that “sometime in the next 15 years, the end will come.” By “the end,” he meant “an utter breakdown of the capacity of the planet to support humanity.”
  • After the passage of 47 years, Dr. Ehrlich offers little in the way of a mea culpa. Quite the contrary. Timetables for disaster like those he once offered have no significance, he told Retro Report, because to someone in his field they mean something “very, very different” from what they do to the average person. The end is still nigh, he asserted, and he stood unflinchingly by his 1960s insistence that population control was required, preferably through voluntary methods. But if need be, he said, he would endorse “various forms of coercion” like eliminating “tax benefits for having additional children.”
  • Stewart Brand, founding editor of the Whole Earth Catalog. On this topic, Mr. Brand may be deemed a Keynesian, in the sense of an observation often attributed to John Maynard Keynes: “When the facts change, I change my mind, sir. What do you do?” Mr. Brand’s formulation for Retro Report was to ask, “How many years do you have to not have the world end” to reach a conclusion that “maybe it didn’t end because that reason was wrong?”
  • One thing that happened on the road to doom was that the world figured out how to feed itself despite its rising numbers. No small measure of thanks belonged to Norman E. Borlaug, an American plant scientist whose breeding of high-yielding, disease-resistant crops led to the agricultural savior known as the Green Revolution.
  • Some preternaturally optimistic analysts concluded that humans would always find their way out of tough spots. Among them was Julian L. Simon, an economist who established himself as the anti-Ehrlich, arguing that “humanity’s condition will improve in just about every material way.”
  • In fact, birthrates are now below long-term replacement levels, or nearly so, across much of Earth, not just in the industrialized West and Japan but also in India, China, much of Southeast Asia, Latin America -- just about everywhere except Africa, although even there the continentwide rates are declining. “Girls that are never born cannot have babies.”
  • Because of improved health standards, birthing many children is not the survival imperative for families that it once was. In cramped cities, large families are not the blessing they were in the agricultural past. And women in many societies are ever more independent, socially and economically; they no longer accept that their fate is to be endlessly pregnant. If anything, the worry in many countries is that their populations are aging and that national vitality is ebbing.
  • Still, enough people are already around to ensure that the world’s population will keep rising. But for how long? That is a devilishly difficult question. One frequently cited demographic model by the United Nations envisions a peak of about nine billion around 2050. Other forecasts are for continued growth into the next century. Still others say the population will begin to drop before the middle of this century.
  • In Mr. Pearce’s view, the villain is not overpopulation but, rather, overconsumption. “We can survive massive demographic change,” he said in 2011. But he is less sanguine about the overuse of available resources and its effects on climate change.
  • “Rising consumption today far outstrips the rising head count as a threat to the planet,” Mr. Pearce wrote in Prospect, a British magazine, in 2010. “And most of the extra consumption has been in rich countries that have long since given up adding substantial numbers to their population,
  • “Let’s look at carbon dioxide emissions, the biggest current concern because of climate change,” he continued. “The world’s richest half billion people — that’s about 7 percent of the global population — are responsible for half of the world’s carbon dioxide emissions. Meanwhile, the poorest 50 percent of the population are responsible for just 7 percent of emissions.”
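The per-capita disparity implied by those quoted shares can be checked with a few lines of arithmetic. The 7% and 50% figures come straight from the excerpt above; everything else here is derived from them:

```python
# Check the per-capita disparity implied by the quoted emission shares:
# the richest 7% of people account for 50% of CO2 emissions, while the
# poorest 50% account for just 7%.
rich_pop_share, rich_emission_share = 0.07, 0.50
poor_pop_share, poor_emission_share = 0.50, 0.07

# Emissions per person, expressed as multiples of the global average.
rich_per_capita = rich_emission_share / rich_pop_share  # about 7.1x the average
poor_per_capita = poor_emission_share / poor_pop_share  # about 0.14x the average

ratio = rich_per_capita / poor_per_capita
print(round(ratio))  # -> 51
```

In other words, the quoted shares imply that an average member of the richest half billion emits roughly fifty times as much carbon dioxide as an average member of the poorest half of humanity.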
Javier E

HBO's 'Years and Years' and the Numbness of Survival - The Atlantic

  • the thing that struck me most about the Lyonses wasn’t that they sighed and did nothing while the world around them disintegrated into disease and disinformation. It was that—mostly—they survived. The more things happened to them, the harder they clung to life and to one another. Most dystopian narratives deal with one kind of unimaginable crisis: a zombie apocalypse, a totalitarian regime, a terrifying disease
  • Years and Years, instead, shows how the alienation and paralysis sparked by a decade-plus of constant calamity are also symptoms of a kind of resilience. Human nature is to panic, to agonize, to fret and lose sleep and weep. Inevitably, though, it’s also to adapt.
  • The cost of getting through crisis after crisis, the show suggests, is numbness. “Emotion is a luxury,” Governor Andrew Cuomo of New York said in his daily press conference on Thursday morning. “We don’t have ... [that] luxury. Let’s just get through it.”
Javier E

The Coming Software Apocalypse - The Atlantic

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
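The Intrado pattern described above -- software failing precisely because it did what a mis-specified rule told it to -- can be sketched in a few lines. Everything here is a hypothetical illustration (the names, the cap, and the `Dispatcher` class are invented for this sketch, not Intrado's actual code):

```python
# Hypothetical sketch of a threshold bug of the kind described above.
# Each routed call gets a counter value with a hard-coded cap; once the
# cap is reached, new calls are silently rejected. The code does exactly
# what it was told to do -- the specification itself was wrong.

CALL_ID_LIMIT = 40_000_000  # arbitrary cap chosen at design time (invented)

class Dispatcher:
    def __init__(self):
        self.next_call_id = 0

    def route_call(self, caller):
        if self.next_call_id >= CALL_ID_LIMIT:
            # No automated corrective action was designed for this branch.
            return None  # call silently dropped
        self.next_call_id += 1
        return f"routed {caller} as call #{self.next_call_id}"

d = Dispatcher()
d.next_call_id = CALL_ID_LIMIT  # simulate the counter reaching its cap
print(d.route_call("911 caller"))  # -> None: every new call is now dropped
```

Nothing "breaks" in the mechanical sense: each branch executes perfectly, which is why the traditional reliable-parts-plus-redundancy framework has so little purchase on failures like this.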
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class in electrical engineering at the California Institute of Technology.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible.
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, the code already there.
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it.
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
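The single-bit-flip failure mode is easy to demonstrate in isolation. This is a generic illustration of memory corruption, not the Toyota code; the 8-bit throttle command is an invented example:

```python
def flip_bit(value: int, bit: int) -> int:
    """Return value with one bit inverted (XOR with a single-bit mask)."""
    return value ^ (1 << bit)

# An 8-bit throttle command: 2 out of 255, i.e. nearly closed.
throttle = 0b0000_0010
# A memory fault flips the high bit: now 130 out of 255, mostly open.
corrupted = flip_bit(throttle, 7)
print(throttle, corrupted)  # -> 2 130
```

Note that fail-safe code which only checks for out-of-range values would miss this: 130 is a perfectly valid throttle command, just not the intended one.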
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
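Stripped to its essence, the “liveness” loop these tools share is small: watch the source, re-execute on every change, show the result. A minimal Python sketch of that loop (hypothetical, not how Khan Academy or Playgrounds is actually implemented):

```python
import os
import time

def run_source(source, env=None):
    """Execute program text and return the namespace it produced."""
    env = {} if env is None else env
    exec(compile(source, "<live>", "exec"), env)
    return env

def watch_and_run(path, interval=0.5):
    """Re-run the file at `path` every time it changes on disk,
    so an edit is reflected immediately: the crude core of a
    live-programming environment."""
    last_mtime = None
    while True:
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:
            last_mtime = mtime
            with open(path) as f:
                try:
                    run_source(f.read())
                except Exception as exc:
                    # A half-finished edit should not kill the loop.
                    print("edit error:", exc)
        time.sleep(interval)
```

Real environments go much further (rendering output side by side, preserving state across edits), but the feedback loop is the same.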
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
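The arithmetic behind that claim is easy to check with made-up but plausible numbers (these are illustrative, not from Newcombe’s paper): a scenario that hits one request in a billion stops being rare at all at a million requests per second.

```python
p = 1e-9                        # per-request probability of the "rare" scenario
rate = 1_000_000                # requests per second
requests = rate * 86_400        # one day of traffic

expected_hits = p * requests                  # ~86 occurrences per day
p_at_least_once = 1 - (1 - p) ** requests     # effectively certain
```

Intuition says a one-in-a-billion case is ignorable; at this scale it is expected dozens of times a day.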
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
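What “completely verified” means in practice is exhaustive exploration: the checker visits every reachable state of the model and tests the requirement in each one. A toy Python version of that idea (the lock example and all names are illustrative, not TLA+ syntax):

```python
from collections import deque

def check(initial, next_states, invariant):
    """Exhaustively explore every reachable state; return a state that
    violates the invariant, or None if the invariant always holds."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state                      # a counterexample
        for successor in next_states(state):
            if successor not in seen:
                seen.add(successor)
                frontier.append(successor)
    return None

# Toy spec: two clients contending for one lock. Each is "idle",
# "waiting", or "holding"; a waiter acquires only if the other client
# is not holding, and anything else backs off to idle.
def lock_steps(state):
    successors = []
    for i in (0, 1):
        other = state[1 - i]
        s = list(state)
        if s[i] == "idle":
            s[i] = "waiting"
        elif s[i] == "waiting" and other != "holding":
            s[i] = "holding"
        else:
            s[i] = "idle"                     # release or back off
        successors.append(tuple(s))
    return successors

# The safety requirement: both clients never hold the lock at once.
counterexample = check(("idle", "idle"), lock_steps,
                       lambda s: s != ("holding", "holding"))
```

Here `counterexample` comes back `None`: no reachable interleaving violates the requirement, a far stronger statement than any number of passing test runs. Real TLA+ adds temporal properties and a concise language for writing such models.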
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
Javier E

Accelerationism: how a fringe philosophy predicted the future we live in | World news |... - 1 views

  • Roger Zelazny, published his third novel. In many ways, Lord of Light was of its time, shaggy with imported Hindu mythology and cosmic dialogue. Yet there were also glints of something more forward-looking and political.
  • accelerationism has gradually solidified from a fictional device into an actual intellectual movement: a new way of thinking about the contemporary world and its potential.
  • Accelerationists argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative.
  • ...31 more annotations...
  • Accelerationists favour automation. They favour the further merging of the digital and the human. They often favour the deregulation of business, and drastically scaled-back government. They believe that people should stop deluding themselves that economic and technological progress can be controlled.
  • Accelerationism, therefore, goes against conservatism, traditional socialism, social democracy, environmentalism, protectionism, populism, nationalism, localism and all the other ideologies that have sought to moderate or reverse the already hugely disruptive, seemingly runaway pace of change in the modern world
  • Robin Mackay and Armen Avanessian in their introduction to #Accelerate: The Accelerationist Reader, a sometimes baffling, sometimes exhilarating book, published in 2014, which remains the only proper guide to the movement in existence.
  • “We all live in an operating system set up by the accelerating triad of war, capitalism and emergent AI,” says Steve Goodman, a British accelerationist
  • A century ago, the writers and artists of the Italian futurist movement fell in love with the machines of the industrial era and their apparent ability to invigorate society. Many futurists followed this fascination into war-mongering and fascism.
  • One of the central figures of accelerationism is the British philosopher Nick Land, who taught at Warwick University in the 1990s
  • Land has published prolifically on the internet, not always under his own name, about the supposed obsolescence of western democracy; he has also written approvingly about “human biodiversity” and “capitalistic human sorting” – the pseudoscientific idea, currently popular on the far right, that different races “naturally” fare differently in the modern world; and about the supposedly inevitable “disintegration of the human species” when artificial intelligence improves sufficiently.
  • In our politically febrile times, the impatient, intemperate, possibly revolutionary ideas of accelerationism feel relevant, or at least intriguing, as never before. Noys says: “Accelerationists always seem to have an answer. If capitalism is going fast, they say it needs to go faster. If capitalism hits a bump in the road, and slows down” – as it has since the 2008 financial crisis – “they say it needs to be kickstarted.”
  • On alt-right blogs, Land in particular has become a name to conjure with. Commenters have excitedly noted the connections between some of his ideas and the thinking of both the libertarian Silicon Valley billionaire Peter Thiel and Trump’s iconoclastic strategist Steve Bannon.
  • “In Silicon Valley,” says Fred Turner, a leading historian of America’s digital industries, “accelerationism is part of a whole movement which is saying, we don’t need [conventional] politics any more, we can get rid of ‘left’ and ‘right’, if we just get technology right. Accelerationism also fits with how electronic devices are marketed – the promise that, finally, they will help us leave the material world, all the mess of the physical, far behind.”
  • In 1972, the philosopher Gilles Deleuze and the psychoanalyst Félix Guattari published Anti-Oedipus. It was a restless, sprawling, appealingly ambiguous book, which suggested that, rather than simply oppose capitalism, the left should acknowledge its ability to liberate as well as oppress people, and should seek to strengthen these anarchic tendencies, “to go still further … in the movement of the market … to ‘accelerate the process’”.
  • By the early 90s Land had distilled his reading, which included Deleuze and Guattari and Lyotard, into a set of ideas and a writing style that, to his students at least, were visionary and thrillingly dangerous. Land wrote in 1992 that capitalism had never been properly unleashed, but instead had always been held back by politics, “the last great sentimental indulgence of mankind”. He dismissed Europe as a sclerotic, increasingly marginal place, “the racial trash-can of Asia”. And he saw civilisation everywhere accelerating towards an apocalypse: “Disorder must increase... Any [human] organisation is ... a mere ... detour in the inexorable death-flow.”
  • With the internet becoming part of everyday life for the first time, and capitalism seemingly triumphant after the collapse of communism in 1989, a belief that the future would be almost entirely shaped by computers and globalisation – the accelerated “movement of the market” that Deleuze and Guattari had called for two decades earlier – spread across British and American academia and politics during the 90s. The Warwick accelerationists were in the vanguard.
  • In the US, confident, rainbow-coloured magazines such as Wired promoted what became known as “the Californian ideology”: the optimistic claim that human potential would be unlocked everywhere by digital technology. In Britain, this optimism influenced New Labour
  • The Warwick accelerationists saw themselves as participants, not traditional academic observers
  • The CCRU gang formed reading groups and set up conferences and journals. They squeezed into the narrow CCRU room in the philosophy department and gave each other impromptu seminars.
  • The main result of the CCRU’s frantic, promiscuous research was a conveyor belt of cryptic articles, crammed with invented terms, sometimes speculative to the point of being fiction.
  • At Warwick, however, the prophecies were darker. “One of our motives,” says Plant, “was precisely to undermine the cheery utopianism of the 90s, much of which seemed very conservative” – an old-fashioned male desire for salvation through gadgets, in her view.
  • K-punk was written by Mark Fisher, formerly of the CCRU. The blog retained some Warwick traits, such as quoting reverently from Deleuze and Guattari, but it gradually shed the CCRU’s aggressive rhetoric and pro-capitalist politics for a more forgiving, more left-leaning take on modernity. Fisher increasingly felt that capitalism was a disappointment to accelerationists, with its cautious, entrenched corporations and endless cycles of essentially the same products. But he was also impatient with the left, which he thought was ignoring new technology
  • lex Williams, co-wrote a Manifesto for an Accelerationist Politics. “Capitalism has begun to constrain the productive forces of technology,” they wrote. “[Our version of] accelerationism is the basic belief that these capacities can and should be let loose … repurposed towards common ends … towards an alternative modernity.”
  • What that “alternative modernity” might be was barely, but seductively, sketched out, with fleeting references to reduced working hours, to technology being used to reduce social conflict rather than exacerbate it, and to humanity moving “beyond the limitations of the earth and our own immediate bodily forms”. On politics and philosophy blogs from Britain to the US and Italy, the notion spread that Srnicek and Williams had founded a new political philosophy: “left accelerationism”.
  • Two years later, in 2015, they expanded the manifesto into a slightly more concrete book, Inventing the Future. It argued for an economy based as far as possible on automation, with the jobs, working hours and wages lost replaced by a universal basic income. The book attracted more attention than a speculative leftwing work had for years, with interest and praise from intellectually curious leftists
  • Even the thinking of the arch-accelerationist Nick Land, who is 55 now, may be slowing down. Since 2013, he has become a guru for the US-based far-right movement neoreaction, or NRx as it often calls itself. Neoreactionaries believe in the replacement of modern nation-states, democracy and government bureaucracies by authoritarian city states, which on neoreaction blogs sound as much like idealised medieval kingdoms as they do modern enclaves such as Singapore.
  • Land argues now that neoreaction, like Trump and Brexit, is something that accelerationists should support, in order to hasten the end of the status quo.
  • In 1970, the American writer Alvin Toffler, an exponent of accelerationism’s more playful intellectual cousin, futurology, published Future Shock, a book about the possibilities and dangers of new technology. Toffler predicted the imminent arrival of artificial intelligence, cryonics, cloning and robots working behind airline check-in desks
  • Land left Britain. He moved to Taiwan “early in the new millennium”, he told me, then to Shanghai “a couple of years later”. He still lives there now.
  • In a 2004 article for the Shanghai Star, an English-language paper, he described the modern Chinese fusion of Marxism and capitalism as “the greatest political engine of social and economic development the world has ever known”
  • Once he lived there, Land told me, he realised that “to a massive degree” China was already an accelerationist society: fixated by the future and changing at speed. Presented with the sweeping projects of the Chinese state, his previous, libertarian contempt for the capabilities of governments fell away
  • Without a dynamic capitalism to feed off, as Deleuze and Guattari had in the early 70s, and the Warwick philosophers had in the 90s, it may be that accelerationism just races up blind alleys. In his 2014 book about the movement, Malign Velocities, Benjamin Noys accuses it of offering “false” solutions to current technological and economic dilemmas. With accelerationism, he writes, a breakthrough to a better future is “always promised and always just out of reach”.
  • “The pace of change accelerates,” concluded a documentary version of the book, with a slightly hammy voiceover by Orson Welles. “We are living through one of the greatest revolutions in history – the birth of a new civilisation.”
  • Shortly afterwards, the 1973 oil crisis struck. World capitalism did not accelerate again for almost a decade. For much of the “new civilisation” Toffler promised, we are still waiting
huffem4

It's not enough to "believe science" - 1 views

  • the “believe science” mantra can be classist; moreover, “sexism, racism, & eugenics were all scientific.” Science isn’t safe from bias, and it can get things wrong.
  • Coronavirus denialism and climate denialism aren’t the product of skeptical masses but disingenuous elites
  • it’s important to distinguish between genuine grassroots resistance and the funding of denialism by corporate interests
tongoscar

Women's March Peters Out After Women Find Trump's Not Ruining Lives - 0 views

  • The Women’s March has an identity crisis. The march was inspired in 2017 out of fear that Donald Trump would in “Handmaid’s Tale” fashion strip women of all rights and dignity. After two years of a Trump presidency, and no such apocalypse, the Women’s March has lost much of its vigor.
  • This year the march began with a short rally at Freedom Plaza. Rev. T. Sheri Dickerson, one of the march’s board members, started off the rally with the chant, “My body, my choice.” The marchers first made their way to Lafayette Park, then ended in front of the Trump Hotel.
  • Few brought up women’s rights when asked why they’d attended the event. Many answered that they were there to fight for climate change and immigration. One young woman named Bianca from Raleigh, North Carolina pointed out that she was disappointed the organizers decided to make their platform so broad. She said she believed a women’s march should be about issues specific to women.
  • ...4 more annotations...
  • Some of the anti-Trump signs read, “Trump is a danger to our Democracy,” “Impeach the Mother f-cker,” “Arrest Trump,” and “All these Women yet Trump is the only b-tch.”
  • In front of Trump Tower, about 200 protestors gathered from a group called Out Now. They chanted, “We cannot rely on the election, we cannot rely on the normal channels, because Donald Trump is a fascist…We have to drive him out.”
  • Like many women at the march, “access to health care” was the only policy they could name that had anything to do with women’s rights, and it was always used as a euphemism for abortion.
  • I agreed with many of the women at the march that unfettered access to abortion is in danger. Trump has done a lot to see that abortion is no longer funded by taxpayers, and many states are requiring abortion to meet the same safety standards as other medical procedures.
Javier E

How to Navigate a 'Quarterlife' Crisis - The New York Times - 0 views

  • Satya Doyle Byock, a 39-year-old therapist, noticed a shift in tone over the past few years in the young people who streamed into her office: frenetic, frazzled clients in their late teens, 20s and 30s. They were unnerved and unmoored, constantly feeling like something was wrong with them.
  • “Crippling anxiety, depression, anguish, and disorientation are effectively the norm,”
  • her new book, “Quarterlife: The Search for Self in Early Adulthood.” The book uses anecdotes from Ms. Byock’s practice to outline obstacles faced by today’s young adults — roughly between the ages of 16 and 36 — and how to deal with them.
  • ...40 more annotations...
  • Just like midlife, quarterlife can bring its own crisis — trying to separate from your parents or caregivers and forge a sense of self is a struggle. But the generation entering adulthood now faces novel, sometimes debilitating, challenges.
  • Many find themselves so mired in day-to-day monetary concerns, from the relentless crush of student debt to the swelling costs of everything, that they feel unable to consider what they want for themselves long term
  • “We’ve been constrained by this myth that you graduate from college and you start your life,” she said. Without the social script previous generations followed — graduate college, marry, raise a family — Ms. Byock said her young clients often flailed around in a state of extended adolescence.
  • nearly one-third of Gen Z adults are living with their parents or other relatives and plan to stay there.
  • Many young people today struggle to afford college or decide not to attend, and the “existential crisis” that used to hit after graduation descends earlier and earlier
  • Ms. Byock said to pay attention to what you’re naturally curious about, and not to dismiss your interests as stupid or futile.
  • Experts said those entering adulthood need clear guidance for how to make it out of the muddle. Here are their top pieces of advice on how to navigate a quarterlife crisis today.
  • She recommends scheduling reminders to check in with yourself, roughly every three months, to examine where you are in your life and whether you feel stuck or dissatisfied
  • From there, she said, you can start to identify aspects of your life that you want to change.
  • “Start to give your own inner life the respect that it’s due,”
  • But quarterlife is about becoming a whole person, Ms. Byock said, and both groups need to absorb each other’s characteristics to balance themselves out
  • However, there is a difference between self-interest and self-indulgence, Ms. Byock said. Investigating and interrogating who you are takes work. “It’s not just about choosing your labels and being done,” she said.
  • Be patient.
  • Quarterlifers may feel pressure to race through each step of their lives, Ms. Byock said, craving the sense of achievement that comes with completing a task.
  • But learning to listen to oneself is a lifelong process.
  • Instead of searching for quick fixes, she said, young adults should think about longer-term goals: starting therapy that stretches beyond a handful of sessions, building healthy nutrition and exercise habits, working toward self-reliance.
  • “I know that seems sort of absurdly large and huge in scope,” she said. “But it’s allowing ourselves to meander and move through life, versus just ‘Check the boxes and get it right.’”
  • take stock of your day-to-day life and notice where things are missing. She groups quarterlifers into two categories: “stability types” and “meaning types.”
  • “Stability types” are seen by others as solid and stable. They prioritize a sense of security, succeed in their careers and may pursue building a family.
  • “But there’s a sense of emptiness and a sense of faking it,” she said. “They think this couldn’t possibly be all that life is about.”
  • On the other end of the spectrum, there are “meaning types” who are typically artists; they have intense creative passions but have a hard time dealing with day-to-day tasks
  • “These are folks for whom doing what society expects of you is so overwhelming and so discordant with their own sense of self that they seem to constantly be floundering,” she said. “They can’t quite figure it out.”
  • That paralysis is often exacerbated by mounting climate anxiety and the slog of a multiyear pandemic that has left many young people mourning family and friends, or smaller losses like a conventional college experience or the traditions of starting a first job.
  • Stability types need to think about how to give their lives a sense of passion and purpose. And meaning types need to find security, perhaps by starting with a consistent routine that can both anchor and unlock creativity.
  • perhaps the prototypical inspiration for staying calm in chaos: Yoda. The Jedi master is “one of the few images we have of what feeling quiet amid extreme pain and apocalypse can look like,
  • Even when there seems to be little stability externally, she said, quarterlifers can try to create their own steadiness.
  • establishing habits that help you ground yourself as a young adult is critical because transitional periods make us more susceptible to burnout
  • He suggests building a practical tool kit of self-care practices, like regularly taking stock of what you’re grateful for, taking controlled breaths and maintaining healthy nutrition and exercise routines. “These are techniques that can help you find clarity,”
  • Don’t be afraid to make a big change.
  • It’s important to identify what aspects of your life you have the power to alter, Dr. Brown said. “You can’t change an annoying boss,” he said, “but you might be able to plan a career change.”
  • That’s easier said than done, he acknowledged, and young adults should weigh the risks of continuing to live in their status quo — staying in their hometown, or lingering in a career that doesn’t excite them — with the potential benefits of trying something new.
  • quarterlife is typically “the freest stage of the whole life span,
  • Young adults may have an easier time moving to a new city or starting a new job than their older counterparts would.
  • Know when to call your parents — and when to call on yourself.
  • Quarterlife is about the journey from dependence to independence, Ms. Byock said — learning to rely on ourselves, after, for some, growing up in a culture of helicopter parenting and hands-on family dynamics.
  • there are ways your relationship with your parents can evolve, helping you carve out more independence
  • That can involve talking about family history and past memories or asking questions about your parents’ upbringing
  • “You’re transitioning the relationship from one of hierarchy to one of friendship,” she said. “It isn’t just about moving away or getting physical distance.”
  • Every quarterlifer typically has a moment when they know they need to step away from their parents and to face obstacles on their own
  • That doesn’t mean you can’t, or shouldn’t, still depend on your parents in moments of crisis, she said. “I don’t think it’s just about never needing one’s parents again,” she said. “But it’s about doing the subtle work within oneself to know: This is a time I need to stand on my own.”