
Home/ TOK Friends/ Group items tagged risks


Duncan H

Raising the Chance of Some Cancers With Two Drinks a Day - WSJ.com - 0 views

  • Regularly drinking, even in moderation, raises the long-term risk of many kinds of cancer. A burgeoning body of research links alcohol to cancers of the breast, liver, colon, pancreas, mouth, throat, larynx and esophagus. A large new study last week added lung cancer to the list—even for people who have never smoked cigarettes.
  • For some of these cancers, such as lung, larynx and colorectal, the cancer risk only sets in when people drink heavily—three or four drinks a day on a regular basis. But just one drink a day raises the risk for cancers of the mouth and esophagus, several studies show.
  • "It's the repeated exposure to alcohol over a long period of time that will cause damage and it has a cumulative effect."
  • One study found that men who consumed eight to 14 drinks a week had a 59% lower risk of heart failure compared with those who didn't drink.
  • But experts warn that regularly drinking more than that can cause cardiovascular damage instead, raising blood pressure, increasing the risk of hemorrhagic stroke and leading to cardiomyopathy, a dangerous enlargement of the heart.
  • Benefits of moderate drinking (defined as one drink a day for women, two for men):
    • Reduces the risk of coronary heart disease by 30% to 35% and increases HDL 'good' cholesterol.
    • Prevents platelets from sticking together, reducing blood clots, and lowers the risk of congestive heart failure.
    • Cuts the risk of heart attack by 40% to 50% in healthy men.
    • Reduces the risk of stroke and dementia.
  • Cancer risks linked to drinking (risks vary with the amount of alcohol consumed):
    • One or fewer drinks a day: raises the risk of oral and pharyngeal cancer by 20% and the risk of breast cancer by 8%.
    • Two to three drinks a day: raises the risk of oral cancers by 73%, liver cancer by 20% and breast cancer by 31%.
    • Four or more drinks a day: associated with a fivefold increase in the risk of oral, pharyngeal and esophageal cancers.
    • Raises the risk of colorectal cancer by 52%, pancreatic cancer by 22% and breast cancer by 46%.
  • Should adults drink in moderation, then? How should the risks and benefits be balanced?
kushnerha

The Psychology of Risk Perception Explains Why People Don't Fret the Pacific Northwest'... - 0 views

  • what psychology teaches us. Turns out most of us just aren’t that good at calculating risk, especially when it comes to huge natural events like earthquakes. That also means we’re not very good at mitigating those kinds of risks. Why? And is it possible to get around our short-sightedness, so that this time, we’re actually prepared? Risk perception is a vast, complex field of research. Here are just some of the core findings.
  • Studies show that when people calculate risk, especially when the stakes are high, we rely much more on feeling than fact. And we have trouble connecting emotionally to something scary if the odds of it happening today or tomorrow aren’t particularly high. So, if an earthquake, flood, tornado or hurricane isn’t immediately imminent, people are unlikely to act. “Perceiving risk is all about how scary or not do the facts feel,”
  • This feeling also relates to how we perceive natural, as opposed to human-made, threats. We tend to be more tolerant of nature than of other people who would knowingly impose risks upon us—terrorists being the clearest example. “We think that nature is out of our control—it’s not malicious, it’s not profiting from us, we just have to bear with it,”
  • And in many cases, though not all, people living in areas threatened by severe natural hazards do so by choice. If a risk has not been imposed on us, we take it much less seriously. Though Schulz’s piece certainly made a splash online, it is hard to imagine a mass exodus of Portlanders and Seattleites in response. Hey, they like it there.
  • They don’t have much to compare the future earthquake to. After all, there hasn’t been an earthquake or tsunami like it there since roughly 1700. Schulz poeticizes this problem, calling out humans for their “ignorance of or an indifference to those planetary gears which turn more slowly than our own.” Once again, this confounds our emotional connection to the risk.
  • The belief that an unlikely event won’t happen again for a while is called the gambler’s fallacy. Probability doesn’t work like that: the odds are the same with every roll of the dice.
  • But our “temporal parochialism,” as Schulz calls it, also undoes our grasp on probability. “We think probability happens with some sort of regularity or pattern,” says Ropeik. “If an earthquake is projected to hit within 50 years, when there hasn’t been one for centuries, we don’t think it’s going to happen.” Illogical thinking works in reverse, too: “If a minor earthquake just happened in Seattle, we think we’re safe.”
  • For individuals and government alike, addressing every point of concern requires a cost-benefit analysis. When kids barely have pencils and paper in schools that already exist, how much is appropriate to invest in earthquake preparedness? Even when that earthquake will kill thousands, displace millions, and cripple a region’s economy for decades to come—as Cascadia is projected to—the answer is complicated. “You immediately run into competing issues,” says Slovic. “When you’re putting resources into earthquake protection that has to be taken away from current social needs—that is a very difficult sell.”
  • There are things people can do to combat our innate irrationality. The first is obvious: education. California has a seismic safety commission whose job is to publicize the risks of earthquakes and advocate for preparedness at household and state policy levels.
  • Another idea is similar to food safety ratings in the windows of some cities’ restaurants. Schulz reports that some 75 percent of Oregon’s structures aren’t designed to hold up to a really big Cascadia quake. “These buildings could have their risk and safety score publicly posted,” says Slovic. “That would motivate people to retrofit or mitigate those risks, particularly if they are schools.”
  • science points to a hard truth. Humans are simply inclined to be more concerned about what’s immediately in front of us: Snakes, fast-moving cars, unfamiliar chemical compounds in our breakfast cereal and the like will always elicit a quicker response than an abstract, far-off hazard.
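The gambler's-fallacy point quoted above ("the odds are the same with every roll of the dice") is easy to demonstrate empirically. The simulation below is an illustrative sketch, not from the article: it estimates the probability of rolling a six both unconditionally and after a ten-roll "drought" with no sixes, and the two estimates come out essentially the same.

```python
import random

random.seed(0)
TRIALS = 100_000

# Probability of rolling a six, unconditionally.
hits = sum(random.randint(1, 6) == 6 for _ in range(TRIALS))

# Probability of rolling a six *given* the previous ten rolls had no six.
# The gambler's fallacy says this should be higher ("a six is due").
conditional_hits = 0
conditional_trials = 0
for _ in range(TRIALS):
    drought = all(random.randint(1, 6) != 6 for _ in range(10))
    if drought:
        conditional_trials += 1
        if random.randint(1, 6) == 6:
            conditional_hits += 1

print(f"P(six)                   ~ {hits / TRIALS:.3f}")
print(f"P(six | 10-roll drought) ~ {conditional_hits / conditional_trials:.3f}")
# Both estimates hover around 1/6 ~ 0.167: past rolls don't change the odds.
```

Independent events have no memory, which is exactly why "it hasn't happened in centuries, so it won't happen soon" is faulty reasoning about earthquakes.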
Javier E

Atul Gawande: Failure and Rescue : The New Yorker - 0 views

  • the critical skills of the best surgeons I saw involved the ability to handle complexity and uncertainty. They had developed judgment, mastery of teamwork, and willingness to accept responsibility for the consequences of their choices. In this respect, I realized, surgery turns out to be no different than a life in teaching, public service, business, or almost anything you may decide to pursue. We all face complexity and uncertainty no matter where our path takes us. That means we all face the risk of failure. So along the way, we all are forced to develop these critical capacities—of judgment, teamwork, and acceptance of responsibility.
  • people admonish us: take risks; be willing to fail. But this has always puzzled me. Do you want a surgeon whose motto is “I like taking risks”? We do in fact want people to take risks, to strive for difficult goals even when the possibility of failure looms. Progress cannot happen otherwise. But how they do it is what seems to matter. The key to reducing death after surgery was the introduction of ways to reduce the risk of things going wrong—through specialization, better planning, and technology.
  • there continue to be huge differences between hospitals in the outcomes of their care. Some places still have far higher death rates than others. And an interesting line of research has opened up asking why.
  • I thought that the best places simply did a better job at controlling and minimizing risks—that they did a better job of preventing things from going wrong. But, to my surprise, they didn’t. Their complication rates after surgery were almost the same as others. Instead, what they proved to be really great at was rescuing people when they had a complication, preventing failures from becoming a catastrophe.
  • this is what distinguished the great from the mediocre. They didn’t fail less. They rescued more.
  • This may in fact be the real story of human and societal improvement. We talk a lot about “risk management”—a nice hygienic phrase. But in the end, risk is necessary. Things can and will go wrong. Yet some have a better capacity to prepare for the possibility, to limit the damage, and to sometimes even retrieve success from failure.
  • When things go wrong, there seem to be three main pitfalls to avoid, three ways to fail to rescue. You could choose a wrong plan, an inadequate plan, or no plan at all. Say you’re cooking and you inadvertently set a grease pan on fire. Throwing gasoline on the fire would be a completely wrong plan. Trying to blow the fire out would be inadequate. And ignoring it—“Fire? What fire?”—would be no plan at all.
  • All policies court failure—our war in Iraq, for instance, or the effort to stimulate our struggling economy. But when you refuse to even acknowledge that things aren’t going as expected, failure can become a humanitarian disaster. The sooner you’re able to see clearly that your best hopes and intentions have gone awry, the better. You have more room to pivot and adjust. You have more of a chance to rescue.
  • But recognizing that your expectations are proving wrong—accepting that you need a new plan—is commonly the hardest thing to do. We have this problem called confidence. To take a risk, you must have confidence in yourself
  • Yet you cannot blind yourself to failure, either. Indeed, you must prepare for it. For, strangely enough, only then is success possible.
  • So you will take risks, and you will have failures. But it’s what happens afterward that is defining. A failure often does not have to be a failure at all. However, you have to be ready for it—will you admit when things go wrong? Will you take steps to set them right?—because the difference between triumph and defeat, you’ll find, isn’t about willingness to take risks. It’s about mastery of rescue.
Javier E

The Irrational Risk of Thinking We Can Be Rational About Risk | Risk: Reason and Realit... - 0 views

  • in the most precise sense of the word, facts are meaningless…just disconnected ones and zeroes in the computer until we run them through the software of how those facts feel
  • Of all the building evidence about human cognition that suggests we ought to be a little more humble about our ability to reason, no other finding has more significance, because Elliott teaches us that no matter how smart we like to think we are, our perceptions are inescapably a blend of reason and gut reaction, intellect and instinct, facts and feelings.
  • many people, particularly intellectuals and academics and policy makers, maintain a stubborn post-Enlightenment confidence in the supreme power of rationality. They continue to believe that we can make the ‘right’ choices about risk based on the facts, that with enough ‘sound science’ evidence from toxicology and epidemiology and cost-benefit analysis, the facts will reveal THE TRUTH. At best this confidence is hopeful naivete. At worst, it is intellectual arrogance that denies all we’ve learned about the realities of human cognition. In either case, it’s dangerous
  • There are more than a dozen of these risk perception factors (see Ch. 3 of “How Risky Is It, Really? Why Our Fears Don’t Match the Facts,” available free online)
  • Because our perceptions rely as much as or more on feelings than simply on the facts, we sometimes get risk wrong. We’re more afraid of some risks than we need to be (child abduction, vaccines), and not as afraid of some as we ought to be (climate change, particulate air pollution), and that “Perception Gap” can be a risk in and of itself
  • We must understand that instinct and intellect are interwoven components of a single system that helps us perceive the world and make our judgments and choices, a system that worked fine when the risks we faced were simpler but which can make dangerous mistakes as we try to figure out some of the more complex dangers posed in our modern world.
  • What we can do to avoid the dangers that arise when our fears don’t match the facts—the most rational thing to do—is, first, to recognize that our risk perceptions can never be purely objectively perfectly 'rational', and that our subjective perceptions are prone to potentially dangerous mistakes.
  • Then we can begin to apply all the details we've discovered of how our risk perception system works, and use that knowledge and self-awareness to make wiser, more informed, healthier choices
Javier E

Study Causes Splash, but Here's Why You Should Stay Calm on Alcohol's Risks - The New Y... - 0 views

  • there are limitations here that warrant consideration. Observational data can be very confounded, meaning that unmeasured factors might be the actual cause of the harm. Perhaps people who drink also smoke tobacco. Perhaps people who drink are also poorer. Perhaps there are genetic differences, health differences or other factors that might be the real cause
  • There are techniques to analyze observational data in a more causal fashion, but none of them could be used here, because this analysis aggregated past studies — and those studies didn’t use them.
  • when we compile observational study on top of observational study, we become more likely to achieve statistical significance without improving clinical significance. In other words, very small differences are real, but that doesn’t mean those differences are critical.
  • even one drink per day carries a risk. But how great is that risk?
  • Of each 100,000 people who have one drink a day, 918 can expect to experience one of the 23 alcohol-related problems in any year. Of those who drink nothing, 914 can expect to experience a problem. This means that 99,082 are unaffected, and 914 will have an issue no matter what. Only 4 in 100,000 people who consume a drink a day may have a problem caused by the drinking, according to this study.
  • I’m not advocating that people should ignore these risks. They are real, but they are much smaller than many other risks in our lives
  • This is a population-level study, arguably a worldwide study, but the results are being interpreted at an individual level. The researchers are merging, for instance, the 23 alcohol-related health issues together. But not everyone experiences them at the same rate.
  • For diabetes and heart disease, for instance, the risks actually go down with light or moderate drinking. The authors argue that this result is overrun, however, by risks for things like cancer and tuberculosis, which go up. But for many individuals, the risks for diabetes and heart disease are much higher than those for cancer and tuberculosis.
  • For this study, a drink was defined as 10 grams of pure alcohol, as much as you might get in one ounce of spirits (a small shot glass) that is 40 percent alcohol; 3.4 ounces of wine that’s 13 percent alcohol; or 12 ounces of beer that’s 3.5 percent alcohol. Many people consume more than that and consider it “a drink.”
  • just because something is unhealthy in large amounts doesn’t mean that we must completely abstain. A chart in the study showed rising risks from alcohol from 0 to 15 drinks.
  • Consider that 15 desserts a day would be bad for you. I am sure that I could create a chart showing increasing risk for many diseases from 0 to 15 desserts. This could lead to assertions that “there’s no safe amount of dessert.” But it doesn’t mean you should never, ever eat dessert.
  • we could spend lifetimes arguing over where the line is for many people. The truth is we just don’t know. If these studies are intended to drive population-level policy, we should use them as such, to argue that we might want to push people to be wary of overconsumption.
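The absolute-versus-relative-risk arithmetic in the annotations above can be checked directly. A minimal sketch — the per-100,000 figures are taken from the study as quoted, not computed here:

```python
POPULATION = 100_000

# Per the study, out of 100,000 people in a year:
drinkers_affected = 918    # one drink/day: expected to have an alcohol-related problem
abstainers_affected = 914  # zero drinks: baseline rate of the same problems

# Absolute risk increase attributable to one drink a day.
extra_cases = drinkers_affected - abstainers_affected  # 4 per 100,000
unaffected = POPULATION - drinkers_affected            # unaffected either way

# Relative risk increase -- the kind of figure headlines tend to quote.
relative_increase = (drinkers_affected / abstainers_affected - 1) * 100

print(f"Extra cases per 100,000: {extra_cases}")             # 4
print(f"Unaffected:              {unaffected}")              # 99082
print(f"Relative risk increase:  {relative_increase:.2f}%")  # ~0.44%
```

The same data can honestly be described as "a real increase in risk" and as "4 extra cases per 100,000 people per year" — which framing you lead with largely determines how alarming the finding sounds.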
Javier E

Taking B12 Energy Vitamins May Cause Lung Cancer - The Atlantic - 3 views

  • around 50 percent of people in the United States take some form of “dietary supplement” product, and among the most common are B vitamins.
  • Worse than just a harmless waste of money, this usage could be actively dangerous. In an issue of the Journal of Clinical Oncology, published this week, researchers reported that taking vitamin B6 and B12 supplements in high doses (like those sold in many stores) appears to triple or almost quadruple some people’s risk of lung cancer.
  • Starting in 1998, researchers assigned 6,837 people with heart disease to take either B vitamins or a placebo.
  • . In 2009, the researchers reported in the Journal of the American Medical Association that taking high doses of vitamin B12 along with folic acid (technically vitamin B9) was associated with greater risk of cancer and all-cause mortality.
  • Lung-cancer risk among men who took 20 milligrams of B6 daily for years was twice that of men who didn’t. Among people who smoke, the effect appeared to be synergistic, with B6 usage increasing risk threefold. The risk was even worse among smokers taking B12. Using more than 55 micrograms daily appeared to almost quadruple lung-cancer risk.
  • The research team is quick to note that the doses of B vitamins in question are enormous. The U.S. Recommended Dietary Allowance for B6 is 1.7 milligrams per day, and for B12 it’s 2.4 micrograms. The high-risk group in the study was taking around 20 times these amounts. That could seem nonsensical, except that these are the doses for sale at healthy-seeming places like Whole Foods and GNC. Many sellers offer daily 100-milligram B6 pills. B12 is available in doses of 5,000 micrograms.
  • There are legitimate and important uses for B-vitamin supplements, but the emerging evidence suggests we’re best to treat them more like pharmaceuticals than like panaceas to be shoveled into us in pursuit of energy, metabolic fortitude, “cardioprotection,” “bone wellness,” or whatever way in which we’d like to be better.
  • The current law gives consumers no reason to expect that risks will be listed on the labels of these products, or that health claims are accurate. A product like a high-dose B6 and B12 supplement hits shelves, and only decades later do researchers begin to understand the long-term health effects, who might benefit from taking it, and who might be harmed.  
Javier E

Lacking Brains, Plants Can Still Make Good Judgments About Risks - The New York Times - 0 views

  • Plants may not be getting enough credit. Not only do they remember when you touch them, it turns out that they can make risky decisions that are as sophisticated as those made by humans, all without brains or complex nervous systems. And they may even judge risks more efficiently than we do.
  • Researchers showed that when faced with the choice between a pot containing constant levels of nutrients or one with unpredictable levels, a plant will pick the mystery pot when conditions are sufficiently poor.
  • When nutrient levels were low, the plants laid more roots in the unpredictable pot. But when nutrients were abundant, they chose the one that always had the same amount. The plants somehow knew the best time to take risks.
  • “In bad conditions, the only chance of success is to take a chance and hope it works out, and that’s what the plants are doing,
  • This complex behavior in a plant supports an idea, known as risk sensitivity theory, that scientists have long had trouble testing in insects and animals. It states that when choosing between stable and uncertain outcomes, an organism will play it safe when things are going well, and take risks when times are hard.
  • The simplicity of plants makes it much easier to create a proper test for at least one reason: Plants don’t worry about feelings.
Javier E

How our brains numb us to covid-19's risks - and what we can do about it - The Washingt... - 1 views

  • Social scientists have long known that we perceive risks that are acute, such as an impending tsunami, differently than chronic, ever-present threats like car accidents
  • Part of what’s happening is that covid-19 — which we initially saw as a terrifying acute threat — is morphing into more of a chronic one in our minds. That shift likely dulls our perception of the danger,
  • Now, when they think about covid-19, “most people have a reduced emotional reaction. They see it as less salient.”
  • This habituation stems from a principle well-known in psychological therapy: The more we’re exposed to a given threat, the less intimidating it seems.
  • As the pandemic drags on, people are unknowingly performing a kind of exposure therapy on themselves, said University of Oregon psychologist Paul Slovic, author of “The Perception of Risk” — and the results can be deadly.
  • “You have an experience and the experience is benign. It feels okay and comfortable. It’s familiar. Then you do it again,” Slovic said. “If you don’t see anything immediately bad happening, your concerns get deconditioned.”
  • The end result of all this desensitizing is a kind of overriding heedlessness decoupled from evidence — the anti-mask movements, the beach gatherings, the overflowing dance parties
  • One of the best ways to reinforce a certain behavior is to make sure that behavior is rewarded and that deviations from it are punished (or ignored).
  • But when it comes to lifesaving behaviors such as mask-wearing or staying home from parties, this reward-punishment calculus gets turned on its head.
  • With parties, when you do the right thing and stay home, “you feel an immediate cost: You’re not able to be with your friends,
  • while there is an upside to this decision — helping to stop the spread of the virus — it feels distant. “The benefit is invisible, but the costs are very tangible.”
  • By contrast, Slovic said, when you flout guidelines about wearing masks or avoiding gatherings, you get an immediate reward: You rejoice at not having to breathe through fabric, or you enjoy celebrating a close friend’s birthday in person.
  • Because risk perception fails as we learn to live with covid-19, Griffin and other researchers are calling for the renewal of tough government mandates to curb virus spread. They see measures such as strict social distancing, enforced masking outside the home and stay-at-home orders as perhaps the only things that can protect us from our own faulty judgment.
  • But these kinds of measures aren’t enough on their own, Griffin said. It’s also important for authorities to supply in-your-face reminders of those mandates, especially visual cues, so people won’t draw their own erroneous conclusions about what’s safe.
  • “A few parks have drawn circles [on their lawns]: ‘Don’t go out of the circle,’ ” Griffin said. “We need to take those kinds of metaphors and put them throughout the entire day.”
  • “The first step is awareness that sometimes you can’t trust your feelings.”
  • For people considering how to assess covid-19 risks, Slovic advised pivoting from emotionally driven gut reactions to what psychologist Daniel Kahneman — winner of the 2002 Nobel Prize in economics for his integration of psychological research into economic science — calls “slow thinking.” That means making decisions based on careful analysis of the evidence. “You need to either do the slow thinking yourself,” Slovic said, “or trust experts who do the slow thinking and understand the situation.”
  • Thousands of us are less afraid than we were at the pandemic’s outset, even though in many parts of the country mounting case counts have increased the danger of getting the virus. We’re swarming the beaches and boardwalks, often without masks.
Javier E

The Real Reason You and Your Neighbor Make Different Covid-19 Risk Decisions - WSJ - 0 views

  • Personality traits that are shaped by genetics and early life experiences strongly influence our Covid-19-related decisions, studies from the U.S. and Japan have found.
  • In a study of more than 400 U.S. adults, Dr. Byrne and her colleagues found that how people perceive risks, whether they make risky decisions, and their preference for immediate or delayed rewards were the largest predictors of whether they followed public-health guidelines when it came to wearing masks and social distancing.
  • These factors accounted for 55% of the difference in people’s behaviors—more than people’s political affiliation, level of education or age.
  • Dr. Byrne and her colleagues measured risky decision-making by presenting people with a gambling scenario. They could choose between two bets: One offered a guaranteed amount of money, while the other offered the possibility of a larger amount of money but also the possibility of receiving no cash. A different exercise measured people’s preference for immediate versus delayed rewards: Participants could choose a certain amount of money now, or a larger amount later.
  • Study subjects also reported Covid-19 precautions they had taken in their daily lives, including masking and social distancing.
  • with Covid-19, people don’t feel sick immediately after an exposure so the benefits of wearing a mask, social distancing or getting vaccinated aren’t immediately apparent. “You don’t see the lives you potentially save,” she says
  • “People generally are more motivated by immediate gratification or immediate benefits rather than long-term benefits, even when the long-term benefits are much greater,
  • Research has also found being extroverted or introverted affects how people make decisions about Covid-19 precautions. A recent study of more than 8,500 people in Japan published in the journal PLOS One in October 2020 found that those who scored high on a scale of extraversion were 7% less likely to wear masks in public and avoid large gatherings, among other precautions.
  • The study also found that people who scored high on a measure of conscientiousness—valuing hard work and achievement—were 31% more likely to follow Covid-19 public-health precautions.
  • Scientists believe that a person’s propensity to take risks is partly genetic and partly the result of early life experiences
  • Certain negative childhood experiences including physical, emotional or sexual abuse, parental divorce, or living with someone who was depressed or abused drugs or alcohol are linked to risky behavior in adulthood like smoking and drinking heavily, other research has found.
  • Studies of twins have generally found that about 30% of the difference in individual risk tolerance is genetic
  • And scientists have discovered that the brains of people who are more willing to take risks look different than those of people who are more cautious.
  • People who took more risks in a gambling task had differences in the structure and function of the amygdala, a part of the brain involved in detecting threats, and the prefrontal cortex, a region involved in executive function.
  • Even people who have the same information and a similar perception of the risks may make different decisions because of the ways they interpret the information. When public-health officials talk about breakthrough infections in vaccinated individuals being rare, for example, “rare means different things” to different people
Javier E

If We Knew Then What We Know Now About Covid, What Would We Have Done Differently? - WSJ - 0 views

  • For much of 2020, doctors and public-health officials thought the virus was transmitted through droplets emitted from one person’s mouth and touched or inhaled by another person nearby. We were advised to stay at least 6 feet away from each other to avoid the droplets
  • A small cadre of aerosol scientists had a different theory. They suspected that Covid-19 was transmitted not so much by droplets but by smaller infectious aerosol particles that could travel on air currents way farther than 6 feet and linger in the air for hours. Some of the aerosol particles, they believed, were small enough to penetrate the cloth masks widely used at the time.
  • The group had a hard time getting public-health officials to embrace their theory. For one thing, many of them were engineers, not doctors.
  • “My first and biggest wish is that we had known early that Covid-19 was airborne,”
  • , “Once you’ve realized that, it informs an entirely different strategy for protection.” Masking, ventilation and air cleaning become key, as well as avoiding high-risk encounters with strangers, he says.
  • Instead of washing our produce and wearing hand-sewn cloth masks, we could have made sure to avoid superspreader events and worn more-effective N95 masks or their equivalent. “We could have made more of an effort to develop and distribute N95s to everyone,” says Dr. Volckens. “We could have had an Operation Warp Speed for masks.”
  • We didn’t realize how important clear, straight talk would be to maintaining public trust. If we had, we could have explained the biological nature of a virus and warned that Covid-19 would change in unpredictable ways.  
  • We didn’t know how difficult it would be to get the basic data needed to make good public-health and medical decisions. If we’d had the data, we could have more effectively allocated scarce resources
  • In the face of a pandemic, he says, the public needs an early, basic and blunt lesson in virology: the virus spreads and mutates, and since we’ve never seen this particular virus before, we will need to take unprecedented actions and we will make mistakes.
  • Since the public wasn’t prepared, “people weren’t able to pivot when the knowledge changed,”
  • By the time the vaccines became available, public trust had been eroded by myriad contradictory messages—about the usefulness of masks, the ways in which the virus could be spread, and whether the virus would have an end date.
  • The absence of a single, trusted source of clear information meant that many people gave up on trying to stay current or dismissed the different points of advice as partisan and untrustworthy.
  • “The science is really important, but if you don’t get the trust and communication right, it can only take you so far,”
  • people didn’t know whether it was OK to visit elderly relatives or go to a dinner party.
  • Doctors didn’t know what medicines worked. Governors and mayors didn’t have the information they needed to know whether to require masks. School officials lacked the information needed to know whether it was safe to open schools.
  • Had we known that even a mild case of Covid-19 could result in long Covid and other serious chronic health problems, we might have calculated our own personal risk differently and taken more care.
  • just months before the outbreak of the pandemic, the Council of State and Territorial Epidemiologists released a white paper detailing the urgent need to modernize the nation’s public-health system still reliant on manual data collection methods—paper records, phone calls, spreadsheets and faxes.
  • While the U.K. and Israel were collecting and disseminating Covid case data promptly, in the U.S. the CDC couldn’t. It didn’t have a centralized health-data collection system like those countries did, but rather relied on voluntary reporting by underfunded state and local public-health systems and hospitals.
  • doctors and scientists say they had to depend on information from Israel, the U.K. and South Africa to understand the nature of new variants and the effectiveness of treatments and vaccines. They relied heavily on private data collection efforts such as a dashboard at Johns Hopkins University’s Coronavirus Resource Center that tallied cases, deaths and vaccine rates globally.
  • For much of the pandemic, doctors, epidemiologists, and state and local governments had no way to find out in real time how many people were contracting Covid-19, getting hospitalized and dying
  • To solve the data problem, Dr. Ranney says, we need to build a public-health system that can collect and disseminate data and act like an electrical grid. The power company sees a storm coming and lines up repair crews.
  • If we’d known how damaging lockdowns would be to mental health, physical health and the economy, we could have taken a more strategic approach to closing businesses and keeping people at home.
  • But many doctors say they were crucial at the start of the pandemic to give doctors and hospitals a chance to figure out how to accommodate and treat the avalanche of very sick patients.
  • The measures reduced deaths, according to many studies—but at a steep cost.
  • The lockdowns didn’t have to be so harmful, some scientists say. They could have been more carefully tailored to protect the most vulnerable, such as those in nursing homes and retirement communities, and to minimize widespread disruption.
  • Lockdowns could, during Covid-19 surges, close places such as bars and restaurants where the virus is most likely to spread, while allowing other businesses to stay open with safety precautions like masking and ventilation in place.  
  • The key isn’t to have the lockdowns last a long time, but that they are deployed earlier,
  • If England’s March 23, 2020, lockdown had begun one week earlier, the measure would have nearly halved the estimated 48,600 deaths in the first wave of England’s pandemic
  • If the lockdown had begun a week later, deaths in the same period would have more than doubled
  • It is possible to avoid lockdowns altogether. Taiwan, South Korea and Hong Kong—all countries experienced at handling disease outbreaks such as SARS in 2003 and MERS—avoided lockdowns by widespread masking, tracking the spread of the virus through testing and contact tracing and quarantining infected individuals.
  • With good data, Dr. Ranney says, she could have better managed staffing and taken steps to alleviate the strain on doctors and nurses by arranging child care for them.
  • Early in the pandemic, public-health officials were clear: The people at increased risk for severe Covid-19 illness were older, immunocompromised, had chronic kidney disease, Type 2 diabetes or serious heart conditions
  • It had the unfortunate effect of giving a false sense of security to people who weren’t in those high-risk categories. Once case rates dropped, vaccines became available and fear of the virus wore off, many people let their guard down, ditching masks, spending time in crowded indoor places.
  • it has become clear that even people with mild cases of Covid-19 can develop long-term serious and debilitating diseases. Long Covid, whose symptoms include months of persistent fatigue, shortness of breath, muscle aches and brain fog, hasn’t been the virus’s only nasty surprise
  • In February 2022, a study found that, for at least a year, people who had Covid-19 had a substantially increased risk of heart disease—even people who were younger and had not been hospitalized
  • respiratory conditions.
  • Some scientists now suspect that Covid-19 might be capable of affecting nearly every organ system in the body. It may play a role in the activation of dormant viruses and latent autoimmune conditions people didn’t know they had
  •  A blood test, he says, would tell people if they are at higher risk of long Covid and whether they should have antivirals on hand to take right away should they contract Covid-19.
  • If the risks of long Covid had been known, would people have reacted differently, especially given the confusion over masks and lockdowns and variants? Perhaps. At the least, many people might not have assumed they were out of the woods just because they didn’t have any of the risk factors.
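The lockdown-timing estimates above (deaths nearly halved by starting a week earlier, more than doubled by starting a week later) are what simple exponential growth predicts. A minimal sketch, assuming a hypothetical one-week case-doubling time chosen only so the toy model reproduces the quoted halving and doubling; the baseline figure is the England estimate quoted above:

```python
def deaths_at_lockdown(baseline_deaths, shift_days, doubling_time_days=7.0):
    """Toy exponential-growth model: cumulative deaths scale with the size of
    the epidemic on the day the lockdown begins. Negative shift = earlier."""
    return baseline_deaths * 2 ** (shift_days / doubling_time_days)

baseline = 48_600  # estimated first-wave deaths in England (from the article)
print(round(deaths_at_lockdown(baseline, -7)))  # one week earlier: 24300
print(round(deaths_at_lockdown(baseline, +7)))  # one week later: 97200
```

The actual study behind the claim used a far more detailed epidemic model; this only shows why small timing shifts have outsized effects.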
cvanderloo

Long COVID: who is at risk? - 0 views

  • But some people have long-lasting symptoms after their infection – this has been dubbed “long COVID”.
  • In defining who is at risk from long COVID and the mechanisms involved, we may reveal suitable treatments to be tried – or whether steps taken early in the course of the illness might ameliorate it.
  • Indeed, early analysis of self-reported data submitted through the COVID Symptom Study app suggests that 13% of people who experience COVID-19 symptoms have them for more than 28 days, while 4% have symptoms after more than 56 days.
  • Patients in this study had a mean age of 44 years, so were very much part of the young, working-age population. Only 18% had been hospitalised with COVID-19, meaning organ damage may occur even after a non-severe infection.
  • Another piece of early research (awaiting peer review) suggests that SARS-CoV-2 could also have a long-term impact on people’s organs.
  • Perhaps unsurprisingly, people with more severe disease initially – characterised by more than five symptoms – seem to be at increased risk of long COVID. Older age and being female also appear to be risk factors for having prolonged symptoms, as is having a higher body mass index.
  • Rather harder to explore is the symptom of fatigue. Another recent large-scale study has shown that this symptom is common after COVID-19 – occurring in more than half of cases – and appears unrelated to the severity of the early illness.
  • While men are at increased risk of severe infection, that women seem to be more affected by long COVID may reflect their different or changing hormone status.
  • Some symptoms of long COVID overlap with menopausal symptoms, and hormone replacement using medication may be one route to reducing the impact of symptoms.
  • What is clear, however, is that long-term symptoms after COVID-19 are common, and that research into the causes and treatments of long COVID will likely be needed long after the outbreak itself has subsided.
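The self-reported prevalence figures above (13% with symptoms past 28 days, 4% past 56 days) translate into large absolute numbers. A quick sketch; the cohort size here is hypothetical:

```python
def long_covid_counts(symptomatic_cases, p_28_days=0.13, p_56_days=0.04):
    """Apply the COVID Symptom Study app's self-reported rates to a cohort."""
    return {
        "beyond 28 days": round(symptomatic_cases * p_28_days),
        "beyond 56 days": round(symptomatic_cases * p_56_days),
    }

# A hypothetical cohort of one million symptomatic cases
print(long_covid_counts(1_000_000))  # {'beyond 28 days': 130000, 'beyond 56 days': 40000}
```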
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

I Had My DNA Picture Taken, With Varying Results - NYTimes.com - 0 views

  • Scientists have identified about 10 million SNPs within our three billion nucleotides. But an entire genome sequencing — looking at all three billion nucleotides — would cost around $3,000; the tests I took examined fewer than a million SNPs.
  • “Imagine if you took a book and you only looked at the first letter of every other page,” said Dr. Robert Klitzman, a bioethicist and professor of clinical psychiatry at Columbia. (I am a graduate student there in his Master of Bioethics program.) “You’re missing 99.9 percent of the letters that make the genome. The information is going to be limited.”
  • the major issue, experts say, is that the causes of most common diseases remain unknown. Genes account for just 5 to 20 percent of the whole picture.
  • “Your results are not the least bit surprising,” he told me. “Anything short of sequencing is going to be short on accuracy — and even then, there’s almost no comprehensive data sets to compare to.”
  • “Even if they are accurately looking at 5 percent of the attributable risk, they’ve ignored the vast majority of the other risk factors — the dark matter for genetics — because we as a scientific community haven’t yet identified those risk factors,”
  • There are only 23 diseases that start in adulthood, can be treated, and for which highly predictive tests exist. All are rare, with hereditary breast cancer the most common. “A small percentage of people who get tested will get useful information,” Dr. Klitzman said. “But for most people, the results are not clinically useful, and they may be misleading or confusing.”
  • To be sure, my tests did provide some beneficial information. They all agreed that I lack markers associated with an increased risk of breast cancer and Alzheimer’s. That said, they were testing for only a small fraction of the genetic risks for these diseases, not for rare genetic variants that confer much of the risk. I could still develop those diseases, of course, but I don’t have reason to pursue aggressive screenings as I age.
  • He added: “If you want to spend money wisely to protect your health and you have a few hundred dollars, buy a scale, stand on it, and act accordingly.”
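Dr. Klitzman’s book analogy can be checked with quick arithmetic on the figures quoted above:

```python
genome_nucleotides = 3_000_000_000  # ~3 billion nucleotides (from the article)
known_snps = 10_000_000             # ~10 million SNPs identified
snps_tested = 1_000_000             # the consumer tests examined fewer than this

print(f"share of the genome examined: {snps_tested / genome_nucleotides:.4%}")  # 0.0333%
print(f"share of known SNPs examined: {snps_tested / known_snps:.0%}")          # 10%
```

Even taking the generous upper bound, the tests read roughly one letter in every three thousand of the genome.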
Javier E

The Startling Link Between Sugar and Alzheimer's - The Atlantic - 0 views

  • A longitudinal study, published Thursday in the journal Diabetologia, followed 5,189 people over 10 years and found that people with high blood sugar had a faster rate of cognitive decline than those with normal blood sugar
  • In other words, the higher the blood sugar, the faster the cognitive decline.
  • “Currently, dementia is not curable, which makes it very important to study risk factors.”
  • People who have type 2 diabetes are about twice as likely to get Alzheimer’s, and people who have diabetes and are treated with insulin are also more likely to get Alzheimer’s, suggesting elevated insulin plays a role in Alzheimer’s. In fact, many studies have found that elevated insulin, or “hyperinsulinemia,” significantly increases your risk of Alzheimer’s. On the other hand, people with type 1 diabetes, who don’t make insulin at all, are also thought to have a higher risk of Alzheimer’s. How could these both be true?
  • Schilling posits this happens because of the insulin-degrading enzyme, a product of insulin that breaks down both insulin and amyloid proteins in the brain—the same proteins that clump up and lead to Alzheimer’s disease. People who don’t have enough insulin, like those whose bodies’ ability to produce insulin has been tapped out by diabetes, aren’t going to make enough of this enzyme to break up those brain clumps. Meanwhile, in people who use insulin to treat their diabetes and end up with a surplus of insulin, most of this enzyme gets used up breaking that insulin down, leaving not enough enzyme to address those amyloid brain clumps.
  • this can happen even in people who don’t have diabetes yet—who are in a state known as “prediabetes.” It simply means your blood sugar is higher than normal, and it’s something that affects roughly 86 million Americans.
  • In a 2012 study, Roberts broke nearly 1,000 people down into four groups based on how much of their diet came from carbohydrates. The group that ate the most carbs had an 80 percent higher chance of developing mild cognitive impairment—a pit stop on the way to dementia—than those who ate the smallest amount of carbs.
  • “It’s hard to be sure at this stage, what an ‘ideal’ diet would look like,” she said. “There’s a suggestion that a Mediterranean diet, for example, may be good for brain health.”
  • there are several theories out there to explain the connection between high blood sugar and dementia. Diabetes can also weaken the blood vessels, which increases the likelihood that you’ll have ministrokes in the brain, causing various forms of dementia. A high intake of simple sugars can make cells, including those in the brain, insulin resistant, which could cause the brain cells to die. Meanwhile, eating too much in general can cause obesity. The extra fat in obese people releases cytokines, or inflammatory proteins that can also contribute to cognitive deterioration, Roberts said. In one study by Gottesman, obesity doubled a person’s risk of having elevated amyloid proteins in their brains later in life.
  • even people who don’t have any kind of diabetes should watch their sugar intake, she said.
  • as these and other researchers point out, decisions we make about food are one risk factor we can control. And it’s starting to look like decisions we make while we’re still relatively young can affect our future cognitive health.
  • “Alzheimer’s is like a slow-burning fire that you don’t see when it starts,” Schilling said. It takes time for clumps to form and for cognition to begin to deteriorate. “By the time you see the signs, it’s way too late to put out the fire.”
Javier E

Cancer Doctors Cite Risks of Drinking Alcohol - The New York Times - 0 views

  • For women, just one alcoholic drink a day can increase breast cancer risk,
  • “The more you drink, the higher the risk,” said Dr. Clifford A. Hudis, the chief executive of ASCO. “It’s a pretty linear dose-response.”
  • Even those who drink moderately, defined by the Centers for Disease Control as one daily drink for women and two for men, face nearly a doubling of the risk for mouth and throat cancer and more than double the risk of squamous cell carcinoma of the esophagus.
  • One way alcohol may lead to cancer is because the body metabolizes it into acetaldehyde, which causes changes and mutations in DNA, Dr. Gapstur said. The formation of acetaldehyde starts when alcohol comes in contact with bacteria in the mouth, which may explain the link between alcohol and cancers of the throat, voice box and esophagus
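Relative-risk figures like those above are easier to interpret when converted to absolute risks. A minimal sketch; the baseline lifetime risk is hypothetical, and the multipliers are the oral/pharyngeal-cancer figures quoted elsewhere on this page:

```python
def absolute_risk(baseline_risk, relative_risk):
    """Scale a baseline lifetime risk by a relative-risk multiplier."""
    return baseline_risk * relative_risk

baseline = 0.01  # hypothetical 1% baseline lifetime risk, for illustration only

# Relative-risk multipliers for oral/pharyngeal cancer by drinking level
for level, rr in [("<=1 drink/day", 1.20), ("2-3 drinks/day", 1.73), ("4+ drinks/day", 5.0)]:
    print(f"{level}: {absolute_risk(baseline, rr):.2%} lifetime risk")
```

A “doubling” of a small risk is still a small absolute risk, which is why the linear dose-response Dr. Hudis describes matters more than any single multiplier.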
kaylynfreeman

Coronavirus 'Hits All the Hot Buttons' for How We Misjudge Risk - The New York Times - 0 views

  • But there is a lesson, psychologists and public health experts say, in the near-terror that the virus induces, even as serious threats like the flu receive little more than a shrug. It illustrates the unconscious biases in how human beings think about risk, as well as the impulses that often guide our responses — sometimes with serious consequences.
  • When you encounter a potential risk, your brain does a quick search for past experiences with it. If it can easily pull up multiple alarming memories, then your brain concludes the danger is high. But it often fails to assess whether those memories are truly representative.
  • Risks that we take on voluntarily, or that at least feel voluntary, are often seen as less dangerous than they really are. One study found that people will raise their threshold for the amount of danger they are willing to take on by a factor of one thousand if they see the risk as voluntary.
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Javier E

But What Would the End of Humanity Mean for Me? - James Hamblin - The Atlantic - 0 views

  • Tegmark is more worried about much more immediate threats, which he calls existential risks. That’s a term borrowed from physicist Nick Bostrom, director of Oxford University’s Future of Humanity Institute, a research collective modeling the potential range of human expansion into the cosmos
  • "I am finding it increasingly plausible that existential risk is the biggest moral issue in the world, even if it hasn’t gone mainstream yet,"
  • Existential risks, as Tegmark describes them, are things that are “not just a little bit bad, like a parking ticket, but really bad. Things that could really mess up or wipe out human civilization.”
  • The single existential risk that Tegmark worries about most is unfriendly artificial intelligence. That is, when computers are able to start improving themselves, there will be a rapid increase in their capacities, and then, Tegmark says, it’s very difficult to predict what will happen.
  • Tegmark told Lex Berko at Motherboard earlier this year, "I would guess there’s about a 60 percent chance that I’m not going to die of old age, but from some kind of human-caused calamity. Which would suggest that I should spend a significant portion of my time actually worrying about this. We should in society, too."
  • "Longer term—and this might mean 10 years, it might mean 50 or 100 years, depending on who you ask—when computers can do everything we can do," Tegmark said, “after that they will probably very rapidly get vastly better than us at everything, and we’ll face this question we talked about in the Huffington Post article: whether there’s really a place for us after that, or not.”
  • "This is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”
  • Tegmark and his op-ed co-author Frank Wilczek, the Nobel laureate, draw examples of cold-war automated systems that assessed threats and resulted in false alarms and near misses. “In those instances some human intervened at the last moment and saved us from horrible consequences,” Wilczek told me earlier that day. “That might not happen in the future.”
  • there are still enough nuclear weapons in existence to incinerate all of Earth’s dense population centers, but that wouldn't kill everyone immediately. The smoldering cities would send sun-blocking soot into the stratosphere that would trigger a crop-killing climate shift, and that’s what would kill us all
  • “We are very reckless with this planet, with civilization,” Tegmark said. “We basically play Russian roulette.” The key is to think more long term, “not just about the next election cycle or the next Justin Bieber album.”
  • “There are several issues that arise, ranging from climate change to artificial intelligence to biological warfare to asteroids that might collide with the earth,” Wilczek said of the group’s launch. “They are very serious risks that don’t get much attention.
  • a widely perceived issue is when intelligent entities start to take on a life of their own. They revolutionized the way we understand chess, for instance. That’s pretty harmless. But one can imagine if they revolutionized the way we think about warfare or finance, either those entities themselves or the people that control them. It could pose some disquieting perturbations on the rest of our lives.”
  • Wilczek’s particularly concerned about a subset of artificial intelligence: drone warriors. “Not necessarily robots,” Wilczek told me, “although robot warriors could be a big issue, too. It could just be superintelligence that’s in a cloud. It doesn’t have to be embodied in the usual sense.”
  • it’s important not to anthropomorphize artificial intelligence. It's best to think of it as a primordial force of nature—strong and indifferent. In the case of chess, an A.I. models chess moves, predicts outcomes, and moves accordingly. If winning at chess meant destroying humanity, it might do that.
  • Even if programmers tried to program an A.I. to be benevolent, it could destroy us inadvertently. Andersen’s example in Aeon is that an A.I. designed to maximize human happiness might conclude that flooding your bloodstream with heroin is the best way to do that.
  • “It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.”
  • Even within A.I. research, Tegmark admits, “There is absolutely not a consensus that we should be concerned about this.” But there is a lot of concern, and a sense of powerlessness, because, concretely, what can you do? “The thing we should worry about is that we’re not worried.”
  • Tegmark brings it down to Earth with an example about purchasing a stroller: You could spend more for a good one, or less for one that “sometimes collapses and crushes the baby, but nobody’s been able to prove that it is caused by any design flaw. But it’s 10 percent off! So which one are you going to buy?”
  • “There are seven billion of us on this little spinning ball in space. And we have so much opportunity," Tegmark said. "We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out.”
  • Ninety-nine percent of the species that have lived on Earth have gone extinct; why should we not? Seeing the biggest picture of humanity and the planet is the heart of this. It’s not meant to be about inspiring terror or doom. Sometimes that is what it takes to draw us out of the little things, where in the day-to-day we lose sight of enormous potentials.
summertyler

Healthy diet may improve memory, says study - CNN.com - 0 views

  • "You are what you eat." But could what we eat also affect how we think?
  • eating a healthy diet could potentially be linked to a lower risk of memory and thinking decline
  • the effect a higher-quality diet could have on reducing the risk of memory loss.
  • ...8 more annotations...
  • eating a balanced diet may be beneficial to reduce your risk of cognitive decline
  • there are many aspects of diet in combination with engaging in a healthy lifestyle that may influence cognitive decline
  • Participants were tested for their thinking and memory skills at the start of the study, then again after two and five years.
  • "healthy diet" as one containing lots of fruits and vegetables, nuts, fish, moderate alcohol use and minimal red meat
  • "We just wanted to look at a diverse cohort of people from all around the world and analyze what their risk for cognitive decline would be if they consumed what most organizations would consider a 'healthy diet',"
  • this new study suggests that improving overall diet quality is an important factor for lowering the risk of memory and thinking loss
  • participants with the healthiest diets were 24% less likely to experience cognitive decline compared to those with the least healthy diets. These individuals were slightly older in age, more active, less likely to smoke and had a lower BMI.
  • blueberries may boost memory, and that a high intake of saturated and trans fats can have negative effects
  • good diets mean it is less likely for someone to lose their memory.
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee,
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
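  • The kind of permitted, scheduled bot described above (modeled on @big_ben_clock) can be sketched in a few lines. This is an illustrative assumption: `bong_message`, `run_once`, and the `post` callback are invented names for this sketch, and the real account's code and Twitter's actual posting API are not shown here.

```python
def bong_message(hour_24: int) -> str:
    """Return the chime text for a given hour on a 24-hour clock,
    striking 1-12 times like Big Ben (15:00 -> three strikes)."""
    strikes = hour_24 % 12 or 12
    return " ".join(["BONG"] * strikes)

def run_once(post, hour_24: int) -> None:
    """Compose this hour's chime and hand it to a posting callback
    (in a real bot, the callback would call the platform's API)."""
    post(bong_message(hour_24))

if __name__ == "__main__":
    # Dry run: print instead of posting.
    run_once(print, 15)  # 3 p.m. -> "BONG BONG BONG"
```

  A real deployment would invoke `run_once` from an hourly scheduler and pass a posting function backed by an authenticated API client; the point is only that such bots are simple, declared automations, quite unlike the spam accounts the complaint describes.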
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continue to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”