
Javier E

Who's Afraid of Early Cancer Detection? - WSJ - 0 views

  • A diagnosis of pancreatic cancer usually means a quick death—but not for Roger Royse, who was in Stage II of the disease when he got the bad news in July 2022. The five-year relative survival rate for late-stage metastatic pancreatic cancer is 3%—which means that patients are 3% as likely as cancer-free individuals to live five years after their diagnosis. But if pancreatic cancer is caught before it has spread to other organs, the survival rate is 44%.
  • some public-health experts think that’s just as well. They fret that widespread use of multicancer early-detection tests would cause healthcare spending to explode. Those fears have snarled Galleri and similar tests in a web of red tape.
  • Early diagnosis is the best defense against most cancers,
  • But only a handful of cancers—of the breast, lung, colon and cervix—have screening tests recommended by the U.S. Preventive Services Task Force
  • Many companies are developing blood tests that can detect cancer signals before symptoms occur, and Grail’s is the most advanced. A study found it can identify more than 50 types of cancer 52% of the time and the 12 deadliest cancers in Stages I through III 68% of the time.
  • There’s a hitch. The test costs $949 and isn’t covered by Medicare or most private insurance.
  • The trouble is that this cancer is almost never caught early. There’s no routine screening for it, and symptoms don’t develop until it is advanced. Mr. Royse, 64, had no idea he was sick until he took a blood test called Galleri, produced by the Menlo Park, Calif., startup Grail. He had surgery and chemotherapy and is now cancer-free.
  • Mr. Royse visited Grail’s website, which referred him to a telemedicine provider who ordered a test. Another telemedicine doctor walked him through his results, which showed a cancer signal likely emanating from the pancreas, gallbladder, stomach or esophagus.
  • An MRI revealed a suspicious mass on his pancreas, which a biopsy confirmed was cancerous. Mr. Royse had three months of chemotherapy, surgery and another three months of chemotherapy, which ended last February. Because pancreatic cancer often recurs, he gets CT and MRI scans every three months. In addition, he has signed up for startup Natera’s Signatera customized blood test, which checks DNA specific to the patient’s cancer and can signal its return before signs are visible on the scans
  • Grail’s test likewise looks for DNA shed by cancer cells, which is tagged by molecules called methyl groups that are specific to a cancer’s origin. Grail uses genetic sequencing and machine learning to recognize links between DNA methyl groups and particular cancers
  • The test “is based on how much DNA is being shed by tumor,” Grail’s president, Josh Ofman, says. “Some tumors shed a lot of DNA. Some shed almost none.
  • But slow-growing tumors typically aren’t shedding a lot of DNA.” That reduces the probability that Grail’s test will identify indolent cancers that pose no immediate danger.
  • Grail’s test has a roughly 0.5% false-positive rate, meaning 1 in 200 patients who don’t have cancer will get a positive signal
  • Its positive predictive value is 43%, so that of every 100 patients with a positive signal, 43 actually have cancer
  • the legislation’s price tag could reduce political support. According to one private company’s estimate, the test could cost the government $39 billion to $145 billion over a decade. Mr. Goldman counters that analysts usually overestimate the costs and underestimate the benefits of medical interventions.
  • Because Grail uses machine learning to detect DNA-methylation cancer linkages, the Grail test’s accuracy should improve as more tests and patient data are collected
  • regulators may balk at approving the test, and insurers at covering it, until it becomes cheaper and more reliable.
  • How would the FDA weigh the risk that a false positive on a test like Grail’s could require invasive follow-up testing against the dire but hard-to-quantify risk that a deadly cancer wouldn’t be caught until it’s much harder to treat? It’s unclear.
  • some experts urge the FDA to require large randomized controlled trials before approving blood cancer tests. “Multicancer screening would entail tremendous costs and potentially substantial harms,” H. Gilbert Welch and Tanujit Dey of Brigham and Women’s Hospital wrote
  • Dr. Welch and Mr. Dey also suggested that companies should be required to prove their tests reduce overall mortality, even though the FDA doesn’t require drugmakers to prove their products reduce deaths or extend life. Clinical trials for the mRNA Covid vaccines didn’t show they reduced deaths.
  • One alternative is to rely on real-world studies, which Grail is already doing. One study of patients 50 and older without signs of cancer showed that the test doubled the number of cancers detected.
  • One recurring problem he has seen: “Epidemiologists are always getting cancer wrong,” he says. “Epidemiologists a decade ago said U.S. overtreats cancers. Well, no, the EU undertreats cancer.”
  • A 2012 study that he co-authored found that the higher U.S. spending on cancer care relative to Europe between 1983 and 1999 resulted in significantly higher survival rates for American patients than for those in Europe
  • By his study’s calculation, U.S. spending on cancer treatments during that period resulted in $556 billion in net benefits owing to reduced mortality.
  • He expects Galleri and other multicancer early-detection tests to reduce deaths and produce public-health and economic benefits that exceed their monetary costs
  • Expanding access to multicancer early-detection tests could also help solve the chicken-and-egg problem of drug development. Because few patients are diagnosed at early stages of some cancers, it’s hard to develop treatments for them
  • the positive predictive value for some recommended cancer screenings is far lower. Fewer than 1 in 10 women with an abnormal finding on a mammogram are diagnosed with breast cancer.
  • Mr. Royse makes the same point with personal force. “I would be dead right now if not for multicancer early-detection testing,” Mr. Royse told an FDA advisory committee last fall. “The longer the FDA waits, the more people are going to die. It’s that simple.”
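The article’s false-positive rate and positive-predictive-value figures are linked by Bayes’ rule, which is why PPV looks low even for an accurate test. A minimal sketch of that arithmetic follows; the 52% sensitivity is the article’s overall detection figure, while the ~0.7% prevalence is a hypothetical value chosen so the numbers line up with the 43% PPV cited, not a figure from the article:

```python
def ppv(sensitivity: float, false_positive_rate: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule.

    Per tested person: true positives = sensitivity * prevalence,
    false positives = false_positive_rate * (1 - prevalence),
    and PPV = TP / (TP + FP).
    """
    tp = sensitivity * prevalence
    fp = false_positive_rate * (1 - prevalence)
    return tp / (tp + fp)

# Hypothetical prevalence of ~0.7% among those tested (an assumption,
# chosen to reproduce the cited 43% PPV at a 0.5% false-positive rate).
print(round(ppv(0.52, 0.005, 0.0073), 2))  # → 0.43
```

The same formula explains the mammogram comparison above: with a much lower per-screen cancer prevalence, even a good test yields a PPV under 10%.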
Javier E

'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets | Israel-G... - 0 views

  • All six said that Lavender had played a central role in the war, processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.
  • The health ministry in the Hamas-run territory says 32,000 Palestinians have been killed in the conflict in the past six months. UN data shows that in the first month of the war alone, 1,340 families suffered multiple losses, with 312 families losing more than 10 members.
  • Several of the sources described how, for certain categories of targets, the IDF applied pre-authorised allowances for the estimated number of civilians who could be killed before a strike was authorised.
  • Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants. Attacks on such targets were typically carried out using unguided munitions known as “dumb bombs”, the sources said, destroying entire homes and killing all their occupants.
  • “You don’t want to waste expensive bombs on unimportant people – it’s very expensive for the country and there’s a shortage [of those bombs],” one intelligence officer said. Another said the principal question they were faced with was whether the “collateral damage” to civilians allowed for an attack.
  • “Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”
  • According to conflict experts, if Israel has been using dumb bombs to flatten the homes of thousands of Palestinians who were linked, with the assistance of AI, to militant groups in Gaza, that could help explain the shockingly high death toll in the war.
  • Details about the specific kinds of data used to train Lavender’s algorithm, or how the programme reached its conclusions, are not included in the accounts published by +972 or Local Call. However, the sources said that during the first few weeks of the war, Unit 8200 refined Lavender’s algorithm and tweaked its search parameters.
  • Responding to the publication of the testimonies in +972 and Local Call, the IDF said in a statement that its operations were carried out in accordance with the rules of proportionality under international law. It said dumb bombs are “standard weaponry” that are used by IDF pilots in a manner that ensures “a high level of precision”.
  • “The IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist,” it added. “Information systems are merely tools for analysts in the target identification process.”
  • In earlier military operations conducted by the IDF, producing human targets was often a more labour-intensive process. Multiple sources who described target development in previous wars to the Guardian, said the decision to “incriminate” an individual, or identify them as a legitimate target, would be discussed and then signed off by a legal adviser.
  • In the weeks and months after 7 October, this model for approving strikes on human targets was dramatically accelerated, according to the sources. As the IDF’s bombardment of Gaza intensified, they said, commanders demanded a continuous pipeline of targets.
  • “We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us,” said one intelligence officer. “We were told: now we have to fuck up Hamas, no matter what the cost. Whatever you can, you bomb.”
  • Lavender was developed by the Israel Defense Forces’ elite intelligence division, Unit 8200, which is comparable to the US’s National Security Agency or GCHQ in the UK.
  • After randomly sampling and cross-checking its predictions, the unit concluded Lavender had achieved a 90% accuracy rate, the sources said, leading the IDF to approve its sweeping use as a target recommendation tool.
  • Lavender created a database of tens of thousands of individuals who were marked as predominantly low-ranking members of Hamas’s military wing, they added. This was used alongside another AI-based decision support system, called the Gospel, which recommended buildings and structures as targets rather than individuals.
  • The accounts include first-hand testimony of how intelligence officers worked with Lavender and how the reach of its dragnet could be adjusted. “At its peak, the system managed to generate 37,000 people as potential human targets,” one of the sources said. “But the numbers changed all the time, because it depends on where you set the bar of what a Hamas operative is.”
  • broadly, and then the machine started bringing us all kinds of civil defence personnel, police officers, on whom it would be a shame to waste bombs. They help the Hamas government, but they don’t really endanger soldiers.”
  • Before the war, US and Israeli officials estimated the membership of Hamas’s military wing at approximately 25,000-30,000 people.
  • there was a decision to treat Palestinian men linked to Hamas’s military wing as potential targets, regardless of their rank or importance.
  • According to +972 and Local Call, the IDF judged it permissible to kill more than 100 civilians in attacks on top-ranking Hamas officials. “We had a calculation for how many [civilians could be killed] for the brigade commander, how many [civilians] for a battalion commander, and so on,” one source said.
  • Another source, who justified the use of Lavender to help identify low-ranking targets, said that “when it comes to a junior militant, you don’t want to invest manpower and time in it”. They said that in wartime there was insufficient time to carefully “incriminate every target”
  • “So you’re willing to take the margin of error of using artificial intelligence, risking collateral damage and civilians dying, and risking attacking by mistake, and to live with it,” they added.
  • When it came to targeting low-ranking Hamas and PIJ suspects, they said, the preference was to attack when they were believed to be at home. “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” one said. “It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
  • Such a strategy risked higher numbers of civilian casualties, and the sources said the IDF imposed pre-authorised limits on the number of civilians it deemed acceptable to kill in a strike aimed at a single Hamas militant. The ratio was said to have changed over time, and varied according to the seniority of the target.
  • The IDF’s targeting processes in the most intensive phase of the bombardment were also relaxed, they said. “There was a completely permissive policy regarding the casualties of [bombing] operations,” one source said. “A policy so permissive that in my opinion it had an element of revenge.”
  • “There were regulations, but they were just very lenient,” another added. “We’ve killed people with collateral damage in the high double digits, if not low triple digits. These are things that haven’t happened before.” There appear to have been significant fluctuations in the figure that military commanders would tolerate at different stages of the war
  • One source said that the limit on permitted civilian casualties “went up and down” over time, and at one point was as low as five. During the first week of the conflict, the source said, permission was given to kill 15 non-combatants to take out junior militants in Gaza
  • at one stage earlier in the war they were authorised to kill up to “20 uninvolved civilians” for a single operative, regardless of their rank, military importance, or age.
  • “It’s not just that you can kill any person who is a Hamas soldier, which is clearly permitted and legitimate in terms of international law,” they said. “But they directly tell you: ‘You are allowed to kill them along with many civilians.’ … In practice, the proportionality criterion did not exist.”
  • Experts in international humanitarian law who spoke to the Guardian expressed alarm at accounts of the IDF accepting and pre-authorising collateral damage ratios as high as 20 civilians, particularly for lower-ranking militants. They said militaries must assess proportionality for each individual strike.
  • An international law expert at the US state department said they had “never remotely heard of a one to 15 ratio being deemed acceptable, especially for lower-level combatants. There’s a lot of leeway, but that strikes me as extreme”.
  • Sarah Harrison, a former lawyer at the US Department of Defense, now an analyst at Crisis Group, said: “While there may be certain occasions where 15 collateral civilian deaths could be proportionate, there are other times where it definitely wouldn’t be. You can’t just set a tolerable number for a category of targets and say that it’ll be lawfully proportionate in each case.”
  • Whatever the legal or moral justification for Israel’s bombing strategy, some of its intelligence officers appear now to be questioning the approach set by their commanders. “No one thought about what to do afterward, when the war is over, or how it will be possible to live in Gaza,” one said.
  • Another said that after the 7 October attacks by Hamas, the atmosphere in the IDF was “painful and vindictive”. “There was a dissonance: on the one hand, people here were frustrated that we were not attacking enough. On the other hand, you see at the end of the day that another thousand Gazans have died, most of them civilians.”
Javier E

'Social Order Could Collapse' in AI Era, Two Top Japan Companies Say - WSJ - 0 views

  • Japan’s largest telecommunications company and the country’s biggest newspaper called for speedy legislation to restrain generative artificial intelligence, saying democracy and social order could collapse if AI is left unchecked.
  • the manifesto points to rising concern among American allies about the AI programs U.S.-based companies have been at the forefront of developing.
  • The Japanese companies’ manifesto, while pointing to the potential benefits of generative AI in improving productivity, took a generally skeptical view of the technology
  • Without giving specifics, it said AI tools have already begun to damage human dignity because the tools are sometimes designed to seize users’ attention without regard to morals or accuracy.
  • Unless AI is restrained, “in the worst-case scenario, democracy and social order could collapse, resulting in wars,” the manifesto said.
  • It said Japan should take measures immediately in response, including laws to protect elections and national security from abuse of generative AI.
  • The Biden administration is also stepping up oversight, invoking emergency federal powers last October to compel major AI companies to notify the government when developing systems that pose a serious risk to national security. The U.S., U.K. and Japan have each set up government-led AI safety institutes to help develop AI guidelines.
  • NTT and Yomiuri said their manifesto was motivated by concern over public discourse. The two companies are among Japan’s most influential in policy. The government still owns about one-third of NTT, formerly the state-controlled phone monopoly.
  • Yomiuri Shimbun, which has a morning circulation of about six million copies according to industry figures, is Japan’s most widely read newspaper. Under the late Prime Minister Shinzo Abe and his successors, the newspaper’s conservative editorial line has been influential in pushing the ruling Liberal Democratic Party to expand military spending and deepen the nation’s alliance with the U.S.
  • The Yomiuri’s news pages and editorials frequently highlight concerns about artificial intelligence. An editorial in December, noting the rush of new AI products coming from U.S. tech companies, said “AI models could teach people how to make weapons or spread discriminatory ideas.” It cited risks from sophisticated fake videos purporting to show politicians speaking.
  • NTT is active in AI research, and its units offer generative AI products to business customers. In March, it started offering these customers a large-language model it calls “tsuzumi,” which is akin to OpenAI’s ChatGPT but is designed to use less computing power and work better in Japanese-language contexts.