Abortion Rights Debate Shifts to Pregnancy and Fertility as Election Nears - The New York Times

  • The public conversation about abortion has grown into one about the complexities of pregnancy and reproduction, as the consequences of bans have played out in the news. The question is no longer just whether you can get an abortion, but also, Can you get one if pregnancy complications put you in septic shock? Can you find an obstetrician when so many are leaving states with bans? If you miscarry, will the hospital send you home to bleed? Can you and your partner do in vitro fertilization?
  • That shift helps explain why a record percentage of Americans are now declaring themselves single-issue voters on abortion rights — especially among Black voters, Democrats, women and those ages 18 to 29. Republican women are increasingly saying their party’s opposition to abortion is too extreme, and Democrats are running on the issue after years of running away from it.
  • Tresa Undem, who has been polling people on abortion for 25 years, estimated that before the Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization, the case that overturned Roe, less than 15 percent of the public considered abortion personally relevant — women who could get pregnant and would choose an abortion.
  • “People used to talk about politicians trying to control our bodies,” she said. “Now it’s, they have no business getting involved in these medical decisions, these politicians don’t have medical expertise, they’re making these laws, and they’re not basing it on health care or science.”
  • Seventy-three percent of independents who support abortion rights said stories about women almost dying because of bans would affect how they vote.
  • “Now it’s about pregnancy, and everybody knows someone who had a baby or wants to have a baby or might get pregnant,” she said. “It’s profoundly personal to a majority of the public.”
  • Anti-abortion groups have responded by trying to carve out a difference between “elective abortion” for unwanted pregnancies — which they want banned — and “maternal fetal separation” in medical emergencies. (The medical procedure is the same.)
  • Opponents have long stigmatized abortion as something irresponsible women use as birth control or because they care more about their careers than having children. “When the focus shifts to the dangers that abortion bans inflict on pregnant people,” said Reva Siegel, a constitutional law professor at Yale who has written extensively about the country’s abortion conflict, “it’s easier for Americans to talk about.”
  • Technology and criminal law have flipped the script, she said.
  • Before Roe legalized abortion nationally in 1973, the law allowed more leeway for what were considered “therapeutic abortions.” Doctors, often solo practitioners, could use their good faith judgment to provide them. Even the Southern Baptist Convention supported abortions in cases of fetal deformity or when a woman’s physical or mental health was at risk.
  • Now the threat of prosecution, $100,000 fines and the loss of their medical licenses has chilled doctors and hospital systems in their treatment of women with pregnancy complications. More often than not in some states, lawyers are making the decisions.
  • In Georgia, she said, more people opposed the state’s ban on abortion after six weeks of pregnancy once they were told that this meant two weeks after the average woman misses her period — not, as her own partner believed, six weeks after conception. Some voters, she said, believed that six weeks meant six weeks after women found out they were pregnant.

AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045' ...

  • American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with its sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer, and this time his predictions no longer seem so wacky.
  • Your 2029 and 2045 projections haven’t changed… I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain, and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time.
  • Why write this book? The Singularity Is Near talked about the future, but that was 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and at the coming breakthroughs.
  • It is hard to imagine what this would be like, but it doesn’t sound very appealing… Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!
  • The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation. [A worked example of this doubling arithmetic appears after this list.]
  • What’s missing currently to bring AI to where you are predicting it will be in 2029? One is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain.
  • LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.
  • What exactly is the Singularity? Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.
  • Why should we believe your dates? I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
  • I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development].
  • All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.
  • Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.
  • The book looks in detail at AI’s job-killing potential. Should we be worried? Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense even 10 years ago. Today we have more jobs than we’ve ever had, and US average personal income per hour worked is 10 times what it was 100 years ago, adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.
  • Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity, where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years.
  • What is your own plan for immortality? My first plan is to stay alive, therefore reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s
  • I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.
  • What should we be doing now to best prepare for the future? It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
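A quick arithmetic check on the price-performance claim quoted above (a minimal sketch, not from the interview; the function name and time spans are illustrative). Doubling every 15 months compounds to roughly 1.74x per year, 256x per decade, and 2^16 ≈ 65,536x over the roughly twenty years between Kurzweil’s two books:

```python
# Hypothetical illustration (not from the article): compound growth
# implied by one doubling of price-performance every 15 months.

DOUBLING_MONTHS = 15  # doubling period quoted in the interview

def growth_factor(months: float) -> float:
    """Total growth multiple after `months`, assuming steady doublings."""
    return 2.0 ** (months / DOUBLING_MONTHS)

print(f"after 1 year:   {growth_factor(12):.2f}x")    # ~1.74x
print(f"after 10 years: {growth_factor(120):.0f}x")   # 2^8 = 256x
print(f"after 20 years: {growth_factor(240):,.0f}x")  # 2^16 = 65,536x
```

Note that the headline “millionfold” figure presumably refers to intelligence expanded by merging with the cloud, as described in the Singularity answer above, not to hardware price-performance alone.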

The Bottomless College Parent Trap - WSJ

  • Payments to thousands of former and current athletes will approach $2.8 billion, minus the trial lawyers’ cut of the class-action suits. This follows the NCAA’s decision to let college athletes benefit financially from their names, images and likenesses.
  • Most legal analysis of the settlement concludes that the days of the “amateur” college athlete are over. In the future, the men and women on Division I teams and others likely will be regarded as professionals who will be paid to play by universities through revenue-sharing agreements of up to $20 million a year per school.

Wrong Case, Right Verdict - The Atlantic

  • If Trump does somehow return to the presidency, his highest priority will be smashing up the American legal system to punish it for holding him to some kind of account—and to prevent it from holding him to higher account for the yet-more-terrible charges pending before state and federal courts. The United States can have a second Trump presidency, or it can retain the rule of law, but not both.

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company.
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Mr. Kokotajlo said he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist.
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”

Opinion | How We've Lost Our Moorings as a Society - The New York Times - 0 views

  • To my mind, one of the saddest things that has happened to America in my lifetime is that we’ve lost so many of our mangroves. They are endangered everywhere today — but not just in nature.
  • Our society itself has lost so many of its social, normative and political mangroves as well — all those things that used to filter toxic behaviors, buffer political extremism and nurture healthy communities and trusted institutions for young people to grow up in and which hold our society together.
  • You see, shame used to be a mangrove.
  • That shame mangrove has been completely uprooted by Trump.
  • The reason people felt ashamed is that they felt fidelity to certain norms — so their cheeks would turn red when they knew they had fallen short.
  • “In the kind of normless world we have entered where societal, institutional and leadership norms are being eroded,” Seidman said to me, “no one has to feel shame anymore because no norm has been violated.”
  • People in high places doing shameful things is hardly new in American politics and business. What is new, Seidman argued, “is so many people doing it so conspicuously and with such impunity: ‘My words were perfect,’ ‘I’d do it again.’ That is what erodes norms — that and making everyone else feel like suckers for following them.”
  • Nothing is more corrosive to a vibrant democracy and healthy communities, added Seidman, than “when leaders with formal authority behave without moral authority.”
  • “Without leaders who, through their example and decisions, safeguard our norms and celebrate them and affirm them and reinforce them, the words on paper — the Bill of Rights, the Constitution or the Declaration of Independence — will never unite us.”
  • Trump wants to destroy our social and legal mangroves and leave us in a broken ethical ecosystem, because he and people like him best thrive in a broken system.
  • He keeps pushing our system to its breaking point, flooding the zone with lies so that the people trust only him and the truth is only what he says it is. In nature, as in society, when you lose your mangroves, you get flooding with lots of mud.
  • Responsibility, especially among those who have taken oaths of office — another vital mangrove — has also experienced serious destruction.
  • It used to be that if you had the incredible privilege of serving as U.S. Supreme Court justice, in your wildest dreams you would never have an American flag hanging upside down.
  • Your sense of responsibility to appear above partisan politics, to uphold the integrity of the court’s rulings, would not allow it.
  • Civil discourse and engaging with those with whom you disagree — instead of immediately calling for them to be fired — also used to be a mangrove.
  • when moral arousal manifests as moral outrage — and immediate demands for firings — “it can result in a vicious cycle of moral outrage being met with equal outrage, as opposed to a virtuous cycle of dialogue and the hard work of forging real understanding.”
  • In November 2022, the Heterodox Academy, a nonprofit advocacy group, surveyed 1,564 full-time college students ages 18 to 24. The group found that nearly three in five students (59 percent) hesitate to speak about controversial topics like religion, politics, race, sexual orientation and gender for fear of negative backlashes by classmates.
  • Locally owned small-town newspapers used to be a mangrove buffering the worst of our national politics. A healthy local newspaper is less likely to go too far to one extreme or another, because its owners and editors live in the community and they know that for their local ecosystem to thrive, they need to preserve and nurture healthy interdependencies.
  • In 2023, the loss of local newspapers accelerated to an average of 2.5 per week, “leaving more than 200 counties as ‘news deserts’ and meaning that more than half of all U.S. counties now have limited access to reliable local news and information.”
  • As in nature, it leaves the local ecosystem with fewer healthy interdependencies, making it more vulnerable to invasive species and disease — or, in society, diseased ideas.
  • “It’s not that the people in these communities have changed. It’s that if that’s what you are being fed, day in and day out, then you’re going to come to every conversation with a certain set of predispositions that are really hard to break through.”
  • We have gone from you’re not supposed to say “hell” on the radio to a nation that is now being permanently exposed to for-profit systems of political and psychological manipulation (and throw in Russia and China stoking the fires today as well), so people are not just divided, but being divided. Yes, keeping Americans morally outraged is big business at home now and war by other means by our geopolitical rivals.
  • More than ever, we are living in the “never-ending storm” that Seidman described to me back in 2016, in which moral distinctions, context and perspective — all the things that enable people and politicians to make good judgments — get blown away.
  • Blown away — that is exactly what happens to the plants, animals and people in an ecosystem that loses its mangroves.
  • A trend ailing America today: how much we’ve lost our moorings as a society.
  • Civility itself also used to be a mangrove.
  • “Why the hell not?” Drummond asks. “You’re not supposed to say ‘hell,’ either,” the announcer says. You are not supposed to say “hell,” either. What a quaint thought. That is a polite exclamation point in today’s social media.
  • Another vital mangrove is religious observance. It has been declining for decades.
  • So now the most partisan national voices on Fox News, or MSNBC — or any number of polarizing influencers like Tucker Carlson — go straight from their national studios direct to small-town America, unbuffered by a local paper’s or radio station’s impulse to maintain a community where people feel some degree of connection and mutual respect.
  • In a 2021 interview with my colleague Ezra Klein, Barack Obama observed that when he started running for the presidency in 2007, “it was still possible for me to go into a small town, in a disproportionately white conservative town in rural America, and get a fair hearing because people just hadn’t heard of me. … They didn’t have any preconceptions about what I believed. They could just take me at face value.”

AI Has Become a Technology of Faith - The Atlantic

  • Altman told me that his decision to join Huffington stemmed partly from hearing from people who use ChatGPT to self-diagnose medical problems—a notion I found potentially alarming, given the technology’s propensity to return hallucinated information. (If physicians are frustrated by patients who rely on Google or Reddit, consider how they might feel about patients showing up in their offices stuck on made-up advice from a language model.)
  • I noted that it seemed unlikely to me that anyone besides ChatGPT power users would trust a chatbot in this way, that it was hard to imagine people sharing all their most intimate information with a computer program, potentially to be stored in perpetuity.
  • “I and many others in the field have been positively surprised about how willing people are to share very personal details with an LLM,” Altman told me. He said he’d recently been on Reddit reading testimonies of people who’d found success by confessing uncomfortable things to LLMs. “They knew it wasn’t a real person,” he said, “and they were willing to have this hard conversation that they couldn’t even talk to a friend about.”
  • That willingness is not reassuring. For example, it is not far-fetched to imagine insurers wanting to get their hands on this type of medical information in order to hike premiums. Data brokers of all kinds will be similarly keen to obtain people’s real-time health-chat records. Altman made a point to say that this theoretical product would not trick people into sharing information.
  • Neither Altman nor Huffington had an answer to my most basic question—What would the product actually look like? Would it be a smartwatch app, a chatbot? A Siri-like audio assistant?—but Huffington suggested that Thrive’s AI platform would be “available through every possible mode,” that “it could be through your workplace, like Microsoft Teams or Slack.”
  • This led me to propose a hypothetical scenario in which a company collects this information and stores it inappropriately or uses it against employees. What safeguards might the company apply then? Altman’s rebuttal was philosophical. “Maybe society will decide there’s some version of AI privilege,” he said. “When you talk to a doctor or a lawyer, there’s medical privileges, legal privileges. There’s no current concept of that when you talk to an AI, but maybe there should be.”
  • So much seems to come down to: How much do you want to believe in a future mediated by intelligent machines that act like humans? And: Do you trust these people?
  • A fundamental question has loomed over the world of AI since the concept cohered in the 1950s: How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present? Whatever is built today is judged partially on its own merits, but also—perhaps even more important—on what it might presage about what is coming next.
  • the models “just want to learn”—a quote attributed to the OpenAI co-founder Ilya Sutskever that means, essentially, that if you throw enough money, computing power, and raw data into these networks, the models will become capable of making ever more impressive inferences. True believers argue that this is a path toward creating actual intelligence (many others strongly disagree). In this framework, the AI people become something like evangelists for a technology rooted in faith: Judge us not by what you see, but by what we imagine.
  • I found it outlandish to invoke America’s expensive, inequitable, and inarguably broken health-care infrastructure when hyping a for-profit product that is so nonexistent that its founders could not tell me whether it would be an app or not.
  • Thrive AI Health is profoundly emblematic of this AI moment precisely because it is nothing, yet it demands that we entertain it as something profound.
  • You don’t have to get apocalyptic to see the way that AI’s potential is always muddying people’s ability to evaluate its present. For the past two years, shortcomings in generative-AI products—hallucinations; slow, wonky interfaces; stilted prose; images that showed too many teeth or couldn’t render fingers; chatbots going rogue—have been dismissed by AI companies as kinks that will eventually be worked out.
  • Faith is not a bad thing. We need faith as a powerful motivating force for progress and a way to expand our vision of what is possible. But faith, in the wrong context, is dangerous, especially when it is blind. An industry powered by blind faith seems particularly troubling.
  • The greatest trick of a faith-based industry is that it effortlessly and constantly moves the goal posts, resisting evaluation and sidestepping criticism. The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.