
TOK Friends / Group items tagged: service


karenmcgregor

Is ComputerNetworkAssignmentHelp.com a Legitimate Source for Network Security Assignmen... - 0 views

In the dynamic landscape of academic support services, finding a trustworthy platform for network security assignment writing help is crucial. Today, we'll delve into the legitimacy of https://www....

#networksecurityassignmentwritinghelp #networksecurity #onlineassignmenthelp education

started by karenmcgregor on 08 Jan 24 no follow-up yet
clairemann

Service dogs can help veterans with PTSD - growing evidence shows they may reduce anxie... - 0 views

  • As many as 1 in 5 of the roughly 2.7 million Americans deployed to Iraq and Afghanistan since 2001 are experiencing post-traumatic stress disorder.
  • Our lab is studying whether service dogs can help these military veterans, who may also have depression and anxiety – and run an elevated risk of death by suicide – in addition to having PTSD.
  • Unlike emotional support dogs or therapy dogs, service dogs must be trained to do specific tasks – in this case, helping alleviate PTSD symptoms. In keeping with the Americans with Disabilities Act, service dogs are allowed in public places where other dogs are not.
  • ...2 more annotations...
  • Once veterans got service dogs, they described themselves in surveys as more satisfied with their lives, said they felt a greater sense of well-being and deemed themselves as having better relationships with friends and loved ones.
  • There can also be a new sense of stigma that goes along with making a disability that might otherwise be hidden readily apparent. Someone who has PTSD might not stick out until they get a service dog that is always present.
karenmcgregor

Empower Your Studies with a Trusted CCNA Assignment Helper: Navigating the Path to Netw... - 2 views

Are you a student immersed in the complexities of CCNA coursework, searching for a reliable CCNA assignment helper to lighten your academic load? Look no further! At computernetworkassignmenthelp.c...

#domyccnaassignment #ccna #ccnaassignmenthelp #paytodomyccnaassignment #education

started by karenmcgregor on 05 Dec 23 no follow-up yet
dpittenger

Four Demoted at Secret Service in Wake of Scandals - 0 views

  • The Secret Service’s interim director has demoted four of the agency’s senior executives as part of a management shake-up after a series of scandals, the agency said Wednesday.
  • The agency’s previous director, Julia Pierson, resigned amid criticism of the agency after a man with a knife climbed the White House fence, ran through the North Portico doors and into the East Room before he was tackled by agency officers.
  • “The Secret Service has suffered from a lack of leadership and that has had a detrimental impact on security, training, protocols, and overall culture,” said Mr. Chaffetz, the chairman of the House Oversight Committee and an ardent critic of the Secret Service.
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • ...72 more annotations...
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived. (A back-of-the-envelope illustration of this incentive follows these annotations.)
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
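
A note on the click-prediction incentive in the annotation above about Page and Brin launching the ad auction: under pay-per-click pricing, expected revenue is roughly impressions × click-through rate × price per click, so behavioral predictions that nudge the realized click rate up even slightly are worth a great deal at Google's scale. The sketch below is purely illustrative, with invented numbers; it is not drawn from Zuboff's book or from Google's systems.

```python
# Purely illustrative: how a tiny improvement in click-through rate (CTR)
# translates into revenue under pay-per-click pricing. All numbers are invented.

def expected_revenue(impressions: int, ctr: float, price_per_click: float) -> float:
    """Expected income when the platform is paid only when users click."""
    return impressions * ctr * price_per_click

daily_impressions = 1_000_000_000   # hypothetical ad views per day
price_per_click = 0.50              # hypothetical average price per click, in dollars

baseline = expected_revenue(daily_impressions, 0.0100, price_per_click)
improved = expected_revenue(daily_impressions, 0.0102, price_per_click)  # slightly better targeting

print(f"Baseline daily revenue:           ${baseline:,.0f}")
print(f"With slightly better predictions: ${improved:,.0f}")
print(f"Extra revenue per day:            ${improved - baseline:,.0f}")
```
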
lucieperloff

Under Biden, Diplomacy Is an Attractive Career Again - The New York Times - 1 views

  • The steady erosion of traditional American diplomatic principles under the Trump administration had taken a toll, he added.
  • Their colleagues in China were mysteriously falling sick.
  • But when President Biden was elected, Mr. Luce said he was encouraged by the new president’s embrace of diplomacy, along with his nominations for senior foreign policy posts. Mr. Luce took the Foreign Service exam in February.
  • ...14 more annotations...
  • And despite the rising interest, applicants have signaled the agency must tackle racism and improve diversity in the United States’ diplomatic ranks.
  • Applications to the school’s graduate programs, including its master of science degree in foreign service, jumped 40 percent this cycle,
  • Foreign Service officers — who undergo a rigorous selection process that includes written exams, oral assessments, security checks and medical clearances — pledge to be nonpartisan as they become the machinery of foreign policymaking that political appointees rely on for expertise and continuity between administrations.
  • During Mr. Trump’s tenure, his administration installed an “America First” policy, prioritizing domestic interests over foreign policy concerns and minimizing the importance of day-to-day diplomatic work.
  • “That undercuts not only morale but also a policy process that depends on apolitical experts airing contrary views, however inconvenient they may be to the politically appointed leadership,
  • “This is a critical time in terms of America’s role in the world. We need a strong, diverse and talented pool of people from which to select our newest diplomats and aid officers.”
  • It “has been a difficult few years,” he said. “You are the face of America, and it matters.
  • In the time between Mr. Biden’s election and his inauguration, the group grew by over 2,000 members,
  • How could he promote American values abroad when they were being upended at home, he wondered.
  • “President Trump was very unusual,” Mr. Whittier said. “That’s what completely put me off of joining the Foreign Service.”
  • The Biden administration said that tackling the lack of diversity in the diplomatic corps would be a priority.
  • When she brought up issues to State Department officials, they advised her against speaking out and transferred her to Mexico City, she said.
  • she felt the “new administration brings a lot of hope to people who were marginalized in the past.”
  • “When you see more faces that look like you,” she said, “I think it definitely will bring more people to work in public service.”
katherineharron

Postal service forced to keep working despite shortages of cash and protection - CNNPol... - 0 views

  • United States Postal Service workers, who are still delivering and sorting mail at distribution centers around the country even as millions of Americans telecommute, are worried that they may not have the protection -- or even the funding -- they will need to keep delivering mail for months longer.
  • Reports have popped up across the country where postal workers say they don't have hand sanitizer, gloves or masks, and are being told to work despite illnesses and are looking to community donations to address supply shortfalls.
  • Rep. Mary Gay Scanlon, a Pennsylvania Democrat, said postal workers in her region have said they don't have hand sanitizer or materials needed to keep post offices disinfected.
  • ...2 more annotations...
  • For years, USPS has struggled to stay afloat with the emergence of technology threatening their bottom line, but coronavirus has further strained their financial situation as mail volume has declined. The Postal Service has warned that it could be insolvent by June, the House Oversight Committee said this week.
  • "The Postal Service remains concerned that this measure will be insufficient to enable the Postal Service to withstand the significant downturn in our business that could directly result from the pandemic," said USPS spokesman David Partenheimer. "Under a worst case scenario, such downturn could result in the Postal Service having insufficient liquidity to continue operations."
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee,
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually. (A simple version of such sample-based estimation is sketched after these annotations.)
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continues to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
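
On the disputed question above of measuring the prevalence of spam and bots, the annotation about teams sampling thousands of tweets per day points at the standard approach: hand-label a random sample, then put a confidence interval around the observed proportion. The sketch below is a minimal, generic version of that estimation with made-up counts; it is not Twitter's actual tooling.

```python
# Minimal sketch of sample-based prevalence estimation. The counts are invented;
# this only shows the statistical mechanics, not Twitter's internal systems.
import math

def proportion_with_interval(positives: int, sample_size: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% confidence interval."""
    p = positives / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: reviewers flag 230 spam items in a random sample of 5,000 tweets.
estimate, low, high = proportion_with_interval(positives=230, sample_size=5000)
print(f"Estimated spam prevalence: {estimate:.1%} (95% CI roughly {low:.1%} to {high:.1%})")
```
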
demetriar

Spotify Wants Listeners to Break Down Music Barriers - NYTimes.com - 0 views

  • Their cultural acumen is entirely the product of technology — in particular, being introduced to new artists through Spotify, the world’s largest subscription music-streaming service. According to executives at Spotify, my children offer a peek at the future of music consumption.
  • On average, the company said, the service exposes each of these listeners to one new artist every day. That is making listeners less beholden to music of certain styles and eras. Instead, many of us will try anything, just because we can easily sample it online.
  • Spotify is betting that fixed musical genres will fade away. In its new version rolling out to iPhone users, the company has expanded its effort to program for moods and activities rather than merely certain kinds of musical tastes.
  • ...5 more annotations...
  • If Spotify is right about our increasing willingness to try new stuff — and critics who follow the pop charts said it may be — the trend could upend how we think about music.
  • Until recently, because of the narrowcasting ethos of terrestrial radio, music was fiercely segregated by genre. In an era less bound by those niches and instead dominated by an online free-for-all, we may discover new artists more quickly than in the past — though, on the other side of the coin, we may also develop less fierce attachments to certain artists, flitting, as my children do, between anything and everything. For better or worse, streaming services may turn us into cultural nomads.
  • By suggesting tracks based on my activities and parts of the day, I found the service exposed me to music out of my comfort zone.
  • Programmers for radio stations also look at these services to decide what to add to their rotations.
  • “These were all songs that were different from what radio was playing, and radio tends to be a homogeneous medium,” Mr. Molanphy said.
  • How online music streaming sorts music into categories other than typical genres, allowing people to be exposed to more types of music. Why do we categorize music by genre? How has online music streaming affected our knowledge of music?
Javier E

A Super-Simple Way to Understand the Net Neutrality Debate - NYTimes.com - 0 views

  • there is a really simple way of thinking of the debate over net neutrality: Is access to the Internet more like access to electricity, or more like cable television service?
  • For all the technical complexity of generating electricity and distributing it to millions of people, the economic arrangement is very simple: I give them money. They give me electricity. I do with it what I will.
  • Comcast, my cable provider, offers me a menu of packages from which I might choose, each with a different mix of channels. It goes through long and sometimes arduous negotiations with the owners of those cable channels and has a different business arrangement with each of them. The details of those arrangements are opaque to me as the consumer; all I know is that I can get the movie package for X dollars a month or the sports package for Y dollars and so on.
  • ...4 more annotations...
  • One theory of the case, and the one that the Obama administration embraced Monday, is that the Internet is like electricity. It is fundamental to the 21st century economy, as essential to functioning in modern society as electricity. It is a public utility. “We cannot allow Internet service providers (ISPs) to restrict the best access or to pick winners and losers in the online marketplace for services and ideas,” the president said in his written statement.
  • just as your electric utility has no say in how you use the electricity they sell you, the Internet should be a reliable way to access content produced by anyone, regardless of whether they have any special business arrangement with the utility.
  • Those arguing against net neutrality, most significantly the cable companies, say the Internet will be a richer experience if the profit motive applies, if they can negotiate deals with major content providers (the equivalent of cable channels) so that Netflix or Hulu or other streaming services that use huge bandwidth have to pay for the privilege.
  • It would also give your Internet provider considerably more economic leverage. It would, in the non-net-neutrality world, be free to throttle the speed with which you could access services that don’t pay up, or block sites entirely, as surely as you cannot watch a cable channel that your cable provider chooses not to offer (perhaps because of a dispute with the channel over fees).
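
Throttling of the kind described in the last annotation is typically implemented with a rate limiter such as a token bucket, which caps a flow's sustained throughput while still allowing short bursts. The sketch below shows that general mechanism only; it is not any ISP's actual configuration, and the limits are invented.

```python
# Simplified token-bucket rate limiter, the general mechanism behind traffic
# throttling. Purely illustrative; the limits below are invented.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec       # sustained allowance
        self.capacity = burst_bytes          # maximum burst size
        self.tokens = burst_bytes
        self.last_refill = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Return True if the packet may be forwarded now, False to delay or drop it."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# Hypothetical: cap a non-paying video service at ~1 MB/s with 5 MB bursts allowed.
video_limiter = TokenBucket(rate_bytes_per_sec=1_000_000, burst_bytes=5_000_000)
print(video_limiter.allow(1500))   # a typical packet passes while tokens remain
```
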
Javier E

Why Tech Support Is (Purposely) Unbearable - The New York Times - 0 views

  • “Don’t think companies haven’t studied how far they can take things in providing the minimal level of service,” Mr. Robbins said. “Some organizations have even monetized it by intentionally engineering it so you have to wait an hour at least to speak to someone in support, and while you are on hold, you’re hearing messages like, ‘If you’d like premium support, call this number and for a fee, we will get to you immediately.’”
  • The most egregious offenders are companies like cable and mobile service providers, which typically have little competition and whose customers are bound by contracts or would be considerably inconvenienced if they canceled their service.
  • When things don’t make sense and feel out of control, mental health experts say, humans instinctively feel threatened. Though you would like to think you can employ reason in this situation, you’re really just a mass of neural impulses and primal reactions. Think fight or flight, but you can’t do either because you are stuck on the phone, which provokes rage.
  • ...6 more annotations...
  • Mr. Valenti, like several other tech support workers who have posted confessions online, said rudeness generally gets customers placed on hold for long periods or “accidentally” disconnected. It also may result in the agent fixing the immediate problem but not the root cause.
  • Don’t bother demanding to speak to a supervisor, either. You’re just going to get transferred to another agent who has been alerted ahead of time that you have come unhinged,
  • Customer support experts recommended using social media, like tweeting or sending a Facebook message, to contact a company instead of calling.
  • To get better service by phone, dial the prompt designated for “sales” or “to place an order,” which almost always gets you an onshore agent, while tech support is usually offshore with the associated language difficulties.
  • You can also consult websites like DialAHuman.com and GetHuman.com for phone numbers and directions on what digits to press to bypass the automated system and get a live person.
  • Failing that, apps like Lucy Phone and Fast Customer will wait on hold for you and call you when an actual person picks up. No need to stoke your rage listening to grating hold music.
Javier E

The Scoreboards Where You Can't See Your Score - NYTimes.com - 0 views

  • The characters in Gary Shteyngart’s novel “Super Sad True Love Story” inhabit a continuously surveilled and scored society.
  • Consider the protagonist, Lenny Abramov, age 39. A digital dossier about him accumulates his every health condition (high cholesterol, depression), liability (mortgage: $560,330), purchase (“bound, printed, nonstreaming media artifact”), tendency (“heterosexual, nonathletic, nonautomotive, nonreligious”) and probability (“life span estimated at 83”). And that profile is available for perusal by employers, friends and even strangers in bars.
  • Even before the appearance of these books, a report called “The Scoring of America” by the World Privacy Forum showed how analytics companies now offer categorization services like “churn scores,” which aim to predict which customers are likely to forsake their mobile phone carrier or cable TV provider for another company; “job security scores,” which factor a person’s risk of unemployment into calculations of his or her ability to pay back a loan; “charitable donor scores,” which foundations use to identify the households likeliest to make large donations; and “frailty scores,” which are typically used to predict the risk of medical complications and death in elderly patients who have surgery.
  • ...12 more annotations...
  • In two nonfiction books, scheduled to be published in January, technology experts examine similar consumer-ranking techniques already in widespread use.
  • While a federal law called the Fair Credit Reporting Act requires consumer reporting agencies to provide individuals with copies of their credit reports on request, many other companies are free to keep their proprietary consumer scores to themselves.
  • Befitting the founder of a firm that markets reputation management, Mr. Fertik contends that individuals have some power to influence commercial scoring systems.
  • “This will happen whether or not you want to participate, and these scores will be used by others to make major decisions about your life, such as whether to hire, insure, or even date you,”
  • “Important corporate actors have unprecedented knowledge of the minutiae of our daily lives,” he writes in “The Black Box Society: The Secret Algorithms That Control Money and Information” (Harvard University Press), “while we know little to nothing about how they use this knowledge to influence important decisions that we — and they — make.”
  • Data brokers amass dossiers with thousands of details about individual consumers, like age, religion, ethnicity, profession, mortgage size, social networks, estimated income and health concerns such as impotence and irritable bowel syndrome. Then analytics engines can compare patterns in those variables against computer forecasting models. Algorithms are used to assign consumers scores — and to recommend offering, or withholding, particular products, services or fees — based on predictions about their behavior. (A toy version of such a score is sketched after these notes.)
  • It’s a fictional forecast of a data-deterministic culture in which computer algorithms constantly analyze consumers’ profiles, issuing individuals numeric rankings that may benefit or hinder them.
  • Think of this technique as reputation engine optimization. If an algorithm incorrectly pegs you as physically unfit, for instance, the book suggests that you can try to mitigate the wrong. You can buy a Fitbit fitness tracker, for instance, and upload the exercise data to a public profile — or even “snap that Fitbit to your dog” and “you’ll quickly be the fittest person in your town.”
  • Professor Pasquale offers a more downbeat reading. Companies, he says, are using such a wide variety of numerical rating systems that it would be impossible for average people to significantly influence their scores.
  • “Corporations depend on automated judgments that may be wrong, biased or destructive,” Professor Pasquale writes. “Faulty data, invalid assumptions and defective models can’t be corrected when they are hidden.”
  • Moreover, trying to influence scoring systems could backfire. If a person attached a fitness device to a dog and tried to claim the resulting exercise log, he suggests, an algorithm might be able to tell the difference and issue that person a high score for propensity toward fraudulent activity.
  • “People shouldn’t think they can outwit corporations with hundreds of millions of dollars,” Professor Pasquale said in a phone interview. Consumers would have more control, he argues, if Congress extended the right to see and correct credit reports to other kinds of rankings.
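The “churn score” and similar rankings described in these notes are, at bottom, predictive models that turn a handful of recorded behaviors into a number. The sketch below is a toy illustration in Python of how such a score could be computed; the feature names, weights, and customers are all invented for this example and do not describe any real broker's model.

```python
# Toy "churn score": a logistic model that converts a few invented customer
# attributes into a 0-100 number. All features and weights are made up.
import math

def churn_score(months_as_customer, support_calls_last_90d, late_payments, on_promo_plan):
    # Invented weights: frequent support calls and late payments push the score
    # up; long tenure and a promotional plan pull it down.
    z = (0.8 * support_calls_last_90d
         + 0.6 * late_payments
         - 0.05 * months_as_customer
         - 0.4 * (1 if on_promo_plan else 0))
    probability = 1 / (1 + math.exp(-z))   # logistic link: estimated chance of leaving
    return round(100 * probability)        # rescaled to a 0-100 "score"

if __name__ == "__main__":
    # Two hypothetical customers: a long-tenured quiet one and a frustrated new one.
    print(churn_score(48, 0, 0, True))     # low score
    print(churn_score(3, 4, 2, False))     # high score
```

Professor Pasquale's point holds even for a model this small: the person being scored never sees the inputs or the weights, so a wrong value in any field is hard to detect, let alone correct.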
Javier E

Opinion | How to Serve a Deranged Tyrant, Stoically - The New York Times - 0 views

  • In A.D. 49, Seneca, the well-known writer and Stoic philosopher, was recalled from exile to tutor the successor of the emperor Claudius, a promising teenager named Nero. Like many people today, Seneca entered public service with ideals tempered by a pragmatic understanding of the political realities of his time.
  • Seneca, by contrast, had no hope that he could achieve anything by direct opposition to any of the emperors under whom he lived. His best hope was to moderate some of Nero’s worst tendencies and to maximize his own sense of autonomy.
  • Though Nero had good qualities, he was obsessed with fame and had an endless need for validation. He was also unstable and paranoid, and began to eliminate his rivals — including murdering his own mother. Was Seneca personally involved in these decisions? We don’t know. But he helped legitimize the regime with his presence, and profited from it as well, becoming one of Rome’s richest men through his 13 years of service.
  • ...6 more annotations...
  • To the Stoics, contributing to public affairs was a critical duty of the philosopher. Could Seneca decline to serve because he disagreed with the emperor? Could he leave a deranged Nero unsupervised? In time, Seneca would also come to the conclusion that when “the state is so rotten as to be past helping, if evil has entire dominion over it, the wise man will not labor in vain or waste his strength in unprofitable efforts.”
  • My own early career involved some questionable service to businesspeople. Employed and paid by them, I planned and carried out controversial publicity stunts, and used dishonest tactics with the public and the media. When I finally left those roles, I found a knowledge of Stoic philosophy integral to my ability to assess my past actions, and set a more honorable course going forward.
  • Seneca seemed to realize only belatedly that one can contribute to his fellow citizens in ways other than through the state — for instance, by writing or simply by being a good man at home.
  • In 65 A.D., Seneca would again find that philosophy did not exist only in the ethereal world. Conspirators began to plot against Nero’s life, and Seneca, finally accepting that the monster he had helped create needed to be stopped, appears to have participated — or covered for those who did.
  • The effort failed but provided Seneca an opportunity: His life up to that point had contradicted many of his own teachings, but now when Nero’s guards came and demanded his life, he would be brave and wise. The man who had written much about learning how to die and facing the end without fear would comfort his friends, finish an essay he was writing and distribute some finished pieces for safekeeping. Then, he slit his veins, took hemlock and succumbed to the suffocating steam of a bath.
Javier E

How Calls for Privacy May Upend Business for Facebook and Google - The New York Times - 0 views

  • People detailed their interests and obsessions on Facebook and Google, generating a river of data that could be collected and harnessed for advertising. The companies became very rich. Users seemed happy. Privacy was deemed obsolete, like bloodletting and milkmen
  • It has been many months of allegations and arguments that the internet in general and social media in particular are pulling society down instead of lifting it up.
  • That has inspired a good deal of debate about more restrictive futures for Facebook and Google. At the furthest extreme, some dream of the companies becoming public utilities.
  • ...20 more annotations...
  • There are other avenues still, said Jascha Kaykas-Wolff, the chief marketing officer of Mozilla, the nonprofit organization behind the popular Firefox browser, including advertisers and large tech platforms collecting vastly less user data and still effectively customizing ads to consumers.
  • The greatest likelihood is that the internet companies, frightened by the tumult, will accept a few more rules and work a little harder for transparency.
  • The Cambridge Analytica case, said Vera Jourova, the European Union commissioner for justice, consumers and gender equality, was not just a breach of private data. “This is much more serious, because here we witness the threat to democracy, to democratic plurality,” she said.
  • Although many people had a general understanding that free online services used their personal details to customize the ads they saw, the latest controversy starkly exposed the machinery.
  • Consumers’ seemingly benign activities — their likes — could be used to covertly categorize and influence their behavior. And not just by unknown third parties. Facebook itself has worked directly with presidential campaigns on ad targeting, describing its services in a company case study as “influencing voters.”
  • “If your personal information can help sway elections, which affects everyone’s life and societal well-being, maybe privacy does matter after all.”
  • some trade group executives also warned that any attempt to curb the use of consumer data would put the business model of the ad-supported internet at risk.
  • “You’re undermining a fundamental concept in advertising: reaching consumers who are interested in a particular product,”
  • If suspicion of Facebook and Google is a relatively new feeling in the United States, it has been embedded in Europe for historical and cultural reasons that date back to the Nazi Gestapo, the Soviet occupation of Eastern Europe and the Cold War.
  • “We’re at an inflection point, when the great wave of optimism about tech is giving way to growing alarm,” said Heather Grabbe, director of the Open Society European Policy Institute. “This is the moment when Europeans turn to the state for protection and answers, and are less likely than Americans to rely on the market to sort out imbalances.”
  • In May, the European Union is instituting a comprehensive new privacy law, called the General Data Protection Regulation. The new rules treat personal data as proprietary, owned by an individual, and any use of that data must be accompanied by permission — opting in rather than opting out — after receiving a request written in clear language, not legalese.
  • The protection rules will have more teeth than the current 1995 directive. For example, a company experiencing a data breach involving individuals must notify the data protection authority within 72 hours and would be subject to fines of up to 20 million euros or 4 percent of its annual revenue. (That ceiling is worked through in the short sketch after these notes.)
  • “With the new European law, regulators for the first time have real enforcement tools,” said Jeffrey Chester, the executive director of the Center for Digital Democracy, a nonprofit group in Washington. “We now have a way to hold these companies accountable.”
  • Privacy advocates and even some United States regulators have long been concerned about the ability of online services to track consumers and make inferences about their financial status, health concerns and other intimate details to show them behavior-based ads. They warned that such microtargeting could unfairly categorize or exclude certain people.
  • The Do Not Track effort and the privacy bill were both stymied. Industry groups successfully argued that collecting personal details posed no harm to consumers and that efforts to hinder data collection would chill innovation.
  • “If it can be shown that the current situation is actually a market failure and not an individual-company failure, then there’s a case to be made for federal regulation” under certain circumstances
  • The business practices of Facebook and Google were reinforced by the fact that no privacy flap lasted longer than a news cycle or two. Nor did people flee for other services. That convinced the companies that digital privacy was a dead issue.
  • If the current furor dies down without meaningful change, critics worry that the problems might become even more entrenched. When the tech industry follows its natural impulses, it becomes even less transparent.
  • “To know the real interaction between populism and Facebook, you need to give much more access to researchers, not less,” said Paul-Jasper Dittrich, a German research fellow
  • There’s another reason Silicon Valley tends to be reluctant to share information about what it is doing. It believes so deeply in itself that it does not even think there is a need for discussion. The technology world’s remedy for any problem is always more technology
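One concrete number in the notes above is the penalty ceiling in the new European rules: up to 20 million euros or 4 percent of annual revenue. Here is a minimal sketch of that arithmetic, assuming the commonly cited reading that the applicable cap is whichever figure is greater.

```python
# Maximum fine under the new EU privacy rules, assuming the cap is the greater
# of a fixed amount and a share of annual revenue (figures from the notes above).
def max_fine_eur(annual_revenue_eur, fixed_cap_eur=20_000_000, revenue_share=0.04):
    return max(fixed_cap_eur, revenue_share * annual_revenue_eur)

# For a small firm the 20-million-euro floor binds; for a company with 40 billion
# euros in revenue the ceiling is 1.6 billion, which is why regulators now have teeth.
print(f"{max_fine_eur(5_000_000):,.0f}")          # -> 20,000,000
print(f"{max_fine_eur(40_000_000_000):,.0f}")     # -> 1,600,000,000
```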
dicindioha

How a Nation Reconciles After Genocide Killed Nearly a Million People - The New York Times - 2 views

  • Scenes like this one were playing out across Rwanda on this Saturday — a monthly day of service known as Umuganda. The premise is simple and extraordinary in its efficient enforcement: Every able-bodied Rwandan citizen between the ages of 18 and 65 must take part in community service for three hours once a month. The community identifies a new public works problem to tackle each month.
  • “We never had Umuganda before the genocide,”
  • Though the genocide ended a year before Mr. Kwizera was born, it is deeply ingrained in the lives of even the youngest Rwandans.
  • ...8 more annotations...
  • reconciliation, development and social control
  • thousands in the country’s Hutu ethnic majority unleashed unspeakable violence on the Tutsi minority and moderate Hutu countrymen who refused to take part in the slaughter. In just 100 days, nearly one million people perished.
  • Many political analysts and human rights groups say Mr. Kagame has created a nation that is orderly but repressive. Laws banning so-called genocidal ideology that were adopted to deter a resurgence of sectarian or hate speech are also used to squelch even legitimate criticism of the government.
  • Like others in this generation, who have been taught from their earliest school days to suppress any sense of ethnic identity, he considers himself simply Rwandan.
  • “That was the fault of the then government that pushed us to kill Tutsis,” he said, his eyes gazing steadily ahead as he echoed a sentiment heard throughout the community from both perpetrators and survivors. “We massacred them, killed and ate their cows. I offended them gravely.”
  • Mr. Sendegeya re-entered society through a program that allows perpetrators to be released if they seek forgiveness from their victims.
  • But the healing process has taken years.
  • She has taught them about the history of the genocide, and she said that they knew the role that Mr. Sendegeya had played in killing members of their family, but that they had never feared him.
  •  
    I think it's really interesting that people across the country are required to do community service, and I think that's a good way to bring people together. It shows how, after a mass killing, a country can at least force people to reflect on history and on how to prevent more deaths. Germany has done this, but we talked last year about how the US hasn't really done that with slavery as a community, and we could really benefit from it. On the flip side, people are still forced to reject their ethnicity, so the solutions come with problems of their own.
ilanaprincilus06

Cost Of Her Usual Pain Shot Rose From $30 To $300 Thanks To 'Facility Fee' : Shots - He... - 0 views

  • $1,394, including a $1,262 facility fee listed as "operating room services." The balance included a clinic charge and a pharmacy charge. Lee's portion of the bill was $354.68.
  • Lee owed more than 10 times what she had paid for the same procedure done before by the same physician, Dr. Elisabeth Roter.
  • Lee says it was the "same talking, same injection — same time."
  • ...10 more annotations...
  • "This is a senior citizen for whom English is not her first language. She doesn't have the resources to fight this,"
  • Nonetheless, that slight location change allowed the hospital system to bill what's called a "facility fee," laid out on Lee's bill as "operating room services."
  • comes without warning, as hospitals are not required to inform patients of it ahead of time.
  • But she's worried her mom will delay getting the shots now, putting up with the pain longer, as she knows they are more expensive.
  • as more private practices have been bought by hospitals and facility fees are tacked on to their charges.
  • Ohio, where Lee lives, is considering legislation that would prohibit facility fees for telehealth services.
  • it's difficult to fight powerful hospital lobbyists in a pandemic political climate, where hospitals are considered heroic.
  • "Even if it was a lot of money for services properly rendered, then of course she would pay it. But that's not the case here."
  • "Facility fees are designed by hospitals in particular to grab more revenue from the weakest party in health care: namely, the individual patient,"
  • Ask outright if there will be a facility fee — and how much — even if there has not been one before.
caelengrubb

Free Market - Econlib - 0 views

  • “Free market” is a summary term for an array of exchanges that take place in society.
  • Each exchange is undertaken as a voluntary agreement between two people or between groups of people represented by agents. These two individuals (or agents) exchange two economic goods, either tangible commodities or nontangible services
  • Both parties undertake the exchange because each expects to gain from it. Also, each will repeat the exchange next time (or refuse to) because his expectation has proved correct (or incorrect) in the recent past.
  • ...25 more annotations...
  • Trade, or exchange, is engaged in precisely because both parties benefit; if they did not expect to gain, they would not agree to the exchange.
  • This simple reasoning refutes the argument against free trade typical of the “mercantilist” period of sixteenth- to eighteenth-century Europe and classically expounded by the famed sixteenth-century French essayist Montaigne.
  • The mercantilists argued that in any trade, one party can benefit only at the expense of the other—that in every transaction there is a winner and a loser, an “exploiter” and an “exploited.”
  • We can immediately see the fallacy in this still-popular viewpoint: the willingness and even eagerness to trade means that both parties benefit. In modern game-theory jargon, trade is a win-win situation, a “positive-sum” rather than a “zero-sum” or “negative-sum” game.
  • Each one values the two goods or services differently, and these differences set the scene for an exchange.
  • Two factors determine the terms of any agreement: how much each participant values each good in question, and each participant’s bargaining skills.
  • the market in relation to how favorably buyers evaluate these goods—in shorthand, by the interaction of their supply with the demand for them.
  • On the other hand, given the buyers’ evaluation, or demand, for a good, if the supply increases, each unit of supply—each baseball card or loaf of bread—will fall in value, and therefore the price of the good will fall. The reverse occurs if the supply of the good decreases. (A toy numerical version of this appears after these notes.)
  • The market, then, is not simply an array; it is a highly complex, interacting latticework of exchanges.
  • Production begins with natural resources, and then various forms of machines and capital goods, until finally, goods are sold to the consumer.
  • At each stage of production from natural resource to consumer good, money is voluntarily exchanged for capital goods, labor services, and land resources. At each step of the way, terms of exchanges, or prices, are determined by the voluntary interactions of suppliers and demanders. This market is “free” because choices, at each step, are made freely and voluntarily.
  • The free market and the free price system make goods from around the world available to consumers.
  • Saving and investment can then develop capital goods and increase the productivity and wages of workers, thereby increasing their standard of living.
  • The free competitive market also rewards and stimulates technological innovation that allows the innovator to get a head start in satisfying consumer wants in new and creative ways.
  • Government, in every society, is the only lawful system of coercion. Taxation is a coerced exchange, and the heavier the burden of taxation on production, the more likely it is that economic growth will falter and decline
  • The ultimate in government coercion is socialism.
  • Under socialist central planning the socialist planning board lacks a price system for land or capital goods.
  • Market socialism is, in fact, a contradiction in terms.
  • The fashionable discussion of market socialism often overlooks one crucial aspect of the market: When two goods are exchanged, what is really exchanged is the property titles in those goods.
  • This means that the key to the existence and flourishing of the free market is a society in which the rights and titles of private property are respected, defended, and kept secure.
  • The key to socialism, on the other hand, is government ownership of the means of production, land, and capital goods.
  • Under socialism, therefore, there can be no market in land or capital goods worthy of the name.
  • Some critics of the free market argue that property rights are in conflict with “human” rights. But the critics fail to realize that in a free-market system, every person has a property right over his own person and his own labor and can make free contracts for those services.
  • A common charge against the free-market society is that it institutes “the law of the jungle,” of “dog eat dog,” that it spurns human cooperation for competition and exalts material success as opposed to spiritual values, philosophy, or leisure activities.
  • It is the coercive countries with little or no market activity—the notable examples in the last half of the twentieth century were the communist countries—where the grind of daily existence not only impoverishes people materially but also deadens their spirit.
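The supply-and-demand reasoning in these notes (given buyers' demand, a larger supply lowers the price of each unit) can be made concrete with a toy linear market. The curves and numbers below are invented purely for illustration.

```python
# Toy linear market: the demand price falls as quantity rises, the supply price
# rises with quantity, and the equilibrium is where the two schedules meet.
def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    # demand: p = demand_intercept - demand_slope * q
    # supply: p = supply_intercept + supply_slope * q
    q = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    p = demand_intercept - demand_slope * q
    return q, p

# Baseline market.
q0, p0 = equilibrium(demand_intercept=100, demand_slope=2, supply_intercept=10, supply_slope=1)

# Supply increases (sellers now offer any quantity at a lower asking price):
# quantity traded rises and the price falls, as the excerpt describes.
q1, p1 = equilibrium(demand_intercept=100, demand_slope=2, supply_intercept=1, supply_slope=1)

print(f"before: quantity={q0:.0f}, price={p0:.0f}")   # quantity=30, price=40
print(f"after:  quantity={q1:.0f}, price={p1:.0f}")   # quantity=33, price=34
```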
knudsenlu

You Are Already Living Inside a Computer - The Atlantic - 1 views

  • Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers.
  • Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.
  • And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.
  • ...15 more annotations...
  • Computers already are predominant, human life already takes place mostly within them, and people are satisfied with the results.
  • These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system.
  • Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.
  • Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do.
  • But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior.
  • One such affection is the pleasure of connectivity. You don’t want to be offline. Why would you want your toaster or doorbell to suffer the same fate? Today, computational absorption is an ideal. The ultimate dream is to be online all the time, or at least connected to a computational machine of some kind.
  • “Being a computer” means something different today than in 1950, when Turing proposed the imitation game. Contra the technical prerequisites of artificial intelligence, acting like a computer often involves little more than moving bits of data around, or acting as a controller or actuator. Grill as computer, bike lock as computer, television as computer. An intermediary
  • Or consider doorbells once more. Forget Ring, the doorbell has already retired in favor of the computer. When my kids’ friends visit, they just text a request to come open the door. The doorbell has become computerized without even being connected to an app or to the internet. Call it “disruption” if you must, but doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers, where they can produce new affections.
  • The present status of intelligent machines is more powerful than any future robot apocalypse.
  • Why would anyone ever choose a solution that doesn’t involve computers, when computers are available? Propane tanks and bike locks are still edge cases, but ordinary digital services work similarly: The services people seek out are the ones that allow them to use computers to do things—from finding information to hailing a cab to ordering takeout. This is a feat of aesthetics as much as it is one of business. People choose computers as intermediaries for the sensual delight of using computers, not just as practical, efficient means for solving problems.
  • This is not where anyone thought computing would end up. Early dystopic scenarios cautioned that the computer could become a bureaucrat or a fascist, reducing human behavior to the predetermined capacities of a dumb machine. Or else, that obsessive computer use would be deadening, sucking humans into narcotic detachment. Those fears persist to some extent, partly because they have been somewhat realized. But they have also been inverted. Being away from them now feels deadening, rather than being attached to them without end. And thus, the actions computers take become self-referential: to turn more and more things into computers to prolong that connection.
  • But the real present status of intelligent machines is both humdrum and more powerful than any future robot apocalypse. Turing is often called the father of AI, but he only implied that machines might become compelling enough to inspire interaction. That hardly counts as intelligence, artificial or real. It’s also far easier to achieve. Computers already have persuaded people to move their lives inside of them. The machines didn’t need to make people immortal, or promise to serve their every whim, or to threaten to destroy them absent assent. They just needed to become a sufficient part of everything human beings do such that they can’t—or won’t—imagine doing those things without them.
  • The real threat of computers isn’t that they might overtake and destroy humanity with their future power and intelligence. It’s that they might remain just as ordinary and impotent as they are today, and yet overtake us anyway.
knudsenlu

How World War II Spurred a Veteran's Ambition - The Atlantic - 0 views

  • My first 18 months of military service were uninspiring. Donning the uniform did not fill me with pride, nor did the experience alter my perspective on life. What basic training had taught me was that the best way to get by was to stay out of sight.
  • Accordingly, my detachment expanded its mission: all males—soldiers and others of military serviceable age, no matter where encountered or whether in uniform—were to be taken prisoner unless they had persuasive evidence of either having been exempted or discharged from military service. Anyone without such proof was considered a potential guerilla. A sweep of the countryside would yield scores of German “civilians,” among them soldiers who had simply shed their uniform or party activists suspected of organizing a resistance usually with cover stories that I had to break. Not only did I become very adept at this task, but it also gave me some great insights into postwar German mentalities—insights that would later inspire me to revisit my views on higher education.
  • Wilke and his family had emigrated from Germany in the early 1920s, and he still spoke German with a genuine Saxon dialect.
  • ...4 more annotations...
  • As soon as I was released from the hospital, I worked my way step-by-step from France back to Berlin, the capital of the four occupying powers. After some stumbling blocks and a job I despised, I found a position as a research analyst in the intelligence branch of the military government’s information-control division.
  • This job turned out to be extremely challenging, but that also made it a real blessing. I wrote drafts on a wide range of political topics, including the identification of potential political leaders not yet recognized, a catalog of the rumors circulating among the population, incidents indicative of how people felt about American troops, and the dominant mood among German youth. We gleaned information from reports compiled by field representatives stationed in roughly a dozen communities throughout the American occupation zone, supplemented with details from German newspapers—and, in my case, with insights based on contacts and conversation I had whenever I passed myself off as a German civilian.
  • That is how my war and post-war service induced me, in the fall of 1947, after an interlude of more than five years, to enroll in the University of Chicago as a freshman. I stayed for six years, left with a Ph.D., and ever since enjoyed a long academic career as a sociologist, first specializing in the military and then studying propaganda and the effect of television on politics.
  • World War II spurred my ambition by teaching me how to navigate the army. Those lessons led me to confront the society I had once known so well, and to study politics and people living in a time of upheaval.
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • Entertainment and shopping
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • ...38 more annotations...
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • I don’t think any single company will dominate the agents business--there will be many different AI engines available.
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions. (A bare-bones sketch of such a loop appears after these notes.)
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
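To make the contrast between today's single-app bots and the agents described above more concrete, here is a deliberately tiny agent loop in Python. Everything in it is hypothetical: call_llm is a stand-in for a real language model, the tools are fakes, and the memory is a plain list. It is not any vendor's API, only a sketch of the pattern the notes describe: remember context across requests, decide what to do, then act.

```python
# Toy agent loop: remembers context, picks a tool, acts. call_llm, the tools,
# and the memory format are hypothetical stand-ins, not a real vendor API.

MEMORY = []  # running record of past requests (a real agent would persist this)

def call_llm(prompt: str) -> str:
    """Stand-in for a language-model call; here it just routes on keywords
    in the latest request (the text after the last 'Request:' marker)."""
    latest = prompt.rsplit("Request:", 1)[-1].lower()
    if "flowers" in latest:
        return "use_tool:order_flowers"
    if "movie" in latest:
        return "use_tool:recommend_movie"
    return "reply:I'm not sure how to help with that yet."

def order_flowers(request: str) -> str:
    return "Ordered a bouquet for delivery tomorrow."   # fake action

def recommend_movie(request: str) -> str:
    return "Based on your history, try 'Three Days of the Condor'."   # fake action

TOOLS = {"order_flowers": order_flowers, "recommend_movie": recommend_movie}

def agent(request: str) -> str:
    MEMORY.append(request)                       # the agent accumulates context
    context = " | ".join(MEMORY[-5:])            # last few interactions
    decision = call_llm(f"Context: {context}\nRequest: {request}")
    if decision.startswith("use_tool:"):
        tool_name = decision.split(":", 1)[1]
        return TOOLS[tool_name](request)
    return decision.split(":", 1)[1]

print(agent("My friend just had surgery, send her flowers"))
print(agent("Find me a Robert Redford movie I'd like"))
```

A real agent would swap the keyword routing for an actual model and the fake tools for real integrations, but the shape (memory, decision, action) is what separates the agents in these notes from the one-app bots they replace.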