Future of the Web / Group items tagged "system deleted"

Gonzalo San Gil, PhD.

Uninstall Stop Online Piracy Automatic Protection System Completely From Windows! - Vid... - 1 views

  •  
    [http://www.malware-protection.net/remove-stop-online-piracy-automatic-protection-system-infections-easy-steps-to-delete-stop-online-piracy-automatic-protection-system-effects Stop Online Piracy Automatic Protection System infections spread widely across the internet, and it is best to keep them off your system, as they can be very frustrating. If the infection is already running on your system, the following procedures will help you remove its effects.]
Gonzalo San Gil, PhD.

[# ! #Open #Tech:] How to Recover a Deleted File in Linux - 1 views

  •  
    "Did this ever happen to you? You realized that you had mistakenly deleted a file - either through the Del key, or using rm in the command line."
Gonzalo San Gil, PhD.

Sub Pop artist creates music-streaming site to mock Pandora, Spotify | Ars Technica [# ... - 0 views

  •  
    "On Tuesday, Josh Tillman, the lead singer and songwriter of the band Father John Misty, announced a phony, satirical music-streaming service called Streamline Audio Protocol, or, better put, SAP. ... On the site, Tillman calls his music-delivery system "a new signal-to-audio process by which popular albums are 'sapped' of their performances, original vocal, atmosphere, and other distracting affectations so the consumer can decide quickly and efficiently whether they like a musical composition, based strictly on its formal attributes, enough to spend money on it. ..."
Paul Merrell

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Jus... - 0 views

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interferes with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
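The distinction drawn above, device encryption protecting data at rest versus end-to-end encryption protecting data in transit, can be made concrete with a toy sketch. This is not any vendor's actual scheme, and the XOR keystream below is not a real cipher (real devices use AES-based schemes); it only illustrates the concept that without the passcode, the bytes stored on the device are opaque.

```python
# Toy illustration of at-rest (device) encryption: stored data is
# unreadable without a key derived from the user's passcode.
# NOT a real cipher -- illustration only.
import hashlib

def keystream(passcode, salt, n):
    """Derive n pseudo-random bytes from the passcode (toy KDF)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(passcode + salt + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data, passcode, salt):
    """Encrypts and decrypts (XOR with the keystream is its own inverse)."""
    ks = keystream(passcode, salt, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"contacts, photos, messages"
stored = xor_crypt(plaintext, b"1234", b"device-salt")
assert stored != plaintext                                  # opaque at rest
assert xor_crypt(stored, b"1234", b"device-salt") == plaintext  # passcode recovers it
```

A backdoor in this picture is any second key or escrow database that also decrypts `stored`, which is exactly the added attack surface the author describes.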
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study, where the company spread 50 “lost” phones in public to see what people who picked up the phones would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted contacting the owner.
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay ensures some period of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities and to conduct surveillance. Clearly, the reality is that the threat of such a breach, whether from a hacker or a nation state actor, is very real. Even if companies go the extra mile and create a different means of access for every phone, such as a separate access key for each phone, significant vulnerabilities will be created. It would still be possible for a malicious actor to gain access to the database containing those keys, which would enable them to defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementation and maintenance of such a complex system could be high.
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations that we once had through disposable letters and in-person conversations now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported the idea that court-ordered decryption doesn’t violate human rights, provided certain criteria are met, in his report on the topic. However, in the context of Vance’s proposal, this seems to conflate the concepts of court-ordered decryption and of government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology, concluded that: [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous.
    Unaddressed Concerns: Preventing Encrypted Devices from Entering the US and the Slippery Slope
    In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country — encrypted phones aren’t immediately identifiable the way contraband is. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology will be commonplace in 10 or 20 years that is not even around today? There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like mere science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but is rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.
Paul Merrell

Microsoft Ordered to Delete Browser - NYTimes.com - 0 views

  • BRUSSELS (AP) — The European Union said Friday that Microsoft’s practice of selling the Internet Explorer browser together with its Windows operating system violated the union’s antitrust rules. It ordered the software giant to untie the browser from its operating system in the 27-nation union, enabling makers of rival browsers to compete fairly.
  •  
    The Times goes farther than the DG Competition announcement, saying that Microsoft has been ordered to untie MSIE from Windows throughout the E.U. No source is attributed for the statement. The DG Competition announcement does not state what remedy it proposes to order. So take this report with a grain of salt. The Times is well capable of error.
Paul Merrell

As Belgium threatens fines, Facebook's defence of tracking visitors rings hollow | nsnb... - 0 views

  • Facebook has been ordered by a Belgian court to stop tracking non-Facebook users when they visit the Facebook site. Facebook has been given 48 hours to stop the tracking or face possible fines of up to 250,000 Euro a day.
  • Facebook has said that it will appeal the ruling, claiming that since their European headquarters are situated in Ireland, they should only be bound by the Irish Data Protection Regulator. Facebook’s chief of security Alex Stamos has posted an explanation about why non-Facebook users are tracked when they visit the site. The tracking issue centres around the creation of a “cookie” called “datr” whenever anyone visits a Facebook page. This cookie contains an identification number that identifies the same browser returning each time to different Facebook pages. Once created, the cookie will last 2 years unless the user explicitly deletes it. The cookie is created for all visitors to Facebook, irrespective of whether they are a Facebook user or even whether they are logged into Facebook at the time. According to Stamos, the measure is needed to:
    - Prevent the creation of fake and spammy accounts
    - Reduce the risk of someone’s account being taken over by someone else
    - Protect people’s content from being stolen
    - Stop denial of service attacks against Facebook
  • The principle behind this is that if you can identify requests that arrive at the site for whatever reason, abnormal patterns may unmask people creating fake accounts, hijacking a real account or just issuing so many requests that it overwhelms the site. Stamos’ defence of tracking users is that they have been using it for the past 5 years and nobody had complained until now, that it was common practice and that there was little harm because the data was not collected for any purpose other than security. The dilemma raised by Facebook’s actions is a common one in the conflicting spheres of maintaining privacy and maintaining security. It is obvious that if you can identify all visitors to a site, then it is possible to determine more information about what they are doing than if they were anonymous. The problem with this from a moral perspective is that everyone is being tagged, irrespective of whether their intent was going to be malicious or not. It is essentially compromising the privacy of the vast majority for the sake of a much smaller likelihood of bad behaviour.
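The "datr" mechanism described above, a long-lived server-set cookie that re-identifies the same browser on every return visit, can be sketched in a few lines of standard-library Python. The cookie name and two-year lifetime come from the article; the helper function and its minting logic are illustrative, not Facebook's actual implementation.

```python
# Sketch of a "datr"-style identifier cookie (illustrative only).
import uuid
from http.cookies import SimpleCookie

TWO_YEARS = 2 * 365 * 24 * 3600  # seconds

def identify_browser(request_cookie_header):
    """Return (browser_id, set_cookie_header_or_None).

    If the browser already presented a 'datr'-style cookie, reuse its id
    and send no new cookie; otherwise mint an id and a Set-Cookie value.
    """
    jar = SimpleCookie(request_cookie_header or "")
    if "datr" in jar:
        return jar["datr"].value, None          # returning browser, re-identified
    new_id = uuid.uuid4().hex                   # fresh identifier
    out = SimpleCookie()
    out["datr"] = new_id
    out["datr"]["max-age"] = TWO_YEARS          # persists ~2 years
    return new_id, out.output(header="").strip()

# First visit: no cookie yet, so an id is minted and a header returned.
first_id, header = identify_browser("")
# Later visit: the browser replays the cookie and gets the same id back.
second_id, _ = identify_browser("datr=" + first_id)
assert first_id == second_id
```

The privacy tension the article describes falls out directly: the same identifier that lets the server spot abnormal request patterns also links every visit by that browser, logged in or not.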
  •  
    I checked and sure enough: five Facebook cookies even though I have no Facebook account. They're gone now, and I've created an exception blocking Facebook from planting more cookies on my systems. 
Paul Merrell

Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies - 1 views

  • On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me. At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division. At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
  • “Realtime Crowd Insights” is an Application Programming Interface (API), or a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — applications send an image to Microsoft’s servers. Microsoft’s servers then analyze the faces and return emotional profiles for each one. In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.” Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies.
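The request/response flow described above, send an image, get back per-face data, can be sketched from the consumer's side. The JSON shape below (serial number, age, gender, emotion scores) is an assumption modeled on the article's description, not Microsoft's documented schema.

```python
# Sketch of consuming a face-analysis response of the kind described
# above. The payload shape is assumed, not an official schema.
import json

def dominant_emotions(payload):
    """Map each detected face's serial number to its strongest emotion."""
    faces = json.loads(payload)
    return {
        face["id"]: max(face["scores"], key=face["scores"].get)
        for face in faces
    }

# One face, matching the article's example of a serial-numbered visitor.
sample = json.dumps([
    {"id": "b2ff", "age": 23, "gender": "male",
     "scores": {"happiness": 0.7, "neutral": 0.2, "surprise": 0.1}},
])
print(dominant_emotions(sample))   # {'b2ff': 'happiness'}
```

Note that nothing in such a response is a name; the serial number is what allows the same face to be re-identified across frames, which is the characterization-versus-recognition distinction the article goes on to draw.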
  • Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the 100,000 attendees to a database of the faces of known criminals. The technology is controversial enough that in 2013, Google tried to ban the use of facial recognition apps in its Google Glass system. But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face that it could continuously identify it with the same serial number, even hours later. The display demonstrated that capability by distinguishing between the number of total faces it had seen, and the number of unique serial numbers.
  • Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them. But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
  • Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia. “It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?” Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question. Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
  •  
    But nobody is saying that the output of this technology can't be combined with the output of facial recognition technology to let them monitor you individually AND track your emotions. Fortunately, others are fighting back with knowledge and tech to block facial recognition. http://goo.gl/JMQM2W
Paul Merrell

Blink! Google Is Forking WebKit - Slashdot - 0 views

  • "In a blog post titled Blink: A rendering engine for the Chromium project, Google has announced that Chromium (the open source backend for Chrome) will be switching to Blink, a new WebKit-based web rendering engine. Quoting: 'Chromium uses a different multi-process architecture than other WebKit-based browsers, and supporting multiple architectures over the years has led to increasing complexity for both the WebKit and Chromium projects. This has slowed down the collective pace of innovation... This was not an easy decision. We know that the introduction of a new rendering engine can have significant implications for the web. Nevertheless, we believe that having multiple rendering engines—similar to having multiple browsers—will spur innovation and over time improve the health of the entire open web ecosystem. ... In the short term, Blink will bring little change for web developers. The bulk of the initial work will focus on internal architectural improvements and a simplification of the codebase. For example, we anticipate that we’ll be able to remove 7 build systems and delete more than 7,000 files—comprising more than 4.5 million lines—right off the bat. Over the long term a healthier codebase leads to more stability and fewer bugs.'"
Paul Merrell

EFF to Court: Don't Undermine Legal Protections for Online Platforms that Enable Free S... - 0 views

  • EFF filed a brief in federal court arguing that a lower court’s ruling jeopardizes the online platforms that make the Internet a robust platform for users’ free speech. The brief, filed in the U.S. Court of Appeals for the Ninth Circuit, argues that 47 U.S.C. § 230, enacted as part of the Communications Decency Act (known simply as “Section 230”) broadly protects online platforms, including review websites, when they aggregate or otherwise edit users’ posts. Generally, Section 230 provides legal immunity for online intermediaries that host or republish speech by protecting them against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. Section 230’s immunity directly led to the development of the platforms everyone uses today, allowing people to upload videos to their favorite platforms such as YouTube, as well as leave reviews on Amazon or Yelp. It also incentivizes the creation of new platforms that can host users’ content, leading to more innovation that enables the robust free speech found online. The lower court’s decision in Consumer Cellular v. ConsumerAffairs.com, however, threatens to undermine the broad protections of Section 230, EFF’s brief argues.
  • In the case, Consumer Cellular alleged, among other things, that ConsumerAffairs.com should be held liable for aggregating negative reviews about its business into a star rating. It also alleged that ConsumerAffairs.com edited or otherwise deleted certain reviews of Consumer Cellular in bad faith. Courts and the text of Section 230, however, plainly allow platforms to edit or aggregate user-generated content into summaries or star ratings without incurring legal liability, EFF’s brief argues. It goes on: “And any function protected by Section 230 remains so regardless of the publisher’s intent.” By allowing Consumer Cellular’s claims against ConsumerAffairs.com to proceed, the lower court seriously undercut Section 230’s legal immunity for online platforms. If the decision is allowed to stand, EFF’s brief argues, then platforms may take steps to further censor or otherwise restrict user content out of fear of being held liable. That outcome, EFF warns, could seriously diminish the Internet’s ability to serve as a diverse forum for free speech. The Internet is constructed of and depends upon intermediaries. The many varied online intermediary platforms, including Twitter, Reddit, YouTube, and Instagram, all give a single person, with minimal resources, almost anywhere in the world the ability to communicate with the rest of the world. Without intermediaries, that speaker would need technical skill and money that most people lack to disseminate their message. If our legal system fails to robustly protect intermediaries, it fails to protect free speech online.
Paul Merrell

A Secret Catalogue of Government Gear for Spying on Your Cellphone - 0 views

  • The Intercept has obtained a secret, internal U.S. government catalogue of dozens of cellphone surveillance devices used by the military and by intelligence agencies. The document, thick with previously undisclosed information, also offers rare insight into the spying capabilities of federal law enforcement and local police inside the United States. The catalogue includes details on the Stingray, a well-known brand of surveillance gear, as well as Boeing “dirt boxes” and dozens of more obscure devices that can be mounted on vehicles, drones, and piloted aircraft. Some are designed to be used at static locations, while others can be discreetly carried by an individual. They have names like Cyberhawk, Yellowstone, Blackfin, Maximus, Cyclone, and Spartacus. Within the catalogue, the NSA is listed as the vendor of one device, while another was developed for use by the CIA, and another was developed for a special forces requirement. Nearly a third of the entries focus on equipment that seems to have never been described in public before.
  • The Intercept obtained the catalogue from a source within the intelligence community concerned about the militarization of domestic law enforcement. (The original is here.) A few of the devices can house a “target list” of as many as 10,000 unique phone identifiers. Most can be used to geolocate people, but the documents indicate that some have more advanced capabilities, like eavesdropping on calls and spying on SMS messages. Two systems, apparently designed for use on captured phones, are touted as having the ability to extract media files, address books, and notes, and one can retrieve deleted text messages. Above all, the catalogue represents a trove of details on surveillance devices developed for military and intelligence purposes but increasingly used by law enforcement agencies to spy on people and convict them of crimes. The mass shooting earlier this month in San Bernardino, California, which President Barack Obama has called “an act of terrorism,” prompted calls for state and local police forces to beef up their counterterrorism capabilities, a process that has historically involved adapting military technologies to civilian use. Meanwhile, civil liberties advocates and others are increasingly alarmed about how cellphone surveillance devices are used domestically and have called for a more open and informed debate about the trade-off between security and privacy — despite a virtual blackout by the federal government on any information about the specific capabilities of the gear.
  • “We’ve seen a trend in the years since 9/11 to bring sophisticated surveillance technologies that were originally designed for military use — like Stingrays or drones or biometrics — back home to the United States,” said Jennifer Lynch, a senior staff attorney at the Electronic Frontier Foundation, which has waged a legal battle challenging the use of cellphone surveillance devices domestically. “But using these technologies for domestic law enforcement purposes raises a host of issues that are different from a military context.”
  • MANY OF THE DEVICES in the catalogue, including the Stingrays and dirt boxes, are cell-site simulators, which operate by mimicking the towers of major telecom companies like Verizon, AT&T, and T-Mobile. When someone’s phone connects to the spoofed network, it transmits a unique identification code and, through the characteristics of its radio signals when they reach the receiver, information about the phone’s location. There are also indications that cell-site simulators may be able to monitor calls and text messages. In the catalogue, each device is listed with guidelines about how its use must be approved; the answer is usually via the “Ground Force Commander” or under one of two titles in the U.S. code governing military and intelligence operations, including covert action.
  • But domestically the devices have been used in a way that violates the constitutional rights of citizens, including the Fourth Amendment prohibition on illegal search and seizure, critics like Lynch say. They have regularly been used without warrants, or with warrants that critics call overly broad. Judges and civil liberties groups alike have complained that the devices are used without full disclosure of how they work, even within court proceedings.
Paul Merrell

Google Is Constantly Tracking, Even If You Turn Off Device 'Location History' | Zero Hedge - 1 views

  • In but the latest in a continuing saga of big tech tracking and surveillance stories which should serve to convince us all we are living in the beginning phases of a Minority Report style tracking and pansophical "pre-crime" system, it's now confirmed that the world's most powerful tech company and search tool will always find a way to keep your location data. The Associated Press sought the help of Princeton researchers to prove that while Google is clear and upfront about giving App users the ability to turn off or "pause" Location History on their devices, there are other hidden means through which it retains the data.
  • According to the AP report: Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.” That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account. The issue directly affects around two billion people using Google's Android operating software and iPhone users relying on Google maps or a simple search. Among the computer science researchers at Princeton conducting the tests is Jonathan Mayer, who told the AP, “If you’re going to allow users to turn off something called ‘Location History,’ then all the places where you maintain location history should be turned off,” and added, “That seems like a pretty straightforward position to have.”
Paul Merrell

Sick Of Facebook? Read This. - 2 views

  • In 2012, The Guardian reported on Facebook’s arbitrary and ridiculous nudity and violence guidelines which allow images of crushed limbs but – dear god spare us the image of a woman breastfeeding. Still, people stayed – and Facebook grew. In 2014, Facebook admitted to mind control games via positive or negative emotional content tests on unknowing and unwilling platform users. Still, people stayed – and Facebook grew. Following the 2016 election, Facebook responded to the Harpie shrieks from the corporate Democrats by setting up a so-called “fake news” task force to weed out those dastardly commies (or socialists or anarchists or leftists or libertarians or dissidents or…). And since then, I’ve watched my reach on Facebook drain like water in a bathtub – hard to notice at first and then a spastic swirl while people bicker about how to plug the drain. And still, we stayed – and the censorship tightened. Roughly a year ago, my show Act Out! reported on both the censorship we were experiencing but also the cramped filter bubbling that Facebook employs in order to keep the undesirables out of everyone’s news feed. Still, I stayed – and the censorship tightened. 2017 into 2018 saw more and more activist organizers, particularly black and brown, thrown into Facebook jail for questioning systemic violence and demanding better. In August, puss bag ass hat in a human suit Alex Jones was banned from Facebook – YouTube, Apple and Twitter followed suit shortly thereafter. Some folks celebrated. Some others of us skipped the party because we could feel what was coming.
  • On Thursday, October 11th of this year, Facebook purged more than 800 pages including The Anti-Media, Police the Police, Free Thought Project and many other social justice and alternative media pages. Their explanation rested on the painfully flimsy foundation of “inauthentic behavior.” Meanwhile, their fake-news checking team is stacked with the likes of the Atlantic Council and the Weekly Standard, neocon junk organizations that peddle such drivel as “The Character Assassination of Brett Kavanaugh.” Soon after, on the Monday before the Midterm elections, Facebook blocked another 115 accounts citing once again, “inauthentic behavior.” Then, in mid November, a massive New York Times piece chronicled Facebook’s long road to not only save its image amid rising authoritarian behavior, but “to discredit activist protesters, in part by linking them to the liberal financier George Soros.” (I consistently find myself waiting for those Soros and Putin checks in the mail that just never appear.)
  • What we need is an open source, non-surveillance platform. And right now, that platform is Minds. Before you ask, I’m not being paid to write that.
  • Fashioned as an alternative to the closed and creepy Facebook behemoth, Minds advertises itself as “an open source and decentralized social network for Internet freedom.” Minds prides itself on being hands-off with regards to any content that falls in line with what’s permitted by law, which has elicited critiques from some on the left who say Minds is a safe haven for fascists and right-wing extremists. Yet, Minds co-founder Bill Ottman has himself stated openly that he wants ideas on content moderation and ways to make Minds a better place for social network users as well as radical content creators. What a few fellow journos and I are calling #MindsShift is an important step in not only moving away from our gagged existence on Facebook but in building a social network that can serve up the real news folks are now aching for.
  • To be clear, we aren’t advocating that you delete your Facebook account – unless you want to. For many, Facebook is still an important tool and our goal is to add to the outreach toolkit, not suppress it. We have set January 1st, 2019 as the ultimate date for this #MindsShift. Several outlets with a combined reach of millions of users will be making the move – and asking their readerships/viewerships to move with them. Along with fellow journalists, I am working with Minds to brainstorm new user-friendly functions and ways to make this #MindsShift a loud and powerful move. We ask that you, the reader, add to the conversation by joining the #MindsShift and spreading the word to your friends and family. (Join Minds via this link) We have created the #MindsShift open group on Minds.com so that you can join and offer up suggestions and ideas to make this platform a new home for radical and progressive media.
Paul Merrell

Google confirms that advanced backdoor came preinstalled on Android devices | Ars Technica - 0 views

  • Criminals in 2017 managed to get an advanced backdoor preinstalled on Android devices before they left the factories of manufacturers, Google researchers confirmed on Thursday. Triada first came to light in 2016 in articles published by Kaspersky here and here, the first of which said the malware was "one of the most advanced mobile Trojans" the security firm's analysts had ever encountered. Once installed, Triada's chief purpose was to install apps that could be used to send spam and display ads. It employed an impressive kit of tools, including rooting exploits that bypassed security protections built into Android and the means to modify the Android OS' all-powerful Zygote process. That meant the malware could directly tamper with every installed app. Triada also connected to no fewer than 17 command and control servers. In July 2017, security firm Dr. Web reported that its researchers had found Triada built into the firmware of several Android devices, including the Leagoo M5 Plus, Leagoo M8, Nomu S10, and Nomu S20. The attackers used the backdoor to surreptitiously download and install modules. Because the backdoor was embedded into one of the OS libraries and located in the system section, it couldn't be deleted using standard methods, the report said. On Thursday, Google confirmed the Dr. Web report, although it stopped short of naming the manufacturers. Thursday's report also said the supply chain attack was pulled off by one or more partners the manufacturers used in preparing the final firmware image used in the affected devices.
Paul Merrell

Lessons (So Far) From WhatsApp v. NSO - Lawfare - 0 views

  • NSO Group, an Israeli vendor of “lawful” hacking tools designed to infect a target’s phone with spyware, is regarded by many as a bad actor. The group claims to be shocked when its products are misused, as they have been in Mexico, Saudi Arabia and the United Arab Emirates. One incident might be excusable, but the group’s continued enabling of misbehavior has resulted in well-earned enmity. Recently, Facebook struck back. NSO Group deployed a weaponized exploit for Facebook’s WhatsApp messenger, integrated it into its Pegasus malcode system, and offered it to its customers (a mix of legitimate government agencies and nefarious government actors) interested in hacking WhatsApp users beginning in April. This was a particularly powerful exploit because it required no user interaction and the only sign of the exploit a user might discover would be a series of “missed calls” received on the user’s phone. Facebook patched the vulnerability on May 13, blocking the NSO campaign. Facebook wasn’t satisfied with simply closing the vulnerability. In cooperation with CitizenLab, Facebook identified more than 100 incidents in which NSO Group’s WhatsApp exploit appeared to target human rights activists and journalists. In total, Facebook and CitizenLab identified 1,400 targets (which apparently also included government officials in U.S. allied governments). They then filed a federal lawsuit against NSO Group, closed NSO Group member accounts, and, most damaging of all to NSO’s customers, sent a notice to all identified victims alerting them of the attack. This meant that all targets, both dissidents and drug lords alike, were notified of this surveillance. The lawsuit will be a case to watch. Facebook has already revealed a large amount of detail concerning NSO Group’s internal workings, including the hands-on nature of its business model: NSO Group actively assists countries in hacking targets. For example, we now know that while an NSO Group employee may not press the “Enter” key for a target, NSO employees do act to advise and consult on targeting; and NSO Group is largely responsible for running the infrastructure used to exploit targets and manage implants. Expect more revelations like this as the case proceeds.