Future of the Web / Group items tagged: renders

Gary Edwards

Ajaxian » Making creating DOM-based applications less of a hassle

  • Dojo also has an implementation of the Django templating language, dojox.dtl. This is an extremely powerful template engine that, like ViewsHandler (described below), creates the HTML once, then updates it when the data changes. You simply update the data, call the template.render method, and the HTML is updated - no creating nodes repeatedly, no innerHTML or nodeValue access.
  •  
    ViewsHandler is a framework for JavaScript applications. It is not another JavaScript templating solution; it works on the assumption that in most cases you'll have to create a lot of HTML initially, but you'll only have to change the content of some elements dynamically as new information gets loaded or users interact with the app. So instead of creating a lot of HTML over and over again, all I wanted to provide is a way to create all the needed HTML upfront and then have easy access to the parts of the HTML that need updating. The first thing you'll need to do to define your application is to create an object with the different views and pointers to the methods that populate the views (a rough sketch of that pattern follows below):
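The excerpts above name dojox.dtl and ViewsHandler but do not show either API, so the following is only a minimal plain-JavaScript sketch of the pattern both describe: build the HTML once, keep references to the few nodes that change, and push new data into them rather than regenerating markup. Every identifier here (appViews, render, update) is hypothetical.

```js
// Minimal sketch of the "create the HTML once, update only what changes" pattern.
// These identifiers are illustrative only - not the dojox.dtl or ViewsHandler API.
var appViews = {
  headlines: {
    container: null,
    nodes: {}, // references to the elements that will need updating later
    // Build the markup a single time and remember the nodes that change.
    render: function (parent) {
      this.container = document.createElement("ul");
      this.nodes.first = document.createElement("li");
      this.nodes.count = document.createElement("li");
      this.container.appendChild(this.nodes.first);
      this.container.appendChild(this.nodes.count);
      parent.appendChild(this.container);
    },
    // Called whenever new data arrives: only text content changes, no node re-creation.
    update: function (data) {
      this.nodes.first.textContent = data.items[0] || "";
      this.nodes.count.textContent = data.items.length + " items loaded";
    }
  }
};

// Usage: create all the needed HTML upfront, then push data into it as it changes.
appViews.headlines.render(document.body);
appViews.headlines.update({ items: ["Ajaxian: DOM templating without innerHTML churn"] });
```

The same shape extends to multiple views, which is essentially what the views object described in the excerpt sets up.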
Paul Merrell

Accessible Rich Internet Applications (WAI-ARIA) Version 1.0

  • Accessibility of Web content to people with disabilities requires semantic information about widgets, structures, and behaviors, in order to allow Assistive Technologies to make appropriate transformations. This specification provides an ontology of roles, states, and properties that set out an abstract model for accessible interfaces and can be used to improve the accessibility and interoperability of Web Content and Applications. This information can be mapped to accessibility frameworks that use this information to provide alternative access solutions. Similarly, this information can be used to change the rendering of content dynamically using different style sheet properties. The result is an interoperable method for associating behaviors with document-level markup. This document is part of the WAI-ARIA suite described in the ARIA Overview.
  •  
    New working draft from W3C. See also details of the call for comment: http://lists.w3.org/Archives/Public/w3c-wai-ig/2008JulSep/0034
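As a concrete illustration of the roles, states, and properties the abstract refers to, a script can expose a custom widget to assistive technologies and update its state as the data changes. This is a minimal sketch only; the progress-bar widget and the element id are hypothetical examples, not taken from the specification.

```js
// Minimal sketch: exposing a custom progress widget via WAI-ARIA roles, states, and properties.
// The element id "upload-progress" is a hypothetical example, not from the specification text.
var bar = document.getElementById("upload-progress");
bar.setAttribute("role", "progressbar");       // ARIA role
bar.setAttribute("aria-valuemin", "0");        // ARIA property
bar.setAttribute("aria-valuemax", "100");      // ARIA property
bar.setAttribute("aria-label", "File upload"); // accessible name

function updateProgress(percent) {
  // ARIA state updated dynamically so assistive technologies can report the change.
  bar.setAttribute("aria-valuenow", String(percent));
  bar.style.width = percent + "%";
}

updateProgress(42);
```

A style sheet can then key off the same attributes (for example, an [aria-valuenow] selector) to change the rendering dynamically, which is the interoperability the abstract describes.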
Paul Merrell

IDABC - Revision of the EIF and AG

  • In 2006, the European Commission started the revision of the European Interoperability Framework (EIF) and the Architecture Guidelines (AG).
  • The European Commission has started drafting the EIF v2.0 in close cooperation with the Commission services concerned and with the Member States, as well as with the Candidate Countries and EEA Countries as observers.
  • A draft document from which the final EIF v2.0 will be elaborated was available for external comments until 22 September. The proposal for the new EIF v2.0 that has been subject to consultation is available: [3508 Kb]
  •  
    This planning document forms the basis for the forthcoming work to develop European Interoperability Framework v. 2.0. It is the overview of things to come, so to speak. Well worth the read to see how SOA concepts are evolving at the bleeding edge. But also noteworthy for the faceted expansion in the definition of "interoperability," which now includes: [i] political context; [ii] legal interop; [iii] organizational interop; [iv] semantic interop; and [v] technical interop. A lot of people talk the interop talk; this is a document from people who are walking the interop walk, striving to bring order out of the chaos of incompatible ICT systems across the E.U.
  •  
    Full disclosure: I submitted detailed comments on the draft of the subject document on behalf of the Universal Interoperability Council. One theme of my comments was embraced in this document: the document recognizes human-machine interactions as a facet of interoperability, moving accessibility and usability from sideshow treatment in the draft to part of the technical interop dimension of the plan.
Paul Merrell

Why the Sony hack is unlikely to be the work of North Korea. | Marc's Security Ramblings

  • Everyone seems to be eager to pin the blame for the Sony hack on North Korea. However, I think it’s unlikely. Here’s why: 1. The broken English looks deliberately bad and doesn’t exhibit any of the classic comprehension mistakes you actually expect to see in “Konglish”, i.e., it reads to me like an English speaker pretending to be bad at writing English. 2. The fact that the code was written on a PC with Korean locale & language actually makes it less likely to be North Korea. Not least because they don’t speak traditional “Korean” in North Korea, they speak their own dialect and traditional Korean is forbidden. This is one of the key things that has made communication with North Korean refugees difficult. I would find the presence of Chinese far more plausible.
  • 3. It’s clear from the hard-coded paths and passwords in the malware that whoever wrote it had extensive knowledge of Sony’s internal architecture and access to key passwords. While it’s plausible that an attacker could have built up this knowledge over time and then used it to make the malware, Occam’s razor suggests the simpler explanation of an insider. It also fits with the pure revenge tack that this started out as. 4. Whoever did this is in it for revenge. The info and access they had could have easily been used to cash out, yet, instead, they are making every effort to burn Sony down. Just think what they could have done with passwords to all of Sony’s financial accounts? With the competitive intelligence in their business documents? From simple theft, to the sale of intellectual property, or even extortion – the attackers had many ways to become rich. Yet, instead, they chose to dump the data, rendering it useless. Likewise, I find it hard to believe that a “Nation State” which lives by propaganda would be so willing to just throw away such an unprecedented level of access to the beating heart of Hollywood itself.
  • 5. The attackers only latched onto “The Interview” after the media did – the film was never mentioned by GOP right at the start of their campaign. It was only after a few people started speculating in the media that this and the communication from DPRK “might be linked” that suddenly it became linked. I think the attackers both saw this as an opportunity for “lulz” and as a way to misdirect everyone into thinking it was a nation state. After all, if everyone believes it’s a nation state, then the criminal investigation will likely die.
  • 6. Whoever is doing this is VERY net and social media savvy. That, and the sophistication of the operation, do not match the profile of the DPRK up until now. Grugq did an excellent analysis of this aspect; his findings are here – http://0paste.com/6875#md 7. Finally, blaming North Korea is the easy way out for a number of folks, including the security vendors and Sony management who are under the microscope for this. Let’s face it – most of today’s so-called “cutting edge” security defenses are either so specific, or so brittle, that they really don’t offer much meaningful protection against a sophisticated attacker or group of attackers.
  • 8. It probably also suits a number of political agendas to have something that justifies sabre-rattling at North Korea, which is why I’m not that surprised to see politicians starting to point their fingers at the DPRK also. 9. It’s clear from the leaked data that Sony has a culture which doesn’t take security very seriously. From plaintext password files, to using “password” as the password in business-critical certificates, through to just the sheer volume of aging unclassified yet highly sensitive data left out in the open. This isn’t a simple slip-up or a “weak link in the chain” – this is a serious organization-wide failure to implement anything like a reasonable security architecture.
  • The reality is, as things stand, Sony has little choice but to burn everything down and start again. Every password, every key, every certificate is tainted now and that’s a terrifying place for an organization to find itself. This hack should be used as the definitive lesson in why security matters and just how bad things can get if you don’t take it seriously. 10. Who do I think is behind this? My money is on a disgruntled (possibly ex) employee of Sony.
  • EDIT: This appears (at least in part) to be substantiated by a conversation the Verge had with one of the alleged hackers – http://www.theverge.com/2014/11/25/7281097/sony-pictures-hackers-say-they-want-equality-worked-with-staff-to-break-in Finally, for an EXCELLENT blow-by-blow analysis of the breach and the events that followed, read the following post by my friends from Risk Based Security – https://www.riskbasedsecurity.com/2014/12/a-breakdown-and-analysis-of-the-december-2014-sony-hack EDIT: Also make sure you read my good friend Krypt3ia’s post on the hack – http://krypt3ia.wordpress.com/2014/12/18/sony-hack-winners-and-losers/
  •  
    Seems that the FBI overlooked a few clues before it told Obama to go ahead and declare war against North Korea. 
Paul Merrell

How to Encrypt the Entire Web for Free - The Intercept

  • If we’ve learned one thing from the Snowden revelations, it’s that what can be spied on will be spied on. Since the advent of what used to be known as the World Wide Web, it has been a relatively simple matter for network attackers—whether it’s the NSA, Chinese intelligence, your employer, your university, abusive partners, or teenage hackers on the same public WiFi as you—to spy on almost everything you do online. HTTPS, the technology that encrypts traffic between browsers and websites, fixes this problem—anyone listening in on that stream of data between you and, say, your Gmail window or bank’s web site would get nothing but useless random characters—but is woefully under-used. The ambitious new non-profit Let’s Encrypt aims to make the process of deploying HTTPS not only fast, simple, and free, but completely automatic. If it succeeds, the project will render vast regions of the internet invisible to prying eyes.
  • Encryption also prevents attackers from tampering with or impersonating legitimate websites. For example, the Chinese government censors specific pages on Wikipedia, the FBI impersonated The Seattle Times to get a suspect to click on a malicious link, and Verizon and AT&T injected tracking tokens into mobile traffic without user consent. HTTPS goes a long way in preventing these sorts of attacks. And of course there’s the NSA, which relies on the limited adoption of HTTPS to continue to spy on the entire internet with impunity. If companies want to do one thing to meaningfully protect their customers from surveillance, it should be enabling encryption on their websites by default.
  • Let’s Encrypt, which was announced this week but won’t be ready to use until the second quarter of 2015, describes itself as “a free, automated, and open certificate authority (CA), run for the public’s benefit.” It’s the product of years of work from engineers at Mozilla, Cisco, Akamai, Electronic Frontier Foundation, IdenTrust, and researchers at the University of Michigan. (Disclosure: I used to work for the Electronic Frontier Foundation, and I was aware of Let’s Encrypt while it was being developed.) If Let’s Encrypt works as advertised, deploying HTTPS correctly and using all of the best practices will be one of the simplest parts of running a website. All it will take is running a command. Currently, HTTPS requires jumping through a variety of complicated hoops that certificate authorities insist on in order to prove ownership of domain names. Let’s Encrypt automates this task in seconds, without requiring any human intervention, and at no cost.
  • The benefits of using HTTPS are obvious when you think about protecting secret information you send over the internet, like passwords and credit card numbers. It also helps protect information like what you search for in Google, what articles you read, what prescription medicine you take, and messages you send to colleagues, friends, and family from being monitored by hackers or authorities. But there are less obvious benefits as well. Websites that don’t use HTTPS are vulnerable to “session hijacking,” where attackers can take over your account even if they don’t know your password. When you download software without encryption, sophisticated attackers can secretly replace the download with malware that hacks your computer as soon as you try installing it.
  • The transition to a fully encrypted web won’t be immediate. After Let’s Encrypt is available to the public in 2015, each website will have to actually use it to switch over. And major web hosting companies also need to hop on board for their customers to be able to take advantage of it. If hosting companies start work now to integrate Let’s Encrypt into their services, they could offer HTTPS hosting by default at no extra cost to all their customers by the time it launches.
  •  
    Don't miss the video. And if you have a web site, urge your host service to begin preparing for Let's Encrypt. (See video on why it's good for them.)
Paul Merrell

European Human Rights Court Deals a Heavy Blow to the Lawfulness of Bulk Surveillance |...

  • In a seminal decision updating and consolidating its previous jurisprudence on surveillance, the Grand Chamber of the European Court of Human Rights took a sideways swing at mass surveillance programs last week, reiterating the centrality of “reasonable suspicion” to the authorization process and the need to ensure interception warrants are targeted to an individual or premises. The decision in Zakharov v. Russia — coming on the heels of the European Court of Justice’s strongly-worded condemnation in Schrems of interception systems that provide States with “generalised access” to the content of communications — is another blow to governments across Europe and the United States that continue to argue for the legitimacy and lawfulness of bulk collection programs. It also provoked the ire of the Russian government, prompting an immediate legislative move to give the Russian constitution precedence over Strasbourg judgments. The Grand Chamber’s judgment in Zakharov is especially notable because its subject matter — the Russian SORM system of interception, which includes the installation of equipment on telecommunications networks that subsequently enables the State direct access to the communications transiting through those networks — is similar in many ways to the interception systems currently enjoying public and judicial scrutiny in the United States, France, and the United Kingdom. Zakharov also provides a timely opportunity to compare the differences between UK and Russian law: Namely, Russian law requires prior independent authorization of interception measures, whereas neither the proposed UK law nor the existing legislative framework do.
  • The decision is lengthy and comprises a useful restatement and harmonization of the Court’s approach to standing (which it calls “victim status”) in surveillance cases, which is markedly different from that taken by the US Supreme Court. (Indeed, Judge Dedov’s separate but concurring opinion notes the contrast with Clapper v. Amnesty International.) It also addresses at length issues of supervision and oversight, as well as the role played by notification in ensuring the effectiveness of remedies. (Marko Milanovic discusses many of these issues here.) For the purpose of the ongoing debate around the legitimacy of bulk surveillance regimes under international human rights law, however, three particular conclusions of the Court are critical.
  • The Court took issue with legislation permitting the interception of communications for broad national, military, or economic security purposes (as well as for “ecological security” in the Russian case), absent any indication of the particular circumstances under which an individual’s communications may be intercepted. It said that such broadly worded statutes confer an “almost unlimited degree of discretion in determining which events or acts constitute such a threat and whether that threat is serious enough to justify secret surveillance” (para. 248). Such discretion cannot be unbounded. It can be limited through the requirement for prior judicial authorization of interception measures (para. 249). Non-judicial authorities may also be competent to authorize interception, provided they are sufficiently independent from the executive (para. 258). What is important, the Court said, is that the entity authorizing interception must be “capable of verifying the existence of a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” (para. 260). This finding clearly constitutes a significant threshold which a number of existing and pending European surveillance laws would not meet. For example, the existence of individualized reasonable suspicion runs contrary to the premise of signals intelligence programs where communications are intercepted in bulk; by definition, those programs collect information without any consideration of individualized suspicion. Yet the Court was clearly articulating the principle with national security-driven surveillance in mind, and with the knowledge that interception of communications in Russia is conducted by Russian intelligence on behalf of law enforcement agencies.
  • This element of the Grand Chamber’s decision distinguishes it from prior jurisprudence of the Court, namely the decisions of the Third Section in Weber and Saravia v. Germany (2006) and of the Fourth Section in Liberty and Ors v. United Kingdom (2008). In both cases, the Court considered legislative frameworks which enable bulk interception of communications. (In the German case, the Court used the term “strategic monitoring,” while it referred to “more general programmes of surveillance” in Liberty.) In the latter case, the Fourth Section sought to depart from earlier European Commission of Human Rights — the court of first instance until 1998 — decisions which developed the requirements of the law in the context of surveillance measures targeted at specific individuals or addresses. It took note of the Weber decision which “was itself concerned with generalized ‘strategic monitoring’, rather than the monitoring of individuals” and concluded that there was no “ground to apply different principles concerning the accessibility and clarity of the rules governing the interception of individual communications, on the one hand, and more general programmes of surveillance, on the other” (para. 63). The Court in Liberty made no mention of any need for any prior or reasonable suspicion at all.
  • In Weber, reasonable suspicion was addressed only at the post-interception stage; that is, under the German system, bulk intercepted data could be transmitted from the German Federal Intelligence Service (BND) to law enforcement authorities without any prior suspicion. The Court found that the transmission of personal data without any specific prior suspicion, “in order to allow the institution of criminal proceedings against those being monitored” constituted a fairly serious interference with individuals’ privacy rights that could only be remedied by safeguards and protections limiting the extent to which such data could be used (para. 125). (In the context of that case, the Court found that Germany’s protections and restrictions were sufficient.) When you compare the language from these three cases, it would appear that the Grand Chamber in Zakharov is reasserting the requirement for individualized reasonable suspicion, including in national security cases, with full knowledge of the nature of surveillance considered by the Court in its two recent bulk interception cases.
  • The requirement of reasonable suspicion is bolstered by the Grand Chamber’s subsequent finding in Zakharov that the interception authorization (e.g., the court order or warrant) “must clearly identify a specific person to be placed under surveillance or a single set of premises as the premises in respect of which the authorisation is ordered. Such identification may be made by names, addresses, telephone numbers or other relevant information” (para. 264). In making this finding, it references paragraphs from Liberty describing the broad nature of the bulk interception warrants under British law. In that case, it was this description that led the Court to find the British legislation possessed insufficient clarity on the scope or manner of exercise of the State’s discretion to intercept communications. In one sense, therefore, the Grand Chamber seems to be retroactively annotating the Fourth Section’s Liberty decision so that it might become consistent with its decision in Zakharov. Without this revision, the Court would otherwise appear to depart to some extent — arguably, purposefully — from both Liberty and Weber.
  • Finally, the Grand Chamber took issue with the direct nature of the access enjoyed by Russian intelligence under the SORM system. The Court noted that this contributed to rendering oversight ineffective, despite the existence of a requirement for prior judicial authorization. Absent an obligation to demonstrate such prior authorization to the communications service provider, the likelihood that the system would be abused through “improper action by a dishonest, negligent or overly zealous official” was quite high (para. 270). Accordingly, “the requirement to show an interception authorisation to the communications service provider before obtaining access to a person’s communications is one of the important safeguards against abuse by the law-enforcement authorities” (para. 269). Again, this requirement arguably creates an unconquerable barrier for a number of modern bulk interception systems, which rely on the use of broad warrants to authorize the installation of, for example, fiber optic cable taps that facilitate the interception of all communications that cross those cables. In the United Kingdom, as the Independent Reviewer of Terrorism Legislation David Anderson revealed in his essential inquiry into British surveillance in 2015, there are only 20 such warrants in existence at any time. Even if these 20 warrants are served on the relevant communications service providers upon the installation of cable taps, the nature of bulk interception deprives this of any genuine meaning, making the safeguard an empty one. Once a tap is installed for the purposes of bulk interception, the provider is cut out of the equation and can no longer play the role the Court found so crucial in Zakharov.
  • The Zakharov case not only levels a serious blow at bulk, untargeted surveillance regimes, it suggests the Grand Chamber’s intention to actively craft European Court of Human Rights jurisprudence in a manner that curtails such regimes. Any suggestion that the Grand Chamber’s decision was issued in ignorance of the technical capabilities or intentions of States and the continued preference for bulk interception systems should be dispelled; the oral argument in the case took place in September 2014, at a time when the Court had already indicated its intention to accord priority to cases arising out of the Snowden revelations. Indeed, the Court referenced such forthcoming cases in the fact sheet it issued after the Zakharov judgment was released. Any remaining doubt is eradicated through an inspection of the multiple references to the Snowden revelations in the judgment itself. In the main judgment, the Court excerpted text from the Director of the European Union Agency for Human Rights discussing Snowden, and in the separate opinion issued by Judge Dedov, he goes so far as to quote Edward Snowden: “With each court victory, with every change in the law, we demonstrate facts are more convincing than fear. As a society, we rediscover that the value of the right is not in what it hides, but in what it protects.”
  • The full implications of the Zakharov decision remain to be seen. However, it is likely we will not have to wait long to know whether the Grand Chamber intends to see the demise of bulk collection schemes; the three UK cases (Big Brother Watch & Ors v. United Kingdom, Bureau of Investigative Journalism & Alice Ross v. United Kingdom, and 10 Human Rights Organisations v. United Kingdom) pending before the Court have been fast-tracked, indicating the Court’s willingness to continue to confront the compliance of bulk collection schemes with human rights law. It is my hope that the approach in Zakharov hints at the Court’s conviction that bulk collection schemes lie beyond the bounds of permissible State surveillance.
Paul Merrell

Report: Microsoft is scrapping Edge, switching to just another Chrome clone | Ars Technica

  • Windows Central reports that Microsoft is planning to replace its Edge browser, which uses Microsoft's own EdgeHTML rendering engine and Chakra JavaScript engine, with a new browser built on Chromium, the open source counterpart to Google's Chrome. The new browser has the codename Anaheim.
Paul Merrell

The De-Americanization of Internet Freedom - Lawfare

  • Why did the internet freedom agenda fail? Goldsmith’s essay tees up, but does not fully explore, a range of explanatory hypotheses. The most straightforward have to do with unrealistic expectations and unintended consequences. The idea that a minimally regulated internet would usher in an era of global peace, prosperity, and mutual understanding, Goldsmith tells us, was always a fantasy. As a project of democracy and human rights promotion, the internet freedom agenda was premised on a wildly overoptimistic view about the capacity of information flows, on their own, to empower oppressed groups and effect social change. Embracing this market-utopian view led the United States to underinvest in cybersecurity, social media oversight, and any number of other regulatory tools. In suggesting this interpretation of where U.S. policymakers and their civil society partners went wrong, Goldsmith’s essay complements recent critiques of the neoliberal strains in the broader human rights and transparency movements. Perhaps, however, the internet freedom agenda has faltered not because it was so naïve and unrealistic, but because it was so effective at achieving its realist goals. The seeds of this alternative account can be found in Goldsmith’s concession that the commercial non-regulation principle helped companies like Apple, Google, Facebook, and Amazon grab “huge market share globally.” The internet became an increasingly valuable cash cow for U.S. firms and an increasingly potent instrument of U.S. soft power over the past two decades; foreign governments, in due course, felt compelled to fight back. If the internet freedom agenda is understood as fundamentally a national economic project, rather than an international political or moral crusade, then we might say that its remarkable early success created the conditions for its eventual failure. Goldsmith’s essay also points to a third set of possible explanations for the collapse of the internet freedom agenda, involving its internal contradictions. Magaziner’s notion of a completely deregulated marketplace, if taken seriously, is incoherent. As Goldsmith and Tim Wu have discussed elsewhere, it takes quite a bit of regulation for any market, including markets related to the internet, to exist and to work. And indeed, even as Magaziner proposed “complete deregulation” of the internet, he simultaneously called for new legal protections against computer fraud and copyright infringement, which were soon followed by extensive U.S. efforts to penetrate foreign networks and to militarize cyberspace. Such internal dissonance was bound to invite charges of opportunism, and to render the American agenda unstable.
Paul Merrell

The Supreme Court's Groundbreaking Privacy Victory for the Digital Age | American Civil...

  • The Supreme Court on Friday handed down what is arguably the most consequential privacy decision of the digital age, ruling that police need a warrant before they can seize people’s sensitive location information stored by cellphone companies. The case specifically concerns the privacy of cellphone location data, but the ruling has broad implications for government access to all manner of information collected about people and stored by the purveyors of popular technologies. In its decision, the court rejects the government’s expansive argument that people lose their privacy rights merely by using those technologies. Carpenter v. U.S., which was argued by the ACLU, involves Timothy Carpenter, who was convicted in 2013 of a string of burglaries in Detroit. To tie Carpenter to the burglaries, FBI agents obtained — without seeking a warrant — months’ worth of his location information from Carpenter’s cellphone company. They got almost 13,000 data points tracking Carpenter’s whereabouts during that period, revealing where he slept, when he attended church, and much more. Indeed, as Chief Justice John Roberts wrote in Friday’s decision, “when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.”
  • The ACLU argued the agents had violated Carpenter’s Fourth Amendment rights when they obtained such detailed records without a warrant based on probable cause. In a decision written by Chief Justice John Roberts, the Supreme Court agreed, recognizing that the Fourth Amendment must apply to records of such unprecedented breadth and sensitivity: Mapping a cell phone’s location over the course of 127 days provides an all-encompassing record of the holder’s whereabouts. As with GPS information, the timestamped data provides an intimate window into a person’s life, revealing not only his particular movements, but through them his ‘familial, political, professional, religious, and sexual associations.’
  • The government’s argument that it needed no warrant for these records extends far beyond cellphone location information, to any data generated by modern technologies and held by private companies rather than in our own homes or pockets. To make their case, government lawyers relied on an outdated, 1970s-era legal doctrine that says that once someone shares information with a “third party” — in Carpenter’s case, a cellphone company — that data is no longer protected by the Fourth Amendment. The Supreme Court made abundantly clear that this doctrine has its limits and cannot serve as a carte blanche for the government seizure of any data of its choosing without judicial oversight.
  • While the decision extends in the immediate term only to historical cellphone location data, the Supreme Court’s reasoning opens the door to the protection of the many other kinds of data generated by popular technologies. Today’s decision provides a groundbreaking update to privacy rights that the digital age has rendered vulnerable to abuse by the government’s appetite for surveillance. It recognizes that “cell phones and the services they provide are ‘such a pervasive and insistent part of daily life’ that carrying one is indispensable to participation in modern society.” And it helps ensure that we don’t have to give up those rights if we want to participate in modern life. 
Paul Merrell

Google will 'de-rank' RT articles to make them harder to find - Eric Schmidt - RT World...

  • Eric Schmidt, the Executive Chairman of Google’s parent company Alphabet, says the company will “engineer” specific algorithms for RT and Sputnik to make their articles less prominent on the search engine’s news delivery services. “We are working on detecting and de-ranking those kinds of sites – it’s basically RT and Sputnik,” Schmidt said during a Q & A session at the Halifax International Security Forum in Canada on Saturday, when asked about whether Google facilitates “Russian propaganda.”
  • “We are well aware of it, and we are trying to engineer the systems to prevent that [the content being delivered to wide audiences]. But we don’t want to ban the sites – that’s not how we operate.” The discussion focused on the company’s popular Google News service, which clusters the news by stories, then ranks the various media outlets depending on their reach, article length and veracity, and Google Alerts, which proactively informs subscribers of new publications.
  • The Alphabet chief, who has been referred to by Hillary Clinton as a “longtime friend,” added that the experience of “the last year” showed that audiences could not be trusted to distinguish fake and real news for themselves. “We started with the default American view that ‘bad’ speech would be replaced with ‘good’ speech, but the problem found in the last year is that this may not be true in certain situations, especially when you have a well-funded opponent who is trying to actively spread this information,” he told the audience.
  • RT America registered under FARA earlier this month, after being threatened by the US Department of Justice with arrests and confiscations of property if it failed to comply. The broadcaster is fighting the order in court.
Paul Merrell

U.S. looking at ways to hold Zuckerberg accountable for Facebook's problems

  • Federal regulators are discussing whether and how to hold Facebook Chief Executive Mark Zuckerberg personally accountable for the company's history of mismanaging users' private data, two sources familiar with the discussions told NBC News on Thursday. The sources wouldn't elaborate on what measures are specifically under consideration. The Washington Post, which first reported the development, reported that regulators were exploring increased oversight of Zuckerberg's leadership. While Facebook has come under scrutiny for its privacy practices for years, both of the Democratic members of the FTC have said the agency should target individual executives when appropriate. Justin Brookman, a former policy director for technology research at the Federal Trade Commission, or FTC, said Thursday night that while the FTC can name individual company leaders if they directed, controlled and knew about any wrongdoing, "they typically only use that authority in fraud-like cases, so far as I can tell."
Paul Merrell

Federal Trade Commission calls for breakup of Facebook

  • The Federal Trade Commission sued to break up Facebook on Wednesday, asking a federal court to force the sell-off of assets such as Instagram and WhatsApp as independent businesses. “Facebook has maintained its monopoly position by buying up companies that present competitive threats and by imposing restrictive policies that unjustifiably hinder actual or potential rivals that Facebook does not or cannot acquire,” the commission said in the lawsuit filed in federal court in Washington, D.C. The lawsuit asks the court to order the “divestiture of assets, divestiture or reconstruction of businesses (including, but not limited to, Instagram and/or WhatsApp),” as well as other possible relief the court might want to add.
  • Attorneys general from 48 states and territories said they were filing their own lawsuit against Facebook, reflecting the broad and bipartisan concern about how much power Facebook and its CEO, Mark Zuckerberg, have accumulated on the internet.
Paul Merrell

How the GOP muzzled the coalition fighting foreign propaganda on Twitter, Facebook and ...

  • A once-robust alliance of federal agencies, tech companies, election officials and researchers that worked together to thwart foreign propaganda and disinformation has fragmented after years of sustained Republican attacks. The GOP offensive started during the 2020 election as public critiques and has since escalated into lawsuits, governmental inquiries and public relations campaigns that have succeeded in stopping almost all coordination between the government and social media platforms. The most recent setback came when the FBI put an indefinite hold on most briefings to social media companies about Russian, Iranian and Chinese influence campaigns. Employees at two U.S. tech companies who used to receive regular briefings from the FBI’s Foreign Influence Task Force told NBC News that it has been months since the bureau reached out. In testimony last week to the Senate Homeland Security Committee, FBI Director Christopher Wray signaled a significant pullback in communications with tech companies and tied the move to rulings by a conservative federal judge and appeals court that said some government agencies and officials should be restricted from communicating and meeting with social media companies to moderate content. The case is now on hold pending Supreme Court review. “We’re having some interaction with social media companies,” Wray said. “But all of those interactions have changed fundamentally in the wake of the court rulings.”