
Open Web: Group items tagged "images"


Paul Merrell

European Parliament Urges Protection for Edward Snowden - The New York Times - 0 views

  • The European Parliament narrowly adopted a nonbinding but nonetheless forceful resolution on Thursday urging the 28 nations of the European Union to recognize Edward J. Snowden as a “whistle-blower and international human rights defender” and shield him from prosecution. On Twitter, Mr. Snowden, the former National Security Agency contractor who leaked millions of documents about electronic surveillance by the United States government, called the vote a “game-changer.” But the resolution has no legal force and limited practical effect for Mr. Snowden, who is living in Russia on a three-year residency permit. Whether to grant Mr. Snowden asylum remains a decision for the individual European governments, and none have done so thus far. Still, the resolution was the strongest statement of support seen for Mr. Snowden from the European Parliament. At the same time, the close vote — 285 to 281 — suggested the extent to which some European lawmakers are wary of alienating the United States.
  • The resolution calls on European Union members to “drop any criminal charges against Edward Snowden, grant him protection and consequently prevent extradition or rendition by third parties.” In June 2013, shortly after Mr. Snowden’s leaks became public, the United States charged him with theft of government property and violations of the Espionage Act of 1917. By then, he had flown to Moscow, where he spent weeks in legal limbo before he was granted temporary asylum and, later, a residency permit. Four Latin American nations have offered him permanent asylum, but he does not believe he could travel from Russia to those countries without running the risk of arrest and extradition to the United States along the way.
  • The White House, which has used diplomatic efforts to discourage even symbolic resolutions of support for Mr. Snowden, immediately criticized the resolution. “Our position has not changed,” said Ned Price, a spokesman for the National Security Council in Washington. “Mr. Snowden is accused of leaking classified information and faces felony charges here in the United States. As such, he should be returned to the U.S. as soon as possible, where he will be accorded full due process.” Jan Philipp Albrecht, one of the lawmakers who sponsored the resolution in Europe, said it should increase pressure on national governments.
  • ...1 more annotation...
  • “It’s the first time a Parliament votes to ask for this to be done — and it’s the European Parliament,” Mr. Albrecht, a German lawmaker with the Greens political bloc, said in a phone interview shortly after the vote, which was held in Strasbourg, France. “So this has an impact surely on the debate in the member states.” The resolution “is asking or demanding the member states’ governments to end all the charges and to prevent any extradition to a third party,” Mr. Albrecht said. “That’s a very clear call, and that can’t be just ignored by the governments,” he said.
Paul Merrell

Civil Society Groups Ask Facebook To Provide Method To Appeal Censorship | PopularResis... - 0 views

  • EFF, Human Rights Watch, and Over 70 Civil Society Groups Ask Mark Zuckerberg to Provide All Users with Mechanism to Appeal Content Censorship on Facebook. World’s Freedom of Expression Is In Your Hands, Groups Tell CEO. San Francisco—The Electronic Frontier Foundation (EFF) and more than 70 human and digital rights groups called on Mark Zuckerberg today to add real transparency and accountability to Facebook’s content removal process. Specifically, the groups demand that Facebook clearly explain how much content it removes, both rightly and wrongly, and provide all users with a fair and timely method to appeal removals and get their content back up. While Facebook is under enormous—and still mounting—pressure to remove material that is truly threatening, without transparency, fairness, and processes to identify and correct mistakes, Facebook’s content takedown policies too often backfire and silence the very people that should have their voices heard on the platform. Politicians, museums, celebrities, and other high profile groups and individuals whose improperly removed content can garner media attention seem to have little trouble reaching Facebook to have content restored—they sometimes even receive an apology. But the average user? Not so much. Facebook only allows people to appeal content decisions in a limited set of circumstances, and in many cases, users have absolutely no option to appeal. Onlinecensorship.org, an EFF project for users to report takedown notices, has collected reports of hundreds of unjustified takedown incidents where appeals were unavailable. For most users, content Facebook removes is rarely restored, and some are banned from the platform for no good reason. EFF, Article 19, the Center for Democracy and Technology, and Ranking Digital Rights wrote directly to Mark Zuckerberg today demanding that Facebook implement common sense standards so that average users can easily appeal content moderation decisions, receive prompt replies and timely review by a human or humans, and have the opportunity to present evidence during the review process. The letter was co-signed by more than 70 human rights, digital rights, and civil liberties organizations from South America, Europe, the Middle East, Asia, Africa, and the U.S.
Paul Merrell

Trump administration pulls back curtain on secretive cybersecurity process - The Washin... - 0 views

  • The White House on Wednesday made public for the first time the rules by which the government decides to disclose or keep secret software flaws that can be turned into cyberweapons — whether by U.S. agencies hacking for foreign intelligence, money-hungry criminals or foreign spies seeking to penetrate American computers. The move to publish an unclassified charter responds to years of criticism that the process was unnecessarily opaque, fueling suspicion that it cloaked a stockpile of software flaws that the National Security Agency was hoarding to go after foreign targets but that put Americans’ cybersecurity at risk.
  • The rules are part of the “Vulnerabilities Equities Process,” which the Obama administration revamped in 2014 as a multiagency forum to debate whether and when to inform companies such as Microsoft and Juniper that the government has discovered or bought a software flaw that, if weaponized, could affect the security of their product. The Trump administration has mostly not altered the rules under which the government reaches a decision but is disclosing its process. Under the VEP, an “equities review board” of at least a dozen national security and civilian agencies will meet monthly — or more often, if a need arises — to discuss newly discovered vulnerabilities. Besides the NSA, the CIA and the FBI, the list includes the Treasury, Commerce and State departments, and the Office of Management and Budget. The priority is on disclosure, the policy states, to protect core Internet systems, the U.S. economy and critical infrastructure, unless there is “a demonstrable, overriding interest” in using the flaw for intelligence or law enforcement purposes. The government has long said that it discloses the vast majority — more than 90 percent — of the vulnerabilities it discovers or buys in products from defense contractors or other sellers. In recent years, that has amounted to more than 100 a year, according to people familiar with the process. But because the process was classified, the National Security Council, which runs the discussion, was never able to reveal any numbers. Now, [White House cybersecurity coordinator Rob] Joyce said, the number of flaws disclosed and the number retained will be made public in an annual report. A classified version will be sent to Congress, he said.
timothypeverhart

Google Chrome for PC Latest Version - 0 views


Google Chrome

started by timothypeverhart on 24 Jul 23 no follow-up yet
Paul Merrell

The People and Tech Behind the Panama Papers - Features - Source: An OpenNews project - 0 views

  • Then we put the data up, but the problem with Solr was it didn’t have a user interface, so we used Project Blacklight, which is open source software normally used by librarians. We used it for the journalists. It’s simple because it allows you to do faceted search—so, for example, you can facet by the folder structure of the leak, by years, by type of file. There were more complex things—it supports queries in regular expressions, so the more advanced users were able to search for documents with a certain pattern of numbers that, for example, passports use. You could also preview and download the documents. ICIJ open-sourced the code of our document processing chain, created by our web developer Matthew Caruana Galizia. We also developed a batch-searching feature. So say you were looking for politicians in your country—you just run it through the system, and you upload your list to Blacklight and you would get a CSV back saying yes, there are matches for these names—not only exact matches, but also matches based on proximity. So you would say “I want Mar Cabra proximity 2” and that would give you “Mar Cabra,” “Mar whatever Cabra,” “Cabra, Mar,”—so that was good, because very quickly journalists were able to see… I have this list of politicians and they are in the data!
  • Last Sunday, April 3, the first stories emerging from the leaked dataset known as the Panama Papers were published by a global partnership of news organizations working in coordination with the International Consortium of Investigative Journalists, or ICIJ. As we begin the second week of reporting on the leak, Iceland’s Prime Minister has been forced to resign, Germany has announced plans to end anonymous corporate ownership, governments around the world launched investigations into wealthy citizens’ participation in tax havens, the Russian government announced that the investigation was an anti-Putin propaganda operation, and the Chinese government banned mentions of the leak in Chinese media. As the ICIJ-led consortium prepares for its second major wave of reporting on the Panama Papers, we spoke with Mar Cabra, editor of ICIJ’s Data & Research unit and lead coordinator of the data analysis and infrastructure work behind the leak. In our conversation, Cabra reveals ICIJ’s years-long effort to build a series of secure communication and analysis platforms in support of genuinely global investigative reporting collaborations.
  • For communication, we have the Global I-Hub, which is a platform based on open source software called Oxwall. Oxwall is a social network, like Facebook, which has a wall when you log in with the latest in your network—it has forum topics, links, you can share files, and you can chat with people in real time.
  • ...3 more annotations...
  • We had the data in a relational database format in SQL, and thanks to ETL (Extract, Transform, and Load) software Talend, we were able to easily transform the data from SQL to Neo4j (the graph-database format we used). Once the data was transformed, it was just a matter of plugging it into Linkurious, and in a couple of minutes, you have it visualized—in a networked way, so anyone can log in from anywhere in the world. That was another reason we really liked Linkurious and Neo4j—they’re very quick when representing graph data, and the visualizations were easy to understand for everybody. The not-very-tech-savvy reporter could expand the docs like magic, and more technically expert reporters and programmers could use the Neo4j query language, Cypher, to do more complex queries, like show me everybody within two degrees of separation of this person, or show me all the connected dots…
  • We believe in open source technology and try to use it as much as possible. We used Apache Solr for the indexing and Apache Tika for document processing, and it’s great because it processes dozens of different formats and it’s very powerful. Tika interacts with Tesseract, so we did the OCRing on Tesseract. To OCR the images, we created an army of 30–40 temporary servers in Amazon that allowed us to process the documents in parallel and do parallel OCR-ing. If it was very slow, we’d increase the number of servers—if it was going fine, we would decrease because of course those servers have a cost.
  • For the visualization of the Mossack Fonseca internal database, we worked with another tool called Linkurious. It’s not open source, it’s licensed software, but we have an agreement with them, and they allowed us to work with it. It allows you to represent data in graphs. We had a version of Linkurious on our servers, so no one else had the data. It was pretty intuitive—journalists had to click on dots that expanded, basically, and could search the names.
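The batch-search feature described in the annotations above — upload a list of names, get a CSV back, with proximity matches such as “Mar Cabra proximity 2” — maps naturally onto Lucene/Solr proximity queries. The following is a minimal sketch of that idea, not ICIJ's actual code: the Solr endpoint, core name, and field name are hypothetical stand-ins for the real index.

```python
# Sketch: batch proximity search against a hypothetical Solr core.
import csv
import requests

SOLR_URL = "http://localhost:8983/solr/leak/select"  # hypothetical endpoint

def batch_search(names, proximity=2):
    rows = []
    for name in names:
        # Lucene proximity query: the terms may appear within `proximity`
        # positions of each other, in any order ("Mar Cabra"~2 also matches
        # "Cabra, Mar").
        query = f'content:"{name}"~{proximity}'
        resp = requests.get(SOLR_URL, params={"q": query, "rows": 0, "wt": "json"})
        hits = resp.json()["response"]["numFound"]
        rows.append({"name": name, "matches": hits})
    return rows

if __name__ == "__main__":
    watch_list = ["Mar Cabra", "Jane Doe"]  # the uploaded list of names
    with open("matches.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "matches"])
        writer.writeheader()
        writer.writerows(batch_search(watch_list))
```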
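The kind of Cypher query mentioned above — “show me everybody within two degrees of separation of this person” — can be illustrated with the official Neo4j Python driver. The connection details, credentials, and the "Officer" node label below are assumptions made for the sake of the sketch, not the actual ICIJ schema.

```python
# Sketch: a two-degrees-of-separation query against an assumed Neo4j graph.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def two_degrees(name):
    # Variable-length pattern: any relationship, one or two hops away.
    query = (
        "MATCH (p:Officer {name: $name})-[*1..2]-(other) "
        "RETURN DISTINCT other.name AS name"
    )
    with driver.session() as session:
        return [record["name"] for record in session.run(query, name=name)]

print(two_degrees("Mar Cabra"))
driver.close()
```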
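The parallel OCR step described above (Tika handing images to Tesseract, fanned out across a fleet of temporary Amazon servers) can be approximated in miniature with local worker processes. This is a rough sketch under stated assumptions — pytesseract and Pillow installed, hypothetical file names — not the pipeline ICIJ open-sourced.

```python
# Sketch: run Tesseract OCR over scanned documents in parallel.
from multiprocessing import Pool
from PIL import Image
import pytesseract

def ocr_one(path):
    # pytesseract shells out to the tesseract binary for the actual OCR work.
    return path, pytesseract.image_to_string(Image.open(path))

if __name__ == "__main__":
    scans = ["doc-001.tif", "doc-002.tif", "doc-003.tif"]  # hypothetical files
    with Pool(processes=4) as pool:  # scale workers up or down, like the servers
        for path, text in pool.imap_unordered(ocr_one, scans):
            print(path, len(text), "characters extracted")
```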
Paul Merrell

ChronoZoom: A deep dive into the history of everything - 0 views

  • Imagine a timeline of the universe, complete with high-resolution videos and images, in which you could zoom from a chronology of Egypt’s dynasties and pyramids to the tale of a Japanese-American couple interned in a World War II relocation camp to a discussion of a mass extinction that occurred on Earth 200 million years ago – all in seconds. Based on an idea from a University of California, Berkeley, student, ChronoZoom – essentially a zoomable timeline of timelines augmented with multimedia features – is coming to life.
Paul Merrell

Tim Berners-Lee, W3C Approve Work On DRM For HTML 5.1 - Slashdot - 0 views

  • "Danny O'Brien from the EFF has a weblog post about how the Encrypted Media Extension (EME) proposal will continue to be part of HTML Work Group's bailiwick and may make it into a future HTML revision." From O'Brien's post: "A Web where you cannot cut and paste text; where your browser can't 'Save As...' an image; where the 'allowed' uses of saved files are monitored beyond the browser; where JavaScript is sealed away in opaque tombs; and maybe even where we can no longer effectively 'View Source' on some sites, is a very different Web from the one we have today. It's a Web where user agents—browsers—must navigate a nest of enforced duties every time they visit a page. It's a place where the next Tim Berners-Lee or Mozilla, if they were building a new browser from scratch, couldn't just look up the details of all the 'Web' technologies. They'd have to negotiate and sign compliance agreements with a raft of DRM providers just to be fully standards-compliant and interoperable."
  •  
    From the Dept. of YouGottaBeKiddingMe. 
Paul Merrell

This project aims to make '404 not found' pages a thing of the past - 0 views

  • The Internet is always changing. Sites are rising and falling, content is deleted, and bad URLs can lead to '404 Not Found' errors that are as helpful as a brick wall. A new project proposes to do away with dead 404 errors by implementing new HTML code that will help access prior versions of hyperlinked content. With any luck, that means that you’ll never have to run into a dead link again. The “404-No-More” project is backed by a formidable coalition including members from organizations like the Harvard Library Innovation Lab, Los Alamos National Laboratory, Old Dominion University, and the Berkman Center for Internet & Society. Part of the Knight News Challenge, which seeks to strengthen the Internet for free expression and innovation through a variety of initiatives, 404-No-More recently reached the semifinal stage. The project aims to cure so-called link rot, the process by which hyperlinks become useless over time because they point to addresses that are no longer available. If implemented, websites such as Wikipedia and other reference documents would be vastly improved. The new feature would also give Web authors a way to provide links that contain both archived copies of content and specific dates of reference, the sort of information that diligent readers have to hunt down on a website like Archive.org.
  • While it may sound trivial, link rot can actually have real ramifications. Nearly 50 percent of the hyperlinks in Supreme Court decisions no longer work, a 2013 study revealed. Losing footnotes and citations in landmark legal decisions can mean losing crucial information and context about the laws that govern us. The same study found that 70 percent of URLs within the Harvard Law Review and similar journals didn’t link to the originally cited information, considered a serious loss surrounding the discussion of our laws. The project’s proponents have come up with more potential uses as well. Activists fighting censorship will have an easier time combatting government takedowns, for instance. Journalists will be much more capable of researching dynamic Web pages. “If every hyperlink was annotated with a publication date, you could automatically view an archived version of the content as the author intended for you to see it,” the project’s authors explain. The ephemeral nature of the Web could no longer be used as a weapon. Roger Macdonald, a director at the Internet Archive, called the 404-No-More project “an important contribution to preservation of knowledge.”
  • The new feature would come in the form of introducing the mset attribute to the <a> element in HTML, which would allow users of the code to specify multiple dates and copies of content as an external resource. For instance, if an author knows both the date of reference and the location of a copy of the targeted content, the new markup would record both alongside the original link. The 404-No-More project’s goals are numerous, but the ultimate goal is to have mset become a new HTML standard for hyperlinks. “An HTML standard that incorporates archives for hyperlinks will loop in these efforts and make the Web better for everyone,” project leaders wrote, “activists, journalists, and regular ol’ everyday web users.”
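The proposed mset markup itself is not reproduced in this excerpt, but the behavior it is meant to enable can be sketched: a client that knows an archived copy's URL (and snapshot date) can fall back to that copy when the original link has rotted. The URLs and function below are hypothetical illustrations of the idea, not the proposal's specified mechanics.

```python
# Sketch: resolve a link with an author-supplied archived copy as a fallback.
import requests

def fetch_with_fallback(url, archived_url):
    try:
        resp = requests.get(url, timeout=10)
        if resp.status_code == 200:
            return resp.text
    except requests.RequestException:
        pass
    # Original is gone or unreachable: serve the archived copy instead of a 404.
    return requests.get(archived_url, timeout=10).text

page = fetch_with_fallback(
    "http://example.com/old-article",
    "https://web.archive.org/web/20140101000000/http://example.com/old-article",
)
```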
Paul Merrell

Another judge upholds NSA call tracking - POLITICO.com - 0 views

  • A federal judge in Idaho has upheld the constitutionality of the National Security Agency's program that gathers massive quantities of data on the telephone calls of Americans. The ruling Tuesday from U.S. District Court Judge B. Lynn Winmill leaves the federal government with two wins in lawsuits decided since the program was revealed about a year ago by ex-NSA contractor Edward Snowden. In addition, one judge handling a criminal case ruled that the surveillance did not violate the Constitution. Opponents of the program have only one win: U.S. District Court Judge Richard Leon's ruling in December that the program likely violates the Fourth Amendment. In the new decision, Winmill said binding precedent in the Ninth Circuit holds that call and email metadata are not protected by the Constitution and no warrant is needed to obtain it.
  • "The weight of the authority favors the NSA," wrote Winmill, an appointee of President Bill Clinton. Winmill took note of Leon's contrary decision and called it eloquent, but concluded it departs from current Supreme Court precedent — though perhaps not for long. "Judge Leon’s decision should serve as a template for a Supreme Court opinion. And it might yet," Winmill wrote as he threw out the lawsuit brought by an Idaho registered nurse who objected to the gathering of data on her phone calls. Winmill's opinion (posted here) does not address an argument put forward by some critics of the program, including some lawmakers: that the metadata program violates federal law because it does not fit squarely within the language of the statute used to authorize it.
  •  
    A partial win for the public. The judge makes plain that he disagrees with pre-Snowden disclosure precedent and recommends that the Supreme Court adopt the reasoning of Judge Richard Leon's ruling that finds the NSA call-metadata program violative of the Fourth Amendment. The judge says his hands are tied by prior decisions in the Ninth Circuit Court of Appeals that gave an expansive reading to Smith v. Maryland.
Paul Merrell

Facebook's Deepface Software Has Gotten Them in Deep Trouble | nsnbc international - 0 views

  • In a Chicago court, several Facebook users filed a class-action lawsuit against the social media giant for allegedly violating its users’ privacy rights to acquire the largest privately held stash of biometric face-recognition data in the world. The court documents reveal claims that “Facebook began violating the Illinois Biometric Information Privacy Act (IBIPA) of 2008 in 2010, in a purported attempt to make the process of tagging friends easier.”
  • This was accomplished through the “tag suggestions” feature provided by Facebook which “scans all pictures uploaded by users and identifies any Facebook friends they may want to tag.” The Facebook users maintain that this feature is a “form of data mining [that] violates user’s privacy”. One plaintiff said this is a “brazen disregard for its users’ privacy rights,” through which Facebook has “secretly amassed the world’s largest privately held database of consumer biometrics data,” because “Facebook actively conceals” its protocol using “faceprint databases” to identify Facebook users in photos, and “doesn’t disclose its wholesale biometrics data collection practices in its privacy policies, nor does it even ask users to acknowledge them.”
  • This would be a violation of the IBIPA which states it is “unlawful to collect biometric data without written notice to the subject stating the purpose and length of the data collection, and without obtaining the subject’s written release.” Because all users are automatically part of the “faceprint” facial recognition program, this is an illegal act in the state of Illinois, according to the complaint. Jay Edelson, attorney for the plaintiffs, asserts the opt-out ability to prevent other Facebook users from tagging them in photos is “insufficient”.
  • ...1 more annotation...
  • Deepface is the name of the new technology researchers at Facebook created in order to identify people in pictures, mimicking the way humans recognize the differences in each other’s faces. Facebook has already implemented facial recognition software (FRS) to suggest names for tagging photos; however, Deepface can “identify faces from a side view” as well as when the person is directly facing the camera in the picture. In 2013, Erin Egan, chief privacy officer for Facebook, said that this upgrade “would give users better control over their personal information, by making it easier to identify posted photos in which they appear.” Egan explained: “Our goal is to facilitate tagging so that people know when there are photos of them on our service.” Facebook has stated that they retain information from their users that is siphoned from all across the web. This data is used to increase Facebook’s profits with the information being sold for marketing purposes. This is the impressive feature of Deepface, as previous FRS could only decipher faces in images that are frontal views of people. Shockingly, Deepface displays 97.25% accuracy in identifying faces in photos. That is quite a feat considering humans have a 97.53% accuracy rate. In order to ensure accuracy, Deepface “conducts its analysis based on more than 120 million different parameters.”
Paul Merrell

Eight HTML5 Drafts Updated, W3C News Archive: 2010 W3C - 0 views

  • The HTML Working Group published eight documents: Working Drafts of the HTML5 specification, the accompanying explanatory document HTML5 differences from HTML4, and the related non-normative reference HTML: The Markup Language; Working Drafts of the specifications HTML+RDFa 1.1 and HTML Microdata, which define mechanisms for embedding machine-readable data in HTML documents, and the specification HTML Canvas 2D Context, which defines a 2D immediate-mode graphics API for use with the HTML5 <canvas> element; HTML5: Techniques for providing useful text alternatives, which is intended to help authors provide useful text alternatives for images in HTML documents; and Polyglot Markup: HTML-Compatible XHTML Documents, which is intended to help authors produce XHTML documents that are also compatible with non-XML HTML syntax and parsing rules.
Gary Edwards

Eucalyptus open-sources the cloud (Q&A) | The Open Road - CNET News - 0 views

  • The ideal customer is one with an IT organization that is tasked with supporting a heterogeneous set of user groups (each with its own technology needs, business logic, policies, etc.) using infrastructure that it must maintain across different phases of the technology lifecycle. There are two prevalent usage models that we observe regularly. The first is as a development and testing platform for applications that, ultimately, will be deployed in a public cloud. It is often easier, faster, and cheaper to use locally sited resources to develop and debug an application (particularly one that is designed to operate at scale) prior to its operational deployment in an externally hosted environment. The virtualization of machines makes cross-platform configuration easier to achieve and Eucalyptus' API compatibility makes the transition between on-premise resources and the public clouds simple. The second model is as an operational hybrid. It is possible to run the same image simultaneously both on-premise using Eucalyptus and in a public cloud thereby providing a way to augment local resources with those rented from a provider without modification to the application. For whom is this relevant technology today? Who are your customers? Wolski: We are seeing tremendous interest in several verticals. Banking/finance, big pharma, manufacturing, gaming, and the service provider market have been the early adopters to deploy and experiment with the Eucalyptus technology.
  • Eucalyptus is designed to be able to compose multiple technology platforms into a single "universal" cloud platform that exposes a common API, but that can at the same time support separate APIs for the individual technologies. Moreover, it is possible to export some of the specific and unique features of each technology through the common API as "quality-of-service" attributes.
  •  
    Eucalyptus, an open-source platform that implements "infrastructure as a service" (IaaS) style cloud computing, aims to take open source front and center in the cloud-computing craze. The project, founded by academics at the University of California at Santa Barbara, is now a Benchmark-funded company with an ambitious goal: become the universal cloud platform that everyone from Amazon to Microsoft to Red Hat to VMware ties into. [Eucalyptus] is architected to be compatible with such a wide variety of commonly installed data center technologies, [and hence] provides an easy and low-risk way of building private (i.e. on-premise or internal) clouds...Thus data center operators choosing Eucalyptus are assured of compatibility with the emerging application development and operational cloud ecosystem while attaining the security and IT investment amortization levels they desire without the "fear" of being locked into a single public cloud platform.
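Because Eucalyptus exposes an EC2-compatible API, the "operational hybrid" usage model described above can be sketched with ordinary AWS client tooling pointed at a private endpoint. This is an illustrative sketch, not an official Eucalyptus example: the endpoint URL, region, and image IDs are hypothetical placeholders.

```python
# Sketch: launch the same workload on-premise (Eucalyptus) or in AWS by
# switching only the endpoint the EC2-compatible client talks to.
import boto3

def launch(image_id, endpoint_url=None):
    ec2 = boto3.client(
        "ec2",
        region_name="us-east-1",
        endpoint_url=endpoint_url,  # None -> public AWS; URL -> private cloud
    )
    return ec2.run_instances(ImageId=image_id, MinCount=1, MaxCount=1)

# Hypothetical on-premise Eucalyptus endpoint, then the public cloud, same code:
launch("emi-0123456789abcdef0",
       endpoint_url="https://eucalyptus.example.internal:8773/services/compute")
launch("ami-0123456789abcdef0")
```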
Paul Merrell

Hewlett-Packard Traded WebOS for This: The Autonomy Gamble - 0 views

  • Content management systems today continue to be based on the types of structured database systems about one or two steps more evolved than dBASE. We've known they would be insufficient for the task, but we've put off the problem of composing a new architecture. It's already too late for major IT companies to start that new architecture from square one; if a company has any hope of addressing this colossal, underappreciated problem, it will need to acquire the architectural project in progress. This is what Hewlett-Packard announced yesterday that it intends to do: acquire a software firm whose core product aims to supplant everything we know about databases, both the SQL kind and the Google kind. In its place would come a clustered approach whose goal is no less than to be the central repository for meaning in the world.
  • As CEO Apotheker told analysts yesterday, HP intends to exploit the prospects for using Autonomy's technology as a foundation for a content management system. For now, that CMS would be a project for what, on the surface, seems an unlikely department: the Imaging and Printing Group (IPG). Autonomy describes this technology - which it calls Intelligent Data Operating Layer (IDOL) - as nothing less than a replacement for, a complete substitute for, a revolutionary disruption of, Google.
  • Elsewhere in Autonomy's literature is a monkey wrench it hurls directly at Google, with hopes of messing up its gears. Here, the company attacks the value of Google's page ranking technology in the enterprise: "in many cases, the most popular information is also the most relevant. The importance or popularity of a Web page is approximated by counting the number of other pages that are linked to it, and by how frequently those pages are viewed by other users. This works quite well on the Internet but in the enterprise it is doomed to failure. Firstly, there are no native links between information in the enterprise. Secondly, if a user happens to be an expert, perhaps in the field of gallium arsenide laser diodes, there may be no one else interested in the subject, but it is still imperative that they find relevant information." This is what HP is buying: an opportunity to disrupt Google. If IDOL is every bit the next stage of database evolution that Autonomy makes it out to be, then HP (at least in its executives' own minds) is not surrendering to Google at all, as some consumer publications this morning are suggesting. As HP perceives it, rather than cutting off Google's left arm, it's targeting the gut.
Paul Merrell

Project Summary - 3 views

  • Maqetta is an open source technology initiative at the Dojo Foundation that provides WYSIWYG tooling in the cloud for HTML5 (desktop and mobile). Maqetta allows User Experience Designers (UXD) to perform drag/drop assembly of live UI mockups. One of Maqetta's key design goals is to create developer-ready UI mockups that promote efficient hand-off from designers to developers. The user interfaces created by Maqetta are real-life web applications that can be handed off to developers, who can then transform the application incrementally from UI mockup into final shipping application. While we expect the Maqetta-created mockups often will go through major code changes, Maqetta is designed to promote preservation of visual assets, particularly the CSS style sheets, across the development life cycle. As a result, the careful pixel-level styling efforts by the UI team will carry through into the final shipping application. To help with the designer/developer hand-off, Maqetta includes a "download into ZIP" feature to create a ZIP image that can be imported into a developer tool workspace (e.g., Eclipse IDE). For team development, Maqetta includes a web-based review-and-commenting feature with forum-style comments and on-canvas annotations.
  • Maqetta includes: a WYSIWYG visual page editor for drawing out user interfaces; drag/drop mobile UI authoring within an exact-dimension device silhouette, such as the silhouette of an iPhone; simultaneous editing in either design or source views; deep support for CSS styling (the application includes a full CSS parser/modeler); a mechanism for organizing a UI prototype into a series of "application states" (aka "screens" or "panels"), which allows a UI designer to define interactivity without programming; a web-based review and commenting feature where the author can submit a live UI mockup for review by his team members; a "wireframing" feature that allows UI designers to create UI proposals that have a hand-drawn look; a theme editor for customizing the visual styling of a collection of widgets; and export options that allow for smooth hand-off of the UI mockups into leading developer tools such as Eclipse. Maqetta's code base has a toolkit-independent architecture that allows for plugging in arbitrary widget libraries and CSS themes.
Gary Edwards

Google Swiffy - 0 views

  •  
    Swiffy converts Flash SWF files to HTML5, allowing you to reuse Flash content on devices without a Flash player (such as iPhones and iPads). Swiffy currently supports a subset of SWF 8 and ActionScript 2.0, and the output works in all Webkit browsers such as Chrome and Mobile Safari. If possible, exporting your Flash animation as a SWF 5 file might give better results.
Paul Merrell

NSA contractors use LinkedIn profiles to cash in on national security | Al Jazeera America - 0 views

  • NSA spies need jobs, too. And that is why many covert programs could be hiding in plain sight. Job websites such as LinkedIn and Indeed.com contain hundreds of profiles that reference classified NSA efforts, posted by everyone from career government employees to low-level IT workers who served in Iraq or Afghanistan. They offer a rare glimpse into the intelligence community's projects and how they operate. Now some researchers are using the same kinds of big-data tools employed by the NSA to scrape public LinkedIn profiles for classified programs. But the presence of so much classified information in public view raises serious concerns about security — and about the intelligence industry as a whole. “I’ve spent the past couple of years searching LinkedIn profiles for NSA programs,” said Christopher Soghoian, the principal technologist with the American Civil Liberties Union’s Speech, Privacy and Technology Project.
  • On Aug. 3, The Wall Street Journal published a story about the FBI’s growing use of hacking to monitor suspects, based on information Soghoian provided. The next day, Soghoian spoke at the Defcon hacking conference about how he uncovered the existence of the FBI’s hacking team, known as the Remote Operations Unit (ROU), using the LinkedIn profiles of two employees at James Bimen Associates, with which the FBI contracts for hacking operations. “Had it not been for the sloppy actions of a few contractors updating their LinkedIn profiles, we would have never known about this,” Soghoian said in his Defcon talk. Those two contractors were not the only ones being sloppy.
  • “I was, like, huh, maybe there’s more we can do with this — actually get a list of all these profiles that have these results and use that to analyze the structure of which companies are helping with which programs, which people are helping with which programs, try to figure out in what capacity, and learn more about things that we might not know about,” McGrath said. He set up a computer program called a scraper to search LinkedIn for public profiles that mention known NSA programs, contractors or jargon — such as SIGINT, the agency’s term for “signals intelligence” gleaned from intercepted communications. Once the scraper found the name of an NSA program, it searched nearby for other words in all caps. That allowed McGrath to find the names of unknown programs, too. Once McGrath had the raw data — thousands of profiles in all, with 70 to 80 different program names — he created a network graph that showed the relationships between specific government agencies, contractors and intelligence programs. Of course, the data are limited to what people are posting on their LinkedIn profiles. Still, the network graph gives a sense of which contractors work on several NSA programs, which ones work on just one or two, and even which programs military units in Iraq and Afghanistan are using. And that is just the beginning.
  • ...2 more annotations...
  • And there are many more. A quick search of Indeed.com using three code names unlikely to return false positives — Dishfire, XKeyscore and Pinwale — turned up 323 résumés. The same search on LinkedIn turned up 48 profiles mentioning Dishfire, 18 mentioning XKeyscore and 74 mentioning Pinwale. Almost all these people appear to work in the intelligence industry. Network-mapping the data Fabio Pietrosanti of the Hermes Center for Transparency and Digital Human Rights noticed all the code names on LinkedIn last December. While sitting with M.C. McGrath at the Chaos Communication Congress in Hamburg, Germany, Pietrosanti began searching the website for classified program names — and getting serious results. McGrath was already developing Transparency Toolkit, a Web application for investigative research, and knew he could improve on Pietrosanti’s off-the-cuff methods.
  • An interactive network illustration (not reproduced in this excerpt) shows the relationships between specific national security surveillance programs, in red, and government organizations or private contractors, in blue.
  •  
    What a giggle, public spying on NSA and its contractors using Big Data. The interactive network graph with its sidebar display of relevant data derived from LinkedIn profiles is just too delightful. 
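The heuristic McGrath describes above — find known program names in profile text, then harvest nearby all-caps tokens as candidate code names for unknown programs — is simple enough to sketch. This is not Transparency Toolkit's code; the seed list and sample text are illustrative only, and the code names used in the sample are drawn from the ones mentioned in the article.

```python
# Sketch: flag candidate program code names near known ones in profile text.
import re

KNOWN_PROGRAMS = {"XKEYSCORE", "DISHFIRE", "PINWALE"}

def candidate_programs(profile_text, window=6):
    tokens = profile_text.split()
    candidates = set()
    for i, token in enumerate(tokens):
        if token.strip(",.;:()").upper() in KNOWN_PROGRAMS:
            # Look at neighboring tokens for other all-caps words (4+ letters)
            # that may be additional, previously unknown program names.
            for nearby in tokens[max(0, i - window): i + window + 1]:
                word = nearby.strip(",.;:()")
                if re.fullmatch(r"[A-Z]{4,}", word) and word not in KNOWN_PROGRAMS:
                    candidates.add(word)
    return candidates

sample = "Analyst experienced with XKEYSCORE, PINWALE and TRAFFICTHIEF reporting (SIGINT)."
print(candidate_programs(sample))  # -> {'TRAFFICTHIEF', 'SIGINT'}
```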
Paul Merrell

A Short Guide to the Internet's Biggest Enemies | Electronic Frontier Foundation - 0 views

  • Reporters Without Borders (RSF) released its annual “Enemies of the Internet” index this week—a ranking first launched in 2006 intended to track countries that repress online speech, intimidate and arrest bloggers, and conduct surveillance of their citizens.  Some countries have been mainstays on the annual index, while others have been able to work their way off the list.  Two countries particularly deserving of praise in this area are Tunisia and Myanmar (Burma), both of which have stopped censoring the Internet in recent years and are headed in the right direction toward Internet freedom. In the former category are some of the world’s worst offenders: Cuba, North Korea, China, Iran, Saudi Arabia, Vietnam, Belarus, Bahrain, Turkmenistan, Syria.  Nearly every one of these countries has amped up their online repression in recent years, from implementing sophisticated surveillance (Syria) to utilizing targeted surveillance tools (Vietnam) to increasing crackdowns on online speech (Saudi Arabia).  These are countries where, despite advocacy efforts by local and international groups, no progress has been made. The newcomers  A third, perhaps even more disheartening category, is the list of countries new to this year's index.  A motley crew, these nations have all taken new, harsh approaches to restricting speech or monitoring citizens:
  • United States: This is the first time the US has made it onto RSF’s list.  While the US government doesn’t censor online content, and pours money into promoting Internet freedom worldwide, the National Security Agency’s unapologetic dragnet surveillance and the government’s treatment of whistleblowers have earned it a spot on the index. United Kingdom: The European nation has been dubbed by RSF as the “world champion of surveillance” for its recently-revealed depraved strategies for spying on individuals worldwide.  The UK also joins countries like Ethiopia and Morocco in using terrorism laws to go after journalists.  Not noted by RSF, but also important, is the fact that the UK is also cracking down on legal pornography, forcing Internet users to opt-in with their ISP if they wish to view it and creating a slippery slope toward overblocking.  This is in addition to the government’s use of an opaque, shadowy NGO to identify child sexual abuse images, sometimes resulting instead in censorship of legitimate speech.
Paul Merrell

Most Agencies Falling Short on Mandate for Online Records - 0 views

  • Nearly 20 years after Congress passed the Electronic Freedom of Information Act Amendments (E-FOIA), only 40 percent of agencies have followed the law's instruction for systematic posting of records released through FOIA in their electronic reading rooms, according to a new FOIA Audit released today by the National Security Archive at www.nsarchive.org to mark Sunshine Week. The Archive team audited all federal agencies with Chief FOIA Officers as well as agency components that handle more than 500 FOIA requests a year — 165 federal offices in all — and found only 67 with online libraries populated with significant numbers of released FOIA documents and regularly updated.
  • Congress called on agencies to embrace disclosure and the digital era nearly two decades ago, with the passage of the 1996 "E-FOIA" amendments. The law mandated that agencies post key sets of records online, provide citizens with detailed guidance on making FOIA requests, and use new information technology to post online proactively records of significant public interest, including those already processed in response to FOIA requests and "likely to become the subject of subsequent requests." Congress believed then, and openness advocates know now, that this kind of proactive disclosure, publishing online the results of FOIA requests as well as agency records that might be requested in the future, is the only tenable solution to FOIA backlogs and delays. Thus the National Security Archive chose to focus on the e-reading rooms of agencies in its latest audit. Even though the majority of federal agencies have not yet embraced proactive disclosure of their FOIA releases, the Archive E-FOIA Audit did find that some real "E-Stars" exist within the federal government, serving as examples to lagging agencies that technology can be harnessed to create state-of-the-art FOIA platforms. Unfortunately, our audit also found "E-Delinquents" whose abysmal web performance recalls the teletype era.
  • E-Delinquents include the Office of Science and Technology Policy at the White House, which, despite being mandated to advise the President on technology policy, does not embrace 21st century practices by posting any frequently requested records online. Another E-Delinquent, the Drug Enforcement Administration, insults its website's viewers by claiming that it "does not maintain records appropriate for FOIA Library at this time."
  • ...9 more annotations...
  • "The presumption of openness requires the presumption of posting," said Archive director Tom Blanton. "For the new generation, if it's not online, it does not exist." The National Security Archive has conducted fourteen FOIA Audits since 2002. Modeled after the California Sunshine Survey and subsequent state "FOI Audits," the Archive's FOIA Audits use open-government laws to test whether or not agencies are obeying those same laws. Recommendations from previous Archive FOIA Audits have led directly to laws and executive orders which have: set explicit customer service guidelines, mandated FOIA backlog reduction, assigned individualized FOIA tracking numbers, forced agencies to report the average number of days needed to process requests, and revealed the (often embarrassing) ages of the oldest pending FOIA requests. The surveys include:
  • The federal government has made some progress moving into the digital era. The National Security Archive's last E-FOIA Audit in 2007, "File Not Found," reported that only one in five federal agencies had put online all of the specific requirements mentioned in the E-FOIA amendments, such as guidance on making requests, contact information, and processing regulations. The new E-FOIA Audit finds the number of agencies that have checked those boxes is now much higher — 100 out of 165 — though many (66 of 165) have posted just the bare minimum, especially when posting FOIA responses. An additional 33 agencies even now do not post these types of records at all, clearly thwarting the law's intent.
  • The FOIAonline Members (Department of Commerce, Environmental Protection Agency, Federal Labor Relations Authority, Merit Systems Protection Board, National Archives and Records Administration, Pension Benefit Guaranty Corporation, Department of the Navy, General Services Administration, Small Business Administration, U.S. Citizenship and Immigration Services, and Federal Communications Commission) won their "E-Star" by making past requests and releases searchable via FOIAonline. FOIAonline also allows users to submit their FOIA requests digitally.
  • Disabilities Compliance. Despite the E-FOIA Act, many government agencies do not embrace the idea of posting their FOIA responses online. The most common reason agencies give is that it is difficult to post documents in a format that complies with the Americans with Disabilities Act, also referred to as being "508 compliant," and the 1998 Amendments to the Rehabilitation Act that require federal agencies "to make their electronic and information technology (EIT) accessible to people with disabilities." E-Star agencies, however, have proven that 508 compliance is no barrier when the agency has a will to post. All documents posted on FOIAonline are 508 compliant, as are the documents posted by the Department of Defense and the Department of State. In fact, every document created electronically by the US government after 1998 should already be 508 compliant. Even old paper records that are scanned to be processed through FOIA can be made 508 compliant with just a few clicks in Adobe Acrobat, according to this Department of Homeland Security guide (essentially OCRing the text, and including information about where non-textual fields appear). Even if agencies are insistent it is too difficult to OCR older documents that were scanned from paper, they cannot use that excuse with digital records.
  • Key Findings
  • Excuses Agencies Give for Poor E-Performance
  • Justice Department guidance undermines the statute. Currently, the FOIA stipulates that documents "likely to become the subject of subsequent requests" must be posted by agencies somewhere in their electronic reading rooms. The Department of Justice's Office of Information Policy defines these records as "frequently requested records… or those which have been released three or more times to FOIA requesters." Of course, it is time-consuming for agencies to develop a system that keeps track of how often a record has been released, which is in part why agencies rarely do so and are often in breach of the law. Troublingly, both the current House and Senate FOIA bills include language that codifies the instructions from the Department of Justice. The National Security Archive believes the addition of this "three or more times" language actually harms the intent of the Freedom of Information Act as it will give agencies an easy excuse ("not requested three times yet!") not to proactively post documents that agency FOIA offices have already spent time, money, and energy processing. We have formally suggested alternate language requiring that agencies generally post "all records, regardless of form or format that have been released in response to a FOIA request."
  • THE E-DELINQUENTS: WORST OVERALL AGENCIES (in alphabetical order)
  • Privacy. Another commonly articulated concern about posting FOIA releases online is that doing so could inadvertently disclose private information from "first person" FOIA requests. This is a valid concern, and this subset of FOIA requests should not be posted online. (The Justice Department identified "first party" requester rights in 1989. Essentially agencies cannot use the b(6) privacy exemption to redact information if a person requests it for him or herself. An example of a "first person" FOIA would be a person's request for his own immigration file.) Cost and Waste of Resources. There is also a belief that there is little public interest in the majority of FOIA requests processed, and hence it is a waste of resources to post them. This thinking runs counter to the governing principle of the Freedom of Information Act: that government information belongs to US citizens, not US agencies. As such, the reason that a person requests information is immaterial as the agency processes the request; the "interest factor" of a document should also be immaterial when an agency is required to post it online. Some think that posting FOIA releases online is not cost effective. In fact, the opposite is true. It's not cost effective to spend tens (or hundreds) of person hours to search for, review, and redact FOIA requests only to mail it to the requester and have them slip it into their desk drawer and forget about it. That is a waste of resources. The released document should be posted online for any interested party to utilize. This will only become easier as FOIA processing systems evolve to automatically post the documents they track. The State Department earned its "E-Star" status demonstrating this very principle, and spent no new funds and did not hire contractors to build its Electronic Reading Room, instead it built a self-sustaining platform that will save the agency time and money going forward.
Paul Merrell

Are processors pushing up against the limits of physics? | Ars Technica - 0 views

  • When I first started reading Ars Technica, performance of a processor was measured in megahertz, and the major manufacturers were rushing to squeeze as many of them as possible into their latest silicon. Shortly thereafter, however, the energy needs and heat output of these beasts brought that race crashing to a halt. More recently, the number of processing cores rapidly scaled up, but they quickly reached the point of diminishing returns. Now, getting the most processing power for each Watt seems to be the key measure of performance. None of these things happened because the companies making processors ran up against hard physical limits. Rather, computing power ended up being constrained because progress in certain areas—primarily energy efficiency—was slow compared to progress in others, such as feature size. But could we be approaching physical limits in processing power? In this week's edition of Nature, The University of Michigan's Igor Markov takes a look at the sorts of limits we might face.
Paul Merrell

Tech Companies Reel as NSA's Spying Tarnishes Reputations - Bloomberg - 0 views

  • U.S. technology companies are in danger of losing more business to foreign competitors if the National Security Agency’s power to spy on customers isn’t curbed, researchers with the New America Foundation said in a report today. The report, by the foundation’s Open Technology Institute, called for prohibiting the NSA from collecting data in bulk, while letting companies report more details about what information they give the government. Senate legislation introduced today would fulfill some recommendations by the institute, a Washington-based advocacy group that has been critical of NSA programs.