Open Web / Group items tagged Important

Paul Merrell

IE Drops Like a Rock, Eroded by Chrome and Firefox - No end in sight to IE's fall - Sof... - 1 views

  • Internet Explorer’s dominance of the browser market has been weakening steadily since Mozilla’s open source browser started gaining traction with users. And with the advent of Google Chrome, IE’s share loss only became steeper. Statistics offered by Janco Associates reveal that in February 2010, Internet Explorer dropped below 65%. Over the past four years, the releases of Internet Explorer 7 and Internet Explorer 8 did nothing to halt IE’s crumbling market share.
  • Janco notes that from February 2009 to February 2010, IE dropped 6.21%, from 70.99% to 64.78%. “The major findings are that in the last 12 months Microsoft's browser market share has continued to erode - Microsoft lost over 6% in the last 12 months; Firefox's market share is unchanged for the last 12 months; Google Desktop and Chrome now have just under 6%; and Netscape is no more,” reads an excerpt from the Browser and Operating System Market Share White Paper.
Paul Merrell

Mobile Data Surpasses Voice Traffic For First Time - HotHardware - 0 views

  • Total mobile data traffic topped mobile voice traffic in the United States last year, for the first time. In fact, globally, data traffic (which includes SMS text messaging) topped voice traffic on a monthly basis last year, and total traffic across the world exceeded an exabyte for the first time in 2009, according to a report just released by Chetan Sharma Consulting, a leading strategist in the mobile industry (clients include AT&T and China Mobile).
Gary Edwards

A founder-friendly term sheet - Sam Altman - 1 views

  •  
    Must read for every entrepreneur! When your product and service can command these kinds of terms, your company is for sure worth investing in.

    "A founder-friendly term sheet

    When I invest (outside of YC) I make offers with the following term sheet. I've tried to make the terms reflect what I wanted when I was a founder. A few people have asked me if I'd share it, so here it is. I think it's pretty founder-friendly. If you believe the upside risk theory, then it makes sense to offer compelling terms and forgo some downside protection to get the best companies to want to work with you.

    What's most important is what's not in it:

    * No option pool. Taking the option pool out of the pre-money valuation (i.e., diluting only founders and not investors for future hires) is just a way to artificially manipulate valuation. New hires benefit everyone and should dilute everyone.

    * The company doesn't have to pay any of my legal fees. Requiring the company to pay investors' legal fees always struck me as particularly egregious - the company can probably make better use of the money than investors can, so I'll pay my own legal fees for the round (in a simple deal with no back and forth they always end up super low anyway).

    * No expiration. I got burned once by an exploding offer and haven't forgotten it; the founders can take as much time as they want to think about it. In practice, people usually decide pretty quickly.

    * No confidentiality. Founder/investor relationships are long and important. The founders should talk to whomever they want, and if they want to tell people what I offered them, I don't really care. Investors certainly tell each other what they offer companies. (Once we shake hands on a deal, of course, I expect the founders to honor it.)

    * No participating preferred, non-standard liquidation preference, etc. There is a 1x liquidation preference, but I'm willing to forgo even that and buy common shares (and sometimes
Gary Edwards

Ticked off: How stock market decimalization killed IPOs and ruined our economy ~ I, Cri... - 0 views

  •  
    Really interesting post from Robert X. Cringely: wealth through productivity vs. wealth through accumulation, and the important but seriously declining role of IPOs.

    excerpt: "Big business grows by economies of scale, economies of scale are gained by increasing efficiency, and increased efficiency in big business always - always - means creating more economic output with fewer people. More economic output is good, but fewer people is bad if you need 100,000 new jobs per month just to provide for normal U.S. population growth. This is the ultimate irony of policies that declare companies too big to fail when in fact they are more properly too big to survive. Our policy obsession with helping big business no matter which party is in power has been a major factor in our own economic demise because it doesn't create jobs. Our leaders and would-be leaders are really good at talking about the value of small and medium size businesses in America but really terrible about actually doing much to help.

    Now here comes the important part: if small businesses, young businesses, new businesses create jobs, then Initial Public Offerings create wealth. Wealth creation is just as important as job creation in our economy but too many experts get it wrong when they think wealth creation and wealth preservation are the same things, because they aren't."

    The fundamental error of trickle-down (Supply Side) economics is that it is dependent on rich people spending money, which they structurally can't do fast enough to matter, and philosophically won't do because their role in the food chain is about growth through accumulation, not through new production.
  •  
    I'm less than convinced that IPOs create wealth, in terms of the aggregate wealth of the nation. Most of the "wealth" created by IPOs goes to the previous owners of the business, plus whatever speculators can maneuver to acquire through capital gains. But waving the "IPO wand" does not magically boost productivity, business outputs, or business profitability. So if "wealth" is created, it is faux wealth.

    I think Cringely ventures too far from what the real argument is about: levels of government taxation and creating jobs. Supply Side economics is in reality an argument against taxing the wealthy. But Cringely doesn't even touch on the taxation issue. I also do not agree with his "Steve Jobs created 50,000 new jobs" schtick, because he does not take into account how many jobs were destroyed in the process. Modern information technology has unquestionably destroyed more jobs than it has created; the technology never would have succeeded had it not boosted individual productivity to the point that massive numbers of employees could be laid off. For example, remember the days when you could call a business and have a human being answer the phone and direct your call to the right person? That lady doesn't have that job anymore because of voice menu/mail technology. IT is all about doing more with fewer people.

    In the context of jobs and taxation levels, the fundamental error of Supply Side Economics is not the distinction between wealth accumulation and wealth creation. The real fundamental error is globalism: government policies that create enormous incentives to invest capital outside the U.S. Supply Side Economics simply blinks past that enormously inconvenient reality. To illustrate, let's try remodeling trickle-down economics in a way that has a prayer of producing more and better-paying jobs in the U.S. (Over-simplification warning.) -- The U.S. withdraws from all trade agreements standing in the way and repeals all laws inconsistent with the goal of
Paul Merrell

christine varney - Programming Blog - 0 views

  • Consumer Watchdog today called on the Justice Department to guarantee that its ongoing antitrust probe of Google’s business practices includes an investigation into whether the company is manipulating its search results to favor its own products. The nonprofit advocacy group said it sent a letter to Christine Varney, Assistant Attorney General for the Antitrust Division, after news that the European Commission had received three complaints against Google alleging the company manipulated search engine results in an anticompetitive way. Also this week, U.K.-based price comparison site Foundem filed papers with the Federal Communications Commission with examples of how Google products were allegedly favored in its search results.
  •  
    If the evidence supports the allegations, this is a plausible antitrust theory: a company with a dominant market position leveraging that position into new markets via integration. In essence it is the same theory as that applied against Microsoft's bundling and integration of Windows, Internet Explorer, and Windows Media Player.
Gary Edwards

Why a JavaScript hater thinks everyone needs to learn JavaScript in the next year - O'R... - 1 views

  • some extremely important game-changers: jQuery, JSON, Node.js, and HTML5.
  • Node.js has the potential to revolutionize web development. It is a framework for building high-performance web applications: applications that can respond very quickly and efficiently to a high volume of incoming requests.
  • Google has started a revolution in JavaScript performance.
  • ...11 more annotations...
  • the number of JavaScript developers is huge.
  • HTML5 is about JavaScript
  • The power of HTML5 lies in what these tags allow you to create in JavaScript.
  • HTML5, then, isn't really a major advance in angle-bracket-based tagging; it's about enabling JavaScript to do more powerful things
  • JavaScript has long been the workhorse for implementing dynamic features in HTML. But there have always been two problems: browser incompatibilities, and the awkwardness of working directly with the DOM. The jQuery library has elegantly solved both problems, and is the basis for modern client-side browser development.
  • The use of JavaScript has also exploded in databases.
  • document databases
  • for all three databases, a "document" means a JSON document, not a Word or Excel file.
  • JSON is really just a format for serializing JavaScript objects.
  • Web servers, rich web client libraries, HTML5, databases, even JavaScript-based languages: I see JavaScript everywhere.
  •  
    OK, this article gets my vote as the most important read of the year. We all know that the Web is the future of both computing and communications/connectivity. But what is the future of the Web? Uber-coder Mike Loukides says it's JavaScript, and what a compelling case he builds. This is a must read. Key concepts are diigo highlighted :) (A minimal sketch of the Node.js-plus-JSON combination he describes follows this entry.)

    excerpt: JavaScript has "grown up." I'm sure there are many JavaScript developers who would take issue with that judgement, and argue that JavaScript has been a capable, mature, and under-appreciated language all along. They may be right, though you can write any program in any complete programming language, including awful things like BASIC. What makes a language useful is some combination of the language's expressiveness and the libraries and tools available. JavaScript clearly passed the expressiveness barrier a long time ago, even if the ceremony required for creating objects is distasteful. But recently, we've seen some extremely important game-changers: jQuery, JSON, Node.js, and HTML5. JavaScript may have been a perfectly adequate language in the past, but these changes (and a few others that I'll point out) have made JavaScript a language that is essential for every developer to know. If there's one language you need to learn in the next year, it's JavaScript.

    Insightful comment: HTML5 is a JavaScript API, introducing new elements but significantly redefining ALL elements as objects or classes. Elements can be expressed with tags. Or, you can use DOM JavaScripting to create elements.
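  •  
    To make the article's two central claims concrete - that JSON is just a format for serializing JavaScript objects, and that Node.js can serve a high volume of requests efficiently - here is a minimal sketch using only Node's built-in http module. The sample object and port are invented for illustration; this is a sketch of the technique, not code from the article.

        // sketch.js - a minimal Node.js server returning JSON
        const http = require('http');

        // JSON is just serialized JavaScript objects:
        const doc = { title: 'Learn JavaScript', tags: ['jquery', 'json', 'node'] };
        const wire = JSON.stringify(doc);   // object -> JSON text
        const back = JSON.parse(wire);      // JSON text -> object again

        // Node.js handles each request on a single event loop, which is
        // what lets it respond efficiently to many concurrent requests.
        const server = http.createServer((req, res) => {
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(wire);
        });

        server.listen(3000, () => {
          console.log('serving', back.title, 'on http://localhost:3000');
        });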
Gary Edwards

Five reasons why Microsoft can't compete (and Steve Ballmer isn't one of them) - 2 views

  • discontinued
  • 1. U.S. and European antitrust cases put lawyers and non-technologists in charge of important final product decisions.
  • The company long resisted releasing pertinent interoperability information in the United States. On the European Continent, this resistance led to huge fines. Meanwhile, Microsoft steered away from exclusive contracts and from pushing into adjacent markets.
  • ...11 more annotations...
  • Additionally, Microsoft curtailed development of the so-called middleware at the core of the U.S. case: E-mail, instant messaging, media playback and Web browsing:
  • Microsoft cofounder Bill Gates learned several important lessons from IBM. Among them: The value of controlling key technology endpoints. For IBM, it was control interfaces. For Microsoft: Computing standards and file formats
  • 2. Microsoft lost control of file formats.
  • Charles Simonyi, the father of Microsoft Office, and his team achieved two important goals by the mid 1990s: Established format standards that resolved problems sharing documents created by disparate products.
  • Ensured that Microsoft file formats would become the adopted desktop productivity standards. Format lock-in helped drive Office sales throughout the late 1990s and early 2000s -- and Windows along with it. However, the Web emerged as a potent threat, which Gates warned about in his May 1995 "Internet Tidal Wave" memo. Gates specifically identified HTML, HTTP and TCP/IP as formats outside Microsoft's control. "Browsing the Web, you find almost no Microsoft file formats," Gates wrote. He observed not seeing a single Microsoft file format "after 10 hours of browsing," but plenty of Apple QuickTime videos and Adobe PDF documents. He warned that "the Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of the graphical user interface (GUI)."
  • 3. Microsoft's senior leadership is middle-aging.
  • Google resembles Microsoft in the 1980s and 1990s:
  • Microsoft's middle-management structure is too large.
  • 5. Microsoft's corporate culture is risk averse.
  • Microsoft was nimbler during the transition from mainframe to PC dominance. IBM had built up massive corporate infrastructure, large customer base and revenue streams attached to both. With few customers, Microsoft had little to lose but much to gain; the upstart took risks IBM wouldn't for fear of losing customers or jeopardizing existing revenue streams. Microsoft's role is similar today. Two product lines, Office and Windows, account for the majority of Microsoft products, and the majority of sales are to enterprises -- the same kind of customers IBM had during the mainframe era.
  •  
    Excellent summary and historical discussion of Microsoft and why they can't seem to compete. Lots of antitrust and monopolist stuff - including file formats and interop lock-ins (endpoints). Microsoft's problems started with the World Wide Web and continue with mobile devices connected to cloud services.
Gary Edwards

Discoverer of JSON Recommends Suspension of HTML5 | Web Security Journal - 0 views

  •  
    Fascinating conversation between Douglas Crockford and Jeremy Geelan. The issue is XSS - the Cross-Site Scripting capability of HTML - and "the painful gap" in the HTML5 specification: the interface between JavaScript and the browser.

    I had to use the Evernote Clearly Chrome extension to read this page. Microsoft is running a huge JavaScript advertisement/pointer that totally blocks the page with no way of closing or escaping. Incredible. Clearly was able to knock it out though. Nicely done!

    The HTML5-XSS problem is very important, especially if you're someone like me who sees the HTML+ format (HTML5-CSS3-JSON-JavaScript-SVG/Canvas) as the undisputed Cloud Productivity Platform "compound document" model. The XSS discussion goes right to the heart of the matter of creating an HTML compound document in much the same way that an MSOffice productivity compound document worked. XSS mimics the functionality of embedded compound-document components such as OLE, DDE, ODBC and scripting. Crack open any client/server business document and it will be found to be loaded with these embedded components. (A minimal sketch of the kind of output escaping that blunts XSS follows this entry.)

    It seems to me that any one of the Cloud Productivity Platform contenders could solve the HTML-XSS problem. I'm thinking Google Apps, Zoho, SalesForce.com, RackSpace and Amazon - with gApps and Zoho clearly leading the charge. Also let me add that RSS and XMPP (Jabber), while not normally mentioned with JSON, ought to be considered. Twitter uses RSS to transport and connect data. Jabber is of course a long time favorite of mine.

    excerpt: The fundamental mistake in HTML5 was one of prioritization. It should have tackled the browser's most important problem first. Once the platform was secured, then shiny new features could be carefully added. There is much that is attractive about HTML5. But ultimately the thing that made the browser into a credible application delivery system was JavaScript, the ultimate workaround tool. There is a painful gap
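  •  
    For context on the vulnerability Crockford is pointing at: XSS happens when untrusted text reaches the page as live markup, so any script it carries executes. Here is a minimal sketch of the standard countermeasure - escaping on output - in plain browser JavaScript; the function name and sample payload are invented for illustration, not taken from the interview.

        // escapeHTML: render untrusted text inert before inserting it as markup
        function escapeHTML(untrusted) {
          return untrusted
            .replace(/&/g, '&amp;')
            .replace(/</g, '&lt;')
            .replace(/>/g, '&gt;')
            .replace(/"/g, '&quot;')
            .replace(/'/g, '&#39;');
        }

        const userInput = '<img src=x onerror="alert(document.cookie)">';

        // Dangerous: the payload executes as script.
        //   element.innerHTML = userInput;
        // Safer: the payload displays as inert text.
        //   element.innerHTML = escapeHTML(userInput);
        // Better still, bypass HTML parsing entirely:
        //   element.textContent = userInput;

        console.log(escapeHTML(userInput));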
Gary Edwards

Key Google Docs changes promise faster service | Relevant Results - CNET News - 0 views

  •  
    Jonathan Rochelle and Dave Girouard: Google's long-term vision of computing is based around the notion that the Web and the browser become the primary vehicles for applications, and Google Docs is an important part of realizing that vision. The main improvement was to create a common infrastructure across the Google Docs products, all of which came into Google from separate acquisitions, Rochelle said. This has paved the way for Google to offer users a chance to do character-by-character real-time editing of a document or spreadsheet, almost the same way Google Wave lets collaborators see each other's keystrokes in a Wave. Those changes have also allowed Google to take more control of the way documents are rendered and formatted in Google Docs, instead of passing the buck to the browser to make those decisions. This allows Google to ensure that documents will look the same on the desktop or in the cloud, an important consideration for designing marketing materials or reviewing architectural blueprints, for example.
Gary Edwards

J.C. Herz Ringling College Commencement Speech: Harvesting the Creative Life - 0 views

  •  
    Wow! Excellent advice, top to bottom. A very well thought out flow of wisdom.

    excerpt: The important work that you build your reputation on - you can't just Google it. You don't cut and paste it from Wikipedia. You roll up your sleeves, and bring all your creativity and meaningful skills to bear on the problem of building something.

    This is what the world requires - this is what the world rewards. Not just calling yourself creative, but understanding how to exercise your creative powers to some end, to bring your vision and skills together in a meaningful way. This is a powerful thing to be able to do. It gives you tremendous value in a society where attention is currency - being able to capture people's imaginations is the scarcest kind of power in a fractured culture. Creating work that transports and transcends is one of the few ways to create sustainable value in a disposable society. What you do, if you do it well, is never going to be a commodity. Vision, magic, delight. Heart-rocking spectacle. Pulse-pounding action. These things don't get outsourced to some cubicle drone in the developing world.

    You are an influential group of people, and today is an important moment, as you set forth to become the chief stewards of your gifts. Because this is what it means to be a creative professional: figuring out how to be the best steward of your gifts, so that your power to create grows and deepens meaningfully over time. So that your edges stay sharp, and your light stays bright. The life you've chosen is not one that simply requires clocking in and clocking out. You've got to bring your soul to it every day. You've got to be on your game.

    That takes discipline. And it takes awareness - of how you're spending your time, and of how what you're doing affects your capability and your capacity. You are going to have to ask yourself, at every turn: is this project making me smarter, or making me stupider? Is this job stoking my fire,
Paul Merrell

Applause For Finland: First Country To Make Broadband Access A Legal Right - 0 views

  • Kudos to the Finnish government, which has just introduced laws guaranteeing broadband access to every person living in Finland (5.5 million people, give or take). This is reportedly a first worldwide.
Gary Edwards

New Adobe Air 2.0 Released : ISEdb.COM - 0 views

  •  
    Is Adobe AIR a virtual desktop? We expect a VD to run an alien OS and that OS's specific applications. With AIR 2.0 it seems Adobe has ditched the "OS" component of a VD, and the OS-specific applications, but is quite capable of running AIR-based applications and information services that would otherwise have been designed for a specific OS environment.

    Another way of looking at this would be to say that VDs are designed to run existing OSes and OS-specific applications, while AIR is designed to run newly written OS-independent applications that have one very important advantage over legacy applications and information systems: AIR speaks the language of the Web 3.0. This is WebKit HTML5-CSS3 with an advanced but AIR-specific version of JavaScript called "ActionScript". What Adobe doesn't do is provide support for another critically important aspect of the WebKit interactive Web 3.0 model: support for Canvas/SVG! Adobe continues to push the proprietary SWF interactive vector graphics format instead. (A minimal sketch of the kind of Canvas scripting at issue follows this entry.) Note that Microsoft's Silverlight universal runtime does not support anything in the WebKit Web 3.0 model! It's all proprietary.

    excerpt: For the first time since 2007, Adobe has updated its Air platform, released recently in beta with a slew of new features. The features include support for detection of mass storage devices, advanced networking capabilities, ability to open a file with its default application, improved cross-platform printing, and a bunch of other things that you probably won't really notice in any other way other than your Adobe working significantly more efficiently and smoothly than before. The 2.0 version of Air also will be able to support HTML5 and CSS3, due to an upgrade of its WebKit. Developers will also be happy to know that they can create Air applications that can be installed through a native installer. Air's changes have seen it morph into something of an 'operating system sitting on an operating system'. According
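  •  
    For readers unfamiliar with the Canvas scripting mentioned above, here is a minimal sketch of the open-web alternative to SWF-style vector graphics: a few lines of browser JavaScript drawing into an HTML5 canvas element. The element id is invented; the page is assumed to contain <canvas id="stage" width="200" height="100"></canvas>.

        // Draw vector-style graphics with no plugin or proprietary runtime.
        const canvas = document.getElementById('stage');
        const ctx = canvas.getContext('2d');   // the standard 2D drawing API

        ctx.fillStyle = '#336699';
        ctx.fillRect(10, 10, 60, 60);          // a filled square

        ctx.beginPath();
        ctx.arc(130, 40, 30, 0, Math.PI * 2);  // a circle outline
        ctx.strokeStyle = '#993333';
        ctx.lineWidth = 3;
        ctx.stroke();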
emileybrown89

Consume Kaspersky Support +1-855-676-2448 security solution - 0 views

  •  
    Installing a security solution is important; however, you cannot assume that you are done there. Keeping your antivirus databases up to date is just as important as the initial install. Kaspersky Lab products update automatically, but certain issues can arise from inappropriate internet use or from a conflict with software or drivers installed on your computer. We recommend using our toll-free Kaspersky Support Number, +1-855-676-2448, where our experienced professionals will take care of your device and resolve the issue promptly.
Paul Merrell

Civil Rights Coalition files FCC Complaint Against Baltimore Police Department for Ille... - 0 views

  • This week the Center for Media Justice, ColorOfChange.org, and New America’s Open Technology Institute filed a complaint with the Federal Communications Commission alleging the Baltimore police are violating the federal Communications Act by using cell site simulators, also known as Stingrays, that disrupt cellphone calls and interfere with the cellular network—and are doing so in a way that has a disproportionate impact on communities of color. Stingrays operate by mimicking a cell tower and directing all cellphones in a given area to route communications through the Stingray instead of the nearby tower. They are especially pernicious surveillance tools because they collect information on every single phone in a given area—not just the suspect’s phone—this means they allow the police to conduct indiscriminate, dragnet searches. They are also able to locate people inside traditionally-protected private spaces like homes, doctors’ offices, or places of worship. Stingrays can also be configured to capture the content of communications. Because Stingrays operate on the same spectrum as cellular networks but are not actually transmitting communications the way a cell tower would, they interfere with cell phone communications within as much as a 500 meter radius of the device (Baltimore’s devices may be limited to 200 meters). This means that any important phone call placed or text message sent within that radius may not get through. As the complaint notes, “[d]epending on the nature of an emergency, it may be urgently necessary for a caller to reach, for example, a parent or child, doctor, psychiatrist, school, hospital, poison control center, or suicide prevention hotline.” But these and even 911 calls could be blocked.
  • The Baltimore Police Department could be among the most prolific users of cell site simulator technology in the country. A Baltimore detective testified last year that the BPD used Stingrays 4,300 times between 2007 and 2015. Like other law enforcement agencies, Baltimore has used its devices for major and minor crimes—everything from trying to locate a man who had kidnapped two small children to trying to find another man who took his wife’s cellphone during an argument (and later returned it). According to logs obtained by USA Today, the Baltimore PD also used its Stingrays to locate witnesses, to investigate unarmed robberies, and for mysterious “other” purposes. And like other law enforcement agencies, the Baltimore PD has regularly withheld information about Stingrays from defense attorneys, judges, and the public. Moreover, according to the FCC complaint, the Baltimore PD’s use of Stingrays disproportionately impacts African American communities. Coming on the heels of a scathing Department of Justice report finding “BPD engages in a pattern or practice of conduct that violates the Constitution or federal law,” this may not be surprising, but it still should be shocking. The DOJ’s investigation found that BPD not only regularly makes unconstitutional stops and arrests and uses excessive force within African-American communities but also retaliates against people for constitutionally protected expression, and uses enforcement strategies that produce “severe and unjustified disparities in the rates of stops, searches and arrests of African Americans.”
  • Adding Stingrays to this mix means that these same communities are subject to more surveillance that chills speech and are less able to make 911 and other emergency calls than communities where the police aren’t regularly using Stingrays. A map included in the FCC complaint shows exactly how this is impacting Baltimore’s African-American communities. It plots hundreds of addresses where USA Today discovered BPD was using Stingrays over a map of Baltimore’s black population based on 2010 Census data included in the DOJ’s recent report:
  • ...2 more annotations...
  • The Communications Act gives the FCC the authority to regulate radio, television, wire, satellite, and cable communications in all 50 states, the District of Columbia and U.S. territories. This includes being responsible for protecting cellphone networks from disruption and ensuring that emergency calls can be completed under any circumstances. And it requires the FCC to ensure that access to networks is available “to all people of the United States, without discrimination on the basis of race, color, religion, national origin, or sex.” Considering that the spectrum law enforcement is utilizing without permission is public property leased to private companies for the purpose of providing them next generation wireless communications, it goes without saying that the FCC has a duty to act.
  • But we should not assume that the Baltimore Police Department is an outlier—EFF has found that law enforcement has been secretly using stingrays for years and across the country. No community should have to speculate as to whether such a powerful surveillance technology is being used on its residents. Thus, we also ask the FCC to engage in a rule-making proceeding that addresses not only the problem of harmful interference but also the duty of every police department to use Stingrays in a constitutional way, and to publicly disclose—not hide—the facts around acquisition and use of this powerful wireless surveillance technology.  Anyone can support the complaint by tweeting at FCC Commissioners or by signing the petitions hosted by Color of Change or MAG-Net.
  •  
    An important test case on the constitutionality of stingray mobile device surveillance.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format.

    My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff.

    My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. (A minimal sketch of the XHTML-to-XML transformation step at the heart of the article follows this entry.)

    As an afterthought, I was thinking that an alternative title for this article might have been "Working with the Web as the Center of Everything".
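  •  
    The authors ran their XHTML-to-ICML conversion through xsltproc; as an illustration of the same XSLT step, here is a minimal browser JavaScript sketch using the standard XSLTProcessor API. The stylesheet and its "Para" output element are invented for illustration - real ICML uses InDesign-specific structures, and the authors' 500-line script is not reproduced here.

        // Transform a scrap of XHTML with XSLT, entirely in the browser.
        const xhtml = '<html xmlns="http://www.w3.org/1999/xhtml">' +
                      '<body><p>Chapter one begins here.</p></body></html>';

        // Illustrative stylesheet: map each XHTML <p> to an invented <Para>.
        const xslt =
          '<xsl:stylesheet version="1.0"' +
          '    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"' +
          '    xmlns:x="http://www.w3.org/1999/xhtml">' +
          '  <xsl:template match="/">' +
          '    <Document><xsl:apply-templates select="//x:p"/></Document>' +
          '  </xsl:template>' +
          '  <xsl:template match="x:p">' +
          '    <Para><xsl:value-of select="."/></Para>' +
          '  </xsl:template>' +
          '</xsl:stylesheet>';

        const parser = new DOMParser();
        const processor = new XSLTProcessor();
        processor.importStylesheet(parser.parseFromString(xslt, 'application/xml'));

        const result = processor.transformToDocument(
          parser.parseFromString(xhtml, 'application/xml'));

        // Serialize the transformed tree back to XML text.
        console.log(new XMLSerializer().serializeToString(result));
        // => <Document><Para>Chapter one begins here.</Para></Document>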
Dimple Patel

Important aspects of SEO Content Development - 0 views

  •  
    While designing a web page, you should make sure that enough effort is put into optimizing the content of the page. A competitive keyword will, without doubt, show up in around 30 to 50 million search results. Of these, the first 10 to 100 web pages are effectively competing for the number one position. If the content on your web page is not optimized for a particular keyword, there will be many other web sites whose content has been.
Paul Merrell

The Latest Rules on How Long NSA Can Keep Americans' Encrypted Data Look Too Familiar |... - 0 views

  • Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines. NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
  • Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.” This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013. 
  • While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
  • ...6 more annotations...
  • This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so. It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
  • This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception. Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
  • There are a few additional observations worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data. The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic; say that an NSA target uses example@gmail.com to communicate. NSA would then search the content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever an email that references example@gmail.com is mixed together with emails that have nothing to do with the target. This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search for and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target. (A toy sketch of this selector-matching point appears after the comment below.)
  • This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons. However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection. (The second sketch after the comment below illustrates this kind of format-based matching.)
  • This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.” The collection of Internet traffic based on patterns associated with types of communications would be bulk collection, more akin to NSA’s collection of phone records en masse than to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
  • The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
  •  
    Tucked into a budget authorization act in December without press notice. Section 309 (the Act is linked from the article) appears to be very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions from the five-year retention limitation than the encrypted information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals. Also, an important definition from the new Intelligence Authorization Act: "(a) DEFINITIONS.—In this section: (1) COVERED COMMUNICATION.—The term ''covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage."
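
    To make the selector-matching point in the annotations above concrete, here is a minimal Python sketch. It is not a model of any actual NSA system; the selector, the sample message, and the use of the cryptography package's Fernet cipher are all illustrative assumptions. It simply shows why a substring match for a target identifier succeeds against plaintext traffic but cannot succeed once the payload is encrypted:

        # Toy illustration only: why selector-based "about" matching fails
        # against encrypted traffic. Requires: pip install cryptography
        from cryptography.fernet import Fernet

        SELECTOR = b"example@gmail.com"  # hypothetical target selector

        plaintext_email = b"Please forward this to example@gmail.com today."

        # On plaintext traffic the selector is visible, so the match fires.
        print(SELECTOR in plaintext_email)  # True

        # Encrypt the same message (key held only by the endpoints): the
        # selector becomes an indecipherable string of bits on the wire.
        key = Fernet.generate_key()
        ciphertext = Fernet(key).encrypt(plaintext_email)
        print(SELECTOR in ciphertext)  # False -- nothing to pattern-match

    The search that drives "about" collection on cleartext has nothing to match against here, which is why a collector would have to fall back on grabbing encrypted traffic wholesale.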
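
    The annotation about collecting whole classes of encrypted traffic can be illustrated the same way. In this second sketch, the PGP armor header is a real, standardized marker (RFC 4880), while the traffic samples are invented. Matching on the format rather than on a person's selector flags every PGP message regardless of sender or recipient, which is what makes this bulk rather than targeted collection:

        # Toy illustration of "type-based" matching: flag every message that
        # looks like a given class of encrypted traffic, not a given person.
        PGP_MARKER = b"-----BEGIN PGP MESSAGE-----"  # standard OpenPGP armor header

        traffic = [
            b"lunch at noon?",                          # ordinary email
            b"-----BEGIN PGP MESSAGE-----\nhQEMA5...",  # PGP-encrypted email (invented sample)
            b"quarterly numbers attached",
        ]

        # Every armored message matches, whoever the parties are -- non-targets
        # are swept in by design, then retainable under the enciphered exception.
        flagged = [msg for msg in traffic if PGP_MARKER in msg]
        print(len(flagged))  # 1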
Paul Merrell

How Edward Snowden Changed Everything | The Nation - 0 views

  • Ben Wizner, who is perhaps best known as Edward Snowden’s lawyer, directs the American Civil Liberties Union’s Speech, Privacy & Technology Project. Wizner, who joined the ACLU in August 2001, one month before the 9/11 attacks, has been a force in the legal battles against torture, watch lists, and extraordinary rendition since the beginning of the global “war on terror.” On October 15, we met with Wizner in an upstate New York pub to discuss the state of privacy advocacy today. In sometimes sardonic tones, he talked about the transition from litigating on issues of torture to privacy advocacy, differences between corporate and state-sponsored surveillance, recent developments in state legislatures and the federal government, and some of the obstacles impeding civil liberties litigation. The interview has been edited and abridged for publication.
  • Many of the technologies, both military technologies and surveillance technologies, that are developed for purposes of policing the empire find their way back home and get repurposed. You saw this in Ferguson, where we had military equipment in the streets to police nonviolent civil unrest, and we’re seeing this with surveillance technologies, where things that are deployed for use in war zones are now commonly in the arsenals of local police departments. For example, a cellphone surveillance tool that we call the StingRay—which mimics a cellphone tower and communicates with all the phones around—was really developed as a military technology to help identify targets. Now, because it’s so inexpensive, and because there is a surplus of these things that are being developed, it ends up getting pushed down into local communities without local democratic consent or control.
  • SG & TP: How do you see the current state of the right to privacy? BW: I joked when I took this job that I was relieved that I was going to be working on the Fourth Amendment, because finally I’d have a chance to win. That was intended as gallows humor; the Fourth Amendment had been a dishrag for the last several decades, largely because of the war on drugs. The joke in civil liberties circles was, “What amendment?” But I was able to make this joke because I was coming to Fourth Amendment litigation from something even worse, which was trying to sue the CIA for torture, or targeted killings, or various things where the invariable outcome was some kind of non-justiciability ruling. We weren’t even reaching the merits at all. It turns out that my gallows humor joke was prescient.
  • The truth is that over the last few years, we’ve seen some of the most important Fourth Amendment decisions from the Supreme Court in perhaps half a century. Certainly, I think the Jones decision in 2012 [U.S. v. Jones], which held that GPS tracking was a Fourth Amendment search, was the most important Fourth Amendment decision since Katz in 1967 [Katz v. United States], in terms of starting a revolution in Fourth Amendment jurisprudence signifying that changes in technology were not just differences in degree but differences in kind, requiring the Court to grapple with them in a different way. Just two years later, you saw the Court holding that police can’t search your phone incident to an arrest without getting a warrant [Riley v. California]. Since 2012, at the level of Supreme Court jurisprudence, we’re seeing a recognition that technology has required a rethinking of the Fourth Amendment. At the state and local level, we’re seeing a wave of privacy legislation that’s really passing beneath the radar for people who are not paying close attention. It’s not just happening in liberal states like California; it’s happening in red states like Montana, Utah, and Wyoming, and purple states like Colorado and Maine. You see as many libertarians and conservatives pushing these new rules as you see liberals. It really has cut across at least party lines, if not ideologies. My overall point here is that with respect to constraints on government surveillance—I should be more specific: law-enforcement government surveillance—momentum has been on our side in a way that has surprised even me.
  • SG & TP: Do you think that increased privacy protections will happen on the state level before they happen on the federal level? BW: I think so. For example, look at what occurred with the death penalty and the Supreme Court’s recent Eighth Amendment jurisprudence. The question under the Eighth Amendment is, “Is the practice cruel and unusual?” The Court has looked at what it calls “evolving standards of decency” [Trop v. Dulles, 1958]. It matters to the Court, when it’s deciding whether a juvenile can be executed or if a juvenile can get life without parole, what’s going on in the states. It was important to the litigants in those cases to be able to show that even if most states allowed the bad practice, the momentum was in the other direction. The states that were legislating on this most recently were liberalizing their rules, were making it harder to execute people under 18 or to lock them up without the possibility of parole. I think you’re going to see the same thing with Fourth Amendment and privacy jurisprudence, even though the Court doesn’t have a specific doctrine like “evolving standards of decency.” The Court uses this much-maligned test, “Do individuals have a reasonable expectation of privacy?” We’ll advance the argument, I think successfully, that part of what the Court should look at in considering whether an expectation of privacy is reasonable is what’s going on in the states. If we can show that a dozen or eighteen state legislatures have enacted a constitutional protection that doesn’t exist in federal constitutional law, I think that that will influence the Supreme Court.
  • The question is will it also influence Congress. I think there the answer is also “yes.” If you’re a member of the House or the Senate from Montana, and you see that your state legislature and your Republican governor have enacted privacy legislation, you’re not going to be worried about voting in that direction. I think this is one of those places where, unlike civil rights, where you saw most of the action at the federal level and then getting forced down to the states, we’re going to see more action at the state level getting funneled up to the federal government.
  •  
    A must-read. Ben Wizner discusses the current climate in the courts in government surveillance cases and how Edward Snowden's disclosures have affected that, and much more. Wizner is not only Edward Snowden's lawyer, he is also the coordinator of all ACLU litigation on electronic surveillance matters.
Gary Edwards

Steve Ballmer: Consumers Are Our Number One Thing - Business Insider - 3 views

  •  
    One of the "Lessons of Massachusetts" is that the key lock-in point for Microsoft's monopoly is its iron-fisted control of the productivity environment, anchored by MSOffice and the Windows local workgroup client/server system. Key to office productivity is the compound document model that fuels every business process and business productivity system. It's the embedded logic and database connectivity (OLE, ODBC, MAPI and COM ActiveX controls) that juice the compound document model. Convert a compound document to another format (or PDF), and you BREAK both the document AND THE BUSINESS PROCESS!!!! It was the breaking of the business process that stopped Massachusetts from moving to the Open Document Format!!!! (A small sketch of peeking inside one of these compound files follows the comments below.) So now comes a story with consumer sales vs. enterprise sales numbers that seemingly shatters the Lessons of Massachusetts. How is that? My take is that the numbers Microsoft touts are true. Consumers are making new purchases - NOT enterprises. The simple truth is that, as Microsoft introduces new OS and Application Services geared to Mobile / Cloud Computing, these new systems BREAK legacy business systems. It's still way too costly for businesses to transition to the new models. Eventually, though, businesses will replace those legacy business productivity systems with Mobile / Cloud Computing systems. And it will be a rip-out-and-replace transition, not the gradual "value-added" transition everyone hopes Microsoft will provide. Interesting stuff. Excerpt: "If Microsoft is an enterprise company, then why is it spending so much time and money on stuff like Bing, Xbox, Windows Phone, and the Surface RT? It should be going all-in on cloud computing and services. If you were to ask Microsoft's CEO Steve Ballmer, his answer would probably be: It's a dumb question, we're both. In an interview with Jason Pontin at MIT Technology Review, he said: 'Our number-one thing is supplying products to consumers. That's kind of what we do.'"
  •  
    Note that rip-out-and-replace to get to the cloud is a very risky strategy for MSFT because the company forfeits its vendor lock-in advantage; the question for the enterprise then becomes "replace with what?" The answer in many cases will be non-Microsoft services. And traditionally, what the enterprise uses has driven what enterprise workers use at home far more than vice versa.
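
    As a concrete aside on the compound-document point in the first comment above: a legacy Word .doc is an OLE2 compound file whose embedded objects live in internal storages and streams. Here is a minimal sketch, assuming the third-party olefile package and a hypothetical file name, that lists those streams:

        # Peek inside an OLE2 compound document. Requires: pip install olefile
        import olefile

        path = "quarterly_report.doc"  # hypothetical legacy binary Word file

        if olefile.isOleFile(path):
            ole = olefile.OleFileIO(path)
            # Each entry is a storage or stream in the compound file:
            # 'WordDocument' holds the text; 'ObjectPool' holds embedded
            # OLE objects (spreadsheets, charts, ActiveX controls) if any.
            for entry in ole.listdir():
                print("/".join(entry))
            print("embedded objects:", any(e[0] == "ObjectPool" for e in ole.listdir()))
            ole.close()

    Converting such a file to ODF or PDF preserves the rendered content, but these live object streams, and the data connections behind them, are exactly what gets left behind.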
Gary Edwards

Google Is Prepping A Sneak Attack On Microsoft Office - ReadWrite - 0 views

    • Gary Edwards
       
      Pretty good quote describing the reach of "Visual Productivity". Still, the quote lacks the power of embedded data (ODBC) streams and application objects (OLE) so important to the compound document model that sits at the center of all productivity environments and business system automation efforts.
  • In a supporting comment, Zborowski pointed out that Google doesn't support the Open Document Format, suggesting that Microsoft is more open than Google.
    • Gary Edwards
       
      Now this is funny!!!
  • Productivity software is built to help people communicate. It's more than just the words in a document or presentation; it's about the tone, style and format you use to convey an overall message. People often entrust important information in these documents -- from board presentations to financial analyses to book reports. You should be able to trust that what you intend to communicate is what is being seen.