Open Web: Group items tagged "place"

Paul Merrell

The best way to read Glenn Greenwald's 'No Place to Hide' - 0 views

  • Journalist Glenn Greenwald just dropped a pile of new secret National Security Agency documents onto the Internet. But this isn’t just some haphazard WikiLeaks-style dump. These documents, leaked to Greenwald last year by former NSA contractor Edward Snowden, are key supplemental reading material for his new book, No Place to Hide, which went on sale Tuesday. Now, you could just go buy the book in hardcover and read it like you would any other nonfiction tome. Thanks to all the additional source material, however, if any work should be read on an e-reader or computer, this is it. Here are all the links and instructions for getting the most out of No Place to Hide.
  • Greenwald has released two versions of the accompanying NSA docs: a compressed version and an uncompressed version. The only difference between these two is the quality of the PDFs. The uncompressed version clocks in at over 91MB, while the compressed version is just under 13MB. For simple reading purposes, just go with the compressed version and save yourself some storage space. Greenwald also released additional “notes” for the book, which are just citations. Unless you’re doing some scholarly research, you can skip this download.
  • No Place to Hide is, of course, available in a wide variety of ebook formats—all of which are a few dollars cheaper than the hardcover version, I might add. Pick your e-poison: Amazon, Nook, Kobo, iBooks. Flipping back and forth: each page of the documents includes a corresponding page number for the book, to allow readers to easily flip between the book text and the supporting documents. If you use the Amazon Kindle version, you also have the option of reading Greenwald’s book directly on your computer using the Kindle for PC app or directly in your browser. Yes, that may be the worst way to read a book. In this case, however, it may be the easiest way to flip back and forth between the book text and the notes and supporting documents. Of course, you can do the same on your e-reader—though it can be a bit of a pain. Those of you who own a tablet are in luck, as tablets provide the best way to read both ebooks and PDF files. Simply download the book using the e-reader app of your choice, download the PDFs from Greenwald’s website, and dig in. If you own a Kindle, Nook, or other e-reader, you may have to convert the PDFs into a format that works well with your device. The Internet is full of tools and how-to guides for this. Here’s one:
  • ...1 more annotation...
  • Kindle users also have the option of using Amazon’s Whispernet service, which converts PDFs into a format that functions best on the company’s e-reader. That will cost you a small fee, however—$0.15 per megabyte, which means the compressed Greenwald docs will cost you a whopping $1.95.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges: In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML-based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform: The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers can instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
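  • As a concrete illustration (a minimal sketch, not an example taken from the article itself), the kind of clean, valid XHTML under discussion is nothing exotic:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>Chapter 1</title>
        </head>
        <body>
          <h1>Chapter 1</h1>
          <p>Because this document is well-formed XML, any XML parser,
          validator, or transformation tool can work with it directly.</p>
        </body>
      </html>

    A page that validates like this is simultaneously a Web page and an XML document, which is the whole point of the argument.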
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
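  • A brief sketch of what that extensibility looks like in practice (the class names here are hypothetical, invented for the example):

      <p class="epigraph">Tell all the truth but tell it slant.</p>
      <p class="summary">This chapter surveys XML-based workflows.</p>
      <p>An ordinary paragraph needs no annotation at all.</p>

    A stylesheet or transformation downstream can treat p.epigraph and p.summary differently from plain paragraphs, recovering much of the semantic distinction that a richer schema would express with dedicated elements.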
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
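  • The anatomy is easy to verify: rename any .epub file to .zip and unpack it. Alongside the XHTML content sits a small amount of declarative plumbing, such as the standard META-INF/container.xml file, which simply points at the book's package manifest (the OEBPS path below is a common convention, not a requirement):

      <?xml version="1.0" encoding="UTF-8"?>
      <container version="1.0"
          xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>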
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
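  • Unpacking an IDML file shows that structure directly. A typical archive looks something like this (the individual file names vary from document to document):

      mimetype
      designmap.xml
      MasterSpreads/MasterSpread_ub6.xml
      Spreads/Spread_uc2.xml
      Stories/Story_u123.xml
      Resources/Styles.xml
      Resources/Graphic.xml
      META-INF/container.xml

    The content lives in the Stories files and the visual design in the Spreads and Resources files, so each can be manipulated without disturbing the others.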
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow that yields a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
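  • In practice the edit looked something like the following reconstructed example (the class name is illustrative, not InDesign's exact output). The export produced presentation-oriented tagging such as:

      <p class="bullet-item">First point</p>
      <p class="bullet-item">Second point</p>

    which we converted to structural list markup:

      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>

    The second form is what any browser, ebook reader, or transformation tool expects a list to look like, with no special class conventions to interpret.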
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
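  • To give a flavour of the approach, here is a heavily simplified sketch of such a transformation (style and element names are illustrative; a real script handles many more structures). Each template matches an XHTML element and emits the corresponding InDesign construct:

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <!-- Map each XHTML paragraph to an InDesign paragraph range -->
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange
              AppliedParagraphStyle="ParagraphStyle/BodyText">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>

    A command such as xsltproc transform.xsl chapter.xhtml > chapter.icml (file names hypothetical) runs the whole conversion in one step.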
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
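  • Stripped to its essentials, the generated ICML has roughly this shape (a hedged sketch; the exact root element and header attributes vary by InDesign version):

      <Document DOMVersion="7.0" Self="doc">
        <RootParagraphStyleGroup Self="pstyles">
          <ParagraphStyle Self="ParagraphStyle/BodyText" Name="BodyText"/>
        </RootParagraphStyleGroup>
        <Story Self="story">
          <ParagraphStyleRange
              AppliedParagraphStyle="ParagraphStyle/BodyText">
            <CharacterStyleRange>
              <Content>Chapter text flows in here.</Content>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </Story>
      </Document>

    Because the AppliedParagraphStyle names match styles defined in the InDesign template, placed text picks up the house design automatically.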
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
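  • For the curious, the “wrapper” is mostly one manifest file. A minimal content.opf looks something like this sketch (the title, identifier, and file names are placeholders):

      <?xml version="1.0" encoding="UTF-8"?>
      <package version="2.0" unique-identifier="bookid"
          xmlns="http://www.idpf.org/2007/opf">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:0000-0000</dc:identifier>
        </metadata>
        <manifest>
          <item id="ch1" href="chapter1.xhtml"
              media-type="application/xhtml+xml"/>
          <item id="css" href="book.css" media-type="text/css"/>
          <item id="ncx" href="toc.ncx"
              media-type="application/x-dtbncx+xml"/>
        </manifest>
        <spine toc="ncx">
          <itemref idref="ch1"/>
        </spine>
      </package>

    The spine declares reading order and the manifest lists every file in the book; everything else is the XHTML you already have.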
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML, InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Paul Merrell

In Hearing on Internet Surveillance, Nobody Knows How Many Americans Impacted in Data C... - 0 views

  • The Senate Judiciary Committee held an open hearing today on the FISA Amendments Act, the law that ostensibly authorizes the digital surveillance of hundreds of millions of people both in the United States and around the world. Section 702 of the law, scheduled to expire next year, is designed to allow U.S. intelligence services to collect signals intelligence on foreign targets related to our national security interests. However—thanks to the leaks of many whistleblowers including Edward Snowden, the work of investigative journalists, and statements by public officials—we now know that the FISA Amendments Act has been used to sweep up data on hundreds of millions of people who have no connection to a terrorist investigation, including countless Americans. What do we mean by “countless”? As became increasingly clear in the hearing today, the exact number of Americans impacted by this surveillance is unknown. Senator Franken asked the panel of witnesses, “Is it possible for the government to provide an exact count of how many United States persons have been swept up in Section 702 surveillance? And if not the exact count, then what about an estimate?”
  • Elizabeth Goitein, the Brennan Center director whose articulate and thought-provoking testimony was the highlight of the hearing, noted that at this time an exact number would be difficult to provide. However, she asserted that an estimate should be possible for most if not all of the government’s surveillance programs. None of the other panel participants—who included David Medine and Rachel Brand of the Privacy and Civil Liberties Oversight Board as well as Matthew Olsen of IronNet Cybersecurity and attorney Kenneth Wainstein—offered an estimate. Today’s hearing reaffirmed that it is not only the American people who are left in the dark about how many people or accounts are impacted by the NSA’s dragnet surveillance of the Internet. Even vital oversight committees in Congress like the Senate Judiciary Committee are left to speculate about just how far-reaching this surveillance is. It's part of the reason why we urged the House Judiciary Committee to demand that the Intelligence Community provide the public with a number.
  • The lack of information makes rigorous oversight of the programs all but impossible. As Senator Franken put it in the hearing today, “When the public lacks even a rough sense of the scope of the government’s surveillance program, they have no way of knowing if the government is striking the right balance, whether we are safeguarding our national security without trampling on our citizens’ fundamental privacy rights. But the public can’t know if we succeed in striking that balance if they don’t even have the most basic information about our major surveillance programs."  Senator Patrick Leahy also questioned the panel about the “minimization procedures” associated with this type of surveillance, the privacy safeguard that is intended to ensure that irrelevant data and data on American citizens is swiftly deleted. Senator Leahy asked the panel: “Do you believe the current minimization procedures ensure that data about innocent Americans is deleted? Is that enough?”  David Medine, who recently announced his pending retirement from the Privacy and Civil Liberties Oversight Board, answered unequivocally:
  • ...2 more annotations...
  • Senator Leahy, they don’t. The minimization procedures call for the deletion of innocent Americans’ information upon discovery to determine whether it has any foreign intelligence value. But what the board’s report found is that in fact information is never deleted. It sits in the databases for 5 years, or sometimes longer. And so the minimization doesn’t really address the privacy concerns of incidentally collected communications—again, where there’s been no warrant at all in the process… In the United States, we simply can’t read people’s emails and listen to their phone calls without court approval, and the same should be true when the government shifts its attention to Americans under this program. One of the most startling exchanges from the hearing today came toward the end of the session, when Senator Dianne Feinstein—who also sits on the Intelligence Committee—seemed taken aback by Ms. Goitein’s mention of “backdoor searches.” 
  • Feinstein: Wow, wow. What do you call it? What’s a backdoor search?
    Goitein: Backdoor search is when the FBI or any other agency targets a U.S. person for a search of data that was collected under Section 702, which is supposed to be targeted against foreigners overseas.
    Feinstein: Regardless of the minimization that was properly carried out.
    Goitein: Well the data is searched in its unminimized form. So the FBI gets raw data, the NSA, the CIA get raw data. And they search that raw data using U.S. person identifiers. That’s what I’m referring to as backdoor searches.
    It’s deeply concerning that any member of Congress, much less a member of the Senate Judiciary Committee and the Senate Intelligence Committee, might not be aware of the problem surrounding backdoor searches. In April 2014, the Director of National Intelligence acknowledged the searches of this data, which Senators Ron Wyden and Mark Udall termed “the ‘back-door search’ loophole in section 702.” The public was so incensed that the House of Representatives passed an amendment to that year's defense appropriations bill effectively banning the warrantless backdoor searches. Nonetheless, in the hearing today it seemed like Senator Feinstein might not recognize or appreciate the serious implications of allowing U.S. law enforcement agencies to query the raw data collected through these Internet surveillance programs. Hopefully today’s testimony helped convince the Senator that there is more to this topic than what she’s hearing in jargon-filled classified security briefings.
  •  
    The 4th Amendment: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and *particularly describing the place to be searched, and the* persons or *things to be seized."* So much for the particularized description of the place to be searched and the things to be seized.  Fah! Who needs a Constitution, anyway ....
Gary Edwards

That's All Folks: Why the Writing Is on the Wall at Microsoft - Forbes - 1 views

  •  
    Control vs. Creativity.  As the ultimate control freak, Ballmer was guaranteed to crush the life out of Microsoft's most creative individuals and teams.  Gates was focused on Windows and MSOffice, and Ballmer on control.  That one-two punch made certain that Microsoft would not be a player in the next great wave of computing: the Cloud.   excerpt: This is yet another example of what I like to call the Wile E. Coyote syndrome. Like the unfortunate character in the old Warner Bros. cartoons, Microsoft now seems to be a company that has long since run off the cliff but, with legs spinning for all they are worth, doesn't know yet that it is ready to drop. Yet drop it most certainly will. To understand how this happens, take a look at the work of Arnold J. Toynbee, a historian who studied the rise and fall of civilizations. He argued that a civilization flourishes when it motivates insiders and attracts outsiders with its creative dynamism and culture. The civilization breaks down when its leadership loses this creative capacity and gives way to, or transforms itself into, a dominant minority. When this happens, the driver of the civilization becomes control, not attraction. And it's precisely this switch from attraction to control that is the source of the breakdown. Interestingly, Toynbee says that the consequences may not be immediately apparent. A civilization can keep up momentum because the controls it puts in place generate some short-term efficiency. But eventually it will run its course and collapse, because no amount of control can replace the loss of collective creativity. Observe this in the corporate world by looking at the example of General Motors. G.M.'s 2009 bankruptcy came at the end of a long decline dating back to the early 1970s.
Gary Edwards

Google's Chrome Browser Sprouts Programming Kit of the Future "Node.js" | Wired Enterpr... - 1 views

  •  
    Good article describing Node.js.  The Node.js Summit is taking place in San Francisco on Jan 24th - 25th.  http://goo.gl/AhZTD I'm wondering if anyone has used Node.js to create real time Cloud ready compound documents?  Replacing MSOffice OLE-ODBC-ActiveX heavy productivity documents, forms and reports with Node.js event widgets, messages and database connections?  I'm thinking along the lines of a Lotus Notes alternative with a Node.js enhanced version of EverNote on the front end, and Node.js-Hadoop productivity platform on the server side? Might have to contact Stephen O'Grady on this.  He is a featured speaker at the conference. excerpt: At first, Chito Manansala (Visa & Sabre) built his Internet transaction processing systems using the venerable Java programming language. But he has since dropped Java and switched to what is widely regarded as The Next Big Thing among Silicon Valley developers. He switched to Node. Node is short for Node.js, a new-age programming platform based on a software engine at the heart of Google's Chrome browser. But it's not a browser technology. It's meant to help build software that sits on a distant server somewhere, feeding an application to your PC or smartphone, and it's particularly suited to systems like the one Chito Manansala is building - systems that juggle scads of information streaming to and from other sources. In other words, it's suited to the modern internet. Two years ago, Node was just another open source project. But it has since grown into the development platform of the moment. At Yahoo!, Node underpins "Manhattan," a fledgling online service for building and hosting mobile applications. Microsoft is offering Node atop Windows Azure, its online service for building and hosting a much beefier breed of business application. And Sabre is just one of a host of big names using the open source platform to erect applications on their own servers. Node is based on the Javascript engine at the heart of Google's Chrome browser.
Gary Edwards

Paul Buchheit: The Cloud OS - 0 views

  •  
    First, what is a "cloud OS" and why should I want one? Actually, I don't even know if anyone calls it a "cloud OS", but I couldn't find a better generic term for something like ChromeOS. The basic idea is that apps and data all live on the Internet, which has been renamed "The Cloud" since that sounds cooler, and your laptop or whatever is basically just a window into that cloud. If your laptop is stolen or catches fire or something, it's not a big deal, because you can just buy another one and nothing has been lost (except your money). Many people characterize this approach as using a "dumb terminal", but that's wrong. Your local computer can still do all kinds of smart computation and data manipulation -- it's just no longer the single point of failure. To me, the defining characteristic of cloud based apps is "information without location". For example, in the bad old days, you would install a copy of Outlook or other email software on your PC, it would download all of your email to your computer, and then the email would live on that computer until Outlook corrupted its PST file and everything was lost. If you accidentally left your computer at home, or it was stolen, then you simply couldn't get to your email. Information behaved much like a physical object -- it was always in one place. That's an unnecessary and annoying limitation. By moving my email into "the cloud", I can escape the limitations of physical location and am able to reach it from any number of computers, phones, televisions, or whatever else connects to the Internet. For performance and coverage reasons, those devices will usually cache some of my email, but the canonical version always lives online. The Gmail client on Android phones provides a great example of this. It stores copies of recent messages so that I can access them even when there is no Internet access, and also saves any recent changes (such as new messages or changes to read state), but as soon as possible it sends those changes back to the server.
Paul Merrell

Tech firms and privacy groups press for curbs on NSA surveillance powers - The Washingt... - 0 views

  • The nation’s top technology firms and a coalition of privacy groups are urging Congress to place curbs on government surveillance in the face of a fast-approaching deadline for legislative action. A set of key Patriot Act surveillance authorities expire June 1, but the effective date is May 21 — the last day before Congress breaks for a Memorial Day recess. In a letter to be sent Wednesday to the Obama administration and senior lawmakers, the coalition vowed to oppose any legislation that, among other things, does not ban the “bulk collection” of Americans’ phone records and other data.
  • “We know that there are some in Congress who think that they can get away with reauthorizing the expiring provisions of the Patriot Act without any reforms at all,” said Kevin Bankston, policy director of New America Foundation’s Open Technology Institute, a privacy group that organized the effort. “This letter draws a line in the sand that makes clear that the privacy community and the Internet industry do not intend to let that happen without a fight.” At issue is the bulk collection of Americans’ data by intelligence agencies such as the National Security Agency. The NSA’s daily gathering of millions of records logging phone call times, lengths and other “metadata” stirred controversy when it was revealed in June 2013 by former NSA contractor Edward Snowden. The records are placed in a database that can, with a judge’s permission, be searched for links to foreign terrorists. They do not include the content of conversations.
  • That program, placed under federal surveillance court oversight in 2006, was authorized by the court in secret under Section 215 of the Patriot Act — one of the expiring provisions. The public outcry that ensued after the program was disclosed forced President Obama in January 2014 to call for an end to the NSA’s storage of the data. He also appealed to Congress to find a way to preserve the agency’s access to the data for counterterrorism information.
  • ...3 more annotations...
  • Despite growing opposition in some quarters to ending the NSA’s program, a “clean” authorization — one that would enable its continuation without any changes — is unlikely, lawmakers from both parties say. Sen. Ron Wyden (D-Ore.), a leading opponent of the NSA’s program in its current format, said he would be “surprised if there are 60 votes” in the Senate for that. In the House, where there is bipartisan support for reining in surveillance, it’s a longer shot still. “It’s a toxic vote back in your district to reauthorize the Patriot Act, if you don’t get some reforms” with it, said Rep. Thomas Massie (R-Ky.). The House last fall passed the USA Freedom Act, which would have ended the NSA program, but the Senate failed to advance its own version. The House and Senate judiciary committees are working to come up with new bipartisan legislation to be introduced soon.
  • The tech firms and privacy groups’ demands are a baseline, they say. Besides ending bulk collection, they want companies to have the right to be more transparent in reporting on national security requests and greater declassification of opinions by the Foreign Intelligence Surveillance Court.
  • Some legal experts have pointed to a little-noticed clause in the Patriot Act that would appear to allow bulk collection to continue even if the authority is not renewed. Administration officials have conceded privately that a legal case probably could be made for that, but politically it would be a tough sell. On Tuesday, a White House spokesman indicated the administration would not seek to exploit that clause. “If Section 215 sunsets, we will not continue the bulk telephony metadata program,” National Security Council spokesman Edward Price said in a statement first reported by Reuters. Price added that allowing Section 215 to expire would result in the loss of a “critical national security tool” used in investigations that do not involve the bulk collection of data. “That is why we have underscored the imperative of Congressional action in the coming weeks, and we welcome the opportunity to work with lawmakers on such legislation,” he said.
  •  
    I omitted some stuff about opposition to sunsetting the provisions. They seem to forget, as does Obama, that the proponents of the FISA Court's expansive reading of section 215 have not yet come up with a single instance where 215-derived data caught a single terrorist or prevented a single act of terrorism. Which means that if that data is of some use, it ain't in fighting terrorism, the purpose of the section.  Patriot Act § 215 is codified as 50 USCS § 1861, https://www.law.cornell.edu/uscode/text/50/1861 That section authorizes the FBI to obtain an order from the FISA Court "requiring the production of *any tangible things* (including books, records, papers, documents, and other items)."  Specific examples (a non-exclusive list) include: "the production of library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records containing information that would identify a person." The Court can order that the recipient of the order tell no one of its receipt of the order or its response to it.   In other words, this is about way more than your telephone metadata. Do you trust the NSA with your medical records?
Paul Merrell

NSA Director Finally Admits Encryption Is Needed to Protect Public's Privacy - 0 views

  • NSA Director Finally Admits Encryption Is Needed to Protect Public’s Privacy. The new stance denotes a growing awareness within the government that Americans are not comfortable with the State’s grip on their data. By Carey Wedler | AntiMedia | January 22, 2016
  • Rogers cited the recent Office of Personnel Management hack of over 20 million users as a reason to increase encryption rather than scale it back. “What you saw at OPM, you’re going to see a whole lot more of,” he said, referring to the massive hack that compromised the personal data of 20 million people who obtained background checks. Rogers’ comments, while forward-thinking, signify an about-face in his stance on encryption. In February 2015, he said he “shares [FBI] Director [James] Comey’s concern” about cell phone companies’ decision to add encryption features to their products. Comey has been one of the loudest critics of encryption. However, Rogers’ comments on Thursday now directly conflict with Comey’s stated position. The FBI director has publicly chastised encryption, as well as the companies that provide it. In 2014, he claimed Apple’s then-new encryption feature could lead the world to “a very dark place.” At a Department of Justice hearing in November, Comey testified that “Increasingly, the shadow that is ‘going dark’ is falling across more and more of our work.” Though he claimed, “We support encryption,” he insisted “we have a problem that encryption is crashing into public safety and we have to figure out, as people who care about both, to resolve it. So, I think the conversation’s in a healthier place.”
  • At the same hearing, Comey and Attorney General Loretta Lynch declined to comment on whether they had proof the Paris attackers used encryption. Even so, Comey recently lobbied for tech companies to do away with end-to-end encryption. However, his crusade has fallen on unsympathetic ears, both from the private companies he seeks to control — and from the NSA. Prior to Rogers’ statements in support of encryption Thursday, former NSA chief Michael Hayden said, “I disagree with Jim Comey. I actually think end-to-end encryption is good for America.” Still another former NSA chair has criticized calls for backdoor access to information. In October, Mike McConnell told a panel at an encryption summit that the United States is “better served by stronger encryption, rather than baking in weaker encryption.” Former Department of Homeland Security chief, Michael Chertoff, has also spoken out against government being able to bypass encryption.
  • ...2 more annotations...
  • Regardless of these individual defenses of encryption, the Intercept explained why these statements may be irrelevant: “Left unsaid is the fact that the FBI and NSA have the ability to circumvent encryption and get to the content too — by hacking. Hacking allows law enforcement to plant malicious code on someone’s computer in order to gain access to the photos, messages, and text before they were ever encrypted in the first place, and after they’ve been decrypted. The NSA has an entire team of advanced hackers, possibly as many as 600, camped out at Fort Meade.”
  • Rogers’ statements, of course, are not a full-fledged endorsement of privacy, nor can the NSA be expected to make it a priority. Even so, his new stance denotes a growing awareness within the government that Americans are not comfortable with the State’s grip on their data. “So spending time arguing about ‘hey, encryption is bad and we ought to do away with it’ … that’s a waste of time to me,” Rogers said Thursday. “So what we’ve got to ask ourselves is, with that foundation, what’s the best way for us to deal with it? And how do we meet those very legitimate concerns from multiple perspectives?”
Gary Edwards

Say hello to the new Mega: We go hands on. - The Next Web - 1 views

  •  
    50GB free.  $9.95/mo for 500GB + 1TB bandwidth.  WOW!! Full encryption with private key exchange using secure messaging.  Awesome Cloud Drive service. Excerpt: "Right now, Mega is still a barebones file sharing service, but the company has massive plans for the future. Snooping around on their site reveals their plans for going much further than just file sharing. A post on their blog - dated January 18th - details the features that were cut from launch but will be added soon. There are also apps for mobile platforms already underway, with the company planning to support all major platforms in the near future and allow uploading from them. The blog details a secure email component (probably the part we can't get working) that will be added, secure instant messaging and the ability for non-Mega users to send large files to those with a Mega account (for example, for printing files at a print shop). It doesn't stop there, though, the company also says they're planning on-site word processing, calendar and spreadsheet applications (watch out, Google Docs!) as well as a Dropbox-esque client for Windows, Linux and Mac. They also say that there are plans in the works for allowing users to run Mega as an appliance on their own machine, though there aren't many details on that in the post. In another place on the site, Mega promises that in the near future they will be offering secure video calling and traditional calling as well. Talk about trying to take over the world."
Gary Edwards

Readability / Clearly - Article Publishing Guidelines - Readability - 1 views

  •  
    Want to know how Evernote Clearly and Amazon Kindle Web Reader work?  This is it.  Both are made by Readability and in this guideline for Web publishers, they explain how they rip and parse a Web page.  The secret is solid HTML5!!!  "The following is a proposed standard for bringing more semanticity to articles on the Web. In our efforts to provide quality content without the superfluous leavings, we've seen that the Web is a pretty messy place. We hope that by providing some simple guidelines we can help publishers make their content a little more presentable with Readability while also making the Web a bit more semantic. By and large, you'll find that our guidelines just follow other specifications. We lean heavily on the work of the hNews microformat as well as the new elements provided within HTML5. If anything is unclear, please refer to the hNews microformat specification as well as this handy guide to semantic elements in html5, from Mark Pilgrim's Dive into HTML5. "
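    A sketch of what those guidelines amount to in markup (the class names come from the hNews/hAtom microformats; the example itself is illustrative, not copied from Readability's document):

      <article class="hentry">
        <h1 class="entry-title">How Parsers See Your Page</h1>
        <p class="byline author vcard"><span class="fn">Jane Author</span></p>
        <time class="published" datetime="2012-01-15">January 15, 2012</time>
        <div class="entry-content">
          <p>Body text that a tool like Clearly can extract cleanly.</p>
        </div>
      </article>

    Parsers score and extract the entry-content block, so the cleaner and more conventional the surrounding markup, the better the rip-and-parse result.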
Paul Merrell

Civil Rights Coalition files FCC Complaint Against Baltimore Police Department for Ille... - 0 views

  • This week the Center for Media Justice, ColorOfChange.org, and New America’s Open Technology Institute filed a complaint with the Federal Communications Commission alleging the Baltimore police are violating the federal Communications Act by using cell site simulators, also known as Stingrays, that disrupt cellphone calls and interfere with the cellular network—and are doing so in a way that has a disproportionate impact on communities of color. Stingrays operate by mimicking a cell tower and directing all cellphones in a given area to route communications through the Stingray instead of the nearby tower. They are especially pernicious surveillance tools because they collect information on every single phone in a given area—not just the suspect’s phone—which means they allow the police to conduct indiscriminate, dragnet searches. They are also able to locate people inside traditionally-protected private spaces like homes, doctors’ offices, or places of worship. Stingrays can also be configured to capture the content of communications. Because Stingrays operate on the same spectrum as cellular networks but are not actually transmitting communications the way a cell tower would, they interfere with cell phone communications within as much as a 500 meter radius of the device (Baltimore’s devices may be limited to 200 meters). This means that any important phone call placed or text message sent within that radius may not get through. As the complaint notes, “[d]epending on the nature of an emergency, it may be urgently necessary for a caller to reach, for example, a parent or child, doctor, psychiatrist, school, hospital, poison control center, or suicide prevention hotline.” But these and even 911 calls could be blocked.
  • The Baltimore Police Department could be among the most prolific users of cell site simulator technology in the country. A Baltimore detective testified last year that the BPD used Stingrays 4,300 times between 2007 and 2015. Like other law enforcement agencies, Baltimore has used its devices for major and minor crimes—everything from trying to locate a man who had kidnapped two small children to trying to find another man who took his wife’s cellphone during an argument (and later returned it). According to logs obtained by USA Today, the Baltimore PD also used its Stingrays to locate witnesses, to investigate unarmed robberies, and for mysterious “other” purposes. And like other law enforcement agencies, the Baltimore PD has regularly withheld information about Stingrays from defense attorneys, judges, and the public. Moreover, according to the FCC complaint, the Baltimore PD’s use of Stingrays disproportionately impacts African American communities. Coming on the heels of a scathing Department of Justice report finding “BPD engages in a pattern or practice of conduct that violates the Constitution or federal law,” this may not be surprising, but it still should be shocking. The DOJ’s investigation found that BPD not only regularly makes unconstitutional stops and arrests and uses excessive force within African-American communities but also retaliates against people for constitutionally protected expression, and uses enforcement strategies that produce “severe and unjustified disparities in the rates of stops, searches and arrests of African Americans.”
  • Adding Stingrays to this mix means that these same communities are subject to more surveillance that chills speech and are less able to make 911 and other emergency calls than communities where the police aren’t regularly using Stingrays. A map included in the FCC complaint shows exactly how this is impacting Baltimore’s African-American communities. It plots hundreds of addresses where USA Today discovered BPD was using Stingrays over a map of Baltimore’s black population based on 2010 Census data included in the DOJ’s recent report:
  • ...2 more annotations...
  • The Communications Act gives the FCC the authority to regulate radio, television, wire, satellite, and cable communications in all 50 states, the District of Columbia and U.S. territories. This includes being responsible for protecting cellphone networks from disruption and ensuring that emergency calls can be completed under any circumstances. And it requires the FCC to ensure that access to networks is available “to all people of the United States, without discrimination on the basis of race, color, religion, national origin, or sex.” Considering that the spectrum law enforcement is utilizing without permission is public property leased to private companies for the purpose of providing them next generation wireless communications, it goes without saying that the FCC has a duty to act.
  • But we should not assume that the Baltimore Police Department is an outlier—EFF has found that law enforcement has been secretly using Stingrays for years and across the country. No community should have to speculate as to whether such a powerful surveillance technology is being used on its residents. Thus, we also ask the FCC to engage in a rulemaking proceeding that addresses not only the problem of harmful interference but also the duty of every police department to use Stingrays in a constitutional way, and to publicly disclose—not hide—the facts around the acquisition and use of this powerful wireless surveillance technology. Anyone can support the complaint by tweeting at FCC Commissioners or by signing the petitions hosted by Color of Change or MAG-Net.
  •  
    An important test case on the constitutionality of Stingray mobile device surveillance.
Paul Merrell

Google, ACLU call to delay government hacking rule | TheHill - 0 views

  • A coalition of 26 organizations, including the American Civil Liberties Union (ACLU) and Google, signed a letter Monday asking lawmakers to delay a measure that would expand the government’s hacking authority. The letter asks Senate Majority Leader Mitch McConnell (R-Ky.), Minority Leader Harry Reid (D-Nev.), House Speaker Paul Ryan (R-Wis.), and House Minority Leader Nancy Pelosi (D-Calif.) to further review proposed changes to Rule 41 and delay their implementation until July 1, 2017. The Department of Justice’s alterations to the rule would allow law enforcement to use a single warrant to hack multiple devices beyond the jurisdiction in which the warrant was issued. The FBI used such a tactic to apprehend users of Playpen, a child pornography site on the dark web: it took control of the site for two weeks and, after securing two warrants, installed malware on Playpen users’ computers to acquire their identities. But the signatories of the letter—which include advocacy groups, companies, and trade associations—are raising questions about the effects of the change.
  •  
    ".. no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." Fourth Amendment. The changes to Rule 41 ignore the particularity requirement by allowing the government to search computers that are not particularly identified in multiple locations not particularly identifed, in other words, a general warrant that is precisely the reason the particularity requirement was adopted to outlaw.
Gary Edwards

The Man Who Makes the Future: Wired Icon Marc Andreessen | Epicenter | Wired.com - 1 views

  •  
    Must read interview. Marc Andreessen explains his five big ideas, taking us from the beginning of the Web, into the Cloud and beyond. Great stuff! ... (1) 1992 - Everyone Will Have the Web ... (2) 1995 - The Browser Will Be the Operating System ... (3) 1999 - Web Business Will Live in the Cloud ... (4) 2004 - Everything Will Be Social ... (5) 2009 - Software Will Eat the World. excerpt: Technology is like water; it wants to find its level. So if you hook up your computer to a billion other computers, it just makes sense that a tremendous share of the resources you want to use - not only text or media but processing power too - will be located remotely. People tend to think of the web as a way to get information or perhaps as a place to carry out ecommerce. But really, the web is about accessing applications. Think of each website as an application, and every single click, every single interaction with that site, is an opportunity to be on the very latest version of that application. Once you start thinking in terms of networks, it just doesn't make much sense to prefer local apps, with downloadable, installable code that needs to be constantly updated.

    "We could have built a social element into Mosaic. But back then the Internet was all about anonymity."
    Anderson: Assuming you have enough bandwidth.

    Andreessen: That's the very big if in this equation. If you have infinite network bandwidth, if you have an infinitely fast network, then this is what the technology wants. But we're not yet in a world of infinite speed, so that's why we have mobile apps and PC and Mac software on laptops and phones. That's why there are still Xbox games on discs. That's why everything isn't in the cloud. But eventually the technology wants it all to be up there.

    Anderson: Back in 1995, Netscape began pursuing this vision by enabling the browser to do more.

    Andreessen: We knew that you would need some pro
Gary Edwards

Kindle Format 8 Overview - 0 views

  •  
    Amazon releases a new version of the KF8 format, with greatly improved HTML5/CSS3 capabilities. Details of the KF8 spec can be found here: http://goo.gl/XY39v A couple of things I'm wondering about here. One is, the KindleGen conversion tool can convert HTML, XHTML and EPUB to KF8. Has anyone tried to push an OpenOffice XHTML compound document through this latest KF8 version of KindleGen? I'm thinking that perhaps the OOo HTML problem could be solved in this way (a minimal conversion sketch follows below). There is no doubt in my mind that HTML5 will continue to grow, and eventually replace the desktop XML "compound document" formats. The great transition from desktop client/server business productivity environments, where legacy compound documents rule the roost and fuel the engines of all business systems, to a Cloud Productivity Platform, will require an HTML5 compound document format model. Also needed will be HTML5-capable applications participating in the production of Cloud-ready compound documents. Is KF8 a reasonable starting place? excerpt: Kindle Format 8 is Amazon's next generation file format offering a wide range of new features and enhancements - including HTML5 and CSS3 support that publishers can use to create all types of books. KF8 adds over 150 new formatting capabilities, including drop caps, numbered lists, fixed layouts, nested tables, callouts, sidebars and Scalable Vector Graphics - opening up more opportunities to create Kindle books that readers will love. Kindle Fire is the first Kindle device to support KF8 - in the coming months KF8 will be rolled out to our latest generation Kindle e-ink devices as well as our free Kindle reading apps.
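    For anyone who wants to try exactly that experiment, a minimal driver script is sketched below. It is an assumption-laden sketch, not a recipe: the file names are made up, KindleGen must already be installed and on the PATH, and the flags shown (-c2 for maximum compression, -verbose, and -o for the output name) follow KindleGen's published command-line usage, so they are worth verifying against the current release.

        # Hypothetical sketch: convert an XHTML compound document (e.g., one
        # exported from OpenOffice) to a KF8 .mobi by shelling out to KindleGen.
        import subprocess
        import sys

        SOURCE = "compound-document.xhtml"  # made-up input file name
        OUTPUT = "compound-document.mobi"   # KF8 output, written next to the source

        result = subprocess.run(
            ["kindlegen", SOURCE, "-c2", "-verbose", "-o", OUTPUT],
            capture_output=True,
            text=True,
        )

        # KindleGen conventionally exits 0 on success and 1 on success-with-warnings;
        # anything higher is treated here as a failed build.
        if result.returncode > 1:
            print(result.stdout)
            sys.exit("kindlegen failed with exit code %d" % result.returncode)

    If the OpenOffice XHTML export carries its styling as plain CSS, a pass like this would answer the question quickly: either KindleGen accepts the document and the OOo HTML problem has a workable route into KF8, or its validation errors show exactly what the export is missing.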
Gary Edwards

You Are (Probably) Wrong About You - 0 views

  •  
    When you fail to reach a goal - say, for instance, you give an important presentation and it doesn't go well - you become the detective (once again, largely unconsciously). You gather up the usual suspects to see who is responsible for your failure: lack of innate ability, lack of effort, poor preparation, using the wrong strategy, bad luck, etc. Of all of these possible culprits, it's lack of innate ability we most frequently hold responsible, like the much-maligned butler in an Agatha Christie novel. In Western countries - and nowhere more so than in the U.S. - innate ability is the go-to explanation for all of our successes and our failures. The problem is that the evidence - the kind gathered by scientists over the last thirty years of study of motivation and achievement - suggests that innate ability is rarely to blame for either succeeding or falling short. (If you've blamed your poor performances in the past on a lack of ability, don't feel bad. We've all done it. The butler seems guilty. Just please don't do it anymore.) If we are going to ever improve performance, we need to place blame where it belongs. We need solid evidence about where we went wrong. Unfortunately, that's the kind of evidence that usually doesn't make it to our consciousness on its own, making self-diagnosis practically impossible. We need help getting the right answers.
Gary Edwards

YC-Backed Grid Reinvents The Spreadsheet For The Tablet Age | TechCrunch - 0 views

  •  
    "Y Combinator-backed startup, Grid, is based around the idea that a tablet should be a great place for spreadsheets. Indeed, as Leong told me earlier this week, his idea is to reinvent the spreadsheet around touch, all the tools and sensors available on mobile devices like the iPhone and iPad, and the way normal people (as opposed to Excel power users) actually use them. "
Gary Edwards

Engagio Gives the Web a 'Context' Button - 0 views

  •  
    After reading this article I gave engag.io a try. It's very simple to sign up different networks like Facebook, Disqus, Twitter, G+, Tumblr, LinkedIn and FourSquare. But there doesn't seem to be a way to add my bookmarking network, Diigo? Still, Engagio looks like a very useful service. No idea how they plan on making money :) excerpt: The killer app for the social Web is the one that will filter the signal from the noise. In the Facebook age, even casual Web users hold tons of conversations at once. Engagio, the conversation discovery company, pulls them all into one place. It also leads you into new ones. And with a new dashboard view released today, it lets you click one button to figure out what's actually going on in all these conversations. Engagio's dashboard breaks out articles, sites and other links from all your social networks into separate panels, and lets you reply, share and like straight from there. But the best part of this section is the "context" button.
Gary Edwards

HTML5, Cloud and Mobile Create 'Perfect Storm' for Major App Dev Shift - Application De... - 0 views

  •  
    Good discussion, but it really deserves a more in-depth thrashing. The basic concept is that a perfect storm of mobility, cloud computing and HTML5/JavaScript has set the stage for a major, massive shift in application development: the earlier shift from C++ to Java is now being followed by a greater shift from Java and C++ to JavaScript-JSON-HTML5. Interesting, but I continue to insist that the greater "Perfect Storm" triggered in 2008 is causing a platform shift from client/server computing to full-on, must-have "cloud computing". There are three major "waves", or platform shifts, in the history of computing at work here. The first wave was "mainframe computing", otherwise known as server/terminal. The second wave was that of "client/server" computing, where the Windows desktop eventually came to totally dominate and control the "client" side of the client/server equation. The third wave began with the Internet and the dominance of the WWW protocols, interfaces, methods and formats. The Web provides the foundation for the third great wave of cloud computing. The Perfect Storm of 2008 lit the fuse of this third wave. Key to the 2008 Perfect Storm is the worldwide financial collapse that put enormous pressure on businesses to cut costs and improve productivity; to do more with less, or die. The survival maxim quickly became do more with less people - which is the most effective form of "productivity". The nature of the collapse itself, and the kind of centralized, all-powerful bailout-fascist governments that rose during the financial collapse, guaranteed that labor costs would rise dramatically while also being "uncertain". Think government-controlled healthcare. The other aspects of the 2008 Perfect Storm are mobility, HTML5, cloud-computing platform availability, and the ISO standardization of "tagged" PDF. The mobility bomb kicked off in late 2007, with the introduction of the Apple iPhone. No further explanation needed :) Th
Paul Merrell

Surveillance scandal rips through hacker community | Security & Privacy - CNET News - 0 views

  • One security start-up that had an encounter with the FBI was Wickr, a privacy-forward text messaging app for the iPhone with an Android version in private beta. Wickr's co-founder Nico Sell told CNET at Defcon, "Wickr has been approached by the FBI and asked for a backdoor. We said, 'No.'" The mistrust runs deep. "Even if [the NSA] stood up tomorrow and said that [they] have eliminated these programs," said Marlinspike, "How could we believe them? How can we believe that anything they say is true?" Where does security innovation go next? The immediate future of information security innovation most likely lies in software that provides an existing service but with heightened privacy protections, such as webmail that doesn't mine you for personal data.
  • Wickr's Sell thinks that her company has hit upon a privacy innovation that a few others are also doing, but many will soon follow: the company itself doesn't store user data. "[The FBI] would have to force us to build a new app. With the current app there's no way," she said, that they could incorporate backdoor access to Wickr users' texts or metadata. "Even if you trust the NSA 100 percent that they're going to use [your data] correctly," Sell said, "Do you trust that they're going to be able to keep it safe from hackers? What if somebody gets that database and posts it online?" To that end, she said, people will start seeing privacy innovation for services that don't currently provide it. Calling it "social networks 2.0," she said that social network competitors will arise that do a better job of protecting their customers' privacy, and predicted that some of those that succeed will do so because of their emphasis on privacy. Abine's recent MaskMe browser add-on and mobile app for creating disposable e-mail addresses, phone numbers, and credit cards is another example of a service that doesn't have access to its own users' data.
  • Stamos predicted changes in services that companies with cloud storage offer, including offering customers the ability to store their data outside of the U.S. "If they want to stay competitive, they're going to have to," he said. But, he cautioned, "It's impossible to do a cloud-based ad-supported service." Soghoian added, "The only way to keep a service running is to pay them money." This, he said, is going to give rise to a new wave of ad-free, privacy-protective subscription services.
  • The issue with balancing privacy and surveillance is that the wireless carriers are not interested in privacy, he said. "They've been providing wiretapping for 100 years. Apple may in the next year protect voice calls," he said, and said that the best hope for ending widespread government surveillance will be the makers of mobile operating systems like Apple and Google. Not all upcoming security innovation will be focused on that kind of privacy protection. Security researcher Brandon Wiley showed off at Defcon a protocol he calls Dust that can obfuscate different kinds of network traffic, with the end goal of preventing censorship. "I only make products about letting you say what you want to say anywhere in the world," such as content critical of governments, he said. Encryption can hide the specifics of the traffic, but some governments have figured out that they can simply block all encrypted traffic, he said. The Dust protocol would change that, he said, making it hard to tell the difference between encrypted and unencrypted traffic. It's hard to build encryption into pre-existing products, Wiley said. "I think people are going to make easy-to-use, encrypted apps, and that's going to be the future."
  • Companies could face severe consequences from their security experts, said Stamos, if the in-house experts find out that they've been lied to about providing government access to customer data. You could see "lots of resignations and maybe publicly," he said. "It wouldn't hurt their reputations to go out in a blaze of glory." Perhaps not surprisingly, Marlinspike sounded a hopeful call for non-destructive activism on Defcon's 21st anniversary. "As hackers, we don't have a lot of influence on policy. I hope that's something that we can focus our energy on," he said.
  •  
    NSA as the cause of the next major disruption in the social networking service industry? Grief ahead for Google? Note the point made that "It's impossible to do a cloud-based ad-supported service" where the encryption/decryption takes place on the client side: if the provider never holds readable data, there is nothing left to mine for ad targeting (a minimal sketch of the idea follows below).
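    A toy illustration of why client-side encryption starves the ad-supported model, sketched in Python with the third-party cryptography package (an assumed choice; any authenticated cipher would serve). The key is generated and kept on the client, so the service stores only opaque ciphertext:

        from cryptography.fernet import Fernet

        # Generated and held locally; the cloud provider never sees this key.
        key = Fernet.generate_key()
        cipher = Fernet(key)

        message = b"Draft: meet about the merger on Friday."
        ciphertext = cipher.encrypt(message)

        # This is all the provider stores: opaque bytes, with nothing to mine
        # for ad targeting and nothing readable to hand over on demand.
        print(ciphertext[:40])

        # Only a client holding the key can recover the plaintext.
        assert cipher.decrypt(ciphertext) == message

    The trade-off Stamos and Soghoian describe falls straight out of this: the provider can still store, sync, and bill for the bytes, but it cannot read them, so subscription revenue has to replace advertising.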