Open Web: Group items tagged Technology

Paul Merrell

Civil Rights Coalition files FCC Complaint Against Baltimore Police Department for Ille... - 0 views

  • This week the Center for Media Justice, ColorOfChange.org, and New America’s Open Technology Institute filed a complaint with the Federal Communications Commission alleging the Baltimore police are violating the federal Communications Act by using cell site simulators, also known as Stingrays, that disrupt cellphone calls and interfere with the cellular network—and are doing so in a way that has a disproportionate impact on communities of color. Stingrays operate by mimicking a cell tower and directing all cellphones in a given area to route communications through the Stingray instead of the nearby tower. They are especially pernicious surveillance tools because they collect information on every single phone in a given area—not just the suspect’s phone—this means they allow the police to conduct indiscriminate, dragnet searches. They are also able to locate people inside traditionally-protected private spaces like homes, doctors’ offices, or places of worship. Stingrays can also be configured to capture the content of communications. Because Stingrays operate on the same spectrum as cellular networks but are not actually transmitting communications the way a cell tower would, they interfere with cell phone communications within as much as a 500 meter radius of the device (Baltimore’s devices may be limited to 200 meters). This means that any important phone call placed or text message sent within that radius may not get through. As the complaint notes, “[d]epending on the nature of an emergency, it may be urgently necessary for a caller to reach, for example, a parent or child, doctor, psychiatrist, school, hospital, poison control center, or suicide prevention hotline.” But these and even 911 calls could be blocked.
  • The Baltimore Police Department could be among the most prolific users of cell site simulator technology in the country. A Baltimore detective testified last year that the BPD used Stingrays 4,300 times between 2007 and 2015. Like other law enforcement agencies, Baltimore has used its devices for major and minor crimes—everything from trying to locate a man who had kidnapped two small children to trying to find another man who took his wife’s cellphone during an argument (and later returned it). According to logs obtained by USA Today, the Baltimore PD also used its Stingrays to locate witnesses, to investigate unarmed robberies, and for mysterious “other” purposes. And like other law enforcement agencies, the Baltimore PD has regularly withheld information about Stingrays from defense attorneys, judges, and the public. Moreover, according to the FCC complaint, the Baltimore PD’s use of Stingrays disproportionately impacts African American communities. Coming on the heels of a scathing Department of Justice report finding “BPD engages in a pattern or practice of conduct that violates the Constitution or federal law,” this may not be surprising, but it still should be shocking. The DOJ’s investigation found that BPD not only regularly makes unconstitutional stops and arrests and uses excessive force within African-American communities but also retaliates against people for constitutionally protected expression, and uses enforcement strategies that produce “severe and unjustified disparities in the rates of stops, searches and arrests of African Americans.”
  • Adding Stingrays to this mix means that these same communities are subject to more surveillance that chills speech and are less able to make 911 and other emergency calls than communities where the police aren’t regularly using Stingrays. A map included in the FCC complaint shows exactly how this is impacting Baltimore’s African-American communities. It plots hundreds of addresses where USA Today discovered BPD was using Stingrays over a map of Baltimore’s black population based on 2010 Census data included in the DOJ’s recent report.
  • ...2 more annotations...
  • The Communications Act gives the FCC the authority to regulate radio, television, wire, satellite, and cable communications in all 50 states, the District of Columbia and U.S. territories. This includes being responsible for protecting cellphone networks from disruption and ensuring that emergency calls can be completed under any circumstances. And it requires the FCC to ensure that access to networks is available “to all people of the United States, without discrimination on the basis of race, color, religion, national origin, or sex.” Considering that the spectrum law enforcement is utilizing without permission is public property leased to private companies for the purpose of providing next-generation wireless communications, it goes without saying that the FCC has a duty to act.
  • But we should not assume that the Baltimore Police Department is an outlier—EFF has found that law enforcement has been secretly using stingrays for years and across the country. No community should have to speculate as to whether such a powerful surveillance technology is being used on its residents. Thus, we also ask the FCC to engage in a rule-making proceeding that addresses not only the problem of harmful interference but also the duty of every police department to use Stingrays in a constitutional way, and to publicly disclose—not hide—the facts around acquisition and use of this powerful wireless surveillance technology.  Anyone can support the complaint by tweeting at FCC Commissioners or by signing the petitions hosted by Color of Change or MAG-Net.
  •  
    An important test case on the constitutionality of stingray mobile device surveillance.
Gary Edwards

9 Open Source Big Data Technologies Set to Change the Web - 0 views

  •  
    Good rundown of new big data technologies for Cloud Computing. The technologies are arranged in context, beginning with Apache Hadoop and including HBase and MongoDB. Nice reference article.
Paul Merrell

Federal smartphone kill-switch legislation proposed - Network World - 0 views

  • Pressure on the cellphone industry to introduce technology that could disable stolen smartphones has intensified with the introduction of proposed federal legislation that would mandate such a system.
  • Senate bill 2032, the "Smartphone Theft Prevention Act," was introduced in the U.S. Senate Wednesday by Amy Klobuchar, a Minnesota Democrat. The bill would mandate technology that allows consumers to remotely wipe personal data from their smartphones and render them inoperable. But how that will be accomplished is currently unclear. The full text of the bill was not immediately available, and the offices of Klobuchar and the bill's co-sponsors were all shut down Thursday due to snow in Washington, D.C.
  • ...2 more annotations...
  • The co-sponsors are Democrats Barbara Mikulski of Maryland, Richard Blumenthal of Connecticut and Mazie Hirono of Hawaii. The proposal follows the introduction last Friday of a bill in the California state senate that would mandate a "kill switch" starting in January 2015. The California bill has the potential to usher in kill-switch technology nationwide because carriers might not bother with custom phones just for California, but federal legislation would give it the force of law across the U.S. Theft of smartphones is an increasing problem in U.S. cities, and the crimes often involve physical violence or intimidation with guns or knives. In San Francisco, two-thirds of street theft involves a smartphone or tablet, and the number is even higher in nearby Oakland. Smartphone theft also accounts for a majority of street robberies in New York and is rising in Los Angeles. In some cases, victims have been killed for their phones. In response to calls last year by law-enforcement officials to do more to combat the crimes, most cellphone carriers have aligned themselves behind the CTIA, the industry's powerful lobbying group. The CTIA is opposing any legislation that would introduce such technology. An outlier is Verizon, which says that while it thinks legislation is unnecessary, it is supporting the group behind the California bill.
  • Some phone makers have been a little more proactive. Apple in particular has been praised for the introduction of its activation lock feature in iOS 7. The function would satisfy the requirements of the proposed California law with one exception: phones would have to come with the function enabled by default, so that consumers must make a conscious choice to switch it off. Currently, it is disabled by default. Samsung has also added features to some of its phones that support the LoJack software, but the service requires an ongoing subscription.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation is programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
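  • For illustration, a hypothetical sketch of that class-based approach (the class names "epigraph" and "summary" are made up here, not a published microformat): ordinary XHTML elements carry the extra semantics in their class attributes, so any browser or CMS can still process the file unchanged.

      <div class="chapter">
        <p class="epigraph">Everything should be made as simple as possible,
          but not simpler.</p>
        <p>Ordinary body text continues here.</p>
        <p class="summary">The class attribute gives XHTML cheap,
          tool-friendly semantic extensibility.</p>
      </div>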
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
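  • As a sketch of that anatomy (the directory and file names under OEBPS/ follow common convention; only mimetype and META-INF/container.xml are fixed by the spec), unzipping a typical ePub reveals something like:

      mimetype                   (the literal string "application/epub+zip")
      META-INF/container.xml     (points to the package file below)
      OEBPS/content.opf          (metadata, manifest, and reading-order "spine")
      OEBPS/toc.ncx              (table of contents for ebook readers)
      OEBPS/chapter1.xhtml ...   (the book's content, as XHTML)
      OEBPS/styles.css           (presentation)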
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
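  • Unzipped, a typical IDML package looks something like the following (a sketch; exact contents vary with the document):

      designmap.xml              (the root map tying the package together)
      MasterSpreads/             (master page definitions)
      Spreads/                   (layout geometry, one file per spread)
      Stories/                   (the text content, one file per story)
      Resources/Styles.xml       (paragraph and character style definitions)
      Resources/Graphic.xml      (colours, swatches, and related resources)
      META-INF/, XML/, mimetype  (packaging and tagging support files)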
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
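  • A before-and-after sketch of that cleanup (the class name "bullet-list" is illustrative; InDesign's actual export classes vary):

      <!-- as exported: list items disguised as paragraphs -->
      <p class="bullet-list">First point</p>
      <p class="bullet-list">Second point</p>

      <!-- after search-and-replace: a proper XHTML list -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>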
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
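  • To give a flavour of the transformation, here is a heavily trimmed, hypothetical template of the kind such a script contains (real ICML requires more surrounding structure than shown, and the style name "ParagraphStyle/Body" is a placeholder for whatever the InDesign template defines):

      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <!-- suppress the XHTML head; only body content matters here -->
        <xsl:template match="xhtml:head"/>
        <!-- map each XHTML paragraph onto an ICML paragraph range -->
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>

    Run with xsltproc, the whole conversion is one command: xsltproc xhtml-to-icml.xsl chapter1.xhtml > chapter1.icml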
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the NoteCase Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. As an afterthought, I was thinking that an alternative title for this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

Is the Apps Marketplace just playing catchup to Microsoft? | Googling Google | ZDNet.com - 0 views

shared by Gary Edwards on 12 Mar 10
  • Take the basic communication, calendaring, and documentation enabled for free by Apps Standard Edition, add a few slick applications from the Marketplace and the sky was the limit. Or at least the clouds were.
    • Gary Edwards
       
       Google Apps have all the basic elements of a productivity environment, but lack the internal application messaging, data connectivity and exchange that made the Windows desktop productivity platform so powerful.   gAPPS are great.  They even have copy/paste! But they lack the basics needed for a simple "merge" of client and contact data into a word-processor letter/report/form/research paper. Things like DDE, OLE, ODBC, MAPI, COM, and DCOM have to be reinvented for the Open Web.   gAPPS is a good place to start.  But the focus has got to shift to Wave technologies like OT, XMPP and JSON.  Then there are the lower-level innovations such as Web Sockets, Native Client, HTML5, and the Cairo-Skia graphics layer (thanks Florian).
  • Whether you (or your business) choose a Microsoft-centered solution that now has well-implemented cloud integration and tightly coupled productivity and collaboration software (think Office Live Web Apps, Office 2010, and Sharepoint 2010) or you build a business around the web-based collaboration inherent in Google Apps and extend its core functions with cool and useful applications, you win.
    • Gary Edwards
       
       Not true!!! The Microsoft Cloud is based on proprietary technologies, with the Silverlight-OOXML runtime/plug-in at the core of a WPF-.NET-driven "Business Productivity Platform." The Google Cloud is based on the Open Web, and not the "Open Web" that's tied up in corporate "standards" consortia like the W3C, OASIS and Ecma. One of the reasons I really like WebKit is that they push HTML5 technologies to the edge, submitting new enhancements back into the knuckle-dragging W3C HTML5 workgroups as "proposals."  They don't, however, wait for the entangled corporate politics of the W3C to "approve and include" these proposals: Google and Apple submit and go live simultaneously.  This of course negates the heavy influence platform rivals like Microsoft have over the activities of corporate standards orgs, which has to be done if the WebKit-HTML5-JavaScript-XMPP-OT-Web Sockets-Native Client family of technologies is ever to challenge the interactive and graphical richness of proprietary Microsoft technologies (Silverlight, OOXML, DrawingML, C#). The important hedge here is that Google is open-sourcing their enhancements and innovations.  Without that open sourcing, I think there would be reasons to fear any platform player pushing beyond the corporate standards consortia approval process.  For me, OSS balances out the incredible influence of Google, and the ownership they have over core Open Web productivity application components. Which is to say: I don't want to find myself tomorrow in the same position with a Google Open Web Productivity Platform that I found myself in with the 1994 Windows desktop productivity environment - where Microsoft owned every opportunity, and could take the marketshare of any Windows-developed application with the simple announcement that they too will enter that application category (e.g., the entire independent contact/project management category was wiped out by the mere announcement of MS Outlook).
Gary Edwards

Changing technology - How cloud computing is transforming business - and why you should... - 0 views

  •  
    If someone told you that you could drop your operating costs by 40 percent, would you listen? If that same person said you could save between $70 and $150 per user per year in energy savings alone if you tried something new, would you try it?  A lot of companies are listening, and those same businesses are trying something new - cloud computing and software as a service (SaaS) - and reaping the many benefits, which start with the aforementioned cost savings. "It's about saving money, and there's a tremendous amount of money to be saved, because if you look at IT budgets, nearly 80 percent of that budget, in many cases, is spent just to keep the lights on, which means the other 20 percent is the only money that's actually able to be used to implement new technologies into the model," says Jeff McNaught, chief marketing officer at Wyse Technology Inc. 
Gary Edwards

Adeptol Viewing Technology Features - 0 views

  •  
    Adeptol viewing technology feature highlights: support for more than 300 document types out of the box; not a virtual printer, but a multitenant platform for high-end document viewing; no additional software to install on the server; no ActiveX, plug-ins, or applets to download on the client side; fully customizable, with an advanced API for full customization and UI changes; install Adeptol Server on any OS and integrate with any programming language.
    "No ActiveX, no plug-in, no software to download. Any OS, any browser, any programming language. That is the power of Adeptol." Adeptol can help you retain your customers and streamline your content integration efforts. Leverage Web 2.0 technologies to get a completely scalable content viewer that easily handles any type of content in virtually unlimited volume, with additional capabilities to support high-volume transaction and archive environments. The enterprise-class infrastructure was built to meet the needs of the world's most demanding global enterprises. Based on AJAX technology, the viewer can be integrated into your application with complete ease. It can be installed on Windows (32-bit/64-bit) Server and Linux (32-bit/64-bit) Server, and integrated with any programming language (.NET, C#, PHP, ColdFusion, JSP) using the API set, which includes sample code for all languages to get you started. It has been tested and verified for compatibility with more than 99% of browsers across platforms.
Gary Edwards

Modernizr: A JavaScript Library for Open Web HTML5-CSS3 technologies - 1 views

  •  
    Modernizr is a small and simple JavaScript library that helps you take advantage of emerging web technologies (CSS3, HTML5) while still maintaining a fine level of control over older browsers that may not yet support these new technologies. Modernizr uses feature detection to test the current browser against upcoming features like rgba(), border-radius, CSS Transitions and many more. These are currently being implemented across browsers, and with Modernizr you can start using them right now, with an easy way to control the fallbacks for browsers that don't yet support them. Additionally, Modernizr creates a self-titled global JavaScript object which contains properties for each feature; if a browser supports it, the property will evaluate true and if not, it will be false. Lastly, Modernizr also adds support for styling and printing HTML5 elements. This allows you to use more semantic, forward-looking elements such as <section>, <header> and <article> without having to worry about them not working in Internet Explorer.
    Another great catch by Marbux!  He's on fire today.
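  • A minimal usage sketch (the feature names "borderradius" and "csstransitions" follow Modernizr's usual lower-cased convention, but check the version you ship):

      <html class="no-js">
      <head>
        <script src="modernizr.js"></script>
        <style>
          /* Modernizr rewrites the class list on <html>,
             so stylesheets can branch on feature support */
          .borderradius .card { border-radius: 8px; }
          .no-borderradius .card { background: url(corners.png) no-repeat; }
        </style>
      </head>
      <body>
        <div class="card">Card content</div>
        <script>
          // each tested feature is a boolean property on the global object
          if (!Modernizr.csstransitions) {
            // fall back to a script-driven animation here
          }
        </script>
      </body>
      </html>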
  •  
    [Blush.] Should be mentioned that Modernizr comes with an MIT-BSD dual license. So compatible with both GPL and proprietary apps.
Paul Merrell

Most Agencies Falling Short on Mandate for Online Records - 0 views

  • Nearly 20 years after Congress passed the Electronic Freedom of Information Act Amendments (E-FOIA), only 40 percent of agencies have followed the law's instruction for systematic posting of records released through FOIA in their electronic reading rooms, according to a new FOIA Audit released today by the National Security Archive at www.nsarchive.org to mark Sunshine Week. The Archive team audited all federal agencies with Chief FOIA Officers as well as agency components that handle more than 500 FOIA requests a year — 165 federal offices in all — and found only 67 with online libraries populated with significant numbers of released FOIA documents and regularly updated.
  • Congress called on agencies to embrace disclosure and the digital era nearly two decades ago, with the passage of the 1996 "E-FOIA" amendments. The law mandated that agencies post key sets of records online, provide citizens with detailed guidance on making FOIA requests, and use new information technology to post online proactively records of significant public interest, including those already processed in response to FOIA requests and "likely to become the subject of subsequent requests." Congress believed then, and openness advocates know now, that this kind of proactive disclosure, publishing online the results of FOIA requests as well as agency records that might be requested in the future, is the only tenable solution to FOIA backlogs and delays. Thus the National Security Archive chose to focus on the e-reading rooms of agencies in its latest audit. Even though the majority of federal agencies have not yet embraced proactive disclosure of their FOIA releases, the Archive E-FOIA Audit did find that some real "E-Stars" exist within the federal government, serving as examples to lagging agencies that technology can be harnessed to create state-of-the-art FOIA platforms. Unfortunately, our audit also found "E-Delinquents" whose abysmal web performance recalls the teletype era.
  • E-Delinquents include the Office of Science and Technology Policy at the White House, which, despite being mandated to advise the President on technology policy, does not embrace 21st century practices by posting any frequently requested records online. Another E-Delinquent, the Drug Enforcement Administration, insults its website's viewers by claiming that it "does not maintain records appropriate for FOIA Library at this time."
  • ...9 more annotations...
  • "The presumption of openness requires the presumption of posting," said Archive director Tom Blanton. "For the new generation, if it's not online, it does not exist." The National Security Archive has conducted fourteen FOIA Audits since 2002. Modeled after the California Sunshine Survey and subsequent state "FOI Audits," the Archive's FOIA Audits use open-government laws to test whether or not agencies are obeying those same laws. Recommendations from previous Archive FOIA Audits have led directly to laws and executive orders which have: set explicit customer service guidelines, mandated FOIA backlog reduction, assigned individualized FOIA tracking numbers, forced agencies to report the average number of days needed to process requests, and revealed the (often embarrassing) ages of the oldest pending FOIA requests. The surveys include:
  • The federal government has made some progress moving into the digital era. The National Security Archive's last E-FOIA Audit in 2007, "File Not Found," reported that only one in five federal agencies had put online all of the specific requirements mentioned in the E-FOIA amendments, such as guidance on making requests, contact information, and processing regulations. The new E-FOIA Audit finds the number of agencies that have checked those boxes is now much higher — 100 out of 165 — though many (66 of 165) have posted just the bare minimum, especially when posting FOIA responses. An additional 33 agencies even now do not post these types of records at all, clearly thwarting the law's intent.
  • The FOIAonline Members (Department of Commerce, Environmental Protection Agency, Federal Labor Relations Authority, Merit Systems Protection Board, National Archives and Records Administration, Pension Benefit Guaranty Corporation, Department of the Navy, General Services Administration, Small Business Administration, U.S. Citizenship and Immigration Services, and Federal Communications Commission) won their "E-Star" by making past requests and releases searchable via FOIAonline. FOIAonline also allows users to submit their FOIA requests digitally.
  • Disabilities Compliance. Despite the E-FOIA Act, many government agencies do not embrace the idea of posting their FOIA responses online. The most common reason agencies give is that it is difficult to post documents in a format that complies with Section 508, the 1998 amendments to the Rehabilitation Act that require federal agencies "to make their electronic and information technology (EIT) accessible to people with disabilities" (hence "508 compliant"). E-Star agencies, however, have proven that 508 compliance is no barrier when the agency has a will to post. All documents posted on FOIAonline are 508 compliant, as are the documents posted by the Department of Defense and the Department of State. In fact, every document created electronically by the US government after 1998 should already be 508 compliant. Even old paper records that are scanned to be processed through FOIA can be made 508 compliant with just a few clicks in Adobe Acrobat, according to this Department of Homeland Security guide (essentially OCRing the text, and including information about where non-textual fields appear). Even if agencies are insistent it is too difficult to OCR older documents that were scanned from paper, they cannot use that excuse with digital records.
  • Key Findings
  • Excuses Agencies Give for Poor E-Performance
  • Justice Department guidance undermines the statute. Currently, the FOIA stipulates that documents "likely to become the subject of subsequent requests" must be posted by agencies somewhere in their electronic reading rooms. The Department of Justice's Office of Information Policy defines these records as "frequently requested records… or those which have been released three or more times to FOIA requesters." Of course, it is time-consuming for agencies to develop a system that keeps track of how often a record has been released, which is in part why agencies rarely do so and are often in breach of the law. Troublingly, both the current House and Senate FOIA bills include language that codifies the instructions from the Department of Justice. The National Security Archive believes the addition of this "three or more times" language actually harms the intent of the Freedom of Information Act as it will give agencies an easy excuse ("not requested three times yet!") not to proactively post documents that agency FOIA offices have already spent time, money, and energy processing. We have formally suggested alternate language requiring that agencies generally post "all records, regardless of form or format that have been released in response to a FOIA request."
  • THE E-DELINQUENTS: WORST OVERALL AGENCIES (in alphabetical order)
  • Privacy. Another commonly articulated concern about posting FOIA releases online is that doing so could inadvertently disclose private information from "first person" FOIA requests. This is a valid concern, and this subset of FOIA requests should not be posted online. (The Justice Department identified "first party" requester rights in 1989. Essentially, agencies cannot use the b(6) privacy exemption to redact information if a person requests it for him or herself. An example of a "first person" FOIA would be a person's request for his own immigration file.) Cost and Waste of Resources. There is also a belief that there is little public interest in the majority of FOIA requests processed, and hence it is a waste of resources to post them. This thinking runs counter to the governing principle of the Freedom of Information Act: that government information belongs to US citizens, not US agencies. As such, the reason that a person requests information is immaterial as the agency processes the request; the "interest factor" of a document should also be immaterial when an agency is required to post it online. Some think that posting FOIA releases online is not cost effective. In fact, the opposite is true. It's not cost effective to spend tens (or hundreds) of person-hours to search for, review, and redact a FOIA release only to mail it to the requester, who may slip it into a desk drawer and forget about it. That is a waste of resources. The released document should be posted online for any interested party to utilize. This will only become easier as FOIA processing systems evolve to automatically post the documents they track. The State Department earned its "E-Star" status by demonstrating this very principle: it spent no new funds and did not hire contractors to build its Electronic Reading Room; instead, it built a self-sustaining platform that will save the agency time and money going forward.
Paul Merrell

Opinion: Berkeley Can Become a City of Refuge | Opinion | East Bay Express - 0 views

  • The Berkeley City Council is poised to vote March 13 on the Surveillance Technology Use and Community Safety Ordinance, which will significantly protect people's right to privacy and safeguard the civil liberties of Berkeley residents in this age of surveillance and Big Data. The ordinance is based on an ACLU model that was first enacted by Santa Clara County in 2016. The Los Angeles Times has editorialized that the ACLU's model ordinance approach "is so pragmatic that cities, counties, and law enforcement agencies throughout California would be foolish not to embrace it." Berkeley's Peace and Justice and Police Review commissions agreed and unanimously approved a draft that will be presented to the council on Tuesday. The ordinance requires public notice and public debate prior to seeking funding, acquiring equipment, or otherwise moving forward with surveillance technology proposals. In neighboring Oakland, we saw the negative outcome that can occur from lack of such a discussion, when the city's administration pursued funding for, and began building, the citywide surveillance network known as the Domain Awareness Center ("DAC") without community input. Ultimately, the community rejected the project, and the fallout led to the establishment of a Privacy Advisory Commission and subsequent consideration of a similar surveillance ordinance to ensure proper vetting occurs up front, not after the fact.
Paul Merrell

India begins to embrace digital privacy. - 0 views

  • India is the world’s largest democracy and is home to 13.5 percent of the world’s internet users. So the Indian Supreme Court’s August ruling that privacy is a fundamental, constitutional right for all of the country’s 1.32 billion citizens was momentous. But now, close to three months later, it’s still unclear exactly how the decision will be implemented. Will it change everything for internet users? Or will the status quo remain? The most immediate consequence of the ruling is that tech companies such as Facebook, Twitter, Google, and Alibaba will be required to rein in their collection, utilization, and sharing of Indian user data. But the changes could go well beyond technology. If implemented properly, the decision could affect national politics, business, free speech, and society. It could encourage the country to continue to make large strides toward increased corporate and governmental transparency, stronger consumer confidence, and the establishment and growth of the Indian “individual” as opposed to the Indian collective identity. But that’s a pretty big if. The privacy debate in India was in many ways sparked by a controversy that has shaken up the landscape of national politics for several months. It began in 2016 as a debate around a social security program that requires participating citizens to obtain biometric, or Aadhaar, cards. Each card has a unique 12-digit number and records an individual’s fingerprints and irises in order to confirm his or her identity. The program was devised to increase the ease with which citizens could receive social benefits and avoid instances of fraud. Over time, Aadhaar cards have become mandatory for integral tasks such as opening bank accounts, buying and selling property, and filing tax returns, much to the chagrin of citizens who are uncomfortable about handing over their personal data. Before the ruling, India had weak privacy protections in place, enabling unchecked data collection on citizens by private companies and the government. Over the past year, a number of large-scale data leaks and breaches that have impacted major Indian corporations, as well as the Aadhaar program itself, have prompted users to start asking questions about the security and uses of their personal data.
  • In order to bolster the ruling, the government will also introduce a set of data protection laws to be developed by a committee led by retired Supreme Court judge B.N. Srikrishna. The committee will study the data protection landscape, develop a draft Data Protection Bill, and identify how, and whether, the Aadhaar Act should be amended based on the privacy ruling.
  • Should the data protection laws be implemented in an enforceable manner, the ruling will significantly impact the business landscape in India. Since the election of Prime Minister Narendra Modi in May 2014, the government has made fostering and expanding the technology and startup sector a top priority. The startup scene has grown, giving rise to several promising e-commerce companies, but in 2014, only 12 percent of India’s internet users were online consumers. If the new data protection laws are truly impactful, companies will have to accept responsibility for collecting, utilizing, and protecting user data safely and fairly. Users would also have a stronger form of redress when their newly recognized rights are violated, which could transform how they engage with technology. This has the potential to not only increase consumer confidence but revitalize the Indian business sector, as it makes it more amenable and friendly to outside investors, users, and collaborators.
Paul Merrell

U.S. vs. Facebook: A Playbook for SEC, DOJ and EDNY - 0 views

  • Six4Three recently published a playbook for the FTC to get to the bottom of Facebook’s secretive deals selling user data without privacy controls. In light of The New York Times article reporting multiple criminal investigations into Facebook surrounding these secretive deals, we’re publishing the playbook for criminal investigators. Perhaps the most important recognition at the outset is that the secretive deals that have been reported, whether those with a handful of device manufacturers or with 150 large technology companies, are just the tip of the iceberg. Those secretive deals handing over user data in exchange for gobs of cash were merely part and parcel of a much broader illegal scheme that begins with Facebook’s transition to mobile in 2012 and continues to this very day. We believe this illegal scheme amounts to a clear RICO violation. The United Kingdom Parliament agrees. Here’s how criminal investigators can overcome Facebook’s incredibly effective concealment campaign and bring a viable RICO case. Facebook’s pattern of racketeering activity is a play in three acts from at least 2012 to present. The first act is all about the desperation resulting from the collapse of Facebook’s desktop advertising business right around its IPO and the various securities violations that resulted. The second act is about covering up those securities violations by illegally building its mobile advertising business via extortion and wire fraud in order to close the gap in Facebook’s revenue projections before the world took notice, which likely resulted in additional securities violations. The third act is about covering up the extortion and wire fraud by lying to government officials investigating Facebook while continuing to effectuate the scheme. We are still in the third act. For almost a decade now Facebook has been covering up one illegal act with another in order to hide how it managed to ramp up its mobile advertising business faster than any other business in the history of capitalism. The abuses of Facebook’s data, from Russian interference in the 2016 election to Cambridge Analytica and Brexit, all stem in substantial part from the decisions Facebook knowingly, willfully and maliciously made to facilitate this criminal conspiracy. Put simply, Facebook’s transition to mobile destabilized the world.
  •  
    This is strongly reminiscent of Microsoft's tactics at the point when antitrust regulators stepped in.
Gary Edwards

Apple, Microsoft Challenged By Streaming Software Plan - Cloud - 0 views

  •  
    Very interesting how JavaScript libraries continue to challenge native code for Web application dominance.
    excerpt: "The code library, ORBX.js, can be thought of as a cloud-based alternative to Google's Native Client technology. It permits Linux, OS X and Windows applications to run on remote servers and to be presented in a Web browser."
    "With ORBX.js, native code and legacy applications can be hosted in the cloud (e.g. Amazon EC2), and stream interactive graphics, 3D rendering or low latency video to a standard HTML5 page without using plugins or native code, or even the video tag (which, like Google NaCl, is vendor specific - ORBX.js works on all five major browsers)," explained Otoy founder and CEO Jules Urbach in an email. "The video codec created for ORBX.js can decode 1080p60 at a quality on par with H.264, using only JavaScript."
    "With ORBX.js and a cloud service provider, you could conceivably run Valve's PC Steam client on an Apple iMac or Google Chromebook. You could run Autodesk 3DS Max 2014 on an Android Nexus 7 tablet. You could run a big budget, graphically demanding game title like Left 4 Dead 2 in a Web browser, without any plugins, Flash, Java, NaCl or other supporting technology."
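    ORBX.js itself is proprietary, but the browser-side pattern it embodies can be sketched with standard HTML5 APIs. In the sketch below, decodeFrame() is a hypothetical stand-in for the ORBX.js codec, and the stream URL is a placeholder:

      // Frames arrive over a WebSocket, are decoded in pure JavaScript,
      // and are painted onto a <canvas> - no plugins, no <video> tag.
      var canvas = document.getElementById('screen');
      var ctx = canvas.getContext('2d');
      var socket = new WebSocket('wss://render-host.example/stream');
      socket.binaryType = 'arraybuffer';

      socket.onmessage = function (event) {
        // Hypothetical codec call: decompress one frame to raw RGBA pixels.
        var rgba = decodeFrame(new Uint8Array(event.data));
        var image = ctx.createImageData(canvas.width, canvas.height);
        image.data.set(rgba);
        ctx.putImageData(image, 0, 0);
      };

      // Forwarding input events back to the remote application is what
      // makes the streamed session interactive rather than a passive video.
      canvas.addEventListener('mousemove', function (e) {
        socket.send(JSON.stringify({ type: 'move', x: e.offsetX, y: e.offsetY }));
      });

    Because everything above is plain JavaScript and canvas, it runs wherever HTML5 does - which is the whole point of Urbach's "no plugins" claim.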
Gary Edwards

Office to finally fully support ODF, Open XML, and PDF formats | ZDNet - 0 views

  •  
    The king of clicks returns!  No doubt there was a time when the mere mention of ODF and the now legendary XML "document" format wars with Microsoft could drive click counts into the stratosphere.  Sorry to say though, those times are long gone.  It's still a good story though, even if the fate of mankind and the future of the Internet no longer hinge on the outcome.  There is that question that continues to defy answer: "Did Microsoft win or lose?"  So the mere announcement of supported formats in MSOffice XX is guaranteed to rev the clicks somewhat.
    Veteran ODF clickmeister SJVN does make an interesting observation though: "The ironic thing is that, while this was as hotly debated an issue in the mid-2000s as mobile patents and cloud implementation are today, this news was barely noticed. That's a mistake. Updegrove points out, "document interoperability and vendor neutrality matter more now than ever before as paper archives disappear and literally all of human knowledge is entrusted to electronic storage." He concluded, "Only if documents can be easily exchanged and reliably accessed on an ongoing basis will competition in the present be preserved, and the availability of knowledge down through the ages be assured. Without robust, universally adopted document formats, both of those goals will be impossible to attain." Updegrove's right of course. Don't believe me? Go into your office's archives and try to bring up documents you wrote in the 90s in WordPerfect or papers your staff created in the 80s with WordStar. If you don't want to lose your institutional memory, open document standards support is more important than ever."
    .......................................
    Sorry, but Updegrove is wrong.  Woefully wrong.  The Web is the future.  Sure, interoperability matters, but only as far as the Web and the future of Cloud Computing are concerned.  Sadly, neither ODF nor Open XML is Web ready.  The language of the Web is famously HTML, now HTML5+
Gary Edwards

Overview of apps for Office 2013 - 0 views

  •  
    MSOffice is now "Web ready".  The Office apps are capable of running HTML5-JavaScript apps based on a simple Web page model.  Think of this as the Office apps being fitted with a browser, and developers writing extensions to run in that browser using HTML5 and JavaScript.  Microsoft provides an Office.js library and a browser-based developer "Web App/Page Creator" toolset called "Napa" Office 365 Development Tools (a lightweight companion to Visual Studio), with lots of project templates.  Key MSOffice apps are Word, Excel, PowerPoint and Outlook.  Develop for Office or SharePoint.  Apps can be hosted on any Web server.
    excerpt: Microsoft Office 2013 Developer Environment with HTML5, XML and JavaScript.  Office.js library.
    "This documentation is preliminary and is subject to change. Published: July 16, 2012. Learn how to use apps for Office to extend your Office 2013 Preview applications. This new Office solution type, apps for Office, is built on web technologies like HTML, CSS, JavaScript, REST, OData, and OAuth. It provides new experiences within Office applications by surfacing web technologies and cloud services right within Office documents, email messages, meeting requests, and appointments.
    Applies to: Excel Web App Preview | Exchange 2013 Preview | Outlook 2013 Preview | Outlook Web App Preview | Project Professional 2013 Preview | Word 2013 Preview | Excel 2013 Preview
    In this article: What is an app for Office? | Anatomy of an app for Office | Types of apps for Office | What can an app for Office do? | Understanding the runtime | Development basics | Create your first app for Office | Publishing basics | Scenarios | Components of an app for Office solution | Software requirements"
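    For a sense of scale, here is a minimal sketch of such an extension's script, using the documented Office.js calls from the 2013 Preview (the element IDs belong to the app's own hypothetical Web page):

      // Office.initialize runs once the host application (Word, Excel,
      // PowerPoint or Outlook) has loaded the app's page and is ready.
      Office.initialize = function (reason) {
        document.getElementById('get-data').onclick = showSelection;
      };

      // Ask the host document for its current selection as plain text.
      function showSelection() {
        Office.context.document.getSelectedDataAsync(
          Office.CoercionType.Text,
          function (result) {
            var output = document.getElementById('output');
            if (result.status === Office.AsyncResultStatus.Succeeded) {
              output.innerText = result.value;
            } else {
              output.innerText = 'Error: ' + result.error.message;
            }
          }
        );
      }

    The page hosting this script is ordinary HTML5 served from any Web server; a manifest tells Office where to load it from.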
Gary Edwards

Google's Chrome Browser Sprouts Programming Kit of the Future "Node.js" | Wired Enterpr... - 1 views

  •  
    Good article describing Node.js.  The Node.js Summit is taking place in San Francisco on Jan 24th - 25th.  http://goo.gl/AhZTD
    I'm wondering if anyone has used Node.js to create real-time, Cloud ready compound documents?  Replacing MSOffice OLE-ODBC-ActiveX heavy productivity documents, forms and reports with Node.js event widgets, messages and database connections?  I'm thinking along the lines of a Lotus Notes alternative with a Node.js enhanced version of EverNote on the front end, and a Node.js-Hadoop productivity platform on the server side?  Might have to contact Stephen O'Grady on this.  He is a featured speaker at the conference.
    excerpt: At first, Chito Manansala (Visa & Sabre) built his Internet transaction processing systems using the venerable Java programming language. But he has since dropped Java and switched to what is widely regarded as The Next Big Thing among Silicon Valley developers. He switched to Node.
    Node is short for Node.js, a new-age programming platform based on a software engine at the heart of Google's Chrome browser. But it's not a browser technology. It's meant to help build software that sits on a distant server somewhere, feeding an application to your PC or smartphone, and it's particularly suited to systems like the one Chito Manansala is building - systems that juggle scads of information streaming to and from other sources. In other words, it's suited to the modern internet.
    Two years ago, Node was just another open source project. But it has since grown into the development platform of the moment. At Yahoo!, Node underpins "Manhattan," a fledgling online service for building and hosting mobile applications. Microsoft is offering Node atop Windows Azure, its online service for building and hosting a much beefier breed of business application. And Sabre is just one of a host of big names using the open source platform to erect applications on their own servers. Node is based on the Javascript engine at the heart of Google's Chrome browser.
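    For readers who haven't tried it, a minimal sketch of the event-driven style the article is describing - a Node.js server that streams a file to every client without ever blocking on I/O (the file name is a placeholder):

      var http = require('http');
      var fs = require('fs');

      http.createServer(function (request, response) {
        response.writeHead(200, { 'Content-Type': 'text/plain' });
        // createReadStream reads the file in chunks, and pipe() forwards
        // each chunk to the response as it arrives, so one single-threaded
        // process can juggle thousands of slow clients concurrently.
        fs.createReadStream('data.log').pipe(response);
      }).listen(8080);

    That juggling of many in-flight streams on a single event loop is exactly what suits Node to transaction systems like Manansala's.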
Gary Edwards

Andreessen Horowitz & the Meteor investment - 0 views

  •  
    Web site for Andreessen Horowitz VC.  List of blogs for general partners.  The reason for linking into a16z is the $11.2 Million they invested in Meteor!  Meteor is awesome.  My guess is that Meteor will provide a very effective Cloud platform to replace or extend the Windows Client/Server business productivity platform.
    Many VC watchers are wondering if a16z can recover the investment?  Say what?  IMHO this is for all the marbles.  Platform is everything, and Cloud Computing is certain to replace Client/Server over time.  Meteor just moved that time frame from a future uncertainty to NOW.
    The Windows Productivity Platform has dominated Client/Server computing since the introduction of Windows for Workgroups (v3.11) in 1993.  Key technologies that followed or were included in v3.11 were DDE, OLE, MAPI, ODBC, ActiveX, and Visual Basic scripting - to name but a few.  Meteor is an open source platform that hits these technologies directly with an approach that truly improves the complicated development of all Cloud based Web Apps - including the sacred Microsoft Cow herd of client/server business productivity apps.  Meteor nails OLE and ODBC like nothing I've ever seen before.  Very dramatic stuff.  Maybe they are nailing shut the Redmond coffin in the process - making that $11.2 Mill a drop in the bucket considering the opportunity Meteor has cracked open.
    The iron grip Microsoft has on business productivity is so tight and so far reaching that one could easily say that Windows is the client in Client/Server.  But it took years to build that empire.  With this investment, Meteor could do it in months.
    Compound documents are the fuel in Windows business productivity and office automation systems.  Tear apart a compound document, and you'll find embedded logic for OLE and ODBC.  Sure, it's brittle, costly to develop, costly to maintain, and a bear to distribute.  Tear apart a Meteor productivity service and you'll find the same kind of OLE-ODBC-Script
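    To make the OLE-ODBC comparison concrete, here is a minimal sketch of Meteor's shared-collection model, written against the circa-2013 API (the collection and field names are illustrative):

      // The same declaration runs on both client and server; Meteor keeps
      // the two sides synchronized over the wire automatically.
      Orders = new Meteor.Collection('orders');

      if (Meteor.isServer) {
        Meteor.startup(function () {
          if (Orders.find().count() === 0) {
            Orders.insert({ item: 'sample', qty: 1 });
          }
        });
      }

      if (Meteor.isClient) {
        // Deps.autorun re-runs whenever the underlying data changes, so
        // the view stays live with no ODBC-style polling or glue code.
        Deps.autorun(function () {
          console.log('orders in view:', Orders.find().count());
        });
      }

    The reactive re-run replaces the embedded OLE-ODBC plumbing a compound document would need for the same live view.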
Gary Edwards

Republic Wireless - Combining WiFi with Cellular to reduce Smartphone Costs - 0 views

  • Do I need to buy minutes from Sprint or anyone else? No. We're the first-ever wireless provider to bundle WiFi calling with access to cellular whenever you need it. Depending on the plan you choose, your Republic Wireless phone will have unlimited* access to data, talk and text when using the Sprint cellular network. Note that the $5 plan offered by Republic is WiFi only and the $10 plan includes cellular talk and text (no data). All Republic plans include unlimited data, talk and text on WiFi. 
  • Can I switch between plans? Yes! When you purchase a new Moto X phone, you’ll be able to choose whatever plan you like—and you can also switch plans up to twice per month as your needs change. For example, if you know you’ll be taking a vacation and might require more cell data one week, you can switch to a cell data plan right from your phone and then switch back to a WiFi “friendlier” plan once you return home.
  •  
    Republic Wireless provides a new kind of smartphone cellular service based on a technology that handles the rollover from WiFi to 3G or 4G cellular in the middle of a call.  Very cool, but currently it only works with specially outfitted (custom ROM) Android Moto X phones.  (They are working on how to port this custom ROM technology to all Android phones :)
    The concept is based on the fact that WiFi is cheap, very open and near universally available, while 3G and 4G cellular is expensive, contractual and proprietary.  The idea is to leverage free WiFi wherever they can, and roll over to the Sprint 3G - 4G network when needed.  Very cool, and the business model seems to have it right.
    .........................................................................
    "Which Moto X plan is right for me? Here's the lowdown on our four new plan options. Depending on your needs and how you want to use your phone, you can choose the plan that's best for you.
    $5 WiFi only plan: This is the most powerful tool in your arsenal of options. Why? You can drop your smartphone bill - at will - to $5. If you're interested in getting serious about cutting costs, you can use this tool to best leverage the WiFi in your life to reduce your phone bill. It's also the ultimate plan for home base stickers and kids who don't need a cellular plan. It's fully unlimited data, talk and text - on WiFi only.
    $10 WiFi + Cell Talk & Text: One of our members, 10thdoctor, said: "I use WiFi for everything, except when I'm traveling and for voice at my school." Yep, this is the perfect plan for that. Our members are around WiFi about 90% of the time. During that 10% of the time where you're away from WiFi, this plan gives you cellular backup for communicating when you need to. This plan both cuts costs and accommodates what's quickly becoming the norm: a day filled with WiFi.
    $25 WiFi + Cell (3G) Talk, Text & Data: Lots of people are on 3G plans today and are paying upwards of $100 a month on
Paul Merrell

New open-source router firmware opens your Wi-Fi network to strangers | Ars Technica - 0 views

  • We’ve often heard security folks explain their belief that one of the best ways to protect Web privacy and security on one's home turf is to lock down one's private Wi-Fi network with a strong password. But a coalition of advocacy organizations is calling such conventional wisdom into question. Members of the “Open Wireless Movement,” including the Electronic Frontier Foundation (EFF), Free Press, Mozilla, and Fight for the Future are advocating that we open up our Wi-Fi private networks (or at least a small slice of our available bandwidth) to strangers. They claim that such a random act of kindness can actually make us safer online while simultaneously facilitating a better allocation of finite broadband resources. The OpenWireless.org website explains the group’s initiative. “We are aiming to build technologies that would make it easy for Internet subscribers to portion off their wireless networks for guests and the public while maintaining security, protecting privacy, and preserving quality of access," its mission statement reads. "And we are working to debunk myths (and confront truths) about open wireless while creating technologies and legal precedent to ensure it is safe, private, and legal to open your network.”
  • One such technology, which EFF plans to unveil at the Hackers on Planet Earth (HOPE X) conference next month, is open-sourced router firmware called Open Wireless Router. This firmware would enable individuals to share a portion of their Wi-Fi networks with anyone nearby, password-free, as Adi Kamdar, an EFF activist, told Ars on Friday. Home network sharing tools are not new, and the EFF has been touting the benefits of open-sourcing Web connections for years, but Kamdar believes this new tool marks the second phase in the open wireless initiative. Unlike previous tools, he claims, EFF’s software will be free for all, will not require any sort of registration, and will actually make surfing the Web safer and more efficient.
  • Kamdar said that the new firmware utilizes smart technologies that prioritize the network owner's traffic over others', so good samaritans won't have to wait for Netflix to load because of strangers using their home networks. What's more, he said, "every connection is walled off from all other connections," so as to decrease the risk of unwanted snooping. Additionally, EFF hopes that opening one’s Wi-Fi network will, in the long run, make it more difficult to tie an IP address to an individual. “From a legal perspective, we have been trying to tackle this idea that law enforcement and certain bad plaintiffs have been pushing, that your IP address is tied to your identity. Your identity is not your IP address. You shouldn't be targeted by a copyright troll just because they know your IP address," said Kamdar.
  • While the EFF firmware will initially be compatible with only one specific router, the organization would like to eventually make it compatible with other routers and even, perhaps, develop its own router. “We noticed that router software, in general, is pretty insecure and inefficient," Kamdar said. “There are a few major players in the router space. Even though various flaws have been exposed, there have not been many fixes.”
Paul Merrell

Internet Giants Erect Barriers to Spy Agencies - NYTimes.com - 0 views

  • As fast as it can, Google is sealing up cracks in its systems that Edward J. Snowden revealed the N.S.A. had brilliantly exploited. It is encrypting more data as it moves among its servers and helping customers encode their own emails. Facebook, Microsoft and Yahoo are taking similar steps.
  • After years of cooperating with the government, the immediate goal now is to thwart Washington — as well as Beijing and Moscow. The strategy is also intended to preserve business overseas in places like Brazil and Germany that have threatened to entrust data only to local providers. Google, for example, is laying its own fiber optic cable under the world’s oceans, a project that began as an effort to cut costs and extend its influence, but now has an added purpose: to assure that the company will have more control over the movement of its customer data.
  • A year after Mr. Snowden’s revelations, the era of quiet cooperation is over. Telecommunications companies say they are denying requests to volunteer data not covered by existing law. A.T.&T., Verizon and others say that compared with a year ago, they are far more reluctant to cooperate with the United States government in “gray areas” where there is no explicit requirement for a legal warrant.
  • Eric Grosse, Google's security chief, suggested in an interview that the N.S.A.'s own behavior invited the new arms race. "I am willing to help on the purely defensive side of things," he said, referring to Washington's efforts to enlist Silicon Valley in cybersecurity efforts. "But signals intercept is totally off the table," he said, referring to national intelligence gathering. "No hard feelings, but my job is to make their job hard," he added.
  • In Washington, officials acknowledge that covert programs are now far harder to execute because American technology companies, fearful of losing international business, are hardening their networks and saying no to requests for the kind of help they once quietly provided. Robert S. Litt, the general counsel of the Office of the Director of National Intelligence, which oversees all 17 American spy agencies, said on Wednesday that it was "an unquestionable loss for our nation that companies are losing the willingness to cooperate legally and voluntarily" with American spy agencies.
  • Many point to an episode in 2012, when Russian security researchers uncovered a state espionage tool, Flame, on Iranian computers. Flame, like the Stuxnet worm, is believed to have been produced at least in part by American intelligence agencies. It was created by exploiting a previously unknown flaw in Microsoft's operating systems. Companies argue that others could have later taken advantage of this defect. Worried that such an episode undercuts confidence in its wares, Microsoft is now fully encrypting all its products, including Hotmail and Outlook.com, by the end of this year with 2,048-bit encryption, a stronger protection that would take a government far longer to crack. The software is protected by encryption both when it is in data centers and when data is being sent over the Internet, said Bradford L. Smith, the company's general counsel.
  • Mr. Smith also said the company was setting up "transparency centers" abroad so that technical experts of foreign governments could come in and inspect Microsoft's proprietary source code. That will allow foreign governments to check to make sure there are no "back doors" that would permit snooping by United States intelligence agencies. The first such center is being set up in Brussels. Microsoft has also pushed back harder in court. In a Seattle case, the government issued a "national security letter" to compel Microsoft to turn over data about a customer, along with a gag order to prevent Microsoft from telling the customer it had been compelled to provide its communications to government officials. Microsoft challenged the gag order as violating the First Amendment. The government backed down.
  • Hardware firms like Cisco, which makes routers and switches, have found their products a frequent subject of Mr. Snowden’s disclosures, and their business has declined steadily in places like Asia, Brazil and Europe over the last year. The company is still struggling to convince foreign customers that their networks are safe from hackers — and free of “back doors” installed by the N.S.A. The frustration, companies here say, is that it is nearly impossible to prove that their systems are N.S.A.-proof.
  • In one slide from the disclosures, N.S.A. analysts pointed to a sweet spot inside Google’s data centers, where they could catch traffic in unencrypted form. Next to a quickly drawn smiley face, an N.S.A. analyst, referring to an acronym for a common layer of protection, had noted, “SSL added and removed here!”
  • Facebook and Yahoo have also been encrypting traffic among their internal servers. And Facebook, Google and Microsoft have been moving to more strongly encrypt consumer traffic with so-called Perfect Forward Secrecy, specifically devised to make it more labor intensive for the N.S.A. or anyone to read stored encrypted communications. One of the biggest indirect consequences from the Snowden revelations, technology executives say, has been the surge in demands from foreign governments that saw what kind of access to user information the N.S.A. received — voluntarily or surreptitiously. Now they want the same.
  • The latest move in the war between intelligence agencies and technology companies arrived this week, in the form of a new Google encryption tool. The company released a user-friendly, email encryption method to replace the clunky and often mistake-prone encryption schemes the N.S.A. has readily exploited. But the best part of the tool was buried in Google's code, which included a jab at the N.S.A.'s smiley-face slide. The code included the phrase: "ssl-added-and-removed-here-;-)"
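    For readers wondering what adopting Perfect Forward Secrecy actually involves, here is a minimal sketch of a TLS server configured to prefer ephemeral (ECDHE) key exchange, shown in Node.js; the certificate paths are placeholders, and none of this reflects the companies' actual, non-public configurations:

      var https = require('https');
      var fs = require('fs');

      var options = {
        key: fs.readFileSync('server-key.pem'),
        cert: fs.readFileSync('server-cert.pem'),
        // ECDHE suites negotiate a fresh session key for every connection,
        // so later theft of the server's long-term private key cannot
        // retroactively decrypt recorded traffic.
        ciphers: 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384',
        honorCipherOrder: true  // the server's cipher preference wins
      };

      https.createServer(options, function (request, response) {
        response.end('forward secrecy negotiated\n');
      }).listen(443);

    This per-session keying is what makes bulk decryption of stored traffic so much more labor intensive for the N.S.A. or anyone else.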