
Future of the Web: Group items tagged "fit"


applite

Google Fit Activities Can Be Started From Android Wear | APPlite - 0 views

  •  
    The most recent update to Fit may spare many people from reaching for their phone at all. Wear users can now start and stop activity tracking directly from the watch's micro app.
Gary Edwards

The Silverlight RiA Platform : Replacing the desktop from the cloud - 0 views

  •  
    In the future application developers won't care what desktop operating system you use; they will only care which Fit Client platform is the most pervasive. This is what Adobe AIR, Microsoft Silverlight, Google Gears and Curl are fighting for: nothing short of the future of desktop and RIA development. Microsoft brings with it a huge ecosystem of .NET developers - potentially millions of developers already skilled in WPF, XAML and C#. That's a pretty scary prospect for others in the Fit Client arena. Right now the future of the desktop is completely open. Anyone with enough clout could win the desktop - effectively usurping Microsoft Windows' dominant position.
Gary Edwards

The Grand Convergence: Web + RIA + Widgets + Client/Server - 0 views

  • The architecture of the Widget engine divides the client technology into two parts: the engine and the widgets. The widget engine is usually a pretty large download.
  • The widget engine is really a wonderful architecture that gives you the power of the desktop (via the widget engine) and the management of the Web (via widget downloads).  Widget engines can out-perform RIA solutions and they can store larger data sets. 
  • Fit Client applications can be centrally managed, yet remain resident on the desktop. They can offer access to standard web content (e.g. HTML) without the need of a browser. Fit Clients can leverage the processing power and disc space of the client machine, but they can also offer more restrictive and secure environments than client/server platforms.
  •  
    Excellent overview of where applications are going. Richard Monson-Haefel (whom I met at the 2008 Web 2.0 Conference) explains the convergence of four emerging application models: Web Clients (Browsers), RIA Clients, Client/Server, and Widget Engines. He comes up with a convergence point called "Fit Client", offering Adobe AIR as the leading example. Richard walks through each application model, discussing limitations and advantages. Good stuff, especially this comment: "The widget engine is really a wonderful architecture that gives you the power of the desktop (via the widget engine) and the management of the Web (via widget downloads). Widget engines can out-perform RIA solutions and they can store larger data sets. The limitation of widget engines is not in their architecture; it is that they have been designed for applications with fairly weak capabilities compared to client/server. Widgets tend to be single-purpose applications with limited access to the native operating system. That said, the widget architecture itself - the separation of the platform from the applications - is important. It makes it possible to create applications (widgets) that are portable across operating systems and are packaged for easy download and installation."
Gonzalo San Gil, PhD.

Using open document formats could create a domino effect | Opensource.com - 1 views

  •  
    "Higher education is increasingly embracing different concepts of openness, from open access to open education resources (OER). But where does that other open concept-open source-fit into this model? Open source represents the best way to ensure these materials can be easily modified, without risk of material suddenly becoming unchangeable or inaccessible."
Gonzalo San Gil, PhD.

Fix Copyright! | Help us Reform Copyright - 0 views

  •  
    "01 DYSFUNCTIONAL & NOT FIT FOR THE DIGITAL WORLD Copyright reform is needed to adapt to the digital world we live in. Under the current system everything tends to fall under copyright unless it is covered by a specific exception in the law. The trouble is that these exceptions are narrow, specific and technologically outdated: the list was written in 2001! This was well before YouTube and Facebook were created. As a result, everyday habits of online users could be considered illegal today. A blogger linking to copyrighted content, a meme based on a copyrighted image, a video with some footage from an existing movie or a song: all of that could create issues for the user that posted them."
Gonzalo San Gil, PhD.

Get started with a new open source project | Opensource.com - 0 views

  •  
    Matt Micene: "How do you find the right open source project to jump into? Here's a guide based on my journey to find the right fit."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
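    To make that extensibility concrete, here is a minimal sketch of the idea. The class names "epigraph" and "pull-quote" are hypothetical publisher conventions invented for illustration; they are not part of any standard, nor taken from the article:

      <!-- Plain XHTML; the class attribute carries publisher-specific
           semantics. The class names here are invented examples, and
           generic Web software can safely ignore them. -->
      <div class="chapter">
        <p class="epigraph">An opening quotation, indexed and styled
          differently from body text.</p>
        <p>An ordinary paragraph needs no annotation at all.</p>
        <p class="pull-quote">A highlighted excerpt for layout reuse.</p>
      </div>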
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
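    That anatomy is easy to verify: rename an .epub file to .zip and unpack it. One fixed piece is META-INF/container.xml, which points to the package file describing the rest (the OEBPS folder name below is a common convention, not a requirement):

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- META-INF/container.xml: the standard entry point of an ePub.
           The OEBPS/ path is a convention; only this file's location
           is fixed by the specification. -->
      <container version="1.0"
                 xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
                    media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>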
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
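    A hedged before-and-after sketch of that cleanup (the class name "bullet-list" is invented for illustration; InDesign's actual export classes depend on the document's style names):

      <!-- Before: presentation-oriented export, list items tagged as
           paragraphs with a class attribute (class name hypothetical). -->
      <p class="bullet-list">First item</p>
      <p class="bullet-list">Second item</p>

      <!-- After: structural XHTML that no particular software is
           needed to interpret. -->
      <ul>
        <li>First item</li>
        <li>Second item</li>
      </ul>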
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
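    The article does not reproduce the script itself, but a minimal sketch of the approach might look like the following. The ICML element names come from Adobe's IDML specification, while the style names and file names here are placeholders; a real stylesheet would also handle headings, lists, and inline markup:

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- Sketch: map XHTML paragraphs to ICML paragraph ranges.
           File names are examples; run with, e.g.:
             xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml -->
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:x="http://www.w3.org/1999/xhtml">
        <xsl:output method="xml" indent="yes"/>

        <xsl:template match="/">
          <Document DOMVersion="7.0">
            <Story Self="story_main">
              <xsl:apply-templates select="//x:body/x:p"/>
            </Story>
          </Document>
        </xsl:template>

        <!-- Each XHTML <p> becomes a paragraph range; the applied style
             name must match a paragraph style defined in the InDesign
             template ("Body Text" is a placeholder). -->
        <xsl:template match="x:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
            <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
            <Br/>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>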
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
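    The "wrapper" a tool like eCub generates is mostly boilerplate XML. A stripped-down example of the central package file (an EPUB 2 OPF; the title, identifier, and file names below are placeholders):

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- content.opf (simplified): metadata, the list of files in the
           book, and the reading order. Entries shown are placeholders. -->
      <package version="2.0" unique-identifier="bookid"
               xmlns="http://www.idpf.org/2007/opf"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
        <metadata>
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
        </metadata>
        <manifest>
          <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
          <item id="css" href="book.css" media-type="text/css"/>
          <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
        </manifest>
        <spine toc="ncx">
          <itemref idref="ch1"/>
        </spine>
      </package>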
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Make Firefox Look Native in Fedora - Fedora Magazine - 0 views

  •  
    "Firefox has been criticized by users for not fitting well in Fedora Workstation. Although it improved with the new interface called Australis, it still doesn't feel as native as GNOME Web (Epiphany). It's not likely it will close the gap any time soon for two reasons:"
Gonzalo San Gil, PhD.

Big Brother goes to school - 0 views

  •  
    ""Data gathering includes health, fitness and sleeping habits, sexual activity, prescription drug use, alcohol use and disciplinary matters. Students attitudes, sociability and even 'enthusiasm' are quantified, analyzed, recorded and dropped into giant data systems," she wrote." [ # ! #Smile... # ! ... #You are The '#Merchandise'. # ! #Protect #Yourself, You are The(ir) '#Target'... # ! But Stay #calm: You are one of #us... #of #Many... # ! ... The #Honest and #Peaceful #citizens...]
  •  
    ""Data gathering includes health, fitness and sleeping habits, sexual activity, prescription drug use, alcohol use and disciplinary matters. Students attitudes, sociability and even 'enthusiasm' are quantified, analyzed, recorded and dropped into giant data systems," she wrote."
Alexandra IcecreamApps

Best Fitness Apps for a Healthier Lifestyle - Icecream Tech Digest - 0 views

  •  
    February is high time for everyone to start creating their summer body. Obviously, after a couple of gym visits you won't get ripped, as it is a time-consuming process that requires dedication and hard work. However, you …
Gary Edwards

Can C.E.O. Satya Nadella Save Microsoft? | Vanity Fair - 0 views

  • The new world of computing is a radical break from the past. That’s because of the growth of mobile devices and cloud computing. In the old world, corporations owned and ran Windows P.C.’s and Windows servers in their own facilities, with the necessary software installed on them. Everyone used Windows, so everything was developed for Windows. It was a virtuous circle for Microsoft.
  • Now the processing power is in the cloud, and very sophisticated applications, from e-mail to tools you need to run a business, can be run by logging onto a Web site, not from pre-installed software. In addition, the way we work (and play) has shifted from P.C.’s to mobile devices—where Android and Apple’s iOS each outsell Windows by more than 10 to 1. Why develop software to run on Windows if no one is using Windows? Why use Windows if nothing you want can run on it? The virtuous circle has turned vicious.
  • Part of why Microsoft failed with devices is that competitors upended its business model. Google doesn’t charge for the operating system. That’s because Google makes its money on search. Apple can charge high prices because of the beauty and elegance of its devices, where the software and hardware are integrated in one gorgeous package. Meanwhile, Microsoft continued to force outside manufacturers, whose products simply weren’t as compelling as Apple’s, to pay for a license for Windows. And it didn’t allow Office to be used on non-Windows phones and tablets. “The whole philosophy of the company was Windows first,” says Heather Bellini, an analyst at Goldman Sachs. Of course it was: that’s how Microsoft had always made its money.
  • ...18 more annotations...
  • Right now, Windows itself is fragmented: applications developed for one Windows device, say a P.C., don’t even necessarily work on another Windows device. And if Microsoft develops a new killer application, it almost has to be released for Android and Apple phones, given their market dominance, thereby strengthening those eco-systems, too.
  • At its core, Azure uses Windows server technology. That helps existing Windows applications run seamlessly on Azure. Technologists sometimes call what Microsoft has done a “hybrid cloud” because companies can use Azure alongside their pre-existing on-site Windows servers. At the same time, Nadella also to some extent has embraced open-source software—free code that doesn’t require a license from Microsoft—so that someone could develop something using non-Microsoft technology, and it would run on Azure. That broadens Azure’s appeal.
  • “In some ways the way people think about Bill and Steve is almost a Rorschach test.” For those who romanticize the Gates era, Microsoft’s current predicament will always be Ballmer’s fault. For others, it’s not so clear. “He left Steve holding a big bag of shit,” the former executive says of Gates. In the year Ballmer officially took over, Microsoft was found to be a predatory monopolist by the U.S. government and was ordered to split into two; the cost of that to Gates and his company can never be calculated. In addition, the dotcom bubble had burst, causing Microsoft stock to collapse, which resulted in a simmering tension between longtime employees, whom the company had made rich, and newer ones, who had missed the gravy train.
  • Nadella lived this dilemma because his job at Microsoft included figuring out the cloud-based future while maintaining the highly profitable Windows server business. And so he did a bunch of things that were totally un-Microsoft-like. He went to talk to start-ups to find out why they weren’t using Microsoft. He put massive research-and-development dollars behind Azure, a cloud-based platform that Microsoft had developed in Skunk Works fashion, which by definition took resources away from the highly profitable existing business.
  • They even have a catchphrase: “Re-inventing productivity.”
  • Microsoft’s historical reluctance to open Windows and Office is why it was such a big deal when in late March, less than two months after becoming C.E.O., Nadella announced that Microsoft would offer Office for Apple’s iPad. A team at the company had been working on it for about a year. Ballmer says he would have released it eventually, but Nadella did it immediately. Nadella also announced that Windows would be free for devices smaller than nine inches, meaning phones and small tablets. “Now that we have 30 million users on the iPad using it, that is 30 million people who never used Office before [on an iPad,]” he says. “And to me that’s what really drives us.” These are small moves in some ways, and yet they are also big. “It’s the first time I have listened to a senior Microsoft executive admit that they are behind,” says one institutional investor. “The fact that they are giving away Windows, their bread and butter for 25 years—it is quite a fundamental change.”
  • And whoever does the best job of building the right software experiences to give both organizations and individuals time back so that they can get more out of their time, that’s the core of this company—that’s the soul. That’s what Bill started this company with. That’s the Office franchise. That’s the Windows franchise. We have to re-invent them. . . . That’s where this notion of re-inventing productivity comes from.”
  • Ballmer might be a complicated character, but he has nothing on Gates, whose contradictions have long fascinated Microsoft-watchers. He is someone who has no problem humiliating individuals—he might not even notice—but who genuinely cares deeply about entire populations and is deeply loyal. He is generous in the biggest ways imaginable, and yet in small things, like picking up a lunch tab, he can be shockingly cheap. He can’t make small talk and can come across as totally lacking in E.Q. “The rules of human life that allow you to get along are not complicated,” says one person who knows Gates. “He could write a book on it, but he can’t do it!”
  • At the Microsoft board meeting in late June 2013, Ballmer announced he had a handshake deal with Nokia’s management to buy the company, pending the Microsoft board’s approval, according to a source close to the events. Ballmer thought he had it and left before the post-board-meeting dinner to attend his son’s middle-school graduation. When he came back the next day, he found that the board had pulled a coup: they informed him they weren’t doing the deal, and it wasn’t up for discussion. For Ballmer, it seems, the unforgivable thing was that Gates had been part of the coup, which Ballmer saw as the ultimate betrayal.
  • what is scarce in all of this abundance is human attention
  • And the original idea of having great software people and broad software products and Office being the primary tool that people look to across all these devices, that’ s as true today and as strong as ever.”
  • Meeting Room Plus
  • But he combines that with flashes of insight and humor that leave some wondering whether he can’t do it or simply chooses not to, or both. His most pronounced characteristic shouldn’t be simply labeled a competitive streak, because it is really a fierce, deep need to win. The dislike it bred among his peers in the industry is well known—“Silicon Bully” was the title of an infamous magazine story about him. And yet he left Microsoft for the philanthropic world, where there was no one to bully, only intractable problems to solve.
  • “The Irrelevance of Microsoft” is actually the title of a blog post by an analyst named Benedict Evans, who works at the Silicon Valley venture-capital firm Andreessen Horowitz. On his blog, Evans pointed out that Microsoft’s share of all computing devices that we use to connect to the Internet, including P.C.’s, phones, and tablets, has plunged from 90 percent in 2009 to just around 20 percent today. This staggering drop occurred not because Microsoft lost ground in personal computers, on which its software still dominates, but rather because it has failed to adapt its products to smartphones, where all the growth is, and tablets.
  • The board told Ballmer they wanted him to stay, he says, and they did eventually agree to a slightly different version of the deal. In September, Microsoft announced it was buying Nokia’s devices-and-services business for $7.2 billion. Why? The board finally realized the downside: without Nokia, Microsoft was effectively done in the smartphone business. But, for Ballmer, the damage was done, in more ways than one. He now says it became clear to him that despite the lack of a new C.E.O. he couldn’t stay. Cultural change, he decided, required a change at the top, and, he says,“there was too much water under the bridge with this board.” The feeling was mutual. As a source close to Microsoft says, no one, including Gates, tried to stop him from quitting.
  • in Wall Street’s eyes, Nadella can do no wrong. Microsoft’s stock has risen 30 percent since he became C.E.O., increasing its market value by $87 billion. “It’s interesting with Satya,” says one person who observes him with investors. “He is not a business guy or a financial analyst, but he finds a common language with investors, and in his short tenure, they leave going, Wow.” But the honeymoon is the easy part.
  • “He was so publicly and so early in life defined as the brilliant guy,” says a person who has observed him. “Anything that threatens that, he becomes narcissistic and defensive.” Or as another person puts it, “He throws hissy fits when he doesn’t get his way.”
  • Around three-quarters of Microsoft’s profits come from the two fabulously successful products on which the company was built: the Windows operating system, which essentially makes personal computers run, and Office, the suite of applications that includes Word, Excel, and PowerPoint. Financially speaking, Microsoft is still extraordinarily powerful. In the last 12 months the company reported sales of $86.83 billion and earnings of $22.07 billion; it has $85.7 billion of cash on its balance sheet. But the company is facing a confluence of threats that is all the more staggering given Microsoft’s sheer size. Competitors such as Google and Apple have upended Microsoft’s business model, making it unclear where Windows will fit in the world, and even challenging Office. In the Valley, there are two sayings that everyone regards as truth. One is that profits follow relevance. The other is that there’s a difference between strategic position and financial position. “It’s easy to be in denial and think the financials reflect the current reality,” says a close observer of technology firms. “They do not.”
  •  
    Awesome article describing the history of Microsoft as seen through the lives of its three CEOs: Bill Gates, Steve Ballmer, and Satya Nadella.
Gonzalo San Gil, PhD.

Linux Foundation Security Checklist: Have It Your Way | Community | LinuxInsider - 0 views

  •  
    "Sysadmins should use the LF security checklist as a resource, said its creator, Konstantin Ryabitsev. "They can evaluate it, adapt it, hack on it until it fits their purpose, and hopefully contribute back...
Gary Edwards

Google Apps no threat to Microsoft? Maybe it is... | TalkBack on ZDNet - 0 views

  •  
    Replace or Re-Purpose? The Belgian Desktop Pilot Study Here is the summary of the Belgian desktop pilot study. The conclusion echoed the findings of Massachusetts and California; they found that they could not use OpenOffice as a replacement for MSOffice. Although there were many reasons cited, I think they all fit under the larger framework that MSOffice is the center of what turned out to be a sprawling desktop productivity ecosystem.
Gonzalo San Gil, PhD.

Pursuing adoption of free and open source software in governments - O'Reilly Radar - 0 views

  •  
    "by Andy Oram | @praxagora | +Andy Oram | Comment: 1 | March 26, 2014 Free and open source software creates a natural - and even necessary - fit with government. I joined a panel this past weekend at the Free Software Foundation conference LibrePlanet on this topic and have covered it previously in a journal article and talk. Our panel focused on barriers to its adoption and steps that free software advocates could take to reach out to government agencies."
Gary Edwards

Is Linux dead for the desktop? - 1 views

  • Linux never had the apps
  • Charles King, an IT analyst who follows enterprise trends, says the big change is in IT. At one time, executives in charge of computing services were mostly concerned with operating systems and applications for massive throng of traditional business users. Those users have now flocked to mobile computing devices, but they still have a Windows PC sitting on their desk.
  • "Today, Microsoft's lock (on the desktop, anyway) remains secure, even in the face of Apple's surge," King says. "Ironically enough, though, the open source model remains alive and well but mostly in the development of new standards and development platforms."
  • ...5 more annotations...
  • David Johnson
  • "What corporate end users really need is familiarity, consistency and compatibility - something Apple, Microsoft and Google seem more adept at offering."
  • Can desktop Linux OS be saved? Johnson says the best example of how to save Linux OS is the Chrome OS, an all-in-one laptop and desktop offering available through major consumer electronics companies such as LG (with their Chromebase all-in-one) and the Samsung Chromebook 2
  • The problem is that Chrome OS and Android aren't the same as Linux OS on the desktop. It's a complete reinvention. There are few Windows-like productivity apps and no knowledge worker apps designed for keyboard and mouse.
  • All of the experts agree: Windows won every battle for the business user.
  •  
    "For executives in charge of desktop deployments in a large company, Linux OS was once hailed as a saviour for corporate end users. With incredibly low pricing - free, with fee-based support plans, for example - distributions such as Ubuntu Desktop and SUSE Linux Enterprise offered a "good enough" user interface, along with plenty of powerful apps and a rich browser. A few years ago, both Dell and HP jumped on the bandwagon; today, they still offer "developer" and "workstation" models that come pre-loaded with a Linux install. Plus, anyone who follows the Linux market knows that Google has reimagined Linux as a user-friendly tablet interface (the wildly popular Android OS) and a browser-only desktop variant (Chrome OS). Linux also shows up on countless connected home gadgets, fitness trackers, watches and other low-cost devices, mostly because OS costs are so low. The desktop computing OS for end users has failed to capture any attention lately, though. Al Gillen, the programme vice president for servers and system software at IDC, says the Linux OS as a computing platform for end users is at least comatose - and probably dead. Yes, it has reemerged on Android and other devices, but it has gone almost completely silent as a competitor to Windows for mass deployment. As they say, you can hear the crickets chirping."
Gonzalo San Gil, PhD.

Build a home music server with Linux | Opensource.com - 0 views

  •  
    "In this article, I am going to focus on the hardware, software, and configuration issues that we need to resolve to set up a Linux-based music server as part of the home music system. Specifically, I'll look at the Raspberry Pi, Cubox-i, and Fit-PC as options for hosting your digital home music system. Some of the material in this article can equally be applied to my previous article on the Linux laptop as a high-quality music player"
Gonzalo San Gil, PhD.

EU digital ministers demand free data flows, no one-size-fits-all rules | Ars Technica UK - 0 views

  •  
    "The UK's digital economy minister Ed Vaizey has-alongside ministers from 13 other EU countries-demanded that data should flow freely within and beyond the 28-member-state bloc."
Paul Merrell

Commentary: Don't be so sure Russia hacked the Clinton emails | Reuters - 0 views

  • By James Bamford. Last summer, cyber investigators plowing through the thousands of leaked emails from the Democratic National Committee uncovered a clue. A user named “Феликс Эдмундович” modified one of the documents using settings in the Russian language. Translated, his name was Felix Edmundovich, a pseudonym referring to Felix Edmundovich Dzerzhinsky, the chief of the Soviet Union’s first secret-police organization, the Cheka. It was one more link in the chain of evidence pointing to Russian President Vladimir Putin as the man ultimately behind the operation. During the Cold War, when Soviet intelligence was headquartered in Dzerzhinsky Square in Moscow, Putin was a KGB officer assigned to the First Chief Directorate. Its responsibilities included “active measures,” a form of political warfare that included media manipulation, propaganda and disinformation. Soviet active measures, retired KGB Major General Oleg Kalugin told Army historian Thomas Boghart, aimed to discredit the United States and “conquer world public opinion.” As the Cold War has turned into the code war, Putin recently unveiled his new, greatly enlarged spy organization: the Ministry of State Security, taking the name from Joseph Stalin’s secret service. Putin also resurrected, according to James Clapper, the U.S. director of national intelligence, some of the KGB’s old active-measures tactics. On October 7, Clapper issued a statement: “The U.S. Intelligence community is confident that the Russian government directed the recent compromises of emails from U.S. persons and institutions, including from U.S. political organizations.” Notably, however, the FBI declined to join the chorus, according to reports by the New York Times and CNBC. A week later, Vice President Joe Biden said on NBC’s Meet the Press that "we're sending a message" to Putin and "it will be at the time of our choosing, and under the circumstances that will have the greatest impact." When asked if the American public would know a message was sent, Biden replied, "Hope not." Meanwhile, the CIA was asked, according to an NBC report on October 14, “to deliver options to the White House for a wide-ranging ‘clandestine’ cyber operation designed to harass and ‘embarrass’ the Kremlin leadership.” But as both sides begin arming their cyberweapons, it is critical for the public to be confident that the evidence is really there, and to understand the potential consequences of a tit-for-tat cyberwar escalating into a real war.
  • This is a prospect that has long worried Richard Clarke, the former White House cyber czar under President George W. Bush. “It’s highly likely that any war that began as a cyberwar,” Clarke told me last year, “would ultimately end up being a conventional war, where the United States was engaged with bombers and missiles.” The problem with attempting to draw a straight line from the Kremlin to the Clinton campaign is the number of variables that get in the way. For one, there is little doubt about Russian cyber fingerprints in various U.S. campaign activities. Moscow, like Washington, has long spied on such matters. The United States, for example, inserted malware in the recent Mexican election campaign. The question isn’t whether Russia spied on the U.S. presidential election, it’s whether it released the election emails. Then there’s the role of Guccifer 2.0, the person or persons supplying WikiLeaks and other organizations with many of the pilfered emails. Is this a Russian agent? A free agent? A cybercriminal? A combination, or some other entity? No one knows. There is also the problem of groupthink that led to the war in Iraq. For example, just as the National Security Agency, the Central Intelligence Agency and the rest of the intelligence establishment are convinced Putin is behind the attacks, they also believed it was a slam-dunk that Saddam Hussein had a trove of weapons of mass destruction. Consider as well the speed of the political-hacking investigation, followed by a lack of skepticism, culminating in a rush to judgment. After the Democratic committee discovered the potential hack last spring, it called in the cybersecurity firm CrowdStrike in May to analyze the problem.
  • CrowdStrike took just a month or so before it conclusively determined that Russia’s FSB, the successor to the KGB, and the Russian military intelligence organization, GRU, were behind it. Most of the other major cybersecurity firms quickly fell in line and agreed. By October, the intelligence community made it unanimous. That speed and certainty contrasts sharply with a previous suspected Russian hack in 2010, when the target was the Nasdaq stock market. According to an extensive investigation by Bloomberg Businessweek in 2014, the NSA and FBI made numerous mistakes over many months that stretched to nearly a year. “After months of work,” the article said, “there were still basic disagreements in different parts of government over who was behind the incident and why.” There was no consensus, with just a 70 percent certainty that the hack was a cybercrime. Months later, this determination was revised again: It was just a Russian attempt to spy on the exchange in order to design its own. The federal agents also considered the possibility that the Nasdaq snooping was not connected to the Kremlin. Instead, “someone in the FSB could have been running a for-profit operation on the side, or perhaps sold the malware to a criminal hacking group.” Again, that’s why it’s necessary to better understand the role of Guccifer 2.0 in releasing the Democratic National Committee and Clinton campaign emails before launching any cyberweapons.
  • ...2 more annotations...
  • It is strange that clues in the Nasdaq hack were very difficult to find ― as one would expect from a professional, state-sponsored cyber operation. Conversely, the sloppy, Inspector Clouseau-like nature of the Guccifer 2.0 operation, with someone hiding behind a silly Bolshevik cover name, and Russian language clues in the metadata, smacked more of either an amateur operation or a deliberate deception. Then there’s the Shadow Brokers, that mysterious person or group that surfaced in August with its farcical “auction” to profit from a stolen batch of extremely secret NSA hacking tools, in essence, cyberweapons. Where do they fit into the picture? They have a small armory of NSA cyberweapons, and they appeared just three weeks after the first DNC emails were leaked. On Monday, the Shadow Brokers released more information, including what they claimed is a list of hundreds of organizations that the NSA has targeted over more than a decade, complete with technical details. This offers further evidence that their information comes from a leaker inside the NSA rather than the Kremlin. The Shadow Brokers also discussed Obama’s threat of cyber retaliation against Russia. Yet they seemed most concerned that the CIA, rather than the NSA or Cyber Command, was given the assignment. This may be a possible indication of a connection to NSA’s elite group, Tailored Access Operations, considered by many the A-Team of hackers. “Why is DirtyGrandpa threating CIA cyberwar with Russia?” they wrote. “Why not threating with NSA or Cyber Command? CIA is cyber B-Team, yes? Where is cyber A-Team?” Because of legal and other factors, the NSA conducts cyber espionage, Cyber Command conducts cyberattacks in wartime, and the CIA conducts covert cyberattacks.
  • The Shadow Brokers connection is important because Julian Assange, the founder of WikiLeaks, claimed to have received identical copies of the Shadow Brokers cyberweapons even before they announced their “auction.” Did he get them from the Shadow Brokers, from Guccifer, from Russia or from an inside leaker at the NSA? Despite the rushed, incomplete investigation and unanswered questions, the Obama administration has announced its decision to retaliate against Russia. But a public warning about a secret attack makes little sense. If a major cyber crisis happens in Russia sometime in the future, such as a deadly power outage in frigid winter, the United States could be blamed even if it had nothing to do with it. That could then trigger a major retaliatory cyberattack against the U.S. cyber infrastructure, which would call for another reprisal attack ― potentially leading to Clarke’s fear of a cyberwar triggering a conventional war. President Barack Obama has also not taken a nuclear strike off the table as an appropriate response to a devastating cyberattack.
  •  
    Article by James Bamford, the first NSA whistleblower and author of three books on the NSA.
Gonzalo San Gil, PhD.

With the Google Antitrust Case, the European Commission Is Trying to Gerrymander Yes... - 1 views

  •  
    "Earlier this month Google filed its response to the European Commission's Android antitrust complaint, which alleges that Google thwarts its competitors in search, mobile apps, and mobile devices by limiting their access to Android users through self-serving licensing terms. "