Future of the Web: Group items matching "rules" in title, tags, annotations or URL


Marriott fined $600,000 for jamming guest hotspots - SlashGear - 0 views

  • Marriott will cough up $600,000 in penalties after being caught blocking mobile hotspots so that guests would have to pay for its own WiFi services, the FCC has confirmed today. The fine comes after staff at the Gaylord Opryland Hotel and Convention Center in Nashville, Tennessee were found to be jamming individual hotspots and then charging people up to $1,000 per device to get online. Marriott has been operating the center since 2012, and is believed to have been running its interruption scheme since then. The first complaint to the FCC, however, wasn't until March 2013, when one guest warned the Commission that they suspected their hardware had been jammed. An investigation by the FCC's Enforcement Bureau revealed that was, in fact, the case. A WiFi monitoring system installed at the Gaylord Opryland would target access points with de-authentication packets, disconnecting users so that their browsing was interrupted.
  • The FCC deemed Marriott's behavior to contravene Section 333 of the Communications Act, which states that "no person shall willfully or maliciously interfere with or cause interference to any radio communications of any station licensed or authorized by or under this chapter or operated by the United States Government." In addition to the $600,000 civil penalty, Marriott will have to cease blocking guests, hand over details of any access point containment features to the FCC across its entire portfolio of owned or managed properties, and finally file compliance and usage reports each quarter for the next three years.
  • Update: Marriott has issued the following statement on the FCC ruling: "Marriott has a strong interest in ensuring that when our guests use our Wi-Fi service, they will be protected from rogue wireless hotspots that can cause degraded service, insidious cyber-attacks and identity theft. Like many other institutions and companies in a wide variety of industries, including hospitals and universities, the Gaylord Opryland protected its Wi-Fi network by using FCC-authorized equipment provided by well-known, reputable manufacturers. We believe that the Gaylord Opryland's actions were lawful. We will continue to encourage the FCC to pursue a rulemaking in order to eliminate the ongoing confusion resulting from today's action and to assess the merits of its underlying policy."

Awful Spanish Copyright Law May Be Stalled Waiting For EU Court Ruling On Plans To Chan... - 0 views

  •  
    "from the stopping-good-ideas,-stopping-bad-ideas dept Techdirt has written about Spain's new copyright law a couple of times. There, we concentrated on the "Google tax" that threatens the digital commons and open access in that country. But alongside this extremely foolish idea, there was another good one: getting rid of the anachronistic levy on recording devices that was supposed to "compensate" for private copying (as if any such compensation were needed), and paying collecting societies directly out of Spain's state budget. "

Court gave NSA broad leeway in surveillance, documents show - The Washington Post - 0 views

  • Virtually no foreign government is off-limits for the National Security Agency, which has been authorized to intercept information “concerning” all but four countries, according to top-secret documents. The United States has long had broad no-spying arrangements with those four countries — Britain, Canada, Australia and New Zealand — in a group known collectively with the United States as the Five Eyes. But a classified 2010 legal certification and other documents indicate the NSA has been given a far more elastic authority than previously known, one that allows it to intercept through U.S. companies not just the communications of its overseas targets but any communications about its targets as well.
  • The certification — approved by the Foreign Intelligence Surveillance Court and included among a set of documents leaked by former NSA contractor Edward Snowden — lists 193 countries that would be of valid interest for U.S. intelligence. The certification also permitted the agency to gather intelligence about entities including the World Bank, the International Monetary Fund, the European Union and the International Atomic Energy Agency. The NSA is not necessarily targeting all the countries or organizations identified in the certification, the affidavits and an accompanying exhibit; it has only been given authority to do so. Still, the privacy implications are far-reaching, civil liberties advocates say, because of the wide spectrum of people who might be engaged in communication about foreign governments and entities and whose communications might be of interest to the United States.
  • That language could allow for surveillance of academics, journalists and human rights researchers. A Swiss academic who has information on the German government’s position in the run-up to an international trade negotiation, for instance, could be targeted if the government has determined there is a foreign-intelligence need for that information. If a U.S. college professor e-mails the Swiss professor’s e-mail address or phone number to a colleague, the American’s e-mail could be collected as well, under the program’s court-approved rules.
  • ...4 more annotations...
  • On Friday, the Office of the Director of National Intelligence released a transparency report stating that in 2013 the government targeted nearly 90,000 foreign individuals or organizations for foreign surveillance under the program. Some tech-industry lawyers say the number is relatively low, considering that several billion people use U.S. e-mail services.
  • Still, some lawmakers are concerned that the potential for intrusions on Americans’ privacy has grown under the 2008 law because the government is intercepting not just communications of its targets but communications about its targets as well. The expansiveness of the foreign-powers certification increases that concern.
  • In a 2011 FISA court opinion, a judge using an NSA-provided sample estimated that the agency could be collecting as many as 46,000 wholly domestic e-mails a year that mentioned a particular target’s e-mail address or phone number, in what is referred to as “about” collection. “When Congress passed Section 702 back in 2008, most members of Congress had no idea that the government was collecting Americans’ communications simply because they contained a particular individual’s contact information,” Sen. Ron Wyden (D-Ore.), who has co-sponsored ­legislation to narrow “about” collection authority, said in an e-mail to The Washington Post. “If ‘about the target’ collection were limited to genuine national security threats, there would be very little privacy impact. In fact, this collection is much broader than that, and it is scooping up huge amounts of Americans’ wholly domestic communications.”
  • The only reason the court has oversight of the NSA program is that Congress in 2008 gave the government a new authority to gather intelligence from U.S. companies that own the Internet cables running through the United States, former officials noted. Edgar, the former privacy officer at the Office of the Director of National Intelligence, said ultimately he believes the authority should be narrowed. “There are valid privacy concerns with leaving these collection decisions entirely in the executive branch,” he said. “There shouldn’t be broad collection, using this authority, of foreign government information without any meaningful judicial role that defines the limits of what can be collected.”

Can C.E.O. Satya Nadella Save Microsoft? | Vanity Fair - 0 views

  • The new world of computing is a radical break from the past. That’s because of the growth of mobile devices and cloud computing. In the old world, corporations owned and ran Windows P.C.’s and Windows servers in their own facilities, with the necessary software installed on them. Everyone used Windows, so everything was developed for Windows. It was a virtuous circle for Microsoft.
  • Now the processing power is in the cloud, and very sophisticated applications, from e-mail to tools you need to run a business, can be run by logging onto a Web site, not from pre-installed software. In addition, the way we work (and play) has shifted from P.C.’s to mobile devices—where Android and Apple’s iOS each outsell Windows by more than 10 to 1. Why develop software to run on Windows if no one is using Windows? Why use Windows if nothing you want can run on it? The virtuous circle has turned vicious.
  • Part of why Microsoft failed with devices is that competitors upended its business model. Google doesn’t charge for the operating system. That’s because Google makes its money on search. Apple can charge high prices because of the beauty and elegance of its devices, where the software and hardware are integrated in one gorgeous package. Meanwhile, Microsoft continued to force outside manufacturers, whose products simply weren’t as compelling as Apple’s, to pay for a license for Windows. And it didn’t allow Office to be used on non-Windows phones and tablets. “The whole philosophy of the company was Windows first,” says Heather Bellini, an analyst at Goldman Sachs. Of course it was: that’s how Microsoft had always made its money.
  • ...18 more annotations...
  • Right now, Windows itself is fragmented: applications developed for one Windows device, say a P.C., don’t even necessarily work on another Windows device. And if Microsoft develops a new killer application, it almost has to be released for Android and Apple phones, given their market dominance, thereby strengthening those eco-systems, too.
  • At its core, Azure uses Windows server technology. That helps existing Windows applications run seamlessly on Azure. Technologists sometimes call what Microsoft has done a “hybrid cloud” because companies can use Azure alongside their pre-existing on-site Windows servers. At the same time, Nadella also to some extent has embraced open-source software—free code that doesn’t require a license from Microsoft—so that someone could develop something using non-Microsoft technology, and it would run on Azure. That broadens Azure’s appeal.
  • “In some ways the way people think about Bill and Steve is almost a Rorschach test.” For those who romanticize the Gates era, Microsoft’s current predicament will always be Ballmer’s fault. For others, it’s not so clear. “He left Steve holding a big bag of shit,” the former executive says of Gates. In the year Ballmer officially took over, Microsoft was found to be a predatory monopolist by the U.S. government and was ordered to split into two; the cost of that to Gates and his company can never be calculated. In addition, the dotcom bubble had burst, causing Microsoft stock to collapse, which resulted in a simmering tension between longtime employees, whom the company had made rich, and newer ones, who had missed the gravy train.
  • Nadella lived this dilemma because his job at Microsoft included figuring out the cloud-based future while maintaining the highly profitable Windows server business. And so he did a bunch of things that were totally un-Microsoft-like. He went to talk to start-ups to find out why they weren’t using Microsoft. He put massive research-and-development dollars behind Azure, a cloud-based platform that Microsoft had developed in Skunk Works fashion, which by definition took resources away from the highly profitable existing business.
  • They even have a catchphrase: “Re-inventing productivity.”
  • Microsoft’s historical reluctance to open Windows and Office is why it was such a big deal when in late March, less than two months after becoming C.E.O., Nadella announced that Microsoft would offer Office for Apple’s iPad. A team at the company had been working on it for about a year. Ballmer says he would have released it eventually, but Nadella did it immediately. Nadella also announced that Windows would be free for devices smaller than nine inches, meaning phones and small tablets. “Now that we have 30 million users on the iPad using it, that is 30 million people who never used Office before [on an iPad,]” he says. “And to me that’s what really drives us.” These are small moves in some ways, and yet they are also big. “It’s the first time I have listened to a senior Microsoft executive admit that they are behind,” says one institutional investor. “The fact that they are giving away Windows, their bread and butter for 25 years—it is quite a fundamental change.”
  • And whoever does the best job of building the right software experiences to give both organizations and individuals time back so that they can get more out of their time, that’s the core of this company—that’s the soul. That’s what Bill started this company with. That’s the Office franchise. That’s the Windows franchise. We have to re-invent them. . . . That’s where this notion of re-inventing productivity comes from.”
  • Ballmer might be a complicated character, but he has nothing on Gates, whose contradictions have long fascinated Microsoft-watchers. He is someone who has no problem humiliating individuals—he might not even notice—but who genuinely cares deeply about entire populations and is deeply loyal. He is generous in the biggest ways imaginable, and yet in small things, like picking up a lunch tab, he can be shockingly cheap. He can’t make small talk and can come across as totally lacking in E.Q. “The rules of human life that allow you to get along are not complicated,” says one person who knows Gates. “He could write a book on it, but he can’t do it!”
  • At the Microsoft board meeting in late June 2013, Ballmer announced he had a handshake deal with Nokia’s management to buy the company, pending the Microsoft board’s approval, according to a source close to the events. Ballmer thought he had it and left before the post-board-meeting dinner to attend his son’s middle-school graduation. When he came back the next day, he found that the board had pulled a coup: they informed him they weren’t doing the deal, and it wasn’t up for discussion. For Ballmer, it seems, the unforgivable thing was that Gates had been part of the coup, which Ballmer saw as the ultimate betrayal.
  • what is scarce in all of this abundance is human attention
  • And the original idea of having great software people and broad software products and Office being the primary tool that people look to across all these devices, that’s as true today and as strong as ever.”
  • Meeting Room Plus
  • But he combines that with flashes of insight and humor that leave some wondering whether he can’t do it or simply chooses not to, or both. His most pronounced characteristic shouldn’t be simply labeled a competitive streak, because it is really a fierce, deep need to win. The dislike it bred among his peers in the industry is well known—“Silicon Bully” was the title of an infamous magazine story about him. And yet he left Microsoft for the philanthropic world, where there was no one to bully, only intractable problems to solve.
  • “The Irrelevance of Microsoft” is actually the title of a blog post by an analyst named Benedict Evans, who works at the Silicon Valley venture-capital firm Andreessen Horowitz. On his blog, Evans pointed out that Microsoft’s share of all computing devices that we use to connect to the Internet, including P.C.’s, phones, and tablets, has plunged from 90 percent in 2009 to just around 20 percent today. This staggering drop occurred not because Microsoft lost ground in personal computers, on which its software still dominates, but rather because it has failed to adapt its products to smartphones, where all the growth is, and tablets.
  • The board told Ballmer they wanted him to stay, he says, and they did eventually agree to a slightly different version of the deal. In September, Microsoft announced it was buying Nokia’s devices-and-services business for $7.2 billion. Why? The board finally realized the downside: without Nokia, Microsoft was effectively done in the smartphone business. But, for Ballmer, the damage was done, in more ways than one. He now says it became clear to him that despite the lack of a new C.E.O. he couldn’t stay. Cultural change, he decided, required a change at the top, and, he says,“there was too much water under the bridge with this board.” The feeling was mutual. As a source close to Microsoft says, no one, including Gates, tried to stop him from quitting.
  • in Wall Street’s eyes, Nadella can do no wrong. Microsoft’s stock has risen 30 percent since he became C.E.O., increasing its market value by $87 billion. “It’s interesting with Satya,” says one person who observes him with investors. “He is not a business guy or a financial analyst, but he finds a common language with investors, and in his short tenure, they leave going, Wow.” But the honeymoon is the easy part.
  • “He was so publicly and so early in life defined as the brilliant guy,” says a person who has observed him. “Anything that threatens that, he becomes narcissistic and defensive.” Or as another person puts it, “He throws hissy fits when he doesn’t get his way.”
  • Around three-quarters of Microsoft’s profits come from the two fabulously successful products on which the company was built: the Windows operating system, which essentially makes personal computers run, and Office, the suite of applications that includes Word, Excel, and PowerPoint. Financially speaking, Microsoft is still extraordinarily powerful. In the last 12 months the company reported sales of $86.83 billion and earnings of $22.07 billion; it has $85.7 billion of cash on its balance sheet. But the company is facing a confluence of threats that is all the more staggering given Microsoft’s sheer size. Competitors such as Google and Apple have upended Microsoft’s business model, making it unclear where Windows will fit in the world, and even challenging Office. In the Valley, there are two sayings that everyone regards as truth. One is that profits follow relevance. The other is that there’s a difference between strategic position and financial position. “It’s easy to be in denial and think the financials reflect the current reality,” says a close observer of technology firms. “They do not.”
  •  
    Awesome article describing the history of Microsoft as seen through the lives of its three CEOs: Bill Gates, Steve Ballmer, and Satya Nadella.

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
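  • For concreteness, here is a minimal sketch of the kind of clean, valid XHTML document the argument assumes; the title and paragraph text are placeholders:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>Chapter 1</title>
        </head>
        <body>
          <h1>Chapter 1</h1>
          <p>Every element is properly nested and closed, so any XML tool
             can parse the file and any browser can render it.</p>
        </body>
      </html>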
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
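  • As a sketch of that kind of extensibility (the class names here are invented for illustration, not a published microformat), a publisher could mark an epigraph or a summary without leaving XHTML:

      <div class="chapter">
        <p class="epigraph">An opening quotation, set apart from the body.</p>
        <p class="summary">A one-paragraph abstract of the chapter.</p>
        <p>Ordinary body paragraphs need no class attribute at all.</p>
      </div>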
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
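  • Unzipping an ePub bears this out. Alongside the XHTML content sit a "mimetype" file and a package file (commonly OEBPS/content.opf) holding metadata, manifest, and spine; the standard META-INF/container.xml that points to the package looks like this (the OEBPS/ path is conventional, not required):

      <?xml version="1.0"?>
      <container version="1.0"
          xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>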
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
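  • A before-and-after sketch of that cleanup (the class name is representative of this kind of export, not the exact value InDesign produces):

      <!-- Before: the Digital Editions export tags list items as paragraphs -->
      <p class="bullet-item">First point</p>
      <p class="bullet-item">Second point</p>

      <!-- After: proper XHTML structure, independent of any one tool -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>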
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
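  • The core of such a script, sketched under two assumptions: the ICML element names follow Adobe's published examples (ParagraphStyleRange, CharacterStyleRange, Content, Br), and the style names match whatever the InDesign template defines. A real version adds templates for emphasis, links, and lists, and can be run with any XSLT processor, e.g. xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml:

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <xsl:output method="xml" indent="yes"/>

        <!-- Wrap the converted body content in a document root. -->
        <xsl:template match="/">
          <Document DOMVersion="7.0">
            <xsl:apply-templates select="//xhtml:body/*"/>
          </Document>
        </xsl:template>

        <!-- Map an XHTML paragraph to an ICML paragraph range;
             "ParagraphStyle/Body" is a placeholder for the template's style. -->
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>

        <!-- Headings work the same way, with a different style name. -->
        <xsl:template match="xhtml:h1">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Heading1">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>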
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
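  • The package file that does the wrapping is mostly boilerplate. A trimmed sketch of an EPUB 2 content.opf (file names and the identifier are illustrative):

      <?xml version="1.0" encoding="UTF-8"?>
      <package version="2.0" unique-identifier="bookid"
          xmlns="http://www.idpf.org/2007/opf">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:placeholder-identifier</dc:identifier>
        </metadata>
        <manifest>
          <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
          <item id="css" href="book.css" media-type="text/css"/>
          <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
        </manifest>
        <spine toc="ncx">
          <itemref idref="ch01"/>
        </spine>
      </package>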
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to

NSA Spying Relies on AT&T's 'Extreme Willingness to Help' - ProPublica - 0 views

  • The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T. While it has been long known that American telecommunications companies worked closely with the spy agency, newly disclosed NSA documents show that the relationship with AT&T has been considered unique and especially productive. One document described it as “highly collaborative,” while another lauded the company’s “extreme willingness to help.”
  • AT&T’s cooperation has involved a broad range of classified activities, according to the documents, which date from 2003 to 2013. AT&T has given the NSA access, through several methods covered under different legal rules, to billions of emails as they have flowed across its domestic networks. It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T. The NSA’s top-secret budget in 2013 for the AT&T partnership was more than twice that of the next-largest such program, according to the documents. The company installed surveillance equipment in at least 17 of its Internet hubs on American soil, far more than its similarly sized competitor, Verizon. And its engineers were the first to try out new surveillance technologies invented by the eavesdropping agency. One document reminds NSA officials to be polite when visiting AT&T facilities, noting: “This is a partnership, not a contractual relationship.” The documents, provided by the former agency contractor Edward Snowden, were jointly reviewed by The New York Times and ProPublica.
  • It is not clear if the programs still operate in the same way today. Since the Snowden revelations set off a global debate over surveillance two years ago, some Silicon Valley technology companies have expressed anger at what they characterize as NSA intrusions and have rolled out new encryption to thwart them. The telecommunications companies have been quieter, though Verizon unsuccessfully challenged a court order for bulk phone records in 2014. At the same time, the government has been fighting in court to keep the identities of its telecom partners hidden. In a recent case, a group of AT&T customers claimed that the NSA’s tapping of the Internet violated the Fourth Amendment protection against unreasonable searches. This year, a federal judge dismissed key portions of the lawsuit after the Obama administration argued that public discussion of its telecom surveillance efforts would reveal state secrets, damaging national security.

Florida Man, Accused of Terrorism Based on Book Collection, Set Free - The Intercept - 1 views

    • Gonzalo San Gil, PhD.
       
       [# ! Via Janet Innes-Kirkwood's LinkedIn]
  •  
    [The U.S. government had produced "snippets of information from various sources, out of context, to weave together a narrative of terrorist ideation," a Florida judge said Friday, ordering the release of Marcus Dwayne Robertson, an Orlando-based Islamic scholar who stood accused of supporting terrorism. ...]

Kano - The Kano Kit [Via x Open source Rules...] - 1 views

    • Gonzalo San Gil, PhD.
       
      [# ! Via, thanks x #share, # ! Robert Ryan -> FB's 'P2P Community...]
  •  
    "Kano is a computer you build and code yourself. Lego simple, Raspberry Pi powerful, and hugely fun."

Big Tech Does Not Speak for the Internet | Electronic Frontier Foundation - 0 views

  •  
    "Too often, media and policymakers take seriously the claim of government officials that secret trade deals like the Trans-Pacific Partnership (TPP) promote and protect "Internet freedom," even though the traditional guardians of Internet freedom-users and innovators who rely on it-have said precisely the opposite."

Movie producers call for an end to the 'Six Strikes' rule [# ! Note to previous Article... - 1 views

    • Gonzalo San Gil, PhD.
       
      # ! Do You remember Yesterday... https://gonzalosangil.wordpress.com/2015/09/04/isps-and-rightsholders-extend-six-strikes-antipiracy-scheme-torrentfreak/ ...? # ! If ISPs and Rightsholders are unable to reach an agreement with Producers... what kind of 'Copyright Enforcement' is this...?
  •  
    "It may sound like the fictional government department that Patricia Arquette works for in CSI: Cyber, but that's not what the Internet Security Task Force is for. In fact, the ITSF is a group of independent film companies that have banded together to call for immediate reform on how internet piracy is handled. "

New Leak Of Final TPP Text Confirms Attack On Freedom Of Expression, Public Health - 0 views

  • Offering a first glimpse of the secret 12-nation “trade” deal in its final form—and fodder for its growing ranks of opponents—WikiLeaks on Friday published the final negotiated text for the Trans-Pacific Partnership (TPP)’s Intellectual Property Rights chapter, confirming that the pro-corporate pact would harm freedom of expression by bolstering monopolies and injure public health by blocking patient access to lifesaving medicines. The document is dated October 5, the same day it was announced in Atlanta, Georgia that the member states to the treaty had reached an accord after more than five years of negotiations. Aside from the WikiLeaks publication, the vast majority of the mammoth deal’s contents are still being withheld from the public—which a WikiLeaks press statement suggests is a strategic move by world leaders to forestall public criticism until after the Canadian election on October 19. Initial analyses suggest that many of the chapter’s more troubling provisions, such as broader patent and data protections that pharmaceutical companies use to delay generic competition, have stayed in place since draft versions were leaked in 2014 and 2015. Moreover, it codifies a crackdown on freedom of speech with rules allowing widespread internet censorship.

USA Freedom Act Passes: What We Celebrate, What We Mourn, and Where We Go Fro... - 0 views

  • The Senate passed the USA Freedom Act today by 67-32, marking the first time in over thirty years that both houses of Congress have approved a bill placing real restrictions and oversight on the National Security Agency’s surveillance powers. The weakening amendments to the legislation proposed by NSA defender Senate Majority Leader Mitch McConnell were defeated, and we have every reason to believe that President Obama will sign USA Freedom into law. Technology users everywhere should celebrate, knowing that the NSA will be a little more hampered in its surveillance overreach, and both the NSA and the FISA court will be more transparent and accountable than they were before the USA Freedom Act. It’s no secret that we wanted more. In the wake of the damning evidence of surveillance abuses disclosed by Edward Snowden, Congress had an opportunity to champion comprehensive surveillance reform and undertake a thorough investigation, like it did with the Church Committee. Congress could have tried to completely end mass surveillance and taken numerous other steps to rein in the NSA and FBI. This bill was the result of compromise and strong leadership by Sens. Patrick Leahy and Mike Lee and Reps. Robert Goodlatte, Jim Sensenbrenner, and John Conyers. It’s not the bill EFF would have written, and in light of the Second Circuit's thoughtful opinion, we withdrew our support from the bill in an effort to spur Congress to strengthen some of its privacy protections and out of concern about language added to the bill at the behest of the intelligence community. Even so, we’re celebrating. We’re celebrating because, however small, this bill marks a day that some said could never happen—a day when the NSA saw its surveillance power reduced by Congress. And we’re hoping that this could be a turning point in the fight to rein in the NSA.

Belgium sues Facebook over illegal Privacy Violations of Users and Non-Users | nsnbc in... - 0 views

  • The Belgian government will be suing Facebook. The Commission for the Protection of Privacy states that Facebook violates Belgian and EU law by tracking systems that target both Facebook users as well as non-Facebook users. Facebook is known for cooperating with the U.S.’ National Security Agency. 
  • The Belgian privacy watchdog’s case against the internet giant Facebook will be heard at a court in Brussels on Thursday. The Commission has repeatedly requested that Facebook should comply with Belgian and EU law. Facebook failed to comply, and the Commission has no power to enforce the law; hence the decision to sue Facebook to attain a court ruling. The President of the Commission for the Protection of Privacy, Willem Debeuckelaere, told the press that: “Facebook treats its users’ private lives without respect and that needs tackling. It’s not because we want to start a lawsuit over this, but we cannot continue to negotiate through other means... We want a judge to impose our recommendations. These recommendations are chiefly aimed at protecting internet users who are not Facebook members.”
  • The Belgian privacy watchdog alleges that Facebook tracks the web browsing of all visitors, including those who have specifically turned the tracking function off; this gathering of private information allegedly also includes those who do not have a Facebook account. Moreover, the Commission claims that Facebook has the capability to surveil computers without consent, even when users are logged out, and that Facebook can monitor every PC of users that use websites with Facebook plugins. The capability to monitor both Facebook users and non-Facebook users allegedly functions via cookies that store information about users’ internet activities, including preferential settings of websites and which websites internet users have visited. The Commission claims that Facebook installs these cookies on all computers that visit websites that, for example, have a Facebook plugin to share internet content. That includes the computers of persons who do not make use of Facebook’s “share” or “like” button.
  • ...1 more annotation...
  • In other words, Facebook has the capacity to record your browser settings, and to record that you have read this article or any other page on any website that carries a Facebook "share" button, whether you "like" it or not. The Commission's lawsuit against Facebook is of particular importance because the corporation is known for its cooperation with the United States' National Security Agency (NSA). While the lawsuit is of particular interest to Belgian and EU citizens, it also sheds light on Facebook's monitoring of U.S. citizens.
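The tracking mechanism the Commission describes is easy to sketch. Below is a purely illustrative toy, not Facebook's actual code: the server, port, endpoint, and cookie name are all hypothetical. It shows how any third-party widget server can plant a persistent cookie and log the Referer header of every page that embeds its button.

```typescript
// Illustrative toy only, NOT Facebook's code. A hypothetical third-party
// "like button" server: every page that embeds
//   <script src="http://localhost:8080/button.js"></script>
// makes each visitor's browser call this endpoint, cookie attached.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

createServer((req, res) => {
  // Reuse the visitor's existing tracking cookie, or mint a fresh ID.
  const match = /visitor=([0-9a-f-]+)/.exec(req.headers.cookie ?? "");
  const visitorId = match ? match[1] : randomUUID();

  if (!match) {
    // First contact anywhere on the web: plant a two-year cookie,
    // whether or not this visitor has an account with the tracker.
    res.setHeader(
      "Set-Cookie",
      `visitor=${visitorId}; Max-Age=63072000; Path=/`
    );
  }

  // The Referer header names the embedding page, so one log line per
  // request accumulates a cross-site browsing history keyed to the ID.
  console.log(`${visitorId} viewed ${req.headers.referer ?? "(unknown page)"}`);

  res.setHeader("Content-Type", "application/javascript");
  res.end('document.write("[like button]");'); // the visible widget
}).listen(8080);
```

Because the browser sends the cookie and the embedding page's address with every widget request, the tracker accumulates a cross-site browsing history even for visitors who never click the button or hold an account.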
3More

News - Antitrust - Competition - European Commission - 0 views

  • Google inquiries: the Commission accuses Google of systematically favouring its own shopping comparison service. Infographic: Google might be favouring 'Google Shopping' when displaying general search results.
  • Antitrust: Commission sends Statement of Objections to Google on comparison shopping service; opens separate formal investigation on Android (Wed, 15 Apr 2015 10:00:00 GMT)
  • Antitrust: Commission opens formal investigation against Google in relation to Android mobile operating system (Wed, 15 Apr 2015 10:00:00 GMT)
  • Antitrust: Commission sends Statement of Objections to Google on comparison shopping service (Wed, 15 Apr 2015 10:00:00 GMT)
  • Statement by Commissioner Vestager on antitrust decisions concerning Google (Wed, 15 Apr 2015 11:39:00 GMT)
  •  
    The more interesting issue to me is the accusation that Google violates antitrust law by boosting its comparison shopping results in its general search results, unfairly disadvantaging competing shopping services and not delivering the best results to users. What's interesting to me is that the Commission is attempting to portray general search as a separate market from comparison shopping search, accusing Google of attempting to leverage its general search monopoly into the separate comparison shopping search market. At first blush, I'm not convinced that these are, or should be regarded as, separable markets. But the ramifications are enormous. If that is a separate market, then arguably so is Google's book search, its Google Scholar search, its definition search, its site search, etc. It isn't clear to me how one might draw a defensible line that does not also sweep in every new search feature as a separate market.
1More

Fourth Circuit adopts mosaic theory, holds that obtaining "extended" cell-site records ... - 0 views

  • A divided Fourth Circuit has ruled, in United States v. Graham, that “the government conducts a search under the Fourth Amendment when it obtains and inspects a cell phone user’s historical [cell-site location information] for an extended period of time” and that obtaining such records requires a warrant. The new case creates multiple circuit splits, which may lead to Supreme Court review. Specifically, the decision creates a clear circuit split with the Fifth and Eleventh Circuits on whether acquiring cell-site records is a search. It also creates an additional clear circuit split with the Eleventh Circuit on whether, if cell-site records are protected, a warrant is required. Finally, it also appears to deepen an existing split between the Fifth and Third Circuits on whether the Stored Communications Act allows the government to choose whether to obtain an intermediate court order or a warrant for cell-site records. This post will cover the reasoning of the new case in detail.
7More

Siding with HTML over XHTML, My Decision to Switch - Monday By Noon - 0 views

  • Publishing content on the Web is in no way limited to professional developers or designers; much of the reason the net is so active is that anyone can make a website. Sure, we (as knowledgeable professionals or hobbyists) all hope to make the Web a better place by doing our part in publishing documents with semantically rich, valid markup, but the reality is that those documents are rare. It's important to keep in mind the true nature of the Internet: an open platform for information sharing.
  • XHTML2 has some very good ideas that I hope can become part of the web. However, it’s unrealistic to think that all web authors will switch to an XML-based syntax which demands that browsers stop processing the document on the first error. XML’s draconian policy was an attempt to clean up the web. This was done around 1996 when lots of invalid content entered the web. CSS took a different approach: instead of demanding that content isn’t processed, we defined rules for how to handle the undefined. It’s called “forward-compatible parsing” and means we can add new constructs without breaking the old. So, I don’t think XHTML is a realistic option for the masses. HTML 5 is it.
    • Gary Edwards
       
      Great quote from CSS expert Hakon Wium Lie. (A short parsing sketch contrasting the two error-handling models follows this item's notes.)
  • @marbux: Of course I disagree with your interop assessment, but I wondered how it is that you're missing the point. I think you confuse web applications with the legacy desktop client/server application model, and that confusion leads to the mistake of trying to transfer the desktop document model to one that must adequately service advancing web applications.
  •  
    A CMS expert argues for HTML over XHTML, explaining his reasons for switching. Excellent read! He nails the basics. For similar reasons, we moved from ODF to ePUB, then to CDF, and finally to the advanced WebKit document model, where wikiWORD will make its stand.
  •  
    See also my comment on the same web page that explains why HTML 5 is NOT it for document exchange between web editing applications.
  •  
    Response to marbux supporting the WebKit layout/document model. Marbux argues that HTML5 is not interoperable and CSS2 near useless, and that HTML5 fails regarding the interop web applications need. I respond by arguing that the only way to look at web applications is to recognize that the browser layout engine is the web application layout engine! Web applications are actually written to the browser layout/document model, or written to take advantage of browser plug-in capabilities. The interoperability marbux seeks is tied directly to the browser layout engine; in this context, the web format is simply a reflection of that layout engine. If there's an interop problem, it comes from browser madness differentials. The good news is that there are all kinds of efforts to close the browser gap, including WHATWG HTML5, CSS3, the W3C DOM, JavaScript libraries, Google GWT (Java to JavaScript), Yahoo GUI, and my favorite, WebKit. The bad news is that the clock is ticking. Microsoft has pulled the trigger, and the great migration of MSOffice client/server systems to the MS WebStack-Mesh architecture has begun. Key to this transition are the WPF-.NET proprietary formats, protocols and interfaces such as XAML, Silverlight, LINQ, and Smart Tags. New business processes are being written, and old legacy desktop-bound processes are being transitioned to this emerging platform. The fight for the Open Web is on, with Microsoft threatening to transition its entire business desktop monopoly to a Web platform it owns. ~ge~
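Returning to the Lie quote above: the difference between XML's draconian error handling and HTML's forward-compatible parsing can be demonstrated directly in a browser. A minimal console sketch, assuming nothing beyond the standard DOMParser API in a modern browser:

```typescript
// Run in any modern browser console. The same malformed markup is fed to
// an XML parser and an HTML parser via the standard DOMParser API.
const malformed = "<p>first paragraph<br><p>second";
const parser = new DOMParser();

// XHTML/XML mode: draconian error handling. The unclosed <br> (and the
// second root element) are well-formedness errors, so parsing aborts and
// the result is a document holding a <parsererror> element, not the content.
const asXhtml = parser.parseFromString(malformed, "application/xhtml+xml");
console.log("XML parse failed:", asXhtml.querySelector("parsererror") !== null); // true

// HTML mode: forward-compatible recovery. The parser supplies the missing
// close tags itself and still yields a usable tree with two paragraphs.
const asHtml = parser.parseFromString(malformed, "text/html");
console.log("HTML paragraphs recovered:", asHtml.querySelectorAll("p").length); // 2
```

The same broken string kills the XML parse outright but yields a usable DOM in HTML mode, which is the practical reason Lie concludes that an XML-based syntax was never a realistic option for the masses.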
2More

Microsoft Ordered to Delete Browser - NYTimes.com - 0 views

  • BRUSSELS (AP) — The European Union said Friday that Microsoft’s practice of selling the Internet Explorer browser together with its Windows operating system violated the union’s antitrust rules. It ordered the software giant to untie the browser from its operating system in the 27-nation union, enabling makers of rival browsers to compete fairly.
  •  
    The Times goes further than the DG Competition announcement, saying that Microsoft has been ordered to untie MSIE from Windows throughout the E.U. No source is attributed for the statement. The DG Competition announcement does not state what remedy it proposes to order, so take this report with a grain of salt. The Times is well capable of error.
4More

Bloomberg.com: News - 0 views

  • Christine A. Varney, nominated by President Barack Obama to be the U.S.’s next antitrust chief, has described Google Inc. as a monopolist that will dominate online computing services the way Microsoft Corp. ruled software.
  • Varney, 53, lobbied the Clinton administration on behalf of Netscape Communications Corp. to urge antitrust enforcers to sue Microsoft.
  • Still, Google is "quickly gathering market power in what I would call an online computing environment in the clouds," she said, using a software industry term for software based on the Internet rather than on individual personal computers. "When all our enterprises move to computing in the clouds and there is a single firm that is offering a comprehensive solution," Varney said, "you are going to see the same repeat of Microsoft."
  • ...1 more annotation...
  • As in the Microsoft case, “there will be companies that will begin to allege that Google is discriminating” against them by “not allowing their products to interoperate with Google’s products,” Varney said.
5More

EU considers spending €1 billion for satellite broadband technology - Interna... - 0 views

  • The €200 billion economic rescue plan being considered this week by European Union leaders includes a proposal to spend €1 billion on bringing high-speed Internet access to rural areas. The proposal is likely to pit the Continent's telecommunications operators against satellite companies, which say they are uniquely suited to expand the broadband, or high-speed, network to underserved parts of Eastern Europe and the Alps by the end of 2010.
  • But support for the plan by EU government leaders, who begin a two-day meeting Thursday to consider the rescue plan, is not assured. The money would come from unspent funds in the current EU budget, which under EU rules normally revert to member countries. Germany, which contributes the most to the EU budget and stands to get the largest refund if the project is rejected, opposes the expenditure.
  • Across the EU, 21.7 percent of residents had broadband Internet access in July, according to the commission; 107.6 million received service from a telephone DSL line or a cable television connection and 130,592 via satellite. Only 6 percent of EU residents on average received broadband via mobile phones.
  • ...1 more annotation...
  • Until now, Baugh said, satellite broadband had been hindered by the relatively high cost of the hardware consumers needed to gain access to the service. But recent advances have lowered the cost to roughly €400, including installation, from several thousand euros a few years ago. At about €30 a month, service packages are comparable to those of DSL and cable.
  •  
    A billion euros is chicken feed compared to other portions of the E.U. economic stimulus initiatives in the works to respond to the major recession under way. Still, this could be a significant foot in the door for satellite broadband in the E.U., perhaps enough infrastructure build-out to mount a more serious challenge to cable and telephony broadband. But I wonder whether only a billion euros buys enough redundancy to gracefully handle the death of a satellite once far more broadband users depend on the service.