
Future of the Web: Group items tagged "due"


Gary Edwards

Running beyond the browser - 0 views

  •  
    Although there are many ways to slice this discussion, it might be useful to compare Adobe RIA and Microsoft Silverlight RIA in terms of web-ready, highly interactive documents. The Adobe RIA story is quite different from that of Silverlight. Both, however, exploit the shortcomings of browsers; shortcomings that are in large part, I think, due to the disconnect the browser community has had with the W3C. The W3C forked off the HTML-CSS path, putting the bulk of its attention into XML, RDF and the Semantic Web. The web developer community stayed the course, pushing the HTML-CSS envelope with JavaScript and some rather stunning CSS magic. Adobe seems to have picked up the HTML-CSS-JavaScript trail with a Microsoft innovation to take advantage of the browser cache: DHTML (Dynamic HTML). DHTML morphed into AJAX (which is so wild as to have difficulty scaling), and AJAX was tamed by the Adobe-Apple sponsored WebKit. Most people see WebKit as a browser-specific layout engine and compare it to IE and Gecko on those terms. I would argue, however, that WebKit is both a document model and a document format. For sure it's a framework for very advanced HTML-CSS-DOM-JavaScript work. Because the Adobe AIR run-time is based on the WebKit layout engine, WebKit documents can hit on all cylinders across any browser able to implement the AIR plug-in. Meaning, web developers and web content providers need only target the WebKit document model to attain the interactive-access ubiquity all seek. Very cool. Let me also add that the WebKit HTML-CSS-DOM-JavaScript model is capable of "fixed/flow" representation. I'll explain the importance of "fixed/flow" in a moment, but think about how the iPhone renders a web page and you'll understand the "flow" side of this equation.
Paul Merrell

Microsoft to host data in Germany to evade US spying | Naked Security - 0 views

  • Microsoft's new plan to keep the US government's hands off its customers' data: Germany will be a safe harbor in the digital privacy storm. Microsoft on Wednesday announced that beginning in the second half of 2016, it will give foreign customers the option of keeping data in new European facilities that, at least in theory, should shield customers from US government surveillance. It will cost more, according to the Financial Times, though pricing details weren't forthcoming. Microsoft Cloud - including Azure, Office 365 and Dynamics CRM Online - will be hosted from new datacenters in the German regions of Magdeburg and Frankfurt am Main. Access to data will be controlled by what the company called a German data trustee: T-Systems, a subsidiary of the independent German company Deutsche Telekom. Without the permission of Deutsche Telekom or customers, Microsoft won't be able to get its hands on the data. If it does get permission, the trustee will still control and oversee Microsoft's access.
  • Microsoft CEO Satya Nadella dropped the word "trust" into the company's statement: Microsoft’s mission is to empower every person and every organization on the planet to achieve more. Our new datacenter regions in Germany, operated in partnership with Deutsche Telekom, will not only spur local innovation and growth, but offer customers choice and trust in how their data is handled and where it is stored.
  • On Tuesday, at the Future Decoded conference in London, Nadella also announced that Microsoft would, for the first time, be opening two UK datacenters next year. The company's also expanding its existing operations in Ireland and the Netherlands. Officially, none of this has anything to do with the long-drawn-out squabbling over the transatlantic Safe Harbor agreement, which the EU's highest court struck down last month, calling the agreement "invalid" because it didn't protect data from US surveillance. No, Nadella said, the new datacenters and expansions are all about giving local businesses and organizations "transformative technology they need to seize new global growth." But as Diginomica reports, Microsoft EVP of Cloud and Enterprise Scott Guthrie followed up his boss’s comments by saying that yes, the driver behind the new datacenters is to let customers keep data close: We can guarantee customers that their data will always stay in the UK. Being able to very concretely tell that story is something that I think will accelerate cloud adoption further in the UK.
  • Microsoft and T-Systems' lawyers may well think that storing customer data in a German trustee data center will protect it from the reach of US law, but for all we know, that could be wishful thinking. Forrester cloud computing analyst Paul Miller: To be sure, we must wait for the first legal challenge. And the appeal. And the counter-appeal. As with all new legal approaches, we don’t know it is watertight until it is challenged in court. Microsoft and T-Systems’ lawyers are very good and say it's watertight. But we can be sure opposition lawyers will look for all the holes. By keeping data offshore - particularly in Germany, which has strong data privacy laws - Microsoft could avoid the situation it's now facing with the US demanding access to customer emails stored on a Microsoft server in Dublin. The US has argued that Microsoft, as a US company, comes under US jurisdiction, regardless of where it keeps its data.
  • Running away to Germany isn't a groundbreaking move; other US cloud services providers have already pledged expansion of their EU presences, including Amazon's plan to open a UK datacenter in late 2016 that will offer what CTO Werner Vogels calls "strong data sovereignty to local users." Other big data operators that have followed suit: Salesforce, which has already opened datacenters in the UK and Germany and plans to open one in France next year, as well as new EU operations pledged for the new year by NetSuite and Box. Can Germany keep the US out of its datacenters? Can Ireland? Time, and court cases, will tell.
  •  
    The European Union's Court of Justice decision in the Safe Harbor case --- and Edward Snowden --- are now officially downgrading the U.S. as a cloud data center location. NSA is good business for Europeans looking to displace American cloud service providers, as evidenced by Microsoft's decision. The legal test is whether Microsoft has "possession, custody, or control" of the data. From the info given in the article, it seems that Microsoft has done its best to dodge that bullet by moving data centers to Germany and placing their data under the control of a European company. Do ownership of the hardware and profits from its rent mean that Microsoft still has "possession, custody, or control" of the data? The fine print of the agreement with Deutsche Telekom and the customer EULAs will get a thorough going-over by the Dept. of Justice for evidence of Microsoft "control" of the data. That will be the crucial legal issue. The data centers in Germany may pass the test. But the notion that data centers in the UK can offer privacy is laughable; the UK's legal authority for GCHQ makes it even easier for that agency to get the data than it is for the NSA in the U.S. It doesn't even require a court order.
Gonzalo San Gil, PhD.

Guide: Anonymity and Privacy for Advanced Linux Users - Deep Dot Web - 1 views

  •  
    "All Credits go to beac0n, thanks for contacting us and contributing the guide you created! As people requested - here is a link to download this guide as a PDF. Intro The goal is to bring together enough information in one document for a beginner to get started. Visiting countless sites, and combing the internet for information can make it obvious your desire to obtain anonymity, and lead to errors, due to conflicting information. Every effort has been made to make this document accurate. This guide is image heavy so it may take some time to load via Tor."
Paul Merrell

Superiority in Cyberspace Will Remain Elusive - Federation Of American Scientists - 0 views

  • Military planners should not anticipate that the United States will ever dominate cyberspace, the Joint Chiefs of Staff said in a new doctrinal publication. The kind of supremacy that might be achievable in other domains is not a realistic option in cyber operations. “Permanent global cyberspace superiority is not possible due to the complexity of cyberspace,” the DoD publication said. In fact, “Even local superiority may be impractical due to the way IT [information technology] is implemented; the fact US and other national governments do not directly control large, privately owned portions of cyberspace; the broad array of state and non-state actors; the low cost of entry; and the rapid and unpredictable proliferation of technology.” Nevertheless, the military has to make do under all circumstances. “Commanders should be prepared to conduct operations under degraded conditions in cyberspace.” This sober assessment appeared in a new edition of Joint Publication 3-12, Cyberspace Operations, dated June 8, 2018. (The 100-page document updates and replaces a 70-page version from 2013.) The updated DoD doctrine presents a cyber concept of operations, describes the organization of cyber forces, outlines areas of responsibility, and defines limits on military action in cyberspace, including legal limits.
  • The new cyber doctrine reiterates the importance and the difficulty of properly attributing cyber attacks against the US to their source. “The ability to hide the sponsor and/or the threat behind a particular malicious effect in cyberspace makes it difficult to determine how, when, and where to respond,” the document said. “The design of the Internet lends itself to anonymity and, combined with applications intended to hide the identity of users, attribution will continue to be a challenge for the foreseeable future.”
Gonzalo San Gil, PhD.

Musk: We need universal basic income because robots will take all the jobs | Ars Techni... - 0 views

  •  
    "Elon Musk reckons the robot revolution is inevitable and it's going to take all the jobs. For humans to survive in an automated world, he said that governments are going to be forced to bring in a universal basic income-paying each citizen a certain amount of money so they can afford to survive. According to Musk, there aren't likely to be any other options."
Paul Merrell

American and British Spy Agencies Targeted In-Flight Mobile Phone Use - 0 views

  • In the trove of documents provided by former National Security Agency contractor Edward Snowden is a treasure. It begins with a riddle: “What do the President of Pakistan, a cigar smuggler, an arms dealer, a counterterrorism target, and a combatting proliferation target have in common? They all used their everyday GSM phone during a flight.” This riddle appeared in 2010 in SIDtoday, the internal newsletter of the NSA’s Signals Intelligence Directorate, or SID, and it was classified “top secret.” It announced the emergence of a new field of espionage that had not yet been explored: the interception of data from phone calls made on board civil aircraft. In a separate internal document from a year earlier, the NSA reported that 50,000 people had already used their mobile phones in flight as of December 2008, a figure that rose to 100,000 by February 2009. The NSA attributed the increase to “more planes equipped with in-flight GSM capability, less fear that a plane will crash due to making/receiving a call, not as expensive as people thought.” The sky seemed to belong to the agency.
Gonzalo San Gil, PhD.

How Big Is Your Target? - Freedom Penguin - 0 views

  •  
    "April 20, 2016 Jacob Roecker 0 Comment Opinion In his 2014 TED presentation Cory Doctorow compares an open system of development to the scientific method and credits the methods for bringing mankind out of the dark ages. Tim Berners-Lee has a very credible claim to patent the technology that runs the internet, but instead has championed for its open development. This open development has launched us forward into a brave new world. Nearly one third of all internet traffic rides on just one openly developed project. "
Paul Merrell

Federal smartphone kill-switch legislation proposed - Network World - 0 views

  • Pressure on the cellphone industry to introduce technology that could disable stolen smartphones has intensified with the introduction of proposed federal legislation that would mandate such a system.
  • Senate bill 2032, "The Smartphone Theft Prevention Act," was introduced in the U.S. Senate Wednesday by Amy Klobuchar, a Minnesota Democrat. The bill promises technology that allows consumers to remotely wipe personal data from their smartphones and render them inoperable. But how that will be accomplished is currently unclear. The full text of the bill was not immediately available, and the offices of Klobuchar and the bill's co-sponsors were all shut down Thursday due to snow in Washington, D.C.
  • The co-sponsors are Democrats Barbara Mikulski of Maryland, Richard Blumenthal of Connecticut and Mazie Hirono of Hawaii. The proposal follows the introduction last Friday of a bill in the California state senate that would mandate a "kill switch" starting in January 2015. The California bill has the potential to usher in kill-switch technology nationwide because carriers might not bother with custom phones just for California, but federal legislation would give it the force of law across the U.S. Theft of smartphones is becoming an increasing problem in U.S. cities and the crimes often involve physical violence or intimidation with guns or knives. In San Francisco, two-thirds of street theft involves a smartphone or tablet and the number is even higher in nearby Oakland. It also represents a majority of street robberies in New York and is rising in Los Angeles. In some cases, victims have been killed for their phones. In response to calls last year by law-enforcement officials to do more to combat the crimes, most cellphone carriers have aligned themselves behind the CTIA, the industry's powerful lobbying group. The CTIA is opposing any legislation that would introduce such technology. An outlier is Verizon, which says that while it thinks legislation is unnecessary, it is supporting the group behind the California bill.
  • Some phone makers have been a little more proactive. Apple in particular has been praised for the introduction of its activation lock feature in iOS7. The function would satisfy the requirements of the proposed California law with one exception: Phones will have to come with the function enabled by default so consumers have to make a conscious choice to switch it off. Currently, it comes as disabled by default. Samsung has also added features to some of its phones that support the Lojack software, but the service requires an ongoing subscription.
Gary Edwards

Blog | Spritz - 0 views

  • Therein lies one of the biggest problems with traditional RSVP. Each time you see text that is not centered properly on the ORP position, your eyes naturally will look for the ORP to process the word and understand its meaning. This requisite eye movement creates a “saccade”, a physical eye movement caused by your eyes taking a split second to find the proper ORP for a word. Every saccade has a penalty in both time and comprehension, especially when you start to speed up reading. Some saccades are considered by your brain to be “normal” during reading, such as when you move your eye from left to right to go from one ORP position to the next ORP position while reading a book. Other saccades are not normal to your brain during reading, such as when you move your eyes right to left to spot an ORP. This eye movement is akin to trying to read a line of text backwards. In normal reading, you won’t saccade right-to-left unless you encounter a word that your brain doesn’t already know and you go back for another look; those saccades will increase based on the difficulty of the text being read and the percentage of words within it that you already know. And the math doesn’t look good, either. If you determined the length of all the words in a given paragraph, you would see that, depending on the language you’re reading, there is a low (less than 15%) probability of two adjacent words being the same length and not requiring a saccade when they are shown to you one at a time. This means you move your eyes on a regular basis with traditional RSVP! In fact, you still move them with almost every word. In general, left-to-right saccades contribute to slower reading due to the increased travel time for the eyeballs, while right-to-left saccades are discombobulating for many people, especially at speed. It’s like reading a lot of text that contains words you don’t understand, except you DO understand the words! The experience is frustrating to say the least.
  • In addition to saccading, another issue with RSVP is associated with “foveal vision,” the area in focus when you look at a sentence. This distance defines the number of letters on which your eyes can sharply focus as you read. Its companion is called “parafoveal vision” and refers to the area outside foveal vision that cannot be seen sharply.
  •  
    "To understand Spritz, you must understand Rapid Serial Visual Presentation (RSVP). RSVP is a common speed-reading technique used today. However, RSVP was originally developed for psychological experiments to measure human reactions to content being read. When RSVP was created, there wasn't much digital content and most people didn't have access to it anyway. The internet didn't even exist yet. With traditional RSVP, words are displayed either left-aligned or centered. Figure 1 shows an example of a center-aligned RSVP, with a dashed line on the center axis. When you read a word, your eyes naturally fixate at one point in that word, which visually triggers the brain to recognize the word and process its meaning. In Figure 1, the preferred fixation point (character) is indicated in red. In this figure, the Optimal Recognition Position (ORP) is different for each word. For example, the ORP is only in the middle of a 3-letter word. As the length of a word increases, the percentage that the ORP shifts to the left of center also increases. The longer the word, the farther to the left of center your eyes must move to locate the ORP."
Paul Merrell

US websites should inform EU citizens about NSA surveillance, says report - 0 views

  • All existing data sharing agreements between Europe and the US should be revoked, and US web site providers should prominently inform European citizens that their data may be subject to government surveillance, according to the recommendations of a briefing report for the European Parliament. The report was produced in response to revelations about the US National Security Agency (NSA) snooping on internet traffic, and aims to highlight the subsequent effect on European Union (EU) citizens' rights.
  • The report warns that EU data protection authorities have failed to understand the “structural shift of data sovereignty implied by cloud computing”, and the associated risks to the rights of EU citizens. It suggests “a full industrial policy for development of an autonomous European cloud computing capacity” should be set up to reduce exposure of EU data to NSA surveillance that is undertaken by the use of US legislation that forces US-based cloud providers to provide access to data they hold.
  • To put pressure on the US government, the report recommends that US websites should ask EU citizens for their consent before gathering data that could be used by the NSA. “Prominent notices should be displayed by every US web site offering services in the EU to inform consent to collect data from EU citizens. The users should be made aware that the data may be subject to surveillance by the US government for any purpose which furthers US foreign policy,” it said. “A consent requirement will raise EU citizen awareness and favour growth of services solely within EU jurisdiction. This will thus have economic impact on US business and increase pressure on the US government to reach a settlement.”
  • Other recommendations include the EU offering protection and rewards for whistleblowers, including “strong guarantees of immunity and asylum”. Such a move would be seen as a direct response to the plight of Edward Snowden, the former NSA analyst who leaked documents that revealed the extent of the NSA’s global internet surveillance programmes. The report also says that, “Encryption is futile to defend against NSA accessing data processed by US clouds,” and that there is “no technical solution to the problem”. It calls for the EU to press for changes to US law.
  • “It seems that the only solution which can be trusted to resolve the Prism affair must involve changes to the law of the US, and this should be the strategic objective of the EU,” it said. The report was produced for the European Parliament committee on civil liberties, justice and home affairs, and comes before the latest hearing of an inquiry into electronic mass surveillance of EU citizens, due to take place in Brussels on 24 September.
  •  
    Yee-haw! E.U. sanctuary and rewards for NSA whistle-blowers. Mandatory warnings for customers of U.S. cloud services that their data may be turned over to the NSA. Pouring more gasoline on the NSA diplomatic fire. 
Paul Merrell

The US is Losing Control of the Internet…Oh, Really? | Global Research - 2 views

  • All of the major internet organisations have pledged, at a summit in Uruguay, to free themselves of the influence of the US government. The directors of ICANN, the Internet Engineering Task Force, the Internet Architecture Board, the World Wide Web Consortium, the Internet Society and all five of the regional Internet address registries have vowed to break their associations with the US government. In a statement, the group called for “accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing”. That’s a distinct change from the current situation, where the US department of commerce has oversight of ICANN. In another part of the statement, the group “expressed strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations of pervasive monitoring and surveillance”. Meanwhile, it was announced that the next Internet Governance Summit would be held in Brazil, whose president has been extremely critical of the US over web surveillance. In a statement announcing the location of the summit, Brazilian president Dilma Rousseff said: “The United States and its allies must urgently end their spying activities once and for all.”
Gonzalo San Gil, PhD.

Five Years Before Any New U.S. Anti-Piracy Laws, MP Predicts | TorrentFreak - 0 views

  •  
    " Andy on April 18, 2014 C: 116 Breaking Two years ago the Internet's biggest ever protest killed the hugely controversial anti-piracy legislation SOPA. Speaking to studios this week, a prominent UK government intellectual property advisor admitted that the damage caused was so great that it's unlikely that there will be a fresh piracy-focused legislative push for another five years."
Paul Merrell

Exclusive: How FBI Informant Sabu Helped Anonymous Hack Brazil | Motherboard - 0 views

  • In early 2012, members of the hacking collective Anonymous carried out a series of cyber attacks on government and corporate websites in Brazil. They did so under the direction of a hacker who, unbeknownst to them, was wearing another hat: helping the Federal Bureau of Investigation carry out one of its biggest cybercrime investigations to date. A year after leaked files exposed the National Security Agency's efforts to spy on citizens and companies in Brazil, previously unpublished chat logs obtained by Motherboard reveal that while under the FBI's supervision, Hector Xavier Monsegur, widely known by his online persona, "Sabu," facilitated attacks that affected Brazilian websites. The operation raises questions about how the FBI uses global internet vulnerabilities during cybercrime investigations, how it works with informants, and how it shares information with other police and intelligence agencies. 
  • After his arrest in mid-2011, Monsegur continued to organize cyber attacks while working for the FBI. According to documents and interviews, Monsegur passed targets and exploits to hackers to disrupt government and corporate servers in Brazil and several other countries. Details about his work as a federal informant have been kept mostly secret, aired only in closed-door hearings and in redacted documents that include chat logs between Monsegur and other hackers. The chat logs remain under seal due to a protective order upheld in court, but in April, they and other court documents were obtained by journalists at Motherboard and the Daily Dot. 
Paul Merrell

Yahoo breaks every mailing list in the world including the IETF's - 0 views

  • DMARC is what one might call an emerging e-mail security scheme. There's a draft on it at draft-kucherawy-dmarc-base-04, intended for the independent stream. It's emerging pretty fast, since many of the largest mail systems in the world have already implemented it, including Gmail, Hotmail/MSN/Outlook, Comcast, and Yahoo.
  • The reason this matters is that over the weekend Yahoo published a DMARC record with a policy saying to reject all yahoo.com mail that fails DMARC. I noticed this because I got a blizzard of bounces from my church mailing list, when a subscriber sent a message from her yahoo.com account, and the list got a whole bunch of rejections from gmail, Yahoo, Hotmail, Comcast, and Yahoo itself. This is definitely a DMARC problem, the bounces say so. The problem for mailing lists isn't limited to the Yahoo subscribers. Since Yahoo mail provokes bounces from lots of other mail systems, innocent subscribers at Gmail, Hotmail, etc. not only won't get Yahoo subscribers' messages, but all those bounces are likely to bounce them off the lists. A few years back we had a similar problem due to an overstrict implementation of DKIM ADSP, but in this case, DMARC is doing what Yahoo is telling it to do. Suggestions: * Suspend posting permission of all yahoo.com addresses, to limit damage * Tell Yahoo users to get a new mail account somewhere else, pronto, if they want to continue using mailing lists * If you know people at Yahoo, ask if perhaps this wasn't such a good idea
  •  
    Short story: Check your SPAM folder for email from folks who email you from Yahoo accounts. That's where it's currently going. (They got rid of the first bug but created a new one in the process. Your Spam folder is where they're currently being routed.)
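The failure mode described above hinges entirely on the `p=` tag of the DMARC record a domain publishes. As a rough sketch (not tied to any particular mailing-list package, and with the record string passed in directly rather than fetched over DNS to keep it self-contained), a list server could parse a sender domain's DMARC TXT record and flag domains whose mail will provoke rejections:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record like "v=DMARC1; p=reject" into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:  # skip empty or malformed segments
            tags[key.strip()] = value.strip()
    return tags

def needs_mitigation(record: str) -> bool:
    """True when a mailing list should expect bounces for this sender domain,
    i.e. the domain asks receivers to reject or quarantine failing mail."""
    return parse_dmarc(record).get("p", "none") in ("reject", "quarantine")

# The record Yahoo published that weekend looked roughly like this
# (the rua address here is illustrative):
yahoo_record = "v=DMARC1; p=reject; pct=100; rua=mailto:dmarc-reports@yahoo.com"
print(needs_mitigation(yahoo_record))  # True: posts from yahoo.com will bounce
```

A list could use such a check to implement the first suggestion above, suspending or rewriting posts from domains that publish `p=reject`.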
Paul Merrell

FCC Putting Comcast/Time Warner Cable Investigation On Hold - 0 views

  • On Friday, the U.S. Federal Communications Commission said that it has extended its time to file responses and oppositions for the Comcast/Time Warner merger from October 8 to October 29. This is due to a motion filed by DISH Network, which said that Comcast didn't fully respond to the Commission's request for responses and oppositions. The FCC is taking 180 days to determine if the Comcast and Time Warner merger will be in the best interest of the public. As of Friday, the investigation was at day 85, and it will resume once October 29 arrives. Originally, the investigation was expected to be complete on January 6, 2015. According to Reuters, a number of competitors and consumer advocates have rejected the merger, stating that the combined entity will have too much power over American consumers' viewing habits. Comcast disagrees of course, indicating that Time Warner is not a competitor and that their combined forces would bring better subscription services to a larger consumer audience.
  • Back in August, the FCC sent questions to both Comcast and Time Warner Cable asking for additional information about their broadband and video services, such as their Web traffic management practices. However, the FCC said on Friday that both companies failed to provide enough answers to please the merger reviewers. Comcast disagrees but said it will work with the reviewers to provide the missing information. "We will work with the staff to determine the additional information the FCC is seeking (including the document production that the FCC had asked us to delay filing) and will submit supplemental answers and documents quickly thereafter so that the FCC can complete its review early in 2015," Comcast spokeswoman Sena Fitzmaurice told Reuters.
  • Currently, the FCC is trying to retrieve Comcast's programming and retransmission consent agreements, but media companies have objected to the collection, saying that these documents are highly confidential. However, the documents have made their way to the Justice Department, which is conducting its own review for antitrust issues. The delay in the FCC's deadline also stems from a large 850-page document supplied by Comcast. The FCC indicated that this volume of information is critical to the investigation.
Gonzalo San Gil, PhD.

Movie Studios Give 'Pirate' Sites a 24h Shutdown Ultimatum | TorrentFreak - 1 views

    • Gonzalo San Gil, PhD.
       
      # The Loser: The Culture (supposedly 'defended')... # ! :/
    • Gonzalo San Gil, PhD.
       
      # ! Perfect 'Double Income' for minimum work.
    • Gonzalo San Gil, PhD.
       
      # ! 2nd) From the generous government subsidies due to the 'Industry's' continuous complaints...
    • Gonzalo San Gil, PhD.
       
      # ! How is it possible that we've taken 17 years (since the DMCA) with this…? # ! Bet that everybody in the entertainment -and other- industries is taking profit... # 1st) From the additional promotion that 'free riding' supposes, and
  •  
    [ By Ernesto on April 30, 2015: On behalf of the major Hollywood movie studios the Motion Picture Association (MPA) is demanding that pirate site operators shut down within 24 hours, or else. The recent push targets a wide variety of services, including some of the top torrent sites. Thus far, the only casualty appears to be a rather small linking site.]
Paul Merrell

PATRIOT Act spying programs on death watch - Seung Min Kim and Kate Tummarello - POLITICO - 0 views

  • With only days left to act and Rand Paul threatening a filibuster, Senate Republicans remain deeply divided over the future of the PATRIOT Act and have no clear path to keep key government spying authorities from expiring at the end of the month. Crucial parts of the PATRIOT Act, including a provision authorizing the government’s controversial bulk collection of American phone records, first revealed by Edward Snowden, are due to lapse May 31. That means Congress has barely a week to figure out a fix before lawmakers leave town for Memorial Day recess at the end of the next week. The prospects of a deal look grim: Senate Majority Leader Mitch McConnell on Thursday night proposed just a two-month extension of expiring PATRIOT Act provisions to give the two sides more time to negotiate, but even that was immediately dismissed by critics of the program.
  •  
    A must-read. The major danger is that the Senate could pass the USA Freedom Act, which has already been passed by the House. Passage of that Act, despite its name, would be bad news for civil liberties. Now is the time to let your Congress critters know that you want them to fight to let the Patriot Act provisions expire on May 31, without any replacement legislation. Keep in mind that Section 502 does not apply just to telephone metadata. It authorizes the FBI to gather without notice to their victims "any tangible thing", specifically including as examples "library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records containing information that would identify a person." The breadth of the section is illustrated by telephone metadata not even being mentioned in the section. NSA going after your medical records sounds far-fetched? Former NSA technical director William Binney says they're already doing it: "Binney alludes to even more extreme intelligence practices that are not yet public knowledge, including the collection of Americans' medical data, the collection and use of client-attorney conversations, and law enforcement agencies' "direct access," without oversight, to NSA databases." https://consortiumnews.com/2015/03/05/seeing-the-stasi-through-nsa-eyes/ So please, contact your Congress critters right now and tell them to sunset the Patriot Act NOW. This will be decided in the next few days so the sooner you contact them the better. 
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges. In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform. The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
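The class-attribute extensibility described above can be demonstrated with nothing more than a stdlib HTML parser. This is an illustrative sketch, not part of any microformat standard: the class name "epigraph" and the SemanticExtractor helper are hypothetical.

```python
# Sketch of XHTML "class" extensibility: pull out the text of <p>
# elements tagged with a semantic role. The role name "epigraph" is
# a hypothetical example, not a standard microformat.
from html.parser import HTMLParser

class SemanticExtractor(HTMLParser):
    """Collects the text content of <p> elements carrying a given class."""
    def __init__(self, role):
        super().__init__()
        self.role = role
        self.capture = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if tag == "p" and self.role in classes:
            self.capture = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.capture = False

    def handle_data(self, data):
        if self.capture:
            self.chunks.append(data)

doc = '<p class="epigraph">Well begun is half done.</p><p>Body text.</p>'
extractor = SemanticExtractor("epigraph")
extractor.feed(doc)
print("".join(extractor.chunks))  # → Well begun is half done.
```

A layout stylesheet can treat both paragraphs identically, while downstream tooling reads the class as semantic markup.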
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps. To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
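The search-and-replace cleanup described above can be approximated in a few lines. A minimal sketch, assuming the export tags list items as `<p class="list-item">`; the actual class names InDesign emits may differ:

```python
# Convert presentation-oriented export markup (list items tagged as
# paragraphs) into proper XHTML list structures. The "list-item" class
# name is an assumption for illustration.
import re

def listify(xhtml: str) -> str:
    """Turn runs of <p class="list-item"> into a proper <ul> list."""
    # First, each tagged paragraph becomes a list item...
    out = re.sub(r'<p class="list-item">(.*?)</p>', r'<li>\1</li>', xhtml)
    # ...then each consecutive run of <li> elements is wrapped in a <ul>.
    out = re.sub(r'((?:<li>.*?</li>)+)', r'<ul>\1</ul>', out)
    return out

src = '<p class="list-item">First</p><p class="list-item">Second</p>'
print(listify(src))  # → <ul><li>First</li><li>Second</li></ul>
```

A real cleanup pass would handle nested and ordered lists as well, but the principle is the same: strip the presentational encoding and restore the structural elements.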
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print. Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
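The shape of that XHTML-to-ICML mapping can be sketched with stdlib tools standing in for an XSLT processor. This is a toy illustration, not the authors' script: the style name "ParagraphStyle/Body" and the single p-to-ParagraphStyleRange rule are assumptions, and a real ICML story carries much more structure.

```python
# Toy XHTML-to-ICML mapping: each XHTML <p> becomes a ParagraphStyleRange
# whose AppliedParagraphStyle references a style defined in the InDesign
# template. The style name "ParagraphStyle/Body" is assumed for illustration.
import xml.etree.ElementTree as ET

def xhtml_paragraphs_to_icml(xhtml: str) -> str:
    source = ET.fromstring(xhtml)
    story = ET.Element("Story")
    for p in source.iter("p"):
        pr = ET.SubElement(story, "ParagraphStyleRange",
                           AppliedParagraphStyle="ParagraphStyle/Body")
        cr = ET.SubElement(pr, "CharacterStyleRange")
        content = ET.SubElement(cr, "Content")
        content.text = p.text or ""
    return ET.tostring(story, encoding="unicode")

src = "<body><p>One paragraph.</p><p>Another.</p></body>"
print(xhtml_paragraphs_to_icml(src))
```

The real script uses XSLT for exactly this kind of element-to-element rewriting, mapping headings, lists, and character ranges as well as paragraphs.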
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files. Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
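The "XHTML in a wrapper" anatomy is easy to see in code. A minimal sketch of the zip structure only: a complete ePub also needs the OPF package file and a table of contents, which tools like eCub generate for you.

```python
# Minimal ePub wrapper sketch: a zip archive whose first entry is an
# uncompressed "mimetype" file, plus the standard META-INF/container.xml
# pointer. A real ePub also needs the content.opf package referenced below.
import os
import tempfile
import zipfile

def wrap_epub(path: str, chapters: dict) -> None:
    """chapters maps archive names (e.g. 'ch1.xhtml') to XHTML strings."""
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and must be stored uncompressed.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml",
                   '<?xml version="1.0"?>'
                   '<container version="1.0" xmlns="urn:oasis:names:tc:'
                   'opendocument:xmlns:container"><rootfiles>'
                   '<rootfile full-path="content.opf" '
                   'media-type="application/oebps-package+xml"/>'
                   '</rootfiles></container>')
        for name, xhtml in chapters.items():
            z.writestr(name, xhtml, compress_type=zipfile.ZIP_DEFLATED)

path = os.path.join(tempfile.mkdtemp(), "book.epub")
wrap_epub(path, {"ch1.xhtml": "<html><body><p>Hello.</p></body></html>"})
with zipfile.ZipFile(path) as z:
    print(z.namelist()[0])  # → mimetype
```

Unzip any ePub from a commercial store and you will find the same skeleton: mimetype, META-INF, package metadata, and XHTML content.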
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive; IDML - InDesign Markup Language. The important point though is that XHTML is a browser specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

CISPA is back! - 0 views

  • OPERATION: Fax Big Brother Congress is rushing toward a vote on CISA, the worst spying bill yet. CISA would grant sweeping legal immunity to giant companies like Facebook and Google, allowing them to do almost anything they want with your data. In exchange, they'll share even more of your personal information with the government, all in the name of "cybersecurity." CISA won't stop hackers — Congress is stuck in 1984 and doesn't understand modern technology. So this week we're sending them thousands of faxes — technology that is hopefully old enough for them to understand. Stop CISA. Send a fax now!
  • (Any tweet w/ #faxbigbrother will get faxed too!) Your email is only shown in your fax to Congress. We won't add you to any mailing lists.
  • CISA: the dirty deal between government and corporate giants. It's the dirty deal that lets much of government from the NSA to local police get your private data from your favorite websites and lets them use it without due process. The government is proposing a massive bribe—they will give corporations immunity for breaking virtually any law if they do so while providing the NSA, DHS, DEA, and local police surveillance access to everyone's data in exchange for getting away with crimes, like fraud, money laundering, or illegal wiretapping. Specifically it incentivizes companies to automatically and simultaneously transfer your data to the DHS, NSA, FBI, and local police with all of your personally-identifying information by giving companies legal immunity (notwithstanding any law), and on top of that, you can't use the Freedom of Information Act to find out what has been shared.
  • ...1 more annotation...
  • The NSA and members of Congress want to pass a "cybersecurity" bill so badly, they’re using the recent hack of the Office of Personnel Management as justification for bringing CISA back up and rushing it through. In reality, the OPM hack just shows that the government has not been a good steward of sensitive data and they need to institute real security measures to fix their problems. The truth is that CISA could not have prevented the OPM hack, and no Senator could explain how it could have. Congress and the NSA are using irrational hysteria to turn the Internet into a place where the government has overly broad, unchecked powers. Why Faxes? Since 2012, online and civil liberties groups and 30,000+ sites have driven more than 2.6 million emails and hundreds of thousands of calls, tweets and more to Congress opposing overly broad cybersecurity legislation. Congress has tried to pass CISA in one form or another 4 times, and they were beat back every time by people like you. It's clear Congress is completely out of touch with modern technology, so this week, as Congress rushes toward a vote on CISA, we are going to send them thousands of faxes, a technology from the 1980s that is hopefully antiquated enough for them to understand. Sending a fax is super easy — you can use this page to send a fax. Any tweet with the hashtag #faxbigbrother will get turned into a fax to Congress too, so what are you waiting for? Click here to send a fax now!
Gonzalo San Gil, PhD.

Microsoft has built a Linux-based operating system | ITworld - 1 views

    • Gonzalo San Gil, PhD.
       
      # ! Everybody wants to be (or say they are) # ! #OpenSource. (http://www.bloomberg.com/news/articles/2015-06-08/apple-goes-open-source) # ! ... "Don't believe everything You hear..." # ! And, in this case, all moves of Apple and Microsoft towards Open Source are due to their appetite for the Supercomputers' Market (actually -and traditionally- reigned by Open Source Platforms...) http://www.datacenterdynamics.com/news-analysis/supercomputers-prefer-open-source-storage/77786.fullarticle
  •  
    "Pigs haven't taken flight; aliens haven't invaded; hell hasn't frozen over. But... Microsoft has created an OS powered by Linux. No, this is not The Onion; it's true. "