Group items matching "draw" in title, tags, annotations or url

Gonzalo San Gil, PhD.

The "Internet Governance" Farce and its "Multi-stakeholder" Illusion | La Quadrature du Net - 0 views

  •  
    by Jérémie Zimmermann For almost 15 years, "Internet Governance" meetings have been drawing attention and driving our imaginaries towards believing that consensual rules for the Internet could emerge from global "multi-stakeholder" discussions. A few days ahead of the "NETmundial" Forum in Sao Paulo, it has become obvious that "Internet Governance" is a farcical way of keeping us busy and hiding a sad reality: nothing concrete in these 15 years, not a single action, ever emerged from "multi-stakeholder" meetings, while at the same time technology as a whole has been turned against its users, as a tool for surveillance, control and oppression.
Gonzalo San Gil, PhD.

Socially controversial science topics on Wikipedia draw edit wars | Ars Technica - 0 views

  •  
    "by John Timmer - Aug 18, 2015 9:42pm CEST Share Tweet 70 Gene Likens (Wikipedia link, naturally) is an ecologist who set up a longterm study of a forest in New Hampshire. That study found that the water entering the ecosystem was unusually acidic, a finding that was eventually tied back to pollution. This turned out to be one of the earliest indications of acid rain."
Gonzalo San Gil, PhD.

Use common goals to overcome a competitive spirit | Opensource.com - 0 views

  •  
    "During the humid summer months of 1954, twenty-two 11 and 12-year-old boys were randomly split into two groups and taken to a 200-acre Boy Scouts of America camp in Robbers Cave State Park, Oklahoma. Over the next few weeks, they would unknowingly be the subjects of one of the most widely known psychological studies of our time. And the ways these groups bonded and interacted with each other draw some interesting parallels to our understanding of workplace culture."
Gonzalo San Gil, PhD.

Elements for the reform of copyright and related cultural policies | La Quadrature du Net - 0 views

  •  
    " copyright creative contribution LQDN's proposals mutualised funding Net neutrality proposal Printer-friendly version Send by email Français Now that the ACTA treaty has been rejected by the European Parliament, a period opens during which it will be possible to push for a new regulatory and policy framework adapted to the digital era. Many citizens and MEPs support the idea of reforming copyright in order to make possible for all to draw the benefits of the digital environment, engage into creative and expressive activities and share in their results. In the coming months and years, the key questions will be: What are the real challenges that this reform should address? How can we address them?"
Paul Merrell

The Government Can No Longer Track Your Cell Phone Without a Warrant | Motherboard - 0 views

  • The government and police regularly use location data pulled off of cell phone towers to put criminals at the scenes of crimes—often without a warrant. Well, an appeals court ruled today that the practice is unconstitutional, in one of the strongest judicial defenses of technology privacy rights we've seen in a while.  The United States Court of Appeals for the Eleventh Circuit ruled that the government illegally obtained and used Quartavious Davis's cell phone location data to help convict him in a string of armed robberies in Miami and unequivocally stated that cell phone location information is protected by the Fourth Amendment. "In short, we hold that cell site location information is within the subscriber’s reasonable expectation of privacy," the court ruled in an opinion written by Judge David Sentelle. "The obtaining of that data without a warrant is a Fourth Amendment violation."
  • In Davis's case, police used his cell phone's call history against him to put him at the scene of several armed robberies. They obtained a court order—which does not require the government to show probable cause—not a warrant, to do so. From now on, that'll be illegal. The decision applies only in the Eleventh Circuit, but sets a strong precedent for future cases.
  • Indeed, the decision alone is a huge privacy win, but Sentelle's strong language supporting cell phone users' privacy rights is perhaps the most important part of the opinion. Sentelle pushed back against several of the federal government's arguments, including one that suggested that, because cell phone location data based on a caller's closest cell tower isn't precise, it should be readily collectable.  "The United States further argues that cell site location information is less protected than GPS data because it is less precise. We are not sure why this should be significant. We do not doubt that there may be a difference in precision, but that is not to say that the difference in precision has constitutional significance," Sentelle wrote. "That information obtained by an invasion of privacy may not be entirely precise does not change the calculus as to whether obtaining it was in fact an invasion of privacy." The court also cited the infamous US v. Jones Supreme Court decision that held that attaching a GPS to a suspect's car is a "search" under the Fourth Amendment. Sentelle suggested a cell phone user has an even greater expectation of location privacy with his or her cell phone use than a driver does with his or her car. A car, Sentelle wrote, isn't always with a person, while a cell phone, these days, usually is.
  • "One’s cell phone, unlike an automobile, can accompany its owner anywhere. Thus, the exposure of the cell site location information can convert what would otherwise be a private event into a public one," he wrote. "In that sense, cell site data is more like communications data than it is like GPS information. That is, it is private in nature rather than being public data that warrants privacy protection only when its collection creates a sufficient mosaic to expose that which would otherwise be private." Finally, the government argued that, because Davis made outgoing calls, he "voluntarily" gave up his location data. Sentelle rejected that, too, citing a prior decision by a Third Circuit Court. "The Third Circuit went on to observe that 'a cell phone customer has not ‘voluntarily’ shared his location information with a cellular provider in any meaningful way.' That circuit further noted that 'it is unlikely that cell phone customers are aware that their cell phone providers collect and store historical location information,'” Sentelle wrote.
  • "Therefore, as the Third Circuit concluded, 'when a cell phone user makes a call, the only information that is voluntarily and knowingly conveyed to the phone company is the number that is dialed, and there is no indication to the user that making that call will also locate the caller,'" he continued.
  •  
    Another victory for civil libertarians against the surveillance state. Note that this is another decision drawing guidance from the Supreme Court's ruling in U.S. v. Jones (decided shortly before the Edward Snowden leaks came to light), which called for re-examination of the Third Party Doctrine, an older doctrine holding that data given to or generated by third parties is not protected by the Fourth Amendment.
Paul Merrell

Own Your Own Devices You Will, Under Rep. Farenthold's YODA Bill | Bloomberg BNA - 0 views

  • A bill introduced Sept. 18 would make clear that consumers actually own the electronic devices they purchase, along with any accompanying software on those devices, according to sponsor Rep. Blake Farenthold (R-Texas). The You Own Devices Act (H.R. 5586) would amend the Copyright Act “to provide that the first sale doctrine applies to any computer program that enables a machine or other product to operate.” The bill, which is unlikely to receive attention during Congress's lame-duck legislative session, was well-received by consumers' rights groups.
  • Section 109(a) of the Copyright Act, 17 U.S.C. §109(a), serves as the foundation for the first sale doctrine. H.R. 5586 would amend Section 109(a) by adding a provision covering “transfer of computer programs.” That provision would state: if a computer program enables any part of a machine or other product to operate, the owner of the machine or other product is entitled to transfer an authorized copy of the computer program, or the right to obtain such copy, when the owner sells, leases, or otherwise transfers the machine or other product to another person. The right to transfer provided under this subsection may not be waived by any agreement.
  • ‘Things' Versus Software: Farenthold had expressed concern during a Sept. 17 hearing on Section 1201 of the Digital Millennium Copyright Act over what he perceived was a muddling between patents and copyrights when it comes to consumer products. “Traditionally patent law has protected things and copyright law has protected artistic-type works,” he said. “But now more and more things have software in them and you are licensing that software when you purchase a thing.” Farenthold asked the witnesses if there was a way to draw a distinction in copyright “between software that is an integral part of a thing as opposed to an add-on app that you would put on your telephone.”
  • H.R. 5586 seeks to draw that distinction. “YODA would simply state that if you want to sell, lease, or give away your device, the software that enables it to work is transferred along with it, and that any right you have to security and bug fixing of that software is transferred as well,” Farenthold said in a statement issued Sept. 19.
Paul Merrell

News - Antitrust - Competition - European Commission - 0 views

  • Google inquiries: Commission accuses Google of systematically favouring its own shopping comparison service. Infographic: Google might be favouring 'Google Shopping' when displaying general search results.
  • Antitrust: Commission sends Statement of Objections to Google on comparison shopping service; opens separate formal investigation on Android (Wed, 15 Apr 2015)
  • Antitrust: Commission opens formal investigation against Google in relation to Android mobile operating system (Wed, 15 Apr 2015)
  • Statement by Commissioner Vestager on antitrust decisions concerning Google (Wed, 15 Apr 2015)
  •  
    The more interesting issue to me is the accusation that Google violates antitrust law by boosting its comparison shopping results in its search results, unfairly disadvantaging competing shopping services and not delivering the best results to users. What's interesting to me is that the Commission is attempting to portray general search as a separate market from comparison shopping search, accusing Google of attempting to leverage its general search monopoly into the separate comparison shopping search market. At first blush, I'm not convinced that these are or should be regarded as separable markets. But the ramifications are enormous. If that is a separate market, then arguably so is Google's book search, its Google Scholar search, its definition search, its site search, etc. It isn't clear to me how one might draw a defensible line that does not also sweep in every new search feature as a separate market.
Gary Edwards

Mozilla's Bespin project encourages experimentation - Ars Technica, Paul Ryan - 0 views

  •  
    "The Bespin project, which aims to develop a browser-based IDE, has attracted significant attention in the Web development community. Ars looks at some of the buzz around Bespin and the project's innovative use of the HTML canvas element.........." Good stuff here. The Bespin project started off as a JavaScript code editor written in JavaScript, but the really exciting part looks to be the innovative use of the canvas element and the JavaScript API for drawing. There is also the development of using Bespin as a Web page editor using the new canvas text rendering API! One of the advantages Flash has over WebKit is the proliferation of SWF based IDE's. Silverlight will similarly have an excellent collection of IDE's. There are no WebKit - Canvas based IDE's today, but Bespin will perhaps change that. I can also imagine that many of the Flash based IDE's like Swifft tools and my favorite, "SwishMAX", could provide multiple vector graphics; including Canvas! Note that Adobe is scheduled to discontinue all support for SVG this coming March of 2009, moving everything to the proprietary SWF.
Paul Merrell

Microsoft to Google: Get Off of My Cloud - BusinessWeek - 0 views

  • Microsoft's newest facility is drawing lots of oohs and ahs from experts in this specialized field. Most data centers are open, warehouse-style buildings filled with racks of gear. But the first floor of this vast 700,000-square-foot facility looks more like an indoor parking lot, with gear packed into preconfigured shipping containers. Suppliers such as Sun Microsystems (JAVA) and Rackable Systems (RACK) have been advocating similar approaches for years, but this is by far the most ambitious implementation. Each of the containers can hold 2,500 servers, and the floor can hold up to 224 containers. That's a potential maximum of 560,000 servers. "They're pushing the concept to the extreme," Cappuccio says.
Paul Merrell

Zuckerberg set up fraudulent scheme to 'weaponise' data, court case alleges | Technology | The Guardian - 1 views

  • Mark Zuckerberg faces allegations that he developed a “malicious and fraudulent scheme” to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive “weaponised” the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal. A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme. Facebook rejects all claims, and has made a motion to have the case dismissed using a free speech defence.
  • It claims the first amendment protects its right to make “editorial decisions” as it sees fit. Zuckerberg and other senior executives have asserted that Facebook is a platform not a publisher, most recently in testimony to Congress.
  • Heather Whitney, a legal scholar who has written about social media companies for the Knight First Amendment Institute at Columbia University, said, in her opinion, this exposed a potential tension for Facebook. “Facebook’s claims in court that it is an editor for first amendment purposes and thus free to censor and alter the content available on its site is in tension with their, especially recent, claims before the public and US Congress to be neutral platforms.” The company that has filed the case, a former startup called Six4Three, is now trying to stop Facebook from having the case thrown out and has submitted legal arguments that draw on thousands of emails, the details of which are currently redacted. Facebook has until next Tuesday to file a motion requesting that the evidence remains sealed, otherwise the documents will be made public.
Paul Merrell

Commentary: Don't be so sure Russia hacked the Clinton emails | Reuters - 0 views

  • By James Bamford. Last summer, cyber investigators plowing through the thousands of leaked emails from the Democratic National Committee uncovered a clue. A user named “Феликс Эдмундович” modified one of the documents using settings in the Russian language. Translated, his name was Felix Edmundovich, a pseudonym referring to Felix Edmundovich Dzerzhinsky, the chief of the Soviet Union’s first secret-police organization, the Cheka. It was one more link in the chain of evidence pointing to Russian President Vladimir Putin as the man ultimately behind the operation. During the Cold War, when Soviet intelligence was headquartered in Dzerzhinsky Square in Moscow, Putin was a KGB officer assigned to the First Chief Directorate. Its responsibilities included “active measures,” a form of political warfare that included media manipulation, propaganda and disinformation. Soviet active measures, retired KGB Major General Oleg Kalugin told Army historian Thomas Boghart, aimed to discredit the United States and “conquer world public opinion.” As the Cold War has turned into the code war, Putin recently unveiled his new, greatly enlarged spy organization: the Ministry of State Security, taking the name from Joseph Stalin’s secret service. Putin also resurrected, according to James Clapper, the U.S. director of national intelligence, some of the KGB’s old active-measures tactics. On October 7, Clapper issued a statement: “The U.S. Intelligence community is confident that the Russian government directed the recent compromises of emails from U.S. persons and institutions, including from U.S. political organizations.” Notably, however, the FBI declined to join the chorus, according to reports by the New York Times and CNBC. A week later, Vice President Joe Biden said on NBC’s Meet the Press that "we're sending a message" to Putin and "it will be at the time of our choosing, and under the circumstances that will have the greatest impact." When asked if the American public would know a message was sent, Biden replied, "Hope not." Meanwhile, the CIA was asked, according to an NBC report on October 14, “to deliver options to the White House for a wide-ranging ‘clandestine’ cyber operation designed to harass and ‘embarrass’ the Kremlin leadership.” But as both sides begin arming their cyberweapons, it is critical for the public to be confident that the evidence is really there, and to understand the potential consequences of a tit-for-tat cyberwar escalating into a real war.
  • This is a prospect that has long worried Richard Clarke, the former White House cyber czar under President George W. Bush. “It’s highly likely that any war that began as a cyberwar,” Clarke told me last year, “would ultimately end up being a conventional war, where the United States was engaged with bombers and missiles.” The problem with attempting to draw a straight line from the Kremlin to the Clinton campaign is the number of variables that get in the way. For one, there is little doubt about Russian cyber fingerprints in various U.S. campaign activities. Moscow, like Washington, has long spied on such matters. The United States, for example, inserted malware in the recent Mexican election campaign. The question isn’t whether Russia spied on the U.S. presidential election, it’s whether it released the election emails. Then there’s the role of Guccifer 2.0, the person or persons supplying WikiLeaks and other organizations with many of the pilfered emails. Is this a Russian agent? A free agent? A cybercriminal? A combination, or some other entity? No one knows. There is also the problem of groupthink that led to the war in Iraq. For example, just as the National Security Agency, the Central Intelligence Agency and the rest of the intelligence establishment are convinced Putin is behind the attacks, they also believed it was a slam-dunk that Saddam Hussein had a trove of weapons of mass destruction. Consider as well the speed of the political-hacking investigation, followed by a lack of skepticism, culminating in a rush to judgment. After the Democratic committee discovered the potential hack last spring, it called in the cybersecurity firm CrowdStrike in May to analyze the problem.
  • CrowdStrike took just a month or so before it conclusively determined that Russia’s FSB, the successor to the KGB, and the Russian military intelligence organization, GRU, were behind it. Most of the other major cybersecurity firms quickly fell in line and agreed. By October, the intelligence community made it unanimous. That speed and certainty contrast sharply with a previous suspected Russian hack in 2010, when the target was the Nasdaq stock market. According to an extensive investigation by Bloomberg Businessweek in 2014, the NSA and FBI made numerous mistakes over many months that stretched to nearly a year. “After months of work,” the article said, “there were still basic disagreements in different parts of government over who was behind the incident and why.” There was no consensus, with just a 70 percent certainty that the hack was a cybercrime. Months later, this determination was revised again: it was just a Russian attempt to spy on the exchange in order to design its own. The federal agents also considered the possibility that the Nasdaq snooping was not connected to the Kremlin. Instead, “someone in the FSB could have been running a for-profit operation on the side, or perhaps sold the malware to a criminal hacking group.” Again, that’s why it’s necessary to better understand the role of Guccifer 2.0 in releasing the Democratic National Committee and Clinton campaign emails before launching any cyberweapons.
  • It is strange that clues in the Nasdaq hack were very difficult to find ― as one would expect from a professional, state-sponsored cyber operation. Conversely, the sloppy, Inspector Clouseau-like nature of the Guccifer 2.0 operation, with someone hiding behind a silly Bolshevik cover name, and Russian language clues in the metadata, smacked more of either an amateur operation or a deliberate deception. Then there’s the Shadow Brokers, that mysterious person or group that surfaced in August with its farcical “auction” to profit from a stolen batch of extremely secret NSA hacking tools, in essence, cyberweapons. Where do they fit into the picture? They have a small armory of NSA cyberweapons, and they appeared just three weeks after the first DNC emails were leaked. On Monday, the Shadow Brokers released more information, including what they claimed is a list of hundreds of organizations that the NSA has targeted over more than a decade, complete with technical details. This offers further evidence that their information comes from a leaker inside the NSA rather than the Kremlin. The Shadow Brokers also discussed Obama’s threat of cyber retaliation against Russia. Yet they seemed most concerned that the CIA, rather than the NSA or Cyber Command, was given the assignment. This may be a possible indication of a connection to NSA’s elite group, Tailored Access Operations, considered by many the A-Team of hackers. “Why is DirtyGrandpa threating CIA cyberwar with Russia?” they wrote. “Why not threating with NSA or Cyber Command? CIA is cyber B-Team, yes? Where is cyber A-Team?” Because of legal and other factors, the NSA conducts cyber espionage, Cyber Command conducts cyberattacks in wartime, and the CIA conducts covert cyberattacks.
  • The Shadow Brokers connection is important because Julian Assange, the founder of WikiLeaks, claimed to have received identical copies of the Shadow Brokers cyberweapons even before they announced their “auction.” Did he get them from the Shadow Brokers, from Guccifer, from Russia or from an inside leaker at the NSA? Despite the rushed, incomplete investigation and unanswered questions, the Obama administration has announced its decision to retaliate against Russia. But a public warning about a secret attack makes little sense. If a major cyber crisis happens in Russia sometime in the future, such as a deadly power outage in frigid winter, the United States could be blamed even if it had nothing to do with it. That could then trigger a major retaliatory cyberattack against the U.S. cyber infrastructure, which would call for another reprisal attack ― potentially leading to Clarke’s fear of a cyberwar triggering a conventional war. President Barack Obama has also not taken a nuclear strike off the table as an appropriate response to a devastating cyberattack.
  •  
    Article by James Bamford, the first NSA whistleblower and author of three books on the NSA.
Paul Merrell

WikiLeaks - Vault 7: Projects - 0 views

  • Today, March 31st 2017, WikiLeaks releases Vault 7 "Marble" -- 676 source code files for the CIA's secret anti-forensic Marble Framework. Marble is used to hamper forensic investigators and anti-virus companies from attributing viruses, trojans and hacking attacks to the CIA. Marble does this by hiding ("obfuscating") text fragments used in CIA malware from visual inspection. This is the digital equivalent of a specialized CIA tool to place covers over the English-language text on U.S.-produced weapons systems before giving them to insurgents secretly backed by the CIA. Marble forms part of the CIA's anti-forensics approach and the CIA's Core Library of malware code. It is "[D]esigned to allow for flexible and easy-to-use obfuscation" as "string obfuscation algorithms (especially those that are unique) are often used to link malware to a specific developer or development shop." The Marble source code also includes a deobfuscator to reverse CIA text obfuscation. Combined with the revealed obfuscation techniques, a pattern or signature emerges which can assist forensic investigators in attributing previous hacking attacks and viruses to the CIA. Marble was in use at the CIA during 2016. It reached 1.0 in 2015.
  • The source code shows that Marble has test examples not just in English but also in Chinese, Russian, Korean, Arabic and Farsi. This would permit a forensic attribution double game, for example by pretending that the spoken language of the malware creator was not American English but Chinese, and then showing attempts to conceal the use of Chinese, drawing forensic investigators even more strongly to the wrong conclusion. But there are other possibilities, such as hiding fake error messages. The Marble Framework is used for obfuscation only and does not contain any vulnerabilities or exploits by itself.
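The mechanism described here is easiest to see in miniature. The sketch below is a deliberately toy illustration of string obfuscation, not Marble's actual algorithm (which is in the leaked source): the point is simply that the literal text never appears in the artifact, so a scan for telltale strings comes up empty until the code deobfuscates at runtime.

```python
# Toy illustration of string obfuscation (NOT the Marble Framework's
# actual algorithm): XOR each byte with a fixed key so the literal
# text never appears in the shipped artifact.
KEY = 0x5A

def obfuscate(s: str) -> bytes:
    return bytes(b ^ KEY for b in s.encode("utf-8"))

def deobfuscate(blob: bytes) -> str:
    return bytes(b ^ KEY for b in blob).decode("utf-8")

hidden = obfuscate("C2 beacon address")   # what gets stored in the binary
assert b"beacon" not in hidden            # a plain-text scan finds nothing
print(deobfuscate(hidden))                # recovered only at runtime
```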
  •  
    But it was the Russians who hacked the 2016 U.S. election. Really.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
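That anatomy is easy to verify directly. Here is a minimal sketch, assuming a local file named book.epub, that uses Python's standard zipfile module to list the archive and read one content page:

```python
# Verify the ePub anatomy described above: the file is an ordinary zip
# archive whose book content is XHTML. Assumes a local "book.epub".
import zipfile

with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():          # mimetype, META-INF/, OPF, XHTML...
        print(name)
    page = next(n for n in epub.namelist()
                if n.endswith((".xhtml", ".html")))
    print(epub.read(page)[:300])          # ordinary XHTML markup
```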
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
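A hedged sketch of that cleanup step, using lxml and assuming the export marks list items as <p class="list-item"> (the actual class name InDesign emits may differ): consecutive flagged paragraphs are folded into a proper XHTML list.

```python
# Sketch of the cleanup step: fold presentation-oriented paragraphs
# into structural XHTML lists. The class name "list-item" is an
# assumption; InDesign's Digital Editions export may use another.
from lxml import etree, html

doc = html.fromstring(open("chapter.html").read())
for p in doc.xpath('//p[@class="list-item"]'):
    prev = p.getprevious()
    if prev is not None and prev.tag == "ul":
        ul = prev                         # continue the list we started
    else:
        ul = etree.Element("ul")
        p.addprevious(ul)                 # open a new list before this <p>
    li = etree.SubElement(ul, "li")
    li.text = p.text_content()            # note: drops any inline markup
    p.getparent().remove(p)
print(html.tostring(doc, pretty_print=True).decode())
```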
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
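A miniature of such a transform, run here through lxml's XSLT engine rather than xsltproc (the stylesheet language is identical either way); the output element and style names below are illustrative placeholders rather than Adobe's exact ICML vocabulary.

```python
# Miniature XHTML-to-ICML-style transform, run with lxml's XSLT engine
# instead of xsltproc. "Document", "ParagraphStyleRange", "Content" and
# the "Body" style are placeholders, not Adobe's exact schema.
from lxml import etree

stylesheet = etree.XML("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <Document><xsl:apply-templates select="//p"/></Document>
  </xsl:template>
  <xsl:template match="p">
    <ParagraphStyleRange AppliedParagraphStyle="Body">
      <Content><xsl:value-of select="."/></Content>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>
""")
transform = etree.XSLT(stylesheet)

xhtml = etree.XML("<html><body><p>One paragraph.</p><p>Another.</p></body></html>")
print(str(transform(xhtml)))              # ICML-flavored XML, ready to place
```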
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
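The wrapper really is that thin. Below is a hand-rolled sketch for illustration only, with file names and content invented, and with the OPF package file and table of contents omitted (eCub generates those pieces for you):

```python
# Minimal illustration of the ePub "wrapper": zip the XHTML content
# with the fixed mimetype entry and a container pointer. A complete
# ePub also needs an OPF package file and a table of contents, which
# tools like eCub generate; they are omitted here for brevity.
import zipfile

container = """<?xml version="1.0"?>
<container version="1.0"
    xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf"
        media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

with zipfile.ZipFile("book.epub", "w") as epub:
    # The mimetype entry must come first and be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip",
                  compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", container)
    epub.writestr("chapter1.xhtml",
                  "<html><body><p>Hello.</p></body></html>")
```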
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-native vocabulary of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Home - Pencil Project - 0 views

  • The Pencil Project's unique mission is to build a free and open-source tool for diagramming and GUI prototyping that everyone can use.
  • Built-in stencils for diagramming and prototyping; multi-page documents with background pages; on-screen text editing with rich-text support; PNG rasterizing; undo/redo support; installing user-defined stencils; standard drawing operations: aligning, z-ordering, scaling, rotating; cross-platform; adding external objects; and much more.
  •  
    Interesting application for prototyping GUIs. Runs as a Firefox 3 extension or standalone on Linux and Windows using XULRunner.
Gary Edwards

Flash Wars: Adobe Fights for AIR with the Open Screen Project [Part 3 of 3] | AppleInsider - 0 views

  • Two areas where Flash can offer real value is in displaying and packaging video on the web, and in serving as a Java replacement for developing applets. Here's a look at how Adobe is working to defend its strengths in the face of competition, and how its efforts to open the Flash specification in the Open Screen Project play into these efforts.
  • proprietary FLV video container format
  • more advanced and open H.264 video codec
  • Apple's ability to disrupt the status quo in video playback is evident in its deal with Google to vend YouTube videos to the iPhone, iPod Touch, and Apple TV as straight H.264 rather than Google's existing mix of a Flash-based player and its archaic GVI file format based upon AVI.
  • As Apple's hardware-based H.264 playback in mobile devices begins to define how to reach affluent customers with content, Flash will increasingly lose any allure on the PC desktop as well, as developers won't want to target PCs and mobiles using two different systems.
  • Adobe seems to be hoping that nobody notices these problems and that its vigilant marketing efforts can entrance the public into thinking that a drawing app extended into an animation tool and then retrofitted into a monstrous hack of a development platform is a superior technology basis for building web apps compared to the use of modern open standards created expressly to promote true interoperability by design rather than retroactively.
  •  
    Part two of the Prince McClean Adobe-Flash history. Excellent history involves Adobe SVG, Microsoft VmL-XAML-Silverlight, Apple WebKit, Sun (Java) as they battle for dominance over web applications and the future of the Web itself.
Paul Merrell

Mozilla, ARM and Others Eyeing a New Class of Device | OStatic - 0 views

  • I read with interest this item, along with analysis from Matt Asay about Mozilla, ARM, MontaVista Software and four other companies working together on a new category of device. The partners envision devices that sit between smartphones and laptops, and they sound very much like the Ultra-Mobile PC (UMPC) tablets, such as the ones Nokia makes.
  • The new device from the seven partners might be on sale by early 2009, according to Softpedia. Their story also makes this good point about the difference between this new effort and Nokia's tablet strategy: "Arm Inc. is creating a completely open platform that will be shared with the open-source community." If it is completely open, that could draw the interest of developers.
Paul Merrell

Hakia Retools Semantic Search Engine to Better Battle Google, Yahoo - 0 views

  • Semantic search engine startup Hakia has retooled its Web site, adding tabs for news, images and "credible" site searches as a way to differentiate between its search approach and what it calls the "10 blue links" approach search incumbents Google, Yahoo and Microsoft have used in the first era of search engines. Hakia employs semantic search technologies, leveraging natural language processing to derive broader meaning from search queries.
  • Hakia began hawking "credible" Web sites, vetted by librarians and informational professionals, in April for health and medical searches drawing from sites examined by the Medical Library Association. These sites have a peer review process or strict editorial controls to ensure the accuracy of the information and zero commercial bias. The idea is to clearly define sites users can trust in an age when do-it-yourself chronicling via Wikipedia and other sites that enable crowdsourcing activities has led to some questionable results.
Paul Merrell

BetaNews | Corel: We are not...not for sale - 0 views

  • Earlier this week, Corel announced that its majority investor Vector Capital had withdrawn its March buyout offer that valued the company at nearly $280 million, in the interest of Corel's pursuit of other "potential strategic third-party alternatives," which would best suit shareholders. Today, the company announced that yes, these alternatives do include a potential sale of the company, and yes, it is in discussions with a third party regarding Corel's sale, but no agreement has been reached.
  •  
    Corel is apparently for sale.
Paul Merrell

Medvedev proposes Creative Commons-style copyright scheme for Russia | Society | RIA Novosti - 0 views

  • Russian President Dmitry Medvedev has proposed setting up a new flexible copyright scheme on the Runet, as the Russian-language part of the internet is known. In a statement released on the Kremlin's website on Thursday, Medvedev instructed the country's communications ministry to draw up amendments "aimed at allowing authors to let an unlimited number of people use their content on the basis of free licensing."
Paul Merrell

Rapid - Press Releases - EUROPA - 0 views

  • The Commission has found that Intel excluded its competitor in two ways: through illegal loyalty rebates, and by paying manufacturers and retailers to restrict the commercialisation of competitors' products. These illegal actions were designed to preserve Intel's market share at a time when their only significant rival - AMD - was a growing threat to Intel's position. This threat was widely recognised by both computer manufacturers and in Intel's own internal documents seen by the Commission. The computer manufacturers involved are Acer, Dell, HP, Lenovo and NEC. The retailer involved is Media Saturn Holdings, the parent company of Media Markt.
  • Naturally, the Commission favours strong, vigorous price competition, including by dominant firms. However, Intel went beyond normal price competition by giving rebates to computer manufacturers on the condition that they bought all, or almost all, of their CPUs from Intel. Intel also made direct payments to a major retailer – Media Markt - on the condition that it stocked only computers with Intel CPUs.
  • Just to give you one example: in one case, a computer manufacturer took up only a small part of an offer by AMD of free CPUs because acceptance of all the free CPUs offered would have led that computer manufacturer to breach the conditions of its agreement with Intel and to lose rebates on all its much more numerous Intel purchases.
  • Intel made direct payments to computer manufacturers to halt or delay the launch of products using their rival's chips, and to limit their distribution once available. The Commission has specific, documented examples, of Intel paying other manufacturers to, for example, delay the launch of an AMD-based PC by six months, and to restrict the sales of AMD-based products to certain customers.
  • The Commission Decision contains evidence that Intel went to great lengths to cover-up many of its anti-competitive actions. Many of the conditions mentioned above were not to be found in Intel’s official contracts. However, the Commission was able to gather a broad range of evidence demonstrating Intel's illegal conduct through statements from companies, on-site inspections, and formal requests for information.
  • Finally, I would like to draw your attention to Intel's latest global advertising campaign which proposes Intel as the "Sponsors of Tomorrow." Their website invites visitors to add their 'vision of tomorrow'. Well, I can give my vision of tomorrow for Intel here and now: "obey the law".