
Future of the Web: group items tagged "potential"


Gary Edwards

Google's ARC Beta runs Android apps on Chrome OS, Windows, Mac, and Linux | Ars Technica - 0 views

  • So calling all developers: You can now (probably, maybe) run your Android apps on just about anything—Android, Chrome OS, Windows, Mac, and Linux—provided you fiddle with the ARC Welder and submit your app to the Chrome Web Store.
  • The App Runtime for Chrome and Native Client are hugely important projects because they potentially allow Google to push a "universal binary" strategy on developers. "Write your app for Android, and we'll make it run on almost every popular OS! (other than iOS)" Google Play Services support is a major improvement for ARC and signals just how ambitious this project is. Some day it will be a great sales pitch to convince developers to write for Android first, which gives them apps on all these desktop OSes for free.
  •  
    Thanks Marbux. ARC appears to be an extraordinary technology. Funny, but Florian has been pushing Native Client (NaCl) since it was first ported from Firefox to Chrome. Looks like he was right. "In September, Google launched ARC, the "App Runtime for Chrome," a project that allowed Android apps to run on Chrome OS. A few days later, a hack revealed the project's full potential: it enabled ARC on every "desktop" version of Chrome, meaning you could unofficially run Android apps on Chrome OS, Windows, Mac OS X, and Linux. ARC made Android apps run on nearly every computing platform (save iOS). ARC is an early beta, though, so Google has kept the project's reach very limited; only a handful of apps have been ported to ARC, all of them the result of close collaborations between Google and the app developer. Now, though, Google is taking two big steps forward with the latest developer preview: it's allowing any developer to run their app on ARC via a new Chrome app packager, and it's allowing ARC to run on any desktop OS with a Chrome browser. ARC runs on Windows, Mac, Linux, and Chrome OS thanks to Native Client (abbreviated "NaCl"). NaCl is a Chrome sandboxing technology that allows Chrome apps and plugins to run at "near native" speeds, taking full advantage of the system's CPU and GPU. Native Client turns Chrome into a development platform: write to it, and it'll run on all desktop Chrome browsers. Google ported a full Android stack to Native Client, allowing Android apps to run on most major OSes. With the original ARC release, there was no official process for getting an Android app running on the Chrome platform (other than working with Google). Now Google has released the adorably named ARC Welder, a Chrome app which will convert any Android app into an ARC-powered Chrome app. It's mainly for developers to package up an APK and submit it to the Chrome Web Store, but anyone can package and launch an APK from the app directly."
Gonzalo San Gil, PhD.

Nikto - OWASP - 0 views

  •  
    "Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 3500 potentially dangerous files/CGIs, versions on over 900 servers, and version specific problems on over 250 servers. Scan items and plugins are frequently updated and can be automatically updated (if desired). "
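To make concrete what a scanner of this kind does, here is a toy sketch in Python. The path list and the demo server are invented for illustration only; Nikto's real database covers thousands of files/CGIs plus server-version checks, and this snippet is not Nikto itself.

```python
import http.server
import socketserver
import threading
import urllib.error
import urllib.request

# Toy illustration of a Nikto-style check: probe a target for a short
# list of known risky paths and report which ones respond with HTTP 200.
# (Hypothetical path list; real scanners use far larger databases.)
KNOWN_PATHS = ["/admin/", "/backup.tar.gz", "/cgi-bin/test-cgi", "/.git/config"]

def scan(base_url, paths=KNOWN_PATHS):
    """Return the paths the server answers with HTTP 200."""
    found = []
    for path in paths:
        try:
            with urllib.request.urlopen(base_url + path, timeout=5) as resp:
                if resp.status == 200:
                    found.append(path)
        except urllib.error.URLError:
            pass  # 404s and refused connections count as "not exposed"
    return found

# Demo target: a local server that (unwisely) exposes /admin/.
class Demo(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/admin/":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"admin console")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = socketserver.TCPServer(("127.0.0.1", 0), Demo)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

exposed = scan(f"http://127.0.0.1:{port}")
print("exposed paths:", exposed)
server.shutdown()
```

The design point is simply that such scans are ordinary HTTP requests: what distinguishes a tool like Nikto is the breadth and currency of its check database, which is why the quoted description stresses frequent updates.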
Paul Merrell

The Wifi Alliance, Coming Soon to Your Neighborhood: 5G Wireless | Global Research - Ce... - 0 views

  • Just as any new technology claims to offer the most advanced development, promising that its definition of progress will cure society's ills or make life easier by eliminating the drudgery of antiquated appliances, the Wifi Alliance was organized as a worldwide wireless network to connect "everyone and everything, everywhere," as it promised "improvements to nearly every aspect of daily life." The Alliance, which makes no pretense of potential health or environmental concerns, further proclaimed (and they may be correct) that there are "more wifi devices than people on earth." It is that inescapable exposure to ubiquitous wireless technologies wherein the problem lies.
  • Even prior to the 1997 introduction of commercially available wifi devices, which have since saturated every industrialized country, EMF wifi hot spots were everywhere. Today, with the addition of cell and cordless phones and towers, broadcast antennas, smart meters and pervasive computer wifi, both adults and especially vulnerable children are surrounded 24/7 by an inescapable presence, with little recognition that all radiation exposure is cumulative.
  • The National Toxicology Program (NTP), a branch of the US National Institutes of Health (NIH), conducted the world's largest study on radiofrequency radiation used by the US telecommunications industry and found a "statistically significant increase in brain and heart cancers" in animals exposed to EMF (electromagnetic fields). The NTP study confirmed the connection between mobile and wireless phone use and human brain cancer risks, and its conclusions were supported by other epidemiological peer-reviewed studies. Of special note is that the studies citing biological risk to human health involved exposures below accepted international standards.
  •  
    "…what this means is that the current safety standards are off by a factor of about 7 million." Pointing out that a recent FCC chair was a former lobbyist for the telecom industry: "I know how they've attacked various people. In the U.S. … the funding for the EMF research [by the Environmental Protection Agency] was cut off starting in 1986 … The U.S. Office of Naval Research had been funding a fair amount of research in this area [in the '70s]. They [also] … stopped funding new grants in 1986 … And then the NIH a few years later followed the same path …" As if all that were not reason enough for concern, or even downright panic, the next generation of wireless technology, known as 5G (fifth generation) and representing the innocuous-sounding Internet of Things, promises a quantum leap in power and exceedingly more damaging health impacts, with mandatory exposures. The immense expansion of radiation emissions from the current wireless EMF frequency band to 5G, about to be perpetrated on an unsuspecting American public, should be criminal. Developed by the US military for non-lethal perimeter and crowd control, the Active Denial System emits high-density, high-frequency wireless radiation comparable to 5G, in the neighborhood of 90 GHz. The pre-5G frequency bands used in today's commercial wireless services range from 300 MHz to 3 GHz; 5G will be the first wireless system to utilize millimeter waves, with frequencies ranging from 30 to 300 GHz. One example of the differential: a current LAN (local area network) uses 2.4 GHz. Hidden behind these numbers is an utterly devastating increase in health effects of immeasurable impacts so stunning as to numb the senses. In 2017, the international Environmental Health Trust recommended an EU moratorium "on the roll-out of the fifth generation, 5G, for telecommunication until potential hazards for human health and the environment hav
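A quick sanity check on the band numbers quoted above: "millimeter waves" are so called because their free-space wavelength falls between 1 and 10 mm. The sketch below only verifies the physics of the quoted frequencies; it says nothing about the health claims either way.

```python
# Free-space wavelength check for the frequencies cited in the article.
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz):
    """Free-space wavelength in millimeters: lambda = c / f."""
    return C / freq_hz * 1000

bands = {
    "current LAN (2.4 GHz)": 2.4e9,
    "upper pre-5G bound (3 GHz)": 3e9,
    "5G low end (30 GHz)": 30e9,
    "Active Denial System (~90 GHz)": 90e9,
    "5G high end (300 GHz)": 300e9,
}
for name, freq in bands.items():
    print(f"{name}: {wavelength_mm(freq):.2f} mm")
```

At 30 GHz the wavelength is about 10 mm and at 300 GHz about 1 mm, which is exactly the millimeter-wave band the article describes; a 2.4 GHz LAN sits at roughly 125 mm, an order of magnitude longer.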
Paul Merrell

Hey ITU Member States: No More Secrecy, Release the Treaty Proposals | Electronic Front... - 0 views

  • The International Telecommunication Union (ITU) will hold the World Conference on International Telecommunications (WCIT-12) in December in Dubai, an all-important treaty-writing event where ITU Member States will discuss the proposed revisions to the International Telecommunication Regulations (ITR). The ITU is a United Nations agency responsible for international telecom regulation, a bureaucratic, slow-moving, closed regulatory organization that issues treaty-level provisions for international telecommunication networks and services. The ITR, a legally binding international treaty signed by 178 countries, defines the boundaries of ITU’s regulatory authority and provides "general principles" on international telecommunications. However, media reports indicate that some proposed amendments to the ITR—a negotiation that is already well underway—could potentially expand the ITU’s mandate to encompass the Internet. In similar fashion to the secrecy surrounding ACTA and TPP, the ITR proposals are being negotiated in secret, with high barriers preventing access to any negotiating document. While aspiring to be a venue for Internet policy-making, the ITU Member States do not appear to be very open to the idea of allowing all stakeholders (including civil society) to participate. The framework under which the ITU operates does not allow for any form of open participation. Mere access to documents and decision-makers is sold by the ITU to corporate “associate” members at prohibitively high rates. Indeed, the ITU’s business model appears to depend on revenue generation from those seeking to ‘participate’ in its policy-making processes. This revenue-based principle of policy-making is deeply troubling in and of itself, as the objective of policy making should be to reach the best possible outcome.
  • EFF, European Digital Rights, CIPPIC and CDT and a coalition of civil society organizations from around the world are demanding that the ITU Secretary General, the  WCIT-12 Council Working Group, and ITU Member States open up the WCIT-12 and the Council working group negotiations, by immediately releasing all the preparatory materials and Treaty proposals. If it affects the digital rights of citizens across the globe, the public needs to know what is going on and deserves to have a say. The Council Working Group is responsible for the preparatory work towards WCIT-12, setting the agenda for and consolidating input from participating governments and Sector Members. We demand full and meaningful participation for civil society in its own right, and without cost, at the Council Working Group meetings and the WCIT on equal footing with all other stakeholders, including participating governments. A transparent, open process that is inclusive of civil society at every stage is crucial to creating sound policy.
  • ...5 more annotations...
  • Civil society has good reason to be concerned regarding an expanded ITU policy-making role. To begin with, the institution does not appear to have high regard for the distributed multi-stakeholder decision making model that has been integral to the development of an innovative, successful and open Internet. In spite of commitments at WSIS to ensure Internet policy is based on input from all relevant stakeholders, the ITU has consistently put the interests of one stakeholder—Governments—above all others. This is discouraging, as some government interests are inconsistent with an open, innovative network. Indeed, the conditions which have made the Internet the powerful tool it is today emerged in an environment where the interests of all stakeholders are given equal footing, and existing Internet policy-making institutions at least aspire, with varying success, to emulate this equal footing. This formula is enshrined in the Tunis Agenda, which was committed to at WSIS in 2005:
  • 83. Building an inclusive development-oriented Information Society will require unremitting multi-stakeholder effort. We thus commit ourselves to remain fully engaged—nationally, regionally and internationally—to ensure sustainable implementation and follow-up of the outcomes and commitments reached during the WSIS process and its Geneva and Tunis phases of the Summit. Taking into account the multifaceted nature of building the Information Society, effective cooperation among governments, private sector, civil society and the United Nations and other international organizations, according to their different roles and responsibilities and leveraging on their expertise, is essential. 84. Governments and other stakeholders should identify those areas where further effort and resources are required, and jointly identify, and where appropriate develop, implementation strategies, mechanisms and processes for WSIS outcomes at international, regional, national and local levels, paying particular attention to people and groups that are still marginalized in their access to, and utilization of, ICTs.
  • Indeed, the ITU’s current vision of Internet policy-making is less one of distributed decision-making, and more one of ‘taking control.’ For example, in an interview conducted last June with ITU Secretary General Hamadoun Touré, Russian Prime Minister Vladimir Putin raised the suggestion that the union might take control of the Internet: “We are thankful to you for the ideas that you have proposed for discussion,” Putin told Touré in that conversation. “One of them is establishing international control over the Internet using the monitoring and supervisory capabilities of the International Telecommunication Union (ITU).” Perhaps of greater concern are views espoused by the ITU regarding the nature of the Internet. Yesterday, at the World Summit of Information Society Forum, Mr. Alexander Ntoko, head of the Corporate Strategy Division of the ITU, explained the proposals made during the preparatory process for the WCIT, outlining a broad set of topics that can seriously impact people's rights. The categories include "security," "interoperability" and "quality of services," and the possibility that ITU recommendations and regulations will be not only binding on the world’s nations, but enforced.
  • Rights to online expression are unlikely to fare much better than privacy under an ITU model. During last year’s IGF in Kenya, a voluntary code of conduct was issued to further restrict free expression online. A group of nations (including China, the Russian Federation, Tajikistan and Uzbekistan) released a Resolution for the UN General Assembly titled, “International Code of Conduct for Information Security.”  The Code seems to be designed to preserve and protect national powers in information and communication. In it, governments pledge to curb “the dissemination of information that incites terrorism, secessionism or extremism or that undermines other countries’ political, economic and social stability, as well as their spiritual and cultural environment.” This overly broad provision accords any state the right to censor or block international communications, for almost any reason.
  • Related: EFF Joins Coalition Denouncing Secretive WCIT Planning Process (June 2012) · Congressional Witnesses Agree: Multistakeholder Processes Are Right for Internet Regulation (June 2012) · Widespread Participation Is Key in Internet Governance (July 2012) · Blogging ITU: Internet Users Will Be Ignored Again if Flawed ITU Proposals Gain Traction (June 2012) · Global Telecom Governance Debated at European Parliament Workshop
Paul Merrell

AT&T Mobility LLC, et al v. AU Optronics Corp., et al :: Ninth Circuit :: US Courts of ... - 0 views

  •  
    This page includes the opinion of the Ninth U.S. Circuit Court of Appeals on an interlocutory appeal from a district court decision to dismiss two California state-law causes of action from an ongoing case, leaving only the federal-law causes of action. The Ninth Circuit disagreed, vacated the district court's decision, and remanded for consideration of the dismissal issue under the correct legal standard. This was a pro-plaintiff decision that makes it very likely that the case will continue with the state-law causes of action reinstated against all or nearly all defendants. This is an unusually important price-fixing case, with potentially disruptive effect among mobile device component manufacturers and, through a settlement or judgment's ripple effects, manufacturers of other device components globally. Plaintiffs are several major voice/data communications services in the U.S.; the defendants are virtually all of the manufacturers of LCD panels used in mobile telephones. One must suspect that if price-fixing is in fact universal in the LCD panel manufacturing industry, price-fixing is likely common among manufacturers of other device components. According to the Ninth Circuit opinion, the plaintiffs' amended complaint includes detailed allegations of specific price-fixing agreements and price-sharing actions by principals or agents of each individual defendant company committed within the State of California, which suggests that plaintiffs have very strong evidence that the alleged conspiracy exists. This is a case to watch.
Gonzalo San Gil, PhD.

What is Blockchain Technology? A Step-by-Step Guide For Beginners - 0 views

  •  
    "The blockchain is an undeniably ingenious invention - the brainchild of a person or group of people known by the pseudonym Satoshi Nakamoto. By allowing digital information to be distributed but not copied, blockchain technology created the backbone of a new type of internet. Originally devised for the digital currency, Bitcoin, the tech community is now finding other potential uses for the technology."
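The "distributed but not copied" idea rests on a simple structure: each block commits to its predecessor's hash, so altering any block invalidates every block after it. A minimal sketch (real blockchains add consensus, proof-of-work, and peer-to-peer replication on top of this):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Build a chain where each block stores its predecessor's hash."""
    chain, prev = [], "0" * 64  # genesis predecessor
    for i, data in enumerate(records):
        block = {"index": i, "data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain):
    """Recompute every link; any edited block breaks its successor's link."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice pays bob 5", "bob pays carol 2"])
assert is_valid(chain)
chain[0]["data"] = "alice pays bob 500"  # tamper with history
assert not is_valid(chain)
```

In a real network the current tip hash is what peers agree on, so even the newest block cannot be quietly rewritten; that agreement step is the part this sketch omits.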
Gonzalo San Gil, PhD.

Music Group Protests ISPs Move for a Declaratory Ruling on Piracy Liability - TorrentFreak - 0 views

  •  
    "Ernesto on September 1, 2016: Music rights group BMG says that Internet providers RCN and Windstream should not be allowed to obtain a declaratory judgment on their potential liability for pirating subscribers. According to BMG, the providers are improperly trying to immunize themselves, hiding behind the DMCA's safe harbor."
Gonzalo San Gil, PhD.

Standards - W3C - 1 views

  •  
    "W3C standards define an Open Web Platform for application development that has the unprecedented potential to enable developers to build rich interactive experiences, powered by vast data stores, that are available on any device. Although the boundaries of the platform continue to evolve, industry leaders speak nearly in unison about how HTML5 will be the cornerstone for this platform."
Gonzalo San Gil, PhD.

Pirate Blocking Injunctions Could Be Abused, Researchers Say - TorrentFreak - 0 views

  •  
    "By Andy on July 4, 2016: Researchers at Western Sydney University and King's College London have published a paper comparing various 'pirate' blocking mechanisms around the world. While all have shortcomings, the regime in the UK is highlighted as being open to potential future abuse."
Gonzalo San Gil, PhD.

Music Downloads Post Their Worst Decline EVER - 0 views

  •  
    "Last month, sources pointed Digital Music News to double-digit declines in music download sales, with drops potentially exceeding 20 percent year-on-year. But actual figures released early this morning show a sharper drop than imagined. According to Nielsen Soundscan first-half figures, music downloads dropped an astounding 23.9%, with total sales landing at 404.9 million."
Gonzalo San Gil, PhD.

How writing about Linux boosts your IT career - 0 views

  •  
    "The World Wide Web has become an extremely strong source of information. Job seekers search for jobs on online job boards, while recruiters search for potential employees through their Web profiles. Such head-hunting is particularly common when looking for a Linux professional."
Paul Merrell

Google Acquires Titan Aerospace, The Drone Company Pursued By Facebook | TechCrunch - 0 views

  • Google has acquired Titan Aerospace, the drone startup that makes high-flying robots which was previously scoped by Facebook as a potential acquisition target (as first reported by TechCrunch), the WSJ reports. The details of the purchase weren’t disclosed, but the deal comes after Facebook disclosed its own purchase of a Titan Aerospace competitor in U.K.-based Ascenta for its globe-spanning Internet plans. Both Ascenta and Titan Aerospace are in the business of high altitude drones, which cruise nearer the edge of the earth’s atmosphere and provide tech that could be integral to blanketing the globe in cheap, omnipresent Internet connectivity to help bring remote areas online. According to the WSJ, Google will be using Titan Aerospace’s expertise and tech to contribute to Project Loon, the balloon-based remote Internet delivery project it’s currently working on along these lines. That’s not all the Titan drones can help Google with, however. The company’s robots also take high-quality images in real-time that could help with Maps initiatives, as well as contribute to things like “disaster relief” and addressing “deforestation,” a Google spokesperson tells WSJ. The main goal, however, is likely spreading the potential reach of Google and its network, which is Facebook’s aim, too. When you saturate your market and you’re among the world’s most wealthy companies, you don’t go into maintenance mode; you build new ones.
  • As for why an exit to Google looked appealing to a company like Titan, Sarah Perez outlines how Titan had sparked early interest from VCs thanks to its massive drones, which were capable of flying at a reported altitude of 65,000 feet for up to three years, but also how there was a lot of risk involved that would've made it difficult to find sustained investment while remaining independent. Google had just recently demonstrated how its Loon prototype balloons could traverse the globe in a remarkably short period of time, but the use of drones could conceivably make a network of Internet-providing automatons even better at globe-trotting, with a higher degree of control and the ability to react to changing conditions. Some kind of hybrid system might also be in the pipeline that marries both technologies.
Gary Edwards

Tech Execs Express Extreme Concern That NSA Surveillance Could Lead To 'Breaking' The I... - 0 views

  • We need to look the world's dangers in the face. And we need to resolve that we will not allow the dangers of the world to freeze this country in its tracks. We need to recognize that antiquated laws will not keep the public safe. We need to recognize that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies and even the intelligence community itself. At the end of the day, we need to recognize... the one asset that the US has which is even stronger than our military might is our moral authority. And this decline in trust has not only affected people's trust in American technology products. It has affected people's willingness to trust the leadership of the United States. If we are going to win the war on terror, if we are going to keep the public safe, if we are going to improve American competitiveness, we need Congress to stay on the path it's set. We need Congress to finish in December the job the President put before Congress in January.
  •  
    "Nothing necessarily earth-shattering was said by anyone, but it did involve a series of high powered tech execs absolutely slamming the NSA and the intelligence community, and warning of the vast repercussions from that activity, up to and including potentially splintering or "breaking" the internet by causing people to so distrust the existing internet, that they set up separate networks on their own. The execs repeated the same basic points over and over again. They had been absolutely willing to work with law enforcement when and where appropriate based on actual court orders and review -- but that the government itself completely poisoned the well with its activities, including hacking into the transmission lines between overseas datacenters. Thus, as Eric Schmidt noted, if the NSA and other law enforcement folks are "upset" about Google and others suddenly ramping up their use of encryption and being less willing to cooperate with the government, they only have themselves to blame for completely obliterating any sense of trust. Microsoft's Brad Smith, towards the end, made quite an impassioned plea -- it sounded more like a politician's stump speech -- about the need for rebuilding trust in the internet. It's at about an hour and 3 minutes into the video. He points out that while people had expected Congress to pass the USA Freedom Act, the rise of ISIS and other claimed threats has some people scared, but, he notes: We need to look the world's dangers in the face. And we need to resolve that we will not allow the dangers of the world to freeze this country in its tracks. We need to recognize that antiquated laws will not keep the public safe. We need to recognize that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies and even the intelligence community itself. At the end of the day, we need to recognize... the one asset that the US has which is even stron
Gonzalo San Gil, PhD.

Path cleared for judge to block NSA phone surveillance program - POLITICO [# ! Via] - 1 views

  •  
    [By Josh Gerstein 10/06/15 07:17 PM EDT A federal judge who seems keen on blocking the National Security Agency's phone records collection program has a clear path to doing so after a federal appeals court removed a potential obstacle Tuesday. ...]
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter that although HTML on the Web exists in a staggering array of incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
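This kind of class-based extension is easy to see in miniature. The sketch below uses only Python's standard library; the class names `epigraph` and `summary` are invented for illustration, not drawn from any published microformat. It shows how semantic roles layered onto plain XHTML paragraphs via the class attribute can be recovered mechanically:

```python
import xml.etree.ElementTree as ET

# A fragment of otherwise plain XHTML, with semantic roles layered on
# via the "class" attribute (class names here are illustrative only).
xhtml = """
<div xmlns="http://www.w3.org/1999/xhtml">
  <p class="epigraph">A paragraph is a paragraph is a paragraph.</p>
  <p>Ordinary body text.</p>
  <p class="summary">Chapter summary goes here.</p>
</div>
"""

NS = "{http://www.w3.org/1999/xhtml}"
root = ET.fromstring(xhtml)

# Recover the distinctions a richer schema would make explicit elements.
roles = {}
for p in root.iter(NS + "p"):
    role = p.get("class", "body")
    roles.setdefault(role, []).append(p.text)

print(roles)
```

Any XML-aware tool can perform this kind of recovery, which is why the class attribute has proved a workable extension point for microformats.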
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
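That anatomy is easy to verify with nothing more than a zip library. The sketch below (Python standard library; the member names follow ePub convention, but the contents are toy placeholders, not valid package metadata) builds a bare-bones ePub-shaped archive and reads the XHTML chapter back out of it:

```python
import io
import zipfile

# Build a toy archive with the canonical ePub component parts.
# A real ePub carries full OPF metadata; these members are placeholders.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/epub+zip")
    z.writestr("META-INF/container.xml", "<container/>")  # locates the package file
    z.writestr("OEBPS/content.opf", "<package/>")         # metadata about the book
    z.writestr("OEBPS/chapter1.xhtml",
               "<html><body><p>An ePub book is a Web page in a wrapper.</p></body></html>")

# "A simply disguised zip archive": ordinary zip tools read it directly.
with zipfile.ZipFile(buf) as z:
    members = z.namelist()
    chapter = z.read("OEBPS/chapter1.xhtml").decode("utf-8")

print(members)
```

Renaming a real `.epub` file to `.zip` and listing its members shows the same structure.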
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
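Clean-up of this sort can be done with any XML-aware tool rather than raw search-and-replace. A small sketch (Python standard library; the class name `list-item` is a stand-in for whatever convention the export actually uses) that folds consecutive class-tagged paragraphs into a proper XHTML list:

```python
import xml.etree.ElementTree as ET

# What a presentation-oriented export might give us: list items tagged
# as paragraphs, with a class attribute identifying them.
src = ET.fromstring(
    '<body>'
    '<p>Intro text.</p>'
    '<p class="list-item">first</p>'
    '<p class="list-item">second</p>'
    '</body>'
)

# Rebuild the body, folding runs of class-tagged paragraphs into <ul>/<li>.
out = ET.Element("body")
ul = None
for p in src:
    if p.get("class") == "list-item":
        if ul is None:
            ul = ET.SubElement(out, "ul")
        ET.SubElement(ul, "li").text = p.text
    else:
        ul = None        # a non-list paragraph ends the current run
        out.append(p)

print(ET.tostring(out, encoding="unicode"))
```

The same pattern generalizes to other presentation-oriented structures such exports produce.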
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
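The heart of such a script is a mechanical mapping from XHTML block elements to InCopy paragraph-style ranges. The sketch below reproduces that mapping in Python's standard library rather than XSLT, purely to show how small the core idea is. The style names are assumptions that would have to match whatever styles the InDesign template defines, and a real ICML file carries considerably more wrapper markup than this fragment:

```python
import xml.etree.ElementTree as ET

# Map XHTML block elements onto paragraph styles. These names are
# illustrative; they must match the styles defined in the template.
STYLE_MAP = {"h1": "ParagraphStyle/Heading 1", "p": "ParagraphStyle/Body"}

def xhtml_to_icml_ranges(xhtml_fragment):
    """Translate each XHTML block element into an ICML-style
    ParagraphStyleRange under a Story element."""
    src = ET.fromstring(xhtml_fragment)
    story = ET.Element("Story")
    for block in src:
        rng = ET.SubElement(story, "ParagraphStyleRange",
                            AppliedParagraphStyle=STYLE_MAP[block.tag])
        char = ET.SubElement(rng, "CharacterStyleRange")
        ET.SubElement(char, "Content").text = block.text
    return story

story = xhtml_to_icml_ranges("<div><h1>Chapter One</h1><p>It begins.</p></div>")
print(ET.tostring(story, encoding="unicode"))
```

Because both vocabularies are regular, the full production script mostly consists of such per-element rules plus handling for inline markup.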
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
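What the wrapper tool handles for you can be sketched in a few lines. One genuine spec detail is worth knowing: the `mimetype` entry must be the archive's first member and must be stored uncompressed. Everything else below (file names, toy contents) is illustrative rather than a complete, valid package:

```python
import io
import zipfile

# Chapter content would come straight from the Web export.
chapters = {"OEBPS/ch1.xhtml": "<html><body><p>Chapter one.</p></body></html>"}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    # The container rule: mimetype first, and stored (no compression).
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("META-INF/container.xml",
               '<container><rootfiles>'
               '<rootfile full-path="OEBPS/content.opf"/>'
               '</rootfiles></container>')
    z.writestr("OEBPS/content.opf", "<package/>")  # manifest + metadata go here
    for name, body in chapters.items():
        z.writestr(name, body)

with zipfile.ZipFile(buf) as z:
    first = z.infolist()[0]

print(first.filename, first.compress_type == zipfile.ZIP_STORED)
```

From there, a table-of-contents file and the CSS stylesheet are just additional members of the archive.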
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive; IDML - InDesign Markup Language. The important point though is that XHTML is a browser specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Piracy: Hollywood's Losing a Few Pounds, Who Cares? - TorrentFreak - 0 views

  •  
    " Andy on August 27, 2015: Following news this week that a man is facing a custodial sentence after potentially defrauding the movie industry out of £120m, FACT Director General Kieron Sharp has been confronted with an uncomfortable truth. According to listeners contacting the BBC, the public has little sympathy with Hollywood."
Paul Merrell

Senate majority whip: Cyber bill will have to wait until fall | TheHill - 0 views

  • Senate Majority Whip John Cornyn (R-Texas) on Tuesday said the upper chamber is unlikely to move on a stalled cybersecurity bill before the August recess. Senate Republican leaders, including Cornyn, had been angling to get the bill — known as the Cybersecurity Information Sharing Act (CISA) — to the floor this month. But Cornyn said that there is simply too much of a time crunch in the remaining legislative days to get to the measure, intended to boost the public-private exchange of data on hackers. “I’m sad to say I don’t think that’s going to happen,” he told reporters off the Senate floor. “The timing of this is unfortunate.” “I think we’re just running out of time,” he added. An aide for Senate Majority Leader Mitch McConnell (R-Ky.) said he had not committed to a specific schedule after the upper chamber wraps up work in the coming days on a highway funding bill. Cornyn said Senate leadership will look to move on the bill sometime after the legislature returns in September from its month-long break.
  • The move would delay yet again what’s expected to be a bruising floor fight about government surveillance and digital privacy rights. “[CISA] needs a lot of work,” Sen. Patrick Leahy (D-Vt.), who currently opposes the bill, told The Hill on Tuesday. “And when it comes up, there’s going to have to be a lot of amendments otherwise it won’t pass.” Despite industry support, broad bipartisan backing, and potentially even White House support, CISA has been mired in the Senate for months over privacy concerns. Civil liberties advocates worry the bill would create another venue for the government’s intelligence wing to collect sensitive data on Americans only months after Congress voted to rein in surveillance powers. But industry groups and many lawmakers insist a bolstered data exchange is necessary to better understand and counter the growing cyber threat. Inaction will leave government and commercial networks exposed to increasingly dangerous hackers, they say. Sen. Ron Wyden (D-Ore.), who has been leading the chorus opposing the bill, rejoiced Tuesday after hearing of the likely delay.
  • “I really want to commend the advocates for the tremendous grassroots effort to highlight the fact that this bill was badly flawed from a privacy standpoint,” he told The Hill. Digital rights and privacy groups are blanketing senators’ offices this week with faxes and letters in an attempt to raise awareness of the bill’s flaws. “Our side has picked up an enormous amount of support,” Wyden said. Wyden was the only senator to vote against CISA in the Senate Intelligence Committee. The panel approved the measure in March by a 14-1 vote, and it looked like CISA was barrelling toward the Senate floor. After the House easily passed its companion pieces of legislation, CISA’s odds only seemed better. But the measure got tied up in the vicious debate over the National Security Agency's (NSA) spying powers that played out throughout April and May. “It’s like a number of these issues, in the committee the vote was 14-1, everyone says, ‘oh, Ron Wyden opposes another bipartisan bill,’” Wyden said Tuesday. “And I said, ‘People are going to see that this is a badly flawed bill.’”
  • CISA backers hoped that the ultimate vote to curb the NSA’s surveillance authority might quell some of the privacy fears surrounding CISA, clearing a path to passage. But numerous budget debates and the Iranian nuclear deal have chewed up much of the Senate’s floor time throughout June and July. Following the devastating hacks at the Office of Personnel Management (OPM), Senate Republican leaders tried to jump CISA in the congressional queue by offering its language as an amendment to a defense authorization bill. Democrats — including the bill’s original co-sponsor Sen. Dianne Feinstein (D-Calif.) — revolted, angry they could not offer amendments to CISA’s language before it was attached to the defense bill. Cornyn on Tuesday chastised Democrats for stalling a bill that many of them favor. “As you know, Senate Democrats blocked that before on the defense authorization bill,” Cornyn said. “So we had an opportunity to do it then.” Now it’s unclear when the Senate will have another opportunity. When it does, however, CISA could have the votes to get through.
  • There will be vocal opposition from senators like Wyden and Leahy, and potentially from anti-surveillance advocates like Sens. Rand Paul (R-Ky.), Mike Lee (R-Utah) and Dean Heller (R-Nev.). But finding 40 votes to block the bill completely will be a difficult task. Wyden said he wouldn’t “get into speculation” about whether he could gather the support to stop CISA altogether. “I’m pleased about the progress that we’ve made,” he said.
  •  
    NSA and crew decide to delay and try later with CISA. The Internet strikes back again.
Gonzalo San Gil, PhD.

The Web's ten most dangerous neighborhoods | CSO Online - 1 views

  •  
    "Ten top-level domains are to blame for at least 95 percent of the websites that pose a potential threat to visitors Maria Korolov By Maria Korolov Follow CSO | Sep 1, 2015 1:00 AM PT"
Paul Merrell

Data Transfer Pact Between U.S. and Europe Is Ruled Invalid - The New York Times - 0 views

  • Europe’s highest court on Tuesday struck down an international agreement that allowed companies to move digital information like people’s web search histories and social media updates between the European Union and the United States. The decision left the international operations of companies like Google and Facebook in a sort of legal limbo even as their services continued working as usual. The ruling, by the European Court of Justice, said the so-called safe harbor agreement was flawed because it allowed American government authorities to gain routine access to Europeans’ online information. The court said leaks from Edward J. Snowden, the former contractor for the National Security Agency, made it clear that American intelligence agencies had almost unfettered access to the data, infringing on Europeans’ rights to privacy. The court said data protection regulators in each of the European Union’s 28 countries should have oversight over how companies collect and use online information of their countries’ citizens. European countries have widely varying stances towards privacy.
  • Data protection advocates hailed the ruling. Industry executives and trade groups, though, said the decision left a huge amount of uncertainty for big companies, many of which rely on the easy flow of data for lucrative businesses like online advertising. They called on the European Commission to complete a new safe harbor agreement with the United States, a deal that has been negotiated for more than two years and could limit the fallout from the court’s decision.
  • Some European officials and many of the big technology companies, including Facebook and Microsoft, tried to play down the impact of the ruling. The companies kept their services running, saying that other agreements with the European Union should provide an adequate legal foundation. But those other agreements are now expected to be examined and questioned by some of Europe’s national privacy watchdogs. The potential inquiries could make it hard for companies to transfer Europeans’ information overseas under the current data arrangements. And the ruling appeared to leave smaller companies with fewer legal resources vulnerable to potential privacy violations.
  • “We can’t assume that anything is now safe,” said Brian Hengesbaugh, a privacy lawyer with Baker & McKenzie in Chicago who helped to negotiate the original safe harbor agreement. “The ruling is so sweepingly broad that any mechanism used to transfer data from Europe could be under threat.” At issue is the sort of personal data that people create when they post something on Facebook or other social media; when they do web searches on Google; or when they order products or buy movies from Amazon or Apple. Such data is hugely valuable to companies, which use it in a broad range of ways, including tailoring advertisements to individuals and promoting products or services based on users’ online activities. The data-transfer ruling does not apply solely to tech companies. It also affects any organization with international operations, such as when a company has employees in more than one region and needs to transfer payroll information or allow workers to manage their employee benefits online.
  • But it was unclear how bulletproof those treaties would be under the new ruling, which cannot be appealed and went into effect immediately. Europe’s privacy watchdogs, for example, remain divided over how to police American tech companies. France and Germany, where companies like Facebook and Google have huge numbers of users and have already been subject to other privacy rulings, are among the countries that have sought more aggressive protections for their citizens’ personal data. Britain and Ireland, among others, have been supportive of Safe Harbor, and many large American tech companies have set up overseas headquarters in Ireland.
  • “For those who are willing to take on big companies, this ruling will have empowered them to act,” said Ot van Daalen, a Dutch privacy lawyer at Project Moore, who has been a vocal advocate for stricter data protection rules. The safe harbor agreement has been in place since 2000, enabling American tech companies to compile data generated by their European clients in web searches, social media posts and other online activities.
  •  
    Another take on it from EFF: https://www.eff.org/deeplinks/2015/10/europes-court-justice-nsa-surveilance Expected since the Court's Advocate General released an opinion last week, presaging today's opinion.  Very big bucks involved behind the scenes because removing U.S.-based internet companies from the scene in the E.U. would pave the way for growth of E.U.-based companies.  The way forward for the U.S. companies is even more dicey because of a case now pending in the U.S.  The Second U.S. Circuit Court of Appeals is about to decide a related case in which Microsoft was ordered by the lower court to produce email records stored on a server in Ireland. Should the Second Circuit uphold the order and the Supreme Court deny review, then under the principles announced today by the Court in the E.U., no U.S.-based company could ever be allowed to have "possession, custody, or control" of the data of E.U. citizens. You can bet that the E.U. case will weigh heavily in the Second Circuit's deliberations.  The E.U. decision is by far and away the largest legal event yet flowing out of the Edward Snowden disclosures, tectonic in scale. Up to now, Congress has succeeded in confining all NSA reforms to apply only to U.S. citizens. But now the large U.S. internet companies, Google, Facebook, Microsoft, Dropbox, etc., face the loss of all Europe as a market. Congress *will* be forced by their lobbying power to extend privacy protections to "non-U.S. persons."  Thank you again, Edward Snowden.
Gonzalo San Gil, PhD.

Google eavesdropping tool installed on computers without permission | Technology | The ... - 0 views

  •  
    "Privacy advocates claim always-listening component was involuntarily activated within Chromium, potentially exposing private conversations"