Open Web group: items tagged "documentation"

Gary Edwards

http://www.sdtimes.com/lgp/images/wp/What's%20next%20for%20HTML5.pdf - 0 views

  •  
    White paper from Intel discusses HTML5 and the future of computing. Intro: Computer programmers have been grappling with cross-platform issues since there was a second platform. Since then, the number of issues has rapidly increased. Today's developers can target at least four operating systems (plus their fragments), running on devices with all shapes, sizes, resolutions, persistence levels, input methods, carrier networks, connection speeds and states, UI conventions, app stores, deployment and update mechanisms, and on and on. Many of the world's developers once looked to Java* as the shining knight of cross-platform development. Indeed, the structured language of Sun* (and now Oracle) continues to solve many cross-platform issues. But it also introduces obstacles, not the least of which is a class structure that heavily burdens even the tiniest of program functions. Java's heft grew still more burdensome as developers turned to the browser for app delivery; Java applets are black boxes that are as opaque to the browser as the language is closed to the developer (with all due deference to the JCP). Around the same time Java was fuelling the browser wars, a like-named interpreted language was beginning to emerge. First called Mocha, later LiveScript, and finally JavaScript*, the language proved more useful than Java in some ways because it could interact with the browser and control content display using Cascading Style Sheets (CSS). JavaScript support soon became standard in every browser. It is now the programming language of HTML5, which is currently being considered by the World Wide Web Consortium as the next markup-language standard. To better understand HTML5, why it is where it is and where it's going, Intel® Software Adrenaline turned to Moh Haghighat, a senior principal engineer in the Developer Products Division of Intel's Software and Services Group. Moh was the technical lead from Intel's side on the first JavaScript
Gary Edwards

Office Productivity Software Is No Closer To Becoming A Commodity | Forrester Blogs - 0 views

  • We just published a report on the state of adoption of Office 2013 And Productivity Suite Alternatives based on a survey of 155 Forrester clients with responsibility for those investments. The sample does not fully represent the market, but lets us draw comparisons to the results of our previous survey in 2011. Some key takeaways from the data:
    - One in five firms uses email in the cloud. Another quarter plans to move at some point. More are using Office 365 (14%) than Google Apps (9%).
    - Just 22% of respondents are on Office 2013. Another 36% have plans to be on it. Office 2013's uptake will be slower than Office 2010 because fewer firms plan to combine the rollout of Office 2013 with Windows 8 as they combined Office 2010 with Windows 7.
    - Alternatives to Microsoft Office show little traction. In 2011, 13% of respondents supported open source alternatives to Office. This year the number is just 5%. Google Docs has slightly higher adoption and is in use at 13% of companies.
  • Microsoft continues to have a stranglehold on office productivity in the enterprise: Just 6% of companies in our survey give all or some employees an alternative instead of the installed version of Microsoft Office. Most surprising of all, multi-platform support is NOT a priority. Apps on iOS and Android devices were important to 16% of respondents, and support for non-Windows PCs was important to only 11%. For now, most technology decision-makers seem satisfied with leaving employees to self-provision office productivity apps on their smartphones and tablets if they really want them. 
  • Do you think we're getting closer to replacing Microsoft Office in the workplace?
Paul Merrell

This project aims to make '404 not found' pages a thing of the past - 0 views

  • The Internet is always changing. Sites are rising and falling, content is deleted, and bad URLs can lead to '404 Not Found' errors that are as helpful as a brick wall. A new project proposes to do away with dead 404 errors by implementing new HTML code that will help access prior versions of hyperlinked content. With any luck, that means that you’ll never have to run into a dead link again. The “404-No-More” project is backed by a formidable coalition including members from organizations like the Harvard Library Innovation Lab, Los Alamos National Laboratory, Old Dominion University, and the Berkman Center for Internet & Society. Part of the Knight News Challenge, which seeks to strengthen the Internet for free expression and innovation through a variety of initiatives, 404-No-More recently reached the semifinal stage. The project aims to cure so-called link rot, the process by which hyperlinks become useless over time because they point to addresses that are no longer available. If implemented, websites such as Wikipedia and other reference documents would be vastly improved. The new feature would also give Web authors a way to provide links that contain both archived copies of content and specific dates of reference, the sort of information that diligent readers have to hunt down on a website like Archive.org.
  • While it may sound trivial, link rot can actually have real ramifications. Nearly 50 percent of the hyperlinks in Supreme Court decisions no longer work, a 2013 study revealed. Losing footnotes and citations in landmark legal decisions can mean losing crucial information and context about the laws that govern us. The same study found that 70 percent of URLs within the Harvard Law Review and similar journals didn’t link to the originally cited information, considered a serious loss surrounding the discussion of our laws. The project’s proponents have come up with more potential uses as well. Activists fighting censorship will have an easier time combatting government takedowns, for instance. Journalists will be much more capable of researching dynamic Web pages. “If every hyperlink was annotated with a publication date, you could automatically view an archived version of the content as the author intended for you to see it,” the project’s authors explain. The ephemeral nature of the Web could no longer be used as a weapon. Roger Macdonald, a director at the Internet Archive, called the 404-No-More project “an important contribution to preservation of knowledge.”
  • The new feature would come in the form of introducing the mset attribute to the <a> element in HTML, which would allow users of the code to specify multiple dates and copies of content as an external resource. For instance, if an author knows both the date of reference and the location of a copy of the targeted content, the new markup would encode both alongside the original link (a hypothetical sketch appears below). The 404-No-More project’s goals are numerous, but the ultimate goal is to have mset become a new HTML standard for hyperlinks. “An HTML standard that incorporates archives for hyperlinks will loop in these efforts and make the Web better for everyone,” project leaders wrote, “activists, journalists, and regular ol’ everyday web users.”
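    The article's own example markup did not survive the page capture, so here is a hypothetical sketch of what such a link might look like. The mset attribute name comes from the proposal; the value syntax shown (a date of reference paired with the URI of an archived copy) is an assumption for illustration, not the project's final grammar:

        <!-- Hypothetical sketch: a link annotated with the date of reference
             and the location of an archived copy of the target content. -->
        <a href="http://example.com/report"
           mset="2014-04-01 http://web.archive.org/web/20140401000000/http://example.com/report">
          the report
        </a>

    Because HTML parsers ignore attributes they do not recognize, markup like this would degrade gracefully: a browser that never implements the proposal would simply treat it as an ordinary link.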
Paul Merrell

How an FBI informant orchestrated the Stratfor hack - 0 views

  • Sitting inside a medium-security federal prison in Kentucky, Jeremy Hammond looks defiant and frustrated.  “[The FBI] could've stopped me,” he told the Daily Dot last month at the Federal Correctional Institution, Manchester. “They could've. They knew about it. They could’ve stopped dozens of sites I was breaking into.” Hammond is currently serving the remainder of a 10-year prison sentence in part for his role in one of the most high-profile cyberattacks of the early 21st century. His 2011 breach of Strategic Forecasting, Inc. (Stratfor) left tens of thousands of Americans vulnerable to identity theft and irrevocably damaged the Texas-based intelligence firm's global reputation. He was also indicted for his role in the June 2011 hack of an Arizona state law enforcement agency's computer servers.
  • There's no question of his guilt: Hammond, 29, admittedly hacked into Stratfor’s network and exfiltrated an estimated 60,000 credit card numbers and associated data and millions of emails, information that was later shared with the whistleblower organization WikiLeaks and the hacker collective Anonymous.   Sealed court documents obtained by the Daily Dot and Motherboard, however, reveal that the attack was instigated and orchestrated not by Hammond, but by an informant, with the full knowledge of the Federal Bureau of Investigation (FBI).  In addition to directly facilitating the breach, the FBI left Stratfor and its customers—which included defense contractors, police chiefs, and National Security Agency employees—vulnerable to future attacks and fraud, and it requested knowledge of the data theft to be withheld from affected customers. This decision would ultimately allow for millions of dollars in damages.
Paul Merrell

F.C.C. Backs Opening Net Rules for Debate - NYTimes.com - 0 views

  • On Thursday, the Federal Communications Commission voted 3-2 to open for public debate new rules meant to guarantee an open Internet. Before the plan becomes final, though, the chairman of the commission, Tom Wheeler, will need to convince his colleagues and an array of powerful lobbying groups that the plan follows the principle of net neutrality, the idea that all content running through the Internet’s pipes is treated equally. While the rules are meant to prevent Internet providers from knowingly slowing data, they would allow content providers to pay for a guaranteed fast lane of service. Some opponents of the plan, those considered net neutrality purists, argue that allowing some content to be sent along a fast lane would essentially discriminate against other content.
  • “We are dedicated to protecting and preserving an open Internet,” Mr. Wheeler said immediately before the commission vote. “What we’re dealing with today is a proposal, not a final rule. We are asking for specific comment on different approaches to accomplish the same goal, an open Internet.”
  • Mr. Wheeler argued on Thursday that the proposal did not allow a fast lane. But the proposed rules do not address the connection between an Internet service provider, which sells a connection to consumers, and the operators of backbone transport networks that connect various parts of the Internet’s central plumbing. That essentially means that as long as an Internet service provider like Comcast or Verizon does not slow the service that a consumer buys, the provider can give faster service to a company that pays to get its content to consumers unimpeded.
  • The plan will be open for comment for four months, beginning immediately.
  • The public will have until July 15 to submit initial comments on the proposal to the commission, and until Sept. 10 to file comments replying to the initial discussions.
  •  
    I'll need to read the proposed rule, but this doesn't sound good. The FCC majority tries to spin this as options still being open, but I don't recall ever seeing formal regulations changed substantially from their proposed form. If there were to be substantial change, another proposal and comment period would be likely. The public cannot comment on what has not been proposed, so substantial departure from the proposal, absent a new proposal and comment period, would offend basic principles of public notice and comment rulemaking under the Administrative Procedure Act. The proverbial elephant in the room that the press hasn't picked up on yet is the fight that is going on behind the scenes in the Dept. of Justice. If the Antitrust Division gets its way, DoJ's public comments on the proposed rule could blow this show out of the water. The ISPs are regulated utility monopolies in vast areas of the U.S., with market consolidation at or near the limits of what the anti-trust folk will tolerate. And leveraging one monopoly (service to subscribers) to impose another (fees for internet-based businesses to gain high-speed access) is directly counter to the Sherman Act's section 2.   http://www.law.cornell.edu/uscode/text/15/2
Paul Merrell

Hacking Online Polls and Other Ways British Spies Seek to Control the Internet - The In... - 0 views

  • The secretive British spy agency GCHQ has developed covert tools to seed the internet with false information, including the ability to manipulate the results of online polls, artificially inflate pageview counts on web sites, “amplif[y]” sanctioned messages on YouTube, and censor video content judged to be “extremist.” The capabilities, detailed in documents provided by NSA whistleblower Edward Snowden, even include an old standby for pre-adolescent prank callers everywhere: A way to connect two unsuspecting phone users together in a call.
  • The “tools” have been assigned boastful code names. They include invasive methods for online surveillance, as well as some of the very techniques that the U.S. and U.K. have harshly prosecuted young online activists for employing, including “distributed denial of service” attacks and “call bombing.” But they also describe previously unknown tactics for manipulating and distorting online political discourse and disseminating state propaganda, as well as the apparent ability to actively monitor Skype users in real-time—raising further questions about the extent of Microsoft’s cooperation with spy agencies or potential vulnerabilities in Skype’s encryption. Here’s a list of how JTRIG describes its capabilities:
    • “Change outcome of online polls” (UNDERPASS)
    • “Mass delivery of email messaging to support an Information Operations campaign” (BADGER) and “mass delivery of SMS messages to support an Information Operations campaign” (WARPATH)
    • “Disruption of video-based websites hosting extremist content through concerted target discovery and content removal.” (SILVERLORD)
    • “Active skype capability. Provision of real time call records (SkypeOut and SkypetoSkype) and bidirectional instant messaging. Also contact lists.” (MINIATURE HERO)
    • “Find private photographs of targets on Facebook” (SPRING BISHOP)
    • “A tool that will permanently disable a target’s account on their computer” (ANGRY PIRATE)
    • “Ability to artificially increase traffic to a website” (GATEWAY) and “ability to inflate page views on websites” (SLIPSTREAM)
    • “Amplification of a given message, normally video, on popular multimedia websites (Youtube)” (GESTATOR)
    • “Targeted Denial Of Service against Web Servers” (PREDATORS FACE) and “Distributed denial of service using P2P. Built by ICTR, deployed by JTRIG” (ROLLING THUNDER)
    • “A suite of tools for monitoring target use of the UK auction site eBay (www.ebay.co.uk)” (ELATE)
    • “Ability to spoof any email address and send email under that identity” (CHANGELING)
    • “For connecting two target phone together in a call” (IMPERIAL BARGE)
    While some of the tactics are described as “in development,” JTRIG touts “most” of them as “fully operational, tested and reliable.” It adds: “We only advertise tools here that are either ready to fire or very close to being ready.”
Paul Merrell

China Pressures U.S. Companies to Buckle on Strong Encryption and Surveillance - 0 views

  • Before Chinese President Xi Jinping visits President Obama, he and Chinese executives have some business in Seattle: pressing U.S. tech companies, hungry for the Chinese market, to comply with the country’s new stringent and suppressive Internet policies. The New York Times reported last week that Chinese authorities sent a letter to some U.S. tech firms seeking a promise they would not harm China’s national security. That might require such things as forcing users to register with their real names, storing Chinese citizens’ data locally where the government can access it, and building government “back doors” into encrypted communication products for better surveillance. China’s new national security law calls for systems that are “secure and controllable”, which industry groups told the Times in July means companies will have to hand over encryption keys or even source code to their products. Among the big names joining Xi at Wednesday’s U.S.-China Internet Industry Forum: Apple, Google, Facebook, IBM, and Microsoft.
  • The meeting comes as U.S. law enforcement officials have been pressuring companies to give them a way to access encrypted communications. The technology community has responded by pointing out that any sort of hole for law enforcement weakens the entire system to attack from outside bad actors—such as China, which has been tied to many instances of state-sponsored hacking into U.S. systems. In fact, one argument privacy advocates have repeatedly made is that back doors for law enforcement would set a dangerous precedent when countries like China want the same kind of access to pursue their own domestic political goals. But here, potentially, the situation has been reversed, with China using its massive economic leverage to demand that sort of access right now. Human rights groups are urging U.S. companies not to give in.
Paul Merrell

NSA Doesn't Want Court That Found Phone Dragnet Illegal to Actually Do Anything About It - 0 views

  • The National Security Agency doesn’t think it’s relevant that its dragnet of American telephone data — information on who’s calling who, when, and for how long — was ruled illegal back in May. An American Civil Liberties Union lawsuit is asking the Second Circuit Court of Appeals, which reached that conclusion, to immediately enjoin the program. But the U.S. government responded on Monday evening, saying that Congressional passage of the USA Freedom Act trumped the earlier ruling. The Freedom Act ordered an end to the program — but with a six-month wind-down period.
  • The ACLU still maintains that even temporary revival is a blatant infringement on Americans’ legal rights. “We strongly disagree with the government’s claim that recent reform legislation was meant to give the NSA’s phone-records dragnet a new lease on life,” said Jameel Jaffer, the ACLU’s deputy legal director, in a statement. “The appeals court should order the NSA to end this surveillance now.  It’s unlawful and it’s an entirely unnecessary intrusion into the privacy of millions of people.” On Monday, the Obama administration announced that at the same time the National Security Agency ends the dragnet, it will also stop perusing the vast archive of data collected by the program. Read the U.S. government brief responding to the ACLU below:
  •  
    Go ACLU!
Paul Merrell

Here Are All the Sketchy Government Agencies Buying Hacking Team's Spy Tech | Motherboard - 0 views

  • They say what goes around comes around, and there's perhaps nowhere that rings more true than in the world of government surveillance. Such was the case on Monday morning when Hacking Team, the Italian company known for selling electronic intrusion tools to police and federal agencies around the world, awoke to find that it had been hacked itself—big time—apparently exposing its complete client list, email spools, invoices, contracts, source code, and more. Those documents show that not only has the company been selling hacking tools to a long list of foreign governments with dubious human rights records, but it’s also establishing a nice customer base right here in the good old US of A. The cache, which sources told Motherboard is legitimate, contains more than 400 gigabytes of files, many of which confirm previous reports that the company has been selling industrial-grade surveillance software to authoritarian governments. Hacking Team is known in the surveillance world for its flagship hacking suite, Remote Control System (RCS) or Galileo, which allows its government and law enforcement clients to secretly install “implants” on remote machines that can steal private emails, record Skype calls, and even monitor targets through their computer's webcam. Hacking Team in North America
  • According to leaked contracts, invoices and an up-to-date list of customer subscriptions, Hacking Team’s clients—which the company has consistently refused to name—also include Kazakhstan, Azerbaijan, Oman, Saudi Arabia, Uzbekistan, Bahrain, Ethiopia, Nigeria, Sudan and many others. The list of names matches the findings of Citizen Lab, a research lab at the University of Toronto's Munk School of Global Affairs that previously found traces of Hacking Team on the computers of journalists and activists around the world. Last year, the Lab's researchers mapped out the worldwide collection infrastructure used by Hacking Team's customers to covertly transport stolen data, unveiling a massive network comprised of servers based in 21 countries. Reporters Without Borders later named the company one of the “Enemies of the Internet” in its annual report on government surveillance and censorship.
  • We’ve only scratched the surface of this massive leak, and it’s unclear how Hacking Team will recover from having its secrets spilling across the internet for all to see. In the meantime, the company is asking all customers to stop using its spyware—and likely preparing for the worst.
Paul Merrell

New Leak Of Final TPP Text Confirms Attack On Freedom Of Expression, Public Health - 0 views

  • Offering a first glimpse of the secret 12-nation “trade” deal in its final form—and fodder for its growing ranks of opponents—WikiLeaks on Friday published the final negotiated text for the Trans-Pacific Partnership (TPP)’s Intellectual Property Rights chapter, confirming that the pro-corporate pact would harm freedom of expression by bolstering monopolies and would injure public health by blocking patient access to lifesaving medicines. The document is dated October 5, the same day it was announced in Atlanta, Georgia that the member states to the treaty had reached an accord after more than five years of negotiations. Aside from the WikiLeaks publication, the vast majority of the mammoth deal’s contents are still being withheld from the public—which a WikiLeaks press statement suggests is a strategic move by world leaders to forestall public criticism until after the Canadian election on October 19. Initial analyses suggest that many of the chapter’s more troubling provisions, such as broader patent and data protections that pharmaceutical companies use to delay generic competition, have stayed in place since draft versions were leaked in 2014 and 2015. Moreover, it codifies a crackdown on freedom of speech with rules allowing widespread internet censorship.
Paul Merrell

Facebook's Deepface Software Has Gotten Them in Deep Trouble | nsnbc international - 0 views

  • In a Chicago court, several Facebook users filed a class-action lawsuit against the social media giant for allegedly violating its users’ privacy rights to acquire the largest privately held stash of biometric face-recognition data in the world. The court documents reveal claims that “Facebook began violating the Illinois Biometric Information Privacy Act (IBIPA) of 2008 in 2010, in a purported attempt to make the process of tagging friends easier.”
  • This was accomplished through the “tag suggestions” feature provided by Facebook which “scans all pictures uploaded by users and identifies any Facebook friends they may want to tag.” The Facebook users maintain that this feature is a “form of data mining [that] violates user’s privacy”. One plaintiff said this is a “brazen disregard for its users’ privacy rights,” through which Facebook has “secretly amassed the world’s largest privately held database of consumer biometrics data.” The complaint adds that “Facebook actively conceals” its protocol using “faceprint databases” to identify Facebook users in photos, and “doesn’t disclose its wholesale biometrics data collection practices in its privacy policies, nor does it even ask users to acknowledge them.”
  • This would be a violation of the IBIPA which states it is “unlawful to collect biometric data without written notice to the subject stating the purpose and length of the data collection, and without obtaining the subject’s written release.” Because all users are automatically part of the “faceprint’ facial recognition program, this is an illegal act in the state of Illinois, according to the complaint. Jay Edelson, attorney for the plaintiffs, asserts the opt-out ability to prevent other Facebook users from tagging them in photos is “insufficient”.
  • Deepface is the name of the new technology researchers at Facebook created in order to identify people in pictures, mimicking the way humans recognize the differences in each other’s faces. Facebook has already implemented facial recognition software (FRS) to suggest names for tagging photos; however, Deepface can “identify faces from a side view” as well as when the person is directly facing the camera in the picture. In 2013, Erin Egan, chief privacy officer for Facebook, said that this upgrade “would give users better control over their personal information, by making it easier to identify posted photos in which they appear.” Egan explained: “Our goal is to facilitate tagging so that people know when there are photos of them on our service.” Facebook has stated that they retain information from their users that is siphoned from all across the web. This data is used to increase Facebook’s profits with the information being sold for marketing purposes. Side-view recognition is the impressive feature of Deepface, as previous FRS could only decipher faces in images that are frontal views of people. Shockingly, Deepface displays 97.25% accuracy in identifying faces in photos. That is quite a feat considering humans have a 97.53% accuracy rate. In order to ensure accuracy, Deepface “conducts its analysis based on more than 120 million different parameters.”
Paul Merrell

Dr Dobbs - HTML5 Web Storage - 0 views

  • HTML5 Web Storage is an API that makes it easy to persist data across web requests. Before the Web Storage API, remote web servers had to store any data that needed to persist, sending it back and forth from client to server. With the advent of the Web Storage API, however, developers can now store data directly in a browser for repeated access across requests, or to be retrieved long after the browser is completely closed, thus greatly reducing network traffic. One more reason to use Web Storage is that it is one of the few HTML5 APIs that is already supported in all browsers, including Internet Explorer 8.
  • In many cases, the same results can be achieved without involving a network or remote server. This is where the HTML5 Web Storage API comes in. By using this simple API, developers can store values in easily retrievable JavaScript objects, which persist across page loads. By using either sessionStorage or localStorage, developers can choose to let values survive either across page loads in a single window or tab, or across browser restarts, respectively. Stored data is not transmitted across the network, and is easily accessed on return visits to a page. Furthermore, larger values -- as high as a few megabytes -- can be persisted using the HTML5 Web Storage API. This makes Web Storage suitable for document and file data that would quickly blow out the size limit of a cookie.
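    A minimal sketch of the API described above, using the standard localStorage and sessionStorage objects (the key names and values here are invented for illustration):

        // Persists across browser restarts, per origin.
        localStorage.setItem('draftTitle', 'My article');
        var title = localStorage.getItem('draftTitle'); // "My article"

        // Persists across page loads, but only within this window or tab.
        sessionStorage.setItem('wizardStep', '2');

        // Values are stored as strings, so structured data must be serialized.
        localStorage.setItem('prefs', JSON.stringify({ theme: 'dark' }));
        var prefs = JSON.parse(localStorage.getItem('prefs'));

        // Remove one key, or clear everything stored for this origin.
        localStorage.removeItem('draftTitle');
        // localStorage.clear();

    None of these calls touch the network; the data lives in the browser until the script or the user clears it.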
Paul Merrell

Last Call Working Draft -- W3C Authoring Tool Accessibility Guidelines (ATAG) 2.0 - 0 views

  • This is a Working Draft of the Authoring Tool Accessibility Guidelines (ATAG) version 2.0. This document includes recommendations for assisting authoring tool developers to make the authoring tools that they develop more accessible to people with disabilities, including blindness and low vision, deafness and hearing loss, learning disabilities, cognitive limitations, motor difficulties, speech difficulties, and others. Accessibility, from an authoring tool perspective, includes addressing the needs of two (potentially overlapping) user groups with disabilities: authors of web content, whose needs are met by ensuring that the authoring tool user interface itself is accessible (addressed by Part A of the guidelines), and end users of web content, whose needs are met by ensuring that all authors are enabled, supported, and guided towards producing accessible web content (addressed by Part B of the guidelines).
  • Examples of authoring tools: ATAG 2.0 applies to a wide variety of web content generating applications, including, but not limited to:
    - web page authoring tools (e.g., WYSIWYG HTML editors)
    - software for directly editing source code (see note below)
    - software for converting to web content technologies (e.g., "Save as HTML" features in office suites)
    - integrated development environments (e.g., for web application development)
    - software that generates web content on the basis of templates, scripts, command-line input or "wizard"-type processes
    - software for rapidly updating portions of web pages (e.g., blogging, wikis, online forums)
    - software for generating/managing entire web sites (e.g., content management systems, courseware tools, content aggregators)
    - email clients that send messages in web content technologies
    - multimedia authoring tools
    - debugging tools for web content
    - software for creating mobile web applications
  • Web-based and non-web-based: ATAG 2.0 applies equally to authoring tools of web content that are web-based, non-web-based or a combination (e.g., a non-web-based markup editor with a web-based help system, a web-based content management system with a non-web-based file uploader client).
    Real-time publishing: ATAG 2.0 applies to authoring tools with workflows that involve real-time publishing of web content (e.g., some collaborative tools). For these authoring tools, conformance to Part B of ATAG 2.0 may involve some combination of real-time accessibility supports and additional accessibility supports available after the real-time authoring session (e.g., the ability to add captions for audio that was initially published in real-time). For more information, see the Implementing ATAG 2.0 - Appendix E: Real-time content production.
    Text Editors: ATAG 2.0 is not intended to apply to simple text editors that can be used to edit source content, but that include no support for the production of any particular web content technology. In contrast, ATAG 2.0 can apply to more sophisticated source content editors that support the production of specific web content technologies (e.g., with syntax checking, markup prediction, etc.).
  •  
    The link is the latest-version link, so the page should update when this specification graduates to a W3C Recommendation.
Paul Merrell

Media Queries - 0 views

  • Abstract HTML4 and CSS2 currently support media-dependent style sheets tailored for different media types. For example, a document may use sans-serif fonts when displayed on a screen and serif fonts when printed. ‘screen’ and ‘print’ are two media types that have been defined. Media queries extend the functionality of media types by allowing more precise labeling of style sheets. A media query consists of a media type and zero or more expressions that check for the conditions of particular media features. Among the media features that can be used in media queries are ‘width’, ‘height’, and ‘color’. By using media queries, presentations can be tailored to a specific range of output devices without changing the content itself.
  • There must be at least two interoperable implementations. For the purposes of this criterion, we define the following term: “interoperable” means passing the respective test case(s) in the CSS test suite, or, if the implementation is not a Web browser, an equivalent test. Every relevant test in the test suite should have an equivalent test created if such a user agent (UA) is to be used to claim interoperability. In addition, if such a UA is to be used to claim interoperability, then there must be one or more additional UAs which can also pass those equivalent tests in the same way for the purpose of interoperability. The equivalent tests must be made publicly available for the purposes of peer review.
  •  
    While the candidate Media Queries specification is interesting and a small step in the right direction, W3C continues to butcher the meaning of "interoperability." In this latest sleight of hand, we now have "interoperable" *user agents*, a term of art used by W3C for implementations that only receive and cannot return data, e.g., web browsers. But under competition law, "interoperability" requires implementations that can exchange data and *mutually* use data that has been exchanged. See e.g., European Commission v. Microsoft, European Community Court of First Instance (Grand Chamber Judgment of 17 September, 2007), para. 230, 374, 421, http://tinyurl.com/23md42c (rejecting Microsoft's argument that "interoperability" has a 1-way rather than 2-way meaning; "Directive 91/250 defines interoperability as 'the ability to exchange information and *mutually* to use the information which has been exchanged'") (emphasis added). W3C --- the World Wide Web Conspiracy --- continues down its rut of broadcasting information whilst denying the world the power to round-trip the data received. Incredibly, in its latest assault on the meaning of "interoperability", W3C no longer defines "conformance" but redefines the term "interoperability" as its substitute for "conformance." As though W3C could redefine the law?
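    For concreteness, a short sketch of the query syntax the abstract describes. The stylesheet form appears in the comment; the script uses window.matchMedia, the JavaScript hook browsers expose for the same grammar (matchMedia comes from the separate CSSOM View work, so its availability is an assumption here, and the width threshold and class name are invented for illustration):

        // Stylesheet equivalent:
        //   @media screen and (min-width: 600px) and (color) { ... }
        var mq = window.matchMedia('screen and (min-width: 600px) and (color)');

        // A media query is a media type plus zero or more feature expressions.
        function applyLayout(matches) {
          var base = document.body.className.replace(/ ?wide-layout/, '');
          document.body.className = matches ? base + ' wide-layout' : base;
        }
        applyLayout(mq.matches);

        // Re-evaluate when the viewport crosses the 600px boundary.
        mq.addListener(function (e) { applyLayout(e.matches); });

    The point of the design is that the same document can be restyled per device class without changing the content itself.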
Gary Edwards

4 Pillars for Web Content Management Site & Content Optimization - 0 views

  •  
    4 Pillars for Web Content Management Site & Content Optimization.  Excellent review of the basics of WCM - DMS. Billy Cripe from Oracle.
Paul Merrell

Selectors Level 3 - 0 views

  • This document describes the selectors that already exist in CSS1 [CSS1] and CSS2 [CSS21], and further introduces new selectors for CSS3 and other languages that may need them.
  •  
    W3C releases CSS Selectors Level 3 as Proposed Recommendation.
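    A few of the Level 3 additions in action. They are exercised here through document.querySelectorAll (the Selectors API, a separate specification) so the sketch is runnable; the element and attribute choices are invented for illustration:

        // Attribute substring matching: links whose href starts with "http://".
        var external = document.querySelectorAll('a[href^="http://"]');

        // Structural pseudo-classes: every odd table row, no classes needed.
        var oddRows = document.querySelectorAll('tr:nth-child(odd)');

        // General sibling combinator: any <p> that follows an <h2> sibling.
        var notes = document.querySelectorAll('h2 ~ p');

        // Negation pseudo-class: inputs that are not hidden fields.
        var visible = document.querySelectorAll('input:not([type="hidden"])');

    The same strings work as selectors in CSS rules; the API simply reuses the grammar this document defines.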
Paul Merrell

Official Google Docs Blog: Upload and store your files in the cloud with Google Docs - 0 views

  • We're happy to announce that over the next few weeks we will be rolling out the ability to upload, store and organize any type of file in Google Docs. With this change, you'll be able to upload and access your files from any computer -- all you need is an Internet connection. Instead of emailing files to yourself, which is particularly difficult with large files, you can upload to Google Docs any file up to 250 MB. You'll have 1 GB of free storage for files you don't convert into one of the Google Docs formats (i.e. Google documents, spreadsheets, and presentations), and if you need more space, you can buy additional storage for $0.25 per GB per year. This makes it easy to backup more of your key files online, from large graphics and raw photos to unedited home videos taken on your smartphone. You might even be able to replace the USB drive you reserved for those files that are too big to send over email. Combined with shared folders, you can store, organize, and collaborate on files more easily using Google Docs. For example, if you are in a club or PTA working on large graphic files for posters or a newsletter, you can upload them to a shared folder for collaborators to view, download, and print.
Gary Edwards

Is the Apps Marketplace just playing catchup to Microsoft? | Googling Google | ZDNet.com - 0 views

shared by Gary Edwards on 12 Mar 10
  • Take the basic communication, calendaring, and documentation enabled for free by Apps Standard Edition, add a few slick applications from the Marketplace and the sky was the limit. Or at least the clouds were.
    • Gary Edwards
       
      Google Apps have all the basic elements of a productivity environment, but lack the internal application messaging, data connectivity and exchange that made the Windows desktop productivity platform so powerful.   gAPPS are great.  They even have copy/paste! But they lack the basics needed for simple "merge" of client and contact data into a word processor letter/report/form/research paper. Things like DDE, OLE, ODBC, MAPI, COM, and DCOM have to be reinvented for the Open Web.   gAPPS is a good place to start.  But the focus has got to shift to Wave technologies like OT, XMPP and JSON.  Then there are the lower level innovations such as Web Sockets, Native Client, HTML5, and the Cairo-Skia graphics layer (thanks Florian).
  • Whether you (or your business) choose a Microsoft-centered solution that now has well-implemented cloud integration and tightly coupled productivity and collaboration software (think Office Live Web Apps, Office 2010, and Sharepoint 2010) or you build a business around the web-based collaboration inherent in Google Apps and extend its core functions with cool and useful applications, you win.
    • Gary Edwards
       
      Not true!!! The Microsoft Cloud is based on proprietary technologies, with the Silverlight-OOXML runtime/plug-in at the core of a WPF-.NET driven "Business Productivity Platform". The Google Cloud is based on the Open Web, and not the "Open Web" that's tied up in corporate "standards" consortia like the W3C, OASIS and Ecma.
      One of the reasons i really like WebKit is that they push HTML5 technologies to the edge, submitting new enhancements back into the knuckle dragging W3C HTML5 workgroups as "proposals".  They don't, however, wait for the entangled corporate politics of the W3C to "approve and include" these proposals.  Google and Apple submit and go live simultaneously.
      This, of course, negates the heavy influence platform rivals like Microsoft have over the activities of corporate standards orgs.  Which has to be done if the WebKit-HTML5-JavaScript-XMPP-OT-Web Sockets-Native Client family of technologies is ever to challenge the interactive and graphical richness of proprietary Microsoft technologies (Silverlight, OOXML, DrawingML, C#).
      The important hedge here is that Google is Open Sourcing their enhancements and innovations.  Without that Open Sourcing, i think there would be reasons to fear any platform player pushing beyond the corporate standards consortia approval process.  For me, OSS balances out the incredible influence of Google, and the ownership they have over core Open Web productivity application components.
      Which is to say; i don't want to find myself tomorrow in the same position with a Google Open Web Productivity Platform, that i found myself in with the 1994 Windows desktop productivity environment - where Microsoft owned every opportunity, and could take the marketshare of any Windows developed application with simple announcements that they too will enter that application category. (e.g., the entire independent contact/project management category was wiped out by mere announcement of MS Outlook).
Paul Merrell

[webkit-dev] Announcing WebKit2 - 0 views

  • This is a heads-up that we will shortly start landing patches for a new WebKit framework that we at Apple have been working on for a while. We currently call this new framework "WebKit2". WebKit2 is designed from the ground up to support a split process model, where the web content (JavaScript, HTML, layout, etc.) lives in a separate process. This model is similar to what Google Chrome offers, with the major difference being that we have built the process split model directly into the framework, allowing other clients to use it. Some high-level documentation is available at http://trac.webkit.org/wiki/WebKit2 . Currently WebKit2 is available for Mac and Windows, and we would gladly accept patches to add more ports.
Paul Merrell

Official Google Blog: Alis volat propriis: Oregon's bringing Google Apps to classrooms ... - 0 views

  • Things have changed since I was in middle school of course, and there are people working hard to bring technology into classrooms to help students learn and teachers teach. Today Oregon is taking a huge step in that direction — they’re the first state to open up Google Apps for Education to public schools throughout the state.Starting today, the Oregon Department of Education will offer Google Apps to all the school districts in the state — helping teachers, staff and students use Gmail, Docs, Sites, Video, Groups and more within their elementary, middle and high schools. School funding has been hit hard over the past couple of years, and Oregon is no exception. This move is going to save the Department of Education $1.5 million per year — big bucks for a hurting budget.With Google Apps, students in Oregon can build websites or email teachers about a project. Their documents and email will live online in the cloud — so they’ll be able to work from a classroom or a computer lab, at home or at the city (or county) library. And instead of just grading a paper at the end of the process, Oregonian teachers can help students with their docs in real time, coaching them along the way. It’s critical that students learn how to use the kind of productivity technology they’ll need throughout their lives, and Oregon is helping students across the state do just that.