
Group items tagged: copy-paste

Gary Edwards

Reinventing Copy and Paste - Anil Dash - 0 views

  • We can all learn a lot of lessons from the history of DDE/OLE/OLE3/COM/ActiveX/DCOM/COM+ (you can start reading up on Wikipedia to get some background) and how we went from everyone using best-of-breed standalone apps to one integrated, nearly monolithic Office. It basically all started with copy and paste. People who never spent much time in single-tasking, character-mode operating environments like the DOS command line don't recall that simply copying and pasting information between apps was difficult at the time. And part of the revelation of Windows for mainstream users (or Mac, for leading-edge tech fans) was being able to easily share data that way. This was different from what Unix users were used to with the command-line pipe, or from what most applications do with feeds today, in allowing structured information flows between applications.
    There's a desire to combine data from different sources in an arbitrary way, and to have the user interface display the appropriate tools for whatever context you're in. The dominant model here, probably because of the influence of the early PARC demos, is to have toolbars or UI widgets change depending on what kind of content you're manipulating. Microsoft was really into this in the early 90s with OLE2, where your Word toolbars would morph into Excel toolbars if you double-clicked on an embedded spreadsheet. It was ungainly and ugly and slow, especially if you had less than an exorbitant 8MB of RAM, but the idea was pretty cool. And it still is.
    People are so focused on data formats and feeds that they're ignoring consensus around UI interoperability. The Atom API and MetaWeblog API give me a good-enough interface if I want to treat a discrete chunk of information (like a blog post) as an undifferentiated blob. But all the erstwhile spec work around microformats and structured blogging (I forget which one is for XML and which one's for XHTML) doesn't seem to have addressed user experience or editing behavior.
Paul Merrell

This project aims to make '404 not found' pages a thing of the past - 0 views

  • The Internet is always changing. Sites are rising and falling, content is deleted, and bad URLs can lead to '404 Not Found' errors that are as helpful as a brick wall. A new project proposes to do away with 404 errors by implementing new HTML code that will help access prior versions of hyperlinked content. With any luck, that means you'll never have to run into a dead link again. The "404-No-More" project is backed by a formidable coalition including members from organizations like the Harvard Library Innovation Lab, Los Alamos National Laboratory, Old Dominion University, and the Berkman Center for Internet & Society. Part of the Knight News Challenge, which seeks to strengthen the Internet for free expression and innovation through a variety of initiatives, 404-No-More recently reached the semifinal stage. The project aims to cure so-called link rot, the process by which hyperlinks become useless over time because they point to addresses that are no longer available. If implemented, websites such as Wikipedia and other reference documents would be vastly improved. The new feature would also give Web authors a way to provide links that contain both archived copies of content and specific dates of reference, the sort of information that diligent readers have to hunt down on a website like Archive.org.
  • While it may sound trivial, link rot can have real ramifications. Nearly 50 percent of the hyperlinks in Supreme Court decisions no longer work, a 2013 study revealed. Losing footnotes and citations in landmark legal decisions can mean losing crucial information and context about the laws that govern us. The same study found that 70 percent of URLs within the Harvard Law Review and similar journals didn't link to the originally cited information, a serious loss for the discussion of our laws. The project's proponents have come up with more potential uses as well. Activists fighting censorship will have an easier time combating government takedowns, for instance. Journalists will be much more capable of researching dynamic Web pages. "If every hyperlink was annotated with a publication date, you could automatically view an archived version of the content as the author intended for you to see it," the project's authors explain. The ephemeral nature of the Web could no longer be used as a weapon. Roger Macdonald, a director at the Internet Archive, called the 404-No-More project "an important contribution to preservation of knowledge."
  • The new feature would come in the form of introducing the mset attribute to the <a> element in HTML, which would allow users of the code to specify multiple dates and copies of content as an external resource. For instance, if both the date of reference and the location of a copy of the targeted content are known by an author, the new code would look like this (see the sketch below). The 404-No-More project's goals are numerous, but the ultimate goal is to have mset become a new HTML standard for hyperlinks. "An HTML standard that incorporates archives for hyperlinks will loop in these efforts and make the Web better for everyone," project leaders wrote, "activists, journalists, and regular ol' everyday web users."
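    The article's original markup example did not survive in this excerpt. What follows is a minimal, hypothetical sketch of what an mset-annotated link might look like; the attribute name comes from the article, but the value format, URL, and date are invented for illustration and may differ from the project's actual proposal.

        <!-- Hypothetical sketch: a link carrying a reference date and the address of
             an archived copy, so a browser could fall back to the archive on a 404. -->
        <a href="http://example.com/article"
           mset="2014-04-01 http://archive.example.org/2014-04-01/article">
          the cited article
        </a>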
Gary Edwards

Mobile Enterprise: Android OS, Best Practices for Developing Mobile Strategies - 0 views

  • Convert Content for Android OS: Making your content mobile friendly is harder than it sounds. However, more tools are emerging to help companies create content for multiple platforms, from iPads to smartphones, across a variety of operating systems. Recently, AppsGeyser privately launched a web platform that allows you to convert any web content to an Android app. With AppsGeyser, companies can create an Android app in three ways:
    - Grabbing any website content block or web widget
    - Copying and pasting HTML code, JavaScript, AJAX, or Flash
    - Entering the URL of your website
    A nifty tool for instantly converting web site widgets into Android apps. Looks like a new category of tools to make legacy Web services mobile-ready. Titanium
Gary Edwards

Zoho's Next Big Thing | ge TalkBack on ZDNet - 0 views

  • Moving the Point of Assembly: Kudos to Zoho. Their efforts remind me of the early days of the Microsoft Productivity Environment (MOPE), where the core MSOffice editors expanded their reach through DDE, OLE, rich copy/paste, data binding, merged content and data, VBA scripting and the infamous macro recorder, and a developer API that meshed platform and productivity apps so deeply into end-user information that the binding of business processes to the MOPE is proving near impossible to break, even years after the fact. A business ecosystem for client/server was born back in the early '90s, with Microsoft going on to own the client side of the equation entirely.
Dizzywizard

The Need for a Reverse Creative Commons | PlagiarismToday - 0 views

  • A reverse CC system could fix that by having the user pick out the license that they need/want and then sending it as a permission request to the rightsholder via email. All the user would have to do is pick the rights they need, enter some information about the work, and then send it. This could also be used in situations where the copyright holder has a CC license but the user needs more permissions for a one-time use. (A rough sketch of what such a request form might look like follows these notes.)
  • Under your proposed model, it might potentially make it easier for someone like me to write to someone who has inquired about use, or already violated copyrights, with a link and a friendly "Here is where you can go to submit a permission request," without having to educate folks top to bottom on how it's done and why. (I've had people downright argue that they have every right to copy whatever they want because hey, it's online, and they can highlight, copy, and paste with the best of 'em!) Permissions are cumbersome, time consuming, and yes, important. With so much sharing online, it makes sense for artists/creatives to be proactive in helping to sculpt the online "culture" in a way that facilitates the fair sharing of ideas and information without taking from each artist's efforts or goals.
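    As a purely illustrative sketch of the flow described above: a form where the user picks the rights they need, describes the work, and sends the request to the rightsholder. Every field name and address here is invented for this sketch; the article describes no such service.

        <!-- Hypothetical permission-request form. Submitting opens the user's mail
             client with the selected rights and work details addressed to the
             rightsholder (the address below is a placeholder). -->
        <form action="mailto:rightsholder@example.com" method="post" enctype="text/plain">
          <fieldset>
            <legend>Rights requested</legend>
            <label><input type="checkbox" name="rights" value="reproduce"> Reproduce the work</label>
            <label><input type="checkbox" name="rights" value="adapt"> Modify or adapt it</label>
            <label><input type="checkbox" name="rights" value="commercial"> Use it commercially</label>
          </fieldset>
          <label>Work (title or URL): <input type="text" name="work"></label>
          <label>Intended use: <textarea name="intended-use"></textarea></label>
          <button type="submit">Send permission request</button>
        </form>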
Paul Merrell

Free At Last: New DMCA Rules Might Make the Web a Better Place | nsnbc international - 0 views

  • David Mao, the Librarian of Congress, has issued new rules pertaining to exemptions to the Digital Millennium Copyright Act (DMCA) after a three-year battle that was expedited in the wake of the Volkswagen scandal.
  • Opposition to this new decision is coming from the Environmental Protection Agency (EPA) and the auto industry, because the DMCA prohibits "circumventing encryption or access controls to copy or modify copyrighted works." For example, GM "claimed the exemption 'could introduce safety and security issues as well as facilitate violation of various laws designed specifically to regulate the modern car, including emissions, fuel economy, and vehicle safety regulations'." The exemption in question is in Section 1201, which forbids the unlocking of software access controls and has given the auto industry the unique ability to "threaten legal action against anyone who needs to get around those restrictions, no matter how legitimate the reason." Journalist Nick Statt points out that this provision "made it illegal in the past to unlock your smartphone from its carrier or even to share your HBO Go password with a friend. It's designed to let corporations protect copyrighted material, but it allows them to crack down on circumventions even when they're not infringing on those copyrights or trying to access or steal proprietary information."
  • Kit Walsh, staff attorney for the Electronic Frontier Foundation (EFF), explained that the "'access control' rule is supposed to protect against unlawful copying. But as we've seen in the recent Volkswagen scandal—where VW was caught manipulating smog tests—it can be used instead to hide wrongdoing hidden in computer code." Walsh continued: "We are pleased that analysts will now be able to examine the software in the cars we drive without facing legal threats from car manufacturers, and that the Librarian has acted to promote competition in the vehicle aftermarket and protect the long tradition of vehicle owners tinkering with their cars and tractors. The year-long delay in implementing the exemptions, though, is disappointing and unjustified. The VW smog tests and a long run of security vulnerabilities have shown researchers and drivers need the exemptions now." As part of the new changes, gamers can "modify an old video game so it doesn't perform a check with an authentication server that has since been shut down" after the publisher cuts off support for the game.
  • Another positive from the change is that smartphone users will be able to jailbreak their phones and finally enjoy running operating systems and applications from any source, not just those approved by the manufacturer. And finally, those who remix excerpts from DVDs, Blu-ray discs, or downloading services will be allowed to mix the material into their own work without violating the DMCA.
Gary Edwards

oEmbed: How New Twitter Could Help Combine Content From Different Sites - 0 views

  • Transclusion of hypertext documents. Transclusion is technically defined as "when you put that one thing in that other thing." In its current implementation, Twitter has declared that media shown within the Twitter interface comes from selected partners. But actually, the technology to allow embedding of rich media from almost any site already exists, using a system called oEmbed. Geeky stuff, but it's made by nice people who are pretty smart, and it lets any site say, "Hey, if you want to put our thing in your thing, do it like this." It works. Lots of sites do it. Nobody's getting rich off of it, but nobody's getting sued, and in between those two extremes lies most of what makes the Web great. (A minimal sketch of the oEmbed handshake follows.)
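    To make the "do it like this" concrete, here is a minimal sketch of the oEmbed discovery flow. The pattern — a <link> tag advertising a JSON endpoint, and a JSON reply whose "html" field is dropped into the consumer's page — follows the oEmbed spec; the domain names, URLs, and dimensions are placeholders, not any real provider's API.

        <!-- Step 1: the provider's page advertises its oEmbed endpoint. -->
        <link rel="alternate" type="application/json+oembed"
              href="https://media.example.com/oembed?url=https%3A%2F%2Fmedia.example.com%2Fvideo%2F42&format=json"
              title="Example video">

        <!-- Step 2: the consumer fetches that endpoint and receives JSON such as:
             { "version": "1.0", "type": "video", "width": 640, "height": 360,
               "html": "<iframe src=\"https://media.example.com/embed/42\"></iframe>" } -->

        <!-- Step 3: the consumer inserts the returned "html" snippet into its own page. -->
        <iframe src="https://media.example.com/embed/42" width="640" height="360"></iframe>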
Gary Edwards

Is the Apps Marketplace just playing catchup to Microsoft? | Googling Google | ZDNet.com - 0 views

shared by Gary Edwards on 12 Mar 10
  • Take the basic communication, calendaring, and documentation enabled for free by Apps Standard Edition, add a few slick applications from the Marketplace and the sky was the limit. Or at least the clouds were.
    • Gary Edwards
      Google Apps have all the basic elements of a productivity environment, but lack the internal application messaging, data connectivity, and exchange that made the Windows desktop productivity platform so powerful. gAPPS are great. They even have copy/paste! But they lack the basics needed for a simple "merge" of client and contact data into a wordprocessor letter/report/form/research paper. Things like DDE, OLE, ODBC, MAPI, COM, and DCOM have to be reinvented for the Open Web. gAPPS is a good place to start. But the focus has got to shift to Wave technologies like OT, XMPP, and JSON. Then there are the lower-level innovations such as Web Sockets, Native Client, HTML5, and the Cairo-Skia graphics layer (thanks Florian).
  • Whether you (or your business) choose a Microsoft-centered solution that now has well-implemented cloud integration and tightly coupled productivity and collaboration software (think Office Live Web Apps, Office 2010, and Sharepoint 2010) or you build a business around the web-based collaboration inherent in Google Apps and extend its core functions with cool and useful applications, you win.
    • Gary Edwards
      Not true!!! The Microsoft Cloud is based on proprietary technologies, with the Silverlight-OOXML runtime/plug-in at the core of a WPF-.NET driven "Business Productivity Platform." The Google Cloud is based on the Open Web, and not the "Open Web" that's tied up in corporate "standards" consortia like the W3C, OASIS, and Ecma.
      One of the reasons I really like WebKit is that they push HTML5 technologies to the edge, submitting new enhancements back into the knuckle-dragging W3C HTML5 workgroups as "proposals." They don't, however, wait for the entangled corporate politics of the W3C to "approve and include" these proposals. Google and Apple submit and go live simultaneously. This of course negates the heavy influence platform rivals like Microsoft have over the activities of corporate standards orgs. Which has to be done if the WebKit-HTML5-JavaScript-XMPP-OT-Web Sockets-Native Client family of technologies is ever to challenge the interactive and graphical richness of proprietary Microsoft technologies (Silverlight, OOXML, DrawingML, C#).
      The important hedge here is that Google is open-sourcing their enhancements and innovations. Without that open sourcing, I think there would be reasons to fear any platform player pushing beyond the corporate standards consortia approval process. For me, OSS balances out the incredible influence of Google, and the ownership they have over core Open Web productivity application components.
      Which is to say: I don't want to find myself tomorrow in the same position with a Google Open Web productivity platform that I found myself in with the 1994 Windows desktop productivity environment, where Microsoft owned every opportunity and could take the market share of any Windows-developed application with the simple announcement that they too would enter that application category (e.g., the entire independent contact/project management category was wiped out by the mere announcement of MS Outlook).
Paul Merrell

Information Warfare: Automated Propaganda and Social Media Bots | Global Research - 0 views

  • NATO has announced that it is launching an “information war” against Russia. The UK publicly announced a battalion of keyboard warriors to spread disinformation. It’s well-documented that the West has long used false propaganda to sway public opinion. Western military and intelligence services manipulate social media to counter criticism of Western policies. Such manipulation includes flooding social media with comments supporting the government and large corporations, using armies of sock puppets, i.e. fake social media identities. See this, this, this, this and this. In 2013, the American Congress repealed the formal ban against the deployment of propaganda against U.S. citizens living on American soil. So there’s even less to constrain propaganda than before.
  • Information warfare for propaganda purposes also includes:
    - The Pentagon, Federal Reserve, and other government entities using software to track discussion of political issues … to try to nip dissent in the bud before it goes viral
    - "Controlling, infiltrating, manipulating and warping" online discourse
    - Use of artificial intelligence programs to try to predict how people will react to propaganda
  • Some of the propaganda is spread by software programs. We pointed out 6 years ago that people were writing scripts to censor hard-hitting information from social media. One of America's top cyber-propagandists – former high-level military information officer Joel Harding – wrote in December: I was in a discussion today about information being used in social media as a possible weapon. The people I was talking with have a tool which scrapes social media sites, gauges their sentiment and gives the user the opportunity to automatically generate a persuasive response. Their tool is called a "Social Networking Influence Engine". *** The implications seem to be profound for the information environment. *** The people who own this tool are in the civilian world and don't even remotely touch the defense sector, so getting approval from the US Department of State might not even occur to them.
  • How Can This Be Real? Gizmodo reported in 2010: Software developer Nigel Leck got tired of rehashing the same 140-character arguments against climate change deniers, so he programmed a bot that does the work for him. With citations! Leck's bot, @AI_AGW, doesn't just respond to arguments directed at Leck himself; it goes out and picks fights. Every five minutes it trawls Twitter for terms and phrases that commonly crop up in Tweets that refute human-caused climate change. It then searches its database of hundreds to find the counter-argument best suited for that tweet—usually a quick statement and a link to a scientific source. As can be the case with these sorts of things, many of the deniers don't know they've been targeted by a robot and engage AI_AGW in debate. The bot will continue to fire back canned responses that best fit the interlocutor's line of debate—Leck says this goes on for days, in some cases—and the bot's been outfitted with a number of responses on the topic of religion, where the arguments unsurprisingly often end up. Technology has come a long way in the past 5 years. So if a lone programmer could do this 5 years ago, imagine what he could do now. And the big players have a lot more resources at their disposal than a lone climate activist/software developer does. For example, a government expert told the Washington Post that the government "quite literally can watch your ideas form as you type" (and see this). So if the lone programmer is doing it, it's not unreasonable to assume that the big boys are widely doing it.
  • How Effective Are Automated Comments? Unfortunately, this is more effective than you might assume … Specifically, scientists have shown that name-calling and swearing break down people's ability to think rationally … and intentionally sowing discord and posting junk comments to push down insightful comments are common propaganda techniques. Indeed, an automated program need not even be that sophisticated … it can copy a couple of words from the main post or a comment, and then spew back one or more radioactive labels such as "terrorist", "commie", "Russia-lover", "wimp", "fascist", "loser", "traitor", "conspiratard", etc. Given that Harding and his compadres consider anyone who questions any U.S. policies an enemy of the state – as does the Obama administration (and see this) – many honest, patriotic writers and commenters may be targeted for automated propaganda comments.