
Open Web: Group items tagged "become"


Paul Merrell

Assange Keeps Warning Of AI Censorship, And It's Time We Started Listening - 0 views

  • Where power is not overtly totalitarian, wealthy elites have bought up all media, first in print, then radio, then television, and used it to advance narratives that are favorable to their interests. Not until humanity gained widespread access to the internet has our species had the ability to freely and easily share ideas and information on a large scale without regulation by the iron-fisted grip of power. This newfound ability arguably had a direct impact on the election for the most powerful elected office in the most powerful government in the world in 2016, as a leak publishing outlet combined with alternative and social media enabled ordinary Americans to tell one another their own stories about what they thought was going on in their country. This newly democratized narrative-generating power of the masses gave those in power an immense fright, and they’ve been working to restore the old order of power controlling information ever since. And the editor-in-chief of the aforementioned leak publishing outlet, WikiLeaks, has been repeatedly trying to warn us about this coming development.
  • In a statement that was recently read during the “Organising Resistance to Internet Censorship” webinar, sponsored by the World Socialist Web Site, Assange warned of how “digital super states” like Facebook and Google have been working to “re-establish discourse control”, giving authority over how ideas and information are shared back to those in power. Assange went on to say that the manipulative attempts of world power structures to regain control of discourse in the information age have been “operating at a scale, speed, and increasingly at a subtlety, that appears likely to eclipse human counter-measures.” What this means is that, using increasingly advanced forms of artificial intelligence, power structures are becoming more and more capable of controlling the ideas and information that people are able to access and share with one another, hiding information which goes against the interests of those power structures, and elevating narratives which support those interests, all of course while maintaining the illusion of freedom and lively debate.
  • To be clear, this is already happening. Due to a recent shift in Google’s “evaluation methods”, traffic to left-leaning and anti-establishment websites has plummeted, with sites like WikiLeaks, Alternet, Counterpunch, Global Research, Consortium News, Truthout, and WSWS losing up to 70 percent of the views they were getting prior to the changes. Powerful billionaire oligarchs Pierre Omidyar and George Soros are openly financing the development of “an automated fact-checking system” (AI) to hide “fake news” from the public.
  • To make matters even worse, there’s no way to know the exact extent to which this is going on, because we know that we can absolutely count on the digital super states in question to lie about it. In the lead-up to the 2016 election, Twitter CEO Jack Dorsey was asked point-blank if Twitter was keeping #DNCLeaks, a hashtag people were using to build awareness of the DNC emails which had just been published by WikiLeaks, from trending, and Dorsey flatly denied it. More than a year later, we learned from prepared testimony before the Senate Subcommittee on Crime and Terrorism by Twitter’s acting general counsel Sean J. Edgett that this was completely false and that Twitter had indeed been doing exactly that to protect the interests of US political structures by sheltering the public from information allegedly gathered by Russian hackers.
  • Imagine going back to a world like the Middle Ages where you only knew the things your king wanted you to know, except you could still watch innocuous kitten videos on YouTube. That appears to be where we may be headed, and if that happens the possibility of any populist movement arising to hold power to account may be effectively locked out from the realm of possibility forever. To claim that these powerful new media corporations are just private companies practicing their freedom to determine what happens on their property is to bury your head in the sand and ignore the extent to which these digital super states are already inextricably interwoven with existing power structures. In a corporatist system of government, which America unquestionably has, corporate censorship is government censorship, of an even more pernicious strain than if Jeff Sessions were touring the country burning books. The more advanced artificial intelligence becomes, the more adept these power structures will become at manipulating us. Time to start paying very close attention to this.
Gary Edwards

Diary Of An x264 Developer » Flash, Google, VP8, and the future of internet v... - 0 views

  •  
    In-depth technical discussion about Flash, HTML5, H.264, and Google's VP8.  Excellent.  Read the comments.  Bottom line - Google has the juice to put Flash and H.264 in the dirt.  The YouTube acquisition turns out to be very strategic. excerpt: The internet has been filled for quite some time with an enormous number of blog posts complaining about how Flash sucks - so much that it's sounding as if the entire internet is crying wolf.  But, of course, despite the incessant complaining, they're right: Flash has terrible performance on anything other than Windows x86 and Adobe doesn't seem to care at all.  But rather than repeat this ad nauseam, let's be a bit more intellectual and try to figure out what happened. Flash became popular because of its power and flexibility.  At the time it was the only option for animated vector graphics and interactive content (stuff like VRML hardly counts).  Furthermore, before Flash, the primary video options were Windows Media, Real, and Quicktime: all of which were proprietary, had no free software encoders or decoders, and (except for Windows Media) required the user to install a clunky external application, not merely a plugin.  Given all this, it's clear why Flash won: it supported open multimedia formats like H.263 and MP3, used an ultra-simple container format that anyone could write (FLV), and worked far more easily and reliably than any alternative. Thus, Adobe (actually, at the time, Macromedia) got their 98% install base.  And with that, they began to become complacent.  Any suggestion of a competitor was immediately shrugged off; how could anyone possibly compete with Adobe, given their install base?  It'd be insane; nobody would be able to do it.  They committed the cardinal sin of software development: believing that a competitor being better is excusable.  At x264, if we find a competitor that does something better, we immediately look into trying to put ourselves back on top.  This is why
Gary Edwards

Box.net Gets 48 million more to build enterprise platform | ZDNet - 0 views

  •  
    In taking this next step Box are closing some acquisition doors in electing to attempt to become a core piece of enterprise infrastructure rather than be swallowed up into someone else's larger offering. It's a brave and interesting move that will see them attempting to penetrate on-premise document and project management opportunities that are currently dominated by entrenched vendors, notably SharePoint. Box's collaboration and work flow tools are currently adequate but unremarkable, and while the user interfaces are well done and unintimidating, they are now attempting to enter the areas of business steeped in document versioning and email inefficiencies that have been so lucrative to Microsoft, who can't be blamed for not cannibalizing their licensing golden geese of Office, SharePoint and Exchange yet, and probably made $48 million as you read this sentence. Addressing the inefficiencies of these old ways of working is at the core of the modern collaborative enterprise, and it is primarily focusing on business purpose and performance from participants that ultimately unlocks the greater efficiencies possible with 2.0 technologies. The challenge for Box will be to avoid becoming a larger document and content graveyard while providing greater business agility, and this requires some cultural shifts in their offerings to target customers.
Gary Edwards

Gray Matter : Open XML and the SharePoint Conference - 0 views

  •  
    excerpt: The trend in Office development is the migration of solutions away from in-application scripted processing toward more data-centric development. Of course this is a primary purpose of Open XML, and it is great to see the amount of activity in this area. We've seen customers scripting Word in a server environment to batch process / print documents or for other automation tasks. In reality Word isn't built to do that on a large scale; it is better to work directly against the document rather than via the application whenever possible. The Open XML SDK unlocks a "whole nuther" environment for document processing, and gets you out of the business of scripting client apps on servers to do the work of a true server application (not to mention the licensing problems created by installing Office on a server). comment:  Gray makes a very important point here.  The dominance of the desktop based MSOffice Productivity Environment was largely based on the embedded logic driving "in-process" documents that was application and platform (Win32 API) specific.  Tear open any of these workgroup-workflow oriented compound documents and you find application specific scripts, macros, OLE, data bindings, security settings and other application specific settings.  These internal components are certain to break whenever these highly interactive and "live" compound documents are converted to another format, or application use.  This is how MSOffice documents and the business processes they represent become "bound" to the MSOffice Productivity Environment. What Gray is pointing to here is that Microsoft is moving the legacy Productivity Environment to an MSWeb-based center where OpenXML, Silverlight, CAML, XAML and a number of other .NET-WPF technologies become the workgroup drivers.  The key applications for the MS WebStack are Exchange/SharePoint/SQL Server.  To make this move, documents had to be separated from the legacy desktop Productivity Environment settings. Note th
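Gray's point about working directly against the document file rather than scripting the client application can be sketched quickly. The example below is a minimal illustration only, not the Open XML SDK itself: it uses the third-party python-docx package as a stand-in, together with a hypothetical letter_template.docx containing a made-up {{RECIPIENT}} placeholder.

```python
# Minimal sketch: batch-process documents by editing the .docx package directly,
# rather than automating Word on a server. Assumes the third-party `python-docx`
# package and a hypothetical template file with a {{RECIPIENT}} placeholder.
from docx import Document

recipients = ["Alice Example", "Bob Example"]

for name in recipients:
    doc = Document("letter_template.docx")        # open the document itself
    for paragraph in doc.paragraphs:
        if "{{RECIPIENT}}" in paragraph.text:
            # Setting .text replaces the paragraph's runs with plain text.
            paragraph.text = paragraph.text.replace("{{RECIPIENT}}", name)
    doc.save(f"letter_{name.replace(' ', '_')}.docx")
```

The same document-centric approach is what the Open XML SDK provides in the .NET stack Gray describes; no copy of the desktop application needs to be installed or licensed on the server.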
Gary Edwards

The Anatomy of an iPhone Site | Build Internet! - 0 views

  •  
    In today's world the internet travels. Not just through laptops and wireless signal, but through a growing number of smart phones. The trick? Getting your site to travel just as well. Build to Touch:  The iPhone did two things differently. The full browser was a good first, but the second changed the fundamentals of interaction in a new direction. The phone is driven by touch. The best applications and websites have navigations that complement this. Buttons are larger and more accommodating, and interfaces become more intuitive when they seem tactile. For the average web designer, you'll save yourself a significant amount of time and headache by simply giving the site some iPhone-sensitive browser design. Applications must be approved before going live, and can require extensive knowledge of development tools.
Gary Edwards

Why Cloud Computing is the Future of Mobile - 0 views

  •  
    This one's for Florian. He's been wondering about mobile computing and that creeping sense of being left out of something big. The desktop is so not happening. Its day has come and gone. Now there is a study out from ABI Research, connecting mobile computing to the future of the Web. Good stuff: Intro Excerpt: The term "cloud computing" is being bandied about a lot these days, mainly in the context of the "future of the web." But cloud computing's potential doesn't begin and end with the personal computer's transformation into a thin client - the mobile platform is going to be heavily impacted by this technology as well. At least that's the analysis being put forth by ABI Research. Their recent report, Mobile Cloud Computing, theorizes that the cloud will soon become a disruptive force in the mobile world, eventually becoming the dominant way in which mobile applications operate.
Gary Edwards

New Adobe Air 2.0 Released : ISEdb.COM - 0 views

  •  
    Is Adobe AiR a Virtual Desktop?  We expect a VD to run an alien OS and those OS specific applications.  With AiR 2.0 it seems Adobe has ditched the "OS" component of a VD, and the OS specific applications, but is quite capable of running AiR based applications and information services that would otherwise have been designed for a specific OS environment.   Another way of looking at this would be to say that VD's are designed to run existing OS and OS specific applications, while AiR is designed to run newly written OS independent applications that have one very important advantage over legacy applications and information systems;  AiR speaks the language of the Web 3.0.   This is WebKit HTML5-CSS3 with an advanced but Air-specific version of JavaScript called "ActionScript".  What Adobe doesn't do is provide support for other critically important aspects of the WebKit interactive Web 3.0 model: support for Canvas/SVG!  Adobe continues to push the proprietary SWF interactive vector graphics format.   Note that Microsoft's Silverlight universal runtime does not support anything in the WebKit Web 3.0 model!  It's all proprietary. excerpt: For the first time since 2007, Adobe has updated its Air platform, released recently in beta with a slew of new features. The features include support for detection of mass storage devices, advanced networking capabilities, ability to open a file with its default application, improved cross-platform printing, and a bunch of other things that you probably won't really notice in any other way other than your Adobe working significantly more efficiently and smoothly than before. The 2.0 version of Air also will be able to support HTML5 and CSS3, due to an upgrade of its WebKit. Developers will also be happy to know that they can create Air applications that can be installed through a native installer. Air's changes have seen it morph into something of an 'operating system sitting on an operating system'. According
Paul Merrell

The Latest Rules on How Long NSA Can Keep Americans' Encrypted Data Look Too Familiar |... - 0 views

  • Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines. NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
  • Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.” This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013. 
  • While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
  • This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so. It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
  • This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception. Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
  • There are a few additional observations that are worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data. The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic; say that an NSA target uses example@gmail.com to communicate. NSA would then search the content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever the email that references example@gmail.com is somehow mixed together with emails that have nothing to do with the target. This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target. (A short sketch following these annotations illustrates why such a match fails against ciphertext.)
  • This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons. However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection.
  • This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.” The collection of Internet traffic based on patterns associated with types of communications would be bulk collection, more akin to NSA’s collection of phone records en masse than it is to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
  • The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
  •  
    Tucked into a budget authorization act in December without press notice. Section 309 (the Act is linked from the article) appears to be very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions from the five-year retention limitation than the encrypted information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: Under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals. Also, an important definition from the new Intelligence Authorization Act: "(a) DEFINITIONS.-In this section: (1) COVERED COMMUNICATION.-The term ''covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage."
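The pattern-matching point in the annotations above can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not a description of NSA tooling: it uses a made-up selector and the third-party Python cryptography package (Fernet) as a stand-in for whatever encryption the communicating parties actually use. Once the message is encrypted, a substring match for the selector finds nothing, which is why retention rules that sweep in "enciphered" traffic end up covering whole classes of communications rather than matched targets.

```python
# Minimal sketch: a selector match works on plaintext but not on ciphertext.
# Assumes the third-party `cryptography` package; the selector is hypothetical.
from cryptography.fernet import Fernet

selector = b"example@gmail.com"   # hypothetical target selector
message = b"Please write to example@gmail.com about the shipment."

key = Fernet.generate_key()       # key known only to the communicating parties
ciphertext = Fernet(key).encrypt(message)

print(selector in message)        # True  - matching against plaintext works
print(selector in ciphertext)     # False - nothing recognizable to match against
```

Nothing here says anything about what NSA actually deploys; it only shows why an "about" selector cannot be found in encrypted traffic, so the traffic is either dropped or retained wholesale in the hope of later decryption.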
Paul Merrell

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Jus... - 0 views

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interferes with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study, where the company spread 50 “lost” phones in public to see what people who picked up the phones would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted contacting the owner.
  • ...8 more annotations...
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay ensures some period of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities and to conduct surveillance. Clearly, the reality is that the threat of such a breach, whether from a hacker or a nation state actor, is very real. Even if companies go the extra mile and create a different means of access for every phone, such as a separate access key for each phone, significant vulnerabilities will be created. It would still be possible for a malicious actor to gain access to the database containing those keys, which would enable them to defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementation and maintenance of such a complex system could be high.
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations that we once had through disposable letters and in-person conversations now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported the idea that court-ordered decryption doesn’t violate human rights, provided certain criteria are met, in his report on the topic. However, in the context of Vance’s proposal, this seems to conflate the concepts of court-ordered decryption and of government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology, concluded that: [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous. Unaddressed Concerns: Preventing Encrypted Devices from Entering the US and the Slippery Slope. In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country — encrypted phones aren’t immediately obvious in the way contraband is. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology will be commonplace in 10 or 20 years that is not even around today. There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like merely science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but is rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.
Paul Merrell

Google+ YouTube Integration: Kind of Like Twilight, Except In This Version When +Cullen... - 0 views

  • Google+ YouTube Integration: Kind of Like Twilight, Except In This Version When +Cullen Drinks BellaTube’s Blood They Both Become Mortal, But +Cullen Is Still An Abusive Creep, Also It Is Still Bad
  •  
    A lot of anger here but some valid criticism too. Well placed in context by some very choice embedded links. I'm not a frequent commenter on YouTube. In fact, I don't think I ever have. But YouTube comments definitely got messed up big-time by the integration with Google+. If you follow the first link "the basics" you'll see pretty quickly that some of the problems can't be fixed without crippling Google+. 
Paul Merrell

WikiLeaks' Julian Assange warns: Google is not what it seems - 0 views

  • Back in 2011, Julian Assange met up with Eric Schmidt for an interview that he considers the best he’s ever given. That doesn’t change, however, the opinion he now has about Schmidt and the company he represents, Google. In fact, the WikiLeaks leader doesn’t believe in the famous “Don’t Be Evil” mantra that Google has been preaching for years. Assange thinks both Schmidt and Google are at the exact opposite end of the spectrum. “Nobody wants to acknowledge that Google has grown big and bad. But it has. Schmidt’s tenure as CEO saw Google integrate with the shadiest of US power structures as it expanded into a geographically invasive megacorporation. But Google has always been comfortable with this proximity,” Assange writes in an opinion piece for Newsweek.
  • “Long before company founders Larry Page and Sergey Brin hired Schmidt in 2001, their initial research upon which Google was based had been partly funded by the Defense Advanced Research Projects Agency (DARPA). And even as Schmidt’s Google developed an image as the overly friendly giant of global tech, it was building a close relationship with the intelligence community,” Assange continues. Throughout the lengthy article, Assange goes on to explain how the 2011 meeting came to be and talks about the people the Google executive chairman brought along - Lisa Shields, then vice president of the Council on Foreign Relations, Jared Cohen, who would later become the director of Google Ideas, and Scott Malcomson, the book’s editor, who would later become the speechwriter and principal advisor to Susan Rice. “At this point, the delegation was one part Google, three parts US foreign-policy establishment, but I was still none the wiser.” Assange goes on to explain the work Cohen was doing for the government prior to his appointment at Google and just how Schmidt himself plays a bigger role than previously thought. In fact, he says that his original image of Schmidt, as a politically unambitious Silicon Valley engineer, “a relic of the good old days of computer science graduate culture on the West Coast,” was wrong.
  • However, Assange concedes that that is not the sort of person who attends Bilderberg conferences, who regularly visits the White House, and who delivers speeches at the Davos Economic Forum. He claims that Schmidt’s emergence as Google’s “foreign minister” did not come out of nowhere, but it was “presaged by years of assimilation within US establishment networks of reputation and influence.” Assange makes further accusations that, well before Prism had even been dreamed of, the NSA was already systematically violating the Foreign Intelligence Surveillance Act under its director at the time, Michael Hayden. He states, however, that during the same period, namely around 2003, Google was accepting NSA money to provide the agency with search tools for its rapidly-growing database of information. Assange continues by saying that in 2008, Google helped launch the NGA spy satellite, the GeoEye-1, into space and that the search giant shares the photographs from the satellite with the US military and intelligence communities. Later on, in 2010, after the Chinese government was accused of hacking Google, the company entered into a “formal information-sharing” relationship with the NSA, which would allow the NSA’s experts to evaluate the vulnerabilities in Google’s hardware and software.
  • “Around the same time, Google was becoming involved in a program known as the “Enduring Security Framework” (ESF), which entailed the sharing of information between Silicon Valley tech companies and Pentagon-affiliated agencies at network speed. Emails obtained in 2014 under Freedom of Information requests show Schmidt and his fellow Googler Sergey Brin corresponding on first-name terms with NSA chief General Keith Alexander about ESF,” Assange writes. Assange seems to have a lot of backing to his statements, providing links left and right, which people can go check on their own.
  •  
    The "opinion piece for Newsweek" is an excerpt from Assange's new book, When Google met Wikileaks.  The chapter is well worth the read. http://www.newsweek.com/assange-google-not-what-it-seems-279447
Paul Merrell

Red Hat's CEO: Clouds can become the mother of all lock-ins | Cloud Computing - InfoWorld - 0 views

  • Cloud architecture has to be defined in a way that allows applications to move around, or clouds can become the mother of all lock-ins, warned Red Hat's CEO James Whitehurst. Once users get stuck in something, it's hard for them to move, Whitehurst said in an interview. The industry has to get in front of the cloud computing wave and make sure this next generation infrastructure is defined in a way that's friendly to customers, rather than to IT vendors, according to Whitehurst.
  • The cloud certification program was announced last year, and Amazon Web Services was the first cloud provider to get certified. Since then, NTT and IBM have been added to the list of certified partners and more are on the way, according to Whitehurst.
  • To be able to move a workload from a data center to a cloud or between two clouds, a connecting API (application programming interface) is needed, and there are a plethora of different ones being developed. Fewer would be better, according to Whitehurst. However, the real challenge isn't the API, but ensuring that the application will run with the same performance when it has been moved.
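As one concrete illustration of the kind of connecting API Whitehurst describes, the sketch below uses Apache Libcloud, a library that puts many cloud providers behind a single compute interface. This is an assumption-laden example, not part of Red Hat's certification program: the credentials are placeholders, and, as the annotation notes, a common API says nothing about whether the workload performs the same once it has been moved.

```python
# Minimal sketch: the same list_nodes() call addresses two different clouds
# through Apache Libcloud's common compute API. Credentials are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

ec2 = get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY", region="us-east-1")
rackspace = get_driver(Provider.RACKSPACE)("USERNAME", "API_KEY")

for driver in (ec2, rackspace):
    # Each driver exposes the same methods, whatever the back end.
    print(driver.name, [node.name for node in driver.list_nodes()])
```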
Paul Merrell

Dr. Dobb's | Other Voices: An HTML5 Primer | June 03, 2010 - 0 views

  • With Google and Apple strongly supporting HTML5 as the solution for rich applications for the Internet, it's become the buzzword of the month -- particularly after Google I/O. Given its hot currency, though, it's not surprising that the term is starting to become unhinged from reality. Already, we're starting to see job postings requiring "HTML5 experience," and people pointing to everything from simple JavaScript animations to CSS3 effects as examples of HTML5. Just as "AJAX" and "Web 2.0" became handy (and widely misused) shorthand for "next-generation" web development in the mid-2000's, HTML5 is now becoming the next overloaded term. And although there are many excellent resources out there describing details of HTML5, including the core specification itself, they are generally technical and many of them are now out of synch with the current state of the specs. So, I thought a primer on HTML5 might be in order.
Paul Merrell

Most Agencies Falling Short on Mandate for Online Records - 0 views

  • Nearly 20 years after Congress passed the Electronic Freedom of Information Act Amendments (E-FOIA), only 40 percent of agencies have followed the law's instruction for systematic posting of records released through FOIA in their electronic reading rooms, according to a new FOIA Audit released today by the National Security Archive at www.nsarchive.org to mark Sunshine Week. The Archive team audited all federal agencies with Chief FOIA Officers as well as agency components that handle more than 500 FOIA requests a year — 165 federal offices in all — and found only 67 with online libraries populated with significant numbers of released FOIA documents and regularly updated.
  • Congress called on agencies to embrace disclosure and the digital era nearly two decades ago, with the passage of the 1996 "E-FOIA" amendments. The law mandated that agencies post key sets of records online, provide citizens with detailed guidance on making FOIA requests, and use new information technology to post online proactively records of significant public interest, including those already processed in response to FOIA requests and "likely to become the subject of subsequent requests." Congress believed then, and openness advocates know now, that this kind of proactive disclosure, publishing online the results of FOIA requests as well as agency records that might be requested in the future, is the only tenable solution to FOIA backlogs and delays. Thus the National Security Archive chose to focus on the e-reading rooms of agencies in its latest audit. Even though the majority of federal agencies have not yet embraced proactive disclosure of their FOIA releases, the Archive E-FOIA Audit did find that some real "E-Stars" exist within the federal government, serving as examples to lagging agencies that technology can be harnessed to create state-of-the-art FOIA platforms. Unfortunately, our audit also found "E-Delinquents" whose abysmal web performance recalls the teletype era.
  • E-Delinquents include the Office of Science and Technology Policy at the White House, which, despite being mandated to advise the President on technology policy, does not embrace 21st century practices by posting any frequently requested records online. Another E-Delinquent, the Drug Enforcement Administration, insults its website's viewers by claiming that it "does not maintain records appropriate for FOIA Library at this time."
  • "The presumption of openness requires the presumption of posting," said Archive director Tom Blanton. "For the new generation, if it's not online, it does not exist." The National Security Archive has conducted fourteen FOIA Audits since 2002. Modeled after the California Sunshine Survey and subsequent state "FOI Audits," the Archive's FOIA Audits use open-government laws to test whether or not agencies are obeying those same laws. Recommendations from previous Archive FOIA Audits have led directly to laws and executive orders which have: set explicit customer service guidelines, mandated FOIA backlog reduction, assigned individualized FOIA tracking numbers, forced agencies to report the average number of days needed to process requests, and revealed the (often embarrassing) ages of the oldest pending FOIA requests. The surveys include:
  • The federal government has made some progress moving into the digital era. The National Security Archive's last E-FOIA Audit in 2007, " File Not Found," reported that only one in five federal agencies had put online all of the specific requirements mentioned in the E-FOIA amendments, such as guidance on making requests, contact information, and processing regulations. The new E-FOIA Audit finds the number of agencies that have checked those boxes is now much higher — 100 out of 165 — though many (66 in 165) have posted just the bare minimum, especially when posting FOIA responses. An additional 33 agencies even now do not post these types of records at all, clearly thwarting the law's intent.
  • The FOIAonline Members (Department of Commerce, Environmental Protection Agency, Federal Labor Relations Authority, Merit Systems Protection Board, National Archives and Records Administration, Pension Benefit Guaranty Corporation, Department of the Navy, General Services Administration, Small Business Administration, U.S. Citizenship and Immigration Services, and Federal Communications Commission) won their "E-Star" by making past requests and releases searchable via FOIAonline. FOIAonline also allows users to submit their FOIA requests digitally.
  • THE E-DELINQUENTS: WORST OVERALL AGENCIES In alphabetical order
  • Key Findings
  • Excuses Agencies Give for Poor E-Performance
  • Justice Department guidance undermines the statute. Currently, the FOIA stipulates that documents "likely to become the subject of subsequent requests" must be posted by agencies somewhere in their electronic reading rooms. The Department of Justice's Office of Information Policy defines these records as "frequently requested records… or those which have been released three or more times to FOIA requesters." Of course, it is time-consuming for agencies to develop a system that keeps track of how often a record has been released, which is in part why agencies rarely do so and are often in breach of the law. Troublingly, both the current House and Senate FOIA bills include language that codifies the instructions from the Department of Justice. The National Security Archive believes the addition of this "three or more times" language actually harms the intent of the Freedom of Information Act as it will give agencies an easy excuse ("not requested three times yet!") not to proactively post documents that agency FOIA offices have already spent time, money, and energy processing. We have formally suggested alternate language requiring that agencies generally post "all records, regardless of form or format that have been released in response to a FOIA request."
  • Disabilities Compliance. Despite the E-FOIA Act, many government agencies do not embrace the idea of posting their FOIA responses online. The most common reason agencies give is that it is difficult to post documents in a format that complies with the Americans with Disabilities Act, also referred to as being "508 compliant," and the 1998 Amendments to the Rehabilitation Act that require federal agencies "to make their electronic and information technology (EIT) accessible to people with disabilities." E-Star agencies, however, have proven that 508 compliance is no barrier when the agency has a will to post. All documents posted on FOIAonline are 508 compliant, as are the documents posted by the Department of Defense and the Department of State. In fact, every document created electronically by the US government after 1998 should already be 508 compliant. Even old paper records that are scanned to be processed through FOIA can be made 508 compliant with just a few clicks in Adobe Acrobat, according to this Department of Homeland Security guide (essentially OCRing the text, and including information about where non-textual fields appear). Even if agencies are insistent it is too difficult to OCR older documents that were scanned from paper, they cannot use that excuse with digital records. (A minimal OCR sketch follows these annotations.)
  • Privacy. Another commonly articulated concern about posting FOIA releases online is that doing so could inadvertently disclose private information from "first person" FOIA requests. This is a valid concern, and this subset of FOIA requests should not be posted online. (The Justice Department identified "first party" requester rights in 1989. Essentially agencies cannot use the b(6) privacy exemption to redact information if a person requests it for him or herself. An example of a "first person" FOIA would be a person's request for his own immigration file.) Cost and Waste of Resources. There is also a belief that there is little public interest in the majority of FOIA requests processed, and hence it is a waste of resources to post them. This thinking runs counter to the governing principle of the Freedom of Information Act: that government information belongs to US citizens, not US agencies. As such, the reason that a person requests information is immaterial as the agency processes the request; the "interest factor" of a document should also be immaterial when an agency is required to post it online. Some think that posting FOIA releases online is not cost effective. In fact, the opposite is true. It's not cost effective to spend tens (or hundreds) of person hours to search for, review, and redact FOIA requests only to mail it to the requester and have them slip it into their desk drawer and forget about it. That is a waste of resources. The released document should be posted online for any interested party to utilize. This will only become easier as FOIA processing systems evolve to automatically post the documents they track. The State Department earned its "E-Star" status demonstrating this very principle, and spent no new funds and did not hire contractors to build its Electronic Reading Room, instead it built a self-sustaining platform that will save the agency time and money going forward.
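The OCR step mentioned in the Disabilities Compliance annotation above can be sketched in a few lines. This is a rough illustration under stated assumptions, not any agency's actual workflow: it assumes the third-party pdf2image and pytesseract packages (which wrap the Poppler and Tesseract tools) and a hypothetical scanned_release.pdf, and full Section 508 compliance would still require proper tagging beyond plain text extraction.

```python
# Minimal sketch: pull searchable text out of a scanned FOIA release so the
# posted document can be indexed and read aloud. Assumes `pdf2image` and
# `pytesseract` plus their Poppler/Tesseract dependencies; the file name is hypothetical.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("scanned_release.pdf", dpi=300)   # one image per page

chunks = []
for number, page in enumerate(pages, start=1):
    text = pytesseract.image_to_string(page)                # OCR the page image
    chunks.append(f"--- page {number} ---\n{text}")

with open("scanned_release.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(chunks))
```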
Gary Edwards

Hubspot Presentation On Company Culture - Business Insider - 0 views

  •  
    Hubspot has posted a slide deck stating their "Cultural Principles".  Very interesting and thought-provoking piece.  I enjoyed the surprisingly quick presentation.  Easy to understand, but make no mistake: some serious thought and effort went into this. "The workplace is changing at a rapid pace - it's mobile, decentralized, and flexible. "The biggest problem most companies have is that they operate much like a company from 50 years ago - despite the fact that the world has changed," says HubSpot CTO Dharmesh Shah. "The second biggest problem is that they don't think of their culture as being for the people. Culture is not about perks and parties. It's about what you believe and how you behave." People work for a purpose, not a pension. The 9-5 workday is dead. So is the idea of staying at one company forever. And so it's not just a manifesto, but becomes part of a company, and everyone participates in creating and changing it.
Gary Edwards

The End of the Battery - Getting All Charged Up over Supercapacitors - Casey Research - 0 views

  •  
    Very interesting article describing the near-market-ready potential of "supercapacitor" batteries. This is truly game-changer stuff, and very interesting to me since I've been following the research and development of "graphene technologies" for some time. The graphene supercapacitor targets the future of both energy and computing, and graphene is also at the cutting edge of "faster, better, cheaper" water desalination. Nor does it take a rocket scientist to see that a graphene nano-lattice will have an enormous impact on methods of separating water (H2O) molecules to create an electrical current - a cost-free flow of electrons. Very well written research! excerpt: "an article in the recent issue of Nature Communications on a novel way to mass-produce so-called supercapacitors on the super-cheap - using no more equipment than the average home CD/DVD burner. Hacked together by a group of research scientists at UCLA, the ingenious technique is a way of producing layers of microscopically nuanced lattices called graphene, an essential component of many supercapacitor designs. It holds the promise of rapidly dropping prices for what was until now a very expensive process. You see, we've known about the concept of supercapacitors for decades. In fact, their antecedent, the capacitor, is one of the fundamental building blocks of electronics. Long before the Energizer Bunny started banging its way around our television screens, engineers had been using capacitors to store electrical charge - originally as filters to help tune signals clearly on wireless radios of all sorts. The devices did so by storing and releasing excess energy, but only teeny amounts of it... we're talking millions of them to hold what a simple AA battery can. Over the years, however, scientists worked on increasing their storage capacity. Way back in 1957, engineers at General Electric came up with the first supercapacitor... but back then there were no uses for it. So, the technology
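    As a quick back-of-the-envelope check on the "millions of capacitors to match one AA battery" comparison in the excerpt above, here is a small calculation using the standard capacitor energy formula E = 1/2 * C * V^2. The capacitance, voltage, and battery figures are illustrative assumptions, not numbers taken from the article.

    # Energy stored in a capacitor: E = 1/2 * C * V^2  (joules)
    def capacitor_energy_joules(capacitance_farads: float, voltage_volts: float) -> float:
        return 0.5 * capacitance_farads * voltage_volts ** 2

    # Illustrative assumptions (not from the article):
    aa_battery_j = 2.5 * 3600 * 1.5                     # ~2.5 Ah AA cell at 1.5 V, about 13,500 J
    small_cap_j = capacitor_energy_joules(100e-6, 10)   # ordinary 100 uF capacitor charged to 10 V
    supercap_j = capacitor_energy_joules(3000, 2.7)     # commodity 3000 F supercapacitor at 2.7 V

    print(f"AA battery:       ~{aa_battery_j:,.0f} J")
    print(f"100 uF capacitor: ~{small_cap_j:.3f} J  (~{aa_battery_j / small_cap_j:,.0f} needed to match an AA)")
    print(f"3000 F supercap:  ~{supercap_j:,.0f} J  (~{aa_battery_j / supercap_j:.1f} needed to match an AA)")

    The ordinary capacitor comes out millions of times short of the battery, which is the gap the graphene work described above is aimed at closing.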
Gary Edwards

Crocodoc's HTML Document Viewer Infiltrates the Enterprise | Xconomy - 0 views

  •  
    Excellent report on Crocodoc and its ability to convert many different document file types to HTML5, including the MSOffice (OOXML) formats as well as ODF and PDF. excerpt: "... Crocodoc, and took on the much larger problem of allowing groups to collaborate on editing a document online, no matter what the document type: PowerPoint, PDF, Word, Photoshop, JPEG, or PNG. In the process, they had to build an embeddable viewer that could take apart any document and reassemble it accurately within a Web browser. And as soon as they'd finished that, they had to tear their own system apart and rebuild it around HTML5 rather than Flash, the Adobe multimedia format that's edging closer and closer to extinction. The result of all that iterating is what's probably the world's most flexible and faithful HTML5-based document viewer: when you open a PDF, PowerPoint, or Word document in Crocodoc, the Web version looks exactly like the native version, even though it's basically been stripped down and re-rendered from scratch. When I talked with Damico in February of 2011, the startup had visions of building on this technology to become a kind of central, Web-based clearinghouse for everyone's documents - a cross between Scribd, Dropbox, and Google Docs, but with a focus on consumers, and with prettier viewing tools. In the last year, though, Crocodoc's direction has changed dramatically. Damico and his colleagues realized that it would be smarter to partner with the fastest-growing providers of document-sharing services and social business tools than to try to compete with them. "The massive, seismic change for us is that we had a huge opportunity to partner with Dropbox and LinkedIn and SAP and Yammer, and let them build on top of Crocodoc and make it into a core piece of their own products," Damico says. In other words, every time an office worker opens a document from within a Web app like Dropbox or Yammer, they're activating a white-label version
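    A minimal sketch of the white-label embedding pattern described above - upload a document, create a viewing session, and drop the resulting HTML5 viewer into the host app's page. The endpoint paths, parameter names, and response fields are illustrative assumptions for a Crocodoc-style service, not its documented API.

    import requests

    API_TOKEN = "YOUR_API_TOKEN"            # placeholder credential
    BASE = "https://crocodoc.com/api/v2"    # assumed base URL, for illustration only

    # 1. Hand the service a document to convert to HTML5 (endpoint and params assumed).
    doc = requests.post(f"{BASE}/document/upload",
                        data={"token": API_TOKEN,
                              "url": "https://example.com/quarterly-report.pdf"}).json()

    # 2. Create a short-lived viewing session for one user (endpoint and params assumed).
    session = requests.post(f"{BASE}/session/create",
                            data={"token": API_TOKEN, "uuid": doc["uuid"]}).json()

    # 3. Embed the white-label HTML5 viewer in the partner app's page.
    embed_html = f'<iframe src="https://crocodoc.com/view/{session["session"]}"></iframe>'
    print(embed_html)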
Paul Merrell

Mozilla Acquires Pocket | The Mozilla Blog - 0 views

  • We are excited to announce that the Mozilla Corporation has completed the acquisition of Read It Later, Inc., the developers of Pocket. Mozilla is growing, experimenting more, and doubling down on our mission to keep the internet healthy, as a global public resource that’s open and accessible to all. As our first strategic acquisition, Pocket contributes to our strategy by growing our mobile presence and providing people everywhere with powerful tools to discover and access high quality web content, on their terms, independent of platform or content silo. Pocket will join Mozilla’s product portfolio as a new product line alongside the Firefox web browsers with a focus on promoting the discovery and accessibility of high quality web content. (Here’s a link to their blog post on the acquisition.) Pocket’s core team and technology will also accelerate Mozilla’s broader Context Graph initiative.
  • “We believe that the discovery and accessibility of high quality web content is key to keeping the internet healthy by fighting against the rising tide of centralization and walled gardens. Pocket provides people with the tools they need to engage with and share content on their own terms, independent of hardware platform or content silo, for a safer, more empowered and independent online experience.” – Chris Beard, Mozilla CEO Pocket brings to Mozilla a successful human-powered content recommendation system with 10 million unique monthly active users on iOS, Android and the Web, and with more than 3 billion pieces of content saved to date. In working closely with Pocket over the last year around the integration within Firefox, we developed a shared vision and belief in the opportunity to do more together that has led to Pocket joining Mozilla today. “We’ve really enjoyed partnering with Mozilla over the past year. We look forward to working more closely together to support the ongoing growth of Pocket and to create great new products that people love in support of our shared mission.” – Nate Weiner, Pocket CEO As a result of this strategic acquisition, Pocket will become a wholly owned subsidiary of Mozilla Corporation and will become part of the Mozilla open source project.
Gary Edwards

Government Market Drags Microsoft Deeper into the Cloud - 0 views

  •  
    Nice article from Scott M. Fulton describing Microsoft's iron-fisted lock on government desktop productivity systems and the great transition to a Cloud Productivity Platform. Keep in mind that in 2005, Massachusetts tried to do the same thing with its SOA effort. Then-Governor Romney put over $1M into a beta test that produced the now infamous 300-page report written by Sam Hiser. The details of this test resulted in the even more infamous da Vinci ODF plug-in for Microsoft Office desktops. The lessons of Massachusetts are simple enough: it's not the formats or office suite applications. It's the business process! Conversion of documents not only breaks the document; it also breaks the embedded "business process". The key here is that Microsoft owns the client side of client/server computing. Compound documents, loaded with intertwined OLE, ODBC, ActiveX, and other embedded protocols and interface dependencies connecting data sources with workflow, are the fuel of these client/server business productivity systems. Break a compound document and you break the business process. Even though Massachusetts workers were wonderfully enthusiastic and supportive of an SOA-based infrastructure that would include Linux servers and desktops as well as OSS productivity applications, at the end of the day it's all about getting the work done. Breaking the business process turned out to be a show-stopper. Cloud Computing changes all that. The reason is that the Cloud is rapidly replacing client/server as the target architecture for new productivity developments, including data centers and transaction processing systems. There are many reasons for the great transition, but IMHO the most important is that the Web combines communications with content, data, and collaborative computing. Anyone who ever worked with the Microsoft desktop productivity environment knows that the desktop sucks as a communication device.  There was
Gary Edwards

That's All Folks: Why the Writing Is on the Wall at Microsoft - Forbes - 1 views

  •  
    Control vs. Creativity.  As the ultimate control freak, Ballmer was guaranteed to crush the life out of Microsoft's most creative individuals and teams. Gates was focused on Windows and MSOffice, and Ballmer on control. That one-two punch made certain that Microsoft would not be a player in the next great wave of computing: the Cloud. excerpt: This is yet another example of what I like to call the Wile E. Coyote syndrome. Like the unfortunate character in the old Warner Bros. cartoons, Microsoft now seems to be a company that has long since run off the cliff but, with legs spinning for all they are worth, doesn't know yet that it is ready to drop. Yet drop it most certainly will. To understand how this happens, take a look at the work of Arnold J. Toynbee, a historian who studied the rise and fall of civilizations. He argued that a civilization flourishes when it motivates insiders and attracts outsiders with its creative dynamism and culture. The civilization breaks down when its leadership loses this creative capacity and gives way to, or transforms itself into, a dominant minority. When this happens, the driver of the civilization becomes control, not attraction. And it's precisely this switch from attraction to control that is the source of the breakdown. Interestingly, Toynbee says that the consequences may not be immediately apparent. A civilization can keep up momentum because the controls it puts in place generate some short-term efficiency. But eventually it will run its course and collapse, because no amount of control can replace the loss of collective creativity. Observe this in the corporate world by looking at the example of General Motors. G.M.'s 2009 bankruptcy came at the end of a long decline dating back to the early 197