
Future of the Web: Group items matching "energy" in title, tags, annotations or url

Paul Merrell

With rules repealed, what's next for net neutrality? | TheHill - 0 views

  • The battle over the Federal Communications Commission’s (FCC) repeal of net neutrality rules is entering a new phase, with opponents of the move launching efforts to preserve the Obama-era consumer protections. The net neutrality rules had required internet service providers to treat all web traffic equally. Republicans on the commission decried the regulatory structure as a gross overreach and quickly moved to reverse the rules once the Trump administration came to power. The reversal was published in the Federal Register Thursday, and even though the order is months away from implementation, net neutrality supporters are now free to mount legal challenges to the action. A coalition of Democratic state attorneys general, public interest groups and internet companies has vowed to fight in the courts. Twenty-three states, led by New York and its attorney general, Eric Schneiderman (D), have already filed a lawsuit.
  • Even if Democrats do manage to find the tie-breaking vote in the Senate, the bill is almost certain to die in the House. But Democrats see a roll call vote as an opportunity to make GOP members stake out a position on an issue that they think could resonate in the midterm elections. On yet another front, Democratic states around the country have already launched their own attack on the FCC’s rules. Five governors (from Montana, Hawaii, New Jersey, Vermont and New York) have in recent weeks signed executive orders forbidding their states from doing business with internet service providers who violate net neutrality principles. And, according to the pro-net neutrality group Free Press, legislatures in 26 states are weighing bills that would codify their own open internet protections. The local efforts could ignite a separate legal battle over whether states have the authority to counteract the FCC’s order, which included a provision preempting them from replacing the rules.
  • The emerging court battle over net neutrality could keep the issue in limbo for years. Meanwhile, a separate battle over the rules is brewing in Congress. Senate Democrats have secured enough support to force a vote on a bill that would undo the FCC’s December vote and leave the net neutrality rules in place. The bill, which is being pushed by Sen. Ed Markey (D-Mass.), would use a legislative tool called the Congressional Review Act (CRA) to roll back the FCC’s repeal of net neutrality. The entry of the FCC’s repeal order in the Federal Register Thursday means that the Senate has 60 legislative days to move on the CRA bill. Democrats have secured support from one Republican, Sen. Susan Collins (Maine), and need just one more to cross the aisle for the bill to pass the chamber.
  • ...1 more annotation...
  • For their part, Republicans who applauded the FCC repeal are calling for legislation that would codify some net neutrality principles. They say doing so would allow for less heavy-handed protections that provide certainty to businesses. But most net neutrality supporters reject that course, at least while the repeal is tied up in court and Republicans control majorities in both the House and Senate. They argue that such a bill would amount to little more than watered-down protections that would be unable to keep internet service providers in check. For now, Democrats seem content to let the battles in the courts and Congress play out.
Paul Merrell

South Korea considers shutting down domestic cryptocurrency exchanges - 0 views

  • The request is one of the first responses from the U.S. Congress to the disclosure earlier this month by security researchers of the two major flaws, which may allow hackers to steal passwords or encryption keys on most types of computers, phones and cloud-based servers. McNerney, a member of the House Energy and Commerce Committee, asked the companies to explain the scope of Spectre and Meltdown, their timeframe for understanding the vulnerabilities, how consumers are affected and whether the flaws have been exploited, among other questions.
Paul Merrell

Symantec: CIA Linked To Cyberattacks In 16 Countries - 0 views

  • Internet and computer security company Symantec has issued a statement today related to the Vault 7 WikiLeaks documents leaked from the CIA, saying that the methods and protocols described in the documents are consistent with cyberattacks it had been tracking for years. Symantec says it now believes that the CIA hacking tool Fluxwire is the malware previously known as Corentry, which Symantec had attributed to an unidentified cyberespionage group called Longhorn, which apparently was the CIA. It described Longhorn as having been active since at least 2011 and responsible for attacks in at least 16 countries across the world, targeting governments and NGOs, as well as financial, energy, and natural resource companies, targets that would generally be of interest to a nation-state.
  • While the WikiLeaks documents themselves have been comparatively short on detail, as WikiLeaks continues to share specific vulnerabilities with companies so they can fix them before the details are leaked to the general public, the ability of security companies like Symantec to link the CIA to known hacking operations could prove to be even more enlightening as to the scope of CIA cyber-espionage the world over.
Paul Merrell

Google, ACLU call to delay government hacking rule | TheHill - 0 views

  • A coalition of 26 organizations, including the American Civil Liberties Union (ACLU) and Google, signed a letter Monday asking lawmakers to delay a measure that would expand the government’s hacking authority. The letter asks Senate Majority Leader Mitch McConnell (R-Ky.), Minority Leader Harry Reid (D-Nev.), House Speaker Paul Ryan (R-Wis.), and House Minority Leader Nancy Pelosi (D-Calif.) to further review proposed changes to Rule 41 and delay its implementation until July 1, 2017. The Department of Justice’s alterations to the rule would allow law enforcement to use a single warrant to hack multiple devices beyond the jurisdiction in which the warrant was issued. The FBI used such a tactic to apprehend users of the child pornography dark website Playpen: it took control of the site for two weeks and, after securing two warrants, installed malware on Playpen users’ computers to acquire their identities. But the signatories of the letter — which include advocacy groups, companies and trade associations — are raising questions about the effects of the change.
  •  
    ".. no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." Fourth Amendment. The changes to Rule 41 ignore the particularity requirement by allowing the government to search computers that are not particularly identified in multiple locations not particularly identifed, in other words, a general warrant that is precisely the reason the particularity requirement was adopted to outlaw.
Paul Merrell

Comcast hit with FCC's biggest cable fine ever - Oct. 11, 2016 - 0 views

  • Comcast is being forced to pay the largest fine the FCC has ever levied against a cable operator. Its offense: Charging customers for services and equipment they didn't ask for. The company agreed to pay a $2.3 million civil penalty and to submit to a "compliance plan," in which regulators will monitor Comcast for the next five years to ensure it cleans up its act.
  • The FCC said it received over 1,000 complaints from customers, who said Comcast charged them for premium channels, cable boxes, DVRs or other products that they never ordered. In many cases, the FCC said, customers expressly told Comcast that they didn't want the add-on options, but they were charged anyway. Complaints also describe how customers spent "significant time and energy to attempt to remove the unauthorized charges" and get refunds, the commission said. The complaints spurred the FCC to launch an investigation nearly two years ago. Today's settlement marks the conclusion of the probe. Under the five-year compliance plan, Comcast must begin sending customers special notifications every time a new charge or service is added to their bill. The company also has to add a way for customers to easily "block the addition of new services or equipment to their accounts," according to an FCC press release.
  • Comcast (CMCSA) will also be required to compensate or address complaints from customers who have disputed charges, and it will be barred from referring an account to collections or suspending an account that has a disputed charge. Comcast agreed to the fine without admitting any guilt.
Gonzalo San Gil, PhD.

SunZilla provides portable open source electricity | Opensource.com - 0 views

  •  
    "Do-it-yourself electricity generation is still difficult and expensive. The inventors of the SunZilla project aim to make it easier, cleaner, portable, quiet, and completely open source."
Gonzalo San Gil, PhD.

The Top Ten Hacker Tools of 2015 - 2 views

  •  
    "List of top ten hacker tools of 2015 Every task requires a good set of tools.This because having right tools in hand one can save much of its energy and time.In the world of Cyber Hacking ("Cyber Security" formally) there are millions of tools which are available on the Internet either as Freewares or as Sharewares."
Gonzalo San Gil, PhD.

Translating good documentation beats starting over from scratch | Opensource.com - 0 views

  •  
    "Writing documentation can have a way of getting into your blood, so that you think about it quite a bit, play with some ideas, start various new ideas that may not come to much, and it seems that what you're looking for as much as anything is a task that takes hold of you and develops its own energy to keep you going until you finish."
Paul Merrell

Most Agencies Falling Short on Mandate for Online Records - 1 views

  • Nearly 20 years after Congress passed the Electronic Freedom of Information Act Amendments (E-FOIA), only 40 percent of agencies have followed the law's instruction for systematic posting of records released through FOIA in their electronic reading rooms, according to a new FOIA Audit released today by the National Security Archive at www.nsarchive.org to mark Sunshine Week. The Archive team audited all federal agencies with Chief FOIA Officers as well as agency components that handle more than 500 FOIA requests a year — 165 federal offices in all — and found only 67 with online libraries populated with significant numbers of released FOIA documents and regularly updated.
  • Congress called on agencies to embrace disclosure and the digital era nearly two decades ago, with the passage of the 1996 "E-FOIA" amendments. The law mandated that agencies post key sets of records online, provide citizens with detailed guidance on making FOIA requests, and use new information technology to post online proactively records of significant public interest, including those already processed in response to FOIA requests and "likely to become the subject of subsequent requests." Congress believed then, and openness advocates know now, that this kind of proactive disclosure, publishing online the results of FOIA requests as well as agency records that might be requested in the future, is the only tenable solution to FOIA backlogs and delays. Thus the National Security Archive chose to focus on the e-reading rooms of agencies in its latest audit. Even though the majority of federal agencies have not yet embraced proactive disclosure of their FOIA releases, the Archive E-FOIA Audit did find that some real "E-Stars" exist within the federal government, serving as examples to lagging agencies that technology can be harnessed to create state-of-the-art FOIA platforms. Unfortunately, our audit also found "E-Delinquents" whose abysmal web performance recalls the teletype era.
  • E-Delinquents include the Office of Science and Technology Policy at the White House, which, despite being mandated to advise the President on technology policy, does not embrace 21st century practices by posting any frequently requested records online. Another E-Delinquent, the Drug Enforcement Administration, insults its website's viewers by claiming that it "does not maintain records appropriate for FOIA Library at this time."
  • ...9 more annotations...
  • The E-Delinquents: Worst Overall Agencies (in alphabetical order)
  • The federal government has made some progress moving into the digital era. The National Security Archive's last E-FOIA Audit in 2007, "File Not Found," reported that only one in five federal agencies had put online all of the specific requirements mentioned in the E-FOIA amendments, such as guidance on making requests, contact information, and processing regulations. The new E-FOIA Audit finds the number of agencies that have checked those boxes is now much higher — 100 out of 165 — though many (66 of the 165) have posted just the bare minimum, especially when posting FOIA responses. An additional 33 agencies even now do not post these types of records at all, clearly thwarting the law's intent.
  • The FOIAonline Members (Department of Commerce, Environmental Protection Agency, Federal Labor Relations Authority, Merit Systems Protection Board, National Archives and Records Administration, Pension Benefit Guaranty Corporation, Department of the Navy, General Services Administration, Small Business Administration, U.S. Citizenship and Immigration Services, and Federal Communications Commission) won their "E-Star" by making past requests and releases searchable via FOIAonline. FOIAonline also allows users to submit their FOIA requests digitally.
  • "The presumption of openness requires the presumption of posting," said Archive director Tom Blanton. "For the new generation, if it's not online, it does not exist." The National Security Archive has conducted fourteen FOIA Audits since 2002. Modeled after the California Sunshine Survey and subsequent state "FOI Audits," the Archive's FOIA Audits use open-government laws to test whether or not agencies are obeying those same laws. Recommendations from previous Archive FOIA Audits have led directly to laws and executive orders which have: set explicit customer service guidelines, mandated FOIA backlog reduction, assigned individualized FOIA tracking numbers, forced agencies to report the average number of days needed to process requests, and revealed the (often embarrassing) ages of the oldest pending FOIA requests. The surveys include:
  • Key Findings
  • Excuses Agencies Give for Poor E-Performance
  • Justice Department guidance undermines the statute. Currently, the FOIA stipulates that documents "likely to become the subject of subsequent requests" must be posted by agencies somewhere in their electronic reading rooms. The Department of Justice's Office of Information Policy defines these records as "frequently requested records… or those which have been released three or more times to FOIA requesters." Of course, it is time-consuming for agencies to develop a system that keeps track of how often a record has been released, which is in part why agencies rarely do so and are often in breach of the law. Troublingly, both the current House and Senate FOIA bills include language that codifies the instructions from the Department of Justice. The National Security Archive believes the addition of this "three or more times" language actually harms the intent of the Freedom of Information Act as it will give agencies an easy excuse ("not requested three times yet!") not to proactively post documents that agency FOIA offices have already spent time, money, and energy processing. We have formally suggested alternate language requiring that agencies generally post "all records, regardless of form or format that have been released in response to a FOIA request."
  • Disabilities Compliance. Despite the E-FOIA Act, many government agencies do not embrace the idea of posting their FOIA responses online. The most common reason agencies give is that it is difficult to post documents in a format that complies with the Americans with Disabilities Act, also referred to as being "508 compliant," and the 1998 Amendments to the Rehabilitation Act that require federal agencies "to make their electronic and information technology (EIT) accessible to people with disabilities." E-Star agencies, however, have proven that 508 compliance is no barrier when the agency has a will to post. All documents posted on FOIAonline are 508 compliant, as are the documents posted by the Department of Defense and the Department of State. In fact, every document created electronically by the US government after 1998 should already be 508 compliant. Even old paper records that are scanned to be processed through FOIA can be made 508 compliant with just a few clicks in Adobe Acrobat, according to this Department of Homeland Security guide (essentially OCRing the text, and including information about where non-textual fields appear). Even if agencies are insistent it is too difficult to OCR older documents that were scanned from paper, they cannot use that excuse with digital records.
  • Privacy. Another commonly articulated concern about posting FOIA releases online is that doing so could inadvertently disclose private information from "first person" FOIA requests. This is a valid concern, and this subset of FOIA requests should not be posted online. (The Justice Department identified "first party" requester rights in 1989. Essentially, agencies cannot use the b(6) privacy exemption to redact information if a person requests it for him or herself. An example of a "first person" FOIA would be a person's request for his own immigration file.)

    Cost and Waste of Resources. There is also a belief that there is little public interest in the majority of FOIA requests processed, and hence that it is a waste of resources to post them. This thinking runs counter to the governing principle of the Freedom of Information Act: that government information belongs to US citizens, not US agencies. As such, the reason that a person requests information is immaterial as the agency processes the request; the "interest factor" of a document should likewise be immaterial when an agency is required to post it online. Some think that posting FOIA releases online is not cost effective. In fact, the opposite is true. It is not cost effective to spend tens (or hundreds) of person-hours to search for, review, and redact documents in response to a FOIA request only to mail them to the requester, who may slip them into a desk drawer and forget about them. That is a waste of resources. The released document should be posted online for any interested party to use. This will only become easier as FOIA processing systems evolve to automatically post the documents they track. The State Department earned its "E-Star" status by demonstrating this very principle: it spent no new funds and hired no contractors to build its Electronic Reading Room, instead building a self-sustaining platform that will save the agency time and money going forward.
Gonzalo San Gil, PhD.

Letter to the Council of the European Union: "Don't Turn Your Backs on Net Neutrality!" | La Quadrature du Net - 0 views

  •  
    "Paris, November 26, 2014 - Tomorrow on Thursday November 27th, the "Transport, Telecommunications and Energy" (TTE) Council will meet in Brussels to discuss the general approach on Telecom Single Market the Italian Presidency sent to the delegations of the Member States on November 14th. This text, which aims at protecting Net Neutrality and therefore the freedom of our communications, unfortunatel" [# ! No #NetNeutrality # ! … No #HumanRights. # ! Is this what #Europe wants to be said about @ur #Union…? # ! want to guess that not…]
Paul Merrell

Utah lawmaker questions city water going to NSA - 0 views

  • SALT LAKE CITY – A Utah lawmaker concerned about government spying on its citizens is questioning whether city water service should be cut off to a massive National Security Agency data storage facility outside Salt Lake City. Republican Rep. Marc Roberts, of Santaquin, said there are serious questions about privacy and surveillance surrounding the center, and several Utah residents who spoke at a legislative committee hearing Wednesday agreed. During the last legislative session, lawmakers opted to hold off on Roberts' bill to shut off the facility's water and decided to study it during the interim. "This is not a bill just about a data center. This is a bill about civil rights," web developer Joe Levi said. "This is a bill that needs to be taken up and needs to be taken seriously." Pete Ashdown, founder of Salt Lake City-based Internet provider XMission, called the center a stain upon the state and its technology industry. "I do encourage you to stand up and do something about it," he said. Lawmakers said they aren't considering shutting down the $1.7 billion facility, but the committee chair acknowledged the concerns and said there might be another way to get the point across. "We may look at some type of a strong message to give our representatives to take back to Congress," said Republican Sen. David Hinkins, of Orangeville.
  • The NSA's largest data storage center in the U.S. was built in Utah, chosen over 37 other locations because of open land and cheap electricity. The center sits on a National Guard base about 25 miles south of Salt Lake City in the town of Bluffdale. NSA officials said the center is key to protecting national security networks and allowing U.S. authorities to watch for cyber threats. Beyond that, the agency has offered few details. The center attracted much discussion and concern after revelations last year that the NSA has been collecting millions of U.S. phone records and digital communications stored by major Internet providers.
  • Cybersecurity experts say the nondescript Utah facility is a giant storehouse for phone calls, emails and online records that have been secretly collected. Outside the computer storehouses are large coolers that keep the machines from overheating. The coolers use large amounts of water, which the nearby city of Bluffdale sells to the center at a discounted rate. City records released earlier this year showed monthly water use was much less than the 1 million gallons a day that the U.S. Army Corps of Engineers predicted the center would need, causing some to wonder if the center was fully operational. NSA officials have refused to say if the center is up and running after its scheduled opening in October 2013 was stalled by electrical problems. City utility records showed the NSA has been making monthly minimum payments of about $30,000 to Bluffdale. The city manager said that pays for more water than the center used. The state of Nevada shut off water to the site of the proposed Yucca Mountain nuclear waste dump 90 miles northwest of Las Vegas in 2002, after months of threats. The project didn't run dry because the Energy Department built a 1-million-gallon tank and a small well for the site. Department officials said the stored water, plus 400,000 gallons stored in other tanks at the Nevada Test Site, provided time for scientists to continue experiments and design work at the site.
  • ...1 more annotation...
  •  
    Hey, go for their electricity too! But what do we do with the Bluffdale facility after we abolish the NSA? Turn it over to Internet Archives, with a $1 billion endowment for maintenance? Free and permanent web sites for everyone?  
Gary Edwards

Duke Engines' incredibly compact, lightweight valveless axial engine - 0 views

  • The Duke engine is an axial design, meaning that its five cylinders encircle the drive shaft and run parallel with it. The pistons drive a star-shaped reciprocator, which nutates around the drive shaft, kind of like a spinning coin coming to rest on a table.
  • The reciprocator's center point is used to drive the central drive shaft, which rotates in the opposite direction to the reciprocator. "That counter-rotation keeps it in tidy balance," says Duke co-founder John Garvey. "If you lay your hand on it while it's running, you can barely detect any motion at all, it's quite remarkable." That's borne out by the video below, where the engine revving doesn't even cause enough vibrations to tip a coin off its side.
  • Instead of cam- or pneumatically-operated intake and outlet valves, the cylinders rotate past intake and outlet ports in a stationary head ring. The spark plugs are also mounted in this stationary ring – the cylinders simply slide past each port or plug at the stage of the cycle it's needed for and move on. In this way, Duke eliminates all the complexity of valve operation and manages to run a five-cylinder engine with just three spark plugs and three fuel injectors. The Duke engine ends up delivering as many power strokes per revolution as a six cylinder engine, but with huge weight savings and a vast reduction in the number of engine parts.
  • ...1 more annotation...
  • The engine has shown excellent resistance to pre-ignition (or detonation) – potentially because its cylinders tend to run cooler than comparable engines. Duke has run compression ratios as high as 14:1 with regular 91-octane gasoline. This suggests that further developments will pull even more power out of a given amount of fuel, increasing the overall efficiency of the unit.
  •  
    Watch the second video! This is extraordinary. "New Zealand's Duke Engines has been busy developing and demonstrating excellent results with a bizarre axial engine prototype that completely does away with valves, while delivering excellent power and torque from an engine much smaller, lighter and simpler than the existing technology. We spoke with Duke co-founder John Garvey to find out how the Duke Axial Engine project is going."
Paul Merrell

Are processors pushing up against the limits of physics? | Ars Technica - 0 views

  • When I first started reading Ars Technica, performance of a processor was measured in megahertz, and the major manufacturers were rushing to squeeze as many of them as possible into their latest silicon. Shortly thereafter, however, the energy needs and heat output of these beasts brought that race crashing to a halt. More recently, the number of processing cores rapidly scaled up, but they quickly reached the point of diminishing returns. Now, getting the most processing power per watt seems to be the key measure of performance. None of these things happened because the companies making processors ran up against hard physical limits. Rather, computing power ended up being constrained because progress in certain areas—primarily energy efficiency—was slow compared to progress in others, such as feature size. But could we be approaching physical limits in processing power? In this week's edition of Nature, the University of Michigan's Igor Markov takes a look at the sorts of limits we might face.
Paul Merrell

Court gave NSA broad leeway in surveillance, documents show - The Washington Post - 0 views

  • Virtually no foreign government is off-limits for the National Security Agency, which has been authorized to intercept information “concerning” all but four countries, according to top-secret documents. The United States has long had broad no-spying arrangements with those four countries — Britain, Canada, Australia and New Zealand — in a group known collectively with the United States as the Five Eyes. But a classified 2010 legal certification and other documents indicate the NSA has been given a far more elastic authority than previously known, one that allows it to intercept through U.S. companies not just the communications of its overseas targets but any communications about its targets as well.
  • The certification — approved by the Foreign Intelligence Surveillance Court and included among a set of documents leaked by former NSA contractor Edward Snowden — lists 193 countries that would be of valid interest for U.S. intelligence. The certification also permitted the agency to gather intelligence about entities including the World Bank, the International Monetary Fund, the European Union and the International Atomic Energy Agency. The NSA is not necessarily targeting all the countries or organizations identified in the certification, the affidavits and an accompanying exhibit; it has only been given authority to do so. Still, the privacy implications are far-reaching, civil liberties advocates say, because of the wide spectrum of people who might be engaged in communication about foreign governments and entities and whose communications might be of interest to the United States.
  • That language could allow for surveillance of academics, journalists and human rights researchers. A Swiss academic who has information on the German government’s position in the run-up to an international trade negotiation, for instance, could be targeted if the government has determined there is a foreign-intelligence need for that information. If a U.S. college professor e-mails the Swiss professor’s e-mail address or phone number to a colleague, the American’s e-mail could be collected as well, under the program’s court-approved rules.
  • ...4 more annotations...
  • On Friday, the Office of the Director of National Intelligence released a transparency report stating that in 2013 the government targeted nearly 90,000 foreign individuals or organizations for foreign surveillance under the program. Some tech-industry lawyers say the number is relatively low, considering that several billion people use U.S. e-mail services.
  • Still, some lawmakers are concerned that the potential for intrusions on Americans’ privacy has grown under the 2008 law because the government is intercepting not just communications of its targets but communications about its targets as well. The expansiveness of the foreign-powers certification increases that concern.
  • In a 2011 FISA court opinion, a judge using an NSA-provided sample estimated that the agency could be collecting as many as 46,000 wholly domestic e-mails a year that mentioned a particular target’s e-mail address or phone number, in what is referred to as “about” collection. “When Congress passed Section 702 back in 2008, most members of Congress had no idea that the government was collecting Americans’ communications simply because they contained a particular individual’s contact information,” Sen. Ron Wyden (D-Ore.), who has co-sponsored ­legislation to narrow “about” collection authority, said in an e-mail to The Washington Post. “If ‘about the target’ collection were limited to genuine national security threats, there would be very little privacy impact. In fact, this collection is much broader than that, and it is scooping up huge amounts of Americans’ wholly domestic communications.”
  • The only reason the court has oversight of the NSA program is that Congress in 2008 gave the government a new authority to gather intelligence from U.S. companies that own the Internet cables running through the United States, former officials noted. Edgar, the former privacy officer at the Office of the Director of National Intelligence, said ultimately he believes the authority should be narrowed. “There are valid privacy concerns with leaving these collection decisions entirely in the executive branch,” he said. “There shouldn’t be broad collection, using this authority, of foreign government information without any meaningful judicial role that defines the limits of what can be collected.”
Gonzalo San Gil, PhD.

Declaración de Tesla a favor del Open Source y contra las patentes - 0 views

  •  
    "Texto de Elon Musk de hoy, 12 de junio de 2014, para anunciar la renuncia de Tesla Motors a todas sus patentes. Sin duda, toda una declaración de principios por parte de la compañía, a favor del movimiento Open Source y en contra de las patentes. "Ayer había una pared de patentes de Tesla en la recepción de nuestras oficinas centrales en Palo Alto. Ya no va a estar. Han sido eliminadas, en el espíritu del movimiento Open Source, por el progreso de la tecnología de los vehículos eléctricos."
Gonzalo San Gil, PhD.

The energy and greenhouse-gas implications of internet video streaming in the United States - Abstract - Environmental Research Letters - IOPscience - 0 views

  •  
    [# ! Via Francisco Manuel Hernandez Sosa's FB...] OPEN ACCESS. Arman Shehabi, Ben Walker and Eric Masanet: "The rapid growth of streaming video entertainment has recently received attention as a possibly less energy intensive alternative to the manufacturing and transportation of digital video discs (DVDs). This study utilizes a life-cycle assessment approach to estimate the primary energy use and greenhouse-gas emissions associated with video viewing through both traditional DVD methods and online video streaming. Base-case estimates for 2011 video viewing energy and CO2(e) emission intensities indicate video streaming can be more efficient than DVDs, depending on DVD viewing method."
Paul Merrell

Bankrolled by broadband donors, lawmakers lobby FCC on net neutrality | Ars Technica - 1 views

  • The 28 House members who lobbied the Federal Communications Commission to drop net neutrality this week have received more than twice as much in campaign contributions from the broadband sector as the average for all House members. These lawmakers, including the top House leadership, warned the FCC that regulating broadband like a public utility "harms" providers, would be "fatal to the Internet," and could "limit economic freedom." According to research provided Friday by Maplight, the 28 House members received, on average, $26,832 from the "cable & satellite TV production & distribution" sector over a two-year period ending in December. According to the data, that's 2.3 times more than the House average of $11,651. What's more, one of the lawmakers who told the FCC that he had "grave concern" (PDF) about the proposed regulation took more money from that sector than any other member of the House. Rep. Greg Walden (R-OR) was the top sector recipient, netting more than $109,000 over the two-year period, the Maplight data shows.
  • Dan Newman, cofounder and president of Maplight, the California research group that reveals money in politics, said the figures show that "it's hard to take seriously politicians' claims that they are acting in the public interest when their campaigns are funded by companies seeking huge financial benefits for themselves." Signing a letter to the FCC along with Walden, who chairs the House Committee on Energy and Commerce, were three other key members of the same committee: Reps. Fred Upton (R-MI), Robert Latta (R-OH), and Marsha Blackburn (R-TN). Over the two-year period, Upton took in $65,000, Latta took $51,000, and Blackburn took $32,500. In a letter (PDF) those representatives sent to the FCC two days before Thursday's raucous FCC net neutrality hearing, the four wrote that they had "grave concern" over the FCC's consideration of "reclassifying Internet broadband service as an old-fashioned 'Title II common carrier service.'" The letter added that a switchover "harms broadband providers, the American economy, and ultimately broadband consumers," and that "actually doing so would be fatal to the Internet as we know it."
  • Not every one of the 28 members who publicly lobbied the FCC against net neutrality in advance of Thursday's FCC public hearing received campaign financing from the industry. One representative took no money: Rep. Nick Rahall (D-WV). In all, the FCC received at least three letters from House lawmakers with 28 signatures urging caution on classifying broadband as a telecommunications service, which would open up the sector to stricter "common carrier" rules, according to letters the members made publicly available. The US has long applied common carrier status to the telephone network, providing justification for universal service obligations that guarantee affordable phone service to all Americans and other rules that promote competition and consumer choice. Some consumer advocates say that common carrier status is needed for the FCC to impose strong network neutrality rules that would force ISPs to treat all traffic equally, not degrading competing services or speeding up Web services in exchange for payment. ISPs have argued that common carrier rules would saddle them with too much regulation and would force them to spend less on network upgrades and be less innovative.
  • ...2 more annotations...
  • Of the 28 House members signing on to the three letters, Republicans received, on average, $59,812 from the industry over the two-year period compared to $13,640 for Democrats, according to the Maplight data. Another letter (PDF) sent to the FCC this week from four top members of the House, including Speaker John Boehner (R-OH), Majority Leader Eric Cantor (R-VA), Majority Whip Kevin McCarthy (R-CA), and Republican Conference Chair Cathy McMorris Rodgers (R-WA), argued in favor of cable companies: "We are writing to respectfully urge you to halt your consideration of any plan to impose antiquated regulation on the Internet, and to warn that implementation of such a plan will needlessly inhibit the creation of American private sector jobs, limit economic freedom and innovation, and threaten to derail one of our economy's most vibrant sectors," they wrote. Over the two-year period, Boehner received $75,450; Cantor got $80,800; McCarthy got $33,000; and McMorris Rodgers got $31,500.
  • The third letter (PDF) forwarded to the FCC this week was signed by 20 House members. "We respectfully urge you to consider the effect that regressing to a Title II approach might have on private companies' ability to attract capital and their continued incentives to invest and innovate, as well as the potentially negative impact on job creation that might result from any reduction in funding or investment," the letter said. Here are the 28 lawmakers who lobbied the FCC this week and their reported campaign contributions:
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML,
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
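  • A minimal sketch of that kind of class-based extension (the class names here are invented for illustration, not taken from any published microformat):

      <div class="chapter">
        <p class="epigraph">Everything should be made as simple as possible, but no simpler.</p>
        <p>Ordinary body prose continues here, still plain, valid XHTML.</p>
        <p class="summary">A closing summary, distinguishable to any stylesheet or script that cares.</p>
      </div>

    A transformation can select on these hooks (p[@class='epigraph'] in XPath, p.epigraph in CSS) while any ordinary browser still renders the content sensibly.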
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
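  • Unzipping an ePub makes that anatomy concrete. A sketch of a minimal layout (the file names under OEBPS/ vary by producer; this arrangement is illustrative):

      mimetype                  the literal string "application/epub+zip"
      META-INF/container.xml    points to the package file below
      OEBPS/content.opf         package metadata, manifest, and spine
      OEBPS/toc.ncx             navigation map
      OEBPS/chapter01.xhtml     the book content itself: plain XHTML
      OEBPS/styles.css          presentation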
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
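  • Unpacked, an IDML archive looks roughly like this (a sketch based on Adobe's published IDML specification; exact file names depend on the document):

      designmap.xml                      the root map tying the parts together
      MasterSpreads/MasterSpread_*.xml   master page geometry
      Spreads/Spread_*.xml               page layouts and frames
      Stories/Story_*.xml                the actual text content
      Resources/Styles.xml               paragraph and character style definitions
      Resources/Fonts.xml, Resources/Graphic.xml, ...

    The separation is the point: content lives in the Stories files while layout and styling live elsewhere, which is what makes automated manipulation of one without disturbing the other feasible.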
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
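  • As an illustration of that cleanup, the sketch below converts class-marked paragraphs into a proper XHTML list. The class name bullet-list is hypothetical; the actual class names depend on the InDesign styles in use:

      import re

      # Hypothetical Digital Editions output: list items exported as
      # class-marked paragraphs rather than real list elements.
      xhtml = ('<p class="bullet-list">First item</p>\n'
               '<p class="bullet-list">Second item</p>')

      # Rewrite each styled paragraph as a proper list item...
      items = re.sub(r'<p class="bullet-list">(.*?)</p>', r'  <li>\1</li>', xhtml)
      # ...then wrap the run in a single <ul>.
      print('<ul>\n' + items + '\n</ul>')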
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used xsltproc, a nearly ubiquitous command-line XSLT processor that we found already installed as part of Mac OS X (contemporary Linux distributions also ship it as a standard tool), though any XSLT processor would work.
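  • For readers who prefer a scripted entry point over the command line, the same kind of transformation can be driven from Python with lxml, which wraps libxml2/libxslt, the same libraries behind xsltproc. The file names here are placeholders, not the actual names from our project:

      from lxml import etree

      # Load a stylesheet once, then apply it to a source document.
      stylesheet = etree.XSLT(etree.parse("transform.xsl"))
      result = stylesheet(etree.parse("chapter01.xhtml"))
      result.write_output("chapter01.out")  # honours the stylesheet's xsl:output settings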
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
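  • Our actual script is too long to reproduce here, but a toy version conveys the shape of the mapping. In the sketch below the style name Body is hypothetical, and a real ICML file needs the fuller preamble InCopy expects (the aid processing instruction, style group declarations, and so on):

      from lxml import etree

      # A toy XHTML-to-ICML stylesheet: each XHTML <p> becomes an ICML
      # ParagraphStyleRange wrapping a CharacterStyleRange and its Content.
      XSL = b'''<?xml version="1.0"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:h="http://www.w3.org/1999/xhtml">
        <xsl:template match="/">
          <Story Self="story_u1">
            <xsl:apply-templates select="//h:p"/>
          </Story>
        </xsl:template>
        <xsl:template match="h:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
          </ParagraphStyleRange>
          <Br/>
        </xsl:template>
      </xsl:stylesheet>'''

      transform = etree.XSLT(etree.XML(XSL))
      xhtml = etree.XML(b'<html xmlns="http://www.w3.org/1999/xhtml">'
                        b'<body><p>Hello, print.</p></body></html>')
      print(etree.tostring(transform(xhtml), pretty_print=True).decode())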
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
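  • As a sketch of that automation, assuming the wiki exposes its bare-bones export template at a URL (the URL and file names here are hypothetical):

      from urllib.request import urlopen
      from lxml import etree

      # Fetch the stripped-down export rendering of a chapter from the CMS...
      raw = urlopen("https://wiki.example.org/book/chapter01?template=export").read()
      # ...and pipe it straight through the XHTML-to-ICML stylesheet.
      stylesheet = etree.XSLT(etree.parse("xhtml-to-icml.xsl"))
      result = stylesheet(etree.XML(raw))
      result.write_output("chapter01.icml")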
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate them out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
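  • For the curious, here is a bare-bones sketch of the kind of wrapper a tool like eCub produces; the title, identifier, and chapter names are placeholders, and a production ePub would carry fuller metadata and a table-of-contents file:

      import zipfile

      CONTAINER = '''<?xml version="1.0"?>
      <container version="1.0"
          xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>'''

      OPF = '''<?xml version="1.0"?>
      <package version="2.0" xmlns="http://www.idpf.org/2007/opf"
          unique-identifier="bookid">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:identifier id="bookid">placeholder-id</dc:identifier>
          <dc:language>en</dc:language>
        </metadata>
        <manifest>
          <item id="ch1" href="chapter1.xhtml"
              media-type="application/xhtml+xml"/>
        </manifest>
        <spine><itemref idref="ch1"/></spine>
      </package>'''

      CHAPTER = ('<html xmlns="http://www.w3.org/1999/xhtml">'
                 '<body><p>Hello, ebook.</p></body></html>')

      with zipfile.ZipFile("book.epub", "w", zipfile.ZIP_DEFLATED) as epub:
          # The mimetype entry must come first and must be stored uncompressed.
          epub.writestr("mimetype", "application/epub+zip",
                        compress_type=zipfile.ZIP_STORED)
          epub.writestr("META-INF/container.xml", CONTAINER)
          epub.writestr("OEBPS/content.opf", OPF)
          epub.writestr("OEBPS/chapter1.xhtml", CHAPTER)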
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Our system relies on the Web as a content management platform rather than as a public-facing website; of course, a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS.
    Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My take-away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is an XML reformulation of HTML, and compatible with the WebKit layout engine Miro wants to move NCP to.
    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML (HTML is also an SGML application). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.
    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Cloud computing with Amazon Web Services, Part 1: Introduction - 0 views

  • Cloud computing is a paradigm shift in how we architect and deliver scalable applications. In the past, successful companies spent precious time and resources building an infrastructure that in turn provided them a competitive advantage. It was frequently a case of "You build it first and they will come." In most cases, this approach left large tracts of unused computing capacity taking up space in big data centers, required someone to babysit the servers, and carried the associated energy costs. The unused computing power wasted away, with no way to push it out to other companies or users who might be willing to pay for additional compute cycles. With cloud computing, excess computing capacity can be put to use and be profitably sold to consumers. This transformation of computing and IT infrastructure into a utility, which is available to all, somewhat levels the playing field.
  • According to Amazon’s estimates, businesses spend about 70 percent of their time on building and maintaining their infrastructures while using only 30 percent of their precious time actually working on the ideas that power their businesses.
Paul Merrell

Surveillance scandal rips through hacker community | Security & Privacy - CNET News - 0 views

  • One security start-up that had an encounter with the FBI was Wickr, a privacy-forward text messaging app for the iPhone with an Android version in private beta. Wickr's co-founder Nico Sell told CNET at Defcon, "Wickr has been approached by the FBI and asked for a backdoor. We said, 'No.'" The mistrust runs deep. "Even if [the NSA] stood up tomorrow and said that [they] have eliminated these programs," said Marlinspike, "How could we believe them? How can we believe that anything they say is true?" Where does security innovation go next? The immediate future of information security innovation most likely lies in software that provides an existing service but with heightened privacy protections, such as webmail that doesn't mine you for personal data.
  • Wickr's Sell thinks that her company has hit upon a privacy innovation that a few others are also doing, but many will soon follow: the company itself doesn't store user data. "[The FBI] would have to force us to build a new app. With the current app there's no way," she said, that they could incorporate backdoor access to Wickr users' texts or metadata. "Even if you trust the NSA 100 percent that they're going to use [your data] correctly," Sell said, "Do you trust that they're going to be able to keep it safe from hackers? What if somebody gets that database and posts it online?" To that end, she said, people will start seeing privacy innovation for services that don't currently provide it. Calling it "social networks 2.0," she said that social network competitors will arise that do a better job of protecting their customer's privacy and predicted that some that succeed will do so because of their emphasis on privacy. Abine's recent MaskMe browser add-on and mobile app for creating disposable e-mail addresses, phone numbers, and credit cards is another example of a service that doesn't have access to its own users' data.
  • The issue with balancing privacy and surveillance is that the wireless carriers are not interested in privacy, he said. "They've been providing wiretapping for 100 years. Apple may in the next year protect voice calls," he said, adding that the best hope for ending widespread government surveillance lies with the makers of mobile operating systems like Apple and Google. Not all upcoming security innovation will be focused on that kind of privacy protection. Security researcher Brandon Wiley showed off at Defcon a protocol he calls Dust that can obfuscate different kinds of network traffic, with the end goal of preventing censorship. "I only make products about letting you say what you want to say anywhere in the world," such as content critical of governments, he said. Encryption can hide the specifics of the traffic, but some governments have figured out that they can simply block all encrypted traffic, he said. The Dust protocol would change that, he said, making it hard to tell the difference between encrypted and unencrypted traffic. It's hard to build encryption into pre-existing products, Wiley said. "I think people are going to make easy-to-use, encrypted apps, and that's going to be the future."
  • Stamos predicted changes in services that companies with cloud storage offer, including offering customers the ability to store their data outside of the U.S. "If they want to stay competitive, they're going to have to," he said. But, he cautioned, "It's impossible to do a cloud-based ad supported service." Soghoian added, "The only way to keep a service running is to pay them money." This, he said, is going to give rise to a new wave of ad-free, privacy protective subscription services.
  • Companies could face severe consequences from their security experts, said Stamos, if the in-house experts find out that they've been lied to about providing government access to customer data. You could see "lots of resignations and maybe publicly," he said. "It wouldn't hurt their reputations to go out in a blaze of glory." Perhaps not surprisingly, Marlinspike sounded a hopeful call for non-destructive activism on Defcon's 21st anniversary. "As hackers, we don't have a lot of influence on policy. I hope that's something that we can focus our energy on," he said.
  •  
    NSA as the cause of the next major disruption in the social networking service industry?  Grief ahead for Google? Note the point made that: "It's impossible to do a cloud-based ad supported service" where the encryption/decryption takes place on the client side. 