
Open Web: Group items tagged "chips"


Gary Edwards

CPU Wars - Intel to Play Fab for an ARM Chipmaker: Understanding What the Altera Deal M... - 0 views

  • Intel wants x86 to conquer all computing spaces -- including mobile -- and is trying to leverage its process lead to make that happen.  However, it's been slowed by the lack of an on-die 4G cellular modem and by difficulties adapting to the mobile market's low component prices.  ARM, meanwhile, wants a piece of the PC and server markets, but has received a lukewarm response from consumers due to software compatibility concerns. The disappointing sales of (x86) tablet products running Microsoft Corp.'s (MSFT) Windows 8 and the flop of Windows RT (ARM) products in general somewhat unexpectedly served to maintain the status quo, allowing neither company to gain much ground.  For Intel, its partnership with Microsoft (the historic "Wintel" combo) has damaged its mobile efforts, as Windows 8 flopped in the tablet market.  Likewise, ARM's efforts to score PC market share were stifled by the flop of Windows RT, which led to OEMs killing off ARM-based laptops and convertibles.
  • Both companies seem to have learned their lesson and are migrating away from Windows towards other platforms -- in ARM's case Chromebooks, and in Intel's case Android tablets/smartphones. But suffice it to say, ARM Holdings and Intel are still very much bitter enemies from a sales perspective.
  • III. Profit vs. Risk -- Understanding the Modern CPU Food Chain
  • ...16 more annotations...
  • Whether it's tablets or PCs, the processor is still one of the most expensive components onboard.  Aside from the discrete GPU -- if a device has one -- the CPU has the greatest earning potential for a large company like Intel because the CPU is the most complex component. Other components like the power supply or memory tend to either be lower margin or have more competitors.  The display, memory, and storage components are all sensitive to process, but see profit split between different parties (e.g. the company who makes the DRAM chips and the company who sells the stick of DRAM) and are primarily dependent on process technology. CPUs and GPUs remain the toughest products to make: it's not enough to simply have the best process, you must also have the best architecture and the best optimization of that architecture for the space you're competing in. There are essentially five points of potential profit on the processor food chain: (1) [CPU] fabrication, (2) [CPU] architecture design, (3) [CPU] optimization, (4) OEM, and (5) OS platform. Of these, the fabrication/OS point is the most profitable (but is dependent on the number of OEM adopters).  The second most profitable niche is optimization (which again is dependent on OEM adopter market share), followed by OEM markups.  In terms of expense, fabrication and operating system design require the greatest capital investment and carry the highest risk.
  • In terms of difficulty/risk, the fabrication and operating system are the most difficult/risky points.  Hence, in terms of combined risk, cost, and profitability, the ranking of which points are "best" is arguably: optimization, architecture design, OS platform, OEM, fabrication -- with the fabrication point last largely because it's so high risk. In other words, the last thing Intel wants is to settle into a niche of playing fab for everybody else's product, as that's an unsound approach.  If you can't keep up in terms of chip design, you typically spin off your fabs and opt for a different architecture direction -- just look at Advanced Micro Devices, Inc.'s (AMD) spinoff of GlobalFoundries and its upcoming ARM product to see that.
  • IV. Top Firms' Role on That Food Chain
  • Apple has seen unbelievable profits due to this fundamental premise.  It controls the two most desirable points on the food chain -- OS and optimization -- while sharing some profit with its architecture designer (ARM Holdings) and a bit with the fabricator (Samsung Electronics Comp., Ltd. (KSC:005930)).  By choosing to play operating system maker, too, it adds to its profits, but also its risk.  Note that nearly every other first-party exclusive smartphone platform has failed or is about to fail (i.e. BlackBerry, Ltd. (TSE:BB) and the now-dead Palm).
  • Intel currently controls points 1, 2, and 5 on the food chain.  Compared to Apple, Intel's points of control offer less risk, but also slightly less profitability. Its architecture control may be at risk, but even so, it is currently the leader in its most risky/expensive point of control (fabrication), whereas Apple is much less of a clear leader in its most risky/expensive point of control (OS development), as Android has surpassed Apple in market share.  Hence Apple might be a better short-term investment, but Intel certainly appears a better long-term investment.
  • Samsung is another top company in terms of market dominance and profit.  It occupies points 1, 3, 4, and 5 -- sometimes.  Sometimes Samsung's devices use third-party optimization firms like Qualcomm Inc. (QCOM) and NVIDIA Corp. (NVDA), which hurts profitability by removing one of the most profitable roles.  But Samsung makes up for this by being one of the largest and most successful third party manufacturers.
  • Microsoft enjoys a lot of profit due to its OS dominance, as does Google Inc. (GOOG); but both companies are limited in controlling only one point which they monetize in different ways (Microsoft by direct sales; Google by giving away OS product for free in return for web services market share and by proxy search advertising revenue).
  • Qualcomm and NVIDIA are also quite profitable operating solely as optimizers, as is ARM Holdings who serves as architecture maker to Qualcomm, NVIDIA, Apple, and Samsung.
  • V. Four Scenarios in the x86 vs. ARM Competition
  • Scenario one is that x86 proves dominant in the mobile space, assuming a comparable process.
  • A second scenario is that x86 and ARM are roughly tied, assuming a comparable process.
  • A third scenario is that x86 is inferior to ARM at a comparable process, but comparable or superior to ARM when the x86 chip is built using a superior process.  From the benchmarks I've seen to date, I personally believe this is most likely.
  • A fourth scenario is that x86 is so drastically inferior to ARM architecturally that a process lead by Intel can't make up for it.
  • This is perhaps the most interesting scenario to think through in terms of how Intel would react, even if it is not overly likely.  If Intel were faced with this scenario, I believe it would simply bite the bullet and start making ARM chips, leveraging its process lead to become the dominant ARM chipmaker.  To make up for the revenue lost to licensing fees paid to ARM Holdings, it could focus its efforts on the OS space (its Tizen Linux OS project with Samsung hints at that).  Or it could look to make up for lost revenue by expanding its production of other basic process-sensitive components (e.g. DRAM).  I think this would be Intel's best and most likely option in this scenario.
  • VI. Why Intel is Unlikely to Play Fab For ARM Chipmakers (Even if ARM is Better)
  • From Intel's point of view, there is an entrenched but declining market for x86 chips because of Windows, and Intel will continue to support Atom chips (which will be required to run Windows 8 tablets), but growth on desktops will come from 64-bit desktop/server-class non-Windows ARM devices - Chromebooks, Android laptops, possibly Apple's desktop products as well, given that Apple is going 64-bit ARM for its future iPhones. Even Windows has been trying to transition (unsuccessfully) to ARM. Again, the Windows server market is tied to x86, but Linux and FreeBSD servers will run on ARM as well, and ARM will take a chunk out of the server market once a decent 64-bit ARM server chip is available.
  •  
    Excellent article explaining the CPU war for the future of computing, as Intel and ARM square off.  Intel's x86 architecture dominates the era of client/server computing, with its famed WinTel alliance monopolizing desktop, notebook and server implementations.  But Microsoft was a no-show in the emerging mobile computing market, and now ARM is in position to transition from its mobile dominance to challenge the desktop, notebook and server markets.  WinTel lost its shot at the mobile computing market, and now its legacy platforms are in play.  Good article!!! Well worth the read time.
Gary Edwards

Electronic Imp: Former Apple, Google, Facebook engineers launch IoT startup - 2012-05-1... - 0 views

  • "We've put it in a user-installable module. The user buys the card and just plugs it into any device that has a slot," Fiennes explained." All a developer needs to do is add a socket and a 3-pin Atmel ID chip to their product. That's 75 cents: 30 cents for the ID chip and 45 cents for the socket." This assumes the availability of 3.3 V. "But given that most things you want to control from the Internet are electrical, we think that's reasonable," he said. If not, developers can include a battery.
  • Fiennes demonstrated a power adaptor with an Imp socket. He installed a card and an appropriately labeled block appeared in a browser window. Fiennes plugged in a chain of decorative lights and we clicked on the box in our browser. After clicking, the box text went from "off" to "on." Over Skype, we could see the lights had come on. Fiennes emphasized that control need not be manual and could be linked to other Internet apps such as weather reports, or to Electric Imp sensor nodes that monitor conditions such as humidity. A second example is an Electric Imp-enabled passive infrared sensor. Fiennes demonstrated how it could be programmed to report the time and date of detected motion to a client's Web pages on the Electric Imp server. In turn, those pages could be programmed to send an alarm to a mobile phone. The alarm could also be triggered if no motion was detected, allowing the sensor to serve as a monitor for the elderly in their homes, for example. If there is no activity before 9 a.m., a message is sent to a caregiver.
  • The final example is an Electric Imp washing machine. Machine operation can be made conditional on a number of variables, including the price of electricity. "Every washing machine has a microcontroller and that microcontroller has a lot of data," said Fiennes. "That data could be sent back to a washing machine service organization that could call the client up before the washing machine breaks down."
  • ...1 more annotation...
  • The cards will be on sale to developers by the end of June for $25 each, and Electric Imp will also supply development kits that include a socket, ID chip and power connection on a small board for about $10. While these are intended for consumer electronics developers, Electric Imp is happy to sell them to students and non-professional developers. "Hobbyists can play with it and tell us what they think."
  •  
    Put Electronic Imp at the top of the "Technologies to watch" list.  Good stuff and a great implementation - platform plan.  excerpt: "We've put it in a user-installable module. The user buys the card and just plugs it into any device that has a slot," Fiennes explained. "All a developer needs to do is add a socket and a 3-pin Atmel ID chip to their product. That's 75 cents: 30 cents for the ID chip and 45 cents for the socket." This assumes the availability of 3.3 V. "But given that most things you want to control from the Internet are electrical, we think that's reasonable," he said. If not, developers can include a battery. When the $25 card is installed in a slot and powered up, it will find the ID number and automatically transmit the information to Electric Imp's servers. Fiennes and his colleagues have written a virtual machine that runs under a proprietary embedded operating system on the node and looks for updates of itself on the Internet. SSL encryption is used for data security when transmitted over the link.
Paul Merrell

The coming merge of human and machine intelligence - 0 views

  • Now, as the Internet revolution unfolds, we are seeing not merely an extension of mind but a unity of mind and machine, two networks coming together as one. Our smaller brains are in a quest to bypass nature's intent and grow larger by proxy. It is not a stretch of the imagination to believe we will one day have all of the world's information embedded in our minds via the Internet.
  • BCI stands for brain-computer interface, and Jan is one of only a few people on earth using this technology, through two implanted chips attached directly to the neurons in her brain. The first human brain implant was conceived of by John Donoghue, a neuroscientist at Brown University, and implanted in a paralyzed man in 2004. These dime-sized computer chips use a technology called BrainGate that directly connects the mind to computers and the Internet. Having served as chairman of the BrainGate company, I have personally witnessed just how profound this innovation is. BrainGate is an invention that allows people to control electrical devices with nothing but their thoughts. The BrainGate chip is implanted in the brain and attached to connectors outside of the skull, which are hooked up to computers that, in Jan Scheuermann's case, are linked to a robotic arm. As a result, Scheuermann can feed herself chocolate by controlling the robotic arm with nothing but her thoughts.
  • Mind meld: But imagine the ways in which the world will change when any of us, disabled or not, can connect our minds to computers.
  • ...2 more annotations...
  • Back in 2004, Google's founders told Playboy magazine that one day we'd have direct access to the Internet through brain implants, with "the entirety of the world's information as just one of our thoughts." A decade later, the road map is taking shape. While it may be years before implants like BrainGate are safe enough to be commonplace—they require brain surgery, after all—there are a host of brainwave sensors in development for use outside of the skull that will be transformational for all of us: caps for measuring driver alertness, headbands for monitoring sleep, helmets for controlling video games. This could lead to wearable EEGs, implantable nanochips or even technology that can listen to our brain signals using the electromagnetic waves that pervade the air we breathe. Just as human intelligence is expanding in the direction of the Internet, the Internet itself promises to get smarter and smarter. In fact, it could prove to be the basis of the machine intelligence that scientists have been racing toward since the 1950s.
  • Neurons may be good analogs for transistors and maybe even computer chips, but they're not good building blocks of intelligence. The neural network is fundamental. The BrainGate technology works because the chip attaches not to a single neuron, but to a network of neurons. Reading the signals of a single neuron would tell us very little; it certainly wouldn't allow BrainGate patients to move a robotic arm or a computer cursor. Scientists may never be able to reverse engineer the neuron, but they are increasingly able to interpret the communication of the network. It is for this reason that the Internet is a better candidate for intelligence than are computers. Computers are perfect calculators composed of perfect transistors; they are like neurons as we once envisioned them. But the Internet has all the quirkiness of the brain: it can work in parallel, it can communicate across broad distances, and it makes mistakes. Even though the Internet is at an early stage in its evolution, it can leverage the brain that nature has given us. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines. It took millions of years for humans to gain intelligence, but with the human mind as a guide, it may only take a century to create Internet intelligence.
  •  
    Of course once the human brain is interfaced with the internet, then we will be able to do the Vulcan mind-meld thing. And NSA will be busily crawling the Internet for fresh brain dumps to their data center, which by then encompasses the entire former state of Utah. Conventional warfare is a thing of the past as the cyberwar commands of great powers battle for control of the billions of minds making up BrainNet, the internet's successor.  Meanwhile, a hacker's Reaper malware trawls BrainNet for bank account numbers and passwords that it forwards for automated harvesting of personal funds. "Ah, Houston ... we have a problem ..."
Paul Merrell

Less is more - Coeur d'Alene Press Newspaper - Local and National News - Kootenai Count... - 0 views

  • Researchers at the University of Idaho have created a single computer chip more powerful than 17,000 Intel quad-core processors that runs on 0.03 percent of the power those chips would require. The chip will be used on NASA's developing Geostationary Synthetic Thinned Aperture Radiometer (GeoSTAR) project, which will observe hurricanes and other severe storms in the U.S.
  • Though commercial electronics have used this technology for some time, they could not make the electronics resistant to radiation, which is required for operations in space
Paul Merrell

World's first programmable quantum photonic chip | ExtremeTech - 0 views

  • A team of engineering geniuses from the University of Bristol, England, has developed the world's first re-programmable, multi-purpose quantum photonic computer chip that relies on quantum entanglement to perform calculations. With multiple waveguide channels (made from standard silicon dioxide) and eight electrodes, the silicon chip is capable of repeatedly entangling photons. Depending on how the electrodes are programmed, different quantum states can be produced. The end result is two qubits that can be used to perform quantum computing — and unlike D-Wave's 128-qubit processor (well, depending on who you ask) this is real quantum computing.
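    For readers new to the terminology: "entangling photons" means preparing a joint two-qubit state that cannot be factored into two independent one-qubit states. The canonical example (a general illustration, not a claim about the specific states this chip prepares) is the Bell state

        \left|\Phi^{+}\right\rangle \;=\; \tfrac{1}{\sqrt{2}}\,\bigl(\left|00\right\rangle + \left|11\right\rangle\bigr),

    in which measuring either photon immediately fixes the outcome for the other. Reprogramming the electrodes amounts to steering the chip toward different joint states of this kind.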
  • We know that entanglement can be used for very effective encryption, but beyond that it’s mostly guesswork. There’s general agreement that qubits should allow for faster computation of very complex numbers — think biological processes and weather systems — and early work by Google suggests that pattern recognition might also be a strength of qubits.
Gary Edwards

Will Intel let Jen-Hsun Huang spread graphics beyond PCs? » VentureBeat - 1 views

  •  
    Nvidia chief executive Jen-Hsun Huang is on a mission to get graphics chips into everything from handheld computers to smartphones. He expects, for instance, that low-cost Netbooks will become the norm and that gadgets will need to have battery life lasting for days. Holding up an Ion platform, which couples an Intel low-cost Atom processor with an Nvidia integrated graphics chip set, he said his company is looking to determine "what is the soul of the new PC." With Ion, Huang said he is prepared for the future of the computer industry. But first, he has to deal with Intel. Good interview. See the interview with Charlie Rose! The Dance of the Sugarplum Documents is about the evolution of the Web document model from a text-typographical/calculation model to one that is visually rich, with graphical media streams meshing into traditional text/calc. The thing is, this visual document model is being defined on the edge. The challenge to the traditional desktop document model is coming from the edge, primarily from the WebKit - Chrome - iPhone community. Jen-Hsun argues on Charlie Rose that desktop computers featured processing power and applications designed to automate typewriter (word processing) and calculator (spreadsheet) functions. The x86 CPU design reflects this orientation. He argues that we are now entering the age of visual computing. A GPU is capable of dramatic increases in processing power because its architecture is geared to the volumes of graphical information being processed. Let the CPU do the traditional stuff, and let the GPU race into the future with the visual processing. That a GPU architecture can scale in parallel is an enormous advantage. But Jen-Hsun does not see the need to try to replicate CPU tasks in a GPU. The best way forward, in his opinion, is to combine the two!!!
Paul Merrell

Time to 'Break Facebook Up,' Sanders Says After Leaked Docs Show Social Media Giant 'Tr... - 0 views

  • After NBC News on Wednesday published a trove of leaked documents that show how Facebook "treated user data as a bargaining chip with external app developers," White House hopeful Sen. Bernie Sanders declared that it is time "to break Facebook up."
  • When British investigative journalist Duncan Campbell first shared the trove of documents with a handful of media outlets including NBC News in April, journalists Olivia Solon and Cyrus Farivar reported that "Facebook CEO Mark Zuckerberg oversaw plans to consolidate the social network's power and control competitors by treating its users' data as a bargaining chip, while publicly proclaiming to be protecting that data." With the publication Wednesday of nearly 7,000 pages of records—which include internal Facebook emails, web chats, notes, presentations, and spreadsheets—journalists and the public can now have a closer look at exactly how the company was using the vast amount of data it collects when it came to bargaining with third parties.
  • According to Solon and Farivar of NBC: Taken together, they show how Zuckerberg, along with his board and management team, found ways to tap Facebook users' data—including information about friends, relationships, and photos—as leverage over the companies it partnered with. In some cases, Facebook would reward partners by giving them preferential access to certain types of user data while denying the same access to rival companies. For example, Facebook gave Amazon special access to user data because it was spending money on Facebook advertising. In another case the messaging app MessageMe was cut off from access to data because it had grown too popular and could compete with Facebook.
  • ...2 more annotations...
  • The document dump comes as Facebook and Zuckerberg are facing widespread criticism over the company's political advertising policy, which allows candidates for elected office to lie in the ads they pay to circulate on the platform. It also comes as 47 state attorneys general, led by Letitia James of New York, are investigating the social media giant for antitrust violations.
  • The call from Sanders (I-Vt.) Wednesday to break up Facebook follows similar but less definitive statements from the senator. One of Sanders' rivals in the 2020 Democratic presidential primary race, Sen. Elizabeth Warren (D-Mass.), released her plan to "Break Up Big Tech" in March. Zuckerberg is among the opponents of Warren's proposal, which also targets other major technology companies like Amazon and Google.
Paul Merrell

Testosterone Pit - Home - The Other Reason Why IBM Throws A Billion At Linux ... - 0 views

  • IBM announced today that it would throw another billion at Linux, the open-source operating system, to run its Power System servers. The first time it had thrown a billion at Linux was in 2001, when Linux was a crazy, untested, even ludicrous proposition for the corporate world. So the moolah back then didn’t go to Linux itself, which was free, but to related technologies across hardware, software, and service, including things like sales and advertising – and into IBM’s partnership with Red Hat which was developing its enterprise operating system, Red Hat Enterprise Linux. “It helped start a flurry of innovation that has never slowed,” said Jim Zemlin, executive director of the Linux Foundation. IBM claims that the investment would “help clients capitalize on big data and cloud computing with modern systems built to handle the new wave of applications coming to the data center in the post-PC era.” Some of the moolah will be plowed into the Power Systems Linux Center in Montpellier, France, which opened today. IBM’s first Power Systems Linux Center opened in Beijing in May. IBM may be trying to make hay of the ongoing revelations that have shown that the NSA and other intelligence organizations in the US and elsewhere have roped in American tech companies of all stripes with huge contracts to perfect a seamless spy network. They even include physical aspects of surveillance, such as license plate scanners and cameras, which are everywhere [read.... Surveillance Society: If You Drive, You Get Tracked].
  • Then another boon for IBM. Experts at the German Federal Office for Security in Information Technology (BSI) determined that Windows 8 is dangerous for data security. It allows Microsoft to control the computer remotely through a "special surveillance chip," the wonderfully named Trusted Platform Module (TPM), and a backdoor in the software – with keys likely accessible to the NSA and possibly other third parties, such as the Chinese. Risks: "Loss of control over the operating system and the hardware" [read.... LEAKED: German Government Warns Key Entities Not To Use Windows 8 – Links The NSA].
  • It would be an enormous competitive advantage for an IBM salesperson to walk into a government or corporate IT department and sell Big Data servers that don’t run on Windows, but on Linux. With the Windows 8 debacle now in public view, IBM salespeople don’t even have to mention it. In the hope of stemming the pernicious revenue decline their employer has been suffering from, they can politely and professionally hype the security benefits of IBM’s systems and mention in passing the comforting fact that some of it would be developed in the Power Systems Linux Centers in Montpellier and Beijing. Alas, Linux too is tarnished. The backdoors are there, though the code can be inspected, unlike Windows code. And then there is Security-Enhanced Linux (SELinux), which was integrated into the Linux kernel in 2003. It provides a mechanism for supporting “access control” (a backdoor) and “security policies.” Who developed SELinux? Um, the NSA – which helpfully discloses some details on its own website (emphasis mine): The results of several previous research projects in this area have yielded a strong, flexible mandatory access control architecture called Flask. A reference implementation of this architecture was first integrated into a security-enhanced Linux® prototype system in order to demonstrate the value of flexible mandatory access controls and how such controls could be added to an operating system. The architecture has been subsequently mainstreamed into Linux and ported to several other systems, including the Solaris™ operating system, the FreeBSD® operating system, and the Darwin kernel, spawning a wide range of related work.
  • ...1 more annotation...
  • Among a slew of American companies who contributed to the NSA’s “mainstreaming” efforts: Red Hat. And IBM? Like just about all of our American tech heroes, it looks at the NSA and other agencies in the Intelligence Community as “the Customer” with deep pockets, ever increasing budgets, and a thirst for technology and data. Which brings us back to Windows 8 and TPM. A decade ago, a group was established to develop and promote Trusted Computing that governs how operating systems and the “special surveillance chip” TPM work together. And it too has been cooperating with the NSA. The founding members of this Trusted Computing Group, as it’s called facetiously: AMD, Cisco, Hewlett-Packard, Intel, Microsoft, and Wave Systems. Oh, I almost forgot ... and IBM. And so IBM might not escape, despite its protestations and slick sales presentations, the suspicion by foreign companies and governments alike that its Linux servers too have been compromised – like the cloud products of other American tech companies. And now, they’re going to pay a steep price for their cooperation with the NSA. Read...  NSA Pricked The “Cloud” Bubble For US Tech Companies
Paul Merrell

Lawmakers want Internet sites to flag 'terrorist activity' to law enforcement - The Was... - 0 views

  • Social media sites such as Twitter and YouTube would be required to report videos and other content posted by suspected terrorists to federal authorities under legislation approved this past week by the Senate Intelligence Committee. The measure, contained in the 2016 intelligence authorization, which still has to be voted on by the full Senate, is an effort to help intelligence and law enforcement officials detect threats from the Islamic State and other terrorist groups.
  •  
    Chipping away at the First Amendment. 
Gary Edwards

Google acquisitions may signal big push against Microsoft Office | VentureBeat - 0 views

  •  
    Google has been making a number of acquisitions that are clearly Docs-related. Over the weekend, TechCrunch reported that the search giant is in the final stages of talks to acquire DocVerse, a startup that lets users collaborate around Office documents, for $25 million. The deal would also bring Google some key hires, since the startup's co-founders were managers on SharePoint, Microsoft's popular collaboration service. This follows the November acquisition of AppJet, a company founded by former Googlers that created a collaborative word processor. (It's worth noting that Google Docs itself was the offspring of several acquisitions, including Google's purchase of Writely.) Meanwhile, Google has been talking up the splash it wants Google Docs to make in 2010. Don Dodge, who just made the move from Microsoft to Google, recently told me, "2010 is going to be the year of Gmail and Google Docs and Google Apps." Even more concretely, Enterprise President Dave Girouard said last month that Docs will see 30 to 50 improvements over the next year, at which point big companies will be able to "get rid of Office if they choose to." Presumably features from AppJet and DocVerse will be among those improvements. I'd certainly be thrilled to see the battle between Office and Docs become a real competition, rather than upstart Google slowly chipping away at Microsoft's Office behemoth.
Paul Merrell

Huawei launches first product with own operating system - 0 views

  • Chinese telecom giant Huawei, which has been caught in the crossfire of the Washington-Beijing trade war, on Saturday unveiled a new smart television, the first product to use its own operating system. The television will be available from Thursday in China and marks the first use of HarmonyOS, chief executive George Zhao said, adding that it will be marketed by its mid-range brand, Honor. Huawei revealed its highly anticipated HarmonyOS on Friday as an alternative operating system for phones and other smart devices in the event that looming US sanctions prevent the firm from using Android technology. American companies are theoretically no longer allowed to sell technology products to Huawei, but a three-month exemption period -- which ends next week -- was granted by Washington before the measure came into force. That ban could stop the tech giant from getting hold of key hardware and software, including smartphone chips and elements of the Google Android operating system, which runs the vast majority of smartphones in the world, including Huawei's. Huawei -- considered the world leader in fast fifth-generation or 5G equipment and the world's number two smartphone producer -- has been blacklisted by US President Donald Trump amid suspicions it provides a backdoor for Chinese intelligence services, which the firm denies.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more "serious" XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
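    A minimal sketch of that extensibility (the class names here are invented for illustration, not drawn from any published microformat):

        <div class="chapter">
          <p class="epigraph">An opening quotation would go here.</p>
          <h2>Chapter One</h2>
          <p class="summary">A one-paragraph abstract of the chapter.</p>
          <p>Ordinary body text remains a plain paragraph.</p>
        </div>

    A stylesheet or a transformation script can then treat p.epigraph and p.summary differently from ordinary paragraphs -- which is the kind of distinction DocBook or DITA would express with dedicated elements.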
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
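    Unzipping an ePub makes this anatomy concrete. A representative layout (file names vary from producer to producer; this is a sketch, not the output of any particular tool):

        mimetype                   -- the literal string "application/epub+zip"
        META-INF/container.xml     -- points to the package document
        OEBPS/content.opf          -- metadata, manifest, and spine
        OEBPS/toc.ncx              -- navigation / table of contents
        OEBPS/chapter01.xhtml      -- the actual book content, plain XHTML
        OEBPS/styles.css           -- presentation

    The container.xml is only a few lines of XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <container version="1.0"
                   xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf"
                      media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>

    Everything the reader actually reads lives in the XHTML files; the rest is wrapper and metadata.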
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
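    A hypothetical before-and-after for that cleanup (the class name is invented; InDesign's actual export classes will differ):

        <!-- as exported: presentation-oriented -->
        <p class="bullet-item">First point</p>
        <p class="bullet-item">Second point</p>

        <!-- after cleanup: structural XHTML -->
        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>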
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
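    The shape of such a transformation can be sketched in a few lines. The output element and attribute names below approximate Adobe's ICML vocabulary rather than reproduce it exactly, so treat this as an illustration of the mapping, not a verified stylesheet:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:xhtml="http://www.w3.org/1999/xhtml">
          <!-- each XHTML paragraph becomes an InDesign-style paragraph range -->
          <xsl:template match="xhtml:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange>
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>

    Run with something like "xsltproc transform.xsl chapter.xhtml > chapter.icml" (file names invented for the example), a stylesheet of this shape turns a chapter of clean XHTML into content InDesign can place.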
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

Windows Phone 7: An In-depth Look at the Features and Interface - PCWorld - 0 views

  • Microsoft's hardware partners include Dell, HTC, Garmin ASUS, LG, Samsung, SE, Toshiba, HP and Qualcomm. NVIDIA, which provided the Tegra chip in the Zune HD hardware, is noticeably absent. Microsoft had no comment.
    • Gary Edwards
       
       This is interesting!  Nvidia not invited to Microsoft's Windows Phone 7 coming out?  Things must be heating up on the gaming platform.
  •  
    The Mobile World Congress in Barcelona is under way, and Microsoft is making a show of force.  They are hitting with features that support both "Social Network Hubs" (People Hub) and "Productivity Hubs" (Office Hub).  The key to Microsoft's Windows Phone 7 success is not that of new and exciting features competitive with Apple and Google smartphones.  No, the key feature that Microsoft owns is that of integration into legacy desktop productivity business systems based on the MSOffice productivity platform and, more importantly, integration into the emerging Web-centered but proprietary Microsoft Business Productivity Platform.  Stay tuned. Excerpt on the new Windows Phone 7 feature set:  The Office Hub lets you easily sync your documents between your phone and your PC. Office Hub comes with OneNote for notetaking, and Documents and SharePoint for presentation collaboration. Users will also have access to an Outlook Mail application which gives similar features, like flagging important e-mails, that you'd find in the desktop version.
Paul Merrell

Did NSA, GCHQ steal the secret key in YOUR phone SIM? It's LIKELY * The Register - 0 views

  • The NSA and Britain's GCHQ hacked the world's biggest SIM card maker to harvest the encryption keys needed to silently and effortlessly eavesdrop on potentially millions of people. That's according to documents obtained by surveillance whistleblower Edward Snowden and leaked to the web on Thursday. "Wow. This is huge – it's one of the most significant findings of the Snowden files so far," computer security guru Bruce Schneier told The Register this afternoon. "We always knew that they would occasionally steal SIM keys. But all of them? The odds that they just attacked this one firm are extraordinarily low and we know the NSA does like to steal keys where it can." The damning slides, published by Snowden's chums at The Intercept, detail the activities of the as-yet unheard-of Mobile Handset Exploitation Team (MHET), run by the US and UK. The group targeted Gemalto, which churns out about two billion SIM cards each year for use around the world, and targeted it in an operation dubbed DAPINO GAMMA.
  • Gemalto's hacking may also call into question some of its other security products. The company supplies chips for electronic passports issued by the US, Singapore, India, and many European states, and is also involved in the NFC and mobile banking sector. It's important to note that this is useful for tracking the phone activity of a target, but the mobile user can still use encryption on the handset itself to ensure that some communications remain private. "Ironically one of your best defenses against a hijacked SIM is to use software encryption," Jon Callas, CTO of encrypted chat biz Silent Circle, told The Register. "In our case there's a TCP/IP cloud between Alice and Bob and that can deal with compromised routers along the path as well as SIM issues, and the same applies to similar mobile software." (A minimal sketch of this application-layer approach appears below, after these excerpts.)
  • On Wednesday the UK government admitted that its intelligence agencies had in fact broken the ECHR when spying on communications between lawyers and those suing the British state, so GCHQ might want to reconsider that statement.
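
    To make Callas's point concrete, here is a minimal sketch of software encryption riding above the SIM layer: the cellular link may be readable to someone holding a stolen Ki, but the payload itself stays sealed end to end. This is not Silent Circle's implementation; it assumes the two endpoints already share a 256-bit key via some authenticated key exchange, which is out of scope here.

```python
# Minimal illustration of "software encryption above the SIM": even if the SIM's
# Ki is stolen and the cellular link can be decrypted, a payload sealed with an
# application-layer key stays private. NOT Silent Circle's implementation; the
# shared key below stands in for one negotiated by an authenticated key exchange.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

shared_key = AESGCM.generate_key(bit_length=256)  # stand-in for a negotiated session key

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                        # 96-bit nonce, unique per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def open_sealed(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Only ciphertext crosses the (possibly compromised) cellular link.
wire = seal(shared_key, b"meet at noon")
assert open_sealed(shared_key, wire) == b"meet at noon"
```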
Paul Merrell

Are processors pushing up against the limits of physics? | Ars Technica - 0 views

  • When I first started reading Ars Technica, the performance of a processor was measured in megahertz, and the major manufacturers were rushing to squeeze as many of them as possible into their latest silicon. Shortly thereafter, however, the energy needs and heat output of these beasts brought that race crashing to a halt. More recently, the number of processing cores rapidly scaled up, but they quickly reached the point of diminishing returns. Now, getting the most processing power per watt seems to be the key measure of performance. None of these things happened because the companies making processors ran up against hard physical limits. Rather, computing power ended up being constrained because progress in certain areas—primarily energy efficiency—was slow compared to progress in others, such as feature size. But could we be approaching physical limits in processing power? In this week's edition of Nature, the University of Michigan's Igor Markov takes a look at the sorts of limits we might face.
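
    A toy calculation of the performance-per-watt metric the excerpt describes; the part names and numbers below are invented purely for the arithmetic, not measured figures.

```python
# Performance per watt as the figure of merit: throughput divided by power draw.
# Chip names and numbers are made up for illustration only.
chips = {
    # name: (throughput in GFLOPS, power draw in watts)
    "fast-but-hot":  (1000.0, 250.0),
    "slower-cooler": (600.0, 100.0),
}

for name, (gflops, watts) in chips.items():
    print(f"{name}: {gflops / watts:.1f} GFLOPS/W")

# The nominally slower part wins on efficiency (6.0 vs 4.0 GFLOPS/W), which is
# the sense in which raw clock speed stopped being the headline metric.
```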
Paul Merrell

Chinese State Media Declares iPhone a Threat To National Security - Slashdot - 0 views

  • "When NSA whistleblower Edward Snowden came forth last year with U.S. government spying secrets, it didn't take long to realize that some of the information revealed could bring on serious repercussions — not just for the U.S. government, but also for U.S.-based companies. The latest to feel the hit? None other than Apple, and in a region the company has been working hard to increase market share: China. China, via state media, has today declared that Apple's iPhone is a threat to national security — all because of its thorough tracking capabilities. It has the ability to keep track of user locations, and to the country, this could potentially reveal "state secrets" somehow. It's being noted that the iPhone will continue to track the user to some extent even if the overall feature is disabled. China's iPhone ousting comes hot on the heels of Russia's industry and trade deeming AMD and Intel processors to be untrustworthy. The nation will instead be building its own ARM-based "Baikal" processor.
Paul Merrell

The Great SIM Heist: How Spies Stole the Keys to the Encryption Castle - 0 views

  • AMERICAN AND BRITISH spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe, according to top-secret documents provided to The Intercept by National Security Agency whistleblower Edward Snowden. The hack was perpetrated by a joint unit consisting of operatives from the NSA and its British counterpart Government Communications Headquarters, or GCHQ. The breach, detailed in a secret 2010 GCHQ document, gave the surveillance agencies the potential to secretly monitor a large portion of the world’s cellular communications, including both voice and data. The company targeted by the intelligence agencies, Gemalto, is a multinational firm incorporated in the Netherlands that makes the chips used in mobile phones and next-generation credit cards. Among its clients are AT&T, T-Mobile, Verizon, Sprint and some 450 wireless network providers around the world. The company operates in 85 countries and has more than 40 manufacturing facilities. One of its three global headquarters is in Austin, Texas and it has a large factory in Pennsylvania. In all, Gemalto produces some 2 billion SIM cards a year. Its motto is “Security to be Free.”
  • With these stolen encryption keys, intelligence agencies can monitor mobile communications without seeking or receiving approval from telecom companies and foreign governments. Possessing the keys also sidesteps the need to get a warrant or a wiretap, while leaving no trace on the wireless provider’s network that the communications were intercepted. Bulk key theft additionally enables the intelligence agencies to unlock any previously encrypted communications they had already intercepted, but did not yet have the ability to decrypt.
  • Leading privacy advocates and security experts say that the theft of encryption keys from major wireless network providers is tantamount to a thief obtaining the master ring of a building superintendent who holds the keys to every apartment. “Once you have the keys, decrypting traffic is trivial,” says Christopher Soghoian, the principal technologist for the American Civil Liberties Union. “The news of this key theft will send a shock wave through the security community.”
  • ...2 more annotations...
  • According to one secret GCHQ slide, the British intelligence agency penetrated Gemalto’s internal networks, planting malware on several computers, giving GCHQ secret access. We “believe we have their entire network,” the slide’s author boasted about the operation against Gemalto. Additionally, the spy agency targeted unnamed cellular companies’ core networks, giving it access to “sales staff machines for customer information and network engineers machines for network maps.” GCHQ also claimed the ability to manipulate the billing servers of cell companies to “suppress” charges in an effort to conceal the spy agency’s secret actions against an individual’s phone. Most significantly, GCHQ also penetrated “authentication servers,” allowing it to decrypt data and voice communications between a targeted individual’s phone and his or her telecom provider’s network. A note accompanying the slide asserted that the spy agency was “very happy with the data so far and [was] working through the vast quantity of product.”
  • The U.S. and British intelligence agencies pulled off the encryption key heist in great stealth, giving them the ability to intercept and decrypt communications without alerting the wireless network provider, the foreign government or the individual user that they have been targeted. “Gaining access to a database of keys is pretty much game over for cellular encryption,” says Matthew Green, a cryptography specialist at the Johns Hopkins Information Security Institute. The massive key theft is “bad news for phone security. Really bad news.” (A conceptual sketch of how a stolen SIM key yields the session keys that protect traffic appears below, after this item.)
  •  
    Remember all those NSA claims that no evidence of their misbehavior has emerged? That one should never take wing again. Monitoring call content without the involvement of any court? Without a warrant? Without probable cause?  Was there even any Congressional authorization?  Wiretapping unequivocally requires a judicially-approved search warrant. It's going to be very interesting to learn the government's argument for this misconduct's legality. 
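
    The excerpts above turn on one piece of protocol plumbing: in GSM-style authentication, both the SIM and the network derive the session cipher key from the long-term secret Ki and a challenge (RAND) that is sent in the clear, so anyone holding Ki and a recording of the air interface can derive the same key offline. The sketch below illustrates that data flow only; real SIMs use operator-specific A3/A8 algorithms (COMP128 variants, MILENAGE), and the HMAC here is just a stand-in.

```python
# Conceptual sketch of why bulk Ki theft is "game over": the network broadcasts a
# random challenge RAND in the clear, and both the SIM and the network derive the
# session cipher key Kc from (Ki, RAND). Anyone who holds Ki and has recorded RAND
# derives the same Kc offline. The HMAC below is a stand-in for the operator's
# real A8 algorithm, used only to show the data flow.
import hashlib
import hmac
import os

def derive_kc(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A8: derive a 64-bit session key from the SIM secret and the challenge."""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:8]

ki = os.urandom(16)      # burned into the SIM at manufacture; the value the card maker held
rand = os.urandom(16)    # challenge sent in the clear during authentication

kc_on_handset = derive_kc(ki, rand)   # what the phone computes
kc_for_spy = derive_kc(ki, rand)      # what a key-holding eavesdropper computes

assert kc_on_handset == kc_for_spy    # same key, so recorded traffic can be decrypted later
```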
Paul Merrell

Google Is Constantly Tracking, Even If You Turn Off Device 'Location History' | Zero Hedge - 0 views

  • In but the latest in a continuing saga of big tech tracking and surveillance stories which should serve to convince us all we are living in the beginning phases of a Minority Report style tracking and pansophical "pre-crime" system, it's now confirmed that the world's most powerful tech company and search tool will always find a way to keep your location data. The Associated Press sought the help of Princeton researchers to prove that while Google is clear and upfront about giving App users the ability to turn off or "pause" Location History on their devices, there are other hidden means through which it retains the data.
  • According to the AP report: Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.” That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account. The issue directly affects around two billion people using Google's Android operating software and iPhone users relying on Google maps or a simple search. Among the computer science researchers at Princeton conducting the tests is Jonathan Mayer, who told the AP, “If you’re going to allow users to turn off something called ‘Location History,’ then all the places where you maintain location history should be turned off,” and added, “That seems like a pretty straightforward position to have.”
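
    One way to see what has actually been retained is to pull a Google Takeout export and inspect it locally. The sketch below assumes the older Takeout "Location History" JSON layout (latitudeE7 / longitudeE7 / timestampMs); field names may differ in current exports, and the file name is a placeholder. It reads a downloaded file only and calls no Google API.

```python
# Audit sketch: count and date-bound the location fixes in a downloaded Google
# Takeout export. Assumes the older JSON layout with latitudeE7/longitudeE7/
# timestampMs fields; the file name is a placeholder.
import json
from datetime import datetime, timezone

with open("Location History.json", encoding="utf-8") as f:
    points = json.load(f).get("locations", [])

def when(point):
    """Convert the millisecond epoch timestamp to a UTC datetime."""
    return datetime.fromtimestamp(int(point["timestampMs"]) / 1000, tz=timezone.utc)

if points:
    print(f"{len(points)} stored fixes, from {when(points[0]):%Y-%m-%d} to {when(points[-1]):%Y-%m-%d}")
    lat = points[-1]["latitudeE7"] / 1e7   # coordinates are stored as integers scaled by 10^7
    lon = points[-1]["longitudeE7"] / 1e7
    print(f"most recent fix: {lat:.5f}, {lon:.5f}")
```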
Paul Merrell

Shocking Leak Reveals Facebook Leveraged User Data To Reward Friends, Punish Enemies | ... - 0 views

  • As traders focused on bank earnings and the outlook for global growth, NBC News wrested the market's attention back toward Facebook by publishing a report on what appears to be the largest leak of internal documents since the data privacy scandal that has dogged the company for more than a year erupted with the first reports about Cambridge Analytica's 'improper' leveraging of Facebook user data to influence elections.
  • Some 4,000 pages of documents shared with the network news organization by a journalist affiliated with the ICIJ, the same organization that helped bring us the Panama Papers leaks, revealed that Facebook had employed sensitive user data as a bargaining chip to attract major advertisers and close other deals between 2011 and 2015, when the company was struggling to cement its business model following its botched 2012 IPO.
  • Facebook essentially offered companies like Amazon unfettered access to its data in exchange for agreeing to advertise on Facebook's platform, according to the documents, only a small fraction of which have been previously reported on. All of this was happening at a time when the company publicly professed to be safeguarding user data.
Paul Merrell

NAS Report: A New Light in the Debate over Government Access to Encrypted Content - Law... - 0 views

  • The encryption debate dates back to Clinton administration proposals for the “clipper chip” and mandatory deposit of decryption keys. But that debate reached new prominence in connection with the FBI’s efforts to compel Apple to decrypt the phone of a dead terrorist in the San Bernardino case. A new study by the National Academies of Sciences, Engineering, and Medicine tries to shed some light, and turn down the heat, in the debate over whether government agencies should be provided access to plaintext versions of encrypted communications and other data. FBI and other law enforcement officials, and some intelligence officials, have argued that in the face of widespread encryption provided by smart phones, messaging apps, and other devices and software, the internet is “going dark.” These officials warn that encryption is restricting their access to information needed for criminal and national security investigations, arguing that they need a reliable, timely and scalable way to access it. Critics have raised legal and practical objections that regulations to ensure government access would pose unacceptable risks to privacy and civil liberties and undermine computer security in the face of rising cyber threats, and may be less necessary given the wider availability of data and alternative means of obtaining access to encrypted data. As the encryption debate has become increasingly polarized with participants on all sides making sweeping, sometimes absolutist, assertions, the new National Academies’ report doesn’t purport to tell anyone what to do, but rather provides a primer on the relevant issues.
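
    To illustrate what "mandatory deposit of decryption keys" meant in that debate, here is a toy split-key escrow sketch: a device key divided between two escrow agents so that neither alone learns it, but both together (for example, under court order) can reconstruct it. The real Clipper LEAF mechanism was considerably more involved; this shows only the core idea.

```python
# Toy split-key escrow: the device key is XOR-split into two shares held by
# separate escrow agents. Either share alone looks like random noise; combining
# both shares reconstructs the key. Illustrative only, not the Clipper design.
import secrets

device_key = secrets.token_bytes(16)     # the unit key baked into a device

share_a = secrets.token_bytes(16)        # deposited with escrow agent A
share_b = bytes(k ^ a for k, a in zip(device_key, share_a))   # deposited with agent B

# Recovery requires both shares, e.g. when both agents comply with a court order.
recovered = bytes(a ^ b for a, b in zip(share_a, share_b))
assert recovered == device_key
```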