
New Media Ethics 2009 course / Group items matching "Software" in title, tags, annotations or url


Think you're a good employee? Office snooping software can tell - CNN.com

  • More than that, Killock believes using such software can have a negative psychological impact on a workplace. "It is a powerful signal that you do not fully trust the people you are paying or perhaps don't invest the time and care to properly manage them," he says.
    • Weiman Kow: The presentation group brought up this point. =)
  • Ultimately, true privacy only begins outside the workplace -- and the law supports that. In the United States, at least, all email and other electronic content created on the employer's equipment belongs to the employer, not the employee. Slackers would do well to remember that.
  • But Charnock is keen to stress Cataphora isn't only about bosses spying on their team -- it works both ways.
    • Weiman Kow: Is that really true?
  • Our software builds a multi-dimensional model of normal behavior,
  • the emails they send, the calls they make and the documents they write.
  • [We can tell] who is really being consulted by other employees, and on which topics; who is really making decisions
  • The software began as a tool to assist lawyers with the huge corporate databases often subpoenaed as evidence in trials but has now moved into human resources.
  • We do have extensive filters to try to weed out people who are highly productive in areas such as sports banter and knowledge of local bars,
  • Just a link on advances in extensive office surveillance - this program is supposed to "separate the good employees from the bad by analyzing workers' 'electronic footprints' -- the emails they send, the calls they make and the documents they write"
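Cataphora's "multi-dimensional model of normal behavior" is proprietary, but the underlying idea, flagging employees whose electronic footprint deviates sharply from the group norm, can be sketched with a simple z-score filter. Everything below (the names, the counts, the 1.5-sigma threshold) is invented for illustration; a real system would model many dimensions, not just email volume.

```python
from statistics import mean, stdev

def flag_outliers(weekly_email_counts, threshold=1.5):
    """Flag people whose weekly e-mail volume deviates sharply from the
    group norm (|z-score| above the threshold). A toy stand-in for a
    'model of normal behavior' over communication metadata."""
    counts = list(weekly_email_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    return {
        name: round((n - mu) / sigma, 2)
        for name, n in weekly_email_counts.items()
        if sigma and abs(n - mu) / sigma > threshold
    }

# Invented data: one employee's footprint is far from the norm.
activity = {"ana": 120, "ben": 130, "cho": 125, "dee": 118, "eve": 410}
print(flag_outliers(activity))
```

Note the ethical point raised above still applies: a statistical outlier is not evidence of slacking, only of difference from the group.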

McKinsey & Company - Clouds, big data, and smart assets: Ten tech-enabled business tren...

  • 1. Distributed cocreation moves into the mainstream. In the past few years, the ability to organise communities of Web participants to develop, market, and support products and services has moved from the margins of business practice to the mainstream. Wikipedia and a handful of open-source software developers were the pioneers. But in signs of the steady march forward, 70 per cent of the executives we recently surveyed said that their companies regularly created value through Web communities. Similarly, more than 68m bloggers post reviews and recommendations about products and services.
  • for every success in tapping communities to create value, there are still many failures. Some companies neglect the up-front research needed to identify potential participants who have the right skill sets and will be motivated to participate over the longer term. Since cocreation is a two-way process, companies must also provide feedback to stimulate continuing participation and commitment. Getting incentives right is important as well: cocreators often value reputation more than money. Finally, an organisation must gain a high level of trust within a Web community to earn the engagement of top participants.
  • 2. Making the network the organisation In earlier research, we noted that the Web was starting to force open the boundaries of organisations, allowing nonemployees to offer their expertise in novel ways. We called this phenomenon "tapping into a world of talent." Now many companies are pushing substantially beyond that starting point, building and managing flexible networks that extend across internal and often even external borders. The recession underscored the value of such flexibility in managing volatility. We believe that the more porous, networked organisations of the future will need to organise work around critical tasks rather than molding it to constraints imposed by corporate structures.
  • 3. Collaboration at scale Across many economies, the number of people who undertake knowledge work has grown much more quickly than the number of production or transactions workers. Knowledge workers typically are paid more than others, so increasing their productivity is critical. As a result, there is broad interest in collaboration technologies that promise to improve these workers' efficiency and effectiveness. While the body of knowledge around the best use of such technologies is still developing, a number of companies have conducted experiments, as we see in the rapid growth rates of video and Web conferencing, expected to top 20 per cent annually during the next few years.
  • 4. The growing ‘Internet of Things' The adoption of RFID (radio-frequency identification) and related technologies was the basis of a trend we first recognised as "expanding the frontiers of automation." But these methods are rudimentary compared with what emerges when assets themselves become elements of an information system, with the ability to capture, compute, communicate, and collaborate around information—something that has come to be known as the "Internet of Things." Embedded with sensors, actuators, and communications capabilities, such objects will soon be able to absorb and transmit information on a massive scale and, in some cases, to adapt and react to changes in the environment automatically. These "smart" assets can make processes more efficient, give products new capabilities, and spark novel business models. Auto insurers in Europe and the United States are testing these waters with offers to install sensors in customers' vehicles. The result is new pricing models that base charges for risk on driving behavior rather than on a driver's demographic characteristics. Luxury-auto manufacturers are equipping vehicles with networked sensors that can automatically take evasive action when accidents are about to happen. In medicine, sensors embedded in or worn by patients continuously report changes in health conditions to physicians, who can adjust treatments when necessary. Sensors in manufacturing lines for products as diverse as computer chips and pulp and paper take detailed readings on process conditions and automatically make adjustments to reduce waste, downtime, and costly human interventions.
  • 5. Experimentation and big data Could the enterprise become a full-time laboratory? What if you could analyse every transaction, capture insights from every customer interaction, and didn't have to wait for months to get data from the field? What if…? Data are flooding in at rates never seen before—doubling every 18 months—as a result of greater access to customer data from public, proprietary, and purchased sources, as well as new information gathered from Web communities and newly deployed smart assets. These trends are broadly known as "big data." Technology for capturing and analysing information is widely available at ever-lower price points. But many companies are taking data use to new levels, using IT to support rigorous, constant business experimentation that guides decisions and to test new products, business models, and innovations in customer experience. In some cases, the new approaches help companies make decisions in real time. This trend has the potential to drive a radical transformation in research, innovation, and marketing.
  • Using experimentation and big data as essential components of management decision making requires new capabilities, as well as organisational and cultural change. Most companies are far from accessing all the available data. Some haven't even mastered the technologies needed to capture and analyse the valuable information they can access. More commonly, they don't have the right talent and processes to design experiments and extract business value from big data, which require changes in the way many executives now make decisions: trusting instincts and experience over experimentation and rigorous analysis. To get managers at all echelons to accept the value of experimentation, senior leaders must buy into a "test and learn" mind-set and then serve as role models for their teams.
  • 6. Wiring for a sustainable world Even as regulatory frameworks continue to evolve, environmental stewardship and sustainability clearly are C-level agenda topics. What's more, sustainability is fast becoming an important corporate-performance metric—one that stakeholders, outside influencers, and even financial markets have begun to track. Information technology plays a dual role in this debate: it is both a significant source of environmental emissions and a key enabler of many strategies to mitigate environmental damage. At present, information technology's share of the world's environmental footprint is growing because of the ever-increasing demand for IT capacity and services. Electricity produced to power the world's data centers generates greenhouse gases on the scale of countries such as Argentina or the Netherlands, and these emissions could increase fourfold by 2020. McKinsey research has shown, however, that the use of IT in areas such as smart power grids, efficient buildings, and better logistics planning could eliminate five times the carbon emissions that the IT industry produces.
  • 7. Imagining anything as a service Technology now enables companies to monitor, measure, customise, and bill for asset use at a much more fine-grained level than ever before. Asset owners can therefore create services around what have traditionally been sold as products. Business-to-business (B2B) customers like these service offerings because they allow companies to purchase units of a service and to account for them as a variable cost rather than undertake large capital investments. Consumers also like this "paying only for what you use" model, which helps them avoid large expenditures, as well as the hassles of buying and maintaining a product.
  • In the IT industry, the growth of "cloud computing" (accessing computer resources provided through networks rather than running software or storing data on a local computer) exemplifies this shift. Consumer acceptance of Web-based cloud services for everything from e-mail to video is of course becoming universal, and companies are following suit. Software as a service (SaaS), which enables organisations to access services such as customer relationship management, is growing at a 17 per cent annual rate. The biotechnology company Genentech, for example, uses Google Apps for e-mail and to create documents and spreadsheets, bypassing capital investments in servers and software licenses. This development has created a wave of computing capabilities delivered as a service, including infrastructure, platform, applications, and content. And vendors are competing, with innovation and new business models, to match the needs of different customers.
  • 8. The age of the multisided business model Multisided business models create value through interactions among multiple players rather than traditional one-on-one transactions or information exchanges. In the media industry, advertising is a classic example of how these models work. Newspapers, magazines, and television stations offer content to their audiences while generating a significant portion of their revenues from third parties: advertisers. Other revenue, often through subscriptions, comes directly from consumers. More recently, this advertising-supported model has proliferated on the Internet, underwriting Web content sites, as well as services such as search and e-mail (see trend number seven, "Imagining anything as a service," earlier in this article). It is now spreading to new markets, such as enterprise software: Spiceworks offers IT-management applications to 950,000 users at no cost, while it collects advertising from B2B companies that want access to IT professionals.
  • 9. Innovating from the bottom of the pyramid The adoption of technology is a global phenomenon, and the intensity of its usage is particularly impressive in emerging markets. Our research has shown that disruptive business models arise when technology combines with extreme market conditions, such as customer demand for very low price points, poor infrastructure, hard-to-access suppliers, and low cost curves for talent. With an economic recovery beginning to take hold in some parts of the world, high rates of growth have resumed in many developing nations, and we're seeing companies built around the new models emerging as global players. Many multinationals, meanwhile, are only starting to think about developing markets as wellsprings of technology-enabled innovation rather than as traditional manufacturing hubs.
  • 10. Producing public good on the grid The role of governments in shaping global economic policy will expand in coming years. Technology will be an important factor in this evolution by facilitating the creation of new types of public goods while helping to manage them more effectively. This last trend is broad in scope and draws upon many of the other trends described above.
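The "test and learn" experimentation of trend 5 usually reduces to standard statistics such as the two-proportion z-test: compare conversion rates between a control and a variant and ask whether the difference could plausibly be chance. A minimal sketch, with invented traffic numbers:

```python
from math import sqrt

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for a simple A/B experiment:
    did variant B convert better than variant A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se

# Invented traffic: 2.0% vs 2.6% conversion over 10,000 visitors each.
z = ab_test_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level (two-sided)
```

This is the kind of rigorous analysis the trend contrasts with "trusting instincts and experience"; real experimentation platforms add multiple-testing corrections and sequential-stopping rules on top.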
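The "paying only for what you use" model of trend 7 is straightforward to express as a metered bill: usage is measured at a fine grain and charged as a variable cost. A hedged sketch; the hourly rate and free tier are invented for illustration, not any vendor's actual pricing:

```python
def metered_bill(usage_hours, rate_per_hour=0.12, free_tier_hours=10):
    """Pay-only-for-what-you-use billing: a flat hourly rate beyond a free
    tier, in place of a large up-front license purchase. The rate and tier
    are illustrative assumptions."""
    billable_hours = max(usage_hours - free_tier_hours, 0)
    return round(billable_hours * rate_per_hour, 2)

print(metered_bill(usage_hours=135))  # 125 billable hours at $0.12/hour
```

The appeal to B2B customers described above falls out directly: the bill scales with consumption, so there is no capital investment to depreciate.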

Android software piracy rampant despite Google's efforts to curb - Computerworld

  • Some have argued that piracy is rampant in those countries where the online Android Market is not yet available. But a recent KeyesLabs research project suggests that may not be true. KeyesLabs created a rough methodology to track total downloads of its apps, determine which ones were pirated, and the location of the end users. The results were posted in August, along with a “heat map” showing pirate activity. 
  • In July 2010, Google announced the Google Licensing Service, available via Android Market. Applications can include the new License Verification Library (LVL). “At run time, with the inclusion of a set of libraries provided by us, your application can query the Android Market licensing server to determine the license status of your users,” according to a blog post by Android engineer Eric Chu. “It returns information on whether your users are authorized to use the app based on stored sales records.”
  • Justin Case, at the Android Police Web site, dissected the LVL. “A minor patch to an application employing this official, Google-recommended protection system will render it completely worthless,” he concluded.
  • In response, Google has promised continued improvements and outlined a multipronged strategy around the new licensing service to make piracy much harder. “A determined attacker who’s willing to disassemble and reassemble code can eventually hack around the service,” acknowledged Android engineer Trevor Johns in a recent blog post.  But developers can make their work much harder by combining a cluster of techniques, he counsels: obfuscating code, modifying the licensing library to protect against common cracking techniques, designing the app to be tamper-resistant, and offloading license validation to a trusted server.
  • Gareau isn’t quite as convinced of the benefits of code obfuscation, though he does make use of it. He’s taken several other steps to protect his software work. One is providing a free trial version, which allows only a limited amount of data but is otherwise fully-featured. The idea: Let customers prove that the app will do everything they want, and they may be more willing to pay for it. He also provides a way to detect whether the app has been tampered with, for example, by removing the licensing checks. If yes, the app can be structured to stop working or behave erratically.
  • Android software piracy rampant despite Google's efforts to curb
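Google's actual License Verification Library code is not shown in the article, but the layered approach it recommends (tie the license to a tamper-evident checksum of the app, and offload validation to a trusted server) can be sketched generically. This is an illustrative HMAC scheme, not the real Android Licensing Service; in a real deployment the verification would run on the server, per the "offloading license validation" advice, rather than shipping the signing key with the client as this single-process demo does.

```python
import hashlib
import hmac

SECRET = b"demo-signing-key"  # illustrative; a real service keeps this server-side only

def sign_license(user_id: str, app_checksum: str) -> str:
    """'Server' side: issue an HMAC over the user and the app's code
    checksum, so a patched binary (new checksum) invalidates the token."""
    msg = f"{user_id}:{app_checksum}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_license(user_id: str, app_code: bytes, token: str) -> bool:
    """'Client' side: recompute the shipped code's checksum and check the
    issued token; tampering with app_code breaks verification."""
    checksum = hashlib.sha256(app_code).hexdigest()
    return hmac.compare_digest(sign_license(user_id, checksum), token)

app_code = b"...compiled app bytes..."
token = sign_license("user42", hashlib.sha256(app_code).hexdigest())
print(verify_license("user42", app_code, token))             # genuine copy
print(verify_license("user42", app_code + b"patch", token))  # tampered copy
```

As the article's sources concede, a determined attacker can still patch out the check itself; this is friction, not a guarantee.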

Android software piracy rampant despite Google's efforts to curb - Computerworld

  • A lot of Android applications are being pirated. The openness of the platform has made it easy for people to steal applications without paying for them.
  • The growing popularity of the OS with enterprise users and developers is creating greater urgency, as pirated code robs developers of revenue and the incentive to remain committed to Android. (See Android Set to Rule Over Apple and RIM Operating Systems.)
  • Network World's Android Angle blogger, Mark Murphy, bluntly noted a year ago that “Right now, it is very straightforward — if you publish on Android Market, your application will be made available for free download outside of the Market.” He added, “This is part and parcel of having an open environment like Android.” The then-current Android Market copy protection mechanisms “have been demonstrated to be ineffective.”
  • What’s especially galling to professional developers is watching sales plunge as piracy rates soar. “The current issue we face with Android is rampant piracy, and we’re working to provide hacking counter measures, a difficult task,” says Jean Gareau, founder of VidaOne, an Austin, Texas, software company that specializes in health and fitness applications for a variety of operating systems.
  • Android software piracy rampant despite Google's efforts to curb

Anonymous speaks: the inside story of the HBGary hack

  • It has been an embarrassing week for security firm HBGary and its HBGary Federal offshoot. HBGary Federal CEO Aaron Barr thought he had unmasked the hacker hordes of Anonymous and was preparing to name and shame those responsible for co-ordinating the group's actions, including the denial-of-service attacks that hit MasterCard, Visa, and other perceived enemies of WikiLeaks late last year.
  • When Barr told one of those he believed to be an Anonymous ringleader about his forthcoming exposé, the Anonymous response was swift and humiliating. HBGary's servers were broken into, its e-mails pillaged and published to the world, its data destroyed, and its website defaced. As an added bonus, a second site owned and operated by Greg Hoglund, owner of HBGary, was taken offline and the user registration database published.
  • HBGary and HBGary Federal position themselves as experts in computer security. The companies offer both software and services to both the public and private sectors. On the software side, HBGary has a range of computer forensics and malware analysis tools to enable the detection, isolation, and analysis of worms, viruses, and trojans. On the services side, it offers expertise in implementing intrusion detection systems and secure networking, and performs vulnerability assessment and penetration testing of systems and software. A variety of three letter agencies, including the NSA, appeared to be in regular contact with the HBGary companies, as did Interpol, and HBGary also worked with well-known security firm McAfee. At one time, even Apple expressed an interest in the company's products or services.
  • One might think that such an esteemed organization would prove an insurmountable challenge for a bunch of disaffected kids to hack. World-renowned, government-recognized experts against Anonymous? HBGary should be able to take their efforts in stride. Unfortunately for HBGary, neither the characterization of Anonymous nor the assumption of competence on the security company's part is accurate, as the story of how HBGary was hacked will make clear. Anonymous is a diverse bunch: though they tend to be younger rather than older, their age group spans decades. Some may still be in school, but many others are gainfully employed office workers, software developers, or IT support technicians, among other things. With that diversity in age and experience comes a diversity of expertise and ability.

Eben Moglen Is Reshaping Internet With a Freedom Box - NYTimes.com

  • While Secretary of State Hillary Rodham Clinton spoke in Washington about the Internet and human liberty, a Columbia law professor in Manhattan, Eben Moglen, was putting together a shopping list to rebuild the Internet — this time, without governments and big companies able to watch every twitch of our fingers.
  • The list begins with “cheap, small, low-power plug servers,” Mr. Moglen said. “A small device the size of a cellphone charger, running on a low-power chip. You plug it into the wall and forget about it.”
  • Almost anyone could have one of these tiny servers, which are now produced for limited purposes but could be adapted to a full range of Internet applications, he said. “They will get very cheap, very quick,” Mr. Moglen said. “They’re $99; they will go to $69. Once everyone is getting them, they will cost $29.”
  • The missing ingredients are software packages, which are available at no cost but have to be made easy to use. “You would have a whole system with privacy and security built in for the civil world we are living in,” he said. “It stores everything you care about.” Put free software into the little plug server in the wall, and you would have a Freedom Box that would decentralize information and power, Mr. Moglen said. This month, he created the Freedom Box Foundation to organize the software.
  • In the first days of the personal computer era, many scoffed at the idea that free software could have an important place in the modern world. Today, it is the digital genome for millions of phones, printers, cameras, MP3 players, televisions, the Pentagon, the New York Stock Exchange and the computers that underpin Google’s empire.
  • Social networking has changed the balance of political power, he said, “but everything we know about technology tells us that the current forms of social network communication, despite their enormous current value for politics, are also intensely dangerous to use. They are too centralized; they are too vulnerable to state retaliation and control.”
  • Investors were said to have put a value of about $50 billion on Facebook, the social network founded by Mark Zuckerberg. If revolutions for freedom rest on the shoulders of Facebook, Mr. Moglen said, the revolutionaries will have to count on individuals who have huge stakes in keeping the powerful happy.
  • “It is not hard, when everybody is just in one big database controlled by Mr. Zuckerberg, to decapitate a revolution by sending an order to Mr. Zuckerberg that he cannot afford to refuse,” Mr. Moglen said. By contrast, with tens of thousands of individual encrypted servers, there would be no one place where a repressive government could find out who was publishing or reading “subversive” material.

FreedomBox Foundation

  • Freedom Box is the name we give to a personal server running a free software operating system, with free applications designed to create and preserve personal privacy. Freedom Box software is particularly tailored to run in "plug servers," which are compact computers that are no larger than power adapters for electronic appliances. Located in people's homes or offices, such inexpensive servers can provide privacy in normal life, and safe communications for people seeking to preserve their freedom in oppressive regimes.
  • Because social networking and digital communications technologies are now critical to people fighting to make freedom in their societies or simply trying to preserve their privacy where the Web and other parts of the Net are intensively surveilled by profit-seekers and government agencies. Because smartphones, mobile tablets, and other common forms of consumer electronics are being built as "platforms" to control their users and monitor their activity. Freedom Box exists to counter these unfree "platform" technologies that threaten political freedom. Freedom Box exists to provide people with privacy-respecting technology alternatives in normal times, and to offer ways to collaborate safely and securely with others in building social networks of protest, demonstration, and mobilization for political change in the not-so-normal times. Freedom Box software is built to run on hardware that already exists, and will soon become much more widely available and much less expensive. "Plug servers" and other compact devices are going to become ubiquitous in the next few years, serving as "media centers," "communications centers," "wireless routers," and many other familiar and not-so-familiar roles in office and home. Freedom Box software images will turn all sorts of such devices into privacy appliances. Taken together, these appliances will afford people around the world options for communicating, publishing, and collaborating that will resist state intervention or disruption. People owning these appliances will be able to restore anonymity in the Net, despite efforts of despotic regimes to keep track of who reads what and who communicates with whom. For a list of specific Freedom Box capabilities, check out our Goals page.

Android phones record user-locations according to research | Technology | The Guardian

  • The discovery that Android devices - which are quickly becoming the best-selling products in the smartphone space - also collect location data indicates how essential such information has become to their effective operation. "Location services", which can help place a user on a map, are increasingly seen as important for providing enhanced services including advertising - which forms the basis of Google's business.
  • Smartphones running Google's Android software collect data about the user's movements in almost exactly the same way as the iPhone, according to an examination of files they contain. The discovery, made by a Swedish researcher, comes as the Democratic senator Al Franken has written to Apple's chief executive Steve Jobs demanding to know why iPhones keep a secret file recording the location of their users as they move around, as the Guardian revealed this week.
  • Magnus Eriksson, a Swedish programmer, has shown that Android phones – now the bestselling smartphones – do the same, though for a shorter period. According to the files he discovered, Android devices keep a record of the locations and unique IDs of the last 50 mobile masts that they have communicated with, and the last 200 Wi-Fi networks that they have "seen". These are overwritten, oldest first, when the relevant list is full. It is not yet known whether the lists are sent to Google. That differs from Apple, where the data is stored for up to a year.
  • In addition, the file is not easily accessible to users: it requires some computer skills to extract the data. By contrast, the Apple file is easily extracted directly from the computer or phone.
  • Senator Franken has asked Jobs to explain the purpose and extent of the iPhone's tracking. "The existence of this information - stored in an unencrypted format - raises serious privacy concerns," Franken writes in his letter to Jobs. "Anyone who gains access to this single file could likely determine the location of a user's home, the businesses he frequents, the doctors he visits, the schools his children attend, and the trips he has taken - over the past months or even a year."
  • Franken points out that a stolen or lost iPhone or iPad could be used to map out its owner's precise movements "for months at a time" and that it is not limited by age, meaning that it could track the movements of users who are under 13.
  • A security researcher, Alex Levinson, says that he discovered the file inside the iPhone last year, and that it has been used in the US by the police in a number of cases. He says that its purpose is simply to help the phone determine its location, and that he has seen no evidence that it is sent back to Apple. However, documents lodged by Apple with the US Congress suggest that it does use the data if the user agrees to give the company "diagnostic information" from their iPhone or iPad.
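The cache behavior the researcher describes (keep only the last 50 masts or 200 Wi-Fi networks, overwriting the oldest entry first) is a bounded, order-preserving buffer. A sketch of that eviction policy; the `RecentTowers` class and the limit of 3 are invented for brevity, and this is not Android's actual implementation:

```python
from collections import OrderedDict

class RecentTowers:
    """Keep only the last `limit` cell towers seen, overwriting the
    oldest first, as the article describes for Android's location files."""
    def __init__(self, limit=50):
        self.limit = limit
        self.towers = OrderedDict()  # tower_id -> last-seen timestamp

    def record(self, tower_id, timestamp):
        if tower_id in self.towers:
            self.towers.move_to_end(tower_id)  # refresh an existing entry
        self.towers[tower_id] = timestamp
        if len(self.towers) > self.limit:
            self.towers.popitem(last=False)    # evict the oldest entry

cache = RecentTowers(limit=3)
for ts, tower in enumerate(["A", "B", "C", "D"]):
    cache.record(tower, ts)
print(list(cache.towers))  # "A", the oldest entry, has been overwritten
```

The privacy implication follows from the bound: unlike Apple's year-long file, the exposure window is limited to the most recent entries.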

Daily Kos: UPDATED: The HB Gary Email That Should Concern Us All

  • HB Gary people are talking about creating "personas", what we would call sockpuppets. This is not new. PR firms have been using fake "people" to promote products and other things for a while now, both online and even in bars and coffee houses.
  • But for a defense contractor with ties to the federal government, Hunton & Williams, DOD, NSA, and the CIA -  whose enemies are labor unions, progressive organizations,  journalists, and progressive bloggers,  a persona apparently goes far beyond creating a mere sockpuppet. According to an embedded MS Word document found in one of the HB Gary emails, it involves creating an army of sockpuppets, with sophisticated "persona management" software that allows a small team of only a few people to appear to be many, while keeping the personas from accidentally cross-contaminating each other. Then, to top it off, the team can actually automate some functions so one persona can appear to be an entire Brooks Brothers riot online.
  • Persona management entails not just the deconfliction of persona artifacts such as names, email addresses, landing pages, and associated content.  It also requires providing the human actors technology that takes the decision process out of the loop when using a specific persona.  For this purpose we custom developed either virtual machines or thumb drives for each persona.  This allowed the human actor to open a virtual machine or thumb drive with an associated persona and have all the appropriate email accounts, associations, web pages, social media accounts, etc. pre-established and configured with visual cues to remind the actor which persona he/she is using so as not to accidentally cross-contaminate personas during use.
  • All of this is for the purposes of infiltration, data mining, and (here's the one that really worries me) ganging up on bloggers, commenters and otherwise "real" people to smear enemies and distort the truth.
  • From an email sent by the CEO of HB Gary's Federal subsidiary to several of his colleagues, to present to clients: To build this capability we will create a set of personas on twitter, blogs, forums, buzz, and myspace under created names that fit the profile (satellitejockey, hack3rman, etc). These accounts are maintained and updated automatically through RSS feeds, retweets, and linking together social media commenting between platforms. With a pool of these accounts to choose from, once you have a real name persona you create a Facebook and LinkedIn account using the given name, lock those accounts down and link these accounts to a selected # of previously created social media accounts, automatically pre-aging the real accounts.
  • One of the team spells out how automation can work so one person can be many personas: Using the assigned social media accounts we can automate the posting of content that is relevant to the persona. In this case there are specific social media strategy website RSS feeds we can subscribe to and then repost content on twitter with the appropriate hashtags. In fact, using hashtags and gaming some location-based check-in services, we can make it appear as if a persona was actually at a conference and introduce himself/herself to key individuals as part of the exercise, as one example. There are a variety of social media tricks we can use to add a level of realness to all fictitious personas.
  • It goes far beyond the mere ability for a government stooge, corporation or PR firm to hire people to post on sites like this one. They are talking about creating the illusion of consensus. And consensus is a powerful persuader. What has more effect, one guy saying BP is not at fault? Or 20 people saying it? For the weak-minded, the number can make all the difference.
  • UPDATE: From another email, I found a government solicitation for this "Persona Management Software". This confirms that, in fact, the US government is attempting to use this kind of technology. But it appears from the solicitation it is contracted for use in foreign theaters like Afghanistan and Iraq. I can't imagine why this is posted on an open site. And when this was discovered by a couple of HB Gary staffers, they weren't too happy about it either:

Report: Piracy a "global pricing problem" with only one solution - 0 views

  • Over the last three years, 35 researchers contributed to the Media Piracy Project, released last week by the Social Science Research Council. Their mission was to examine media piracy in emerging economies, which account for most of the world's population, and to find out just how and why piracy operates in places like Russia, Mexico, and India.
  • Their conclusion is not that citizens of such piratical societies are somehow morally deficient or opposed to paying for content. Instead, they write that “high prices for media goods, low incomes, and cheap digital technologies are the main ingredients of global media piracy. If piracy is ubiquitous in most parts of the world, it is because these conditions are ubiquitous.”
  • When legitimate CDs, DVDs, and computer software are five to ten times higher (relative to local incomes) than they are in the US and Europe, simply ratcheting up copyright enforcement won't do enough to fix the problem. In the view of the report's authors, the only real solution is the creation of local companies that “actively compete on price and services for local customers” as they sell movies, music, and more.
  • ...7 more annotations...
  • Some markets have local firms that compete on price to offer legitimate content (think the US, which has companies like Hulu, Netflix, Apple, and Microsoft that compete to offer legal video content). But the authors conclude that, in most of the world, legitimate copyrighted goods are only distributed by huge multinational corporations whose dominant goals are not to service a large part of local markets but to “protect the pricing structure in the high-income countries that generate most of their profits.”
  • This might increase profits globally, but it has led to disaster in many developing economies, where piracy may run north of 90 percent. Given access to cheap digital tools, but charged terrific amounts of money for legitimate versions of content, users choose piracy.
  • In Russia, for instance, researchers noted that legal versions of the film The Dark Knight went for $15. That price, akin to what a US buyer would pay, might sound reasonable until you realize that Russians make less money in a year than US workers. As a percentage of their wages, that $15 price is actually equivalent to a US consumer dropping $75 on the film. Pirate versions can be had for one-third the price.
  • Simple crackdowns on pirate behavior won't work in the absence of pricing and other reforms, say the report's authors (who also note that even "developed" economies routinely pirate TV shows and movies that are not made legally available to them for days, weeks, or months after they originally appear elsewhere).
  • The "strong moralization of the debate” makes it difficult to discuss issues beyond enforcement, however, and the authors slam the content companies for lacking any credible "endgame" to their constant requests for more civil and police powers in the War on Piracy.
  • piracy is a “signal of unmet consumer demand.”
  • Our studies raise concerns that it may be a long time before such accommodations to reality reach the international policy arena. Hardline enforcement positions may be futile at stemming the tide of piracy, but the United States bears few of the costs of such efforts, and US companies reap most of the modest benefits. This is a recipe for continued US pressure on developing countries, very possibly long after media business models in the United States and other high-income countries have changed.
  •  
    A major new report from a consortium of academic researchers concludes that media piracy can't be stopped through "three strikes" Internet disconnections, Web censorship, more police powers, higher statutory damages, or tougher criminal penalties. That's because the piracy of movies, music, video games, and software is "better described as a global pricing problem." And the only way to solve it is by changing the price.
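The report's income-adjusted comparison is simple arithmetic: scale the local price by the ratio of incomes. The wage figures below are rough assumptions chosen only to match the roughly 5:1 ratio that the $15 → $75 comparison implies:

```python
# Rough annual wage figures (assumptions; the article implies a ~5:1 ratio).
us_income = 45_000
ru_income = 9_000

price_ru = 15.0  # legal price of The Dark Knight in Russia, per the report

# The same price expressed as an equivalent share of a US income:
us_equivalent = price_ru * (us_income / ru_income)
print(f"${us_equivalent:.0f}")  # → $75
```

Against that income-adjusted price, a pirate copy at one-third the sticker price is the economically unsurprising choice.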

Red-Wine Researcher Charged With 'Photoshop' Fraud - 0 views

  •  
    A University of Connecticut researcher known for touting the health benefits of red wine is guilty of 145 counts of fabricating and falsifying data with image-editing software, according to a 3-year university investigation made public Wednesday. The researcher, Dipak K. Das, PhD, is a director of the university's Cardiovascular Research Center (CRC) and a professor in the Department of Surgery. The university stated in a press release that it has frozen all externally funded research in Dr. Das's lab and turned down $890,000 in federal research grants awarded to him. The process to dismiss Dr. Das from the university is already underway, the university added.

Cancer resembles life 1 billion years ago, say astrobiologists - microbiology, genomics... - 0 views

  • astrobiologists, working with oncologists in the US, have suggested that cancer resembles ancient forms of life that flourished between 600 million and 1 billion years ago.
  • Read more about what this discovery means for cancer research.
  • The genes that controlled the behaviour of these early multicellular organisms still reside within our own cells, managed by more recent genes that keep them in check. It's when these newer controlling genes fail that the older mechanisms take over, and the cell reverts to its earlier behaviours and grows out of control.
  • ...11 more annotations...
  • The new theory, published in the journal Physical Biology, has been put forward by two leading figures in the world of cosmology and astrobiology: Paul Davies, director of the Beyond Center for Fundamental Concepts in Science, Arizona State University; and Charles Lineweaver, from the Australian National University.
  • According to Lineweaver, this suggests that cancer is an atavism, or an evolutionary throwback.
  • In the paper, they suggest that a close look at cancer shows similarities with early forms of multicellular life.
  • “Unlike bacteria and viruses, cancer has not developed the capacity to evolve into new forms. In fact, cancer is better understood as the reversion of cells to the way they behaved a little over one billion years ago, when humans were nothing more than loose-knit colonies of only partially differentiated cells. “We think that the tumours that develop in cancer patients today take the same form as these simple cellular structures did more than a billion years ago,” he said.
  • One piece of evidence to support this theory is that cancers appear in virtually all metazoans, with the notable exception of the bizarre naked mole rat."This quasi-ubiquity suggests that the mechanisms of cancer are deep-rooted in evolutionary history, a conjecture that receives support from both paleontology and genetics," they write.
  • the genes that controlled this early multi-cellular form of life are like a computer operating system's 'safe mode', and when there are failures or mutations in the more recent genes that manage the way cells specialise and interact to form the complex life of today, then the earlier level of programming takes over.
  • Their notion is in contrast to a prevailing theory that cancer cells are 'rogue' cells that evolve rapidly within the body, overcoming the normal slew of cellular defences.
  • However, Davies and Lineweaver point out that cancer cells are highly cooperative with each other, if competing with the host's cells. This suggests a pre-existing complexity that is reminiscent of early multicellular life.
  • cancers' manifold survival mechanisms are predictable, and unlikely to emerge spontaneously through evolution within each individual in such a consistent way.
  • The good news is that this means combating cancer is not necessarily as complex as if the cancers were rogue cells evolving new and novel defence mechanisms within the body. Instead, because cancers fall back on the same evolved mechanisms that were used by early life, we can expect them to remain predictable, thus if they're susceptible to treatment, it's unlikely they'll evolve new ways to get around it.
  • “If the atavism hypothesis is correct, there are new reasons for optimism," they write.

Fake tweets by 'socialbot' fool hundreds of followers - tech - 23 March 2011 - New Scie... - 0 views

  • Socialbots 2011, a competition designed to test whether bots can be used to alter the structure of a social network. Each team had a Twitter account controlled by a socialbot. Like regular human users, the bot could follow other Twitter users and send messages. Bots were rewarded for the number of followers they amassed and the number of responses their tweets generated.
  • The socialbots looked at tweets sent by members of a network of Twitter users who shared a particular interest, and then generated a suitable response. In one exchange a bot asks a human user which character they would like to bring back to life from their favourite book. When the human replies "Jesus" it responds: "Honestly? no fracking way. ahahahhaa."
  • When the experiment ended last month, a before-and-after comparison of connections within the target community showed that the bots were "able to heavily shape and distort the structure of the network", according to its organiser, Tim Hwang, founder of the startup company Robot, Robot and Hwang, based in San Francisco. Some members of the community who had not previously been directly connected were now linked, for example. Hwang has not revealed the identities of the entrants, or of the members of the 500-person Twitter network that the bots infiltrated.
  • ...4 more annotations...
  • The success suggests that socialbots could manipulate social networks on a larger scale, for good or ill. "We could use these bots in the future to encourage social participation or support for humanitarian causes," Hwang claims. He also acknowledges that there is a flip side, if bots were also used to inhibit activism.
  • The military may already be onto the idea. Officials at US Central Command (Centcom), which oversees military activities in the Middle East and central Asia, issued a request last June for an "online persona management service". The details of the request suggest that the military want to create and control 50 fictitious online identities who appear to be real people from Afghanistan and Iraq.
  • It is not clear, however, if any of the management of the fake identities would be delegated to software. A Centcom spokesperson told New Scientist that the contract supports "classified blogging activities on foreign language websites to enable Centcom to counter violent extremist and enemy propaganda outside the US".
  • Hwang has ambitious plans for the next stage of the socialbot project: "We're going to survey and identify two sites of 5000-person unconnected Twitter communities, and over a six-to-12-month period use waves of bots to thread and rivet those clusters together into a directly connected social bridge between those two formerly independent groups," he wrote in a blog post on 3 March. "The bot-driven social 'scaffolding' will then be dropped away, completing the bridge, with swarms of bots being launched to maintain the superstructure as needed," he adds.
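The "social bridge" Hwang describes can be pictured as a graph problem: two clusters with no edges between them become connected the moment a bot follows one member of each. A toy sketch (member names and edges are invented):

```python
from collections import defaultdict, deque

def reachable(edges, start, goal):
    """Breadth-first search: is there any path from start to goal?"""
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)
        graph[v].add(u)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

edges = {("a1", "a2"), ("b1", "b2")}        # two separate communities
print(reachable(edges, "a1", "b1"))          # → False
edges |= {("bot", "a1"), ("bot", "b1")}      # one bot follows a member of each
print(reachable(edges, "a1", "b1"))          # → True
```

At the scale Hwang proposes, waves of such bots would create many redundant paths, so the bridge survives even after the "scaffolding" accounts are withdrawn.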

Open science: a future shaped by shared experience | Education | The Observer - 0 views

  • one day he took one of these – finding a mathematical proof about the properties of multidimensional objects – and put his thoughts on his blog. How would other people go about solving this conundrum? Would somebody else have any useful insights? Would mathematicians, notoriously competitive, be prepared to collaborate? "It was an experiment," he admits. "I thought it would be interesting to try." He called it the Polymath Project and it rapidly took on a life of its own. Within days, readers, including high-ranking academics, had chipped in vital pieces of information or new ideas. In just a few weeks, the number of contributors had reached more than 40 and a result was on the horizon. Since then, the joint effort has led to several papers published in journals under the collective pseudonym DHJ Polymath. It was an astonishing and unexpected result.
  • "If you set out to solve a problem, there's no guarantee you will succeed," says Gowers. "But different people have different aptitudes and they know different tricks… it turned out their combined efforts can be much quicker."
  • There are many interpretations of what open science means, with different motivations across different disciplines. Some are driven by the backlash against corporate-funded science, with its profit-driven research agenda. Others are internet radicals who take the "information wants to be free" slogan literally. Others want to make important discoveries more likely to happen. But for all their differences, the ambition remains roughly the same: to try and revolutionise the way research is performed by unlocking it and making it more public.
  • ...10 more annotations...
  • Jackson is a young bioscientist who, like many others, has discovered that the technologies used in genetics and molecular biology, once the preserve of only the most well-funded labs, are now cheap enough to allow experimental work to take place in their garages. For many, this means that they can conduct genetic experiments in a new way, adopting the so-called "hacker ethic" – the desire to tinker, deconstruct, rebuild.
  • The rise of this group is entertainingly documented in a new book by science writer Marcus Wohlsen, Biopunk (Current £18.99), which describes the parallels between today's generation of biological innovators and the rise of computer software pioneers of the 1980s and 1990s. Indeed, Bill Gates has said that if he were a teenager today, he would be working on biotechnology, not computer software.
  • open scientists suggest that it doesn't have to be that way. Their arguments are propelled by a number of different factors that are making transparency more viable than ever. The first and most powerful change has been the use of the web to connect people and collect information. The internet, now an indelible part of our lives, allows like-minded individuals to seek one another out and share vast amounts of raw data. Researchers can lay claim to an idea not by publishing first in a journal (a process that can take many months) but by sharing their work online in an instant. And while the rapidly decreasing cost of previously expensive technical procedures has opened up new directions for research, there is also increasing pressure for researchers to cut costs and deliver results. The economic crisis left many budgets in tatters and governments around the world are cutting back on investment in science as they try to balance the books. Open science can, sometimes, make the process faster and cheaper, showing what one advocate, Cameron Neylon, calls "an obligation and responsibility to the public purse".
  • "The litmus test of openness is whether you can have access to the data," says Dr Rufus Pollock, a co-founder of the Open Knowledge Foundation, a group that promotes broader access to information and data. "If you have access to the data, then anyone can get it, use it, reuse it and redistribute it… we've always built on the work of others, stood on the shoulders of giants and learned from those who have gone before."
  • moves are afoot to disrupt the closed world of academic journals and make high-level teaching materials available to the public. The Public Library of Science, based in San Francisco, is working to make journals more freely accessible
  • it's more than just politics at stake – it's also a fundamental right to share knowledge, rather than hide it. The best example of open science in action, he suggests, is the Human Genome Project, which successfully mapped our DNA and then made the data public. In doing so, it outflanked J Craig Venter's proprietary attempt to patent the human genome, opening up the very essence of human life for science, rather than handing our biological information over to corporate interests.
  • the rise of open science does not please everyone. Critics have argued that while it benefits those at either end of the scientific chain – the well-established at the top of the academic tree or the outsiders who have nothing to lose – it hurts those in the middle. Most professional scientists rely on the current system for funding and reputation. Others suggest it is throwing out some of the most important elements of science and making deep, long-term research more difficult.
  • Open science proponents say that they do not want to make the current system a thing of the past, but that it shouldn't be seen as immutable either. In fact, they say, the way most people conceive of science – as a highly specialised academic discipline conducted by white-coated professionals in universities or commercial laboratories – is a very modern construction. It is only over the last century that scientific disciplines became industrialised and compartmentalised.
  • open scientists say they don't want to throw scientists to the wolves: they just want to help answer questions that, in many cases, are seen as insurmountable.
  • "Some people, very straightforwardly, said that they didn't like the idea because it undermined the concept of the romantic, lone genius." Even the most dedicated open scientists understand that appeal. "I do plan to keep going at them," he says of collaborative projects. "But I haven't given up on solitary thinking about problems entirely."

Facial Recognition Software Singles Out Innocent Man | The Utopianist - Think Bigger - 0 views

  • Gass was at home when he got a letter from the Massachusetts Registry of Motor Vehicles saying his license had been revoked. Why? The Boston Globe explains: An antiterrorism computerized facial recognition system that scans a database of millions of state driver’s license images had picked his as a possible fraud. It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.
  •  
    While a boon to police departments looking to save time and money fighting identity fraud, it's frightening to think that people are having their lives seriously disrupted thanks to computer errors. If you are, say, a truck driver, something like this could cause you weeks of lost pay, something many Americans just can't afford to do. And what if this technology expands beyond just rooting out identity fraud? What if you were slammed against a car hood as police falsely identified you as a criminal? The fact that Gass didn't even have a chance to fight the computer's findings before his license was suspended is especially disturbing. What would you do if this happened to you?
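Part of what makes misidentifications like Gass's predictable is the base-rate problem: even a tiny per-comparison false-match rate, multiplied across millions of license photos, yields hundreds of innocent "hits". Both figures below are invented for illustration:

```python
database_size = 4_500_000   # order of magnitude of a state license database (assumption)
false_match_rate = 1e-4     # 0.01% chance of flagging a look-alike (assumption)

expected_false_matches = database_size * false_match_rate
print(round(expected_false_matches))  # → 450
```

Any system that acts automatically on such matches, as the Registry did, will therefore routinely disrupt innocent people unless a human review step comes first.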

Measuring the Unmeasurable (Internet) and Why It Matters « Gurstein's Communi... - 0 views

  • it appears that there is a quite significant hole in the National Accounting (and thus the GDP statistics) around Internet related activities since most of this accounting is concerned with measuring the production and distribution of tangible products and the associated services. For the most part the available numbers don’t include many Internet (or “social capital” e.g. in health and education) related activities as they are linked to intangible outputs. The significance of not including social capital components in the GDP has been widely discussed elsewhere. The significance (and potential remediation) of the absence of much of the Internet related activities was the subject of the workshop.
  • there had been a series of critiques of GDP statistics from Civil Society (CS) over the last few years—each associated with a CS “movement”—the Women's Movement and the absence of measurement of “women's (and particularly domestic) work”; the Environmental Movement and the absence of the longer term and environmental costs of the production of the goods that the GDP so blithely counts as a measure of national economic well-being; and most recently with the Sustainability Movement, and the absence of measures reflective of the longer term negative effects/costs of resource depletion and environmental degradation. What I didn't see anywhere apart from the background discussions to the OECD workshop itself were critiques reflecting issues related to the Internet or ICTs.
  • the implications of the limitations in the Internet accounting went beyond a simple technical glitch and had potentially quite profound implications from a national policy and particularly a CS and community based development perspective. The possible distortions in economic measurement arising from the absence of Internet associated numbers in the SNA (there may be some $750 BILLION a year in “value” being generated by Internet based search alone!) lead to the very real possibility that macro-economic analysis and related policy making may be operating on the basis of inadequate and even fallacious assumptions.
  • ...2 more annotations...
  • perhaps of greatest significance from the perspective of Civil Society and of communities is the overall absence of measurement and thus inclusion in the economic accounting of the value of the contributions provided to, through and on the Internet of various voluntary and not-for-profit initiatives and activities. Thus for example, the millions of hours of labour contributed to Wikipedia, or to the development of Free or Open Source software, or to providing support for public Internet access and training is not included as a net contribution or benefit to the economy (as measured through the GDP). Rather, this is measured as a negative effect since, as some would argue, those who are making this contribution could be using their time and talents in more “productive” (and “economically measurable”) activities. Thus for example, a region or country that chooses to go with free or open source software as the basis for its in-school computing is not only “not contributing to ‘economic well being’” it is “statistically” a “cost” to the economy since it is not allowing for expenditures on, for example, suites of Microsoft products.
  • there appears to have been no systematic attention paid to the relationship of the activities and growth of voluntary contributions to the Internet and the volume, range and depth of Internet activity, digital literacy and economic value being derived from the use of the Internet.

Roger Pielke Jr.'s Blog: Faith-Based Education and a Return to Shop Class - 0 views

  • In the United States, nearly a half century of research, application of new technologies and development of new methods and policies has failed to translate into improved reading abilities for the nation’s children1.
  • the reasons why progress has been so uneven point to three simple rules for anticipating when more research and development (R&D) could help to yield rapid social progress. In a world of limited resources, the trick is distinguishing problems amenable to technological fixes from those that are not. Our rules provide guidance in making this distinction . . .
  • unlike vaccines, the textbooks and software used in education do not embody the essence of what needs to be done. That is, they don’t provide the basic ‘go’ of teaching and learning. That depends on the skills of teachers and on the attributes of classrooms and students. Most importantly, the effectiveness of a vaccine is largely independent of who gives or receives it, and of the setting in which it is given.
  • ...5 more annotations...
  • The three rules for a technological fix proposed by Sarewitz and Nelson are: I. The technology must largely embody the cause–effect relationship connecting problem to solution. II. The effects of the technological fix must be assessable using relatively unambiguous or uncontroversial criteria. III. Research and development is most likely to contribute decisively to solving a social problem when it focuses on improving a standardized technical core that already exists.
  • technology in the classroom fails with respect to each of the three criteria: (a) technology is not a causal factor in learning in the sense that more technology means more learning, (b) assessment of educational outcomes is itself difficult and contested, much less disentangling various causal factors, and (c) the lack of evidence that technology leads to improved educational outcomes means that there is no such standardized technological core.
  • This conundrum calls into question one of the most significant contemporary educational movements. Advocates for giving schools a major technological upgrade — which include powerful educators, Silicon Valley titans and White House appointees — say digital devices let students learn at their own pace, teach skills needed in a modern economy and hold the attention of a generation weaned on gadgets. Some backers of this idea say standardized tests, the most widely used measure of student performance, don’t capture the breadth of skills that computers can help develop. But they also concede that for now there is no better way to gauge the educational value of expensive technology investments.
  • absent clear proof, schools are being motivated by a blind faith in technology and an overemphasis on digital skills — like using PowerPoint and multimedia tools — at the expense of math, reading and writing fundamentals. They say the technology advocates have it backward when they press to upgrade first and ask questions later.
  • [D]emand for educated labour is being reconfigured by technology, in much the same way that the demand for agricultural labour was reconfigured in the 19th century and that for factory labour in the 20th. Computers can not only perform repetitive mental tasks much faster than human beings. They can also empower amateurs to do what professionals once did: why hire a flesh-and-blood accountant to complete your tax return when Turbotax (a software package) will do the job at a fraction of the cost? And the variety of jobs that computers can do is multiplying as programmers teach them to deal with tone and linguistic ambiguity. Several economists, including Paul Krugman, have begun to argue that post-industrial societies will be characterised not by a relentless rise in demand for the educated but by a great “hollowing out”, as mid-level jobs are destroyed by smart machines and high-level job growth slows. David Autor, of the Massachusetts Institute of Technology (MIT), points out that the main effect of automation in the computer era is not that it destroys blue-collar jobs but that it destroys any job that can be reduced to a routine. Alan Blinder, of Princeton University, argues that the jobs graduates have traditionally performed are if anything more “offshorable” than low-wage ones. A plumber or lorry-driver’s job cannot be outsourced to India.
  •  
    In 2008 Dick Nelson and Dan Sarewitz had a commentary in Nature (here in PDF) that eloquently summarized why it is that we should not expect technology in the classroom to result in better educational outcomes as they suggest we should in the case of a technology like vaccines

A Data Divide? Data "Haves" and "Have Nots" and Open (Government) Data « Gurs... - 0 views

  • Researchers have extensively explored the range of social, economic, geographical and other barriers which underlie and to a considerable degree “explain” (cause) the Digital Divide.  My own contribution has been to argue that “access is not enough”, it is whether opportunities and pre-conditions are in place for the “effective use” of the technology particularly for those at the grassroots.
  • The idea of a possible parallel “Data Divide” between those who have access and the opportunity to make effective use of data and particularly “open data” and those who do not, began to occur to me.  I was attending several planning/recruitment events for the Open Data “movement” here in Vancouver and the socio-demographics and some of the underlying political assumptions seemed to be somewhat at odds with the expressed advocacy position of “data for all”.
  • Thus the “open data” which was being argued for would not likely be accessible and usable to the groups and individuals with which Community Informatics has largely been concerned – the grassroots, the poor and marginalized, indigenous people, rural people and slum dwellers in Less Developed countries. It was/is hard to see, given the explanations provided to date, how these folks could use this data in any effective way to help them in responding to the opportunities for advance and social betterment which open data advocates have been indicating as the outcome of their efforts.
  • ...5 more annotations...
  • many involved in “open data” saw their interests and activities being confined to making data ‘legally” and “technically” accessible — what happened to it after that was somebody else’s responsibility.
  • while the Digital Divide deals with, for the most part “infrastructure” issues, the Data Divide is concerned with “content” issues.
  • where a Digital Divide might exist for example, as a result of geographical or policy considerations and thus have uniform effects on all those on the wrong side of the “divide” whatever their socio-demographic situation; a Data Divide and particularly one of the most significant current components of the Open Data movement i.e. OGD, would have particularly damaging negative effects and result in particularly significant lost opportunities for the most vulnerable groups and individuals in society and globally. (I’ve discussed some examples here at length in a previous blogpost.)
  • The Data Divide would thus be the gap between those who have access to and are able to use Open (Government) Data and those who are not so enabled.
  • 1. infrastructure: being on the wrong side of the “Digital Divide” and thus lacking access to the basic infrastructure supporting the availability of OGD
    2. devices: OGD that is not universally accessible and device independent (that only runs on iPhones, for example)
    3. software: “accessible” OGD that requires specialized technical software/training to become “usable”
    4. content: OGD not designed for use by those with handicaps, non-English speakers, or those with low levels of functional literacy, for example
    5. interpretation/sense-making: OGD that is only accessible for use through a technical intermediary and/or is useful only if “interpreted” by a professional intermediary
    6. advocacy: whether the OGD is in a form and context that supports use in advocacy (or other purposes) on behalf of marginalized and other groups and individuals
    7. governance: whether the OGD process includes representation from the broad public in its overall policy development and governance (not just lawyers, techies, and public servants)

Stanford Security Lab Tracks Do Not Track - 0 views

  • What they found is that more than half the NAI member companies did not remove tracking codes after someone opted out.
  • At least eight NAI members promise to stop tracking after opting out, but nonetheless leave tracking cookies in place.
  • I take that to mean that the other 25 companies never actually said they would remove tracking cookies, it’s just that they belong to a fellowship that wishes they would. On the positive side, ten companies went beyond what their privacy policy promises (say that three times fast) and two companies were “taking overt steps to respect Do Not Track.”
  • There’s probably a small percentage of companies who will blatantly ignore any attempts to stop tracking. For the rest, it’s more likely a case of not having procedures in place. Their intentions are good, but lack of manpower and the proper tech is probably what’s keeping them from following through on those good thoughts.
  • Since they can’t go after them with big guns, the Stanford study went with public embarrassment. They’ve published a list of the websites showing which ones are compliant and which ones aren’t. If you’re working with an ad network, you might want to check it out.
  • The folks at the Stanford Security Lab are a suspicious bunch. Since they're studying how to make computers more secure, I guess it comes with the territory. Their current interest is tracking cookies and the Do Not Track opt-out process. Using "experimental software," they conducted a survey to see how many members of the Network Advertising Initiative (NAI) actually complied with the new Do Not Track initiatives.
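A minimal sketch of what such a compliance check might look like (my own illustration, not the Stanford Lab's actual tool; the tracking-name heuristic is an assumption): request a page with the Do Not Track header set, then flag any cookies the server still tries to set whose names look like tracking identifiers.

```python
import urllib.request

# Substrings that commonly hint at tracking identifiers -- a heuristic
# assumption for illustration, not the study's actual criteria.
TRACKING_HINTS = ("uid", "track", "visitor", "_ga")

def suspicious_cookie_names(cookie_names):
    """Return the cookie names that look like tracking identifiers."""
    return [n for n in cookie_names
            if any(hint in n.lower() for hint in TRACKING_HINTS)]

def check_site(url):
    """Fetch url with DNT: 1 and report cookies the server still sets."""
    req = urllib.request.Request(url, headers={"DNT": "1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        set_cookies = resp.headers.get_all("Set-Cookie") or []
    names = [c.split("=", 1)[0].strip() for c in set_cookies]
    return suspicious_cookie_names(names)
```

A real audit, like the one described above, would also need to perform the network's own opt-out flow first and compare cookies before and after; this sketch only covers the final inspection step.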

Harvard professor spots Web search bias - Business - The Boston Globe - 0 views

  • Sweeney said she has no idea why Google searches seem to single out black-sounding names. There could be myriad issues at play, some associated with the software, some with the people searching Google. For example, the more often searchers click on a particular ad, the more frequently it is displayed subsequently. “Since we don’t know the reason for it,” she said, “it’s hard to say what you need to do.”
  • But Danny Sullivan, editor of SearchEngineLand.com, an online trade publication that tracks the Internet search and advertising business, said Sweeney’s research has stirred a tempest in a teapot. “It looks like this fairly isolated thing that involves one advertiser.” He also said that the results could be caused by black Google users clicking on those ads as much as white users. “It could be that black people themselves could be causing the stuff that causes the negative copy to be selected more,” said Sullivan. “If most of the searches for black names are done by black people . . . is that racially biased?”
  • On the other hand, Sullivan said Sweeney has uncovered a problem with online searching — the casual display of information that might put someone in a bad light. Rather than focusing on potential instances of racism, he said, search services such as Google might want to put more restrictions on displaying negative information about anyone, black or white.
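The click-display feedback loop described above can be seen in a toy simulation (entirely my own sketch, not Google's actual ad-serving logic; the click-through rates are made-up numbers): if an ad's chance of being shown is proportional to its accumulated clicks, a modest difference in click-through rates compounds into a large difference in exposure.

```python
import random

def simulate_feedback(rounds=10_000, ctrs=(0.08, 0.05), seed=1):
    """Show one of two ads per round; display probability is
    proportional to clicks accumulated so far (rich-get-richer)."""
    rng = random.Random(seed)
    clicks = [1, 1]   # start equal so both ads have a chance
    shows = [0, 0]
    for _ in range(rounds):
        # Pick an ad with probability proportional to its click count.
        ad = 0 if rng.random() < clicks[0] / sum(clicks) else 1
        shows[ad] += 1
        if rng.random() < ctrs[ad]:   # did the user click this time?
            clicks[ad] += 1
    return shows
```

Under these assumed numbers the higher-CTR ad ends up dominating the rotation far beyond its raw CTR advantage, which is the kind of self-reinforcement that could amplify whatever bias exists in early clicks.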