
New Media Ethics 2009 course / Group items tagged: optimization


Arthur Cane

Outstanding Team of SEO Specialists - 1 views

We had already tried a number of link builders and SEO services over the years and were generally disappointed, until we found our way to Syntactics Inc. I find their service great, which is why...

seo specialist specialists

started by Arthur Cane on 26 Jan 12 no follow-up yet
Weiye Loh

Search Optimization and Its Dirty Little Secrets - NYTimes.com - 0 views

  • in the last several months, one name turned up, with uncanny regularity, in the No. 1 spot for each and every term: J. C. Penney. The company bested millions of sites — and not just in searches for dresses, bedding and area rugs. For months, it was consistently at or near the top in searches for “skinny jeans,” “home decor,” “comforter sets,” “furniture” and dozens of other words and phrases, from the blandly generic (“tablecloths”) to the strangely specific (“grommet top curtains”).
  • J. C. Penney even beat out the sites of manufacturers in searches for the products of those manufacturers. Type in “Samsonite carry on luggage,” for instance, and Penney for months was first on the list, ahead of Samsonite.com.
  • the digital age’s most mundane act, the Google search, often represents layer upon layer of intrigue. And the intrigue starts in the sprawling, subterranean world of “black hat” optimization, the dark art of raising the profile of a Web site with methods that Google considers tantamount to cheating.
  • Despite the cowboy outlaw connotations, black-hat services are not illegal, but trafficking in them risks the wrath of Google. The company draws a pretty thick line between techniques it considers deceptive and “white hat” approaches, which are offered by hundreds of consulting firms and are legitimate ways to increase a site’s visibility. Penney’s results were derived from methods on the wrong side of that line, says Mr. Pierce. He described the optimization as the most ambitious attempt to game Google’s search results that he has ever seen.
  • TO understand the strategy that kept J. C. Penney in the pole position for so many searches, you need to know how Web sites rise to the top of Google’s results. We’re talking, to be clear, about the “organic” results — in other words, the ones that are not paid advertisements. In deriving organic results, Google’s algorithm takes into account dozens of criteria, many of which the company will not discuss.
  • But it has described one crucial factor in detail: links from one site to another. If you own a Web site, for instance, about Chinese cooking, your site’s Google ranking will improve as other sites link to it. The more links to your site, especially those from other Chinese cooking-related sites, the higher your ranking. In a way, what Google is measuring is your site’s popularity by polling the best-informed online fans of Chinese cooking and counting their links to your site as votes of approval. (A minimal link-counting sketch of this idea follows this list.)
  • But even links that have nothing to do with Chinese cooking can bolster your profile if your site is barnacled with enough of them. And here’s where the strategy that aided Penney comes in. Someone paid to have thousands of links placed on hundreds of sites scattered around the Web, all of which lead directly to JCPenney.com.
  • Who is that someone? A spokeswoman for J. C. Penney, Darcie Brossart, says it was not Penney.
  • “J. C. Penney did not authorize, and we were not involved with or aware of, the posting of the links that you sent to us, as it is against our natural search policies,” Ms. Brossart wrote in an e-mail. She added, “We are working to have the links taken down.”
  • Using an online tool called Open Site Explorer, Mr. Pierce found 2,015 pages with phrases like “casual dresses,” “evening dresses,” “little black dress” or “cocktail dress.” Click on any of these phrases on any of these 2,015 pages, and you are bounced directly to the main page for dresses on JCPenney.com.
  • Some of the 2,015 pages are on sites related, at least nominally, to clothing. But most are not. The phrase “black dresses” and a Penney link were tacked to the bottom of a site called nuclear.engineeringaddict.com. “Evening dresses” appeared on a site called casino-focus.com. “Cocktail dresses” showed up on bulgariapropertyportal.com. “Casual dresses” was on a site called elistofbanks.com. “Semi-formal dresses” was pasted, rather incongruously, on usclettermen.org.
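To make the “links as votes” mechanism concrete, here is a minimal sketch of a simplified PageRank-style iteration in Python. The graph mixes site names from the article with two hypothetical ones (cooking-blog.example, recipes.example), and real Google ranking layers dozens of undisclosed signals on top of this one, so treat it as an illustration of the principle only.

```python
# Simplified PageRank-style iteration: each page's score is divided
# among the pages it links to, so inbound links act like votes.
def simple_pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# Sites named in the article plus two hypothetical ones; every spammy
# page points at the retailer, inflating its score.
graph = {
    "casino-focus.com": ["jcpenney.com"],
    "elistofbanks.com": ["jcpenney.com"],
    "usclettermen.org": ["jcpenney.com"],
    "cooking-blog.example": ["recipes.example", "jcpenney.com"],
    "recipes.example": ["cooking-blog.example"],
    "jcpenney.com": [],
}
for page, score in sorted(simple_pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{page:22s} {score:.3f}")
```

Run it and jcpenney.com comes out on top even though it links to nothing, which is exactly the asymmetry a paid-link campaign exploits.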
Arthur Cane

Excellent SEO Service That Last - 1 views

I have been working with Syntactics Inc. for five years now, and I have entrusted my online business to them for that long because I found their services really excellent. In fact, for that five...

seo outsourcing services

started by Arthur Cane on 13 Dec 11 no follow-up yet
Syntacticsinc SEO

The Results of Persistent SEO - 1 views

I have hired Philippine outsourcing firm Syntactics Inc to work on my website and take care of my online marketing needs too. In just one month, they were able to put a business-oriented website th...

search engine optimization

started by Syntacticsinc SEO on 06 Jul 11 no follow-up yet
Weiye Loh

Search Optimization and Its Dirty Little Secrets - NYTimes.com - 0 views

  • Mr. Stevens turned out to be a boyish-looking 31-year-old native of Singapore. (Stevens is the name he uses for work; he says he has a Chinese last name, which he did not share.) He speaks with a slight accent and in an animated hush, like a man worried about eavesdroppers. He describes his work with the delighted, mischievous grin of a sophomore who just hid a stink bomb.
  • “The key is to roll the campaign out slowly,” he said as he nibbled at seared duck foie gras. “A lot of companies are in a rush. They want as many links as we can get them as fast as possible. But Google will spot that. It will flag a Web site that goes from zero links to a few hundred in a week.”
  • The hardest part about the link-selling business, he explained, is signing up deep-pocketed mainstream clients. Lots of them, it seems, are afraid they’ll get caught. Another difficulty is finding quality sites on which to post links. Whoever set up the JCPenney.com campaign, he said, relied on some really low-rent, spammy sites — the kind with low PageRanks, as Google calls its patented measure of a site’s quality. The higher the PageRank, the more “Google juice” a site offers others to which it is linked.
  • Mr. Stevens said that Web site owners, or publishers, as he calls them, get a small fee for each link, and the transaction is handled entirely over the Web. Publishers can reject certain keywords and links — Mr. Stevens said some balked at a lingerie link — but for the most part the system is on a kind of autopilot. A client pays Mr. Stevens and his colleagues for links, which are then farmed out to Web sites. Payment to publishers is handled via PayPal.
  • You might expect Mr. Stevens to have a certain amount of contempt for Google, given that he spends his professional life finding ways to subvert it. But through the evening he mentioned a few times that he’s in awe of the company, and the quality of its search engine.
  • “I think we need to make a distinction between two different kinds of searches — informational and commercial,” he said. “If you search ‘cancer,’ that’s an informational search and on those, Google is amazing. But in commercial searches, Google’s results are really polluted. My own personal experience says that the guy with the biggest S.E.O. budget always ranks the highest.”
  • To Mr. Stevens, S.E.O. is a game, and if you’re not paying black hats, you are losing to rivals with fewer compunctions.
  • WHY did Google fail to catch a campaign that had been under way for months? One, no less, that benefited a company that Google had already taken action against three times? And one that relied on a collection of Web sites that were not exactly hiding their spamminess? Mr. Cutts emphasized that there are 200 million domain names and a mere 24,000 employees at Google.
Weiye Loh

Search Optimization and Its Dirty Little Secrets - NYTimes.com - 0 views

  • Search experts, however, say Penney likely reaped substantial rewards from the paid links. If you think of Google as the entrance to the planet’s largest shopping center, the links helped Penney appear as though it was the first and most inviting spot in the mall, to millions and millions of online shoppers.
  • A study last May by Daniel Ruby of Chitika, an online advertising network of 100,000 sites, found that, on average, 34 percent of Google’s traffic went to the No. 1 result, about twice the percentage that went to No. 2.
  • The Keyword Estimator at Google puts the number of searches for “dresses” in the United States at 11.1 million a month, an average based on 12 months of data. So for “dresses” alone, Penney may have been attracting roughly 3.8 million visits every month it showed up as No. 1. Exactly how many of those visits translate into sales, and the size of each sale, only Penney would know. (The arithmetic is sketched after this list.)
  • in January, the company was crowing about its online holiday sales. Kate Coultas, a company spokeswoman, wrote to a reporter in January, “Internet sales through jcp.com posted strong growth in December, with significant increases in traffic and orders for the key holiday shopping periods of the week after Thanksgiving and the week before Christmas.”
  • Penney also issued a statement: “We are disappointed that Google has reduced our rankings due to this matter,” Ms. Brossart wrote, “but we will continue to work actively to retain our high natural search position.”
  • She added that while the collection of links surely brought in additional revenue, it was hardly a bonanza. Just 7 percent of JCPenney.com’s traffic comes from clicks on organic search results, she wrote.
  • MANY owners of Web sites with Penney links seem to relish their unreachability. But there were exceptions, and they included cocaman.ch. (“Geekness — closer to the world” is the cryptic header atop the site.) It turned out to be owned and run by Corsin Camichel, a chatty 25-year-old I.T. security analyst in Switzerland.
  • The link came through a Web site, TNX.net, which pays Mr. Camichel with TNX points, which he then trades for links that drive traffic to his other sites, like cookingutensils.net. He earns money when people visit that site and click on the ads. He could also, he said, get cash from TNX. Currently, Cocaman is home to 403 links, all of them placed there by TNX on behalf of clients.
  • “You do pretty well,” he wrote, referring to income from his links trading. “The thing is, the more you invest (time and money) the better results you get. Right now I get enough to buy myself new test devices for my Android apps (like $150/month) with zero effort. I have to do nothing. Ads just sit there and if people click, I make money.”
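The traffic estimate above is simple arithmetic on the two quoted figures; a quick check:

```python
# Reproducing the article's back-of-the-envelope estimate.
monthly_searches = 11_100_000    # Keyword Estimator figure for "dresses"
share_to_no1 = 0.34              # Chitika study: average share of traffic to result No. 1
share_to_no2 = share_to_no1 / 2  # "about twice the percentage that went to No. 2"

print(f"visits/month at No. 1: {monthly_searches * share_to_no1:,.0f}")  # ~3,774,000
print(f"visits/month at No. 2: {monthly_searches * share_to_no2:,.0f}")  # ~1,887,000
```

The 3.77 million figure rounds to the article's "roughly 3.8 million visits," and the halved click share at No. 2 shows why the gap between first and second position is worth paying for.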
Weiye Loh

Search Optimization and Its Dirty Little Secrets - NYTimes.com - 0 views

  • When you read the enormous list of sites with Penney links, the landscape of the Internet acquires a whole new topography. It starts to seem like a city with a few familiar, well-kept buildings, surrounded by millions of hovels kept upright for no purpose other than the ads that are painted on their walls.
  • Exploiting those hovels for links is a Google no-no. The company’s guidelines warn against using tricks to improve search engine rankings, including what it refers to as “link schemes.” The penalty for getting caught is a pair of virtual concrete shoes: the company sinks in Google’s results.
  • In 2006, Google announced that it had caught BMW using a black-hat strategy to bolster the company’s German Web site, BMW.de. That site was temporarily given what the BBC at the time called “the death penalty”; it was “removed from search results.”
  • BMW acknowledged that it had set up “doorway pages,” which exist just to attract search engines and then redirect traffic to a different site. The company at the time said it had no intention of deceiving users, adding “if Google says all doorway pages are illegal, we have to take this into consideration.”
  • The Times sent Google the evidence it had collected about the links to JCPenney.com. Google promptly set up an interview with Matt Cutts, the head of the Webspam team at Google, and a man whose every speech, blog post and Twitter update is parsed like papal encyclicals by players in the search engine world.
  • He said Google had detected previous guidelines violations related to JCPenney.com on three occasions, most recently last November. Each time, steps were taken that reduced Penney’s search results — Mr. Cutts avoids the word “punished” — but Google did not later “circle back” to the company to see if it was still breaking the rules, he said.
  • He and his team had missed this recent campaign of paid links, which he said had been up and running for the last three to four months. “Do I wish our system had detected things sooner? I do,” he said. “But given the one billion queries that Google handles each day, I think we do an amazing job.”
  • You get the sense that Mr. Cutts and his colleagues are acutely aware of the singular power they wield as judge, jury and appeals panel, and they’re eager to project an air of maturity and judiciousness.
  • Mr. Cutts sounded remarkably upbeat and unperturbed during this conversation, which was a surprise given that we were discussing a large, sustained effort to snooker his employer. Asked about his zenlike calm, he said the company strives not to act out of anger.
  • PENNEY reacted to this instant reversal of fortune by, among other things, firing its search engine consulting firm, SearchDex. Executives there did not return e-mail or phone calls.
  • “Am I happy this happened?” he later asked. “Absolutely not. Is Google going to take strong corrective action? We absolutely will.” And the company did. On Wednesday evening, Google began what it calls a “manual action” against Penney, essentially demotions specifically aimed at the company.
  • At 7 p.m. Eastern time on Wednesday, J. C. Penney was still the No. 1 result for “Samsonite carry on luggage.” Two hours later, it was at No. 71.
Weiye Loh

Search Optimization and Its Dirty Little Secrets - NYTimes.com - 0 views

  • Here’s another hypothesis, this one for the conspiracy-minded. Last year, Advertising Age obtained a Google document that listed some of its largest advertisers, including AT&T, eBay and yes, J. C. Penney. The company, this document said, spent $2.46 million a month on paid Google search ads — the kind you see next to organic results.
  • Is it possible that Google was willing to countenance an extensive black-hat campaign because it helped one of its larger advertisers? It’s the sort of question that European Union officials are now studying in an investigation of possible antitrust abuses by Google.
  • Investigators have been asking advertisers in Europe questions like this: “Please explain whether and, if yes, to what extent your advertising spending with Google has ever had an influence on your ranking in Google’s natural search.” And: “Has Google ever mentioned to you that increasing your advertising spending could improve your ranking in Google’s natural search?”
  • Asked if Penney received any breaks because of the money it has spent on ads, Mr. Cutts said, “I’ll give a categorical denial.” He then made an impassioned case for Google’s commitment to separating the money side of the business from the search side. The former has zero influence on the latter, he said.
  • “There is a very long history at Google of saying ‘We are not going to worry about short-term revenue.’ ” He added: “We rely on the trust of our users. We realize the responsibility that we have to our users.”
  • He noted, too, that before The Times presented evidence of the paid links to JCPenney.com, Google had just begun to roll out an algorithm change that had a negative effect on Penney’s search results.
  • True, JCPenney.com’s showing in Google searches had declined slightly by Feb. 8, as the algorithm change began to take effect. In “comforter sets,” Penney went from No. 1 to No. 7. In “sweater dresses,” from No. 1 to No. 10. But the real damage to Penney’s results began when Google started that “manual action.” The decline can be charted: On Feb. 1, the average Penney position for 59 search terms was 1.3.
  • MR. CUTTS said he did not plan to write about Penney’s situation, as he did with BMW in 2006. Rarely, he explained, does he single out a company publicly, because Google’s goal is to preserve the integrity of results, not to embarrass people. “But just because we don’t talk about it,” he said, “doesn’t mean we won’t take strong action.”
Weiye Loh

How Is Twitter Impacting Search and SEO? Here's the (Visual) Proof | MackCollier.com - ... - 0 views

  • I picked a fairly specific term in “Social Media Crisis Management”.  I checked prior to publishing yesterday’s post, and there were just a shade under 29,000 Google results for that term.  This is important because you need to pick as specific a term as possible, because this will result in less competition, and (if you’ve picked the right term for you) it means you will be more likely to get the ‘right’ kind of traffic.
  • Second, I made sure the term was in the title and mentioned a couple of times in the post.  I also moved the term “Social Media Crisis Management” to the front of the post title; I originally had the title as “A No-Nonsense Guide to Social Media Crisis Management”, but Amy wisely suggested that I flip it so the term I was targeting was at the front of the title. (These steps are sketched as a small checker after this list.)
  • when I published the post yesterday at 12:20pm, there were 28,900 Google results for the term “Social Media Crisis Management”.  I tweeted a link to it at that time.  Fifty minutes later at 1:10pm, the post was already showing up on the 3rd page for a Google search of “Social Media Crisis Management”.
  • I tweeted out another link to the post around 2pm, and then at 2:30pm, it moved a bit further up the results on the 3rd page.
  • The Latest results factor in real-time linking behavior, so they pick up all the tweets where my post was being RTed, and as a result, the top half of the Latest results for the term “Social Media Crisis Management” was completely devoted to MY post.
  • That’s a perfect example of how Twitter and Facebook sharing is now impacting Google results.  And it’s also a wonderful illustration of the value of being active on Twitter.  I tweeted a link to that post several times yesterday and this morning, which was a big reason why it moved up the Google results so quickly, and a big reason why it dominated the Latest results for that term.
  • there are two things I want you to take away from this: 1 – This was very basic SEO stuff that any of you can do.  It was simply a case of targeting a specific phrase, and inserting it in the post.  Now as far as my having a large and engaged Twitter network and readership here (thanks guys!), that definitely played a big part in the post moving up the results so quickly.  But at a basic level, everything I did from an SEO perspective is what you can do with every post.  And you should.
  • 2 – You can best learn by breaking stuff.  There are a gazillion ‘How to’ and ’10 Steps to…’ articles about using social media, and I have certainly written my fair share of these.  But the best way *I* learn is if you can show me the first 1 or 2 steps, then leave me alone and let me figure out the remaining 8 or 9 steps for myself.  Don’t just blindly follow my social media advice or anyone else’s.  Use the advice as a guide for how you can get started.  But there is no one RIGHT way to use social media.  Never forget that.  I can tell you what works for me and my clients, but you still need to tweak any advice so that it is perfect for you.  SEO geeks will no doubt see a ton of things that I could have done or altered in this experiment to get even better results.  And moving forward, I am going to continue to tweak and ‘break stuff’ in order to better figure out how all the moving parts work together.
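The on-page steps in the first takeaway reduce to a few mechanical checks. A minimal sketch of such a checker, with illustrative thresholds rather than official SEO rules, and a hypothetical title and body:

```python
# Check the two on-page steps described in the post: the target phrase
# should lead the title and appear at least a couple of times in the body.
def check_on_page_seo(title, body, phrase, min_mentions=2):
    phrase_l = phrase.lower()
    return {
        "phrase at front of title": title.lower().startswith(phrase_l),
        "phrase somewhere in title": phrase_l in title.lower(),
        f"phrase appears >= {min_mentions}x in body": body.lower().count(phrase_l) >= min_mentions,
    }

title = "Social Media Crisis Management: A No-Nonsense Guide"
body = ("Social media crisis management starts before the crisis. "
        "A social media crisis management plan names who responds, where, and how fast.")
for check, ok in check_on_page_seo(title, body, "Social Media Crisis Management").items():
    print(f"{'PASS' if ok else 'FAIL'}: {check}")
```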
Weiye Loh

How Google's +1 Button Affects SEO - 0 views

  • Google defines the +1 as a feature to help people discover and share relevant content from the people they already know and trust. Users can +1 different types of content, including Google search results, websites, and advertisements. Once users +1 a piece of content, it can be seen on the +1 tab in their Google+ profile, in Google search results, and on websites with a +1 button. The plot thickened last month when Google launched Search plus Your World. Jack Menzel, director of product management for Google Search, explained that now Google+ users would be able to "search across information that is private and only shared to you, not just the public web." According to Ian Lurie from the blog Conversation Marketing, in Search plus Your World, search results that received a lot of +1s tend to show up higher in results.
Weiye Loh

Censorship of War News Undermines Public Trust - 20 views

I posted a bookmark on something related to this issue. http://www.todayonline.com/World/EDC090907-0000047/The-photo-thats-caused-a-stir AP decided to publish a photo of a fatally wounded young ...

censorship PR

Weiye Loh

George Will: Earth Doesn't Care What Is Done to It - Newsweek - 0 views

  • The cover of The American Scholar quarterly carries an impertinent assertion: “The Earth Doesn’t Care if You Drive a Hybrid.” The essay inside is titled “What the Earth Knows.” What it knows, according to Robert B. Laughlin, co-winner of the 1998 Nobel Prize in Physics, is this: What humans do to, and ostensibly for, the earth does not matter in the long run, and the long run is what matters to the earth. We must, Laughlin says, think about the earth’s past in terms of geologic time.
  • For example: The world’s total precipitation in a year is about one meter—“the height of a golden retriever.” About 200 meters—the height of the Hoover Dam—have fallen on earth since the Industrial Revolution. Since the Ice Age ended, enough rain has fallen to fill all the oceans four times; since the dinosaurs died, rainfall has been sufficient to fill the oceans 20,000 times. Yet the amount of water on earth probably hasn’t changed significantly over geologic time.
  • Damaging this old earth is, Laughlin says, “easier to imagine than it is to accomplish.”
  • Someday, all the fossil fuels that used to be in the ground will be burned. After that, in about a millennium, the earth will dissolve most of the resulting carbon dioxide into the oceans. (The oceans have dissolved in them “40 times more carbon than the atmosphere contains, a total of 30 trillion tons, or 30 times the world’s coal reserves.”) The dissolving will leave the concentration in the atmosphere only slightly higher than today’s. Then “over tens of millennia, or perhaps hundreds” the earth will transfer the excess carbon dioxide into its rocks, “eventually returning levels in the sea and air to what they were before humans arrived on the scene.” This will take an eternity as humans reckon, but a blink in geologic time.
  • It seems, Laughlin says, that “something, presumably a geologic regulatory process, fixed the world’s carbon dioxide levels before humans arrived” with their SUVs and computers. Some scientists argue that “the photosynthetic machinery of plants seems optimized” to certain carbon dioxide levels. But “most models, even pessimistic ones,” envision “a thousand-year carbon dioxide pulse followed by glacially slow decay back to the pre-civilization situation.”
  • humans can “do damage persisting for geologic time” by “biodiversity loss”—extinctions that are, unlike carbon dioxide excesses, permanent. The earth did not reverse the extinction of the dinosaurs. Today extinctions result mostly from human population pressures—habitat destruction, pesticides, etc.—but “slowing man-made extinctions in a meaningful way would require drastically reducing the world’s human population.” Which will not happen.
  • To avoid mixing fact and speculation, earth scientists are, Laughlin says, “ultraconservative,” meaning they focus on the present and the immediate future: “[They] go to extraordinary lengths to prove by means of measurement that the globe is warming now, the ocean is acidifying now, fossil fuel is being exhausted now, and so forth, even though these things are self-evident in geologic time.”
  • Climate change over geologic time is, Laughlin says, something the earth has done “on its own without asking anyone’s permission or explaining itself.” People can cause climate change, but major glacial episodes have occurred “at regular intervals of 100,000 years,” always “a slow, steady cooling followed by abrupt warming back to conditions similar to today’s.”
  • Six million years ago the Mediterranean dried up. Ninety million years ago there were alligators in the Arctic. Three hundred million years ago Northern Europe was a desert and coal formed in Antarctica. “One thing we know for sure,” Laughlin says about these convulsions, “is that people weren’t involved.”
  • The Earth doesn't care about what is done to or for it.
Weiye Loh

Skepticblog » Investing in Basic Science - 0 views

  • A recent editorial in the New York Times by Nicholas Wade raises some interesting points about the nature of basic science research – primarily that it's risky.
  • As I have pointed out about the medical literature, researcher John Ioannidis has explained why most published studies turn out in retrospect to be wrong. The same is true of most basic science research – and the underlying reason is the same. The world is complex, and most of our guesses about how it might work turn out to be either flat-out wrong, incomplete, or superficial. And so most of our probing and prodding of the natural world, looking for the path to the actual answer, turns out to miss the target.
  • research costs considerable resources of time, space, money, opportunity, and people-hours. There may also be some risk involved (such as to subjects in a clinical trial). Further, negative studies are actually valuable (more so than terrible pictures). They still teach us something about the world – they teach us what is not true. At the very least this narrows the field of possibilities. But the analogy holds in so far as the goal of scientific research is to improve our understanding of the world and to provide practical applications that make our lives better. Wade writes mostly about how we fund research, and this relates to our objectives. Most of the corporate research money is interested in the latter – practical (and profitable) applications. If this is your goal, then basic science research is a bad bet. Most investments will be losers, and for most companies this will not be offset by the big payoffs of the rare winners. So many companies will allow others to do the basic science (government, universities, start-up companies), then raid the winners by using their resources to buy them out, and then bring them through the final steps to a marketable application. There is nothing wrong or unethical about this. It's a good business model.
  • What, then, is the role of public (government) funding of research? Primarily, Wade argues (and I agree), to provide infrastructure for expensive research programs, such as building large colliders.
  • the more the government invests in basic science and infrastructure, the more winners will emerge that private industry can then capitalize on. This is a good way to build a competitive dynamic economy.
  • But there is a pitfall – prematurely picking winners and losers. Wade gives the example of California investing specifically in developing stem cell treatments. He argues that stem cells, while promising, do not hold a guarantee of eventual success, and perhaps there are other technologies that will work and are being neglected. The history of science and technology has clearly demonstrated that it is wickedly difficult to predict the future (and all those who try are destined to be mocked by future generations with the benefit of perfect hindsight). Prematurely committing to one technology therefore contains a high risk of wasting a great deal of limited resources, and missing other perhaps more fruitful opportunities.
  • The underlying concept is that science research is a long-term game. Many avenues of research will not pan out, and those that do will take time to inspire specific applications. The media, however, likes catchy headlines. That means when they are reporting on basic science research journalists ask themselves – why should people care? What is the application of this that the average person can relate to? This seems reasonable from a journalistic point of view, but with basic science reporting it leads to wild speculation about a distant possible future application. The public is then left with the impression that we are on the verge of curing the common cold or cancer, or developing invisibility cloaks or flying cars, or replacing organs and having household robot servants. When a few years go by and we don’t have our personal android butlers, the public then thinks that the basic science was a bust, when in fact there was never a reasonable expectation that it would lead to a specific application anytime soon. But it still may be on track for interesting applications in a decade or two.
  • this also means that the government, generally, should not be in the game of picking winners and losers – putting their thumb on the scale, as it were. Rather, they will get the most bang for the research buck if they simply invest in science infrastructure, and also fund scientists in broad areas.
  • The same is true of technology – don’t pick winners and losers. The much-hyped “hydrogen economy” comes to mind. Let industry and the free market sort out what will work. If you have to invest in infrastructure before a technology is mature, then at least hedge your bets and keep funding flexible. Fund “alternative fuel” as a general category, and reassess on a regular basis how funds should be allocated. But don’t get too specific.
  • Funding research but leaving the details to scientists may be optimal
  • The scientific community can do their part by getting better at communicating with the media and the public. Try to avoid the temptation to overhype your own research, just because it is the most interesting thing in the world to you personally and you feel hype will help your funding. Don’t make it easy for the media to sensationalize your research – you should be the ones trying to hold back the reins. Perhaps this is too much to hope for – market forces conspire too much to promote sensationalism.
Weiye Loh

Study: Airport Security Should Stop Racial Profiling | Smart Journalism. Real Solutions... - 0 views

  • Plucking out of line most of the vaguely Middle Eastern-looking men at the airport for heightened screening is no more effective at catching terrorists than randomly sampling everyone. It may even be less effective. Press stumbled across this counterintuitive concept — sometimes the best way to find something is not to weight it by probability — in the unrelated context of computational biology. The parallels to airport security struck him when a friend mentioned he was constantly being pulled out of line at the airport.
  • Racial profiling, in other words, doesn’t work because it devotes heightened resources to innocent people — and then devotes those resources to them repeatedly even after they’ve been cleared as innocent the first time. The actual terrorists, meanwhile, may sneak through while Transportation Security Administration agents are focusing their limited attention on the wrong passengers.
  • Press tested the theory in a series of probability equations (the ambitious can check his math here and here).
  • Sampling based on profiling is mathematically no more effective than uniform random sampling. The optimal equation, rather, turns out to be something called “square-root sampling,” a compromise between the other two methods. (A numeric sketch of the comparison follows this list.)
  • “Crudely,” Press writes of his findings in the journal Significance, if certain people are “nine times as likely to be the terrorist, we pull out only three times as many of them for special checks. Surprisingly, and bizarrely, this turns out to be the most efficient way of catching the terrorist.”
  • Square-root sampling, though, still represents a kind of profiling, and, Press adds, not one that could be realistically implemented at airports today. Square-root sampling only works if the profile probabilities are accurate in the first place — if we are able to say with mathematical certainty that some types of people are “nine times as likely to be the terrorist” compared to others. TSA agents in a crowded holiday terminal making snap judgments about facial hair would be far from this standard. “The nice thing about uniform sampling is there’s nothing to be inaccurate about, you don’t need any data, it never can be worse than you expect,” Press said. “As soon as you use profile probabilities, if the profile probabilities are just wrong, then the strong profiling just does worse than the random sampling.”
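Press's result is easy to check numerically under his key simplifying assumption: screening is memoryless, so cleared passengers go back into the pool, and with priors p_i and normalized screening rates q_i the expected number of checks before catching the one bad actor is the sum of p_i/q_i. A minimal sketch with hypothetical numbers matching the "nine times as likely" example:

```python
import math

def expected_checks(priors, screening_rates):
    """Expected screenings to catch the target: sum_i p_i / q_i,
    assuming cleared passengers re-enter the pool (no memory)."""
    total = sum(screening_rates)
    return sum(p / (q / total) for p, q in zip(priors, screening_rates))

# Hypothetical crowd: 10 people judged nine times as likely as the other 90.
raw = [9.0] * 10 + [1.0] * 90
total_prior = sum(raw)
priors = [p / total_prior for p in raw]

strategies = {
    "uniform":     [1.0] * len(priors),
    "profiling":   priors,                          # screen in proportion to the prior
    "square-root": [math.sqrt(p) for p in priors],  # sqrt(9) = 3x the attention, not 9x
}
for name, rates in strategies.items():
    print(f"{name:12s} expected checks: {expected_checks(priors, rates):6.1f}")
```

Uniform and strong profiling both come out at 100 expected checks, while square-root sampling comes out at 80: profiling buys nothing, and the square-root compromise strictly wins, which is the paper's counterintuitive point.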
Weiye Loh

Rationally Speaking: The problem of replicability in science - 0 views

  • The problem of replicability in science, by Massimo Pigliucci (post illustration from xkcd)
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet by my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which sought to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well known statistical phenomenon of the regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment to repeat itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, “sexy” results. Which creates a powerful filter against negative, or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty handed. (A small simulation of this filter follows this list.)
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPU’s — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, it turns out that about ⅓ of published studies is never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh: Written by John Platt in a “Science” article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change)
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh: It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
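Of the mechanisms listed above, publication bias is the easiest to demonstrate numerically. A small Monte Carlo sketch with entirely hypothetical numbers: a weak true effect, small samples, and a crude significance filter standing in for journal selection. The "published" mean comes out far above the truth, so honest replications will look like a decline:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed true standardized effect size
N_PER_ARM = 30      # per-group sample size of a typical small study
N_STUDIES = 2000

def run_study():
    """Observed effect size of one simulated two-group study."""
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    return statistics.mean(treated) - statistics.mean(control)

effects = [run_study() for _ in range(N_STUDIES)]
# With n=30 per arm, the standard error of the difference is about
# sqrt(2/30) ~ 0.26, so ~0.51 is roughly the two-sided p<.05 cutoff.
published = [e for e in effects if e > 0.51]

print(f"true effect:                 {TRUE_EFFECT}")
print(f"mean across all studies:     {statistics.mean(effects):.3f}")    # ~0.20
print(f"mean of 'published' studies: {statistics.mean(published):.3f}")  # ~0.64
print(f"fraction 'published':        {len(published) / N_STUDIES:.1%}")
```

The filtered mean is roughly three times the true effect, and regression to the mean (Paulos' first explanation) then guarantees that replications of those inflated results will drift back down.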
Weiye Loh

The Dawn of Paid Search Without Keywords - Search Engine Watch (SEW) - 0 views

  • This year will fundamentally change how we think about and buy access to prospects, namely keywords. It is the dawn of paid search without keywords.
  • Google's search results were dominated by the "10 blue links" -- simple headlines, descriptions, and URLs to entice and satisfy searchers. Until they weren't. Universal search wove in images, video, and real-time updates.
  • For most of its history, too, AdWords has been presented in a text format even as the search results morphed into a multimedia experience. The result is that attention was pulled towards organic results at the expense of ads.
  • Google countered that trend with their big push for universal paid search in 2010. It was, perhaps, the most radical evolution to the paid search results since the introduction of Quality Score. Consider the changes:
  • New ad formats: Text is no longer the exclusive medium for advertising on Google. No format exemplifies that more than Product List Ads (and their cousin, Product Extensions). There is no headline, copy or display URL. Instead, it's just a product image, name, price and vendor slotted in the highest positions on the right side. What's more, you don't choose keywords. We also saw display creep into image search results with Image Search Ads and traditional display ads.
  • New calls-to-action: The way you satisfy your search with advertising on Google has evolved as well. Most notably, through the introduction of click-to-call as an option for mobile search ads (as well as the limited release AdWords call metrics). Similarly, more of the site experience is being pulled into the search results. The beta Comparison Ads product creates a marketplace for loan and credit card comparison entirely on Google. The call to action is comparison and filtering, not just clicking on an ad.
  • New buying/monetization models: Cost-per-click (CPC) and cost-per-thousand-impressions (CPM) are no longer the only ways you can buy. Comparison Ads are sold on a cost-per-lead basis. Product listing ads are sold on a cost-per-acquisition (CPA) basis for some advertisers (CPC for most).
  • New display targeting options: Remarketing (a.k.a. retargeting) brought highly focused display buys to the AdWords interface. Specifically, the ability to only show display ads to segments of people who visit your site, in many cases after clicking on a text ad.
  • New advertising automation: In a move that radically simplifies advertising for small businesses, Google began testing Google Boost. It involves no keyword research and no bidding. If you have a Google Places page, you can even do it without a website. It's virtually hands-off advertising for SMBs.
  • Of those changes, Google Product Listing Ads and Google Boost offer the best glimpse into the future of paid search without keywords. They're notable for dramatic departures in every step of how you advertise on Google: Targeting: Automated targeting toward certain audiences as determined by Google vs. keywords chosen by the advertiser. Ads: Product listing ads bring a product search like result in the top position in the right column and Boost promotes a map-like result in a preferred position above organic results. Pricing: CPA and monthly budget caps replace daily budgets and CPC bids.
  • For Google to continue their pace of growth, they need two things: Another line of business to complement AdWords, and display advertising is it. They've pushed even more aggressively into the channel, most notably with the acquisition of Invite Media, a demand side platform. To remove obstacles to profit and incremental growth within AdWords. These barriers are primarily how wide advertisers target and how much they pay for the people they reach (see: "Why Google Wants to Eliminate Bidding In Exchange for Your Profits").
Weiye Loh

Official Google Blog: Microsoft's Bing uses Google search results-and denies it - 0 views

  • By now, you may have read Danny Sullivan’s recent post: “Google: Bing is Cheating, Copying Our Search Results” and heard Microsoft’s response, “We do not copy Google's results.” However you define copying, the bottom line is, these Bing results came directly from Google
  • We created about 100 “synthetic queries”—queries that you would never expect a user to type, such as [hiybbprqag]. As a one-time experiment, for each synthetic query we inserted as Google’s top result a unique (real) webpage which had nothing to do with the query. (A toy generator in this spirit follows this list.)
  • To be clear, the synthetic query had no relationship with the inserted result we chose—the query didn’t appear on the webpage, and there were no links to the webpage with that query phrase. In other words, there was absolutely no reason for any search engine to return that webpage for that synthetic query. You can think of the synthetic queries with inserted results as the search engine equivalent of marked bills in a bank.
  • We gave 20 of our engineers laptops with a fresh install of Microsoft Windows running Internet Explorer 8 with Bing Toolbar installed. As part of the install process, we opted in to the “Suggested Sites” feature of IE8, and we accepted the default options for the Bing Toolbar.We asked these engineers to enter the synthetic queries into the search box on the Google home page, and click on the results, i.e., the results we inserted. We were surprised that within a couple weeks of starting this experiment, our inserted results started appearing in Bing. Below is an example: a search for [hiybbprqag] on Bing returned a page about seating at a theater in Los Angeles. As far as we know, the only connection between the query and result is Google’s result page (shown above).
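A toy generator in the same spirit as the honeypot described above; the strings are lowercase gibberish no user would type, and confirming that a candidate string returns no existing search results would remain a manual step, as in Google's test:

```python
import random
import string

def synthetic_queries(count=100, length=10, seed=0):
    """Generate 'hiybbprqag'-style nonsense queries; the fixed seed
    keeps the honeypot list reproducible."""
    rng = random.Random(seed)
    return ["".join(rng.choice(string.ascii_lowercase) for _ in range(length))
            for _ in range(count)]

print(synthetic_queries()[:5])
```

Because the planted result is the only page associated with each string, any engine that later returns that page for that query must have learned the pairing from observed user behavior, which is exactly the inference Google drew.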
Weiye Loh

EdgeRank: The Secret Sauce That Makes Facebook's News Feed Tick - 0 views

  • but News Feed only displays a subset of the stories generated by your friends — if it displayed everything, there’s a good chance you’d be overwhelmed. Developers are always trying to make sure their sites and apps are publishing stories that make the cut, which has led to the concept of “News Feed Optimization”, and their success is dictated by EdgeRank.
  • At a high level, the EdgeRank formula is fairly straightforward. But first, some definitions: every item that shows up in your News Feed is considered an Object. If you have an Object in the News Feed (say, a status update), whenever another user interacts with that Object they’re creating what Facebook calls an Edge, which includes actions like tags and comments. Each Edge has three components important to Facebook’s algorithm: First, there’s an affinity score between the viewing user and the item’s creator — if you send your friend a lot of Facebook messages and check their profile often, then you’ll have a higher affinity score for that user than you would, say, an old acquaintance you haven’t spoken to in years. Second, there’s a weight given to each type of Edge. A comment probably has more importance than a Like, for example. And finally there’s the most obvious factor — time. The older an Edge is, the less important it becomes.
  • Multiply these factors for each Edge then add the Edge scores up and you have an Object’s EdgeRank. And the higher that is, the more likely your Object is to appear in the user’s feed. It’s worth pointing out that the act of creating an Object is also considered an Edge, which is what allows Objects to show up in your friends’ feeds before anyone has interacted with them. (A minimal sketch of this computation follows this list.)
  • an Object is more likely to show up in your News Feed if people you know have been interacting with it recently. That really isn’t particularly surprising. Neither is the resulting message to developers: if you want your posts to show up in News Feed, make sure people will actually want to interact with them.
  • Steinberg hinted that a simpler version of News Feed may be on the way, as the current two-tabbed system is a bit complicated. That said, many people still use both tabs, with over 50% of users clicking over to the ‘most recent’ tab on a regular basis.
  • If you want to watch the video for yourself, click here, navigate to the Techniques sessions, and click on ‘Focus on Feed’. The talk about Facebook’s algorithms begins around 22 minutes in.
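The description above fixes the shape of the formula even though Facebook has never published its constants. A minimal sketch with assumed edge weights and an assumed exponential time decay; only the structure (affinity x weight x decay, summed over edges, creation included) comes from the talk:

```python
import time

# Hypothetical weights: comments and tags count more than likes.
EDGE_WEIGHTS = {"create": 1.0, "like": 1.0, "comment": 4.0, "tag": 3.0}

def time_decay(edge_ts, now, half_life_hours=24.0):
    """Older edges count less; exponential decay is one plausible choice."""
    age_hours = (now - edge_ts) / 3600.0
    return 0.5 ** (age_hours / half_life_hours)

def edgerank(edges, affinity, now):
    """edges: (user, action, timestamp) tuples; affinity: viewer's score per user."""
    return sum(
        affinity.get(user, 0.1) * EDGE_WEIGHTS[action] * time_decay(ts, now)
        for user, action, ts in edges
    )

# A status update with its creation edge, one like, and one fresh comment.
now = time.time()
edges = [
    ("alice", "create",  now - 7200),   # alice posted two hours ago
    ("bob",   "like",    now - 3600),
    ("carol", "comment", now - 600),
]
affinity = {"alice": 0.9, "bob": 0.3, "carol": 0.7}  # assumed affinity scores
print(f"EdgeRank: {edgerank(edges, affinity, now):.2f}")
```

A fresh comment from a close friend contributes far more than an old like from an acquaintance, which matches the developer takeaway in the talk: publish things people will actually want to interact with.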
Weiye Loh

Google's Fight Against 'Low-Quality' Sites Continues - Slashdot - 0 views

  • "A couple weeks ago, JC Penney made the news for plummeting in Google rankings for everything from 'area rugs' to 'grommet top curtains.' Turns out the retail site had a number of suspicious links pointing at it that could be traced back to a link network intended to manipulate Google's ranking algorithms. Now, Overstock.com has lost rankings for another type of link that Google finds to be manipulation of their algorithms. This situation has led Google to implement a significant change to their search algorithms, affecting almost 12% of queries in an effort to cull content farms and other webspam. And in the midst of all of this, a company with substantial publicity lately for running a paid link network announces they are getting out of the link business entirely."
Weiye Loh

Cancer resembles life 1 billion years ago, say astrobiologists - microbiology, genomics... - 0 views

  • astrobiologists, working with oncologists in the US, have suggested that cancer resembles ancient forms of life that flourished between 600 million and 1 billion years ago.
  • The genes that controlled the behaviour of these early multicellular organisms still reside within our own cells, managed by more recent genes that keep them in check. It's when these newer controlling genes fail that the older mechanisms take over, and the cell reverts to its earlier behaviours and grows out of control.
  • The new theory, published in the journal Physical Biology, has been put forward by two leading figures in the world of cosmology and astrobiology: Paul Davies, director of the Beyond Center for Fundamental Concepts in Science, Arizona State University; and Charles Lineweaver, from the Australian National University.
  • According to Lineweaver, this suggests that cancer is an atavism, or an evolutionary throwback.
  • In the paper, they suggest that a close look at cancer shows similarities with early forms of multicellular life.
  • “Unlike bacteria and viruses, cancer has not developed the capacity to evolve into new forms. In fact, cancer is better understood as the reversion of cells to the way they behaved a little over one billion years ago, when humans were nothing more than loose-knit colonies of only partially differentiated cells. “We think that the tumours that develop in cancer patients today take the same form as these simple cellular structures did more than a billion years ago,” he said.
  • One piece of evidence to support this theory is that cancers appear in virtually all metazoans, with the notable exception of the bizarre naked mole rat. "This quasi-ubiquity suggests that the mechanisms of cancer are deep-rooted in evolutionary history, a conjecture that receives support from both paleontology and genetics," they write.
  • the genes that controlled this early multi-cellular form of life are like a computer operating system's 'safe mode', and when there are failures or mutations in the more recent genes that manage the way cells specialise and interact to form the complex life of today, then the earlier level of programming takes over.
  • Their notion is in contrast to a prevailing theory that cancer cells are 'rogue' cells that evolve rapidly within the body, overcoming the normal slew of cellular defences.
  • However, Davies and Lineweaver point out that cancer cells are highly cooperative with each other, even as they compete with the host's cells. This suggests a pre-existing complexity that is reminiscent of early multicellular life.
  • cancers' manifold survival mechanisms are predictable, and unlikely to emerge spontaneously through evolution within each individual in such a consistent way.
  • The good news is that this means combating cancer is not necessarily as complex as if the cancers were rogue cells evolving new and novel defence mechanisms within the body. Instead, because cancers fall back on the same evolved mechanisms that were used by early life, we can expect them to remain predictable; thus, if they're susceptible to treatment, it's unlikely they'll evolve new ways to get around it.
  • "If the atavism hypothesis is correct, there are new reasons for optimism," they write.