
Home/ Groups/ DISC Inc
Rob Laporte

Social Media Links and SEO -- Spam Ye Not! - Search Engine Watch (SEW)

  • impact
Rob Laporte

Google: Ask Yourself These 23 Questions if Panda Impacted Your Website #SEWatch

  • blog post
Rob Laporte

Will Author Markup Help Google's Scraper Problem? - Search Engine Watch (#SEW)

  • been
Rob Laporte

Key Problems With Current Social Link Graph Signals

  • Introducing: The Periodic Table Of SEO Ranking Factors
Rob Laporte

Report: Google Still Web's Dominant Traffic Driver, But Some Niches See Facebook Gaining

  • sites
Rob Laporte

Article Pagination: Actions that Improved Google Search Traffic Google SEO News and Dis...

  • The value of "long-form journalism" has been tested on websites such as Salon and shown to be quite viable. It also attracts a better caliber of writer. With this in mind, over a year ago I was working with an online magazine that was already publishing longer, in-depth articles in the area of many thousands of words. The SEO challenge we had was that page 2 and beyond of most articles were not getting any search traffic, even though there was plenty of awesome content there. The approach we decided on is labor-intensive for the content creators, but after some education, the writers were all interested in trying to increase the audience size. Here are the steps we took:
    - Page 1 naturally enough uses the overall title of the article for both its title tag and header, and has a unique meta description.
    - Every internal page then has its own unique title and header tag, based on the first SUB-head for that section of the article. This means more keyword research and writing of subheads than would normally be the case. If the article is considered as a whole, then a single top-level header tag would seem more accurate semantically, but Google looks at the semantic structure one URL at a time, not for the overall multi-URL article.
    - Most pages also include internal subheads, and these are styled as subordinate headers.
    - On each internal page there is also a "pre-head" that does use the article title from page 1, in a small font. This pre-head does not use a header tag of any kind, just a CSS style. It sits at the top as a navigation cue for the user.
    - An additional navigation cue is that the unique page titles each begin with the numeral "2." or "3."
    - Each internal page also has a unique meta description, one that summarizes that page specifically rather than summarizing the overall article.
    - Every page of the article links to every other page at the top and the bottom. None of this anemic "Back | Next" junk: a complete page choice is shown on every page.
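The per-page title scheme described in the steps above can be sketched in a few lines. The function name and the sample article data here are hypothetical illustrations, not from the source:

```javascript
// Build a unique title for each page of a multi-page article:
// page 1 uses the overall article title, and every later page uses
// "N." plus that page's first sub-head, as described above.
// All names and sample data are hypothetical.
function pageTitle(articleTitle, subheads, pageNumber) {
  if (pageNumber === 1) return articleTitle;
  // subheads[0] is the first sub-head of page 2, and so on.
  return pageNumber + ". " + subheads[pageNumber - 2];
}

const title = "The State of Long-Form Journalism";
const subheads = ["Why Readers Stay", "The Economics of Depth"];
console.log(pageTitle(title, subheads, 1)); // "The State of Long-Form Journalism"
console.log(pageTitle(title, subheads, 2)); // "2. Why Readers Stay"
```

The leading numeral doubles as the navigation cue mentioned above, while the sub-head text gives each URL its own keyword target.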
Rob Laporte

Why Google Panda Is More A Ranking Factor Than Algorithm Update

  • At our SMX Advanced conference earlier this month, the head of Google's spam-fighting team, Matt Cutts, explained that the Panda filter isn't running all the time: right now it takes too much computing power to run this particular analysis of pages continuously. Instead, Google runs the filter periodically to calculate the values it needs. Each new run so far has also coincided with changes to the filter, some big, some small, that Google hopes will improve its catching of poor-quality content. So far, the Panda schedule has been:
    - Panda Update 1.0: Feb. 24, 2011
    - Panda Update 2.0: April 11, 2011 (about 7 weeks later)
    - Panda Update 2.1: May 10, 2011 (about 4 weeks later)
    - Panda Update 2.2: June 16, 2011 (about 5 weeks later)
    Recovering From Panda: For anyone who was hit by Panda, it's important to understand that the changes you've made won't have any immediate impact. For instance, if you started making improvements to your site the day after Panda 1.0 happened, none of those would have registered toward getting you back into Google's good graces until the next time Panda scores were assessed, which wasn't until around April 11. With the latest Panda round now live, Google says it's possible some sites that were hit by past rounds might see improvements, if they themselves have improved.
Rob Laporte

Social Media - Google takes on Facebook - Internet Retailer

  • Google+ will test whether the search giant can reach some of the 157.2 million consumers who accessed Facebook in May, according to web measurement firm comScore Inc. Those consumers are particularly valuable because they looked at 103 billion pages and spent an average of 375 minutes on the site. Consumers on Google sites, which include YouTube, viewed 46.3 billion pages and spent 231 minutes on those sites. That helps explain how Facebook has been able to garner nearly one in three online display ads on the web, according to a recent comScore report.
Rob Laporte

Keyword Research: The Ultimate Guide

  • "visitors found this page by searching for" niche
Rob Laporte

As Deal With Twitter Expires, Google Realtime Search Goes Offline

  • While Twitter may need Google to continue offering archive search, Google also potentially needs Twitter in another way. Google may have lost some of the data it has recently been using to bring social signals into its results, as covered more below: Google's Search Results Get More Social; Twitter As The New Facebook "Like". I've not yet been able to check on whether Google Social Search and other parts of Google have been impacted by the deal's end. I'll look at that later; I'm heading off to enjoy the 4th of July myself now.
    Update: Google has sent us a statement addressing the issue above: "While we will not have access to this special feed from Twitter, information on Twitter that's publicly available to our crawlers will still be searchable and discoverable on Google. As for other features such as social search, they will continue to exist, though without Twitter data from the special feed."
    You can certainly understand why Google+ has become even more important to the service now. While Google has gotten by largely without social signals from Facebook, having its own data from Google+ gives it insulation if it now has to get by without Twitter signals as well.
Rob Laporte

Analytics Tips for Setting up Google +1 - Search Engine Watch (#SEW)

  • New Data Nuggets: Web analytics may not necessarily need another metric, but the +1 button promises to help measure engagement. Official details on what data will stream from Google's +1 button are not available yet, but Jim Prosser from Google confirmed to SEW that "we're bringing data to Analytics, Webmaster Tools, and AdWords frontend soon". Nonetheless, there are methods of tracking +1s to your pages. By writing your own JavaScript function, you can track +1 clicks as a Google Analytics event using _gaq.push() and use GA's standard reporting functionality. All in all, the +1 button will become another micro-conversion that may provide insight into how your site is performing and how users are engaging with your content.
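The `_gaq.push()` approach mentioned above can be sketched like this. In a live page `_gaq` comes from the classic ga.js Analytics snippet and the callback is wired to the +1 button; here `_gaq` is stubbed as a plain array so the sketch is self-contained, and the category/action/label strings are illustrative choices, not an official scheme:

```javascript
// Sketch: record +1 clicks as a Google Analytics event.
// _gaq is stubbed as a plain array for illustration; the classic
// ga.js snippet normally defines it and drains the queue.
var _gaq = _gaq || [];

// Callback fired on a +1 interaction. The +1 API passes an object
// with the target href and a state of 'on' (added) or 'off' (removed).
function plusOneCallback(data) {
  var action = data.state === 'on' ? 'plus-one' : 'un-plus-one';
  _gaq.push(['_trackEvent', 'Google +1', action, data.href]);
}

// Simulate a visitor +1'ing a page.
plusOneCallback({ state: 'on', href: 'http://www.example.com/post' });
console.log(_gaq[0]); // the queued _trackEvent call
```

Once events like this flow into GA, the standard event reports give you the per-page +1 counts the excerpt describes as a micro-conversion.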
Rob Laporte

Google Places Eats Hotpot: What's it Mean for Local Search? - Search Engine Watch (#SEW)

  • how much author authority is being used as a ranking signal
Rob Laporte

Google Product Search Insights: The Impact of UPCs on Customer Conversions - Search Eng...

  • Google Extends Support for rel=canonical
Rob Laporte

Google Study: PPC Ads Do NOT Cannibalize Your Organic Traffic

  • Does ad position affect conversion rates?
Rob Laporte

Redirects: Good, Bad & Conditional

  • There's one workaround I will leave you with that negates the use of redirects altogether, including conditional ones. It's useful specifically for tracking, and involves appending tracking information to URLs in such a way that tracked URLs are automatically collapsed by the engines. No, it doesn't involve JavaScript. Curiously, I don't ever hear this method being discussed. The method makes use of the # (hash or pound character), which is normally used to direct visitors to an anchored part of a web page. Simply append a # to your URL followed by the tracking code or ID, for example: www.example.com/widgets.php#partner42. Search engines will ignore the # and everything after it; thus, PageRank is aggregated and duplicates are avoided.
    Hopefully this has challenged you to think critically about redirects, whether temporary, permanent, or conditional, and their implications for SEO. Opt for a permanent (301) over a temporary (302) redirect if you want the link juice to transfer. Conditional redirects should be avoided, especially if your risk tolerance for penalization is low. If you take a good hard look at your "need" for conditional redirects, I think you may find you don't really need them at all.
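The hash trick above can be sketched in a few lines. The function names are my own; the fragment parsing mirrors what a client-side tracker would read from `location.hash`:

```javascript
// Append a tracking ID after '#' so engines collapse the URL to its
// canonical form, while client-side code can still read the ID.
function trackedUrl(url, trackingId) {
  return url + '#' + trackingId;
}

// What a search engine effectively indexes: everything before '#'.
function canonicalUrl(url) {
  return url.split('#')[0];
}

// What a client-side tracker reads (the equivalent of location.hash
// without its leading '#').
function trackingId(url) {
  var i = url.indexOf('#');
  return i === -1 ? null : url.slice(i + 1);
}

var url = trackedUrl('http://www.example.com/widgets.php', 'partner42');
console.log(url);               // http://www.example.com/widgets.php#partner42
console.log(canonicalUrl(url)); // http://www.example.com/widgets.php
console.log(trackingId(url));   // partner42
```

Because every partner link collapses to the same canonical URL, PageRank aggregates on one page and no duplicate URLs reach the index, which is exactly why no redirect is needed.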
Rob Laporte

Why Quality Is The Only Sustainable SEO Strategy

  • Adam Audette
Rob Laporte

Google Introduces New Pagination Tags, Pushes 'View-All' Pages - Search Engine Watch (#...

  • this Google help article
Rob Laporte

Google Extends Support for rel=canonical - Search Engine Watch (#SEW)

  • Google announced
Rob Laporte

SEO Techniques for Large Sites: How to Maximize Product Visibility in Organic Search - ...

  • Bloomreach
Rob Laporte

Hacked Canonical Tags: Coming Soon To A Website Near You? - Search Engine Watch (#SEW)

  • discussion