DISC Inc: Group items tagged International


Rob Laporte

SEO Solutions for Multi-Country Sites: Multi-Lingual XML Sitemaps | ClickZ - 0 views

  • For these reasons, option two, editing the sitemap.xml, is the better method. It only involves one file per version of the website, doesn't affect page loading times, and can easily be used with other file types.
    Issues
    Although this solves the problem for Google searches, the method isn't universally recognized by other search engines such as Bing and Yahoo, which still deliver consistent traffic; low, but still converting. Despite the lack of support from Bing, this method is a great stratagem for working with the biggest player in the search game, Google. It lets you do region-specific targeting of your website in search without incurring the penalties associated with duplicate or similar content; an SEO win! The same can also be achieved on other search engines. Bing, for example, lets you make the distinction with a meta tag inserted into the HTML page or with a change to the HTTP headers; a harder solution than Google's, but still recommended.
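For readers who want to see what the sitemap.xml option looks like in practice, here is a minimal sketch of a sitemap entry carrying hreflang alternates for two regional versions of a page. The domain, paths, and language-region codes are placeholder assumptions, not taken from the article, and the exact annotation syntax should be verified against Google's current sitemap documentation.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
      <!-- One <url> entry per page; it lists every regional alternate, including itself -->
      <url>
        <loc>http://www.example.com/en-us/widgets/</loc>
        <xhtml:link rel="alternate" hreflang="en-us" href="http://www.example.com/en-us/widgets/"/>
        <xhtml:link rel="alternate" hreflang="en-gb" href="http://www.example.com/en-gb/widgets/"/>
      </url>
    </urlset>

For Bing, which the article says relies on a meta tag or HTTP header instead, one commonly cited (but again, worth verifying) form is a content-language declaration in the page's head:

    <meta http-equiv="content-language" content="en-gb">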
Rob Laporte

Google Confirms "Mayday" Update Impacts Long Tail Traffic - 0 views

  • Google Confirms “Mayday” Update Impacts Long Tail Traffic
    May 27, 2010 at 11:02am ET by Vanessa Fox
    Google made between 350 and 550 changes in its organic search algorithms in 2009. This is one of the reasons I recommend that site owners not get too fixated on specific ranking factors. If you tie construction of your site to any one perceived algorithm signal, you’re at the mercy of Google’s constant tweaks. These frequent changes are one reason Google itself downplays algorithm updates. Focus on what Google is trying to accomplish as it refines things (the most relevant, useful results possible for searchers) and you’ll generally avoid too much turbulence in your organic search traffic.
    However, sometimes a Google algorithm change is substantial enough that even those who don’t spend a lot of time focusing on the algorithms notice it. That seems to be the case with what those discussing it at WebmasterWorld have named “Mayday”. Last week at Google I/O, I was on a panel with Googler Matt Cutts, who said when asked during Q&A, “this is an algorithmic change in Google, looking for higher quality sites to surface for long tail queries. It went through vigorous testing and isn’t going to be rolled back.”
    I asked Google for more specifics, and they told me that it was a rankings change, not a crawling or indexing change, which seems to imply that sites getting less traffic still have their pages indexed, but some of those pages are no longer ranking as highly as before. Based on Matt’s comment, this change impacts “long tail” traffic, which generally comes from longer queries that few people search for individually but that in aggregate can provide a large percentage of traffic.
    This change seems to have primarily impacted very large sites with “item” pages that don’t have many individual links into them, might be several clicks from the home page, and may not have substantial unique and value-added content on them. For instance, ecommerce sites often have this structure: the individual product pages are unlikely to attract external links, and the majority of the content may be imported from a manufacturer database. Of course, as with any change that results in a traffic hit for some sites, other sites experience the opposite. Based on Matt’s comment at Google I/O, the pages that are now ranking well for these long tail queries are from “higher quality” sites (or perhaps are “higher quality” pages).
    My complete speculation is that perhaps the relevance algorithms have been tweaked a bit. Before, pages that didn’t have high quality signals might still rank well if they had high relevance signals. And perhaps now, those high relevance signals don’t carry as much weight in ranking if the page doesn’t have the right quality signals.
    What’s a site owner to do? It can be difficult to create compelling content and attract links to these types of pages. My best suggestion for those who have been hit by this is to isolate a set of queries for which the site is now getting less traffic and check out the search results to see what pages are ranking instead. What qualities do they have that make them seen as valuable? For instance, I have no way of knowing how amazon.com has fared during this update, but they’ve done a fairly good job of making individual item pages with duplicated content from manufacturers’ databases unique and compelling through the addition of content like user reviews. They have set up a fairly robust internal linking (and anchor text) structure with things like recommended items and lists, and they attract external links with features such as the My Favorites widget.
    From the discussion at the Google I/O session, this is likely a long-term change, so if your site has been impacted by it, you’ll likely want to do some creative thinking about how you can make these types of pages more valuable (which should increase user engagement and conversion as well).
    Update on 5/30/10: Matt Cutts from Google has posted a YouTube video about the change. In it, he says “it’s an algorithmic change that changes how we assess which sites are the best match for long tail queries.” He recommends that a site owner who is impacted evaluate the quality of the site and, if the site really is the most relevant match for the impacted queries, what “great content” could be added; determine whether the site is considered an “authority”; and ensure that the page does more than simply match the keywords in the query and is relevant and useful for that query. He notes that the change:
      • has nothing to do with the “Caffeine” update (an infrastructure change that is not yet fully rolled out)
      • is entirely algorithmic (and isn’t, for instance, a manual flag on individual sites)
      • impacts long tail queries more than other types
      • was fully tested and is not temporary
Rob Laporte

Should you sculpt PageRank using nofollow? | MickMel SEO - 0 views

  • Should you sculpt PageRank using nofollow?
    I’ve seen a few posts (Dave Naylor, Joost de Valk) discussing this over the last few days and thought I’d share my view of it. Both posts bring up the same analogy, attributed to Matt Cutts:
    Nofollowing your internals can affect your ranking in Google, but it’s a 2nd order effect. My analogy is: suppose you’ve got $100. Would you rather work on getting $300, or would you spend your time planning how to spend your $100 more wisely? Spending the $100 more wisely is a matter of good site architecture (and nofollowing/sculpting PageRank if you want). But most people would benefit more from looking at how to get to the $300 level.
    While I agree in theory, I think that’s a bit oversimplified. What if you could re-allocate your $100 more effectively in just a few minutes, then go try to raise it to $300? Sculpting PageRank is one of those things that can earn a nice benefit in a short period of time, but you can keep tweaking forever for progressively smaller gains. See the chart on the left.
    For example, you probably have links on your site for “log-in”, “privacy policy” and other such pages. Go in and nofollow those. How long did that take? Two minutes? That alone probably brought as much benefit as going through every page and carefully sculpting things out. Knock out a few of those links, then spend your time trying to work on getting $300.
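To make the author's two-minute suggestion concrete, here is a minimal sketch of footer-style utility links with nofollow applied; the URLs and anchor text are hypothetical placeholders, not taken from the post.

    <!-- Utility pages that don't need to accumulate PageRank: nofollow the internal links to them -->
    <a href="/login" rel="nofollow">Log in</a>
    <a href="/privacy-policy" rel="nofollow">Privacy policy</a>

    <!-- Ordinary content links are left alone so they continue to pass link equity -->
    <a href="/products/">Products</a>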
Rob Laporte

SEOmoz | I Don't Buy Links - 0 views

  • How Google Can Discover Paid Links
    A while back I did a post called 15 Methods for Paid Link Detection. Here is a list of the methods I discussed in that post:
    1. Links Labeled as Advertisements
    2. Site Wides
    3. Links Are Sold By a Link Agency
    4. Selling Site Has Information on How to Buy a Text Link Ad
    5. Relevance of Your Link
    6. Relevance of Nearby Links
    7. Advertising Location Type
    8. Someone Reports Your Site for Buying Links
    9. Someone Reports Your Site for Some Other Reason
    10. Someone Reports the Site You Bought Links from for Selling Links
    11. Someone Reports the Site You Bought Links from for Some Other Reason
    12. Disgruntled Employee Leaves Your Company, and Reports Your Site
    13. Disgruntled Employee Leaves the Agency You Used, and Reports Your Site
    14. Disgruntled Employee Leaves the Company of the Site You Bought Links from, and Reports Your Site
    15. Internal Human Review
    There are two major methods I want to emphasize here. These are:
    1. Your competitor can report you. It's the grim truth that your paid links can be reported by your competitor. There is a form built right into Google Webmaster Tools. Here is what it looks like:
Rob Laporte

Giving Links Away - Search Engine Watch - 0 views

  • Enter Siloing and PageRank Sculpting
    This is simply the activity of controlling which pages of your site share their link love. You do this by adding a "nofollow" attribute to any link that you don't want the search engines to give credit to. Take the example Matt Cutts gives. Maybe you have a friend who is a total underground, blackhat, do-no-good, evil-empire, anarchist spammer. You know he's bad to the bone. But you have a soft place in your heart for him and you want others to check out his site. All you have to do is add a nofollow attribute to the link. It would look like this: <a href="http://www.total-underground-blackhat-do-no-good-evil-empire-anarchist-spammer.com/" rel="nofollow">a blackhat spammer</a>.
    In this article, Joost de Valk, a Dutch SEO and Web developer, quotes Matt Cutts as saying, "There's no stigma to using nofollow, even on your own internal links; for Google, nofollowed links are dropped out of our link graph; we don't even use such links for discovery." Joost's article explains PageRank sculpting in more detail if you find this topic fascinating.
    His article also talks about "siloing." He points to an article on BruceClay.com that discusses the concept in great detail. Siloing is the idea of only linking out to other pages on your site and other outside resources that relate to that specific category or topic. So, if you had a cherry ice cream cone page, you would only link to resources discussing cherry ice cream cones. Information about chocolate ice cream cones and ice cream sundaes would either not be linked to or would be linked to using the nofollow attribute as shown above.
    Controlling Link Flow Using Robots.txt
    Finally, there's more than one way to block link love. You can also add this information to your robots.txt file. This handy file goes in the root folder of your Web server and tells the search engines what not to spider and index.
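As a rough sketch of the robots.txt approach the excerpt ends on, the example below blocks crawlers from a couple of low-value utility paths; the paths are hypothetical, and note that robots.txt blocks crawling of whole URL patterns rather than discounting individual links the way nofollow does.

    # robots.txt, placed at the root of the Web server (e.g. example.com/robots.txt)
    # The paths below are hypothetical low-value pages
    User-agent: *
    Disallow: /login/
    Disallow: /privacy-policy/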