
DISC Inc: Group items tagged "Design"


Rob Laporte

Google Docs Gains E-Commerce Option - Google Blog - InformationWeek - 0 views

  • Google Docs Gains E-Commerce Option Posted by Thomas Claburn, Jul 30, 2009 06:10 PM Google (NSDQ: GOOG) on Thursday released the Google Checkout store gadget, software that allows any Google Docs user to create an online store and sell items using a Google spreadsheet. "Using new Spreadsheet Data APIs, we've integrated Google Docs and Google Checkout to make online selling a breeze," explains Google Checkout strategist Mike Giardina in a blog post. "In three simple steps, you'll be able to create an online store that's powered by Google Checkout and has inventory managed and stored in a Google spreadsheet." Giardina insists that the process is simple and can be completed in less than five minutes. To use the gadget, Google users first have to sign up for Google Checkout. They can then list whatever they want to sell in a Google spreadsheet and insert the Checkout gadget, which can also be used on Google Sites, Blogger, and iGoogle.
Rob Laporte

Image Alt Text Vs. Image Title : What's the Difference? - 1 views

  • Image Alt Text Vs. Image Title: What's the Difference? May 19th, 2008, by Ann Smarty. Most webmasters don't see any difference between image alt text and image title, and mostly keep them the same. A great discussion over at Google Webmaster Groups provides exhaustive information on the differences between an image's alt attribute and its title, along with standard recommendations on how to use them. Alt text is meant as an alternative information source for people who have chosen to disable images in their browsers, and for user agents that simply cannot "see" images. It should describe what the image is about and get visitors interested in seeing it; without alt text, an image is displayed as an empty icon. In Internet Explorer, the alt text also pops up when you hover over an image. Last year Google officially confirmed that it mainly focuses on alt text when trying to understand what an image is about. The image title (the attribute name speaks for itself) should provide additional information and follow the rules of a regular title: it should be relevant, short, catchy, and concise (a title "offers advisory information about the element for which it is set"). In Firefox and Opera, it pops up when you hover over an image. Based on the above, here is how to properly handle them: both attributes are primarily meant for visitors (though alt text seems more important for crawlers), so provide explicit information about the image to encourage visitors to view it. Include your main keywords in both, but keep them different; keyword stuffing in alt text and title is still keyword stuffing, so keep them relevant and meaningful.
Another good point to take into consideration: According to Aaron Wall, alt text is crucially important when used for a sitewide header banner.
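As a concrete illustration of the two attributes, the snippet below (a hypothetical example, not from the article) marks up an image with a descriptive alt text and a shorter advisory title, and pulls both out with Python's standard-library HTML parser:

```python
from html.parser import HTMLParser

class ImgAttrParser(HTMLParser):
    """Collect the alt and title attributes from every <img> tag."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            d = dict(attrs)
            self.images.append({"alt": d.get("alt"), "title": d.get("title")})

html = ('<img src="chart.png" '
        'alt="Bar chart of 2008 search traffic by engine" '  # what the image shows
        'title="Search traffic by engine">')                 # short advisory tooltip
parser = ImgAttrParser()
parser.feed(html)
print(parser.images[0])
```

Note that the alt value describes the image's content (what a crawler or a screen reader gets), while the title stays short and advisory, mirroring the recommendations above.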
Rob Laporte

WebMama's Look at the Web: Help Me Create a Top 10 SEM List for Blogs.com - 0 views

  • My morning coffee is enjoyed over: http://www.searchengineland.com/, http://www.searchengineguide.com/, http://www.seroundtable.com/ - hope that helps!   At 22/10/08 17:07, Barry Schwartz said... Heh... I'd also vote for SERoundtable.com   At 23/10/08 12:22, Claudia Bruemmer said... http://searchengineland.com/, http://www.toprankblog.com/, http://www.seroundtable.com/, http://www.seomoz.org/blog, http://www.seobook.com/blog
Rob Laporte

Should you sculpt PageRank using nofollow? | MickMel SEO - 0 views

  • Should you sculpt PageRank using nofollow? I've seen a few posts (Dave Naylor, Joost de Valk) discussing this over the last few days and thought I'd share my view of it. Both posts bring up the same analogy, attributed to Matt Cutts: "Nofollowing your internals can affect your ranking in Google, but it's a 2nd order effect. My analogy is: suppose you've got $100. Would you rather work on getting $300, or would you spend your time planning how to spend your $100 more wisely? Spending the $100 more wisely is a matter of good site architecture (and nofollowing/sculpting PageRank if you want). But most people would benefit more from looking at how to get to the $300 level." While I agree in theory, I think that's a bit oversimplified. What if you could re-allocate your $100 more effectively in just a few minutes, then go try to raise it to $300? Sculpting PageRank is one of those things that can earn a nice benefit in a short period of time, but you can keep tweaking forever for progressively smaller gains. For example, you probably have links on your site for "log-in," "privacy policy," and other such pages. Go in and nofollow those. How long did that take? Two minutes? That alone probably brought as much benefit as going through every page and carefully sculpting things out would. Knock out a few of those links, then spend your time trying to work on getting $300.
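The $100 arithmetic above can be sketched directly. This assumes the model the post was written under (pre-2009), where the share of PageRank that would have flowed through nofollowed links is redistributed to the remaining followed links; the URLs and function name are illustrative, not from the article:

```python
def pagerank_per_link(page_rank, links, nofollowed):
    """PageRank passed to each followed link, under the pre-2009 model
    where nofollowed links' shares are redistributed to followed links."""
    followed = [link for link in links if link not in nofollowed]
    if not followed:
        return {}
    share = page_rank / len(followed)
    return {link: share for link in followed}

links = ["/products", "/blog", "/log-in", "/privacy-policy"]

# Without sculpting, all four links split the $100 evenly.
print(pagerank_per_link(100, links, nofollowed=set()))

# Two minutes of nofollowing the utility pages doubles what the
# important links receive.
print(pagerank_per_link(100, links, nofollowed={"/log-in", "/privacy-policy"}))
```

The point of the post survives the sketch: the cheap win is real but bounded, so further tweaking yields diminishing returns.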
Rob Laporte

NoFollow and PageRank Sculpting is it Worth the Effort - 0 views

  • For some websites, using nofollow and PageRank sculpting is a complete waste of time, energy, and resources. For other websites there may be some moderate benefit, and for some websites, ignoring PageRank sculpting may be costing you traffic and sales.
Rob Laporte

Deduping Duplicate Content - ClickZ - 0 views

  •  
    One interesting thing that came out of SES San Jose's Duplicate Content and Multiple Site Issues session in August was the sheer volume of duplicate content on the Web. Ivan Davtchev, Yahoo's lead product manager for search relevance, said "more than 30 percent of the Web is made up of duplicate content." At first I thought, "Wow! Three out of every 10 pages on the Web consist of duplicate content." My second thought was, "Sheesh, the Web is one tangled mess of equally irrelevant content." Small wonder trust and linkage play such significant roles in determining a domain's overall authority and consequent relevancy in the search engines. Three Flavors of Bleh: Davtchev went on to explain three basic types of duplicate content: 1. Accidental content duplication: this occurs when webmasters unintentionally allow content to be replicated through non-canonicalization (define), session IDs, soft 404s (define), and the like. 2. Dodgy content duplication: this primarily consists of replicating content across multiple domains. 3. Abusive content duplication: this includes scraper spammers, weaving or stitching (content mixed and matched to create "new" content), and bulk content replication. Fortunately, Greg Grothaus from Google's search quality team had already addressed the duplicate content penalty myth, noting that Google "tries hard to index and show pages with distinct information." It's common knowledge that Google uses a checksum-like method for initially filtering out replicated content. For example, most Web sites have a regular and a print version of each article. Google only wants to serve up one copy of the content in its search results, and which copy is predominantly determined by linking prowess. Because most print-ready pages are dead-end URLs sans site navigation, it's relatively simple to determine which page Google prefers to serve up in its search results. In exceptional cases of content duplication that Google perceives as an abusive attempt to manipula
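A checksum-style first pass of the kind described can be sketched in a few lines. This is a simplified illustration built on standard-library hashing, not Google's actual (non-public) method, and the URLs are hypothetical:

```python
import hashlib

def dedupe(pages):
    """Keep only the first URL seen for each distinct body of content."""
    seen = set()
    unique = []
    for url, body in pages:
        # Normalize whitespace so trivially reflowed copies hash identically.
        digest = hashlib.sha256(" ".join(body.split()).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(url)
    return unique

pages = [
    ("/article/42",       "How to dedupe content."),
    ("/article/42/print", "How to  dedupe content."),  # print version, same text
    ("/article/43",       "A different article."),
]
print(dedupe(pages))  # the print copy is filtered out
```

A real system would pick the canonical copy by link signals rather than by order seen, as the excerpt notes; first-seen order just keeps the sketch short.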
Rob Laporte

Google Changes Course on Nofollow - Search Engine Watch (SEW) - 0 views

  • This week at the SMX Advanced conference in Seattle, Cutts joined the discussion around nofollow during the duplicate content session. According to Outspoken Media's Lisa Barone: A debate broke out mid-session when Matt Cutts got involved about whether or not nofollow is still effective. Of course, as soon as it got hot, all search representatives got very tight-lipped about who said what and what they really meant. As far as I could tell, Matt Cutts did NOT say that they ignore nofollow, but he DID hint that it is less effective today than it used to be. Later, Cutts addressed the issue again in his You&A keynote. When asked about PageRank sculpting, Cutts said that it will still work, but not as well. Basically, using nofollow will still prevent PageRank from passing from the linking page through the nofollowed link. But that PageRank is no longer "saved" to be used by other links on the page. It just "evaporates," according to Cutts. Rand Fishkin at SEOmoz has some visual aids to help describe the process. This change mainly affects those SEOs who have tried to optimize their pages using the nofollow attribute for PageRank sculpting. It's safe to say that most site owners have no idea what PageRank sculpting is, which is probably a good thing, since it can quite easily be done wrong and cause more problems than it solves.
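The difference Cutts describes amounts to simple division. A minimal sketch, under the common simplifying assumption that a page splits its PageRank evenly across its outbound links (the function and numbers are illustrative):

```python
def pagerank_per_followed_link(page_rank, total_links, nofollowed, evaporates):
    """PageRank passed per followed link under the two models of nofollow."""
    followed = total_links - nofollowed
    if evaporates:
        # Post-change model: each link's share is fixed at 1/total_links;
        # the shares behind nofollowed links simply vanish.
        return page_rank / total_links
    # Pre-change model: nofollowed shares are redistributed, so the
    # followed links split the full amount.
    return page_rank / followed

# A page with PageRank 100, 10 outbound links, 5 of them nofollowed:
print(pagerank_per_followed_link(100, 10, 5, evaporates=False))  # old model
print(pagerank_per_followed_link(100, 10, 5, evaporates=True))   # new model
```

Under the old model sculpting concentrated the full 100 onto the 5 followed links; under the new model the other 5 shares evaporate, which is exactly why sculpting "still works, but not as well."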
Rob Laporte

SEO Challenges of Restructuring a Site - Search Engine Watch (SEW) - 0 views

  • When you must prioritize, here are a few techniques you can use to make sure you've truly covered the most important pages: Go into your Web analytics tool and identify the top pages receiving traffic. Go into Google Webmaster Tools and get a list of your external links, so you can identify all pages that have received external links. Use Yahoo Site Explorer to identify the top pages listed there. Site Explorer tends to list the most important pages first. Next, make sure that the search engines find your 301 redirects. While it's common advice to update your sitemap to the new site on day one, consider leaving the old site's sitemap in place for a period of time, to help the search engines see the 301 redirects (hat tip to Stephan Spencer for this idea). How long should you leave it that way? That depends primarily on the size of your site and the number of pages that the search engines crawl on a daily basis. At a minimum, make sure that the prioritized pages list we developed above has been thoroughly crawled.
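The three prioritization steps above boil down to merging page lists from analytics, Webmaster Tools, and Site Explorer into one deduplicated, ordered list to check against your 301 redirects. A minimal sketch with hypothetical URLs:

```python
def prioritized_pages(analytics_top, externally_linked, site_explorer_top):
    """Merge the three page lists, deduplicated, preserving first-seen order."""
    seen = set()
    ordered = []
    for source in (analytics_top, externally_linked, site_explorer_top):
        for url in source:
            if url not in seen:
                seen.add(url)
                ordered.append(url)
    return ordered

priority = prioritized_pages(
    ["/", "/pricing"],            # top pages by traffic (analytics)
    ["/blog/launch", "/pricing"], # pages with external links (Webmaster Tools)
    ["/", "/about"],              # top pages per Yahoo Site Explorer
)
print(priority)
```

Each URL in the resulting list is one whose old address must 301-redirect cleanly to the new site, and whose redirect you confirm the engines have crawled.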
Rob Laporte

Google Loses "Backwards Compatibility" On Paid Link Blocking & PageRank Sculpting - 0 views

  •  
    Google I/O: New Advances In The Searchability of JavaScript and Flash, But Is It Enough?
Rob Laporte

Top Signs Your Site Isn't Ready for Prime Time, Part 2 - Search Engine Watch (SEW) - 0 views

  • You can find domains that may be available by checking out these resources: Go Daddy has an expired domain name auction. Justdropped.com lets you search for deleted domain names. FreshDrop.net lets you search all of the domain name auctions.