DISC Inc: Group items matching "Linking" in title, tags, annotations or url

What Are Entities & Why They Matter for SEO - Search Engine Journal - 1 views

  • Contribution is determined by external signals (e.g., links, reviews) and is basically a measure of an entity’s contribution to a topic
    • jack_fox: It's unclear how Notability is different from Contribution
  • Each entity is assigned a unique identifier.
  • Determining the most likely entity being requested by a searcher can be done by establishing which entity appears the most times in the top 10 results (see the sketch after this list).
  • There is an entity database.
  • Entities are ranked by a quality score that may include freshness, previous selections by users, incoming links, and possibly outgoing links.
  • Stronger sites like Wikipedia provide a stronger relationship between entities. For example, a Wikipedia page discussing Ronald Reagan as the president of the U.S. would connect the two entities of “Ronald Reagan” and “President” far more strongly than their mentions in this article, which has topical authority related to SEO and marketing.
  • Are you a realtor in Miami? Get links on realty sites but also on sites related to Miami.
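The disambiguation heuristic highlighted above (pick the entity mentioned most often across the top 10 results) can be prototyped in a few lines. This is a toy sketch for intuition only, not Google's implementation; the entities, aliases, and result snippets are made-up placeholders.

```python
# Toy disambiguation: pick the candidate entity mentioned most often across
# the top 10 result texts. Entities, aliases, and snippets are placeholders.

from collections import Counter

def most_likely_entity(result_texts, entity_aliases):
    """entity_aliases maps an entity ID to the surface forms that count as a mention."""
    counts = Counter()
    for text in result_texts:
        lowered = text.lower()
        for entity, aliases in entity_aliases.items():
            counts[entity] += sum(lowered.count(alias.lower()) for alias in aliases)
    return counts.most_common(1)[0][0]

top_results = [
    "Ronald Reagan was the 40th president of the United States.",
    "Reagan National Airport serves the Washington, D.C. area.",
    # ... eight more result snippets would go here
]
candidates = {
    "Ronald Reagan (person)": ["ronald reagan", "president reagan"],
    "Reagan National Airport (place)": ["reagan national airport", "national airport"],
}
# print(most_likely_entity(top_results, candidates))
```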

Google Link Schemes Adds Clause About Contractually Requiring Follow Links Against Guid... - 0 views

  • make sure you do not legally require people to link to you with a follow link. Just leave those out of your terms of service.

How to boost search rankings using only your internal linking strategy - Search Engine ... - 0 views

  • The more links a page receives, the more value Google gives it.
  • Google now considers that 1000 is a “reasonable number” of links per page.
  • Links from fresh content pass fresh value, and can, therefore, signal new content to Google, helping new pages get crawled.

The Real Impact of Mobile-First Indexing & The Importance of Fraggles - Moz - 0 views

  • We have also recently discovered that Google has begun to index URLs with a # jump-link, after years of not doing so, and is reporting on them separately from the primary URL in Search Console. As you can see below from our data, they aren't getting a lot of clicks, but they are getting impressions. This is likely because of the low average position. 
  • Start to think of GMB as a social network or newsletter — any assets that are shared on Facebook or Twitter can also be shared on Google Posts, or at least uploaded to the GMB account.
  • You should also investigate the current Knowledge Graph entries that are related to your industry, and work to become associated with recognized companies or entities in that industry. This could be from links or citations on the entity websites, but it can also include being linked by third-party lists that give industry-specific advice and recommendations, such as being listed among the top competitors in your industry ("Best Plumbers in Denver," "Best Shoe Deals on the Web," or "Top 15 Best Reality TV Shows"). Links from these posts also help but are not required, especially if you can get your company name on enough lists with the other top players.
    Verify that any links or citations from authoritative third-party sites like Wikipedia, Better Business Bureau, industry directories, and lists are all pointing to live, active, relevant pages on the site, and not going through a 301 redirect.
    While this is just speculation and not a proven SEO strategy, you might also want to make sure that your domain is correctly classified in Google’s records by checking the industries that it is associated with. You can do so in Google’s MarketFinder tool. Make updates or recommend new categories as necessary. Then, look into the filters and relationships that are given as part of Knowledge Graph entries and make sure you are using the topic and filter words as keywords on your site.
  • The biggest problem for SEOs is the missing organic traffic, but also the fact that current methods of tracking organic results generally don’t show whether things like Knowledge Graph, Featured Snippets, PAA, Found on the Web, or other types of results are appearing at the top of the query or somewhere above your organic result. Position one in organic results is not what it used to be, nor is anything below it, so you can’t expect those rankings to drive the same traffic. If Google is going to be lifting and representing everyone’s content, the traffic will never arrive at the site and SEOs won’t know if their efforts are still returning the same monetary value. This problem is especially poignant for publishers, who have only been able to sell advertising on their websites based on the expected traffic that the website could drive.
    The other thing to remember is that results differ, especially on mobile, which varies from device to device (generally based on screen size) but can also vary based on the phone OS. Results can also change significantly based on the location or the language settings of the phone, and they definitely do not always match desktop results for the same query. Most SEOs don't know much about the reality of their mobile search results because most SEO reporting tools still focus heavily on desktop results, even though Google has switched to Mobile-First. SEO tools also generally only report on rankings from one location (the location of their servers) rather than being able to test from different locations.
  • The only thing that good SEOs can do to address this problem is to use tools like the MobileMoxie SERP Test to check what rankings look like on top keywords from all the locations where their users may be searching. While the free tool only provides results for one location at a time, subscribers can test search results in multiple locations, based on a service-area radius or an uploaded CSV of addresses. The tool has integrations with Google Sheets and a connector for Data Studio to help with SEO reporting, and APIs are also available for deeper integrations in content editing tools, dashboards, and other SEO tools.
  • Fraggles and Fraggled indexing re-frames the switch to Mobile-First Indexing, which means that SEOs and SEO tool companies need to start thinking mobile-first — i.e. the portability of their information. While it is likely that pages and domains still carry strong ranking signals, the changes in the SERP all seem to focus less on entire pages, and more on pieces of pages, similar to the ones surfaced in Featured Snippets, PAAs, and some Related Searches. If Google focuses more on windowing content and being an "answer engine" instead of a "search engine," then this fits well with their stated identity, and their desire to build a more efficient, sustainable, international engine.

Building Your Own Link Profile Based on Google's Data - Go Fish Digital - 0 views

  • Begin downloading your backlinks from Search Console on a weekly basis and make it a habit.
  • Every now and then, Google rotates a small batch of links into your sample files. If you keep downloading them for long enough, you’ll start seeing a bigger picture of exactly what your link profile looks like.
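Following that advice is easier if each weekly export is merged into one cumulative, de-duplicated file. A minimal sketch, assuming the weekly downloads are CSVs with the linking page in the first column; the folder name and column layout are assumptions, not a documented Search Console format.

```python
# Merge weekly Search Console "sample links" CSV exports into one
# cumulative, de-duplicated link profile. Column layout is assumed.

import csv
import glob

def build_cumulative_profile(export_dir, output_path):
    seen = set()
    for path in sorted(glob.glob(f"{export_dir}/*.csv")):
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.reader(f)
            next(reader, None)  # skip the header row
            for row in reader:
                if row:
                    seen.add(row[0].strip())  # linking URL assumed in column 0
    with open(output_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["linking_page"])
        for url in sorted(seen):
            writer.writerow([url])
    return len(seen)

# Example: build_cumulative_profile("gsc_exports", "cumulative_links.csv")
```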

Everything You Need to Know about Google PageRank in 2020 - 0 views

  • The likelihood of a link being clicked is a key influencer of PageRank and is referenced by Google's reasonable surfer patent

Google's Penguin Algorithm May Not Just Ignore Links, It May Target Whole Site - 0 views

  • "Is the Penguin penalty still relevant at all, or are less relevant/spammy/toxic backlinks more or less ignored by the ranking algorithm these days?" John replied that in most cases Google will just ignore the links, but in some cases, where there is a clear pattern of spammy and manipulative links by the site, Penguin may decide to simply distrust the whole site. John said "I'd say it's a mix of both" when he answered that question, meaning Google Penguin can both ignore links and demote sites if necessary. This happens "if our systems recognize that they can't isolate and ignore these links across a website." John added that if Google can see a "very strong pattern there," the Google "algorithms" can lose "trust with this website" and you may see a "drop in the visibility there."
  • "Penguin 4.0"

How to Detect (and Deflect) Negative SEO Attacks - 0 views

shared by jack_fox on 10 Nov 21
  • John Mueller, Search Advocate at Google, basically calls negative SEO a meme these days
  • Gary Illyes, another Google representative, has made similar statements: [I’ve] looked at hundreds of supposed cases of negative SEO, but none have actually been the real reason a website was hurt
  • here’s what we think: Negative SEO can still work, but it’s much less of a problem than it used to be.
  • The volume approach: Blasting thousands upon thousands of low-quality links at your site.
    The over-optimized anchor text approach: Pointing lots of links with exact-match anchor text at a ranking page to give it an unnatural anchor text ratio.
  • Links from 0–30 DR domains will always be more prevalent. Some of them are spammy. It’s normal and nothing to worry about.
  • If you see an abnormally high percentage of keyword-rich anchors, it could be a sign of bad link-building practices or, indeed, a sneaky link-based negative SEO attack.
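One practical check for that last point is to measure the share of exact-match anchors in a backlink export. A small sketch, assuming you already have the anchor texts as a list; the keywords and the 30% threshold are illustrative, not official figures.

```python
# Flag a page whose backlink anchors are suspiciously keyword-rich.
# The anchor list and keyword set are illustrative placeholders.

def exact_match_ratio(anchors, target_keywords):
    """Share of anchors that exactly match one of the money keywords."""
    if not anchors:
        return 0.0
    targets = {k.lower().strip() for k in target_keywords}
    exact = sum(1 for a in anchors if a.lower().strip() in targets)
    return exact / len(anchors)

anchors = ["cheap blue widgets", "example.com", "click here", "cheap blue widgets"]
ratio = exact_match_ratio(anchors, ["cheap blue widgets"])
if ratio > 0.3:  # threshold is a judgment call, not a Google number
    print(f"Warning: {ratio:.0%} exact-match anchors - investigate recent links")
```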

Google Says Using Internal Linking Can Help Google Trust Your Site More Over Time - 0 views

  • John Mueller explains that new sites can use internal linking to funnel Google through the most trusted, highest-quality pages on the site to earn trust, and then get Google to crawl more and more of those pages over time once that trust has been earned.

Internal PageRank Optimization Strategies - Portent - 0 views

  • On sites with hundreds of pagination pages, a blog post might rely on a pagination page that is 25 clicks away from the homepage for its only internal link. Category, tag, and author pages are effective ways to provide an alternative click path that is much shorter. So long as tag and category pages are well-formed and useful as navigation for users, they should be indexed.
  • By carefully controlling which filters are indexable in a faceted navigation
  • Estimating Internal PageRank With Screaming Frog Link Score
  • "They have a guide to Link Score here"

What Is Link Score? - Screaming Frog - 0 views

  • "Link Score Introduction"

Inbound links: Official Google Webmaster Central Blog - 0 views

  • So how can you engage more users and potentially increase merit-based inbound links? Many webmasters have written about their success in growing their audience. We've compiled several ideas and resources that can improve the web for all users.
    Create unique and compelling content on your site and the web in general. Start a blog: make videos, do original research, and post interesting stuff on a regular basis. If you're passionate about your site's topic, there are lots of great avenues to engage more users. If you're interested in blogging, see our Help Center for specific tips for bloggers.
  • How they factor into ranking. Most importantly: Google appropriately flows PageRank and related signals through 301 redirects!

Evaluating Google's Response To Mapspam Reports - 0 views

  • Conclusions:
    * Local business owners seem to be confused about what actually constitutes spam, but can you blame them? The world of the Local search engines is often confusing even to those of us who study them on a daily basis!
    * Google's creation of a public forum for reporting anomalies in Maps has helped a lot of businesses recover traffic lost via Maps, and has probably helped Google identify weaknesses in its own algorithm as well. The responsiveness of the Maps team has been relatively admirable, even without providing verbal confirmation in the thread that changes have been made. (Of course, business owners whose situation hasn't been addressed are irate over the lack of response...)
    * The on-again/off-again bulk upload feature of Google Maps seems to be a particular favorite tool of mapspammers.
    * Local business owners: claim your listing at Google to avoid being victimized by hijackers and to decrease the likelihood of conflation with someone else's listing. If you don't have a website, direct your Local Business Listing at Google to one of your listings featuring the same information on another portal, such as Yahoo, Citysearch, or Yelp.
    * The large percentage of reported record conflations also underlines the importance of giving Google a strong signal of your business information (i.e. spiderable HTML address and phone number) on your own website. The more closely Google can associate that particular information with your business, the lower the chance of identifying someone else's business with the same information.
    In all honesty, I was surprised that the total number of bona-fide instances of spam reported in two months was so low, and I'm not quite sure what to make of it. It's possible that the quality of Local results has improved dramatically since the advent of the 10-pack in January. However, more likely is that the typical local business owner doesn't know where to report possible spam. It'll be interesting to see whether

Appropriate uses of nofollow tag -- popular pick - Crawling, indexing, and ranking | Go... - 0 views

  • What are some appropriate ways to use the nofollow tag? One good example is the home page of expedia.com. If you visit that page, you'll see that the "Sign in" link is nofollow'ed. That's a great use of the tag: Googlebot isn't going to know how to sign into expedia.com, so why waste that PageRank on a page that wouldn't benefit users or convert any new visitors? Likewise, the "My itineraries" link on expedia.com is nofollow'ed as well. That's another page that wouldn't really convert well or have any use except for signed in users, so the nofollow on Expedia's home page means that Google won't crawl those specific links. Most webmasters don't need to worry about sculpting the flow of PageRank on their site, but if you want to try advanced things with nofollow to send less PageRank to copyright pages, terms of service, privacy pages, etc., that's your call.
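If you experiment with nofollow in this way, it helps to audit which links on a page actually carry the attribute. A standard-library sketch; the sample HTML is a placeholder rather than the real expedia.com markup.

```python
# List the links on an HTML page and whether they carry rel="nofollow".
# Standard library only; the sample HTML is a placeholder.

from html.parser import HTMLParser

class NofollowAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []  # (href, is_nofollow) tuples

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        rel = (attrs.get("rel") or "").lower()
        if href:
            self.links.append((href, "nofollow" in rel))

sample_html = '<a href="/sign-in" rel="nofollow">Sign in</a> <a href="/hotels">Hotels</a>'
parser = NofollowAudit()
parser.feed(sample_html)
for href, nofollow in parser.links:
    print(href, "nofollow" if nofollow else "followed")
```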

Deduping Duplicate Content - ClickZ - 0 views

  • One interesting thing that came out of SES San Jose’s Duplicate Content and Multiple Site Issues session in August was the sheer volume of duplicate content on the Web. Ivan Davtchev, Yahoo’s lead product manager for search relevance, said “more than 30 percent of the Web is made up of duplicate content.” At first I thought, “Wow! Three out of every 10 pages consist of duplicate content on the Web.” My second thought was, “Sheesh, the Web is one tangled mess of equally irrelevant content.” Small wonder trust and linkage play such significant roles in determining a domain’s overall authority and consequent relevancy in the search engines.
    Three Flavors of Bleh. Davtchev went on to explain three basic types of duplicate content:
    1. Accidental content duplication: This occurs when Webmasters unintentionally allow content to be replicated by non-canonicalization (define), session IDs, soft 404s (define), and the like.
    2. Dodgy content duplication: This primarily consists of replicating content across multiple domains.
    3. Abusive content duplication: This includes scraper spammers, weaving or stitching (mixed and matched content to create “new” content), and bulk content replication.
    Fortunately, Greg Grothaus from Google’s search quality team had already addressed the duplicate content penalty myth, noting that Google “tries hard to index and show pages with distinct information.” It’s common knowledge that Google uses a checksum-like method for initially filtering out replicated content. For example, most Web sites have a regular and print version of each article. Google only wants to serve up one copy of the content in its search results, which is predominately determined by linking prowess. Because most print-ready pages are dead-end URLs sans site navigation, it’s relatively simple to equate which page Google prefers to serve up in its search results. In exceptional cases of content duplication that Google perceives as an abusive attempt to manipula
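The "checksum-like method" mentioned above can be illustrated with a toy filter: normalize the visible text and hash it, so a print-ready copy of an article collides with the original. This is a simplification for intuition, not what Google actually runs.

```python
# Toy duplicate filter: normalize page text and hash it, so exact and
# near-exact copies (e.g. a print version of an article) collide.

import hashlib
import re

def content_checksum(text):
    """Lowercase, strip punctuation, and collapse whitespace before hashing."""
    normalized = re.sub(r"[^a-z0-9 ]", " ", text.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

article = "Duplicate content makes up a large share of the Web."
print_version = "Duplicate   content makes up a large share of the Web!"
assert content_checksum(article) == content_checksum(print_version)
```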

Link Building Strategies - The Complete List - 0 views

  • "Rand Fishkin said"

Official Google Webmaster Central Blog: Handling legitimate cross-domain content duplic... - 0 views

  • Use the cross-domain rel="canonical" link element: There are situations where it's not easily possible to set up redirects. This could be the case when you need to move your website from a server that does not feature server-side redirects. In a situation like this, you can use the rel="canonical" link element across domains to specify the exact URL of whichever domain is preferred for indexing. While the rel="canonical" link element is seen as a hint and not an absolute directive, we do try to follow it where possible.
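A quick way to sanity-check such a setup is to fetch a page and confirm that its rel="canonical" points at the preferred domain. A standard-library sketch; the URLs are placeholders, and production use would need error handling and crawl politeness.

```python
# Check which host a page's rel="canonical" points at.
# Standard library only; URLs are placeholders.

from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonical = attrs.get("href")

def canonical_host(url):
    html = urlopen(url).read().decode("utf-8", errors="replace")
    finder = CanonicalFinder()
    finder.feed(html)
    return urlparse(finder.canonical).netloc if finder.canonical else None

# Example (placeholder domains):
# print(canonical_host("https://old-domain.example/page"))  # expect the preferred domain's host
```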