Web-based academic search engines such as CiteSeer(X), Google Scholar, Microsoft Academic Search and SciPlore have introduced a new era of search for academic articles.
With classic digital libraries, researchers have no influence on whether their articles are indexed: either they have published in a venue covered by the library, in which case the article is available there, or they have not.
Citation counts are commonly used to evaluate the impact and performance of researchers and their articles. Nowadays, citation counts obtained from Web-based academic search engines such as Google Scholar are also used for these impact evaluations.
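As a concrete illustration of this kind of citation-based evaluation, here is a minimal sketch that computes a total citation count and the h-index, one widely used citation metric. All article titles and citation counts below are invented for the example.

```python
# Hypothetical citation data: each article mapped to the number of
# times it has been cited. These figures are made up.
articles = {
    "Article A": 25,
    "Article B": 8,
    "Article C": 3,
}

# Simplest metric: the total number of citations.
total_citations = sum(articles.values())

# The h-index: the largest h such that the author has h articles,
# each cited at least h times.
counts = sorted(articles.values(), reverse=True)
h_index = max((i + 1 for i, c in enumerate(counts) if c >= i + 1), default=0)
```

Because such metrics are computed directly from whatever citations the search engine has indexed, any spam or low-value citations that slip into the index inflate them automatically.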
Most academic search engines offer features such as showing articles cited by an article, or showing articles related to a given article. Citation spam could bring more articles by manipulating researchers onto more of these lists.
It is apparent that a citation from a PowerPoint presentation or thesis proposal has less value than a citation in a peer-reviewed academic article. However, Google does not distinguish on its website between these different origins of citations [8].
Google Scholar indexes Wikipedia articles when the article is available as PDF on a third party website.
That means, again, that not all citations on Google Scholar are what we call ‘full-value’ citations.
As long as Google Scholar applies only very rudimentary or no mechanisms to detect and prevent spam, citation counts should be used with care to evaluate articles’ and researchers’ impact.
However, Google Scholar is a Web-based academic search engine and as with all Web-based search engines, the linked content should not be trusted blindly.
Search queries, the words that users type into the search box, carry extraordinary value.
In addition to making content available to search engines, SEO also helps boost rankings so that content will be placed where searchers will more readily find it.
Black-hat services are not illegal, but trafficking in them risks the wrath of Google. The company draws a pretty thick line between techniques it considers deceptive and “white hat” approaches, which are offered by hundreds of consulting firms and are legitimate ways to increase a site’s visibility.
In deriving organic results, Google’s algorithm takes into account dozens of criteria; one crucial factor is links from one site to another.
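To make the link factor concrete, here is a toy sketch in the spirit of PageRank: it turns links between pages into a ranking score by repeatedly passing each page's score along its outgoing links. This is only an illustration of the idea, not Google's actual algorithm, and all page names are invented.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                     # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                # share score across its links
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Invented three-page site: "home" receives the most incoming links.
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
scores = pagerank(links)
```

Pages that attract links from other well-ranked pages end up with higher scores, which is also why link schemes are a classic target for search-engine manipulation.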
Cahill, K., & Chalut, R. (2009). Optimal Results: What Libraries Need to Know About Google and Search Engine Optimization. The Reference Librarian, 50(3), 234-247. doi:10.1080/02763870902961969 (You will need to be logged into Curtin Library to access this.)
When we refer to the deep Web, we are usually talking about the following:
The content of databases.
Non-text files such as multimedia, images, software, and documents in formats such as Portable Document Format (PDF) and Microsoft Word.
Content available on sites protected by passwords or other restrictions.
Special content not presented as Web pages, such as full-text articles and books.
Dynamically changing, updated content.
Let's consider adding new content to our list of deep Web sources. For example:
Blog postings
Comments
Discussions and other communication activities on social networking sites, for example Facebook and Twitter
Bookmarks and citations stored on social bookmarking sites
Tips for dealing with deep Web content
Vertical search
Use a general search engine to locate a vertical search engine.