
IMT122 OUA Group: Group items tagged "google"


Joanne S

Academic Search Engine Spam and Google Scholar's Resilience Against it

  • Web-based academic search engines such as CiteSeer(X), Google Scholar, Microsoft Academic Search and SciPlore have introduced a new era of search for academic articles.
  • With classic digital libraries, researchers have no influence on getting their articles indexed. They either have published in a publication indexed by a digital library, and then their article is available in that digital library, or they have not
  • citation counts obtained from Google Scholar are sometimes used to evaluate the impact of articles and their authors.
  • ‘Academic Search Engine Optimization’ (ASEO)
  • Citation counts are commonly used to evaluate the impact and performance of researchers and their articles.
  • Nowadays, citation counts from Web-based academic search engines are also used for impact evaluations.
  • Most academic search engines offer features such as showing articles cited by an article, or showing related articles to a given article. Citation spam could bring more articles from manipulating researchers onto more of these lists.
  • It is apparent that a citation from a PowerPoint presentation or thesis proposal has less value than a citation in a peer reviewed academic article. However, Google does not distinguish on its website between these different origins of citations[8].
  • Google Scholar indexes Wikipedia articles when the article is available as PDF on a third party website.
  • That means, again, that not all citations on Google Scholar are what we call ‘full-value’ citations.
  • As long as Google Scholar applies only very rudimentary or no mechanisms to detect and prevent spam, citation counts should be used with care to evaluate articles’ and researchers’ impact.
  • However, Google Scholar is a Web-based academic search engine and as with all Web-based search engines, the linked content should not be trusted blindly.
Joanne S

How to solve impossible problems: Daniel Russell's awesome Google search techniques

  • Most of what you know about Boolean is wrong.
  • Think about how somebody else would write about the topic.
  • Use language tools.
  • Use quotes to search for phrases.
  • Force Google to include search terms.
  • intext:"San Antonio" intext:Alamo
  • It forces Google to show results with the phrase “San Antonio” and the word Alamo. You won’t get results that are missing either search term.
  • Minus does not equal plus.
  • “Control F” is your friend
  • Limit the time frame.
  • Use this keyboard shortcut to find a word or phrase on any web page.
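
Two illustrative queries combining the operators above (hypothetical examples in the spirit of the intext: sample, not queries taken from the article):

    intext:"San Antonio" intext:Alamo -hotel       forces both terms to appear; the minus operator drops pages mentioning hotels
    "thesis proposal" intext:"Google Scholar"      requires the exact phrase and forces the second term to be present
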
Joanne S

Eli Pariser: Beware online "filter bubbles" | Video on TED.com

    • Joanne S
       
      Mark Zuckerberg, a journalist was asking him a question about the news feed. And the journalist was asking him, "Why is this so important?" And Zuckerberg said, "A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa." And I want to talk about what a Web based on that idea of relevance might look like.

      So when I was growing up in a really rural area in Maine, the Internet meant something very different to me. It meant a connection to the world. It meant something that would connect us all together. And I was sure that it was going to be great for democracy and for our society. But there's this shift in how information is flowing online, and it's invisible. And if we don't pay attention to it, it could be a real problem.

      So I first noticed this in a place I spend a lot of time -- my Facebook page. I'm progressive, politically -- big surprise -- but I've always gone out of my way to meet conservatives. I like hearing what they're thinking about; I like seeing what they link to; I like learning a thing or two. And so I was surprised when I noticed one day that the conservatives had disappeared from my Facebook feed. And what it turned out was going on was that Facebook was looking at which links I clicked on, and it was noticing that, actually, I was clicking more on my liberal friends' links than on my conservative friends' links. And without consulting me about it, it had edited them out. They disappeared.

      So Facebook isn't the only place that's doing this kind of invisible, algorithmic editing of the Web. Google's doing it too. If I search for something, and you search for something, even right now at the very same time, we may get very different search results. Even if you're logged out, one engineer told me, there are 57 signals that Google looks at -- everything from what kind of computer you're on to what kind of browser you're using to where you're located -- that it uses to personally tailor your query results.
Michelle Pitman

Google Plus Beginner's Guide

  •  
    Handy for those who are not yet comfortable using G+ 
Joanne S

Taylor & Francis Online :: Optimal Results: What Libraries Need to Know About Google and Search Engine Optimization

  •  
    Cahill, K., & Chalut, R. (2009). Optimal Results: What Libraries Need to Know About Google and Search Engine Optimization. The Reference Librarian, 50(3), 234-247. doi:10.1080/02763870902961969 (You will need to be logged into Curtin Library to access this).
Joanne S

Search Optimization and Its Dirty Little Secrets - NYTimes.com

  • black-hat services are not illegal, but trafficking in them risks the wrath of Google. The company draws a pretty thick line between techniques it considers deceptive and “white hat” approaches, which are offered by hundreds of consulting firms and are legitimate ways to increase a site’s visibility.
  • In deriving organic results, Google’s algorithm takes into account dozens of criteria,
  • one crucial factor in detail: links from one site to another.
Joanne S

Reprogramming The Museum | museumsandtheweb.com

  • Powerhouse experience
  • other APIs
  • Flickr API
  • Thomson Reuters OpenCalais
  • OCLC's WorldCat
  • Before we began our work on the Commons on Flickr, some museum colleagues were concerned that engaging with the Flickr community would increase workloads greatly. While the monitoring of the site does take some work, the value gained via the users has far outweighed any extra effort. In some cases, users have dated images for us.
  • In subsequent use of the Flickr API, we appropriated tags users had added to our images, and now include them in our own collection database website (OPAC). We also retrieved geo-location data added to our images for use in third party apps like Sepiatown and Layar.
  • In our case the purpose of creating an API was to allow others to use our content.
  • So consider the questions above not in the context of should we or shouldn't we put our data online (via an API or otherwise) but rather in the context of managing expectations of the data's uptake.
  • Steps to an API
  • several important things which had to happen before we could provide a public web API. The first was the need to determine the licence status of our content.
  • The drive to open up the licensing of our content came when, on a tour we conducted of the Museum's collection storage facilities for some Wikipedian
  • This prompted Seb Chan to make the changes required to make our online collection documentation available under a mix of Creative Commons licences. (Chan, April 2009)
  • Opening up the licensing had another benefit: it meant that we had already cleared one hurdle in the path to creating an API.
  • The Government 2.0 Taskforce (http://gov2.net.au/about/) was the driver leading us to take the next step.
  • "increasing the openness of government through making public sector information more widely available to promote transparency, innovation and value adding to government information"
  • the first cultural institution in Australia to provide a bulk data dump of any sort.
  • The great thing about this use is that it exposes the Museum and its collection to the academic sector, enlightening them regarding potential career options in the cultural sector.
  • I will briefly mention some of the technical aspects of the API now for those interested. In line with industry best practice the Powerhouse Museum is moving more and more to open-source based hosting and so we chose a Linux platform for serving the API
  • Images are served from the cloud as we had already moved them there for our OPAC, to reduce outgoing bandwidth from the Museum's network.
  • Once we had the API up and running, we realised it would not be too much work to make a WordPress plug-in which allowed bloggers to add objects from our collection to their blogs or blog posts. Once built, this was tested internally on our own blogs. Then in early 2011 we added it to the WordPress plugin directory: http://wordpress.org/extend/plugins/powerhouse-museum-collection-image-grid/
  • One of the main advantages the API has over the data dump is the ability to track use.
  • It is also worth noting that since the API requests usually do not generate pages that are rendered in a browser it is not possible to embed Google Analytics tracking scripts in the API's output.
  • By requiring people to sign up using a valid email address before requesting an API key we are able to track API use back to individuals or organisations.
  • Concerns that people would use the API inappropriately were dealt with by adding a limit to the number of requests per hour each key can generate
  • An Application Programming Interface (API) is a particular set of rules and specifications that a software program can follow to access and make use of the services and resources provided by another particular software program
  •  
    Dearnley, L. (2011). Reprogramming the Museum. In Museums and the Web 2011: Proceedings. Presented at Museums and the Web 2011, Toronto: Archives & Museum Informatics. Retrieved from http://conference.archimuse.com/mw2011/papers/reprogramming_the_museum
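
A minimal sketch of the keyed, rate-limited setup the annotations above describe, written in Python with Flask. The route name, the hourly cap, and the response shape are assumptions for illustration; the paper does not document the Powerhouse Museum's actual implementation.

from collections import defaultdict
from datetime import datetime, timezone

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

HOURLY_LIMIT = 500                # assumed cap; the paper does not state the real figure
usage = defaultdict(int)          # (api_key, hour bucket) -> number of requests

@app.route("/api/v1/objects")
def list_objects():
    key = request.args.get("api_key")
    if not key:
        abort(401)                # keys are issued when users register with a valid email address
    bucket = (key, datetime.now(timezone.utc).strftime("%Y-%m-%d-%H"))
    usage[bucket] += 1            # per-key counts let use be tracked back to individuals or organisations
    if usage[bucket] > HOURLY_LIMIT:
        abort(429)                # refuse further requests this hour once the limit is exceeded
    # ...query the collection store here; a stub response stands in for real records
    return jsonify({"records": [], "requests_this_hour": usage[bucket]})

Counting requests in memory like this is only for illustration; a real service would persist the counts and log them for the usage tracking the paper describes.
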
Joanne S

Archives & Museum Informatics: Museums and the Web 2009: Paper: Gow, V. et al., Making New Zealand Content Easier to Find, Share and Use

  • New Zealand content difficult to discover, share and use
  • DigitalNZ is testing ways to create digital content, collect and share existing digital content, and build smart, freely available search and discovery tools.
  • Memory Maker blurs the line between consuming and producing content. What’s sometimes called ‘remix culture’ […]. Digital technologies have opened up new possibilities for young people to access and represent the stories of their culture by taking sound and images and recombining them to say something new, something relevant to them. (Sarah Jones, Lunch Box: Software & digital media for learning, November 2008) http://lunchbox.org.nz/2008/11/get-coming-home-on-your-schools-website-wiki-or-blog/)
  • The Memory Maker provides a taste of what is possible when collecting institutions modernise their practices for keeping and managing copyright information, using Creative Commons licenses or ‘no known copyright’ statements.
  • Learning about ‘hyperlinks’ today, these young New Zealanders will be the developers and creators of tomorrow.
  • The full set of contributions is accessible through a Coming Home search tool, occasionally on a google-like hosted search page (Figure 5), but more often through a search widget embedded on many New Zealand Web sites (Figure 6).
  • Digital New Zealand is developing and testing solutions that showcase what’s possible when we really focus on improving access to and discovery of New Zealand content.
  • Technically, the Digital New Zealand system is in three parts: a backend, a metadata store, and a front end.
  • The coolest thing to be done with your data will be thought of by someone else
  • “an API is basically a way to give developers permission to hack into your database”.
  •  
    Gow, V., Brown, L., Johnston, C., Neale, A., Paynter, G., & Rigby, F. (2009). Making New Zealand Content Easier to Find, Share and Use. In Museums and the Web 2009. Presented at Museums and the Web 2009, Toronto: Archives & Museum Informatics. Retrieved from http://www.archimuse.com/mw2009/papers/gow/gow.html
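
For the consuming side, a rough Python sketch of querying an aggregated search API of the kind DigitalNZ exposes is shown below. The endpoint URL, parameter names, and response fields are assumptions for illustration, not the documented DigitalNZ interface.

import requests

SEARCH_URL = "https://api.digitalnz.org/records.json"   # assumed endpoint, not verified against the docs
API_KEY = "your-api-key"

def search_records(text, per_page=10):
    """Send a keyword search to the aggregator and return the titles of matching records."""
    params = {"api_key": API_KEY, "text": text, "per_page": per_page}
    response = requests.get(SEARCH_URL, params=params, timeout=10)
    response.raise_for_status()
    payload = response.json()
    # Assumed response shape: {"search": {"results": [{"title": ...}, ...]}}
    return [record.get("title") for record in payload.get("search", {}).get("results", [])]

if __name__ == "__main__":
    for title in search_records("Coming Home"):
        print(title)
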
Joanne S

Group Video Chat Showdown: Google Hangouts and AnyMeeting Come Out on Top | PCWorld Bus...

  •  
    Useful descriptions of group chat apps  (useful for A3)
Joanne S

The Strongest Link: Libraries and Linked Data

  • For many years now we have been hearing that the semantic web is just around the corner
  • most libraries, however, is that we are still grappling with 2.0 technologies.
  • By marking up information in standardized, highly structured formats like Resource Description Framework (RDF), we can allow computers to better "understand" the meaning of content
  • For most librarians this concept is fairly easy to understand. We have been creating highly structured machine-readable metadata for many years
  • By linking our data to shared ontologies that describe the properties and relationships of objects, we begin to allow computers not just to "understand" content, but also to derive new knowledge by "reasoning" about that content.
  • the term "Semantic Web" to refer to a full suite of W3C standards including RDF, SPARQL query language, and OWL web ontology language.
  • This article will outline some of the benefits that linked data could have for libraries, will discuss some of the non-technical obstacles that we face in moving forward, and will finally offer suggestions for practical ways in which libraries can participate in the development of the semantic web.
  • What benefits will libraries derive from linked data?
  • Having a common format for all data would be a huge boon for interoperability and the integration of all kinds of systems.
  • The linking hub would expose a network of tightly linked information from publishers, aggregators, book and journal vendors, subject authorities, name authorities, and other libraries.
  • semantic search could take us far beyond the current string-matching capabilities of search engines like Google.
  • What are the major obstacles for libraries?
  • A fundamental challenge for the development of linked data in libraries is lack of awareness.
  • Linked Data becomes more powerful the more of it there is.
  • Until there is enough linking between collections and imaginative uses of data collections there is a danger librarians will see linked data as simply another metadata standard, rather than the powerful discovery tool it will underpin.
  • a more practical concern is that changing the foundation of library metadata is no trivial task.
  • Privacy is a huge concern for many interested in linked data.
  • Related to privacy is trust.
  • Rights management poses potential problems for linked data in libraries. Libraries no longer own much of the content they provide to users; rather it is subscribed to from a variety of vendors.
  • What needs to happen to move libraries to the next level?
  •  
    Byrne, G., & Goddard, L. (2010). The Strongest Link: Libraries and Linked Data. D-Lib Magazine, 16(11/12). doi:10.1045/november2010-byrne. Retrieved from http://www.dlib.org/dlib/november10/byrne/11byrne.html
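
As a small illustration of the linked-data idea in the annotations above, the sketch below uses Python's rdflib to describe the cited article with shared vocabularies (Dublin Core terms and FOAF) and serialize it as Turtle; the identifiers are made up for the example and are not recommendations from the article.

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

g = Graph()

# Hypothetical URIs standing in for shared, dereferenceable identifiers
article = URIRef("http://example.org/articles/strongest-link")
author = URIRef("http://example.org/people/byrne-g")

g.add((article, RDF.type, DCTERMS.BibliographicResource))
g.add((article, DCTERMS.title, Literal("The Strongest Link: Libraries and Linked Data")))
g.add((article, DCTERMS.creator, author))          # a link to an identifier, not just a text string
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Byrne, G.")))

print(g.serialize(format="turtle"))                 # machine-readable triples other systems can link to and reason over
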