
Home / IMT122 OUA Group / Group items tagged: presentation


Joanne S

Library Mashups and APIs

  • This "L Plate" presentation was presented at the VALA: Libraries, Technologies and the Future Conference in February 2010. The notes underneath each slide explain them very nicely, and give you an idea of what is considered "L Plate" material at a professional conference. Hagon, P. (2010, February). Library Mashups and APIs. L Plate session presented at the VALA 2010 Conference, Melbourne, Australia. Retrieved from http://www.slideshare.net/paulhagon/library-mashups-and-apis
Joanne S

What Is Library 2.0?

  • Greenhill, K. (2007, October 10). What Is Library 2.0? Presented at the Library 2.0 Unconference, State Library of Queensland. Retrieved from http://www.slideshare.net/sirexkat/what-is-library-20 This is a 20 minute presentation with audio synced to the slides. To hear the audio, click on the green arrow in the centre of the box.
Joanne S

2001 Public Libraries Conference paper

  • Hutley, S., Joseph, M., & Saunders, P. (2001). Follow the eBook road: eBooks in Oz public libraries. In Endless Possibilities. Presented at the ALIA Public Libraries Conference, Melbourne, Australia: Australian Library and Information Association. Retrieved from http://conferences.alia.org.au/public2001/hutley.joseph.saunders.html
Joanne S

Ebook Readers in Australian public libraries - are they REAL-e worth it?

  • Hutley, S., & Harwood, W. (2002). Ebook Readers in Australian public libraries - are they REAL-e worth it? Presented at the VALA 2002 Conference, Melbourne, Australia. Retrieved from http://www.vala.org.au/vala2002/2002pdf/34HutHor.pdf
Joanne S

VALA2012 Session 12 Warren - VALA

  • National and State Libraries of Australasia's Libraryhack project. Warren, M., & Hayward, R. (2012). Hacking the nation: Libraryhack and community-created apps. In VALA 2012: eM-powering eFutures. Melbourne, Australia: VALA - Libraries, Technology and the Future. Retrieved from http://www.vala.org.au/vala2012-proceedings/vala2012-session-12-warren
Joanne S

Reprogramming The Museum | museumsandtheweb.com

  • Powerhouse experience
  • other APIs
  • Flickr API
  • Thomson Reuters OpenCalais
  • OCLC's WorldCat
  • Before we began our work on the Commons on Flickr, some museum colleagues were concerned that engaging with the Flickr community would increase workloads greatly. While the monitoring of the site does take some work, the value gained via the users has far outweighed any extra effort. In some cases, users have dated images for us.
  • In subsequent use of the Flickr API, we appropriated tags users had added to our images, and now include them in our own collection database website (OPAC). We also retrieved geo-location data added to our images for use in third party apps like Sepiatown and Layar.
  • In our case the purpose of creating an API was to allow others to use our content.
  • So consider the questions above not in the context of should we or shouldn't we put our data online (via an API or otherwise) but rather in the context of managing expectations of the data's uptake.
  • Steps to an API
  • several important things which had to happen before we could provide a public web API. The first was the need to determine the licence status of our content.
  • The drive to open up the licensing of our content came when, on a tour we conducted of the Museum's collection storage facilities for some Wikipedian
  • This prompted Seb Chan to make the changes required to make our online collection documentation available under a mix of Creative Commons licences. (Chan, April 2009)
  • Opening up the licensing had another benefit: it meant that we had already cleared one hurdle in the path to creating an API.
  • The Government 2.0 Taskforce (http://gov2.net.au/about/) was the driver leading us to take the next step.
  • "increasing the openness of government through making public sector information more widely available to promote transparency, innovation and value adding to government information"
  • the first cultural institution in Australia to provide a bulk data dump of any sort.
  • The great thing about this use is that it exposes the Museum and its collection to the academic sector, enlightening them regarding potential career options in the cultural sector.
  • I will briefly mention some of the technical aspects of the API now for those interested. In line with industry best practice the Powerhouse Museum is moving more and more to open-source based hosting and so we chose a Linux platform for serving the API
  • Images are served from the cloud as we had already moved them there for our OPAC, to reduce outgoing bandwidth from the Museum's network.
  • Once we had the API up and running, we realised it would not be too much work to make a WordPress plug-in which allowed bloggers to add objects from our collection to their blogs or blog posts. Once built, this was tested internally on our own blogs. Then in early 2011 we added it to the WordPress plugin directory: http://wordpress.org/extend/plugins/powerhouse-museum-collection-image-grid/
  • One of the main advantages the API has over the data dump is the ability to track use.
  • It is also worth noting that since the API requests usually do not generate pages that are rendered in a browser it is not possible to embed Google Analytics tracking scripts in the API's output.
  • By requiring people to sign up using a valid email address before requesting an API key we are able to track API use back to individuals or organisations.
  • Concerns that people would use the API inappropriately were dealt with by adding a limit to the number of requests per hour each key can generate
  • An Application Programming Interface (API) is a particular set of rules and specifications that a software program can follow to access and make use of the services and resources provided by another particular software program
  • Dearnley, L. (2011). Reprogramming the museum. In Museums and the Web 2011: Proceedings. Toronto: Archives & Museum Informatics. Retrieved from http://conference.archimuse.com/mw2011/papers/reprogramming_the_museum
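The API-key arrangement described in the annotations above (sign up with a valid email address, then a cap on requests per hour per key) can be sketched as a rolling-window counter. This is only an illustrative assumption: the class name, default limit, and window below are invented, not the Powerhouse Museum's actual implementation.

```python
import time
from collections import defaultdict, deque

class HourlyRateLimiter:
    """Allow at most `limit` requests per API key in any rolling window."""

    def __init__(self, limit=500, window=3600):
        self.limit = limit                  # max requests per window
        self.window = window                # window length in seconds
        self.history = defaultdict(deque)   # api_key -> request timestamps

    def allow(self, api_key, now=None):
        """Record a request for `api_key`; return False once over the limit."""
        now = time.time() if now is None else now
        q = self.history[api_key]
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Each key gets its own timestamp queue, so one heavy user hitting the cap never blocks other keys, and because keys are tied to email sign-ups the counts remain traceable back to individuals or organisations.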
Joanne S

Archives & Museum Informatics: Museums and the Web 2009: Paper: Gow, V. et al., Making ...

  • New Zealand content difficult to discover, share and use
  • DigitalNZ is testing ways to create digital content, collect and share existing digital content, and build smart, freely available search and discovery tools.
  • Memory Maker blurs the line between consuming and producing content. What’s sometimes called ‘remix culture’ […]. Digital technologies have opened up new possibilities for young people to access and represent the stories of their culture by taking sound and images and recombining them to say something new, something relevant to them. (Sarah Jones, Lunch Box: Software & digital media for learning, November 2008, http://lunchbox.org.nz/2008/11/get-coming-home-on-your-schools-website-wiki-or-blog/)
  • The Memory Maker provides a taste of what is possible when collecting institutions modernise their practices for keeping and managing copyright information, using Creative Commons licenses or ‘no known copyright’ statements.
  • Learning about ‘hyperlinks’ today, these young New Zealanders will be the developers and creators of tomorrow.
  • The full set of contributions is accessible through a Coming Home search tool, occasionally on a google-like hosted search page (Figure 5), but more often through a search widget embedded on many New Zealand Web sites (Figure 6).
  • Digital New Zealand is developing and testing solutions that showcase what’s possible when we really focus on improving access to and discovery of New Zealand content.
  • Technically, the Digital New Zealand system is in three parts: a backend, a metadata store, and a front end.
  • The coolest thing to be done with your data will be thought of by someone else
  • “an API is basically a way to give developers permission to hack into your database”.
  • Gow, V., Brown, L., Johnston, C., Neale, A., Paynter, G., & Rigby, F. (2009). Making New Zealand Content Easier to Find, Share and Use. In Museums and the Web 2009. Toronto: Archives & Museum Informatics. Retrieved from http://www.archimuse.com/mw2009/papers/gow/gow.html
Joanne S

Eli Pariser: Beware online "filter bubbles" | Video on TED.com

    • Joanne S
       
      Mark Zuckerberg, a journalist was asking him a question about the news feed. And the journalist was asking him, "Why is this so important?" And Zuckerberg said, "A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa." And I want to talk about what a Web based on that idea of relevance might look like.

      So when I was growing up in a really rural area in Maine, the Internet meant something very different to me. It meant a connection to the world. It meant something that would connect us all together. And I was sure that it was going to be great for democracy and for our society. But there's this shift in how information is flowing online, and it's invisible. And if we don't pay attention to it, it could be a real problem.

      So I first noticed this in a place I spend a lot of time -- my Facebook page. I'm progressive, politically -- big surprise -- but I've always gone out of my way to meet conservatives. I like hearing what they're thinking about; I like seeing what they link to; I like learning a thing or two. And so I was surprised when I noticed one day that the conservatives had disappeared from my Facebook feed. And what it turned out was going on was that Facebook was looking at which links I clicked on, and it was noticing that, actually, I was clicking more on my liberal friends' links than on my conservative friends' links. And without consulting me about it, it had edited them out. They disappeared.

      So Facebook isn't the only place that's doing this kind of invisible, algorithmic editing of the Web. Google's doing it too. If I search for something, and you search for something, even right now at the very same time, we may get very different search results. Even if you're logged out, one engineer told me, there are 57 signals that Google looks at -- everything from what kind of computer you're on to what kind of browser you're using to where you're located -- that it uses to personally tailor you
Joanne S

K. G Schneider, "The Thick of the Fray: Open Source Software in Libraries in the First ...

  • libraries using open source integrated library systems indicates that the vast majority of libraries continue to rely on legacy proprietary systems
  • there are at least a dozen active OSS projects based in or with their genesis in library organizations
  • xCatalog
  • LibraryFind
  • Blacklight
  • iVia
  • What makes OSS different from proprietary software is that it is free in every sense of the word: free as in “no cost,” free as in “unencumbered” and free as in “not locked up.”
  • questioned whether OSS is overall less expensive than its proprietary counterparts and has called for libraries to look hard at cost factors
  • OSS projects are thriving communities with leaders, followers, contributors, audiences and reputation systems.
  • Like so many things librarians hold dear – information, books and library buildings themselves – OSS is open, available and visible for all to see
  • OSS presents important opportunities for libraries
  • This is the world we want to be in again. It will not always be easy, and there will be a few spectacular failures. But there will also be spectacular successes – and this time, they will happen in the open.
Joanne S

Academic Search Engine Spam and Google Scholar's Resilience Against it

  • Web-based academic search engines such as CiteSeer(X), Google Scholar, Microsoft Academic Search and SciPlore have introduced a new era of search for academic articles.
  • With classic digital libraries, researchers have no influence on getting their articles indexed. They either have published in a publication indexed by a digital library, and then their article is available in that digital library, or they have not
  • citation counts obtained from Google Scholar are sometimes used to evaluate the impact of articles and their authors.
  • ‘Academic Search Engine Optimization’ (ASEO)
  • Citation counts are commonly used to evaluate the impact and performance of researchers and their articles.
  • Nowadays, citation counts from Web-based academic search engines are also used for impact evaluations.
  • Most academic search engines offer features such as showing articles cited by an article, or showing related articles to a given article. Citation spam could bring more articles from manipulating researchers onto more of these lists.
  • It is apparent that a citation from a PowerPoint presentation or thesis proposal has less value than a citation in a peer reviewed academic article. However, Google does not distinguish on its website between these different origins of citations[8].
  • Google Scholar indexes Wikipedia articles when the article is available as PDF on a third party website.
  • That means, again, that not all citations on Google Scholar are what we call ‘full-value’ citations.
  • As long as Google Scholar applies only very rudimentary or no mechanisms to detect and prevent spam, citation counts should be used with care to evaluate articles’ and researchers’ impact.
  • However, Google Scholar is a Web-based academic search engine and as with all Web-based search engines, the linked content should not be trusted blindly.
Joanne S

The Deep Web

  • defined as the content on the Web not accessible through a search on general search engines.
  • sometimes also referred to as the hidden or invisible web.
  • the part of the Web that is not static, and is served dynamically "on the fly," is far larger than the static documents
  • When we refer to the deep Web, we are usually talking about the following:
  • The content of databases.
  • Non-text files such as multimedia, images, software, and documents in formats such as Portable Document Format (PDF) and Microsoft Word.
  • Content available on sites protected by passwords or other restrictions.
  • Special content not presented as Web pages, such as full text articles and books
  • Dynamically-changing, updated content,
  • let's consider adding new content to our list of deep Web sources. For example:
  • Blog postings; comments, discussions, and other communication activities on social networking sites (for example, Facebook and Twitter); bookmarks and citations stored on social bookmarking sites
  • Tips for dealing with deep Web content
  • Vertical search
  • Use a general search engine to locate a vertical search engine.
  • The Web not accessible through a search on general search engines.
Joanne S

BBC News - French downloaders face government grilling

  • Hadopi takes its name from the 2009 legislation which permits authorities to fine copyright infringers, or to cut off their internet connection.
  • In the UK, the Digital Economy Act makes some similar provisions, although the exact nature of possible sanctions has yet to be fully explained.
  • It has sent a total of 470,000 first warnings by email, with 20,000 users receiving a second warning through the mail.
  • If the person does not confess or does not even show up, the only evidence the agency can present before the judge is a series of numbers - a particular computer's IP address
  • "We looked at what it would mean if the internet was a "human right" in France, given that there is legislation that people who violate copyright can have internet access cut off for up to a month."
Joanne S

Long Live the Web: A Call for Continued Open Standards and Neutrality: Scientif...

  • Several threats to the Web’s universality have arisen recently. Cable television companies that sell Internet connectivity are considering whether to limit their Internet users to downloading only the company’s mix of entertainment.
  • Social-networking sites present a different kind of problem. Facebook, LinkedIn, Friendster and others typically provide value by capturing information as you enter it
  • The sites assemble these bits of data into brilliant databases and reuse the information to provide value-added service—but only within their sites.
  • The basic Web technologies that individuals and companies need to develop powerful services must be available for free, with no royalties.