IMT122 OUA Group: Group items tagged "Web 2.0"


What Is Web 2.0 - O'Reilly Media

  • O'Reilly, T. (2005, September 30). What Is Web 2.0. O'Reilly Media. Retrieved September 10, 2010, from http://oreilly.com/web2/archive/what-is-web-20.html
    To discover how Tim O'Reilly originally conceptualised Web 2.0, please read this explanation. Do not worry too much about understanding every web tool mentioned or all the technical processes. Do pay particular attention to the discussion of RSS on page 3 and the different ways that users relate to the web in this vision.

Kroski, E. (2008). Web 2.0. In Web 2.0 for librarians and information professionals

  • Kroski, E. (2008). Web 2.0. In Web 2.0 for librarians and information professionals. New York: Neal-Schuman. (Available from: https://auth.lis.curtin.edu.au/cgi-bin/auth-ng/authredirect.cgi?redirurl=http://edocs.lis.curtin.edu.au/eres.cgi&url=dc60266981)

Library 2.0 Theory: Web 2.0 and Its Implications for Libraries

  • Already libraries are creating RSS feeds for users to subscribe to, including updates on new items in a collection, new services, and new content in subscription databases (see the feed-reading sketch below).
  • Hybrid applications ("mashups"), where two or more technologies or services are conflated into a completely new service.
  • A personalized OPAC that includes access to IM, RSS feeds, blogs, wikis, tags, and public and private profiles within the library's network.
  • Maness, J. (2006). Library 2.0 Theory: Web 2.0 and its Implications for Libraries. Webology, 3(2). Retrieved from http://webology.ir/2006/v3n2/a25.html
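
Maness's point about RSS subscriptions is easy to make concrete. Below is a minimal sketch of reading a library's new-items feed. It assumes the third-party feedparser package, and the feed URL is a hypothetical placeholder, not a real library service.

```python
# Minimal sketch: polling a library's new-items RSS feed.
# Assumes the third-party "feedparser" package; the URL is hypothetical.
import feedparser

FEED_URL = "https://library.example.org/feeds/new-items.xml"  # placeholder

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "Untitled feed"))
for entry in feed.entries[:10]:  # the ten most recent items
    print(f"- {entry.get('title', 'Untitled')} ({entry.get('link', '')})")
```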

Library 2.0: Service for the next generation library

  • The heart of Library 2.0 is user-centered change.
  • Inviting user participation.
  • It also attempts to reach new users and better serve current ones through improved customer-driven offerings.
  • Technological advances in the past several years have enabled libraries to create new services that before were not possible, such as virtual reference, personalized OPAC interfaces, or downloadable media that library customers can use in the comfort of their own homes. This increase in available technologies gives libraries the ability to offer improved, customer-driven service opportunities.
  • Libraries are in the habit of providing the same services and the same programs to the same groups.
  • Stephens believes that “Library 2.0 will be a meeting place, online or in the physical world, where [library users'] needs will be fulfilled through entertainment, information, and the ability to create [their] own stuff to contribute to the ocean of content out there.”
  • It has never been easy to reach this group with physical services, because libraries are constrained by space and money and cannot carry every item that every user desires.
  • Chris Anderson, editor-in-chief of Wired, who coined the phrase in an article of the same name in 2004, argues that the demand for movies or books that are not hits far outnumbers the demand for those that are hits (a toy illustration follows the citation below).
  • Going after the diverse long tail requires a combination of physical and virtual services.
  • Casey, M. E., & Savastinuk, L. C. (2006). Library 2.0: Service for the next generation library. Library Journal, 131(4), 40-42. Retrieved from http://www.libraryjournal.com/article/CA6365200.html
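
Anderson's claim can be illustrated with toy arithmetic. The sketch below assumes a Zipf-like demand curve (demand proportional to 1/rank); the cutoff of 100 "hits" and the catalogue size of 100,000 titles are invented for illustration, not figures from Casey and Savastinuk.

```python
# Toy illustration of the long tail: with Zipf-like demand (proportional
# to 1/rank), aggregate demand for non-hits exceeds demand for hits.
hits = sum(1 / rank for rank in range(1, 101))             # top 100 titles
long_tail = sum(1 / rank for rank in range(101, 100_001))  # everything else

print(f"hits: {hits:.2f}  long tail: {long_tail:.2f}")  # tail is ~1.3x the hits
```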

Scot Colford, "Explaining Free and Open Source Software"

  • Ten criteria must be met in order for a software distribution to be considered open source:
  • Free redistribution
  • The source code must be freely available to developers.
  • The license must permit modifications
  • Integrity of the author's source code
  • No discrimination against persons or groups
  • No discrimination against fields of endeavor
  • The same license must be passed on to others when the program is redistributed.
  • License must not be specific to a product
  • License must not restrict other software
  • License must be technology-neutral
  • A list of the nine most widely used licenses [5]:
    Apache Software License 2.0 (www.apache.org/licenses/LICENSE-2.0.html)
    New BSD License (www.opensource.org/licenses/bsd-license.php)
    GNU General Public License (GPL) (www.gnu.org/licenses/gpl.html)
    GNU Lesser General Public License (LGPL) (www.gnu.org/licenses/lgpl.html)
    MIT License (www.opensource.org/licenses/mit-license.php)
    Mozilla Public License 1.1 (MPL) (www.mozilla.org/MPL/MPL-1.1.html)
    Common Development and Distribution License (www.sun.com/cddl/cddl.html)
    Common Public License 1.0 (www.ibm.com/developerworks/library/os-cpl.html)
    Eclipse Public License (www.eclipse.org/legal/epl-v10.html)
  • A common misconception, alluded to above, is that since the source code is freely distributed without royalty or licensing fee, open source applications are free of cost.
  • Free and open source software application users, on the other hand, must rely on development communities for support.
  • The pervasiveness of the World Wide Web guarantees that nearly every information organization is using free or open source software to perform some function.
Joanne S

Reprogramming the Museum | museumsandtheweb.com

  • The Powerhouse experience
  • Other APIs
  • Flickr API
  • Thomson Reuters OpenCalais
  • OCLC's WorldCat
  • Before we began our work on the Commons on Flickr, some museum colleagues were concerned that engaging with the Flickr community would increase workloads greatly. While the monitoring of the site does take some work, the value gained via the users has far outweighed any extra effort. In some cases, users have dated images for us.
  • In subsequent use of the Flickr API, we appropriated tags users had added to our images, and now include them in our own collection database website (OPAC). We also retrieved geo-location data added to our images for use in third party apps like Sepiatown and Layar.
  • In our case the purpose of creating an API was to allow others to use our content.
  • So consider the questions above not in the context of should we or shouldn't we put our data online (via an API or otherwise) but rather in the context of managing expectations of the data's uptake.
  • Steps to an API
  • There were several important things which had to happen before we could provide a public web API. The first was the need to determine the licence status of our content.
  • The drive to open up the licensing of our content came on a tour we conducted of the Museum's collection storage facilities for some Wikipedians.
  • This prompted Seb Chan to make the changes required to make our online collection documentation available under a mix of Creative Commons licences. (Chan, April 2009)
  • Opening up the licensing had another benefit: it meant that we had already cleared one hurdle in the path to creating an API.
  • The Government 2.0 Taskforce (http://gov2.net.au/about/) was the driver leading us to take the next step.
  • "increasing the openness of government through making public sector information more widely available to promote transparency, innovation and value adding to government information"
  • The Powerhouse was the first cultural institution in Australia to provide a bulk data dump of any sort.
  • The great thing about this use is that it exposes the Museum and its collection to the academic sector, enlightening them regarding potential career options in the cultural sector.
  • I will briefly mention some of the technical aspects of the API now for those interested. In line with industry best practice, the Powerhouse Museum is moving more and more to open-source based hosting, and so we chose a Linux platform for serving the API.
  • Images are served from the cloud as we had already moved them there for our OPAC, to reduce outgoing bandwidth from the Museum's network.
  • Once we had the API up and running, we realised it would not be too much work to make a WordPress plug-in which allowed bloggers to add objects from our collection to their blogs or blog posts. Once built, this was tested internally on our own blogs. Then in early 2011 we added it to the WordPress plugin directory: http://wordpress.org/extend/plugins/powerhouse-museum-collection-image-grid/
  • One of the main advantages the API has over the data dump is the ability to track use.
  • It is also worth noting that since API requests usually do not generate pages that are rendered in a browser, it is not possible to embed Google Analytics tracking scripts in the API's output.
  • By requiring people to sign up using a valid email address before requesting an API key, we are able to track API use back to individuals or organisations.
  • Concerns that people would use the API inappropriately were dealt with by adding a limit to the number of requests per hour each key can generate (see the client sketch below).
  • An Application Programming Interface (API) is a particular set of rules and specifications that a software program can follow to access and make use of the services and resources provided by another particular software program.
  • Dearnley, L. (2011). Reprogramming the museum. In Museums and the Web 2011: Proceedings. Toronto: Archives & Museum Informatics. Retrieved from http://conference.archimuse.com/mw2011/papers/reprogramming_the_museum
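
As a rough illustration of the access pattern Dearnley describes (sign up with an email address to get a key; each key is limited to a number of requests per hour), here is a sketch of a keyed, rate-limit-aware client. The endpoint, parameter names, and status-code behaviour are assumptions for illustration, not the Powerhouse Museum's actual API.

```python
# Sketch of a keyed, rate-limited collection-API client.
# The endpoint, parameters, and 429 behaviour are assumptions.
import time

import requests

API_KEY = "your-registered-key"  # issued after email sign-up (hypothetical)
BASE_URL = "https://api.example-museum.org/collection"  # hypothetical endpoint

def fetch_objects(query: str, retries: int = 3) -> dict:
    """Fetch matching collection objects, backing off if the hourly limit is hit."""
    for attempt in range(retries):
        response = requests.get(
            BASE_URL, params={"api_key": API_KEY, "q": query}, timeout=10
        )
        if response.status_code == 429:  # over the per-key request limit
            time.sleep(2 ** attempt)     # simple exponential backoff
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError("request limit not lifted after retries")

# Usage: objects = fetch_objects("ceramics")
```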

The Strongest Link: Libraries and Linked Data

  • For many years now we have been hearing that the semantic web is just around the corner.
  • The reality for most libraries, however, is that we are still grappling with 2.0 technologies.
  • By marking up information in standardized, highly structured formats like Resource Description Framework (RDF), we can allow computers to better "understand" the meaning of content (see the sketch after the citation below).
  • For most librarians this concept is fairly easy to understand: we have been creating highly structured machine-readable metadata for many years.
  • By linking our data to shared ontologies that describe the properties and relationships of objects, we begin to allow computers not just to "understand" content, but also to derive new knowledge by "reasoning" about that content.
  • The term "Semantic Web" refers to a full suite of W3C standards, including RDF, the SPARQL query language, and the OWL web ontology language.
  • This article will outline some of the benefits that linked data could have for libraries, will discuss some of the non-technical obstacles that we face in moving forward, and will finally offer suggestions for practical ways in which libraries can participate in the development of the semantic web.
  • What benefits will libraries derive from linked data?
  • Having a common format for all data would be a huge boon for interoperability and the integration of all kinds of systems.
  • The linking hub would expose a network of tightly linked information from publishers, aggregators, book and journal vendors, subject authorities, name authorities, and other libraries.
  • Semantic search could take us far beyond the current string-matching capabilities of search engines like Google.
  • What are the major obstacles for libraries?
  • A fundamental challenge for the development of linked data in libraries is lack of awareness.
  • Linked Data becomes more powerful the more of it there is.
  • Until there is enough linking between collections and imaginative uses of data collections there is a danger librarians will see linked data as simply another metadata standard, rather than the powerful discovery tool it will underpin.
  • A more practical concern is that changing the foundation of library metadata is no trivial task.
  • Privacy is a huge concern for many interested in linked data.
  • Related to privacy is trust.
  • Rights management poses potential problems for linked data in libraries. Libraries no longer own much of the content they provide to users; rather it is subscribed to from a variety of vendors.
  • What needs to happen to move libraries to the next level?
  • Byrne, G., & Goddard, L. (2010). The Strongest Link: Libraries and Linked Data. D-Lib Magazine, 16(11/12). doi:10.1045/november2010-byrne. Retrieved from http://www.dlib.org/dlib/november10/byrne/11byrne.html
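
To make "marking up information in standardized, highly structured formats like RDF" concrete, here is a minimal sketch using the third-party rdflib package: a catalogue record expressed as RDF triples, with its creator linked to a shared authority URI. Both URIs are illustrative placeholders, not real identifiers.

```python
# Minimal sketch: a catalogue record as RDF triples, with the creator
# linked to a shared authority URI. Assumes the third-party "rdflib"
# package; both URIs below are illustrative placeholders.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, RDF

g = Graph()
book = URIRef("https://library.example.org/catalogue/item/1234")  # placeholder
author = URIRef("http://viaf.org/viaf/0000000")  # placeholder authority URI

g.add((book, RDF.type, DCTERMS.BibliographicResource))
g.add((book, DCTERMS.title, Literal("The Long Tail")))
g.add((book, DCTERMS.creator, author))  # the link that lets machines connect datasets

print(g.serialize(format="turtle"))
```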

What is Cloud Computing and How will it Affect Libraries? | TechSoup for Libraries

  • If you’ve used any of the popular Web 2.0 services over the past few years (e.g. Gmail, Wikipedia, Flickr or Twitter), you already have some experience with cloud computing.
  • Like water and electricity, a computing cloud is a communally-shared resource that you lease on a metered basis, paying for as little or as much as you need, when you need it (see the toy cost comparison below).
    • Note (Joanne S): Benefits - cost savings, flexibility, and innovation.
  • As individuals and members of organizations, we’re already choosing between desktop applications and cloud applications when it comes to e-mail, RSS, file storage, word processing and other simple applications. Sooner or later we’ll have to make this choice for mission-critical enterprise applications too.
  • Libraries may soon be building and managing their own data centers.
  • For more practical, technical explanations of cloud computing, check out the Wikipedia article, "The Anatomy of Cloud Computing", and the MIT Technology Review briefing on cloud computing.
  • For a discussion of problems and concerns about the digital cloud, read: "How Secure is Cloud Computing?", "Security in the Ether", and "Industry Challenges: The Standards Question".
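
The utility analogy above reduces to simple arithmetic: a metered bill scales with use, a fixed installation does not. All figures in the sketch below are invented for illustration, not vendor prices.

```python
# Toy comparison of metered (cloud) versus fixed (in-house) monthly costs.
# All prices are invented assumptions.
HOURLY_RATE = 0.10         # dollars per leased server-hour (assumed)
IN_HOUSE_MONTHLY = 300.00  # amortised hardware, power, and admin (assumed)

for hours_used in (50, 500, 3000):  # light, moderate, and near-constant use
    metered = HOURLY_RATE * hours_used
    cheaper = "cloud" if metered < IN_HOUSE_MONTHLY else "in-house"
    print(f"{hours_used:>5} h/mo -> cloud ${metered:7.2f} vs in-house ${IN_HOUSE_MONTHLY:.2f} ({cheaper})")
```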

The Code4Lib Journal - How Hard Can It Be?: Developing in Open Source

  • We experienced freedom to explore alternate avenues, to innovate, to take risks in ways that would have been difficult under the direct control of a district council.
  • patrons made it clear that while they appreciated that computers were a necessary part of a modern library, they did not consider them the most important part.
  • Our overall objective was to source a library system which:
    - could be installed before Y2K complications immobilised us,
    - was economical, in terms of both initial purchase and future licence and maintenance support fees,
    - ran effectively and fast by dial-up modem on an ordinary telephone line,
    - used up-to-the-minute technologies, looked good, and was easy for both staff and public to use,
    - took advantage of new technology to permit members to access our catalogue and their own records from home, and
    - let us link easily to other sources of information - other databases and the Internet.
    If we could achieve all of these objectives, we’d be well on the way to an excellent service.
  • "How hard can it be" Katipo staff wondered, "to write a library system that uses Internet technology?" Well, not very, as it turned out.
  • Koha would thus be available to anyone who wanted to try it and had the technical expertise to implement it.
  • We were fairly confident that we already had a high level of IT competence right through the staff, and a high level of understanding of what our current system did and did not do.
  • We had to ensure the software writers did not miss any key points in their fundamental understanding of the way libraries work.
  • The programming we commissioned cost us about 40% of the purchase price of an average turn-key solution.
  • no requirement to purchase a maintenance contract, and no annual licence fees.
  • An open source project is never finished.
  • Open source projects only survive if a community builds up around the product to ensure its continual improvement. Koha is stronger than ever now, supported by active developers (programmers) and users (librarians).
  • There are a range of support options available for Koha, both free and paid, and this has contributed to the overall strength of the Koha project.
  • Vendors like Anant, Biblibre, ByWater, Calyx, Catalyst, inLibro, IndServe, Katipo, KohaAloha, LibLime, LibSoul, NCHC, OSSLabs, PakLAG, PTFS, Sabinet, Strategic Data, Tamil and Turo Technology take the code and sell support around the product, develop add-ons and enhancements for their clients and then contribute these back to the project under the terms of the GPL license.
  • Koha used an FRBR [5] arrangement, although of course it wasn’t called that 10 years ago; it was just a logical way for us to arrange the catalogue. A single bibliographic record essentially described the intellectual content, then a bunch of group records were attached, each one representing a specific imprint or publication (see the data-model sketch below).
  • The release of Koha 3.0 in late 2008 brought Koha completely into the web 2.0 age and all that entails. We are reconciled to taking a small step back for now, but the FRBR logic is around and RDA should see us back where want to be in a year or so – but with all the very exciting features and opportunities that Koha 3 has now.
  • In the early days, the Koha list appeared to be dominated by programmers, but I have noticed a lot more librarians participating now.
  • "Adopt technology that keeps data open and free, abandon[ing] technology that does not." The time is right for OSS.
  • For more information about Koha and how it was developed, see: Ransom, J., Cormack, C., & Blake, R. (2009). How Hard Can It Be?: Developing in Open Source. Code4Lib Journal, (7). Retrieved from http://journal.code4lib.org/articles/1638
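
The arrangement Ransom describes, one record for the intellectual content with a group record attached for each imprint or publication, maps onto a simple data model. The sketch below shows that shape only; the class and field names are assumptions, not Koha's actual schema, and the ISBNs are placeholders.

```python
# Sketch of the FRBR-like shape described above: one bibliographic record
# for the intellectual content, with attached records for each specific
# imprint or publication. Names are illustrative, not Koha's schema.
from dataclasses import dataclass, field

@dataclass
class Publication:
    """One specific imprint or publication of a work."""
    isbn: str
    publisher: str
    year: int

@dataclass
class Work:
    """The intellectual content, shared by all its publications."""
    title: str
    author: str
    publications: list[Publication] = field(default_factory=list)

hamlet = Work("Hamlet", "William Shakespeare")
hamlet.publications.append(Publication("0-000-00000-1", "Example Press", 1998))    # placeholder ISBN
hamlet.publications.append(Publication("0-000-00000-2", "Example Classics", 2005))  # placeholder ISBN
```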