Recording video with the Zoom Q3HD is quite easy. Press one button on the side of the device to power it up, press the large red record button on the back of the device once to start recording, then press the record button again to stop. Playback is equally easy, with a play button conveniently located just below the display.
Update: Several commenters noted that the audio is out of sync with the video on both Zoom videos. This is not the case with the original files, and appears to be a problem with the way that YouTube handled the uploaded file. Zoom is looking into the situation and I will update again as soon as I hear more.
In the end, it all boils down to two things: whether you have the money to buy a secondary video camera such as the Zoom Q3HD, and whether you truly need the extra video quality it offers.
For other Mac users, the Zoom Q3HD might be an excellent way to capture HD video at a relative bargain price of $300 compared to higher-end dedicated camcorders.
A built-in USB plug transfers recordings to your Mac with or without the pointless software bundle.
You can’t use an attached Q3HD as a live mic or camera for your Mac.
The weak battery life—we got fewer than 90 minutes of recording time per set of AAs—will make you fearful that they’ll run out in the middle of a shot. And the plastic device even includes a plastic tripod adapter; that’s cheap.
The Q3HD pairs decent image quality with uncommonly good audio. It’s easy to overlook this pocket shooter’s shortcomings once you hear the difference.
We experienced freedom to explore alternate avenues, to innovate, to take risks in ways that would have been difficult under the direct control of a district council.
Patrons made it clear that while they appreciated that computers were a necessary part of a modern library, they did not consider them the most important part.
Our overall objective was to source a library system which:
could be installed before Y2K complications immobilised us,
was economical, in terms of both initial purchase and future license and maintenance support fees,
ran effectively and fast by dial-up modem on an ordinary telephone line,
used up-to-the-minute technologies, looked good, and was easy for both staff and public to use,
took advantage of new technology to permit members to access our catalogue and their own records from home, and
let us link easily to other sources of information – other databases and the Internet.
If we could achieve all of these objectives, we’d be well on the way to an excellent service.
"How hard can it be" Katipo staff wondered, "to write a library system that uses Internet technology?" Well, not very, as it turned out.
Koha would thus be available to anyone who wanted to try it and had the technical expertise to implement it.
We were fairly confident that we already had a high level of IT competence right through the staff and a high level of understanding of what our current system did and did not do; this would help ensure the software writers did not miss any key points in their fundamental understanding of the way libraries work.
The programming we commissioned cost us about 40% of the purchase price of an average turn-key solution.
There was no requirement to purchase a maintenance contract, and there were no annual licence fees.
An open source project is never finished.
Open source projects only survive if a community builds up around the product to ensure its continual improvement. Koha is stronger than ever now, supported by active developers (programmers) and users (librarians).
A range of support options is available for Koha, both free and paid, and this has contributed to the overall strength of the Koha project.
Vendors like Anant, Biblibre, ByWater, Calyx, Catalyst, inLibro, IndServe, Katipo, KohaAloha, LibLime, LibSoul, NCHC, OSSLabs, PakLAG, PTFS, Sabinet, Strategic Data, Tamil and Turo Technology take the code and sell support around the product, develop add-ons and enhancements for their clients and then contribute these back to the project under the terms of the GPL license.
We used what would now be called a FRBR [5] arrangement, although of course it wasn’t called that 10 years ago; it was just a logical way for us to arrange the catalogue. A single bibliographic record essentially described the intellectual content, and a bunch of group records were attached to it, each one representing a specific imprint or publication.
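As a rough illustration of that two-level arrangement (the field names and sample values below are invented for the example, not Koha's actual schema), a minimal sketch might look like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GroupRecord:
    """One specific imprint or publication attached to a work (illustrative fields only)."""
    publisher: str
    year: int
    format: str  # e.g. hardback, paperback, audiobook

@dataclass
class BibliographicRecord:
    """Describes the intellectual content; group records hang off it."""
    title: str
    author: str
    groups: List[GroupRecord] = field(default_factory=list)

# One work, with several attached imprints
work = BibliographicRecord(title="Example Title", author="Example Author")
work.groups.append(GroupRecord("Publisher A", 1995, "hardback"))
work.groups.append(GroupRecord("Publisher B", 2003, "paperback"))
```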
The release of Koha 3.0 in late 2008 brought Koha completely into the web 2.0 age and all that entails. We are reconciled to taking a small step back for now, but the FRBR logic is around and RDA should see us back where we want to be in a year or so – but with all the very exciting features and opportunities that Koha 3 has now.
In the early days the Koha list appeared to be dominated by programmers, but I have noticed a lot more librarians participating now.
"Adopt technology that keeps data open and free, abandon[ing] technology that does not." The time is right for OSS.
For more information about Koha and how it was developed, see:
Ransom, J., Cormack, C., & Blake, R. (2009). How Hard Can It Be?: Developing in Open Source. Code4Lib Journal, (7). Retrieved from http://journal.code4lib.org/articles/1638
Before we began our work on the Commons on Flickr, some museum colleagues were concerned that engaging with the Flickr community would increase workloads greatly. While the monitoring of the site does take some work, the value gained via the users has far outweighed any extra effort. In some cases, users have dated images for us.
In subsequent use of the Flickr API, we appropriated tags users had added to our images, and now include them in our own collection database website (OPAC). We also retrieved geo-location data added to our images for use in third party apps like Sepiatown and Layar.
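As a rough sketch of how that kind of harvesting can work against Flickr's public REST API (the API key and photo ID below are placeholders), user-added tags and geo-data for a single photo can be pulled back like this:

```python
import requests

FLICKR_REST = "https://api.flickr.com/services/rest/"
API_KEY = "YOUR_FLICKR_API_KEY"   # placeholder key
PHOTO_ID = "1234567890"           # placeholder photo ID

def flickr_call(method, **params):
    """Call a Flickr REST method and return the parsed JSON response."""
    params.update({
        "method": method,
        "api_key": API_KEY,
        "format": "json",
        "nojsoncallback": 1,
    })
    return requests.get(FLICKR_REST, params=params).json()

# Tags that Flickr members have added to one of our Commons photos
info = flickr_call("flickr.photos.getInfo", photo_id=PHOTO_ID)
tags = [t["raw"] for t in info["photo"]["tags"]["tag"]]

# Geo-location, if a user has placed the photo on the map
geo = flickr_call("flickr.photos.geo.getLocation", photo_id=PHOTO_ID)
location = geo.get("photo", {}).get("location")

print(tags, location)
```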
In our case the purpose of creating an API was to allow others to use our content.
So consider the questions above not in the context of should we or shouldn't we put our data online (via an API or otherwise) but rather in the context of managing expectations of the data's uptake.
Steps to an API
There were several important things which had to happen before we could provide a public web API. The first was the need to determine the licence status of our content.
The drive to open up the licensing of our content came during a tour of the Museum's collection storage facilities that we conducted for some Wikipedians.
This prompted Seb Chan to make the changes required to make our online collection documentation available under a mix of Creative Commons licences. (Chan, April 2009)
Opening up the licensing had another benefit: it meant that we had already cleared one hurdle in the path to creating an API.
The Government 2.0 Taskforce (http://gov2.net.au/about/) was the driver leading us to take the next step.
"increasing the openness of government through making public sector information more widely available to promote transparency, innovation and value adding to government information"
This made the Museum the first cultural institution in Australia to provide a bulk data dump of any sort.
The great thing about this use is that it exposes the Museum and its collection to the academic sector, enlightening them regarding potential career options in the cultural sector.
I will briefly mention some of the technical aspects of the API now, for those interested. In line with industry best practice, the Powerhouse Museum is moving more and more to open-source-based hosting, and so we chose a Linux platform for serving the API.
Images are served from the cloud as we had already moved them there for our OPAC, to reduce outgoing bandwidth from the Museum's network.
Once we had the API up and running, we realised it would not be too much work to make a WordPress plug-in which allowed bloggers to add objects from our collection to their blogs or blog posts. Once built, this was tested internally on our own blogs. Then in early 2011 we added it to the WordPress plugin directory: http://wordpress.org/extend/plugins/powerhouse-museum-collection-image-grid/
One of the main advantages the API has over the data dump is the ability to track use.
It is also worth noting that, since API requests do not usually generate pages rendered in a browser, it is not possible to embed Google Analytics tracking scripts in the API's output.
By requiring people to sign up using a valid email address before requesting an API key, we are able to track API use back to individuals or organisations.
Concerns that people would use the API inappropriately were dealt with by adding a limit to the number of requests per hour each key can generate.
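A minimal sketch of that kind of per-key tracking and throttling (the hourly quota and in-memory storage are assumptions for illustration, not the Museum's actual implementation):

```python
import time
from collections import defaultdict, deque

HOURLY_LIMIT = 500      # assumed quota; the real per-key limit is not specified here
WINDOW_SECONDS = 3600   # one-hour sliding window

# Timestamps of recent requests, kept per API key (in memory for illustration)
_requests = defaultdict(deque)

def allow_request(api_key: str) -> bool:
    """Return True if this key is still under its hourly request quota."""
    now = time.time()
    recent = _requests[api_key]
    # Drop timestamps that have fallen outside the one-hour window
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= HOURLY_LIMIT:
        return False  # over quota: reject the call (e.g. respond with HTTP 429)
    recent.append(now)
    return True
```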
An Application Programming Interface (API) is a set of rules and specifications that one software program can follow to access and make use of the services and resources provided by another.
NATIONAL AND STATE LIBRARIES OF AUSTRALASIA'S LIBRARY HACK PROJECT
Warren, M., & Hayward, R. (2012). Hacking the nation: Libraryhack and community-created apps. VALA 2012: eM-powering eFutures, Melbourne, Australia. Retrieved from http://www.vala.org.au/vala2012-proceedings/vala2012-session-12-warren
WEB START PAGES AS LIBRARY HOME PAGES
This is long, so just browse it to get the gist of the tools examined and the criteria used.
Pigott, C. (2009). An Alternative to Existing Library Websites: Evaluation of Nine Start Pages Using Criteria Extracted from Library Literature. School of Information Management, Victoria University of Wellington. Retrieved from http://researcharchive.vuw.ac.nz//handle/10063/1276
SELECTING THE RIGHT TOOL FOR A PORTAL-BASED SUBJECT GUIDE
Valenza, J. (2011). My Perpetual Pursuit of the Perfect Pathfinder Platform. VOYA: Voice of Youth Advocates. Retrieved from http://www.voya.com/2011/03/18/tag-team-tech-april-2011/