
Home / DJCamp2011 / Group items tagged "of"


Tom Johnson

http://theyrule.net - 1 views

  •  
    Overview: They Rule aims to provide a glimpse of some of the relationships of the US ruling class. It takes as its focus the boards of some of the most powerful U.S. companies, which share many of the same directors. Some individuals sit on the boards of 5, 6 or 7 of the top 1000 companies. It allows users to browse through these interlocking directorates and run searches on the boards and companies. A user can save a map of connections complete with their annotations and email links to these maps to others. They Rule is a starting point for research about these powerful individuals and corporations.

    Context: A few companies control much of the economy, and oligopolies exert control in nearly every sector. The people who head these companies swap on and off the boards from one company to another, and in and out of government committees and positions. These people run the most powerful institutions on the planet, and we have almost no say in who they are. This is not a conspiracy; they are proud to rule, yet these connections of power are not always visible to the public eye. Karl Marx once called this ruling class a 'band of hostile brothers.' They stand against each other in the competitive struggle for the continued accumulation of their capital, but they stand together as a family supporting their interests in perpetuating the profit system as a whole. Protecting this system can require the cover of a 'legitimate' force - and this is the role played by the state. An understanding of this system cannot be gleaned from looking at the inter-personal relations of this class alone, but rather from how they stand in relation to other classes in society. Hopefully They Rule will raise larger questions about the structure of our society and in whose benefit it is run.

    The Data: We do not claim that this data is 100% accurate at all times. Corporate directors have a habit of dying, quitting boards, joining new ones and, most frustratingly, passing on their name
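The interlock structure They Rule visualizes can be sketched as a simple mapping of directors to boards. A minimal illustration in plain Python, using made-up companies and directors rather than They Rule's actual dataset:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical (company, director) seat pairs; They Rule's real data
# covers the boards of the largest U.S. companies.
board_seats = [
    ("Acme Corp", "A. Smith"), ("Acme Corp", "B. Jones"),
    ("Globex", "A. Smith"), ("Globex", "C. Lee"),
    ("Initech", "B. Jones"), ("Initech", "A. Smith"),
]

# Map each director to the set of boards they sit on.
boards_of = defaultdict(set)
for company, director in board_seats:
    boards_of[director].add(company)

# Directors holding multiple seats are the "interlocks".
interlocks = {d: bs for d, bs in boards_of.items() if len(bs) > 1}

# Count shared directors for every pair of companies.
shared = defaultdict(int)
for d, bs in boards_of.items():
    for pair in combinations(sorted(bs), 2):
        shared[pair] += 1

print(interlocks)     # A. Smith sits on 3 boards, B. Jones on 2
print(dict(shared))   # Acme Corp and Initech share 2 directors
```

The same dictionary-of-sets idea scales to the real dataset; dedicated graph libraries add layout and visualization on top.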
  •  
    I think this data must be very useful to the people in Occupy Wall Street
Tom Johnson

The Overview Project » Document mining shows Paul Ryan relying on the pro... - 0 views

  •  
    Document mining shows Paul Ryan relying on the programs he criticizes, by Jonathan Stray on 11/02/2012. One of the jobs of a journalist is to check the record. When Congressman Paul Ryan became a vice-presidential candidate, Associated Press reporter Jack Gillum decided to examine the candidate through his own words. Hundreds of Freedom of Information requests and 9,000 pages later, Gillum wrote a story showing that Ryan has asked for money from many of the same federal programs he has criticized as wasteful, including stimulus money and funding for alternative fuels.

    This would have been much more difficult without special software for journalism. In this case Gillum relied on two tools: DocumentCloud to upload, OCR, and search the documents, and Overview to automatically sort the documents into topics and visualize the contents. Both projects are previous Knight News Challenge winners.

    But first Gillum had to get the documents. As a member of Congress, Ryan isn't subject to the Freedom of Information Act. Instead, Gillum went to every federal agency - whose files are covered under FOIA - for copies of letters or emails that might identify Ryan's favored causes, names of any constituents who sought favors, and more. Bit by bit, the documents arrived - on paper. The stack grew over weeks, eventually piling up two feet high on Gillum's desk. Then he scanned the pages and loaded them into the AP's internal installation of DocumentCloud. The software converts the scanned pages to searchable text, but there were still 9,000 pages of material.

    That's where Overview came in. Developed in house at the Associated Press, this open-source visualization tool processes the full text of each document and clusters similar documents together, producing a visualization that graphically shows the contents of the complete document set. "I used Overview to take these 9000 pages of documents, and knowing there was probably going to be a lot of garbage or ext
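Overview's actual pipeline is more sophisticated, but the core idea - turn each document into a TF-IDF vector and group documents whose vectors point the same way - can be sketched in plain Python. The documents and the 0.3 similarity threshold below are invented for illustration:

```python
import math
from collections import Counter

# Toy stand-ins for OCR'd pages; the real corpus was ~9,000 pages.
docs = [
    "grant request stimulus funding energy program",
    "stimulus funding request for energy grant",
    "constituent letter about veterans health care",
    "veterans health care letter from constituent",
]

def tfidf(corpus):
    """Turn each document into a sparse TF-IDF vector (term -> weight)."""
    tokenized = [doc.split() for doc in corpus]
    n = len(tokenized)
    df = Counter(t for toks in tokenized for t in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = tfidf(docs)

# Greedy clustering: put each doc in the first cluster it resembles.
clusters = []
for i, v in enumerate(vecs):
    for c in clusters:
        if cosine(v, vecs[c[0]]) > 0.3:
            c.append(i)
            break
    else:
        clusters.append([i])

print(clusters)  # docs 0/1 group together, as do 2/3
```

Real systems replace the greedy pass with hierarchical or k-means clustering, but the vector-space representation is the common foundation.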
Tom Johnson

8 must-reads detail how to verify information in real-time, from social media, users | ... - 0 views

  •  
    8 must-reads detail how to verify information in real-time, from social media, users. By Craig Silverman, published Apr. 27, 2012 7:46 am, updated Apr. 27, 2012 9:23 am.

    Over the past couple of years, I've been trying to collect every good piece of writing and advice about verifying social media content and other types of information that flow across networks. This form of verification involves some new tools and techniques, and requires a basic understanding of the way networks operate and how people use them. It also requires many of the so-called old-school values and techniques that have been around for a while: being skeptical, asking questions, tracking down high-quality sources, exercising restraint, collaborating and communicating with team members.

    For example, lots of people talk about how Andy Carvin does crowdsourced verification and turns his Twitter feed into a real-time newswire. Lost in the discussion is the fact that Carvin also develops sources and contacts on the ground and stays in touch with them on Skype and through other means. What you see on Twitter is only one part of the process. Some things never go out of style. At the same time, there are new tools, techniques and approaches every journalist should have in their arsenal.

    Fortunately, several leading practitioners of what I sometimes call the New Verification are gracious and generous about sharing what they know. One such generous lot are the folks at Storyful, a social media curation and verification operation that works with clients such as Reuters, ABC News, and The New York Times, among others. I wrote about them last year and examined how in some ways they act as an outsourced verification service for newsrooms. That was partly inspired by this post from Storyful founder Mark Little: I find it helps to think of curation as three central questions: * Discovery: How do we find valuable social media content? * Verification: How do we make sure we c
Tom Johnson

The Overview Project » Using Overview to analyze 4500 pages of documents on s... - 0 views

  • Using Overview to analyze 4500 pages of documents on security contractors in Iraq, by Jonathan Stray on 02/21/2012. This post describes how we used a prototype of the Overview software to explore 4,500 pages of incident reports concerning the actions of private security contractors working for the U.S. State Department during the Iraq war. This was the core of the reporting work for our previous post, where we reported the results of that analysis.

    The promise of a document set like this is that it will give us some idea of the broader picture, beyond the handful of really egregious incidents that have made headlines. To do this, in some way we have to take into account most or all of the documents, not just the small number that might match a particular keyword search. But at one page per minute, eight hours per day, it would take about 10 days for one person to read all of these documents - to say nothing of taking notes or doing any sort of followup. This is exactly the sort of problem that Overview would like to solve.

    The reporting was a multi-stage process:
    1. Splitting the massive PDFs into individual documents and extracting the text
    2. Exploration and subject tagging with the Overview prototype
    3. Random sampling to estimate the frequency of certain types of events
    4. Followup and comparison with other sources
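The random-sampling stage deserves a closer look: instead of reading all 4,500 pages, a reporter can read a random sample and attach a margin of error to the estimated incident rate. A minimal sketch with a simulated corpus - the 20% incident rate and the sample size of 300 are invented for illustration, not figures from the Iraq reporting:

```python
import math
import random

random.seed(42)

# Hypothetical corpus: 1 means a document describes an incident of the
# type being counted; in practice a reporter reads each sampled
# document and tags it by hand.
N = 4500
population = [1 if random.random() < 0.2 else 0 for _ in range(N)]

# Read a random sample instead of all 4,500 pages.
n = 300
sample = random.sample(population, n)

p_hat = sum(sample) / n
# 95% margin of error for a sample proportion (normal approximation).
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"estimated incident rate: {p_hat:.2%} +/- {moe:.2%}")
```

With 300 sampled pages the margin of error stays under about six percentage points regardless of the true rate, which is often precise enough to support a published estimate.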
Tom Johnson

Using balloons to get aerial shots of demonstration in Santiago - 0 views

  • Written by Elizabeth Wolf, Fundación Ciudadano Inteligente. Recent months of 2011 have seen mounting student frustration with the Chilean education system. Hundreds of thousands of university and secondary students have flooded the streets of Santiago and other cities across Chile in a series of protests demanding lower tuition, more opportunities to access public universities, better-quality education, and increased government spending on education, including more scholarships for lower-class students.

 The Chilean government spends less on education than most developed countries: 4.4% of its GDP, compared to the average 7%, which means a good portion of Chilean students pay for their own university education. Chile is also one of the most socially stratified countries in the world (40% of the country's wealth is concentrated in 10% of the population); combined with the high cost of education, this means many students cannot afford to attend a quality university. The Piñera administration and the leading group of Chilean students, along with their supporters, have been at odds for the last few months over education policy reform. With no solution agreed upon, the result has been the eruption of student protests across the country.


  •  
    Could be a fun student project.
Tom Johnson

Part 2 of the Open Data, Open Society report is now available online | Stop - 0 views

  • Part 2 of the Open Data, Open Society report is now available online. Posted on September 1, 2011 by marco. Open Data, Open Society is a research project about openness of public data in EU local administrations, for the Laboratory of Economics and Management of Scuola Superiore Sant’Anna, Pisa. The first report of the project, released in October 2010 under a Creative Commons cc-by license, can be downloaded from the website of the DIME project (PDF) or read online as one HTML file on the Sant’Anna School website (*). The conclusions of the project, a shorter report titled “Open Data: Emerging trends, issues and best practices” and finished in June 2011, are now available online under the same license at the following locations: single HTML file; PDF format, Sant’Anna school; PDF format, DIME website. Another part of the project, the Open Data, Open Society survey, has been extended until the end of 2011. Thank you in advance for announcing the survey to all the city and regional administrations of EU-15 and, if you wish, for adding further translations of its introduction!
Tom Johnson

Shorenstein Center paper argues for collaboration in investigative reporting | Harvard ... - 0 views

  • Shorenstein Center paper argues for collaboration in investigative reporting. Thursday, June 2, 2011. Sandy Rowe, former editor of The Oregonian, and Knight Fellow at the Shorenstein Center, fall 2010 and spring 2011. Photograph by Martha Stewart, Shorenstein Center, Harvard Kennedy School. Contact: Janell Sims, janell_sims@harvard.edu, http://www.hks.harvard.edu/presspol/index.html

    Media organizations may be able to perform their watchdog roles more effectively working together than apart. That is one conclusion in a new paper, “Partners of Necessity: The Case for Collaboration in Local Investigative Reporting,” authored by Sandy Rowe, former editor of Portland’s The Oregonian. The paper is based on interviews and research that Rowe conducted while serving as a Knight Fellow at the Shorenstein Center on the Press, Politics and Public Policy at Harvard Kennedy School. Rowe’s research examines the theory underpinning collaborative work and shows emerging models of collaboration that can lead to more robust investigative and accountability reporting in local and regional markets.

    “Growing evidence suggests that collaborations and partnerships between new and established news organizations, universities and foundations may be the overlooked key for investigative journalism to thrive at the local and state levels,” Rowe writes. “These partnerships, variously and often loosely organized, can share responsibility for content creation, generate wider distribution of stories and spread the substantial cost of accountability journalism.”

    Rowe was editor of The Oregonian from 1993 until January 2010. Under her leadership, the newspaper won five Pulitzer Prizes, including the Gold Medal for Public Service. Rowe chairs the Board of Visitors of The Knight Fellowships at Stanford University and is a board member of the Committee to Protect Journalists.
From 1984 until April 1993, Rowe was executive editor and vice president of The Virginian-Pilot and The Ledger-Star, Norfolk and Virginia Beach, Virginia. The Virginian-Pilot won the Pulitzer Prize for general news reporting under her leadership. Rowe’s year-long fellowship at the Shorenstein Center was funded by the John S. and James L. Knight Foundation. Read the full paper on the Shorenstein Center’s website.
Tom Johnson

National Science Foundation Helps Fund scrible, A New Web Annotation Tool/Per... - 0 views

  • INFOdocket: Information Industry News + New Web Sites and Tools, from Gary Price and Shirl Kennedy. National Science Foundation Helps Fund scrible, A New Web Annotation Tool/Personal Web Cache + Video Demo. Posted on May 12, 2011 by Gary D. Price.

    scrible (pronounced scribble) launched about a week ago and you can learn more (free to register and use) here. The company has received a $500,000 grant from the National Science Foundation. From VentureBeat: The company lets users do three things: save articles and pages so they’re available if the original goes offline; richly annotate online content using tools reminiscent of Word (highlighter, sticky note, etc.); and share annotated pages privately with others. scrible is free and will continue to be free to all users (125MB of storage space). A premium edition is also planned, but features (aside from a larger storage quota) have not been announced. Robert Scoble has posted a video demo of scrible with the CEO of the company, Victor Karkar, doing the “driving.”

    scrible sounds a lot like Diigo without the mobile access options. It also sounds similar (minus the markup features) to Pinboard. Pinboard does charge $9.97 for a lifetime membership with almost all features (there are many, with new ones debuting regularly). For an extra $25/year, all of the material you’ve bookmarked is cached by Pinboard. Cached pages look great, including PDF files. Pinboard is extremely fast and has a very low learning curve. Think Delicious and then add a ton of useful tools to it. Pinboard also provides mobile access to your saved bookmarks and cached documents. Finally, when used responsibly, there are no storage space quotas. Which service do you prefer, or does each service have a niche depending on the work you’re doing? What other tools do you use? Hat tips and thanks: @NspireD2 and @New Media Consortium.
Tom Johnson

DIVA-GIS | DIVA-GIS: free, simple & effective - 0 views

  • DIVA-GIS: DIVA-GIS is a free computer program for mapping and geographic data analysis (a geographic information system, or GIS). With DIVA-GIS you can make maps of the world, or of a very small area, using, for example, state boundaries, rivers, a satellite image, and the locations of sites where an animal species was observed. We also provide free spatial data for the whole world that you can use in DIVA-GIS or other programs. You can use the discussion forum to ask questions, report problems, or make suggestions. Or contact us, and read the blog entries for the latest news. But first download the program and read the documentation. DIVA-GIS is particularly useful for mapping and analyzing biodiversity data, such as the distribution of species, or other 'point-distributions'. It reads and writes standard data formats such as ESRI shapefiles, so interoperability is not a problem. DIVA-GIS runs on Windows and (with minor effort) on Mac OS X (see instructions). You can use the program to analyze data, for example by making grid (raster) maps of the distribution of biological diversity, to find areas that have high, low, or complementary levels of diversity. And you can also map and query climate data. You can predict species distributions using the BIOCLIM or DOMAIN models.
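The grid (raster) maps described above amount to snapping point observations to grid cells and counting distinct species per cell. A minimal sketch with invented observation data - DIVA-GIS itself works on real shapefiles and point distributions:

```python
from collections import defaultdict

# Hypothetical species observations as (species, lon, lat) tuples.
observations = [
    ("A", -70.2, -33.4), ("B", -70.3, -33.5), ("A", -70.8, -33.1),
    ("C", -71.1, -29.9), ("A", -71.2, -29.8), ("B", -70.1, -33.6),
]

CELL = 1.0  # grid resolution in degrees

def cell_of(lon, lat):
    """Snap a coordinate to the lower-left corner of its grid cell."""
    return (CELL * (lon // CELL), CELL * (lat // CELL))

# Species richness: number of distinct species observed per cell.
species_per_cell = defaultdict(set)
for species, lon, lat in observations:
    species_per_cell[cell_of(lon, lat)].add(species)

richness = {cell: len(s) for cell, s in species_per_cell.items()}
print(richness)  # two occupied cells, each with 2 species
```

Using a set per cell means repeat sightings of the same species don't inflate the count, which is the usual convention for richness maps.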
Tom Johnson

T-LAB Tools for Text Analysis - 0 views

  •  
    The all-in-one software for Content Analysis and Text Mining. Hello, we are pleased to announce the release of T-LAB 8.0. This version represents a major change in the usability and the effectiveness of our software for text analysis.

    The most significant improvements concern the integration of bottom-up (i.e. unsupervised) methods for exploratory text analysis with top-down (i.e. supervised) approaches for the automated classification of textual units like words, sentences, paragraphs and documents. Among other things, this means that - besides discovering emerging patterns of words and themes from texts - users can now easily build, apply and validate their models (e.g. dictionaries of categories or pre-existing manual categorizations) both for classical content analysis and for sentiment analysis. For this purpose several T-LAB functionalities have been expanded, and a new ergonomic and powerful tool named 'Dictionary-Based Classification' has been added. No specific dictionaries have been built in; however, with some minor re-formatting, lots of resources available over the Internet and customized word lists can be quickly imported.

    Last but not least, in order to meet the needs of many customers, temporary licenses of the software are now on sale; moreover, without any time limit, the trial mode of the software now allows you to analyse your own texts up to 20 kb in txt format, each of which can include up to 20 short documents. To learn more, use the following link: http://www.tlab.it/en/80news.php. The Demo, the User's Manual and the Quick Introduction are available at http://www.tlab.it/en/download.php. Kind regards, The T-LAB Team. web: http://www.tlab.it/ e-mail: info@tlab.it
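At its simplest, dictionary-based classification of the kind T-LAB 8.0 describes scores each text against per-category word lists and picks the best-scoring category. A toy sketch - the dictionaries and texts below are invented, not T-LAB's:

```python
import re

# Hypothetical category dictionaries; T-LAB lets users import custom
# word lists like these into its Dictionary-Based Classification tool.
dictionaries = {
    "positive": {"good", "great", "excellent", "pleased"},
    "negative": {"bad", "poor", "terrible", "disappointed"},
}

def classify(text, dictionaries):
    """Assign the category whose word list matches the most tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    scores = {cat: sum(t in words for t in tokens)
              for cat, words in dictionaries.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("The service was great and we were pleased", dictionaries))  # positive
print(classify("A terrible, disappointing experience", dictionaries))       # negative
```

Production tools add stemming, weighting, and validation against hand-coded samples, but the count-and-compare core is the same.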
Tom Johnson

TransparencyCamp '11 Recap - Sunlight Foundation - 0 views

  • TransparencyCamp '11 Recap Nicole Aro May 4, 2011, 11:28 a.m. Sunlight’s fourth TransparencyCamp was this past weekend, and I’d like to take this moment to say to all of our attendees: Thank you -- you guys rock. To everyone else, I’m sorry that you missed such an awesome weekend, but we hope to see you next time around! This weekend was made possible by the generosity of our sponsors: Microsoft, Google, O’Reilly, Governing, iStrategyLabs, Forum One, and Adobe. I’d like to say a special thank you to Patrick Svenburg of Microsoft who stayed late to make sure we could finish setup and even helped us carry supplies(!). The weekend brought together about 250 government workers, software developers, investigative journalists, bloggers, students and open government advocates of all stripes to share stories, build relationships, and plan together to take on the challenges of building more open government. This year, TransparencyCamp also went global, bringing in 22 amazing transparency advocates from around the world to teach, learn and share with us here in the states.
Tom Johnson

Constructing the Open Data Landscape | ScraperWiki Data Blog - 0 views

  • Constructing the Open Data Landscape Posted on September 7, 2011 by Nicola Hughes In an article in today’s Telegraph regarding Francis Maude’s Public Data Corporation, Michael Cross asks: “What makes the state think it can be at the cutting edge of the knowledge economy“. He writes in terms of market and business share, giving the example of the satnav market, worth over $100bn a year, yet based on free data from the US Government’s GPS system. He credits the internet revolution for transforming public sector data into a ‘cashable proposition’. We, along with many other start-ups, foundations and civic coding groups, are part of this ‘geeky world’ of Open Data. So we’d like to add our piece concerning the Open Data movement. Michael has the right to ask this question because there is a constant custodial battle being fought every day, with every scrape and every script on the web, over the rights to data. So let me tell you about the geeks’ take on Open Data.
Tom Johnson

Medialab-Prado Madrid - 0 views

  •  
    Site available in Spanish and English. Medialab-Prado is a program of the Department of Arts of the City Council of Madrid, aimed at the production, research, and dissemination of digital culture and of the area where art, science, technology, and society intersect. Many workshops for the production of projects, conferences, seminars, encounters, project exhibitions, concerts, presentations, etc. take place in its versatile space. All activities are free and open to the general public. Our primary objective is to create a structure where both research and production are processes permeable to user participation. To that end, Medialab-Prado offers:
    • A permanent information, reception, and meeting space attended by cultural mediators
    • Open calls for the presentation of proposals and participation in the collaborative development of projects
    We have several on-going programmes, which are as follows:
    • Interactivos?: creative uses of electronics and programming
    • Inclusiva.net: research and reflections on the network culture
    • Visualizar: data visualization tools and strategies
    • Commons Lab: trans-disciplinary discussion on the Commons
    • AVLAB: audio-visual and sound creation
    http://medialab-prado.es/article/que_es
Tom Johnson

Narrative + investigative: tips from IRE 2012, Part 1 - Nieman Storyboard - A project o... - 0 views

  • Narrative + investigative: tips from IRE 2012, Part 1 At last month’s Investigative Reporters & Editors conference, in Boston, hundreds of reporters attended dozens of sessions on everything from analyzing unstructured data to working with the coolest web tools and building a digital newsroom. The conference, which started in the 1970s, after a Phoenix reporter died in a car bomb while covering the mob, is usually considered an investigative-only playground, but narrative writers can learn a lot from these journalists’ techniques and resources. When might a narrative writer need investigative skills? A few possible scenarios: • When developing a character’s timeline and activities beyond the basic backgrounding • When navigating precarious relationships with sources • When organizing large and potentially complicated amounts of material • When gathering data and documents that might provide storytelling context – geopolitical, financial, etc. We asked This Land correspondent Kiera Feldman to cover the conference with an eye for material that might be particularly useful in narrative. She netted a range of ideas, tips and resources. Today, in Part 1, she covers areas including documents and data, online research and source relationships. Check back tomorrow for Part 2, “Writing the Investigative Story,” with best practices from Ken Armstrong of the Seattle Times and Steve Fainaru of ESPN.
Tom Johnson

Timeline JS - Beautifully crafted timelines that are easy, and intuitive to use. - 0 views

  • TimelineJS can pull in media from different sources. It has built-in support for Twitter, Flickr, Google Maps, YouTube, Vimeo, Dailymotion, Wikipedia, SoundCloud, and more media types in the future. Creating one is as easy as filling in a Google spreadsheet or as detailed as JSON. Tips and tricks to best utilize TimelineJS: keep it short, and write each event as part of a larger narrative. Pick stories that have a strong chronological narrative; it does not work well for stories that need to jump around in the timeline. Include events that build up to major occurrences, not just the major events. The project is hosted on GitHub; you can report bugs and discuss features on the issues page, or ask a question on the project's Google Group. A WordPress plugin is also available. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. It is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details: http://www.gnu.org/licenses/ Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under CC BY SA. TimelineJS was created and built by VéritéCo, as a project of the Knight News Innovation Lab.
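    As a sketch of the "as detailed as JSON" option mentioned above, the snippet below assembles a TimelineJS-style JSON source in Python. The field names (`timeline`, `headline`, `type`, `date`, `startDate`) follow the classic TimelineJS schema as commonly described; treat them as assumptions and verify against the project's own documentation before relying on them.

```python
# Hedged sketch of a TimelineJS-style data source; field names are assumptions.
import json

timeline = {
    "timeline": {
        "headline": "A short, chronological story",
        "type": "default",
        "date": [
            {"startDate": "2011,5,4", "headline": "Build-up event",
             "text": "<p>An event leading toward the major occurrence.</p>"},
            {"startDate": "2012,6,15", "headline": "The major occurrence",
             "text": "<p>The event the narrative builds to.</p>"},
        ],
    }
}
config_json = json.dumps(timeline, indent=2)  # the JSON you would publish alongside the page
```

    Note how the structure mirrors the advice above: a handful of dated events, each written as part of one larger narrative.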
Tom Johnson

Michelle Minkoff » Learning to love…grep (let the computer search text for you) - 0 views

  • Blog Learning to love…grep (let the computer search text for you) Posted by Michelle Minkoff on Aug 9, 2012 in Blog, Uncategorized | No Comments I’ve gotten into the habit of posting daily learnings on Twitter, but some things require a more in-depth reminder. I also haven’t done as much paying it forward as I’d like (but I’m having a TON of fun! and dealing with health problems! but mostly fun!) I’d like to try to start posting more helpful tips here, partially as a notebook for myself, and partially to help others with similar issues. Today’s problem: I needed to search for a few lines of text, which could be contained in any one of nine files with 100,000 lines each. Opening all of the files took a very long time on my computer, not to mention executing a search. Enter the “grep” command in Terminal, which allows you to quickly search files using the power of the computer.
  •  
    An easy to use method for content analysis
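    The grep workflow described above can be sketched in a few shell commands. This is a minimal illustration: the two tiny sample files created here are stand-ins for the nine 100,000-line files in the original problem.

```shell
# Create two small stand-in files (the real case was nine ~100,000-line files).
printf 'alpha\nthe missing line\nbeta\n' > part1.txt
printf 'gamma\ndelta\n' > part2.txt

# Search every file at once; -n prints the file name and line number of each hit.
grep -n 'missing line' part1.txt part2.txt

# Count matching lines per file instead of printing them.
grep -c 'missing line' part1.txt

# Recurse through a whole directory of files.
grep -rn 'missing line' .
```

    Because grep streams through the files rather than loading them into an editor, it stays fast on files that would choke a GUI text editor.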
Tom Johnson

International Dataset Search - 0 views

  • International Dataset Search Description: The TWC International Open Government Dataset Catalog (IOGDC) is a linked data application based on metadata scraped from an increasing number of international dataset catalog websites publishing a rich variety of government data. Metadata extracted from these catalog websites is automatically converted to RDF linked data and re-published via the TWC LOGD SPARQL endpoint and made available for download. The TWC IOGDC demo site features an efficient, reconfigurable faceted browser with search capabilities, offering a compelling demonstration of the value of a common metadata model for open government dataset catalogs. We believe that the vocabulary choices demonstrated by IOGDC highlight the potential for useful linked data applications to be created from open government catalogs and will encourage the adoption of such a standard worldwide. Warning: this demo will crash IE7 and IE8. Contributors: Eric Rozell, Jinguang Zheng, Yongmei Shi. Live demo: http://logd.tw.rpi.edu/demo/international_dataset_catalog_search Notes: this is an experimental demo, and some queries may take longer to respond (30-60 seconds); please refresh the page if the demo does not load. Our metadata model can be accessed here. The procedure for getting and publishing metadata is described here. The RDF dump of the datasets can be downloaded here. International OGD Catalog Search (searching 736,578 datasets)
  •  
    Loads surprisingly quickly. Try entering your favorite search term in top blue box. Can use quotes to define phrases.
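    Since the catalog is re-published through a SPARQL endpoint, here is a hedged sketch of how a query URL for such an endpoint might be assembled in Python. The endpoint path, the example query, and the `output=json` parameter are illustrative assumptions, not the documented interface of the TWC LOGD service.

```python
# Sketch: build a GET request URL for a SPARQL endpoint.
# The endpoint path and parameter names are assumptions for illustration.
from urllib.parse import urlencode

def build_sparql_request(endpoint: str, query: str) -> str:
    """Return a GET URL that asks the endpoint to run `query` and reply in JSON."""
    return endpoint + "?" + urlencode({"query": query, "output": "json"})

query = "SELECT ?d WHERE { ?d a <http://rdfs.org/ns/void#Dataset> } LIMIT 10"
url = build_sparql_request("http://logd.tw.rpi.edu/sparql", query)
```

    The sketch only constructs the URL; fetching it (e.g. with urllib or a SPARQL client library) and parsing the JSON results is left to the caller.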
Tom Johnson

Interactive Dynamics for Visual Analysis - 0 views

  •  
    A taxonomy of tools that support the fluent and flexible use of visualizations Jeffrey Heer, Stanford University Ben Shneiderman, University of Maryland, College Park The increasing scale and availability of digital data provides an extraordinary resource for informing public policy, scientific discovery, business strategy, and even our personal lives. To get the most out of such data, however, users must be able to make sense of it: to pursue questions, uncover patterns of interest, and identify (and potentially correct) errors. In concert with data-management systems and statistical algorithms, analysis requires contextualized human judgments regarding the domain-specific significance of the clusters, trends, and outliers discovered in data.
Tom Johnson

Investigative Dashboard - Resources | Resources for investigators - 0 views

  •  
    The Investigative Dashboard (ID) is a work in progress designed to showcase the potential for collaboration and data-sharing between investigative reporters across the world. The initiative is spearheaded by the Organized Crime and Corruption Reporting Project, the Romanian Center for Investigative Journalism, the Forum for African Investigative Reporters and the International Center for Journalists, and will expand to include other institutional members of the Global Investigative Journalism Network. The project is coordinated by Paul Cristian Radu (of OCCRP and CRJI) and Justin Arenstein (of FAIR) and was developed while both were in residence at Stanford University as Knight fellows. The John S. Knight Fellowships for Professional Journalists made the ID possible by providing access to the know-how of co-fellow journalists and of experts at Stanford University and in Silicon Valley. This first iteration of the ID website shares detailed methodologies, resources, and links for journalists to track money, shareholders, and company ownership across international borders. It also shares video tutorials and other tools to help journalists navigate often rapidly evolving data sources. Future versions of ID will offer more advanced collaborative workspaces, data archives, and discounted (or, where possible, free) access to expensive or proprietary research services. But, perhaps most importantly, the ID will campaign for investigative centres across the world to collaborate with each other to improve the depth and impact of their reportage.
Tom Johnson

Telling Stories With Data - 0 views

  •  
    Goals and Topics Our goal with this workshop is to bring together data storytellers from diverse disciplines and continue the conversation of how these different fields utilize each other's techniques and articulate principles for telling data narratives. Our target participants are researchers, journalists, bloggers, and others who seek to understand how visualizations support narrative, stories, and other communicative goals. Participants may be designers of such visualizations or designers of tools that support the creation of narrative visualizations. Visualizations that serve as a "community mirror" and thus create opportunities for discussion, reflection and sharing within a social network are also suitable topics. While we are inspired by many visualizations that display personal histories and storylines, our focus is on visualization situated in storytelling contexts, not necessarily visualizations of stories. Specific topics of interest may include, but are not limited to:
    Media and genres
    • Embedding visualizations in social media to tell stories
    • Multimodal storytelling with visualization (e.g. narrated or acted visualization, such as Rosling's Gapminder presentations)
    • Non-traditional narrative: games and other procedural narratives incorporating data
    • Visualization in (data)journalism: how news stories and visualization can complement each other
    Visualizations that support specific types of stories
    • Personal stories ("Here's a history of my cancer treatment")
    • Community and collaboration stories ("How has our Facebook group changed over the past year?")
    • Public data sets and narrative ("What is your Senator doing with your taxes?")
    • Fictional, semi-fictional, and non-fiction stories