
DJCamp2011: Group items tagged "text analysis"


Tom Johnson

T-LAB Tools for Text Analysis

  •  
    The all-in-one software for Content Analysis and Text Mining

    Hello,

    We are pleased to announce the release of T-LAB 8.0. This version represents a major change in the usability and effectiveness of our software for text analysis. The most significant improvements concern the integration of bottom-up (i.e. unsupervised) methods for exploratory text analysis with top-down (i.e. supervised) approaches for the automated classification of textual units such as words, sentences, paragraphs and documents.

    Among other things, this means that - besides discovering emerging patterns of words and themes from texts - users can now easily build, apply and validate their own models (e.g. dictionaries of categories or pre-existing manual categorizations) both for classical content analysis and for sentiment analysis. For this purpose several T-LAB functionalities have been expanded and a new, ergonomic and powerful tool named 'Dictionary-Based Classification' has been added. No specific dictionaries are built in; however, with some minor re-formatting, many resources available over the Internet, as well as customized word lists, can be quickly imported.

    Last but not least, in order to meet the needs of many customers, temporary licenses of the software are now on sale; moreover, the trial mode of the software now allows you, without any time limit, to analyse your own texts of up to 20 kb in txt format, each of which can include up to 20 short documents.

    To learn more, use the following link: http://www.tlab.it/en/80news.php
    The Demo, the User's Manual and the Quick Introduction are available at http://www.tlab.it/en/download.php

    Kind regards,
    The T-LAB Team
    web: http://www.tlab.it/
    e-mail: info@tlab.it
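    For readers who want to experiment with the idea, the sketch below shows in a few lines of Python what dictionary-based classification boils down to: scoring each text against hand-built category word lists and picking the best match. It is only an illustration of the general technique, not T-LAB's implementation, and the category dictionaries are invented for the example.

        # Minimal dictionary-based classification sketch (illustrative only;
        # not T-LAB's implementation). The category word lists are invented.
        import re
        from collections import Counter

        DICTIONARIES = {
            "economy": {"budget", "tax", "jobs", "inflation", "growth"},
            "health":  {"hospital", "doctor", "vaccine", "care", "patients"},
        }

        def classify(text):
            words = Counter(re.findall(r"[a-z']+", text.lower()))
            # Score = number of dictionary-word occurrences per category.
            scores = {cat: sum(words[w] for w in vocab)
                      for cat, vocab in DICTIONARIES.items()}
            best = max(scores, key=scores.get)
            return (best if scores[best] > 0 else None), scores

        label, scores = classify("The budget bill would cut tax rates and add jobs.")
        print(label, scores)   # economy {'economy': 3, 'health': 0}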
Tom Johnson

The Overview Project » Using Overview to analyze 4500 pages of documents on security contractors in Iraq

  • Using Overview to analyze 4500 pages of documents on security contractors in Iraq, by Jonathan Stray, 02/21/2012

    This post describes how we used a prototype of the Overview software to explore 4,500 pages of incident reports concerning the actions of private security contractors working for the U.S. State Department during the Iraq war. This was the core of the reporting work for our previous post, where we reported the results of that analysis.

    The promise of a document set like this is that it will give us some idea of the broader picture, beyond the handful of really egregious incidents that have made headlines. To do this we have to take into account most or all of the documents in some way, not just the small number that might match a particular keyword search. But at one page per minute, eight hours per day, it would take about 10 days for one person to read all of these documents - to say nothing of taking notes or doing any sort of followup. This is exactly the sort of problem that Overview would like to solve.

    The reporting was a multi-stage process:
    1. Splitting the massive PDFs into individual documents and extracting the text
    2. Exploration and subject tagging with the Overview prototype
    3. Random sampling to estimate the frequency of certain types of events
    4. Followup and comparison with other sources
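    Step 3, the random-sampling estimate, is simple to reproduce. The sketch below reads a random subset of the extracted documents, counts how many mention a given type of event, and reports the estimated proportion with a rough 95% margin of error. The docs/ layout and the keyword test are assumptions for illustration (in the actual reporting a person read and coded each sampled document); this is not Overview's code.

        # Random sampling to estimate how often a type of event appears.
        # Assumes one extracted-text file per report under docs/ (hypothetical
        # layout); the keyword test stands in for a human coding each sample.
        import glob, math, random

        paths = glob.glob("docs/*.txt")
        sample = random.sample(paths, min(300, len(paths)))

        hits = sum("escalation of force" in open(p, encoding="utf-8", errors="replace").read().lower()
                   for p in sample)

        p_hat = hits / len(sample)
        margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))   # ~95% confidence interval
        print(f"About {p_hat:.1%} of sampled reports (+/- {margin:.1%}) mention this event type.")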
Tom Johnson

Michelle Minkoff » Learning to love…grep (let the computer search text for you)

  • Blog: Learning to love…grep (let the computer search text for you). Posted by Michelle Minkoff on Aug 9, 2012 in Blog, Uncategorized

    I've gotten into the habit of posting daily learnings on Twitter, but some things require a more in-depth reminder. I also haven't done as much paying it forward as I'd like (but I'm having a TON of fun! and dealing with health problems! but mostly fun!). I'd like to try to start posting more helpful tips here, partially as a notebook for myself, and partially to help others with similar issues.

    Today's problem: I needed to search for a few lines of text, which could be contained in any one of nine files with 100,000 lines each. Opening all of the files took a very long time on my computer, not to mention executing a search. Enter the "grep" command in Terminal, which allows you to quickly search files using the power of the computer.
  •  
    An easy-to-use method for content analysis.
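    The command Minkoff reaches for is the standard Unix grep (for example, grep -n "phrase" *.txt prints every matching line with its file name and line number). For anyone more comfortable in Python, the sketch below does the same job by streaming each file line by line, so even very large files never have to be opened in an editor. The directory and search phrase are placeholders.

        # Rough Python equivalent of grep: stream each file line by line and
        # print matches with file name and line number. Paths are placeholders.
        import glob

        pattern = "the few lines of text I am looking for"
        for path in glob.glob("data/*.txt"):              # e.g. the nine 100,000-line files
            with open(path, encoding="utf-8", errors="replace") as f:
                for lineno, line in enumerate(f, start=1):
                    if pattern in line:
                        print(f"{path}:{lineno}: {line.rstrip()}")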
Tom Johnson

The Overview Project » Document mining shows Paul Ryan relying on the programs he criticizes

  •  
    Document mining shows Paul Ryan relying on the programs he criticizes, by Jonathan Stray, 11/02/2012

    One of the jobs of a journalist is to check the record. When Congressman Paul Ryan became a vice-presidential candidate, Associated Press reporter Jack Gillum decided to examine the candidate through his own words. Hundreds of Freedom of Information requests and 9,000 pages later, Gillum wrote a story showing that Ryan has asked for money from many of the same federal programs he has criticized as wasteful, including stimulus money and funding for alternative fuels.

    This would have been much more difficult without special software for journalism. In this case Gillum relied on two tools: DocumentCloud to upload, OCR, and search the documents, and Overview to automatically sort the documents into topics and visualize the contents. Both projects are previous Knight News Challenge winners.

    But first Gillum had to get the documents. As a member of Congress, Ryan isn't subject to the Freedom of Information Act. Instead, Gillum went to every federal agency - whose files are covered under FOIA - for copies of letters or emails that might identify Ryan's favored causes, names of any constituents who sought favors, and more. Bit by bit, the documents arrived - on paper. The stack grew over weeks, eventually piling up two feet high on Gillum's desk. Then he scanned the pages and loaded them into the AP's internal installation of DocumentCloud. The software converts the scanned pages to searchable text, but there were still 9,000 pages of material.

    That's where Overview came in. Developed in house at the Associated Press, this open-source visualization tool processes the full text of each document and clusters similar documents together, producing a visualization that graphically shows the contents of the complete document set. "I used Overview to take these 9000 pages of documents, and knowing there was probably going to be a lot of garbage or ext
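    The clustering step Overview performs rests on a standard idea: represent each document as a weighted word vector and group documents whose vectors are similar. The sketch below illustrates that idea with scikit-learn's TF-IDF vectorizer and k-means; it is a generic stand-in, not the AP's or Overview's actual code, and the letters/ directory and cluster count are assumptions.

        # Generic sketch of clustering documents by text similarity
        # (illustrative; not Overview's actual implementation).
        import glob
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        paths = glob.glob("letters/*.txt")        # hypothetical extracted-text files
        docs = [open(p, encoding="utf-8", errors="replace").read() for p in paths]

        tfidf = TfidfVectorizer(stop_words="english", max_features=5000)
        X = tfidf.fit_transform(docs)             # each document becomes a weighted word vector

        km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)   # assumes >= 8 documents
        for label, path in sorted(zip(km.labels_, paths)):
            print(label, path)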
Tom Johnson

Mapping Texts / Texas

  •  
    Assessing Language Patterns: A Look at Texas Newspapers, 1829-2008. This visualization plots the language patterns embedded in 232,567 pages of historical Texas newspapers, as they evolved over time and space. For any date range and location, you can browse the most common words (word counts), named entities (people, places, etc.), and highly correlated words (topic models). [ About Mapping Texts ]
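    The simplest of those three views, word counts, amounts to tallying the vocabulary of whatever slice of pages the reader selects. A minimal sketch of that tally is below; the pages list is a hypothetical stand-in for the OCR'd newspaper text, and the stopword list is truncated for brevity.

        # Sketch of the "word counts" view: most common words in a slice of pages.
        # The pages list is a hypothetical stand-in for OCR'd newspaper text.
        import re
        from collections import Counter

        STOPWORDS = {"the", "and", "of", "to", "a", "in", "that", "for", "was", "is"}

        pages = ["text of one newspaper page for the chosen date range and place",
                 "text of another page"]          # hypothetical input

        counts = Counter()
        for page in pages:
            words = re.findall(r"[a-z]{3,}", page.lower())
            counts.update(w for w in words if w not in STOPWORDS)

        print(counts.most_common(25))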
Tom Johnson

ReConstitution 2012

  •  
    "ReConstitution 2012, a fun experiment by Sosolimited, processes transcripts from the presidential debates, and recreates them with animated words and charts. Part data visualization, part experimental typography, ReConstitution 2012 is a live web app linked to the US Presidential Debates. During and after the three debates, language used by the candidates generates a live graphical map of the events. Algorithms track the psychological states of Romney and Obama and compare them to past candidates. The app allows the user to get beyond the punditry and discover the hidden meaning in the words chosen by the candidates. As you let the transcript run, numbers followed by their units (like "18 months") flash on the screen, and trigger words for emotions like positivity, negativity, and rage are highlighted yellow, blue, and red, respectively. You can also see the classifications in graph form. There are a handful of less straightforward text classifications for truthy and suicidal, which are based on linguistic studies, which in turn are based on word frequencies. These estimates are more fuzzy. So, as the creators suggest, it's best not to interpret the project as an analytical tool, and more of a fun way to look back at the debate, which it is. It's pretty fun to watch. Here's a short video from Sosolimited for more on how the application works: "
Tom Johnson

Corporate Accountability Data in Influence Explorer - Sunlight Labs: Blog

  •  
    Again, US-centric, but this might generate some ideas of what could be accomplished in your city or country.

    Late yesterday we announced a bunch of new features for Influence Explorer: http://sunlightlabs.com/blog/2011/ie-corporate-accountability/ As the blog post explains, you can now find information about a corporation's EPA violations, federal advisory committee memberships, and participation in the rulemaking process -- all in one place. I wanted to highlight that last feature a bit more, though.

    To my knowledge, this is the first time that the full corpus of public comments submitted to regulations.gov has been available for bulk download and analysis. This isn't a coincidence: regulations.gov is built using technologies that make scraping it unusually difficult. This is unfortunate, since everyone seems to agree that federal rulemakings are gaining in importance -- both because of congressional gridlock that leaves the regulatory process as a second-best option, and because of calls to simplify the regulatory landscape as a pro-growth measure. It's an area where influence is certainly exerted -- rulemakers are obliged to review every comment -- but little attention is paid to who's flooding dockets with comments, and in which directions rules are being pushed.

    It's taken us several months to develop a reliable solution and to obtain past rulemakings, but we now have the data in hand. We plan to do much more with this dataset, and we're hoping that others will want to dig in, too. You can find a link to the bulk download options in the post above -- the full compressed archive of extracted text and metadata is ~16GB, but we've provided options for grabbing individual agencies' or dockets' data. If anyone wants the original documents (PDFs, DOCs, etc.) we can talk through how to make that happen, but as they clock in at 1.5TB we'll want to make sure folks know what they're getting into before we spend the time and bandwidth. Finally, note that we currently o
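    If you do pull the bulk archive down, the practical trick is to stream it rather than unpack sixteen-plus gigabytes of text onto disk. The sketch below walks a tar.gz archive in place and counts comment files per agency; the archive name and the agency/docket/comment.txt layout are hypothetical assumptions, so check Sunlight's download page for the real structure.

        # Stream a large compressed archive of comment text without unpacking it.
        # The archive name and internal layout are hypothetical assumptions.
        import tarfile
        from collections import Counter

        per_agency = Counter()
        with tarfile.open("regulations_text.tar.gz", "r:gz") as tar:
            for member in tar.getmembers():
                if member.isfile() and member.name.endswith(".txt"):
                    agency = member.name.split("/")[0]    # assumes agency/docket/comment.txt paths
                    per_agency[agency] += 1

        print(per_agency.most_common(10))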
Tom Johnson

Politilines

  •  
    Visualizing the words used in the 2011-2012 Republican primary debates. The method: we collected transcripts from the American Presidency Project at UCSB, categorized them by hand, then ranked lemmatized word-phrases (or n-grams) by their frequency of use. Word-phrases can be made of up to five words. Our ranking algorithm accounts for things such as exclusive word-phrases - meaning it won't count "United States" twice if it's used in a higher n-gram such as "President of the United States."

    While still in beta, the mini-app is responsive and easy to use. The next challenge, I think, is to really show what everyone talked about. For example, click on education and you see that Newt Gingrich, Ron Paul, and Rick Perry brought it up. Then roll over the names to see the words each candidate used related to that topic. You get some sense of content, but it's still hard to decipher what each actually said about education.
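    The exclusive-phrase idea can be reconstructed roughly as follows: count every word-phrase up to five words long, then drop any phrase that never occurs outside a longer, equally frequent phrase that contains it. The sketch below is a simplified reconstruction, not Politilines' actual ranking code, and it skips lemmatization; the input sentence is a toy example.

        # Simplified sketch of ranking "exclusive" word-phrases (n-grams up to 5 words).
        # Not Politilines' actual algorithm; lemmatization is skipped.
        import re
        from collections import Counter

        def ngrams(words, max_n=5):
            for n in range(1, max_n + 1):
                for i in range(len(words) - n + 1):
                    yield " ".join(words[i:i + n])

        text = "the president of the united states spoke about the united states economy"  # toy input
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(ngrams(words))

        def exclusive(phrase):
            # Keep a phrase only if it sometimes occurs outside every longer
            # phrase that contains it (i.e. its count exceeds theirs).
            longest_rival = max((c for p, c in counts.items()
                                 if phrase in p and p != phrase), default=0)
            return counts[phrase] > longest_rival

        ranked = sorted((p for p in counts if exclusive(p)),
                        key=lambda p: (-counts[p], -len(p.split())))
        print(ranked[:10])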
Tom Johnson

Shorenstein Center paper argues for collaboration in investigative reporting | Harvard ...

  • Shorenstein Center paper argues for collaboration in investigative reporting. Thursday, June 2, 2011. Sandy Rowe, former editor of The Oregonian, and Knight Fellow at the Shorenstein Center, fall 2010 and spring 2011. Photograph by Martha Stewart.
    Shorenstein Center, Harvard Kennedy School. Contact: Janell Sims, janell_sims@harvard.edu, http://www.hks.harvard.edu/presspol/index.html

    Media organizations may be able to perform their watchdog roles more effectively working together than apart. That is one conclusion in a new paper, "Partners of Necessity: The Case for Collaboration in Local Investigative Reporting," authored by Sandy Rowe, former editor of Portland's The Oregonian. The paper is based on interviews and research that Rowe conducted while serving as a Knight Fellow at the Shorenstein Center on the Press, Politics and Public Policy at Harvard Kennedy School.

    Rowe's research examines the theory underpinning collaborative work and shows emerging models of collaboration that can lead to more robust investigative and accountability reporting in local and regional markets. "Growing evidence suggests that collaborations and partnerships between new and established news organizations, universities and foundations may be the overlooked key for investigative journalism to thrive at the local and state levels," Rowe writes. "These partnerships, variously and often loosely organized, can share responsibility for content creation, generate wider distribution of stories and spread the substantial cost of accountability journalism."

    Rowe was editor of The Oregonian from 1993 until January 2010. Under her leadership, the newspaper won five Pulitzer Prizes including the Gold Medal for Public Service. Rowe chairs the Board of Visitors of The Knight Fellowships at Stanford University and is a board member of the Committee to Protect Journalists. From 1984 until April 1993, Rowe was executive editor and vice president of The Virginian-Pilot and The Ledger-Star, Norfolk and Virginia Beach, Virginia. The Virginian-Pilot won the Pulitzer Prize for general news reporting under her leadership.

    Rowe's year-long fellowship at the Shorenstein Center was funded by the John S. and James L. Knight Foundation. Read the full paper on the Shorenstein Center's website.
Tom Johnson

Jigsaw: Visual Analytics for Exploring and Understanding Document Collections

  •  
    Be sure to view the video tutorial: http://www.cc.gatech.edu/gvu/ii/jigsaw/Jigsaw-tutorial.mov
    See also the system views page: http://www.cc.gatech.edu/gvu/ii/jigsaw/views.html

    Jigsaw: Visual Analytics for Exploring and Understanding Document Collections - System Views. Jigsaw presents the individual reports in a document collection, and the entities within those reports, through a series of visualizations that we call the system views. Below, we illustrate each view provided by the system and briefly describe its characteristics. Click on the individual images to see a larger version of each view. A tutorial video also illustrates the different views, and the interactive behavior of each view can be seen on the video tutorial page. -tj
  •  
    Also see "The Information Interfaces Group, an HCI research group in the School of Interactive Computing at Georgia Tech, develops computing technologies that help people take advantage of information to enrich their lives." http://www.cc.gatech.edu/gvu/ii/
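    Views like Jigsaw's list and graph views rest on a simple underlying structure: an index of which entities appear in which documents, so that documents sharing an entity can be connected. The toy sketch below builds such an index; the "entity extraction" is a naive capitalized-word heuristic and the documents are invented, so this is only a stand-in for Jigsaw's actual entity identification.

        # Toy entity-to-document index of the kind that drives Jigsaw-style views.
        # Entity "extraction" here is a naive capitalized-word heuristic.
        import re
        from collections import defaultdict

        documents = {
            "report_01.txt": "Jack Gillum filed requests in Washington about the stimulus funds.",
            "report_02.txt": "Agencies in Washington returned letters about the stimulus funds.",
        }  # invented examples

        entity_index = defaultdict(set)
        for doc_id, text in documents.items():
            for entity in re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", text):
                entity_index[entity].add(doc_id)

        # Entities shared by more than one document hint at connections between reports.
        for entity, docs in sorted(entity_index.items()):
            if len(docs) > 1:
                print(entity, "->", sorted(docs))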