
CTLT and Friends: Group items tagged statistics


Gary Brown

Does testing for statistical significance encourage or discourage thoughtful ... - 1 views

  • Does testing for statistical significance encourage or discourage thoughtful data analysis? Posted by Patricia Rogers on October 20th, 2010
  • Epidemiology, 9(3): 333–337, which argues not only for thoughtful interpretation of findings, but for not reporting statistical significance at all.
  • We also would like to see the interpretation of a study based not on statistical significance, or lack of it, for one or more study variables, but rather on careful quantitative consideration of the data in light of competing explanations for the findings.
  • ...6 more annotations...
  • we prefer a researcher to consider whether the magnitude of an estimated effect could be readily explained by uncontrolled confounding or selection biases, rather than simply to offer the uninspired interpretation that the estimated effect is significant, as if neither chance nor bias could then account for the findings.
  • Many data analysts appear to remain oblivious to the qualitative nature of significance testing.
  • statistical significance is itself only a dichotomous indicator.
  • it cannot convey much useful information
  • Even worse, those two values often signal just the wrong interpretation. These misleading signals occur when a trivial effect is found to be ‘significant’, as often happens in large studies, or when a strong relation is found ‘nonsignificant’, as often happens in small studies.
  • Another useful paper on this issue is Kristin Sainani (2010), “Misleading Comparisons: The Fallacy of Comparing Statistical Significance,” Physical Medicine and Rehabilitation, Vol. 2 (June), 559–562, which discusses the need to look carefully at within-group differences as well as between-group differences, and at sub-group significance compared to interaction. She concludes: ‘Readers should have a particularly high index of suspicion for controlled studies that fail to report between-group comparisons, because these likely represent attempts to “spin” null results.’
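The "misleading signals" point in the annotations above is easy to reproduce numerically. Below is a minimal sketch (Python; the summary statistics are invented for illustration, not taken from either paper): a trivial effect in a large study comes out 'significant', while a much larger effect in a small study does not.

    from scipy.stats import ttest_ind_from_stats

    # Trivial effect, large study: group means differ by 0.05 SD, n = 10,000 per group
    t1, p1 = ttest_ind_from_stats(mean1=100.5, std1=10, nobs1=10_000,
                                  mean2=100.0, std2=10, nobs2=10_000)
    print(f"trivial effect, large n: p = {p1:.4f}")   # ~0.0004 -> 'significant'

    # Strong effect, small study: group means differ by 0.5 SD, n = 15 per group
    t2, p2 = ttest_ind_from_stats(mean1=75.0, std1=10, nobs1=15,
                                  mean2=70.0, std2=10, nobs2=15)
    print(f"strong effect, small n: p = {p2:.2f}")    # ~0.18 -> 'nonsignificant'

The p-value rewards sample size as much as effect size, which is exactly why both papers urge quantitative interpretation over the significant/nonsignificant dichotomy.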
Gary Brown

Mini-Digest of Education Statistics, 2009 - 0 views

  • This publication is a pocket-sized compilation of statistical information covering the broad field of American education from kindergarten through graduate school. The statistical highlights are excerpts from the Digest of Education Statistics, 2009.
  •  
    just released for 2009, great resource
Nils Peterson

YouTube - Michael Wesch - PdF2009 - The Machine is (Changing) Us - 1 views

shared by Nils Peterson on 18 Sep 09
  •  
    Michael Wesch updates Machine is Us/ing Us (30-minute video). His point: our tools are changing us. Worth thinking about, for us greybeards, are his statistics on the number of hours of video uploaded to YouTube per day. For someone who remembers what a byte is, this is a paradigm-shifting amount of data being moved and stored for free.
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Corinna Lo

Statistics Show Social Media Is Bigger Than You Think « Socialnomics - Social... - 0 views

  •  
    "People care more about how their social graph ranks products and services than how Google ranks them." "We no longer search for the news, the news finds us... "
Nils Peterson

Does having a computer at home improve results at school? | A World Bank Blog on ICT us... - 0 views

  • Does differential access to computer technology at home compound the educational disparities between rich and poor? And would a program of government provision of computers to early secondary school students reduce these disparities? In this case, Vigdor and Ladd found that the introduction of home computer technology is associated with modest but statistically significant and persistent negative impacts on student math and reading test scores. Further evidence suggests that providing universal access to home computers and high-speed internet access would broaden, rather than narrow, math and reading achievement gaps.
    • Nils Peterson
       
      so there is some contextualization of computers in the home that is also needed... as I find when my daughter wants to spend computer time dressing up Barbie.
  • A 2010 report from the OECD (Are New Millennium Learners Making the Grade? [pdf]) considers a number of studies, combined with new analysis it has done based on internationally comparable student achievement data (PISA), and finds that, indeed, gains in educational performance are correlated with the frequency of computer use at home.
  • One way to try to make sense of all of these studies together is to consider that ICTs may function as a sort of 'amplifier' of existing learning environments in homes.  Where such environments are conducive to student learning (as a result, for example, of strong parental direction and support), ICT use can help; where home learning environments are not already strong (especially, for example, where children are left unsupervised to their own devices -- pun intended), we should not be surprised if the introduction of ICTs has a negative effect on learning.
  • ...1 more annotation...
  • On a broader note, and in response to his reading of the Vigdor/Ladd paper, Warschauer states on his insightful blog that the "aim of our educational efforts should not be mere access, but rather development of a social environment where access to technology is coupled with the most effective curriculum, pedagogy, instruction, and assessment."
    • Nils Peterson
       
      specific things need to be done to 'mobilize' the learning latent in the computing environment.
Joshua Yeidel

Effect Size Resources - CEM - 3 views

  •  
    "'Effect Size' is a way of expressing the difference between two groups. In particular, if the groups have been systematically treated differently in an experiment, the Effect Size indicates how effective the experimental treatment was."
  •  
    An interesting approach to comparing parametric statistics between groups
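One widely used version of the idea quoted above is Cohen's d: the difference between the two group means divided by their pooled standard deviation. A minimal sketch (Python, invented numbers):

    import math

    def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
        """Standardized mean difference using the pooled standard deviation."""
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (mean1 - mean2) / pooled_sd

    # Invented example: treated group mean 75 (SD 10, n 30) vs. control mean 70 (SD 10, n 30)
    print(cohens_d(75, 10, 30, 70, 10, 30))   # 0.5 -- a "medium" effect by Cohen's rule of thumb

Unlike a p-value, the result does not grow with sample size, which is why effect sizes pair naturally with the significance-testing critiques bookmarked above.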
Gary Brown

Educators Mull How to Motivate Professors to Improve Teaching - Curriculum - The Chroni... - 4 views

  • "Without an unrelenting focus on quality—on defining and measuring and ensuring the learning outcomes of students—any effort to increase college-completion rates would be a hollow effort indeed."
  • If colleges are going to provide high-quality educations to millions of additional students, they said, the institutions will need to develop measures of student learning that can assure parents, employers, and taxpayers that no one's time and money are being wasted.
  • "Effective assessment is critical to ensure that our colleges and universities are delivering the kinds of educational experiences that we believe we actually provide for students," said Ronald A. Crutcher, president of Wheaton College, in Massachusetts, during the opening plenary. "That data is also vital to addressing the skepticism that society has about the value of a liberal education."
  • ...13 more annotations...
  • But many speakers insisted that colleges should go ahead and take drastic steps to improve the quality of their instruction, without using rigid faculty-incentive structures or the fiscal crisis as excuses for inaction.
  • Handing out "teacher of the year" awards may not do much for a college
  • W.E. Deming argued, quality has to be designed into the entire system and supported by top management (that is, every decision made by CEOs and Presidents, and support systems as well as operations) rather than being made the responsibility solely of those delivering 'at the coal face'.
  • I see a certain cluelessness among those who think one can create substantial change based on volunteerism
  • Current approaches to broaden the instructional repertoires of faculty members include faculty workshops, summer leave, and individual consultations, but these approaches work only for those relatively few faculty members who seek out opportunities to broaden their instructional methods.
  • The approach that makes sense to me is to engage faculty members at the departmental level in a discussion of the future and the implications of the future for their field, their college, their students, and themselves. You are invited to join an ongoing discussion of this issue at http://innovate-ideagora.ning.com/forum/topics/addressing-the-problem-of
  • Putting pressure on professors to improve teaching will not result in better education. The primary reason is that they do not know how to make real improvements. The problem is that in many fields of education there is either not enough research, or they do not have good ways of evaluating the results of their teaching.
  • Then there needs to be a research based assessment that can be used by individual professors, NOT by the administration.
  • Humanities educators either have to learn enough statistics and cognitive science so they can make valid scientific comparisons of different strategies, or they have to work with cognitive scientists and statisticians
  • good teaching takes time
  • On the measurement side, about half of the assessments constructed by faculty fail to meet reasonable minimum standards for validity. (Interestingly, these failures leave the door open to a class action lawsuit. Physicians are successfully sued for failing to apply scientific findings correctly; commerce is replete with lawsuits based on measurement errors.)
  • The elephant in the corner of the room --still-- is that we refuse to measure learning outcomes and impact, especially proficiencies generalized to one's life outside the classroom.
  • until universities stop playing games to make themselves look better because they want to maintain their comfortable positions and actually look at what they can do to improve nothing is going to change.
  •  
    our work, our friends (Ken and Jim), and more context that shapes our strategy.
  •  
    How about using examples of highly motivational lecture and teaching techniques, like the Richard Dawkins video I presented on this forum recently? Even if teachers do not consciously try to adopt good working techniques, there is at least a strong subconscious human tendency to mimic behaviors. I think that if teachers see more effective techniques, they will automatically begin to adopt them.
Theron DesRosier

Education Data Model (National Forum on Education Statistics). Strategies for building ... - 0 views

  •  
    "The National Education Data Model is a conceptual but detailed representation of the education information domain focused at the student, instructor and course/class levels. It delineates the relationships and interdependencies between the data elements necessary to document, operate, track, evaluate, and improve key aspects of an education system. The NEDM strives to be a shared understanding among all education stakeholders as to what information needs to be collected and managed at the local level in order to enable effective instruction of students and superior leadership of schools. It is a comprehensive, non-proprietary inventory and a map of education information that can be used by schools, LEAs, states, vendors, and researchers to identify the information required for teaching, learning, administrative systems, and evaluation of education programs and approaches. "
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU), and the American Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • ...13 more annotations...
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university are gathering data about how their students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased--there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: Of what use is the VSA?
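The article notes the faculty assessors were trained and their rubric scores "normed" for interrater reliability, but does not say which reliability statistic was used. A common choice for categorical rubric scores is Cohen's kappa, which discounts the agreement two raters would reach by chance. A minimal sketch (Python, invented ratings):

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical: two trained assessors score the same ten proposals on a 4-point rubric
    rater_a = [3, 4, 2, 3, 4, 1, 3, 2, 4, 3]
    rater_b = [3, 4, 2, 2, 4, 1, 3, 2, 4, 4]

    print(cohen_kappa_score(rater_a, rater_b))  # ~0.73 here; 1.0 = perfect, 0 = chance-level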
Nils Peterson

Walmart's Growth: An Awesome Visualization Of The Retailer's Rapid Expansion (INFOGRAPHIC) - 3 views

  • Beginning with the first Walmart store, which opened in Rogers, Arkansas, in 1962, this incredible visualization -- put together by FlowingData, a data visualization website run by UCLA statistics doctoral student Nathan Yau -- traces the expansion of the seemingly omnipresent discount chain across America
  •  
    an interesting visualization in its own right, perhaps another tool set we might use at "FlowingData"
Gary Brown

News: The Specialists - Inside Higher Ed - 4 views

  • Choosing the academic program at a single university, they say, is a relic of a time before online education made it possible for a student in Oregon to take courses at a university in Florida
  • Much of the talk about this imminent unbundling has come from colleges that predict that students might want to transfer credits from other colleges that might have different missions. But the competition may also come from entities that do not even offer degrees.
  • The company outsources grading and other work to master’s degree-holders in India for much less than it would cost to employ similarly qualified teaching assistants in the United States.
  • ...1 more annotation...
  • the confluence of several economic factors — particularly rising tuition and the unwillingness of many students to take on exorbitant debt, especially as they see their degree-holding peers struggling to land jobs — may force institutions to consider turning to outside specialists if they want to continue offering certain courses. And if they don’t, Smith says, students will likely turn to the outside specialists themselves.
  •  
    Variations on a theme, but notable now in particular as we debate general education reform.
Gary Brown

Ethics? Let's Outsource Them! - Brainstorm - The Chronicle of Higher Education - 4 views

  • Many students are already buying their papers from term-paper factories located in India and other third world countries. Now we are sending those papers back there to be graded. I wonder how many people are both writing and grading student work, and whether, serendipitously, any of those people ever get the chance to grade their own writing.”
  • The great learning loop of outcomes assessment is neatly “closed,” with education now a perfect, completed circle of meaningless words.
  • With outsourced grading, it’s clearer than ever that the world of rubrics behaves like that wicked southern plant called kudzu, smothering everything it touches. Certainly teaching and learning are being covered over by rubrics, which are evolving into a sort of quasi-religious educational theory controlled by priests whose heads are so stuck in playing with statistics that they forget to try to look openly at what makes students turn into real, viable, educated adults and what makes great, or even good, teachers.
  • ...2 more annotations...
  • Writing an essay is an art, not a science. As such, people, not instruments, must take its measure, and judge it. Students have the right to know who is doing the measuring. Instead of going for outsourced grading, Ms. Whisenant should cause a ruckus over the size of her course with the administration at Houston. After all, if she can’t take an ethical stand, how can she dare to teach ethics?
  • "People need to get past thinking that grading must be done by the people who are teaching.” Sorry, Mr. Rajam, but what you should be saying is this: Teachers, including those who teach large classes and require teaching assistants and readers, need to get past thinking that they can get around grading.
  •  
    the outsourcing loop becomes a diatribe against rubrics...
  •  
    It's hard to see how either outsourced assessment or harvested assessment can be accomplished convincingly without rubrics. How else can the standards of the teacher be enacted by the grader? From there we are driven to consider how, in the absence of a rubric, the standards of the teacher can be enacted by the student. Is it "ethical" to use the Potter Stewart standard: "I'll know it when I see it"?
  •  
    Yes, who is the "priest" in the preceding rendering--one who shares principles of quality (rubrics), or one who divines a grade and proclaims who is a "real, viable, educated adult"?
Nils Peterson

News & Broadcast - World Bank Frees Up Development Data - 0 views

  • April 20, 2010—The World Bank Group said today it will offer free access to more than 2,000 financial, business, health, economic and human development statistics that had mostly been available only to paying subscribers.
  • Hans Rosling, Gapminder Foundation co-founder and vigorous advocate of open data at the World Bank, said, “It’s the right thing to do, because it will foster innovation. That is the most important thing.” He said he hoped the move would inspire more tools for visualizing data and set an example for other international institutions.
  • The new website at data.worldbank.org offers full access to data from 209 countries, with some of the data going back 50 years. Users will be able to download entire datasets for a particular country or indicator, quickly access raw data, click a button to comment on the data, email and share data with social media sites, says Neil Fantom, a senior statistician at the World Bank.
Nils Peterson

Tech's 29 Most Powerful Colleges - Page 1 - The Daily Beast - 0 views

  • which schools really represent a pipeline to the top jobs? To find out, The Daily Beast scoured the biographies of hundreds of key technology executives from the nation’s biggest companies and some of its hottest startups, too. Our goal was to identify which colleges, compared student-for-student (undergraduate enrollment data courtesy of the National Center for Education Statistics), have turned out the most undergraduates destined for high-tech greatness.
    • Nils Peterson
       
      Post-hoc analysis. Who holds the job, where did they graduate?
  • some schools excel at inculcating a crucial skill for techland: dealing with uncertainty, sizing up a situation quickly, and making the right decision without taking too long.
    • Nils Peterson
       
      Rubric dimensions.
  • I want someone who’s quick and decisive and a good leader, like a graduate of' and then they'll name certain schools.” Champion says part of that stems from the competitive environment of the top schools, which vet their admittees so heavily. "Is the competition the only reason they’re successful? No,” Champion says. “But is it the beginning of training in a process that helps them be successful? Yes.”
Theron DesRosier

How Group Dynamics May Be Killing Innovation - Knowledge@Wharton - 5 views

  • Christian Terwiesch and Karl Ulrich argue that group dynamics are the enemy of businesses trying to develop one-of-a-kind new products, unique ways to save money or distinctive marketing strategies.
  • Terwiesch, Ulrich and co-author Karan Girotra, a professor of technology and operations management at INSEAD, found that a hybrid process -- in which people are given time to brainstorm on their own before discussing ideas with their peers -- resulted in more and better quality ideas than a purely team-oriented process.
    • Theron DesRosier
       
      This happens naturally when collaboration is asynchronous.
    • Theron DesRosier
       
      They use the term "team oriented process" but what they mean, I think, is a synchronous, face to face, brainstorming session.
  • Although several existing experimental studies criticize the team brainstorming process due to the interference of group dynamics, the Wharton researchers believe their work stands out due to a focus on the quality, in addition to the number, of ideas generated by the different processes -- in particular, the quality of the best idea.
  • ...8 more annotations...
  • "The evaluation part is critical. No matter which process we used, whether it was the [team] or hybrid model, they all did significantly worse than we hoped [in the evaluation stage]," Terwiesch says. "It's no good generating a great idea if you don't recognize the idea as great. It's like me sitting here and saying I had the idea for Amazon. If I had the idea but didn't do anything about it, then it really doesn't matter that I had the idea."
  • He says an online system that creates a virtual "suggestion box" can accomplish the same goal as long as it is established to achieve a particular purpose.
  • Imposing structure doesn't replace or stifle the creativity of employees, Ulrich adds. In fact, the goal is to establish an idea generation process that helps to bring out the best in people. "We have found that, in the early phases of idea generation, providing very specific process guideposts for individuals [such as] 'Generate at least 10 ideas and submit them by Wednesday,' ensures that all members of a team contribute and that they devote sufficient creative energy to the problem."
  • The results of the experiment with the students showed that average quality of the ideas generated by the hybrid process were better than those that came from the team process by the equivalent of roughly 30 percentage points.
  • The hybrid process also resulted in about three times more ideas than the traditional method.
  • "We find huge differences in people's levels of creativity, and we just have to face it. We're not all good singers and we're not all good runners, so why should we expect that we all are good idea generators?
  • They found that ideas built around other ideas are not statistically better than any random suggestion.
  • "In innovation, variance is your friend. You want wacky stuff because you can afford to reject it if you don't like it. If you build on group norms, the group kills variance."
  •  
    Not as radical as it first seems, but pertains to much of our work and the work of others.
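The "variance is your friend" claim above is easy to illustrate numerically: when only the best idea survives, a high-variance idea generator beats a low-variance one with the same average quality. A minimal simulation (Python; the quality scale is invented for illustration, not taken from the Wharton study):

    import numpy as np

    rng = np.random.default_rng(1)
    sessions, ideas = 10_000, 30    # 30 ideas per session, averaged over many sessions

    low = rng.normal(50, 5, size=(sessions, ideas))     # same mean quality, low variance
    high = rng.normal(50, 15, size=(sessions, ideas))   # same mean quality, high variance

    print(low.max(axis=1).mean())    # best idea scores ~60
    print(high.max(axis=1).mean())   # best idea scores ~81 -- variance wins when only the best is kept

This is also why the evaluation stage matters so much: the high-variance advantage disappears if the group cannot recognize its best idea.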
Nils Peterson

America's Newest Profession - WSJ.com - 0 views

  • The best studies we can find say we are a nation of over 20 million bloggers, with 1.7 million profiting from the work, and 452,000 of those using blogging as their primary source of income.
    • Nils Peterson
       
      What is the 21st century CITR that this group would help us invent?
  • It is hard to think of another job category that has grown so quickly and become such a force in society without having any tests, degrees, or regulation of virtually any kind. Courses on blogging are now cropping up, and we can't be far away from the Columbia School of Bloggerism.
Gary Brown

Scholars Assess Their Progress on Improving Student Learning - Research - The Chronicle... - 0 views

  • International Society for the Scholarship of Teaching and Learning, which drew 650 people. The scholars who gathered here were cautiously hopeful about colleges' commitment to the study of student learning, even as the Carnegie Foundation winds down its own project. (Mr. Shulman stepped down as president last year, and the foundation's scholarship-of-teaching-and-learning program formally came to an end last week.) "It's still a fragile thing," said Pat Hutchings, the Carnegie Foundation's vice president, in an interview here. "But I think there's a huge amount of momentum." She cited recent growth in faculty teaching centers,
  • Mary Taylor Huber, director of the foundation's Integrative Learning Project, said that pressure from accrediting organizations, policy makers, and the public has encouraged colleges to pour new resources into this work.
  • The scholars here believe that it is much more useful to try to measure and improve student learning at the level of individual courses. Institutionwide tests like the Collegiate Learning Assessment have limited utility at best, they said.
  • ...6 more annotations...
  • Mr. Bass and Toru Iiyoshi, a senior strategist at the Massachusetts Institute of Technology's office of educational innovation and technology, pointed to an emerging crop of online multimedia projects where college instructors can share findings about their teaching. Those sites include Merlot and the Digital Storytelling Multimedia Archive.
  • "We need to create 'middle spaces' for the scholarship of teaching and learning," said Randall Bass, assistant provost for teaching and learning initiatives at Georgetown University, during a conference session on Friday.
  • "If you use a more generic instrument, you can give the accreditors all the data in the world, but that's not really helpful to faculty at the department level," said the society's president, Jennifer Meta Robinson, in an interview. (Ms. Robinson is also a senior lecturer in communication and culture at Indiana University at Bloomington.)
  • It is vital, Ms. Peseta said, for scholars' articles about teaching and learning to be engaging and human. But at the same time, she urged scholars not to dumb down their statistical analyses or the theoretical foundations of their studies. She even put in a rare good word for jargon.
  • No one had a ready answer. Ms. Huber, of the Carnegie Foundation, noted that a vast number of intervening variables make it difficult to assess the effectiveness of any educational project.
  • "Well, I guess we have a couple of thousand years' worth of evidence that people don't listen to each other, and that we don't build knowledge," Mr. Bass quipped. "So we're building on that momentum."
  •  
    Note our friends Randy Bass (AAEEBL) and Mary Huber are prominent.
Gary Brown

U.S. GAO - Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effectiv... - 1 views

  • In the absence of detailed guidance, the panel defined sizable and sustained effects through case discussion
  • The Top Tier initiative's choice of broad topics (such as early childhood interventions), emphasis on long-term effects, and use of narrow evidence criteria combine to provide limited information on what is effective in achieving specific outcomes.
  • Several rigorous alternatives to randomized experiments are considered appropriate for other situations: quasi-experimental comparison group studies, statistical analyses of observational data, and--in some circumstances--in-depth case studies. The credibility of their estimates of program effects relies on how well the studies' designs rule out competing causal explanations.
  •  
    a critical resource
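Of the rigorous alternatives the GAO panel lists, the simplest to sketch is a quasi-experimental comparison-group analysis using difference-in-differences, which nets out fixed pre-existing gaps between the groups. The numbers below are invented, not GAO data, and the estimate is only credible if the groups would otherwise have trended in parallel:

    # Difference-in-differences: change in the treated group minus change in the comparison group
    treated_pre, treated_post = 62.0, 71.0          # invented outcome means, before and after
    comparison_pre, comparison_post = 60.0, 64.0

    effect = (treated_post - treated_pre) - (comparison_post - comparison_pre)
    print(effect)   # 5.0 -> estimated program effect under the parallel-trends assumption

As the report notes, the credibility of such estimates rests on how well the design rules out competing causal explanations, not on the arithmetic itself.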