
Home / CTLT and Friends / Group items matching "Outcomes" in title, tags, annotations or url

Nils Peterson

From Social Media to Social Strategy - Umair Haque - Harvard Business Review - 0 views

  • Choreography. Most organizations seek "high performance." Today, performance is no longer enough: excelling in yesterday's terms is excelling at the wrong things. This is downright self-destructive (just ask Wall Street). Today's radical innovators aren't merely mute performers, precisely executing the empty steps of a meaningless dance: they're more like choreographers. Choreographers define the steps of a better dance — they lay down better rules for interactions between supply and demand to take place.
    • Nils Peterson
       
      It strikes me that this connects to OAI's A of A work. We recognize that learning outcomes can't improve forever, but through better assessment we can hope the programs become ever more attentive and responsive to changing situations.
Joshua Yeidel

The Answer Sheet - A principal on standardized vs. teacher-written tests - 0 views

  •  
    High school principal George Wood eloquently contrasts standardized NCLB-style testing with his school's performance assessments.
Joshua Yeidel

Performance Assessment | The Alternative to High Stakes Testing - 0 views

  •  
    " The New York Performance Standards Consortium represents 28 schools across New York State. Formed in 1997, the Consortium opposes high stakes tests arguing that "one size does not fit all." Despite skepticism that an alternative to high stakes tests could work, the New York Performance Standards Consortium has done just that...developed an assessment system that leads to quality teaching, that enhances rather than compromises our students' education. Consortium school graduates go on to college and are successful."
Joshua Yeidel

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 0 views

  •  
    High School Principal George Wood eloquently contrasts standardized NCLB-style testing and his school's term-end performance testing.
Corinna Lo

Blackboard Outcomes Assessment Webcast - Moving Beyond Accreditation: Using Institutional Assessment to Ensure Student Success - 0 views

  •  
    The first 12 minutes of the webcast are worth watching. He opened with a story of the investigation of a cholera outbreak in Victorian-era London, and connected it to student success. He then summarized the key methods of measurement and some lessons learned: an "interdisciplinary" approach led to unconventional yet innovative methods of investigation; the researchers relied on multiple forms of measurement to reach their conclusion; and the visualization of their data was important to proving their case to others.
Joshua Yeidel

New Web Site Compares Student Outcomes at Online Colleges - Technology - The Chronicle of Higher Education - 0 views

  •  
    "College Choices for Adults [website] provides adults with specific information about what students are supposed to learn in the colleges' mostly career-oriented programs and measurements of whether they did." That's the billing in the Chronicle, but when I went to the site, I found mostly self-reports of engagement and satisfaction.
Gary Brown

Education Sector: Research and Reports: Ready to Assemble: Grading State Higher Education Accountability Systems - 0 views

  •  
    I note Washington gets a check mark for learning outcomes.
  •  
    States need strong higher education systems, now more than ever. In the tumultuous, highly competitive 21st century economy, citizens and workers need knowledge, skills, and credentials in order to prosper. Yet many colleges and universities are falling short. To give all students the best possible postsecondary education, states must create smart, effective higher education accountability systems, modeled from the best practices of their peers, and set bold, concrete goals for achievement.
Gary Brown

Will a Culture of Entitlement Bankrupt Higher Education? - Commentary - The Chronicle of Higher Education - 2 views

  • The economy has suffered changes so deep and fundamental that institutions cannot just hunker down to weather the storm. The time has come for creative reconstruction. We must summon the courage and will to re-engineer education in ways founded on shared responsibility, demanding hard work and a willingness on the part of everyone involved to let go of "the way it's always been."
  • We need to break down expectations based on entitlement and focus on educational productivity and outcomes. Institutions should review redundancies, rethink staffing models, and streamline business practices. Productivity measures should be applied in all areas. In the same way that secondary schools are being challenged to consider longer school days and an extended academic year, we in higher education need to revisit basic assumptions about how we deliver higher education to students. We should not be tied to any one model or structure.
  • For example, we should re-evaluate the notion that large classes are inherently pedagogically unsound. What both students and faculty members tend to prefer—small classes—is not the only educationally effective approach. Although no one would advocate for large classes in every discipline or instance, we should review what we do in light of new financial contingencies, while keeping an eye on what students learn.
  • the growing demand for a better-prepared work force, we need to revisit undergraduate education as a whole. We should re-examine the teacher/scholar model, for instance. Is it appropriate for every institution? Does that model really produce what it is supposed to: thinkers and makers, learned and professionally skilled graduates?
  • We should separate legitimate aspirations and a drive toward excellence from the costly and often fruitless pursuit of higher status—which may feed egos but is beyond the reasonable prospects of many institutions.
Nils Peterson

Views: The Limitations of Portfolios - Inside Higher Ed - 1 views

  • Gathering valid data about student performance levels and performance improvement requires making comparisons relative to fixed benchmarks and that can only be done when the assessments are standardized. Consequently, we urge the higher education community to embrace authentic, standardized performance-assessment approaches so as to gather valid data that can be used to improve teaching and learning as well as meet its obligations to external audiences to account for its actions and outcomes regarding student learning.
    • Nils Peterson
       
      Diigoed because this is the counter-argument to our work.
Gary Brown

The Ticker - Most Colleges Try to Assess Student Learning, Survey Finds - The Chronicle of Higher Education - 0 views

  • October 26, 2009: A large majority of American colleges make at least some formal effort to assess their students' learning, but most have few or no staff members dedicated to doing so. Those are among the findings of a survey report released Monday by the National Institute for Learning Outcomes Assessment, a year-old project based at Indiana University and the University of Illinois. Of more than 1,500 provosts' offices that responded to the survey, nearly two-thirds said their institutions had two or fewer employees assigned to student assessment. Among large research universities, almost 80 percent cited a lack of faculty engagement as the most serious barrier to student-assessment projects.
  •  
    no news here, but it does suggest the commitment our unit represents.
Gary Brown

News: Assessment vs. Action - Inside Higher Ed - 0 views

  • The assessment movement has firmly taken hold in American higher education, if you judge it by how many colleges are engaged in measuring what undergraduates learn. But if you judge by how many of them use that information to do something, the picture is different.
  • The most common approach used for institutional assessment is a nationally normed survey of students.
  • (But the survey found more attention to learning outcomes at the program level, especially by community colleges.)
  • Much smaller percentages of colleges report that assessment is based on external evaluations of student work (9 percent), student portfolios (8 percent) and employer interviews (8 percent).
  • “Some faculty and staff at prestigious, highly selective campuses wonder why documenting something already understood to be superior is warranted. They have little to gain and perhaps a lot to lose,” the report says. “On the other hand, many colleagues at lower-status campuses often feel pressed to demonstrate their worth; some worry that they may not fare well in comparison with their better-resourced, more selective counterparts. Here too, anxiety may morph into a perceived threat if the results disappoint.”
  • The provosts in the survey said what they most needed to more effectively use assessment was more faculty involvement, with 66 percent citing this need. The percentage was even greater (80 percent) at doctoral institutions. George Kuh, director of the institute, said that he viewed the results as "cause for cautious optimism," and that the reality of so much assessment activity makes it possible to work on making better use of it.
  •  
    From the National Institute for LOA: "The provosts in the survey said what they most needed to more effectively use assessment was more faculty involvement, with 66 percent citing this need. The percentage was even greater (80 percent) at doctoral institutions."
  •  
    another report on survey with interesting implications
Gary Brown

An Expert Surveys the Assessment Landscape - Student Affairs - The Chronicle of Higher Education - 1 views

shared by Gary Brown on 29 Oct 09
    • Gary Brown
       
      Illustration of a vision of assessment that separates assessment from teaching and learning.
  • If assessment is going to be required by accrediting bodies and top administrators, then we need administrative support and oversight of assessment on campus, rather than once again offloading more work onto faculty members squeezed by teaching & research inflation.
  • Outcomes assessment does not have to be in the form of standardized tests, nor does including assessment in faculty review have to translate into percentages achieving a particular score on such a test. What it does mean is that when the annual review comes along, one should be prepared to answer the question, "How do you know that what you're doing results in student learning?" We've all had the experience of realizing at times that students took in something very different from what we intended (if we were paying attention at all). So it's reasonable to be asked about how you do look at that question and how you decide when your current practice is successful or when it needs to be modified. That's simply being a reflective practitioner in the classroom which is the bare minimum students should expect from us. And that's all assessment is - answering that question, reflecting on what you find, and taking next steps to keep doing what works well and find better solutions for the things that aren't working well.
  • We need to really show HOW we use the results of assessment in the revamping of our curriculum, with real case studies. Each department should insist and be ready to demonstrate real case studies of this type of use of Assessment.
  • Socrates said "A life that is not examined is not worth living". Wonderful as this may be as a metaphor we should add to it - "and once examined - do something to improve it".
Theron DesRosier

An Expert Surveys the Assessment Landscape - The Chronicle of Higher Education - 2 views

  • What we want is for assessment to become a public, shared responsibility, so there should be departmental leadership.
  •  
    "What we want is for assessment to become a public, shared responsibility, so there should be departmental leadership." George Kuh director of the National Institute for Learning Outcomes Assessment.
  •  
    Kuh also says, "So we're going to spend some time looking at the impact of the Voluntary System of Accountability. It's one thing for schools to sign up, it's another to post the information and to show that they're actually doing something with it. It's not about posting a score on a Web site-it's about doing something with the data." He doesn't take the next step and ask if it is even possible for schools to actually do anything with the data collected from the CLA or ask who has access to the criteria: Students? Faculty? Anyone?
Gary Brown

News: Assessing the Assessments - Inside Higher Ed - 2 views

  • The validity of a measure is based on evidence regarding the inferences and assumptions that are intended to be made and the uses to which the measure will be put. Showing that the three tests in question are comparable does not support Shulenburger's assertion regarding the value-added measure as a valid indicator of institutional effectiveness. The claim that public university groups have previously judged the value-added measure as appropriate does not tell us anything about the evidence upon which this judgment was based nor the conditions under which the judgment was reached. As someone familiar with the process, I would assert that there was no compelling evidence presented that these instruments and the value-added measure were validated for making this assertion (no such evidence was available at the time), which is the intended use in the VSA.
  • (however much the sellers of these tests tell you that those samples are "representative"), they provide an easy way out for academic administrators who want to avoid the time-and-effort consuming but incredibly valuable task of developing detailed major program learning outcome statements (even the specialized accrediting bodies don't get down to the level of discrete, operational statements that guide faculty toward appropriate assessment design)
  • If somebody really cared about "value added," they could look at each student's first essay in this course, and compare it with that same student's last essay in this course. This person could then evaluate each individual student's increased mastery of the subject-matter in the course (there's a lot) and also the increased writing skill, if any.
  • These skills cannot be separated out from student success in learning sophisticated subject-matter, because understanding anthropology, or history of science, or organic chemistry, or Japanese painting, is not a matter of absorbing individual facts, but learning facts and ways of thinking about them in a seamless, synthetic way. No assessment scheme that neglects these obvious facts about higher education is going to do anybody any good, and we'll be wasting valuable intellectual and financial resources if we try to design one.
  •  
    ongoing discussion of these tools. Note Longanecker's comment and ask me why.
Gary Brown

At Colleges, Assessment Satisfies Only Accreditors - Letters to the Editor - The Chronicle of Higher Education - 2 views

  • Some of that is due to the influence of the traditional academic freedom that faculty members have enjoyed. Some of it is ego. And some of it is lack of understanding of how it can work. There is also a huge disconnect between satisfying outside parties, like accreditors and the government, and using assessment as a quality-improvement system.
  • We are driven by regional accreditation and program-level accreditation, not by quality improvement. At our institution, we talk about assessment a lot, and do just enough to satisfy the requirements of our outside reviewers.
  • Standardized direct measures, like the Major Field Test for M.B.A. graduates?
  • The problem with the test is that it does not directly align with our program's learning outcomes and it does not yield useful information for closing the loop. So why do we use it? Because it is accepted by accreditors as a direct measure and it is less expensive and time-consuming than more useful tools.
  • Without exception, the most useful information for improving the program and student learning comes from the anecdotal and indirect information.
  • We don't have the time and the resources to do what we really want to do to continuously improve the quality of our programs and instruction. We don't have a culture of continuous improvement. We don't make changes on a regular basis, because we are trapped by the catalog publishing cycle, accreditation visits, and the entrenched misunderstanding of the purposes of assessment.
  • The institutions that use it are ones that have adequate resources to do so. The time necessary for training, whole-system involvement, and developing the programs for improvement is daunting. And it is only being used by one regional accrediting body, as far as I know.
  • Until higher education as a whole is willing to look at changing its approach to assessment, I don't think it will happen.
  •  
    The challenge, and another piece of evidence that the nuances of assessment as it relates to teaching and learning remain elusive.
Gary Brown

U.S. GAO - Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions - 1 views

  • In the absence of detailed guidance, the panel defined sizable and sustained effects through case discussion
  • The Top Tier initiative's choice of broad topics (such as early childhood interventions), emphasis on long-term effects, and use of narrow evidence criteria combine to provide limited information on what is effective in achieving specific outcomes.
  • Several rigorous alternatives to randomized experiments are considered appropriate for other situations: quasi-experimental comparison group studies, statistical analyses of observational data, and--in some circumstances--in-depth case studies. The credibility of their estimates of program effects relies on how well the studies' designs rule out competing causal explanations.
  •  
    a critical resource