Home/ Groups/ CTLT and Friends
Nils Peterson

BBC News - McDonald's to launch own degree - 2 views

  • The two-year foundation degree in managing business operations is a demonstration of how seriously the company takes the training of its staff
    • Nils Peterson
       
      Tying the degree to a stakeholder's needs. I wonder what McDonald's has as learning outcomes.
Gary Brown

Measuring Student Learning: Many Tools - Measuring Stick - The Chronicle of Higher Educ... - 2 views

  • The issue that needs to be addressed and spectacularly has been avoided is whether controlled studies (one group does the articulation of and then measurement of outcomes, and a control group does what we have been doing before this mania took hold) can demonstrate or falsify the claim that outcomes assessment results in better-educated students. So far as I can tell, we instead gather data on whether we have in fact been doing outcomes assessment. Not the issue, people. jwp
  •  
    The challenge--not the control study this person calls for, but the perception that outcomes assessment produces outcomes....
Gary Brown

The Ticker - Education Dept. Criticizes Accreditor Over Credit-Hour Standards - The Chr... - 2 views

  • The Southern Association of Colleges and Schools cannot consistently ensure the quality of academic programs it reviews without clearly defining what constitutes a credit hour, according to a report issued on Tuesday by the U.S. Department of Education's inspector general. The accrediting organization, which assesses colleges in 11 states, responded that the variety of experiential, online, and distance courses that institutions now offer makes it impossible to define a single, common standard for credit hours. "The traditionally accepted definitions of semester credit hours and quarter credit hours based almost exclusively on seat time can no longer be applied to half of the credits now being awarded by our higher-education institutions," the association wrote in answer to the report.
  •  
    Outcomes are mentioned as an alternative in the comments, but in a disconcerting way. So is the alternative approach based on study time....
Joshua Yeidel

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 2 views

  •  
    "Colleges and universities have transformed themselves from participants in an audit culture to accomplices in an accountability regime."
  •  
    A philosophical critique of a rapidly approaching "metric future", with "commensuration" (assigning meaning to measurements) run amok. While the application of student learning outcomes given in the article is not ours, the critique of continuous quality improvement challenges some of our assumptions.
Gary Brown

Home | AALHE - 2 views

shared by Gary Brown on 22 Oct 10
  • The Association for Assessment of Learning in Higher Education, Inc. (AALHE) is an organization of practitioners interested in using effective assessment practice to document and improve student learning.
  • it is designed to be a resource for all who are interested in the improvement of learning,
  •  
    Our membership begins November 1
Matthew Tedder

5 Impressive Real-Life Google Wave Use Cases - 2 views

  •  
    Five use cases for Wave.
Gary Brown

Accountability Issues Persist Under New Administration - Government - The Chronicle of ... - 2 views

  •  
    The daily dose of calls for accountability--useful for those who are inclined to see OAI as the agent of their distress.
Gary Brown

Disciplines Follow Their Own Paths to Quality - Faculty - The Chronicle of Higher Educa... - 2 views

  • But when it comes to the fundamentals of measuring and improving student learning, engineering professors naturally have more to talk about with their counterparts at, say, Georgia Tech than with the humanities professors at Villanova
    • Gary Brown
       
      Perhaps this is too bad....
  • But there is no nationally normed way to measure the particular kind of critical thinking that students of classics acquire
  • her colleagues have created discipline-specific critical-reasoning tests for classics and political science
  • ...5 more annotations...
  • Political science cultivates skills that are substantially different from those in classics, and in each case those skills can't be measured with a general-education test.
  • he wants to use tests of reasoning that are appropriate for each discipline
  • I believe Richard Paul has spent a lifetime articulating the characteristics of discipline based critical thinking. But anyway, I think it is interesting that an attempt is being made to develop (perhaps) a "national standard" for critical thinking in classics. In order to assess anything effectively we need a standard. Without a standard there are no criteria and therefore no basis from which to assess. But standards do not necessarily have to be established at the national level. This raises the issue of scale. What is the appropriate scale from which to measure the quality and effectiveness of an educational experience? Any valid approach to quality assurance has to be multi-scaled and requires multiple measures over time. But to be honest the issues of standards and scale are really just the tip of the outcomes iceberg.
    • Gary Brown
       
      Missing the notion that the variance is in the activity more than the criteria.  We hear little of embedding nationally normed and weighted assignments and then assessing the implementation and facilitation variables.... mirror, not lens.
  • the UW Study of Undergraduate Learning (UW SOUL). Results from the UW SOUL show that learning in college is disciplinary; therefore, real assessment of learning must occur (with central support and resources) in the academic departments. Generic approaches to assessing thinking, writing, research, quantitative reasoning, and other areas of learning may be measuring something, but they cannot measure learning in college.
  • It turns out there is a six-week, or 210+ hour, serious reading exposure to two or more domains outside one's own that "turns on" cross-domain mapping as a robust capability. Some people just happen to have accumulated this minimum level of basics that allows robust metaphor-based mapping, usually through unseen and unsensed happenstance involvements (rooming with an engineer, being the son of a dad changing domains/careers, etc.).
Gary Brown

Comments on the report - GEVC Report Comments - University College - Washington State U... - 2 views

  • My primary concern rests with the heavy emphasis on "outcomes based" learning. First, I find it difficult to imagine teaching to outcomes as separate from teaching my content -- I do not consider "content" and "outcomes" as discrete entities; rather, they overlap. This overlap may partly be the reason for the thin and somewhat unconvincing literature on "outcomes based learning." I would therefore like to see in this process a thorough and detailed analysis of the literature on "outcomes" vs content-based learning, followed by thoughtful discussion as to whether the need to focus our energies in a different direction is in fact warranted (and for what reasons). Also, perhaps that same literature can provide guidance on how to create an outcomes driven learning environment while maintaining the spirit of the academic (as opposed to technocratically-oriented) enterprise.
  • Outcomes are simply more refined ways of talking about fundamental purposes of education (on the need for positing our purposes in educating undergraduates, see Derek Bok, Our Underachieving Colleges, ch. 3). Without stating our educational purposes clearly, we can't know whether we are achieving them.
  • I've clicked just about every link on this website. I still have no idea what the empirical basis is for recommending a "learning goals" based approach over other approaches. The references in the GEVC report, which is where I expected to find the relevant studies, were instead all to other reports. So far as I could tell, there were no direct references to peer-reviewed research.
  • ...1 more annotation...
  • I do not want to read the "three volumes of Pascarella and Terenzini." Instead, I would appreciate a concise, but thorough, summary of the empirical findings. This would include the sample of institutions studied and how this sample was chosen, the way that student outcomes were measured, and the results. I now understand that many people believe that a "learning goals" approach is desirable, but I still don't understand the empirical basis for their beliefs.
Joshua Yeidel

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 2 views

  •  
    So here it is: by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students' learning. And given the imprecision of the value-added scores, just by chance some teachers will be categorized as ineffective or minimally effective two years in a row. The system is rigged to label teachers as ineffective or minimally effective as a precursor to firing them.
  •  
    How assessment of value-added actually works in one setting: the Washington, D.C. public schools. This article actually works the numbers to show that the system is set up to put teachers in the firing zone. Note the tyranny of numerical ratings (some of them subjective) converted into meanings like "minimally effective".
Nils Peterson

Tom Vander Ark: How Social Networking Will Transform Learning - 2 views

  • Key assumption: teacher effectiveness is the key variable; more good teachers will improve student achievement
  • I'm betting on social learning platforms as a lever for improvement at scale in education. Instead of a classroom as the primary organizing principle, social networks will become the primary building block of learning communities (both formal and informal). Smart recommendation engines will queue personalized content. Tutoring, training, and collaboration tools will be applications that run on social networks. New schools will be formed around these capabilities. Teachers in existing schools will adopt free tools yielding viral, bureaucracy-cutting productivity improvement.
    • Nils Peterson
       
      I just Diigoed UrgentEvoke.com (a game) and Jumo.com (a new social site), each targeted at working on big, real-world problems.
  •  
    Vander Ark was the first Executive Director for the Bill & Melinda Gates Foundation. From his post: "There are plenty of theories about how to improve education. Most focus on what appear to be big levers--a point of entry and system intervention that appears to provide some improvement leverage. These theories usually involve 'if-then' statements: 'if we improve this, then other good stuff will happen.'" "One problem not addressed by these theories is the lack of innovation diffusion in education--a good idea won't cross the street. Weak improvement incentives and strong bureaucracy have created a lousy marketplace for products and ideas." "Key assumption: teacher effectiveness is the key variable; more good teachers will improve student achievement" "I'm betting on social learning platforms as a lever for improvement at scale in education. Instead of a classroom as the primary organizing principle, social networks will become the primary building block of learning communities (both formal and informal). Smart recommendation engines will queue personalized content. Tutoring, training, and collaboration tools will be applications that run on social networks. New schools will be formed around these capabilities. Teachers in existing schools will adopt free tools yielding viral, bureaucracy-cutting productivity improvement."
Joshua Yeidel

Susan Kistler on Tips for First Time Conference Attendees | AEA365 - 2 views

  •  
    "These are great ideas shared by AEA Conference veterans for making the most of the AEA conference:"
  •  
    ... or any conference.
Gary Brown

Would You Protect Your Computer's Feelings? Clifford Nass Says Yes. - ProfHacker - The ... - 2 views

  • why peer review processes often avoid, rather than facilitate, sound judgment
  • humans do not differentiate between computers and people in their social interactions.
  • no matter what "everyone knows," people act as if the computer secretly cares
  • ...4 more annotations...
  • users given completely random praise by a computer program liked it more than the same program without praise, even though they knew in advance the praise was meaningless.
  • Nass demonstrates, however, that people internalize praise and criticism differently—while we welcome the former, we really dwell on and obsess over the latter. In the criticism sandwich, then, "the criticism blasts the first list of positive achievements out of listeners' memory. They then think hard about the criticism (which will make them remember it better) and are on the alert to think even harder about what happens next. What do they then get? Positive remarks that are too general to be remembered"
  • And because we focus so much on the negative, having a similar number of positive and negative comments "feels negative overall"
  • The best strategy, he suggests, is "to briefly present a few negative remarks and then provide a long list of positive remarks...You should also provide as much detail as possible within the positive comments, even more than feels natural, because positive feedback is less memorable" (33).
  •  
    The implications for feedback issues are pretty clear.
Gary Brown

The Potential Impact of Common Core Standards - 2 views

  • According to the Common Core State Standards Initiative (CCSSI), the goal “is to ensure that academic expectations for students are of high quality and consistent across all states and territories.” To educators across the nation, this means they now have to sync up all curriculum in math and language arts for the benefit of the students.
  • They are evidence based, aligned with college and work expectations, include rigorous content and skills, and are informed by other top performing countries.”
  • “Educational standards help teachers ensure their students have the skills and knowledge they need to be successful by providing clear goals for student learning.” They are really just guidelines for students, making sure they are on the right track with their learning.
  • ...2 more annotations...
  • When asked the simple question of what school standards are, most students are unable to answer the question. When the concept is explained, however, they really do not know if having common standards would make a difference or not. Codie Allen, a senior in the Vail School District, says, "I think that things will pretty much stay stagnant; people aren't really going to change because of standards."
  • Council of Chief State School Officers. Common Core State Standards Initiative, 2010.
Gary Brown

Critical friend - Wikipedia, the free encyclopedia - 2 views

  • The Critical Friend is a powerful idea, perhaps because it contains an inherent tension. Friends bring a high degree of unconditional positive regard. Critics are, at first sight at least, conditional, negative and intolerant of failure. Perhaps the critical friend comes closest to what might be regarded as 'true friendship' - a successful marrying of unconditional support and unconditional critique.
  •  
    I've been wrestling with the tension again between supporting programs to help them improve, but then rating them for the accountability charge we hold.  So I've been looking into the concept and practice of the "Critical Friend."  Some tensions are inherent. This quote helps clarify.
Theron DesRosier

CDC Evaluation Working Group: Framework - 2 views

  • Framework for Program Evaluation
  • Purposes. The framework was developed to:
    - Summarize and organize the essential elements of program evaluation
    - Provide a common frame of reference for conducting evaluations
    - Clarify the steps in program evaluation
    - Review standards for effective program evaluation
    - Address misconceptions about the purposes and methods of program evaluation
  • Assigning value and making judgments regarding a program on the basis of evidence requires answering the following questions:
    - What will be evaluated? (i.e., what is "the program" and in what context does it exist?)
    - What aspects of the program will be considered when judging program performance?
    - What standards (i.e., type or level of performance) must be reached for the program to be considered successful?
    - What evidence will be used to indicate how the program has performed?
    - What conclusions regarding program performance are justified by comparing the available evidence to the selected standards?
    - How will the lessons learned from the inquiry be used to improve public health effectiveness?
  • ...3 more annotations...
  • These questions should be addressed at the beginning of a program and revisited throughout its implementation. The framework provides a systematic approach for answering these questions.
  • Steps in Evaluation Practice:
    - Engage stakeholders: those involved, those affected, primary intended users
    - Describe the program: need, expected effects, activities, resources, stage, context, logic model
    - Focus the evaluation design: purpose, users, uses, questions, methods, agreements
    - Gather credible evidence: indicators, sources, quality, quantity, logistics
    - Justify conclusions: standards, analysis/synthesis, interpretation, judgment, recommendations
    - Ensure use and share lessons learned: design, preparation, feedback, follow-up, dissemination
    Standards for "Effective" Evaluation:
    - Utility: serve the information needs of intended users
    - Feasibility: be realistic, prudent, diplomatic, and frugal
    - Propriety: behave legally, ethically, and with due regard for the welfare of those involved and those affected
    - Accuracy: reveal and convey technically accurate information
  • The challenge is to devise an optimal — as opposed to an ideal — strategy.
  •  
    Framework for Program Evaluation by the CDC This is a good resource for program evaluation. Click through "Steps and Standards" for information on collecting credible evidence and engaging stakeholders.
Gary Brown

An Oasis of Niceness - Tweed - The Chronicle of Higher Education - 2 views

  • Not exactly, but faculty members and students at Rutgers University are embarking this week on a two-year effort to "cultivate small acts of courtesy and compassion" on the New Brunswick campus.
  • being civil is more than just demonstrating good manners.
  • "Living together more civilly means living together more peacefully, more kindly, and more justly," she says. Rutgers, Ms. Hull hopes, will become a "warmer, closer community" as a result of Project Civility
  •  
    an item of urgency, in my view.
Matthew Tedder

Accountable Talk: (Un)intended Consequences - 2 views

  •  
    Nutty method of teacher evaluation
Joshua Yeidel

Students Know Good Teaching When They Get It, Survey Finds - NYTimes.com - 2 views

  •  
    ... as measured by student evals and "value-added modeling".  Note some of the student eval items, though... e.g., students agree or disagree with "In this class, we learn to correct our mistakes."
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?