
Nils Peterson

There is No College Cost Crisis - NYTimes.com - 3 views

  • “[A] modern university must provide students with an up-to-date education that familiarizes students with the techniques and associated machinery that are used in the workplace the students must enter.”
    • Nils Peterson
       
      The author means information technologies, but one might also talk about certain habits of mind associated with trends in the workplace the students must enter.
  • The causes of the increase in college costs (an increase that has not, they contend, put college “out of reach”) are external; colleges are responding, as they must, to changes they cannot ignore and still provide a quality product.
    • Nils Peterson
       
      Makes me want to re-visit Christensen's Innovator's Dilemma, which describes a business focusing on serving its best customers and losing focus on emerging products and markets that operate at lower price points.
  •  
    Stanley Fish reviews "Why Does College Cost So Much?" by Archibald and Feldman, two economists who conclude "there is no college cost crisis".
Nils Peterson

Nonacademic Members Push Changes in Anthropology Group - Faculty - The Chronicle of Hig... - 1 views

  • Cathleen Crain, an anthropologist who runs a consulting firm near Washington: "There is a growing vision of a unified anthropology, where academics informs practice and practice informs academics."
    • Nils Peterson
       
      Anthropology is having a conversation about stakeholders, and this is impacting the national anthro organization. I wonder if it's producing metrics that might inform student learning outcomes work.
Gary Brown

Researchers Criticize Reliability of National Survey of Student Engagement - Students -... - 3 views

  • "If each of the five benchmarks does not measure a distinct dimension of engagement and includes substantial error among its items, it is difficult to inform intervention strategies to improve undergraduates' educational experiences,"
  • Only one benchmark, enriching educational experiences, had a significant effect on the seniors' cumulative GPA.
  • Other critics have asserted that the survey's mountains of data remain largely ignored.
  •  
    If the results are largely ignored, the psychometric integrity matters little.  There is no indication it is ignored because it lacks psychometric integrity.
Gary Brown

Home - Journal of Assessment and Accountability Systems in Educator Preparation - 1 views

  •  
    a new journal to note
Gary Brown

Views: Asking Too Much (and Too Little) of Accreditors - Inside Higher Ed - 1 views

  • Senators want to know why accreditors haven’t protected the public interest.
  • Congress shouldn’t blame accreditors: it should blame itself. The existing accreditation system has neither ensured quality nor ferreted out fraud. Why? Because Congress didn’t want it to. If Congress truly wants to protect the public interest, it needs to create a system that ensures real accountability.
  • But turning accreditors into gatekeepers changed the picture. In effect, accreditors now held a gun to the heads of colleges and universities since federal financial aid wouldn’t flow unless the institution received “accredited” status.
  • ...10 more annotations...
  • Congress listened to higher education lobbyists and designated accreditors -- teams made up largely of administrators and faculty -- to be “reliable authorities” on educational quality. Intending to protect institutional autonomy, Congress appropriated the existing voluntary system by which institutions differentiated themselves.
  • A gatekeeping system using peer review is like a penal system that uses inmates to evaluate eligibility for parole. The conflicts of interest are everywhere -- and, surprise, virtually everyone is eligible!
  • accreditation is “premised upon collegiality and assistance; rather than requirements that institutions meet certain standards (with public announcements when they don’t).”
  • Meanwhile, there is ample evidence that many accredited colleges are adding little educational value. The 2006 National Assessment of Adult Literacy revealed that nearly a third of college graduates were unable to compare two newspaper editorials or compute the cost of office items, prompting the Spellings Commission and others to raise concerns about accreditors’ attention to productivity and quality.
  • But Congress wouldn’t let them. Rather than welcoming accreditors’ efforts to enhance their public oversight role, Congress told accreditors to back off and let nonprofit colleges and universities set their own standards for educational quality.
  • Accreditation is nothing more than an outdated industrial-era monopoly whose regulations prevent colleges from cultivating the skills, flexibility, and innovation that they need to ensure quality and accountability.
  • there is a much cheaper and better way: a self-certifying regimen of financial accountability, coupled with transparency about graduation rates and student success. (See some alternatives here and here.)
  • Such a system would prioritize student and parent assessment over the judgment of institutional peers or the educational bureaucracy. And it would protect students, parents, and taxpayers from fraud or mismanagement by permitting immediate complaints and investigations, with a notarized certification from the institution to serve as Exhibit A
  • The only way to protect the public interest is to end the current system of peer review patronage, and demand that colleges and universities put their reputation -- and their performance -- on the line.
  • Anne D. Neal is president of the American Council of Trustees and Alumni. The views stated herein do not represent the views of the National Advisory Committee on Institutional Quality and Integrity, of which she is a member.
  •  
    The ascending view of accreditation.
Nils Peterson

Community Colleges Must Focus on Quality of Learning, Report Says - Students - The Chro... - 0 views

  • Over all, 67 percent of community-college students said their coursework often involved analyzing the basic elements of an idea, experience, or theory; 59 percent said they frequently synthesized ideas, information, and experiences in new ways. Other averages were lower: 56 percent of students, for example, reported being regularly asked to examine the strengths or weaknesses of their own views on a topic. And just 52 percent of students said they often had to make judgments about the value or soundness of information as part of their academic work.
    • Nils Peterson
       
      One wonders who the stakeholder is that commissioned this assessment and thinks these measures are important -- it looks like the CITR might be underlying their thinking.
Gary Brown

Community Colleges Must Focus on Quality of Learning, Report Says - Students - The Chro... - 0 views

  • Increasing college completion is meaningless unless certificates and degrees represent real learning, which community colleges must work harder to ensure, says a report released on Thursday by the Center for Community College Student Engagement.
  • This year's report centers on "deep learning," or "broadly applicable thinking, reasoning, and judgment skills—abilities that allow individuals to apply information, develop a coherent world view, and interact in more meaningful ways."
  • 67 percent of community-college students said their coursework often involved analyzing the basic elements of an idea, experience, or theory; 59 percent said they frequently synthesized ideas, information, and experiences in new ways. Other averages were lower: 56 percent of students, for example, reported being regularly asked to examine the strengths or weaknesses of their own views on a topic. And just 52 percent of students said they often had to make judgments about the value or soundness of information as part of their academic work.
  • ...5 more annotations...
  • One problem may be low expectations,
  • 37 percent of full-time community-college students spent five or fewer hours a week preparing for class. Nineteen percent of students had never done two or more drafts of an assignment, and 69 percent had come to class unprepared at least once.
  • Nearly nine in 10 entering students said they knew how to get in touch with their instructors outside of class, and the same proportion reported that at least one instructor had learned their names. But more than two-thirds of entering students and almost half of more-seasoned students said they had never discussed ideas from their coursework with instructors outside of class.
  • This year's report also strongly recommends that colleges invest more in professional development, for part-time as well as full-time faculty. "The calls for increased college completion come at a time of increasing student enroll­ments and draconian budget cuts; and too often in those circumstances, efforts to develop faculty and staff take low priority,"
  • Lone Star College's Classroom Research Initiative, a form of professional development based on inquiry. Since last year, about 30 faculty members from the community college's five campuses have collaborated to examine assessment data from the report's surveys and other sources and to propose new ways to try to improve learning.
Theron DesRosier

The Future of Work: As Gartner Sees It - 3 views

  •  
    "Gartner points out that the world of work will probably witness ten major changes in the next ten years. Interesting in that it will change how learning happens in the workplace as well. The eLearning industry will need to account for the coming change and have a strategy in place to deal with the changes."
Gary Brown

Cheating Scandal Snares Hundreds in U. of Central Florida Course - The Ticker - The Chr... - 1 views

  • evidence of widespread cheating
  • business course on strategic management,
  • I don’t condone cheating. But I think it is equally pathetic that faculty are put in situations where they feel the only option for an examination is an easy to grade multiple choice or true/false test
  • ...3 more annotations...
  • Faculty all need to wake up, as virtually all test banks, and also all instructor’s manuals with homework answers, are widely available on the internet
  • I think we need to question why a class has 600 students enrolled.
  • Perhaps they are the ones being cheated.
Gary Brown

71 Presidents Pledge to Improve Their Colleges' Teaching and Learning - Faculty - The C... - 0 views

  • In a venture known as the Presidents' Alliance for Excellence in Student Learning and Accountability, they have promised to take specific steps to gather more evidence about student learning, to use that evidence to improve instruction, and to give the public more information about the quality of learning on their campuses.
  • The 71 pledges, officially announced on Friday, are essentially a dare to accreditors, parents, and the news media: Come visit in two years, and if we haven't done these things, you can zing us.
  • deepen an ethic of professional stewardship and self-regulation among college leaders
  • ...4 more annotations...
  • Beginning in 2011, all first-year students at Westminster will be required to create electronic portfolios that reflect their progress in terms of five campuswide learning goals. And the college will expand the number of seniors who take the Collegiate Learning Assessment, so that the test can be used to help measure the strength of each academic major.
  • "The crucial thing is that all of our learning assessments have been designed and driven by the faculty," says Pamela G. Menke, Miami Dade's associate provost for academic affairs. "The way transformation of learning truly occurs is when faculty members ask the questions, and when they're willing to use what they've found out to make change.
  • Other assessment models might point some things out, but they won't be useful if faculty members don't believe in them."
  • "In the long term, as more people join, I hope that the Web site will provide a resource for the kinds of innovations that seem to be successful," he says. "That process might be difficult. Teaching is an art, not a science. But there is still probably a lot that we can learn from each other."
Gary Brown

YouTube - Neil Gershenfeld: The beckoning promise of personal fabrication - 3 views

  • Neil Gershenfeld: The beckoning promise of personal fabrication
  •  
    Nominalism fully debunked.  The keynote from EDUCAUSE.  
Nils Peterson

Jeff Sheldon on the Readiness for Organizational Learning and Evaluation instrument | A... - 4 views

shared by Nils Peterson on 01 Nov 10
  • The ROLE consists of 78 items grouped into six major constructs: 1) Culture, 2) Leadership, 3) Systems and Structures, 4) Communication, 5) Teams, and 6) Evaluation.
    • Nils Peterson
       
      You can look up the book in Amazon and then view inside and search for Appendix A and read the items in the survey. http://www.amazon.com/Evaluation-Organizations-Systematic-Enhancing-Performance/dp/0738202681#reader_0738202681 This might be useful to OAI in assessing readiness (or understanding what in the university culture challenges readiness) OR it might inform our revision (or justify staying out) of our rubric. An initial glance would indicate that there are some cultural constructs in the university that are counter-indicated by the analysis of the ROLE instrument.
  •  
    "Readiness for Organizational Learning and Evaluation (ROLE). The ROLE (Preskill & Torres, 2000) was designed to help us determine the level of readiness for implementing organizational learning, evaluation practices, and supporting processes"
  •  
    An interesting possibility for a Skylight survey (but more reading needed)
Joshua Yeidel

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 2 views

  •  
    So here it is: by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students' learning. And given the imprecision of the value-added scores, just by chance some teachers will be categorized as ineffective or minimally effective two years in a row. The system is rigged to label teachers as ineffective or minimally effective as a precursor to firing them.
  •  
    How assessment of value-added actually works in one setting: the Washington, D.C. public schools. This article actually works the numbers to show that the system is set up to put teachers in the firing zone. Note the tyranny of numerical ratings (some of them subjective) converted into meanings like "minimally effective".
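    The "just by chance, two years in a row" point can be made concrete with a small simulation. This is a hypothetical sketch, not the actual IMPACT formula: it assumes every teacher has identical true effectiveness and each year's value-added score is pure noise, then labels the bottom half "ineffective" each year.

    ```python
    import random

    # Hypothetical sketch (NOT the D.C. IMPACT formula): N teachers with
    # identical true effectiveness; each year's value-added score is noise.
    random.seed(42)
    N = 10_000

    def yearly_labels(n):
        """Label the bottom half of this year's noisy scores 'ineffective'."""
        scores = [random.gauss(0, 1) for _ in range(n)]
        cutoff = sorted(scores)[n // 2]
        return [s < cutoff for s in scores]

    year1 = yearly_labels(N)
    year2 = yearly_labels(N)
    both = sum(a and b for a, b in zip(year1, year2)) / N
    print(f"Flagged 'ineffective' both years by chance alone: {both:.2%}")
    # With independent noise, about 0.5 * 0.5 = 25% of these identical
    # teachers end up flagged two years running.
    ```

    In other words, under a rating scheme that defines half the distribution as ineffective, roughly a quarter of equally effective teachers would be flagged in two consecutive years purely by chance.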
Gary Brown

Disciplines Follow Their Own Paths to Quality - Faculty - The Chronicle of Higher Educa... - 2 views

  • But when it comes to the fundamentals of measuring and improving student learning, engineering professors naturally have more to talk about with their counterparts at, say, Georgia Tech than with the humanities professors at Villanova
    • Gary Brown
       
      Perhaps this is too bad....
  • But there is no nationally normed way to measure the particular kind of critical thinking that students of classics acquire
  • Her colleagues have created discipline-specific critical-reasoning tests for classics and political science
  • ...5 more annotations...
  • Political science cultivates skills that are substantially different from those in classics, and in each case those skills can't be measured with a general-education test.
  • he wants to use tests of reasoning that are appropriate for each discipline
  • I believe Richard Paul has spent a lifetime articulating the characteristics of discipline based critical thinking. But anyway, I think it is interesting that an attempt is being made to develop (perhaps) a "national standard" for critical thinking in classics. In order to assess anything effectively we need a standard. Without a standard there are no criteria and therefore no basis from which to assess. But standards do not necessarily have to be established at the national level. This raises the issue of scale. What is the appropriate scale from which to measure the quality and effectiveness of an educational experience? Any valid approach to quality assurance has to be multi-scaled and requires multiple measures over time. But to be honest the issues of standards and scale are really just the tip of the outcomes iceberg.
    • Gary Brown
       
      Missing the notion that the variance is in the activity more than the criteria.  We hear little of embedding nationally normed and weighted assignments and then assessing the implementation and facilitation variables.... mirror, not lens.
  • the UW Study of Undergraduate Learning (UW SOUL). Results from the UW SOUL show that learning in college is disciplinary; therefore, real assessment of learning must occur (with central support and resources)in the academic departments. Generic approaches to assessing thinking, writing, research, quantitative reasoning, and other areas of learning may be measuring something, but they cannot measure learning in college.
  • It turns out there is a six-week, or 210+ hour, serious reading exposure to two or more domains outside one's own that "turns on" cross domain mapping as a robust capability. Some people just happen to have accumulated, usually by unseen and unsensed happenstance involvements (rooming with an engineer, son of a dad changing domains/careers, etc.), this minimum level of basics that allows robust metaphor based mapping.
Gary Brown

Learning Assessment: The Regional Accreditors' Role - Measuring Stick - The Chronicle o... - 0 views

  • The National Institute for Learning Outcomes Assessment has just released a white paper about the regional accreditors’ role in prodding colleges to assess their students’ learning
  • All four presidents suggested that their campuses’ learning-assessment projects are fueled by Fear of Accreditors. One said that a regional accreditor “came down on us hard over assessment.” Another said, “Accreditation visit coming up. This drives what we need to do for assessment.”
  • regional accreditors are more likely now than they were a decade ago to insist that colleges hand them evidence about student-learning outcomes.
  • ...4 more annotations...
  • Western Association of Schools and Colleges, Ms. Provezis reports, “almost every action letter to institutions over the last five years has required additional attention to assessment, with reasons ranging from insufficient faculty involvement to too little evidence of a plan to sustain assessment.”
  • The white paper gently criticizes the accreditors for failing to make sure that faculty members are involved in learning assessment.
  • “it would be good to know more about what would make assessment worthwhile to the faculty—for a better understanding of the source of their resistance.”
  • Many of the most visible and ambitious learning-assessment projects out there seem to strangely ignore the scholarly disciplines’ own internal efforts to improve teaching and learning.
  •  
    fyi
Gary Brown

Does testing for statistical significance encourage or discourage thoughtful ... - 1 views

  • Does testing for statistical significance encourage or discourage thoughtful data analysis? Posted by Patricia Rogers on October 20th, 2010
  • Epidemiology, 9(3):333–337), which argues not only for thoughtful interpretation of findings, but for not reporting statistical significance at all.
  • We also would like to see the interpretation of a study based not on statistical significance, or lack of it, for one or more study variables, but rather on careful quantitative consideration of the data in light of competing explanations for the findings.
  • ...6 more annotations...
  • we prefer a researcher to consider whether the magnitude of an estimated effect could be readily explained by uncontrolled confounding or selection biases, rather than simply to offer the uninspired interpretation that the estimated effect is significant, as if neither chance nor bias could then account for the findings.
  • Many data analysts appear to remain oblivious to the qualitative nature of significance testing.
  • statistical significance is itself only a dichotomous indicator.
  • it cannot convey much useful information
  • Even worse, those two values often signal just the wrong interpretation. These misleading signals occur when a trivial effect is found to be ’significant’, as often happens in large studies, or when a strong relation is found ’nonsignificant’, as often happens in small studies.
  • Another useful paper on this issue is Kristin Sainani, (2010) “Misleading Comparisons: The Fallacy of Comparing Statistical Significance”Physical Medicine and Rehabilitation, Vol. 2 (June), 559-562 which discusses the need to look carefully at within-group differences as well as between-group differences, and at sub-group significance compared to interaction. She concludes: ‘Readers should have a particularly high index of suspicion for controlled studies that fail to report between-group comparisons, because these likely represent attempts to “spin” null results.”
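    The "misleading signals" point above (a trivial effect reads as 'significant' in a large study, a strong effect as 'nonsignificant' in a small one) can be illustrated with a quick sketch using synthetic numbers, not data from either paper. The effect sizes and sample sizes below are made up for illustration.

    ```python
    import math

    def z_test_p(effect, sd, n):
        """Two-sided p-value for a one-sample z-test of a mean difference."""
        z = effect / (sd / math.sqrt(n))
        # 2 * (1 - Phi(|z|)) expressed via the complementary error function
        return math.erfc(abs(z) / math.sqrt(2))

    # Trivial effect, huge study: comes out 'significant'
    p_large = z_test_p(effect=0.02, sd=1.0, n=100_000)
    # Strong effect, tiny study: comes out 'nonsignificant'
    p_small = z_test_p(effect=0.5, sd=1.0, n=10)

    print(f"trivial effect (0.02 sd), n=100000: p = {p_large:.2g}")
    print(f"strong effect (0.5 sd),  n=10:     p = {p_small:.2g}")
    ```

    The dichotomous significant/nonsignificant label inverts the practical importance of the two findings, which is exactly why the quoted authors prefer careful quantitative consideration of effect magnitude over significance reporting.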
Judy Rumph

Views: Why Are We Assessing? - Inside Higher Ed - 1 views

  • Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination.
  • Our destination, which is what we should be focusing on, is the purpose of assessment. Over the last decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.
  • The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education
  • ...7 more annotations...
  • Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor
  • Third, we need to accept how good we already are, so we can recognize success when we see it
  • And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need
  • Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.
  • And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.
  • But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.
  • Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally-informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.
  •  
    Yes, this article resonated with me too. Especially connecting assessment to teaching and learning. The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education.... today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results, and they don't know what to do to improve them, I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning. I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus. I wonder how many of the graduate programs they attended include the study and practice of contemporary research on effective higher education pedagogies. No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together.