
CTLT and Friends: Group items tagged faculty development


Gary Brown

At Colleges, Assessment Satisfies Only Accreditors - Letters to the Editor - The Chroni... - 2 views

  • Some of that is due to the influence of the traditional academic freedom that faculty members have enjoyed. Some of it is ego. And some of it is lack of understanding of how it can work. There is also a huge disconnect between satisfying outside parties, like accreditors and the government, and using assessment as a quality-improvement system.
  • We are driven by regional accreditation and program-level accreditation, not by quality improvement. At our institution, we talk about assessment a lot, and do just enough to satisfy the requirements of our outside reviewers.
  • Standardized direct measures, like the Major Field Test for M.B.A. graduates?
  • The problem with the test is that it does not directly align with our program's learning outcomes and it does not yield useful information for closing the loop. So why do we use it? Because it is accepted by accreditors as a direct measure and it is less expensive and time-consuming than more useful tools.
  • Without exception, the most useful information for improving the program and student learning comes from the anecdotal and indirect information.
  • We don't have the time and the resources to do what we really want to do to continuously improve the quality of our programs and instruction. We don't have a culture of continuous improvement. We don't make changes on a regular basis, because we are trapped by the catalog publishing cycle, accreditation visits, and the entrenched misunderstanding of the purposes of assessment.
  • The institutions that use it are ones that have adequate resources to do so. The time necessary for training, whole-system involvement, and developing the programs for improvement is daunting. And it is only being used by one regional accrediting body, as far as I know.
  • Until higher education as a whole is willing to look at changing its approach to assessment, I don't think it will happen
  •  
    The challenge, and another piece of evidence, that the nuances of assessment as it relates to teaching and learning remain elusive.
Gary Brown

A Critic Sees Deep Problems in the Doctoral Rankings - Faculty - The Chronicle of Highe... - 1 views

  • This week he posted a public critique of the NRC study on his university's Web site.
  • "Little credence should be given" to the NRC's ranges of rankings.
  • "There's not very much real information about quality in the simple measures they've got."
  • The NRC project's directors say that those small samples are not a problem, because the reputational scores were not converted directly into program assessments. Instead, the scores were used to develop a profile of the kinds of traits that faculty members value in doctoral programs in their field.
  • For one thing, Mr. Stigler says, the relationships between programs' reputations and the various program traits are probably not simple and linear.
  • if these correlations between reputation and citations were plotted on a graph, the most accurate representation would be a curved line, not a straight line. (The curve would occur at the tipping point where high citation levels make reputations go sky-high.)
  • Mr. Stigler says that it was a mistake for the NRC to so thoroughly abandon the reputational measures it used in its previous doctoral studies, in 1982 and 1995. Reputational surveys are widely criticized, he says, but they do provide a check on certain kinds of qualitative measures.
  •  
    What is not challenged is the validity and utility of the construct itself--reputation rankings.
Gary Brown

Wired Campus - The Chronicle of Higher Education - 0 views

  • colleges and universities can learn from for-profit colleges' approach to teaching.
  • "If disruptive technology allows them to serve new markets, or serve markets more efficiently and effectively in order to profit, then they are more likely to utilize them."
  • Some for-profit institutions emphasize instructor training in a way that more traditional institutions should emulate, according to the report. The University of Phoenix, for example, "has required faculty to participate in a four-week training program that includes adult learning theory," the report said.
  • The committee's largest sponsors include GE, Merrill Lynch and Company, IBM, McKinsey and Company, General Motors, and Pfizer.
  •  
    At minimum, the list of advocates suggests that higher ed might qualify for a bailout.
Gary Brown

Capella University to Receive 2010 CHEA Award - 2 views

  • The Council for Higher Education Accreditation, a national advocate and institutional voice for self-regulation of academic quality through accreditation, has awarded the 2010 CHEA Award for Outstanding Institutional Practice in Student Learning Outcomes to Capella University (MN), one of four institutions that will receive the award in 2010. Capella University is the first online university to receive the award.
  • Capella University’s faculty have developed an outcomes-based curricular model
  • “Capella University is a leader in accountability in higher education. Their work in student learning outcomes exemplifies the progress that institutions are making through the implementation of comprehensive, relevant and effective initiatives,” said CHEA President Judith Eaton. “We are pleased to recognize this institution with the CHEA Award.”
  • our award criteria: 1) articulation and evidence of outcomes; 2) success with regard to outcomes; 3) information to the public about outcomes; and 4) use of outcomes for educational improvement.
  • In addition to Capella University, Portland State University (OR), St. Olaf College (MN) and the University of Arkansas - Fort Smith (AR) also will receive the 2010 CHEA Award. The award will be presented at the 2010 CHEA Annual Conference, which will be held January 25-28 in Washington, D.C.
  •  
    Capella has a mandatory faculty training program, and then it selects from that program those who will teach. Candidates also pay their own tuition for the "try-out" or training.
Theron DesRosier

Virtual-TA - 2 views

  • We also developed a technology platform that allows our TAs to electronically insert detailed, actionable feedback directly into student assignments
  • Your instructors give us the schedule of assignments, when student assignments are due, when we might expect to receive them electronically, when the scored assignments will be returned, the learning outcomes on which to score the assignments, the rubrics to be used and the weights to be applied to different learning outcomes. We can use your rubrics to score assignments or design rubrics for sign-off by your faculty members.
  • review and embed feedback using color-coded pushpins (each color corresponds to a specific learning outcome) directly onto the electronic assignments. Color-coded pushpins provide a powerful visual diagnostic.
  • We do not have any contact with your students. Instructors retain full control of the process, from designing the assignments in the first place, to specifying learning outcomes and attaching weights to each outcome. Instructors also review the work of our TAs through a step called the Interim Check, which happens after 10% of the assignments have been completed. Faculty provide feedback, offer any further instructions and eventually sign-off on the work done, before our TAs continue with the remainder of the assignments
  • Finally, upon the request of the instructor, the weights he/she specified for the learning outcomes are applied to the rubric-based scores to generate a composite score for each student assignment
  • As an added bonus, our Virtual-TAs provide a detailed, summative report for the instructor on the overall class performance on the given assignment, which includes a look at how the class fared on each outcome, where the students did well, where they stumbled and what concepts, if any, need reinforcing in class the following week.
  • We can also, upon request, generate reports by Student Learning Outcomes (SLOs). This report can be used by the instructor to immediately address gaps in learning at the individual or classroom level.
  • Think of this as a micro-closing-of-the-loop that happens each week.  Contrast this with the broader, closing-the-loop that accompanies program-level assessment of learning, which might happen at the end of a whole academic year or later!
  •  
    I went to Virtual-TA and highlighted their language describing how it works.
Gary Brown

Evaluations That Make the Grade: 4 Ways to Improve Rating the Faculty - Teaching - The ... - 1 views

  • For students, the act of filling out those forms is sometimes a fleeting, half-conscious moment. But for instructors whose careers can live and die by student evaluations, getting back the forms is an hour of high anxiety
  • "They have destroyed higher education." Mr. Crumbley believes the forms lead inexorably to grade inflation and the dumbing down of the curriculum.
  • Texas enacted a law that will require every public college to post each faculty member's student-evaluation scores on a public Web site.
  • The IDEA Center, an education research group based at Kansas State University, has been spreading its particular course-evaluation gospel since 1975. The central innovation of the IDEA system is that departments can tailor their evaluation forms to emphasize whichever learning objectives are most important in their discipline.
  • (Roughly 350 colleges use the IDEA Center's system, though in some cases only a single department or academic unit participates.)
  • The new North Texas instrument that came from these efforts tries to correct for biases that are beyond an instructor's control. The questionnaire asks students, for example, whether the classroom had an appropriate size and layout for the course. If students were unhappy with the classroom, and if it appears that their unhappiness inappropriately colored their evaluations of the instructor, the system can adjust the instructor's scores accordingly.
  • Elaine Seymour, who was then director of ethnography and evaluation research at the University of Colorado at Boulder, was assisting with a National Science Foundation project to improve the quality of science instruction at the college level. She found that many instructors were reluctant to try new teaching techniques because they feared their course-evaluation ratings might decline.
  • "So the ability to do some quantitative analysis of these comments really allows you to take a more nuanced and effective look at what these students are really saying."
  • Mr. Frick and his colleagues found that his new course-evaluation form was strongly correlated with both students' and instructors' own measures of how well the students had mastered each course's learning goals.
  • The survey instrument, known as SALG, for Student Assessment of their Learning Gains, is now used by instructors across the country. The project's Web site contains more than 900 templates, mostly for courses in the sciences.
  • "Students are the inventory," Mr. Crumbley says. "The real stakeholders in higher education are employers, society, the people who hire our graduates. But what we do is ask the inventory if a professor is good or bad. At General Motors," he says, "you don't ask the cars which factory workers are good at their jobs. You check the cars for defects, you ask the drivers, and that's how you know how the workers are doing."
  • William H. Pallett, president of the IDEA Center, says that when course rating surveys are well-designed and instructors make clear that they care about them, students will answer honestly and thoughtfully.
  • In Mr. Bain's view, student evaluations should be just one of several tools colleges use to assess teaching. Peers should regularly visit one another's classrooms, he argues. And professors should develop "teaching portfolios" that demonstrate their ability to do the kinds of instruction that are most important in their particular disciplines. "It's kind of ironic that we grab onto something that seems fixed and fast and absolute, rather than something that seems a little bit messy," he says. "Making decisions about the ability of someone to cultivate someone else's learning is inherently a messy process. It can't be reduced to a formula."
  •  
    Old friends at the IDEA Center, and an old but persistent issue.
Theron DesRosier

How Group Dynamics May Be Killing Innovation - Knowledge@Wharton - 5 views

  • Christian Terwiesch and Karl Ulrich argue that group dynamics are the enemy of businesses trying to develop one-of-a-kind new products, unique ways to save money or distinctive marketing strategies.
  • Terwiesch, Ulrich and co-author Karan Girotra, a professor of technology and operations management at INSEAD, found that a hybrid process -- in which people are given time to brainstorm on their own before discussing ideas with their peers -- resulted in more and better quality ideas than a purely team-oriented process.
    • Theron DesRosier
       
      This happens naturally when collaboration is asynchronous.
    • Theron DesRosier
       
      They use the term "team-oriented process," but what they mean, I think, is a synchronous, face-to-face brainstorming session.
  • Although several existing experimental studies criticize the team brainstorming process due to the interference of group dynamics, the Wharton researchers believe their work stands out due to a focus on the quality, in addition to the number, of ideas generated by the different processes -- in particular, the quality of the best idea.
  • "The evaluation part is critical. No matter which process we used, whether it was the [team] or hybrid model, they all did significantly worse than we hoped [in the evaluation stage]," Terwiesch says. "It's no good generating a great idea if you don't recognize the idea as great. It's like me sitting here and saying I had the idea for Amazon. If I had the idea but didn't do anything about it, then it really doesn't matter that I had the idea."
  • He says an online system that creates a virtual "suggestion box" can accomplish the same goal as long as it is established to achieve a particular purpose.
  • Imposing structure doesn't replace or stifle the creativity of employees, Ulrich adds. In fact, the goal is to establish an idea generation process that helps to bring out the best in people. "We have found that, in the early phases of idea generation, providing very specific process guideposts for individuals [such as] 'Generate at least 10 ideas and submit them by Wednesday,' ensures that all members of a team contribute and that they devote sufficient creative energy to the problem."
  • The results of the experiment with the students showed that average quality of the ideas generated by the hybrid process were better than those that came from the team process by the equivalent of roughly 30 percentage points.
  • in about three times more ideas than the traditional method.
  • "We find huge differences in people's levels of creativity, and we just have to face it. We're not all good singers and we're not all good runners, so why should we expect that we all are good idea generators?
  • They found that ideas built around other ideas are not statistically better than any random suggestion.
  • "In innovation, variance is your friend. You want wacky stuff because you can afford to reject it if you don't like it. If you build on group norms, the group kills variance."
  •  
    Not as radical as it first seems, but pertains to much of our work and the work of others.
Joshua Yeidel

University World News - US: America can learn from Bologna process - 0 views

  •  
    Lumina proposes that the US "adapt and apply the lessons learned from the Bologna Process" in the EU, which has developed methodologies that "uniquely focus on linking student learning and the outcomes of higher education" -- tautological though that sounds.
  •  
    Apparently the "audacious" discussion in the WCET webinar yesterday (to be linked), featuring Ellen Wagner and Peter Smith, is old hat in Europe. A national "degree framework" is almost inconceivable in the US, but 'tuning' -- a "faculty-led approach that involves seeking input from students, recent graduates and employers to establish criterion-referenced learning outcomes and competencies" -- sounds a lot like goal-setting.
Joshua Yeidel

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • "The CLA is really an authentic assessment process,"
    • Joshua Yeidel
       
      What is the meaning of "authentic" in this statement? It certainly isn't "situated in the real world" or "of intrinsic value".
  • it measures analytical ability, problem-solving ability, critical thinking, and communication.
  • the CLA typically reports scores on a "value added" basis, controlling for the scores that students earned on the SAT or ACT while in high school.
    • Joshua Yeidel
       
      If SAT and ACT are measuring the same things as CLA, then why not just use them? If they are measuring different things, why "control for" their scores?
  • improved models of instruction.
  • add CLA-style assignments to their liberal-arts courses.
    • Joshua Yeidel
       
      Maybe the best way to prepare for the test, but is it the best way to develop analytical ability, et al.?
  • "If a college pays attention to learning and helps students develop their skills—whether they do that by participating in our programs or by doing things on their own—they probably should do better on the CLA,"
    • Joshua Yeidel
       
      Just in case anyone missed the message: pay attention to learning, and you'll _probably_ do better on the CLA. Get students to practice CLA tasks, and you _will_ do better on the CLA.
  • "Standardized tests of generic skills—I'm not talking about testing in the major—are so much a measure of what students bring to college with them that there is very little variance left out of which we might tease the effects of college," says Ms. Banta, who is a longtime critic of the CLA. "There's just not enough variance there to make comparative judgments about the comparative quality of institutions."
    • Joshua Yeidel
       
      It's not clear what "standardized tests" means in this comment. Does the "lack of variance" apply to all assessments (including, e.g., e-portfolios)?
  • Can the CLA fill both of those roles?
  •  
    A summary of the current state of "thinking" with regard to CLA. Many fallacies and contradictions are (unintentionally) exposed. At least CLA appears to be more about skills than content (though the question of how it is graded isn't even raised), but the "performance task" approach is the smallest possible step in that direction.
Gary Brown

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • Others say those who take the test have little motivation to do well, which makes it tough to draw conclusions from their performance.
  • "Everything that No Child Left Behind signified during the Bush administration—we operate 180 degrees away from that," says Roger Benjamin, president of the Council for Aid to Education, which developed and promotes the CLA. "We don't want this to be a high-stakes test. We're putting a stake in the ground on classic liberal-arts issues. I'm willing to rest my oar there. These core abilities, these higher-order skills, are very important, and they're even more important in a knowledge economy where everyone needs to deal with a surplus of information." Only an essay test, like the CLA, he says, can really get at those skills.
  • "The CLA is really an authentic assessment process," says Pedro Reyes, associate vice chancellor for academic planning and assessment at the University of Texas system.
  • "The Board of Regents here saw that it would be an important test because it measures analytical ability, problem-solving ability, critical thinking, and communication. Those are the skills that you want every undergraduate to walk away with." (Other large systems that have embraced the CLA include California State University and the West Virginia system.)
  • value added
  • We began by administering a retired CLA question, a task that had to do with analyzing crime-reduction strategies,
  • performance task that mirrors the CLA
  • Mr. Ernsting and Ms. McConnell are perfectly sincere about using CLA-style tasks to improve instruction on their campuses. But at the same time, colleges have a less high-minded motive for familiarizing students with the CLA style: It just might improve their scores when it comes time to take the actual test.
  • by 2012, the CLA scores of more than 100 colleges will be posted, for all the world to see, on the "College Portrait" Web site of the Voluntary System of Accountability, an effort by more than 300 public colleges and universities to provide information about life and learning on their campuses.
  • If familiarizing students with CLA-style tasks does raise their scores, then the CLA might not be a pure, unmediated reflection of the full range of liberal-arts skills. How exactly should the public interpret the scores of colleges that do not use such training exercises?
  • Trudy W. Banta, a professor of higher education and senior adviser to the chancellor for academic planning and evaluation at Indiana University-Purdue University at Indianapolis, believes it is a serious mistake to publicly release and compare scores on the test. There is too much risk, she says, that policy makers and the public will misinterpret the numbers.
  • most colleges do not use a true longitudinal model: That is, the students who take the CLA in their first year do not take it again in their senior year. The test's value-added model is therefore based on a potentially apples-and-oranges comparison.
  • freshman test-takers' scores are assessed relative to their SAT and ACT scores, and so are senior test-takers' scores. For that reason, colleges cannot game the test by recruiting an academically weak pool of freshmen and a strong pool of seniors.
  • students do not always have much motivation to take the test seriously
  • seniors, who are typically recruited to take the CLA toward the end of their final semester, when they can already taste the graduation champagne.
  • Of the few dozen universities that had already chosen to publish CLA data on that site, roughly a quarter of the reports appeared to include erroneous descriptions of the year-to-year value-added scores.
  • It is clear that CLA scores do reflect some broad properties of a college education.
  • Students' CLA scores improved if they took courses that required a substantial amount of reading and writing. Many students didn't take such courses, and their CLA scores tended to stay flat.
  • Colleges that make demands on students can actually develop their skills on the kinds of things measured by the CLA.
  • Mr. Shavelson believes the CLA's essays and "performance tasks" offer an unusually sophisticated way of measuring what colleges do, without relying too heavily on factual knowledge from any one academic field.
  • Politicians and consumers want easily interpretable scores, while colleges need subtler and more detailed data to make internal improvements.
  • The CLA is used at more than 400 colleges
  • Since its debut a decade ago, it has been widely praised as a sophisticated alternative to multiple-choice tests
Gary Brown

OECD Project Seeks International Measures for Assessing Educational Quality - Internati... - 0 views

  • The first phase of an ambitious international study that intends to assess and compare learning outcomes in higher-education systems around the world was announced here on Wednesday at the conference of the Council for Higher Education Accreditation.
  • Richard Yelland, of the OECD's Education Directorate, is leading the project, which he said expects to eventually offer faculty members, students, and governments "a more balanced assessment of higher-education quality" across the organization's 31 member countries.
  • learning outcomes are becoming a central focus worldwide
  • the feasibility study is adapting the Collegiate Learning Assessment, an instrument developed by the Council for Aid to Education in the United States, to an international context.
  • At least six nations are participating in the feasibility study.
  • 14 countries are expected to participate in the full project, with an average of 10 institutions per country and about 200 students per institution,
  • The project's target population will be students nearing the end of three-year or four-year degrees, and will eventually measure student knowledge in economics and engineering.
  • While the goal of the project is not to produce another global ranking of universities, the growing preoccupation with such lists has crystallized what Mr. Yelland described as the urgency of pinning down what exactly it is that most of the world's universities are teaching and how well they are doing
  • Judith S. Eaton, president of the Council for Higher Education Accreditation, said she was also skeptical about whether the project would eventually yield common international assessment mechanisms.
  • Ms. Eaton noted, the same sets of issues recur across borders and systems, about how best to enhance student learning and strengthen economic development and international competitiveness.
  •  
    Another day, another press piece, again thinking in comparisons.
Gary Brown

News: Defining Accountability - Inside Higher Ed - 0 views

  • they should do so in ways that reinforce the behaviors they want to see -- and avoid the kinds of perverse incentives that are so evident in many policies today.
  • This is especially true, several speakers argued, on the thorniest of higher education accountability questions -- those related to improving student outcomes.
  • Oh, and one or two people actually talked about how nice it would be if policy makers still envisioned college as a place where people learn about citizenship or just become educated for education's sake.
  • only if the information they seek to collect is intelligently framed, which the most widely used current measure -- graduation rates -- is not
  • "work force ready"
  • Accountability is not quite as straightforward as we think," said Rhoades, who described himself as "not a 'just say no' guy" about accountability. "It's not a question of whether [colleges and faculty should be held accountable], but how, and by whom," he said. "It's about who's developing the measures, and what behaviors do they encourage?"
  • federal government needs to be the objective protector of taxpayers' dollars,"
  • Judith Eaton, president of the Council for Higher Education Accreditation, said that government regulation would be a major mistake, but said that accreditors needed to come to agreement on "community-driven, outcomes-based standards" to which colleges should be held.
  • But while they complain when policy makers seek to develop measures that compare one institution against another, colleges "keep lists of peers with which they compare themselves" on many fronts, Miller said.
  •  
    High-level debates again.
Kimberly Green

http://sites.google.com/site/podnetwork/ - 0 views

  •  
    POD's wiki for sharing and discussion
Gary Brown

Colleges May Be Missing a Chance for Change - International - The Chronicle of Higher E... - 1 views

    • Gary Brown
       
      And what are people for, after all?
  • Peter P. Smith, senior vice president for academic strategies and development at Kaplan Higher Education, said that if traditional universities did not adjust, new institutions would evolve to meet student needs. Those new institutions, said Mr. Smith, whose company is a for-profit education provider, would be more student-centric, would deliver instruction with greater flexibility, and would offer educational services at a lower cost.
  • Speakers at an international conference here delivered a scathing assessment of higher education: Universities, they said, are slow to change, uncomfortable in dealing with real-world problems, and culturally resistant to substantive internationalization.
  • The gathering drew about 500 government officials, institutional leaders, and researchers
  • both education and research must become more relevant and responsive to society.
  • many faculty members may be "uncomfortable" with having deeper links to industry because they don't understand that world. Students, however, are highly practical, Mr. Fadel said, and are specifically seeking education that will get them a job or give them an advantage in the workplace.
  • "I'm sorry, as a student, you do not go to university to learn. You go to get a credential," he said.
    • Gary Brown
       
      And if you graduate more appreciative of the credential than of what and how you have learned, then the education has failed.
  • That does not mean colleges simply ought to turn out more graduates for in-demand professions like science and engineering, Mr. Fadel added. Colleges need to infuse other disciplines with science and engineering skills.
Gary Brown

Empowerment Evaluation - 1 views

  • Empowerment Evaluation in Stanford University's School of Medicine
  • Empowerment evaluation provides a method for gathering, analyzing, and sharing data about a program and its outcomes and encourages faculty, students, and support personnel to actively participate in system changes.
  • It assumes that the more closely stakeholders are involved in reflecting on evaluation findings, the more likely they are to take ownership of the results and to guide curricular decision making and reform.
  • The steps of empowerment evaluation
  • designating a “critical friend” to communicate areas of potential improvement,
  • collecting evaluation data,
  • encouraging a cycle of reflection and action
  • establishing a culture of evidence
  • developing reflective educational practitioners.
  • cultivating a community of learners
  • yearly cycles of improvement at the Stanford University School of Medicine
  •  
    The findings were presented in Academic Medicine, a medical education journal, earlier this year.
Gary Brown

Disciplines Follow Their Own Paths to Quality - Faculty - The Chronicle of Higher Educa... - 2 views

  • But when it comes to the fundamentals of measuring and improving student learning, engineering professors naturally have more to talk about with their counterparts at, say, Georgia Tech than with the humanities professors at Villanova
    • Gary Brown
       
      Perhaps this is too bad....
  • But there is no nationally normed way to measure the particular kind of critical thinking that students of classics acquire
  • Her colleagues have created discipline-specific critical-reasoning tests for classics and political science
  • Political science cultivates skills that are substantially different from those in classics, and in each case those skills can't be measured with a general-education test.
  • he wants to use tests of reasoning that are appropriate for each discipline
  • I believe Richard Paul has spent a lifetime articulating the characteristics of discipline-based critical thinking. But anyway, I think it is interesting that an attempt is being made to develop (perhaps) a "national standard" for critical thinking in classics. In order to assess anything effectively we need a standard. Without a standard there are no criteria and therefore no basis from which to assess. But standards do not necessarily have to be established at the national level. This raises the issue of scale. What is the appropriate scale from which to measure the quality and effectiveness of an educational experience? Any valid approach to quality assurance has to be multi-scaled and requires multiple measures over time. But to be honest, the issues of standards and scale are really just the tip of the outcomes iceberg.
    • Gary Brown
       
      Missing the notion that the variance is in the activity more than the criteria. We hear little of embedding nationally normed and weighted assignments and then assessing the implementation and facilitation variables... mirror, not lens.
  • the UW Study of Undergraduate Learning (UW SOUL). Results from the UW SOUL show that learning in college is disciplinary; therefore, real assessment of learning must occur (with central support and resources) in the academic departments. Generic approaches to assessing thinking, writing, research, quantitative reasoning, and other areas of learning may be measuring something, but they cannot measure learning in college.
  • It turns out there is a six-week, or 210+ hour, serious reading exposure to two or more domains outside one's own that "turns on" cross-domain mapping as a robust capability. Some people just happen to have accumulated this minimum level of basics, usually through unseen and unsensed happenstance involvements (rooming with an engineer, being the son of a dad changing domains/careers, etc.), which allows robust metaphor-based mapping.
Nils Peterson

New Media Technologies and the Scholarship of Teaching and Learning: A Brief Introducti... - 0 views

  • A key element in this transformation is shifting the unit of analysis from the learner in a single course to the learner over time, inside and outside the classroom. What does this shift imply for the ways we understand learning and development? If we accept this new learning paradigm, what kinds of accommodations do we need to make in our approaches to the curriculum, the classroom, the role of faculty, and the empowerment of learners?
    • Nils Peterson
       
      See our conversations about transformative assessment and problems as the motivators of study that span courses. We have a blog post from June, early in the Harvesting Gradebook work, that talks about these problems spanning courses.
Gary Brown

Assess this! - 5 views

  • Assess this! is a gathering place for information and resources about new and better ways to promote learning in higher education, with a special focus on high-impact educational practices, student engagement, general or liberal education, and assessment of learning.
  • If you'd like to help make Assess this! more useful, there are some things you can do. You can comment on a post by clicking on the comments link following the post.
  • Of the various ways to assess student learning outcomes, many faculty members prefer what are called “authentic” approaches that document student performance during or at the end of a course or program of study. In this paper, assessment experts Trudy Banta, Merilee Griffin, Teresa Flateby, and Susan Kahn describe the development of several promising authentic assessment approaches.
  • Going Public: Douglas C. Bennett, President of Earlham College, suggests that each institution maintain a public learning audit document, and gives the example of what this means for Earlham College as a form of public accountability.
  • More Transparency: Martha Kanter, from the U.S. Education Department, calls for more transparency in the way higher education conducts accreditation.
  • Despite the uptick in activity, "I still feel like there's no there there" when it comes to colleges' efforts to measure student learning, Kevin Carey, policy director at Education Sector, said in a speech at the Council for Higher Education Accreditation meeting Tuesday.
  • Most of the assessment activity on campuses can be found in the nooks and crannies of the institutions - by individual professors, or in one department - and it is often not tied to goals set broadly at the institutional level.
  • Nine Principles of Good Practice for Assessing Student Learning
  •  
    A very interesting and useful site where we might help ourselves by getting involved.
Gary Brown

Improving Teaching Will Require Strategic Thinking - Letters to the Editor - The Chroni... - 1 views

  • a rather large gap between knowledge about effective teaching practices in higher education and the use of these practices in higher education.
  • the greatest gains in STEM education are likely to come from the development of strategies to encourage faculty and administrators to implement proven instructional strategies."
  • The issue is not just one of finding better ways to motivate professors. Most professors already take their teaching responsibilities seriously and are motivated to do a good job. Improving instruction will require strategic and systematic work at all levels of the educational system.
  •  
    note the focus on systems
  •  
    This piece raises a related issue we have been discussing at OAI -- "First, and perhaps most important, there is very little research conducted on how to promote change in instructional practices used in higher education. " How does leadership promote change? How do leaders -- such as dept chairs -- promote and manage change? How do they get, or invest in, those skills?
Gary Brown

Practitioner Research as a Way of Knowing: A Case Study of Teacher Learning in Improvi... - 3 views

  •  
    Great resource, particularly for work with science and engineering.