CTLT and Friends: Group items tagged "how to"


Nils Peterson

2009 Annual Meeting | Conference Program - 0 views

  • This session explores the notion that assessment for transformational learning is best utilized as a learning tool. By providing timely, transparent, and appropriate feedback, both to students and to the institution itself, learning is enhanced – a far different motive for assessment than is external accountability.
    • Nils Peterson
       
      need to get to these guys with our harvesting gradebook ideas...
    • Nils Peterson
       
      decided to attend another session. Hersh was OK before lunch, but the talk by Pan looks more promising
  • Academic and corporate communities agree on the urgent need for contemporary, research-based pedagogies of engagement in STEM fields. Participants will learn how leaders from academic departments and institutions have collaborated with leaders from the corporate and business community in regional networks to ensure that graduates meet the expectations of prospective employers and the public.
    • Nils Peterson
       
      here is another session with links to CTLT work, both harvesting gradebook and the ABET work
  • Professor Pan will discuss the reflective teaching methods used to prepare students to recognize and mobilize community assets as they design, implement, and evaluate projects to improve public health.
    • Nils Peterson
       
      Students tasked to learn about a community: ride the bus, make a doctor's appointment. Then tasked to do a non-clinical health project in that community (they do plenty of clinical work elsewhere in the program). The project must build capacity in the community so it survives after the student leaves. Example: working with Hispanic parents in Sacramento on a parenting issue, e.g. getting kids to sleep on time. The student had identified a problem in the community, but the first project idea was to show a video, which was not capacity building. Rather than showing the video, the student used it as a template and made a new video with the families as actors. The result was a Spanish-language DVD that the community could own. Pan thinks this is increased capacity in the community.
  • ...17 more annotations...
  • Freshman Survey annually examines the academic habits of mind of entering first-year students.  Along with academic involvement, the survey examines diversity, civic engagement, college admissions and expectations of college. 
  • The project aims to promote faculty and student assessment of undergraduate research products in relation to outcomes associated with basic research skills and general undergraduate learning principles (communication and quantitative reasoning, critical thinking, and integration and application of knowledge).
  • They focus educators on the magnitude of the challenge to prepare an ever-increasingly diverse, globally-connected student body with the knowledge, ability, processes, and confidence to adapt to diverse environments and respond creatively to the enormous issues facing humankind.
  • One challenge of civic engagement in the co-curriculum is the merging of cost and outcome: creating meaningful experiences for students and the community with small staffs, on small budgets, while still having significant, purposeful impact. 
  • a) claims that faculty are the sole arbiters of what constitutes a liberal education and b) counterclaims that student life professionals also possess the knowledge and expertise critical to defining students’ total learning experiences.
    • Nils Peterson
       
      also, how many angels can dance on the head of a pin?
  • This session introduces a three-year national effort to document how colleges and universities are using assessment data to improve teaching and learning and to facilitate the dissemination and adoption of best practices in the assessment of college learning outcomes.
  • Exciting pedagogies of engagement abound, including undergraduate research, community-engaged learning, interdisciplinary exploration, and international study.  However, such experiences are typically optional and non-credit-bearing for students, and/or “on top of” the workload for faculty. This session explores strategies for integrating engaged learning into the institutional fabric (curriculum, student role, faculty role) and increasing access to these transformative experiences.
  • hands-on experiential learning, especially in collaboration with other students, is a superior pedagogy but how can this be provided in increasingly larger introductory classes? 
  • As educators seek innovative ways to manage knowledge and expand interdisciplinary attention to pressing global issues, as students and parents look for assurances that their tuition investment will pay professional dividends, and as alumni look for meaningful ways to give back to the institutions that nurtured and prepared them, colleges and universities can integrate these disparate goals through the Guilds, intergenerational membership networks that draw strength from the contributions of all of their members.
    • Nils Peterson
       
      see Theron's ideas for COMM.
  • Civic engagement learning derives its power from the engagement of students with real communities—local, national, and global. This panel explores the relationship between student learning and the contexts in which that learning unfolds by examining programs that place students in diverse contexts close to campus and far afield.
  • For institutional assessment to make a difference for student learning its results must result in changes in classroom practice. This session explores ways in which the institutional assessment of student learning, such as the Wabash National Study of Liberal Arts Education and the Collegiate Learning Assessment, can be connected to our classrooms.
  • Interdisciplinary Teaching and Object-Based Learning in Campus Museums
  • To address pressing needs of their communities, government and non-profit agencies are requesting higher education to provide education in an array of human and social services. To serve these needs effectively, higher education needs to broaden and deepen its consultation with practitioners in designing new curricula. Colleges and universities would do well to consider a curriculum development model that requires consultation not only with potential employers, but also with practitioners and supervisors of practitioners.
  • Should Academics be Active? Campuses and Cutting Edge Civic Engagement
  • If transformational liberal education requires engaging the whole student across the educational experience, how can colleges and universities renew strategy and allocate resources effectively to support it?  How can assessment be used to improve student learning and strengthen a transformational learning environment? 
    • Nils Peterson
       
      Purpose of the university is not to grant degrees; it has something to do with learning. Keeling's perspective is that the learning should be transformative -- changing perspective, liberating and emancipatory. Learning is a complex interaction among the student and others, new knowledge and experience, events, and the learner's own aspirations; learners construct meaning from these elements. "We change our minds," altering the brain at the micro level. Brain imaging research demonstrates that analogical (abstract) learning demands more from more areas of the brain than semantic (concrete) learning. Mind is not an abstraction; it is based in the brain, a working physical organ. Both the learner and the environment matter to the learning. (Seeds magazine, current issue, covers brain imaging and learning.) Segue from brain research to the need for the university to educate the whole student. Keeling uses the term 'transformative learning' to mean transforming the learning (re-wiring the brain) but does not use 'transformative assessment' (see Wikipedia).
  • But as public debates roil, higher education has been more reactive than proactive on the question of how best to ensure that today’s students are fully prepared for a fast-paced future.
    • Nils Peterson
       
      Bologna process being adopted (slowly) in the EU; the idea is to make academic degrees more interchangeable and understandable across the EU. Three elements:
      * Qualification Frameworks (transnational, national, disciplinary). Frameworks are graduated, with increasing expertise and autonomy required for the upper levels. They sound like broad skills that we might recognize in the WSU CITR. Not clear how they are assessed.
      * Tuning (benchmarking) process.
      * Diploma Supplements (licensure, thesis, other capstone activities) that extend the information in the transcript. A US equivalent might be the Kuali Student system for extending the transcript.
      Emerging dialog on American capability: this dialog is coming from two directions, on campus and from employers. Connect to the Greater Expectations initiative (2000-2005), which concluded that American HE has islands of innovation and led to the LEAP (Liberal Education and America's Promise) initiative (2005-2015). The dialog is converging because of several forces:
      * Changes in the balance of economic and political power -- "the rise of the rest (of the world)."
      * A global economy in which innovation is key to growth and prosperity.
      LEAP attempts to frame the dialog (look for LEAP on the AACU website). Miami-Dade CC has announced a LEAP-derived covenant; the goals must span all aspects of their programs. Define liberal education: knowledge of human cultures and the physical and natural world; intellectual and practical skills; responsibility; integrative skills. The marker of success (here is where the Transformative Gradebook fits in) is evidence that students can apply the essential learning outcomes to complex, unscripted problems and real-world settings. Current failure -- we have not tracked our progress, or have found that we are not doing well. See the AACU employer survey: 5-10% of current graduates are taking courses that would meet the global competencies (transcript analysis). See NSSE on personal and social responsibility gains, less tha
  • Dr. Pan will also talk about strategies for breaking down cultural barriers.
    • Nils Peterson
       
      Pan found a non-profit agency to serve as a conduit and coordinator to level the power between the university and grassroots organizations; it helped with cultural gaps.
Judy Rumph

Views: Why Are We Assessing? - Inside Higher Ed - 1 views

  • Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination.
  • Our destination, which is what we should be focusing on, is the purpose of assessment. Over the last decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.
  • The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education
  • ...7 more annotations...
  • Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor.
  • Third, we need to accept how good we already are, so we can recognize success when we see it.
  • And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need.
  • Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.
  • And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.
  • But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.
  • Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally-informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.
  •  
    Yes, this article resonated with me too. Especially connecting assessment to teaching and learning. The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education.... today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results, and they don't know what to do to improve them, I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning. I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus. I wonder how many of the graduate programs they attended include the study and practice of contemporary research on effective higher education pedagogies. No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together.
Gary Brown

Educators Mull How to Motivate Professors to Improve Teaching - Curriculum - The Chroni... - 4 views

  • "Without an unrelenting focus on quality—on defining and measuring and ensuring the learning outcomes of students—any effort to increase college-completion rates would be a hollow effort indeed."
  • If colleges are going to provide high-quality educations to millions of additional students, they said, the institutions will need to develop measures of student learning that can assure parents, employers, and taxpayers that no one's time and money are being wasted.
  • "Effective assessment is critical to ensure that our colleges and universities are delivering the kinds of educational experiences that we believe we actually provide for students," said Ronald A. Crutcher, president of Wheaton College, in Massachusetts, during the opening plenary. "That data is also vital to addressing the skepticism that society has about the value of a liberal education."
  • ...13 more annotations...
  • But many speakers insisted that colleges should go ahead and take drastic steps to improve the quality of their instruction, without using rigid faculty-incentive structures or the fiscal crisis as excuses for inaction.
  • Handing out "teacher of the year" awards may not do much for a college
  • W.E. Deming argued, quality has to be designed into the entire system and supported by top management (that is, every decision made by CEOs and Presidents, and support systems as well as operations) rather than being made the responsibility solely of those delivering 'at the coal face'.
  • I see a certain cluelessness among those who think one can create substantial change based on volunteerism
  • Current approaches to broaden the instructional repertoires of faculty members include faculty workshops, summer leave, and individual consultations, but these approaches work only for those relatively few faculty members who seek out opportunities to broaden their instructional methods.
  • The approach that makes sense to me is to engage faculty members at the departmental level in a discussion of the future and the implications of the future for their field, their college, their students, and themselves. You are invited to join an ongoing discussion of this issue at http://innovate-ideagora.ning.com/forum/topics/addressing-the-problem-of
  • Putting pressure on professors to improve teaching will not result in better education. The primary reason is that they do not know how to make real improvements. The problem is that in many fields of education there is either not enough research, or they do not have good ways of evaluating the results of their teaching.
  • Then there needs to be a research based assessment that can be used by individual professors, NOT by the administration.
  • Humanities educators either have to learn enough statistics and cognitive science so they can make valid scientific comparisons of different strategies, or they have to work with cognitive scientists and statisticians
  • good teaching takes time
  • On the measurement side, about half of the assessments constructed by faculty fail to meet reasonable minimum standards for validity. (Interestingly, these failures leave the door open to a class action lawsuit. Physicians are successfully sued for failing to apply scientific findings correctly; commerce is replete with lawsuits based on measurement errors.)
  • The elephant in the corner of the room --still-- is that we refuse to measure learning outcomes and impact, especially proficiencies generalized to one's life outside the classroom.
  • until universities stop playing games to make themselves look better because they want to maintain their comfortable positions, and actually look at what they can do to improve, nothing is going to change.
  •  
    our work, our friends (Ken and Jim), and more context that shapes our strategy.
  •  
    How about using examples of highly motivational lecture and teaching techniques, like the Richard Dawkins video I presented on this forum recently? Even if teachers do not consciously try to adopt good working techniques, there is at least a strong subconscious human tendency to mimic behaviors. I think that if teachers see more effective techniques, they will automatically begin to adopt them.
Nils Peterson

Innovating the 21st-Century University: It's Time! (EDUCAUSE Review) | EDUCAUSE - 4 views

  • change is required in two vast and interwoven domains that permeate the deep structures and operating model of the university: (1) the value created for the main customers of the university (the students); and (2) the model of production for how that value is created. First we need to toss out the old industrial model of pedagogy (how learning is accomplished) and replace it with a new model called collaborative learning. Second we need an entirely new modus operandi for how the subject matter, course materials, texts, written and spoken word, and other media (the content of higher education) are created.
  • Research shows that mutual exploration, group problem solving, and collective meaning-making produce better learning outcomes and understanding overall. Brown and Adler cite a study by Richard J. Light, of the Harvard Graduate School of Education: "Light discovered that one of the strongest determinants of students' success in higher education . . . was their ability to form or participate in small study groups. Students who studied in groups, even only once a week, were more engaged in their studies, were better prepared for class, and learned significantly more than students who worked on their own."
  • Second, the web enables students to collaborate with others independent of time and geography. Finally, the web represents a new mode of production for knowledge, and that changes just about everything regarding how the "content" of college and university courses are created.
  • ...7 more annotations...
  • As Seymour Papert, one of the world's foremost experts on how technology can provide new ways to learn, put it: "The scandal of education is that every time you teach something, you deprive a [student] of the pleasure and benefit of discovery."14 Students need to integrate new information with the information they already have — to "construct" new knowledge structures and meaning.
  • Universities need an entirely new modus operandi for how the content of higher education is created. The university needs to open up, embrace collaborative knowledge production, and break down the walls that exist among institutions of higher education and between those institutions and the rest of the world. To do so, universities require deep structural changes — and soon. More than three years ago, Charles M. Vest published "Open Content and the Emerging Global Meta-University" in EDUCAUSE Review. In his concluding paragraph, Vest offered a tantalizing vision: "My view is that in the open-access movement, we are seeing the early emergence of a meta-university — a transcendent, accessible, empowering, dynamic, communally constructed framework of open materials and platforms on which much of higher education worldwide can be constructed or enhanced. The Internet and the Web will provide the communication infrastructure, and the open-access movement and its derivatives will provide much of the knowledge and information infrastructure." Vest wrote that the meta-university "will speed the propagation of high-quality education and scholarship. . . . The emerging meta-university, built on the power and ubiquity of the Web and launched by the open courseware movement, will give teachers and learners everywhere the ability to access and share teaching materials, scholarly publications, scientific works in progress, teleoperation of experiments, and worldwide collaborations, thereby achieving economic efficiencies and raising the quality of education through a noble and global endeavor."17
  • Used properly, wikis are tremendously powerful tools to collaborate and co-innovate new content. Tapscott wrote the foreword for a book called We Are Smarter Than Me (2008). The book, a best-seller, was written by Barry Libert, Jon Spector, and more than 4,000 people who contributed to the book's wiki. If a global collaboration can write a book, surely one could be used to create a university course. A professor could operate a wiki with other teachers. Or a professor could use a wiki with his or her students, thereby co-innovating course content with the students themselves. Rather than simply being the recipients of the professor's knowledge, the students co-create the knowledge on their own, which has been shown to be one of the most effective methods of learning.
  • The student might enroll in the primary college in Oregon and register to take a behavioral psychology course from Stanford University and a medieval history course from Cambridge. For these students, the collective syllabi of the world form their menu for higher education. Yet the opportunity goes beyond simply mixing and matching courses. Next-generation faculty will create a context whereby students from around the world can participate in online discussions, forums, and wikis to discover, learn, and produce knowledge as networked individuals and collectively.
  • But what about credentials? As long as the universities can grant degrees, their supremacy will never be challenged." This is myopic thinking. The value of a credential and even the prestige of a university are rooted in its effectiveness as a learning institution. If these institutions are shown to be inferior to alternative learning environments, their capacity to credential will surely diminish. How much longer will, say, a Harvard undergraduate degree, taught mostly through lectures by teaching assistants in large classes, be able to compete in status with the small class size of liberal arts colleges or the superior delivery systems that harness the new models of learning?
  • As part of this, the academic journal should be disintermediated and the textbook industry eliminated. In fact, the word textbook is an oxymoron today. Content should be multimedia — not just text. Content should be networked and hyperlinked bits — not atoms. Moreover, interactive courseware — not separate "books" — should be used to present this content to students, constituting a platform for every subject, across disciplines, among institutions, and around the world. The textbook industry will never reinvent itself, however, since legacy cultures and business models die hard. It will be up to scholars and students to do this collectively.
  • Ultimately, we will need more objective measures centered on students' learning performance.
Nils Peterson

Views: Changing the Equation - Inside Higher Ed - 1 views

  • But each year, after some gnashing of teeth, we opted to set tuition and institutional aid at levels that would maximize our net tuition revenue. Why? We were following conventional wisdom that said that investing more resources translates into higher quality and higher quality attracts more resources
  • ...19 more annotations...
  • those who control influential rating systems of the sort published by U.S. News & World Report -- define academic quality as small classes taught by distinguished faculty, grand campuses with impressive libraries and laboratories, and bright students heavily recruited. Since all of these indicators of quality are costly, my college’s pursuit of quality, like that of so many others, led us to seek more revenue to spend on quality improvements. And the strategy worked.
  • Based on those concerns, and informed by the literature on the “teaching to learning” paradigm shift, we began to change our focus from what we were teaching to what and how our students were learning.
  • No one wants to cut costs if their reputation for quality will suffer, yet no one wants to fall off the cliff.
  • When quality is defined by those things that require substantial resources, efforts to reduce costs are doomed to failure
  • some of the best thinkers in higher education have urged us to define the quality in terms of student outcomes.
  • Faculty said they wanted to move away from giving lectures and then having students parrot the information back to them on tests. They said they were tired of complaining that students couldn’t write well or think critically, but not having the time to address those problems because there was so much material to cover. And they were concerned when they read that employers had reported in national surveys that, while graduates knew a lot about the subjects they studied, they didn’t know how to apply what they had learned to practical problems or work in teams or with people from different racial and ethnic backgrounds.
  • Our applications have doubled over the last decade and now, for the first time in our 134-year history, we receive the majority of our applications from out-of-state students.
  • We established what we call college-wide learning goals that focus on "essential" skills and attributes that are critical for success in our increasingly complex world. These include critical and analytical thinking, creativity, writing and other communication skills, leadership, collaboration and teamwork, and global consciousness, social responsibility and ethical awareness.
  • despite claims to the contrary, many of the factors that drive up costs add little value. Research conducted by Dennis Jones and Jane Wellman found that “there is no consistent relationship between spending and performance, whether that is measured by spending against degree production, measures of student engagement, evidence of high impact practices, students’ satisfaction with their education, or future earnings.” Indeed, they concluded that “the absolute level of resources is less important than the way those resources are used.”
  • After more than a year, the group had developed what we now describe as a low-residency, project- and competency-based program. Here students don’t take courses or earn grades. The requirements for the degree are for students to complete a series of projects, captured in an electronic portfolio,
  • students must acquire and apply specific competencies
  • Faculty spend their time coaching students, providing them with feedback on their projects and running two-day residencies that bring students to campus periodically to learn through intensive face-to-face interaction
  • After a year and a half, the evidence suggests that students are learning as much as, if not more than, those enrolled in our traditional business program
  • As the campus learns more about the demonstration project, other faculty are expressing interest in applying its design principles to courses and degree programs in their fields. They created a Learning Coalition as a forum to explore different ways to capitalize on the potential of the learning paradigm.
  • a problem-based general education curriculum
  • At the very least, finding innovative ways to lower costs without compromising student learning is wise competitive positioning for an uncertain future
  • the focus of student evaluations has changed noticeably. Instead of focusing almost 100% on the instructor and whether he/she was good, bad, or indifferent, our students' evaluations are now focusing on the students themselves - as to what they learned, how much they have learned, and how much fun they had learning.
    • Nils Peterson
       
      Gary diigoed this article. This comment shines another light: the focus of the course eval shifted from the faculty member to the course and student learning when the focus shifted from teaching to learning.
  •  
    A must-read spotted by Jane Sherman -- I've highlighted, as usual, much of it.
Theron DesRosier

The scientist and blogging - 1 views

  •  
    Some suggestions for scientists about blogging. "So what should you put in your blog? (1) Talk about your research. What have you done in the past? What are you working on at the moment? There is some controversy as to how transparent you should be when talking about your research (OMG, someone is going to steal my idea if I write it down! No wait, if everyone knows I said it first, then they can't steal it!), so it's up to you to decide how comfortable you are about sharing your research ideas. I'm old-fashioned enough that I tend towards the side that thinks we should be discreet about the details of what we're working on, but I also understand the side that wants everything to be out there. (2) Talk about other people's research. Do you agree with their results? Do you think that they missed something important? You may feel unqualified to criticize somebody else's work, but science does not advance through groupthink. Remember, part of your job as a scientist will be to review other people's papers. Now is as good a time as any to start practicing. (3) Talk about issues related to your research. Are you working on smartphones? Talk about how they're being integrated into museum visits. Working on accessibility issues? Talk about some of the problems that the handicapped encounter during their daily routine. Just make sure you choose to talk about something that interests you so that you feel motivated to write to your blog."
Gary Brown

Ranking Employees: Why Comparing Workers to Their Peers Can Often Backfire - Knowledge@... - 2 views

  • We live in a world full of benchmarks and rankings. Consumers use them to compare the latest gadgets. Parents and policy makers rely on them to assess schools and other public institutions,
  • "Many managers think that giving workers feedback about their performance relative to their peers inspires them to become more competitive -- to work harder to catch up, or excel even more. But in fact, the opposite happens," says Barankay, whose previous research and teaching has focused on personnel and labor economics. "Workers can become complacent and de-motivated. People who rank highly think, 'I am already number one, so why try harder?' And people who are far behind can become depressed about their work and give up."
  • Among the companies that use Mechanical Turk are Google, Yahoo and Zappos.com, the online shoe and clothing purveyor.
  • ...12 more annotations...
  • Nothing is more compelling than data from actual workplace settings, but getting it is usually very hard."
  • Instead, the job without the feedback attracted more workers -- 254, compared with 76 for the job with feedback.
  • "This indicates that when people are great and they know it, they tend to slack off. But when they're at the bottom, and are told they're doing terribly, they are de-motivated," says Barankay.
  • In the second stage of the experiment
  • it seems that people would rather not know how they rank compared to others, even though when we surveyed these workers after the experiment, 74% said they wanted feedback about their rank."
  • Of the workers in the control group, 66% came back for more work, compared with 42% in the treatment group. The members of the treatment group who returned were also 22% less productive than the control group. This seems to dispel the notion that giving people feedback might encourage high-performing workers to work harder to excel, and inspire low-ranked workers to make more of an effort.
  • The aim was to determine whether giving people feedback affected their desire to do more work, as well as the quantity and quality of their work.
  • top performers move on to new challenges and low performers have no viable options elsewhere.
  • feedback about rank is detrimental to performance,"
  • it is well documented that tournaments, where rankings are tied to prizes, bonuses and promotions, do inspire higher productivity and performance.
  • "In workplaces where rankings and relative performance is very transparent, even without the intervention of management ... it may be better to attach financial incentives to rankings, as interpersonal comparisons without prizes may lead to lower effort," Barankay suggests. "In those office environments where people may not be able to assess and compare the performance of others, it may not be useful to just post a ranking without attaching prizes."
  • "The key is to devote more time to thinking about whether to give feedback, and how each individual will respond to it. If, as the employer, you think a worker will respond positively to a ranking and feel inspired to work harder, then by all means do it. But it's imperative to think about it on an individual level."
  •  
    the conflation of feedback with ranking confounds this. What is not done and needs to be done is to compare the motivational impact of providing constructive feedback. Presumably the study uses ranking in a strictly comparative context as well, and we do not see the influence of feedback relative to an absolute scale. Still, much in this piece to ponder....
Nils Peterson

Edge 313 - 1 views

  • So what's the point? It's a culture. Call it the algorithmic culture. To get it, you need to be part of it, you need to come out of it. Otherwise, you spend the rest of your life dancing to the tune of other people's code. Just look at Europe where the idea of competition in the Internet space appears to focus on litigation, legislation, regulation, and criminalization.
    • Nils Peterson
       
      US vs Euro thinking about the Internet
  • TIME TO START TAKING THE INTERNET SERIOUSLY 1.  No moment in technology history has ever been more exciting or dangerous than now. The Internet is like a new computer running a flashy, exciting demo. We have been entranced by this demo for fifteen years. But now it is time to get to work, and make the Internet do what we want it to.
  • Wherever computers exist, nearly everyone who writes uses a word processor. The word processor is one of history's most successful inventions. Most people call it not just useful but indispensable. Granted that the word processor is indeed indispensable, what good has it done? We say we can't do without it; but if we had to give it up, what difference would it make? Have word processors improved the quality of modern writing? What has the indispensable word processor accomplished? 4. It has increased not the quality but the quantity of our writing — "our" meaning society's as a whole. The Internet for its part has increased not the quality but the quantity of the information we see. Increasing quantity is easier than improving quality. Instead of letting the Internet solve the easy problems, it's time we got it to solve the important ones.
  • ...10 more annotations...
  • Modern search engines combine the functions of libraries and business directories on a global scale, in a flash: a lightning bolt of brilliant engineering. These search engines are indispensable — just like word processors. But they solve an easy problem. It has always been harder to find the right person than the right fact. Human experience and expertise are the most valuable resources on the Internet — if we could find them. Using a search engine to find (or be found by) the right person is a harder, more subtle problem than ordinary Internet search.
  • Will you store your personal information on your own personal machines, or on nameless servers far away in the Cloud, or both? Answer: in the Cloud. The Cloud (or the Internet Operating System, IOS — "Cloud 1.0") will take charge of your personal machines. It will move the information you need at any given moment onto your own cellphone, laptop, pad, pod — but will always keep charge of the master copy. When you make changes to any document, the changes will be reflected immediately in the Cloud. Many parts of this service are available already.
  • The Internet will never create a new economy based on voluntary instead of paid work — but it can help create the best economy in history, where new markets (a free market in education, for example) change the world. Good news! — the Net will destroy the university as we know it (except for a few unusually prestigious or beautiful campuses).
  • In short: it's time to think about the Internet instead of just letting it happen.
  • The traditional web site is static, but the Internet specializes in flowing, changing information. The "velocity of information" is important — not just the facts but their rate and direction of flow. Today's typical website is like a stained glass window, many small panels leaded together. There is no good way to change stained glass, and no one expects it to change. So it's not surprising that the Internet is now being overtaken by a different kind of cyberstructure. 14. The structure called a cyberstream or lifestream is better suited to the Internet than a conventional website because it shows information-in-motion, a rushing flow of fresh information instead of a stagnant pool.
    • Nils Peterson
       
      jayme will like this for her timeline portfolios
  • There is no clear way to blend two standard websites together, but it's obvious how to blend two streams. You simply shuffle them together like two decks of cards, maintaining time-order — putting the earlier document first. Blending is important because we must be able to add and subtract in the Cybersphere. We add streams together by blending them. Because it's easy to blend any group of streams, it's easy to integrate stream-structured sites so we can treat the group as a unit, not as many separate points of activity; and integration is important to solving the information overload problem. We subtract streams by searching or focusing. Searching a stream for "snow" means that I subtract every stream-element that doesn't deal with snow. Subtracting the "not snow" stream from the mainstream yields a "snow" stream. Blending streams and searching them are the addition and subtraction of the new Cybersphere.
    • Nils Peterson
       
      is Yahoo Pipes a precursor? Theron sent me an email, subject: "let me pipe that for you"
    • Nils Peterson
       
      Google Buzz might also be a version of this. It brings together items from your (multiple) public streams. (A rough code sketch of blending and searching streams follows below.)
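      The blending and subtracting described above map onto familiar operations: merging time-ordered sequences and filtering them. Below is a minimal, hypothetical Python sketch (the StreamItem fields and the blend/focus names are assumptions for illustration, not from the article) of how two or more lifestreams might be blended in time order and then "subtracted" down to a focus term such as "snow".

      # Sketch only: assumes each stream is already sorted by timestamp.
      from dataclasses import dataclass
      from heapq import merge
      from typing import Iterable, Iterator

      @dataclass
      class StreamItem:
          timestamp: float  # seconds since epoch (assumed field)
          text: str         # item content (assumed field)

      def blend(*streams: Iterable[StreamItem]) -> Iterator[StreamItem]:
          """Blend any number of time-ordered streams into one, earliest items first."""
          return merge(*streams, key=lambda item: item.timestamp)

      def focus(stream: Iterable[StreamItem], term: str) -> Iterator[StreamItem]:
          """'Subtract' every stream element that doesn't mention the term."""
          return (item for item in stream if term.lower() in item.text.lower())

      # Usage: treat a group of feeds as one unit, then narrow it.
      # snow_stream = focus(blend(feed_a, feed_b, feed_c), "snow")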
  • Internet culture is a culture of nowness. The Internet tells you what your friends are doing and the world news now, the state of the shops and markets and weather now, public opinion, trends and fashions now. The Internet connects each of us to countless sites right now — to many different places at one moment in time.
  • Once we understand the inherent bias in an instrument, we can correct it. The Internet has a large bias in favor of now. Using lifestreams (which arrange information in time instead of space), historians can assemble, argue about and gradually refine timelines of historical fact. Such timelines are not history, but they are the raw material of history.
  • Before long, all personal, familial and institutional histories will take visible form in streams.   A lifestream is tangible time:  as life flashes past on waterskis across time's ocean, a lifestream is the wake left in its trail. Dew crystallizes out of the air along cool surfaces; streams crystallize out of the Cybersphere along veins of time. As streams begin to trickle and then rush through the spring thaw in the Cybersphere, our obsession with "nowness" will recede
    • Nils Peterson
       
      Barrett has been using 'lifestream'. This guy claims to have coined it long ago... In any event, it is a very different picture of portfolio -- more like "not your father's" than like AAEEBL.
  • The Internet today is, after all, a machine for reinforcing our prejudices. The wider the selection of information, the more finicky we can be about choosing just what we like and ignoring the rest. On the Net we have the satisfaction of reading only opinions we already agree with, only facts (or alleged facts) we already know. You might read ten stories about ten different topics in a traditional newspaper; on the net, many people spend that same amount of time reading ten stories about the same topic. But again, once we understand the inherent bias in an instrument, we can correct it. One of the hardest, most fascinating problems of this cyber-century is how to add "drift" to the net, so that your view sometimes wanders (as your mind wanders when you're tired) into places you hadn't planned to go. Touching the machine brings the original topic back. We need help overcoming rationality sometimes, and allowing our thoughts to wander and metamorphose as they do in sleep.
Gary Brown

Change Management 101: A Primer - 1 views

shared by Gary Brown on 13 Jan 10
  • To recapitulate, there are at least four basic definitions of change management:
      1. The task of managing change (from a reactive or a proactive posture)
      2. An area of professional practice (with considerable variation in competency and skill levels among practitioners)
      3. A body of knowledge (consisting of models, methods, techniques, and other tools)
      4. A control mechanism (consisting of requirements, standards, processes and procedures).
  • the problems found in organizations, especially the change problems, have both a content and a process dimension.
  • The process of change has been characterized as having three basic stages: unfreezing, changing, and re-freezing. This view draws heavily on Kurt Lewin’s adoption of the systems concept of homeostasis or dynamic stability.
  • ...10 more annotations...
  • The Change Process as Problem Solving and Problem Finding
  • What is not useful about this framework is that it does not allow for change efforts that begin with the organization in extremis
  • What is useful about this framework is that it gives rise to thinking about a staged approach to changing things.
  • Change as a “How” Problem
  • Change as a “What” Problem
  • Change as a “Why” Problem
  • The Approach taken to Change Management Mirrors Management's Mindset
  • People in core units, buffered as they are from environmental turbulence and with a history of relying on adherence to standardized procedures, typically focus on “how” questions.
  • To summarize: Problems may be formulated in terms of “how,” “what” and “why” questions. Which formulation is used depends on where in the organization the person posing the question or formulating the problem is situated, and where the organization is situated in its own life cycle. “How” questions tend to cluster in core units. “What” questions tend to cluster in buffer units. People in perimeter units tend to ask “what” and “how” questions. “Why” questions are typically the responsibility of top management.
  • One More Time: How do you manage change? The honest answer is that you manage it pretty much the same way you’d manage anything else of a turbulent, messy, chaotic nature, that is, you don’t really manage it, you grapple with it. It’s more a matter of leadership ability than management skill.
    • The first thing to do is jump in. You can’t do anything about it from the outside.
    • A clear sense of mission or purpose is essential. The simpler the mission statement the better. “Kick ass in the marketplace” is a whole lot more meaningful than “Respond to market needs with a range of products and services that have been carefully designed and developed to compare so favorably in our customers’ eyes with the products and services offered by our competitors that the majority of buying decisions will be made in our favor.”
    • Build a team. “Lone wolves” have their uses, but managing change isn’t one of them. On the other hand, the right kind of lone wolf makes an excellent temporary team leader.
    • Maintain a flat organizational team structure and rely on minimal and informal reporting requirements.
    • Pick people with relevant skills and high energy levels. You’ll need both.
    • Toss out the rulebook. Change, by definition, calls for a configured response, not adherence to prefigured routines.
    • Shift to an action-feedback model. Plan and act in short intervals. Do your analysis on the fly. No lengthy up-front studies, please. Remember the hare and the tortoise.
    • Set flexible priorities. You must have the ability to drop what you’re doing and tend to something more important.
    • Treat everything as a temporary measure. Don’t “lock in” until the last minute, and then insist on the right to change your mind.
    • Ask for volunteers. You’ll be surprised at who shows up. You’ll be pleasantly surprised by what they can do.
    • Find a good “straw boss” or team leader and stay out of his or her way.
    • Give the team members whatever they ask for — except authority. They’ll generally ask only for what they really need in the way of resources. If they start asking for authority, that’s a signal they’re headed toward some kind of power-based confrontation and that spells trouble. Nip it in the bud!
    • Concentrate dispersed knowledge. Start and maintain an issues logbook. Let anyone go anywhere and talk to anyone about anything. Keep the communications barriers low, widely spaced, and easily hurdled.
    • Initially, if things look chaotic, relax — they are. Remember, the task of change management is to bring order to a messy situation, not pretend that it’s already well organized and disciplined.
  •  
    Note the "why" challenge and the role of leadership
Theron DesRosier

We Are Media » About Project Background - 0 views

  • The We Are Media Project is a community of people from nonprofits who are interested in learning and teaching about how social media strategies and tools can enable nonprofit organizations to create, compile, and distribute their stories and change the world. Curated by NTEN, the community will work in a networked way to help identify the best existing resources, people, and case studies that will give nonprofit organizations the knowledge and resources they need to be the media. The community will help identify and point to the best how-to guides and useful resources that cover all aspects of creating, aggregating, and distributing social media. The resulting curriculum which will live on this wiki and will also cover important organizational adoption issues, strategy, ROI analysis, as well as the tools.
  •  
    Thanks Stephen, great bookmark. We are thinking about Change.gov right now. Wondering how we make it less broadcast and more 2.0.
S Spaeth

Google Gadget Ventures - 1 views

  •  
    Google Gadget Ventures is a new Google pilot program dedicated to helping developers create richer, more useful gadgets. Inspired by the success of iGoogle, which has been driven by the creation by 3rd-party developers of a broad range of gadgets, Gadget Ventures provides two types of funding:
    • Grants of $5,000 to those who've built gadgets we'd like to see developed further. You're eligible to apply for a grant if you've developed a gadget that's in our gadgets directory and gets at least 250,000 weekly page views. To apply, you must submit a one-page proposal detailing how you'd use the grant to improve your gadget.
    • Seed investments of $100,000 to developers who'd like to build a business around the gadgets platform. Only Google Gadget Venture grant recipients are eligible for this type of funding. Submitting a business plan detailing how you plan to build a viable business around the gadgets platform is a required part of the seed investment application process.
    It's our hope that Google Gadget Ventures will give developers the opportunity to create a new generation of gadgets to benefit users.
    ----------
    Consider this form of authentic assessment and the metrics they apply.
Nils Peterson

Half an Hour: Open Source Assessment - 0 views

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson
       
      I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion the gulf between different models of assessment became apparent.Designers of learning resources, for example, have only the vaguest of indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson
       
      assumes a test as the form of assessment, rather than something more open ended.
  • ...8 more annotations...
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson
       
      See our "Where are you on the spectrum?": assessment is locked vs. open.
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes)
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives. Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them. Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say. Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less define community, a person may achieve a composite score called a 'reputation'. This will vary from community to community.
  •  
    Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
Joshua Yeidel

News: 'You Can't Measure What We Teach' - Inside Higher Ed - 0 views

  •  
    "Despite those diverging starting points, the discussion revealed quite a bit more common ground than any of the panelists probably would have predicted. Let's be clear: Where they ended up was hardly a breakthrough on the scale of solving the Middle East puzzle. But there was general agreement among them that: * Any effort to try to measure learning in the humanities through what McCulloch-Lovell deemed "[Margaret] Spellings-type assessment" -- defined as tests or other types of measures that could be easily compared across colleges and neatly sum up many of the learning outcomes one would seek in humanities students -- was doomed to fail, and should. * It might be possible, and could be valuable, for humanists to reach broad agreement on the skills, abilities, and knowledge they might seek to instill in their students, and that agreement on those goals might be a starting point for identifying effective ways to measure how well students have mastered those outcomes. * It is incumbent on humanities professors and academics generally to decide for themselves how to assess whether their students are learning, less to satisfy external calls for accountability than because it is the right thing for academics, as professionals who care about their students, to do. "
  •  
    Assessment meeting at the accreditors -- driven by expectations of a demand for accountability, with not one mention of improvement.
Ashley Ater Kranov

Why Liberal Arts Need Career Services - Commentary - The Chronicle of Higher Education - 1 views

  •  
    Quote: "In doing so, my students move from superficial to elegant observations about their majors. English majors, who previously said they read literature and wrote papers, come to understand that an English major is also about perspective, and is simultaneously classical and progressive. History majors, who initially discussed reading and research skills, discovered that a prerequisite to the major is being "audaciously curious" and on a search for "truth," despite its elusive nature. They ponder how different the nightly news would be if newsrooms were fully staffed with history majors instead of communication majors. Most important, my students consistently tell me it's the first time they've ever focused on their education-what they've learned and how their majors have influenced their mind-sets, perceptions, and ways of thinking. Once they've had that epiphany, it's amazing how simple it is to teach them to articulate their knowledge to an employer or graduate-school admissions officer."
Gary Brown

Views: The White Noise of Accountability - Inside Higher Ed - 2 views

  • We don’t really know what we are saying
  • “In education, accountability usually means holding colleges accountable for the learning outcomes produced.” One hopes Burck Smith, whose paper containing this sentence was delivered at an American Enterprise Institute conference last November, held a firm tongue-in-cheek with the core phrase.
  • Our adventure through these questions is designed as a prodding to all who use the term to tell us what they are talking about before they otherwise simply echo the white noise.
  • ...20 more annotations...
  • when our students attend three or four schools, the subject of these sentences is considerably weakened in terms of what happens to those students.
  • Who or what is one accountable to?
  • For what?
  • Why that particular “what” -- and not another “what”?
  • To what extent is the relationship reciprocal? Are there rewards and/or sanctions inherent in the relationship? How continuous is the relationship?
  • In the Socratic moral universe, one is simultaneously witness and judge. The Greek syneidesis (“conscience” and “consciousness”) means to know something with, so to know oneself with oneself becomes an obligation of institutions and systems -- to themselves.
  • Obligation becomes self-reflexive.
  • There are no external authorities here. We offer, we accept, we provide evidence, we judge. There is nothing wrong with this: it is indispensable, reflective self-knowledge. And provided we judge without excuses, we hold to this Socratic moral framework. As Peter Ewell has noted, the information produced under this rubric, particularly in the matter of student learning, is “part of our accountability to ourselves.”
  • But is this “accountability” as the rhetoric of higher education uses the white noise -- or something else?
  • in response to shrill calls for “accountability,” U.S. higher education has placed all its eggs in the Socratic basket, but in a way that leaves the basket half-empty. It functions as the witness, providing enormous amounts of information, but does not judge that information.
  • Every single “best practice” cited by Aldeman and Carey is subject to measurement: labor market histories of graduates, ratios of resource commitment to various student outcomes, proportion of students in learning communities or taking capstone courses, publicly-posted NSSE results, undergraduate research participation, space utilization rates, licensing income, faculty patents, volume of non-institutional visitors to art exhibits, etc. etc. There’s nothing wrong with any of these, but they all wind up as measurements, each at a different concentric circle of putatively engaged acceptees of a unilateral contract to provide evidence. By the time one plows through Aldeman and Carey’s banquet, one is measuring everything that moves -- and even some things that don’t.
  • Sorry, but basic capacity facts mean that consumers cannot vote with their feet in higher education.
  • If we glossed the Socratic notion on provision-of-information, the purpose is self-improvement, not comparison. The market approach to accountability implicitly seeks to beat Socrates by holding that I cannot serve as both witness and judge of my own actions unless the behavior of others is also on the table. The self shrinks: others define the reference points. “Accountability” is about comparison and competition, and an institution’s obligations are only to collect and make public those metrics that allow comparison and competition. As for who judges the competition, we have a range of amorphous publics and imagined authorities.
  • There are no formal agreements here: this is not a contract, it is not a warranty, it is not a regulatory relationship. It isn’t even an issue of becoming a Socratic self-witness and judge. It is, instead, a case in which one set of parties, concentrated in places of power, asks another set of parties, diffuse and diverse, “to disclose more and more about academic results,” with the second set of parties responding in their own terms and formulations. The environment itself determines behavior.
  • Ewell is right about the rules of the information game in this environment: when the provider is the institution, it will shape information “to look as good as possible, regardless of the underlying performance.”
  • U.S. News & World Report’s rankings
  • The messengers become self-appointed arbiters of performance, establishing themselves as the second party to which institutions and aggregates of institutions become “accountable.” Can we honestly say that the implicit obligation of feeding these arbiters constitutes “accountability”?
  • But if the issue is student learning, there is nothing wrong with -- and a good deal to be said for -- posting public examples of comprehensive examinations, summative projects, capstone course papers, etc. within the information environment, and doing so irrespective of anyone requesting such evidence of the distribution of knowledge and skills. Yes, institutions will pick what makes them look good, but if the public products resemble AAC&U’s “Our Students’ Best Work” project, they set off peer pressure for self-improvement and very concrete disclosure. The other prominent media messengers simply don’t engage in constructive communication of this type.
  • Ironically, a “market” in the loudest voices, the flashiest media productions, and the weightiest panels of glitterati has emerged to declare judgment on institutional performance in an age when student behavior has diluted the very notion of an “institution” of higher education. The best we can say is that this environment casts nothing but fog over the specific relationships, responsibilities, and obligations that should be inherent in something we call “accountability.” Perhaps it is about time that we defined these components and their interactions with persuasive clarity. I hope that this essay will invite readers to do so.
  • Clifford Adelman is senior associate at the Institute for Higher Education Policy. The analysis and opinions expressed in this essay are those of the author, and do not necessarily represent the positions or opinions of the institute, nor should any such representation be inferred.
  •  
    Perhaps the most important piece I've read recently. Yes must be our answer to Adelman's last challenge: It is time for us to disseminate what and why we do what we do.
Gary Brown

Want Students to Take an Optional Test? Wave 25 Bucks at Them - Students - The Chronicl... - 0 views

  • cash, appears to be the single best approach for colleges trying to recruit students to volunteer for institutional assessments and other low-stakes tests with no bearing on their grades.
  • American Educational Research Association
  • A college's choice of which incentive to offer does not appear to have a significant effect on how students end up performing, but it can have a big impact on colleges' ability to round up enough students for the assessments, the study found.
  • ...6 more annotations...
  • "I cannot provide you with the magic bullet that will help you recruit your students and make sure they are performing to the maximum of their ability," Mr. Steedle acknowledged to his audience at the Denver Convention Center. But, he said, his study results make clear that some recruitment strategies are more effective than others, and also offer some notes of caution for those examining students' scores.
  • The study focused on the council's Collegiate Learning Assessment, or CLA, an open-ended test of critical thinking and writing skills which is annually administered by several hundred colleges. Most of the colleges that use the test try to recruit 100 freshmen and 100 seniors to take it, but doing so can be daunting, especially for colleges that administer it in the spring, right when the seniors are focused on wrapping up their work and graduating.
  • The incentives that spurred students the least were the opportunity to help their college as an institution assess student learning, the opportunity to compare themselves to other students, a promise they would be recognized in some college publication, and the opportunity to put participation in the test on their resume.
  • The incentives which students preferred appeared to have no significant bearing on their performance. Those who appeared most inspired by a chance to earn 25 dollars did not perform better on the CLA than those whose responses suggested they would leap at the chance to help out a professor.
  • What accounted for differences in test scores? Students' academic ability going into the test, as measured by characteristics such as their SAT scores, accounted for 34 percent of the variation in CLA scores among individual students. But motivation, independent of ability, accounted for 5 percent of the variation in test scores—a finding that, the paper says, suggests it is "sensible" for colleges to be concerned that students with low motivation are not posting scores that can allow valid comparisons with other students or valid assessments of their individual strengths and weaknesses.
  • A major limitation of the study was that Mr. Steedle had no way of knowing how the students who took the test were recruited. "If many of them were recruited using cash and prizes, it would not be surprising if these students reported cash and prizes as the most preferable incentives," his paper concedes.
  •  
    Since it is not clear whether the incentives offered in this study influenced students' decisions to participate, it remains similarly unclear whether incentives to participate correlate with performance.
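The 34 percent and 5 percent figures quoted above are increments of explained variance (R-squared) from regression models. A minimal sketch of that kind of decomposition follows, using simulated placeholder data rather than the study's data, so the printed percentages will differ from Steedle's.

```python
# Sketch of incremental R-squared: how much variation in test scores is
# explained by ability alone, and how much more is added by motivation.
# The data are simulated stand-ins, not the CLA study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
ability = rng.normal(size=n)       # stand-in for SAT/ACT scores
motivation = rng.normal(size=n)    # stand-in for self-reported motivation
scores = 0.6 * ability + 0.25 * motivation + rng.normal(scale=0.8, size=n)

def r_squared(predictors, y):
    """R^2 from an ordinary least-squares fit of y on the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_ability = r_squared([ability], scores)
r2_both = r_squared([ability, motivation], scores)
print(f"ability alone:       {r2_ability:.0%} of the variation")
print(f"added by motivation: {r2_both - r2_ability:.0%}")
```

The increment for motivation, over and above ability, is what supports the paper's concern that low-motivation students can depress scores independently of what they actually know.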
Joshua Yeidel

Jim Dudley on Letting Go of Rigid Adherence to What Evaluation Should Look Like | AEA365 - 1 views

  •  
    "Recently, in working with a board of directors of a grassroots organization, I was reminded of how important it is to "let go" of rigid adherence to typologies and other traditional notions of what an evaluation should look like. For example, I completed an evaluation that incorporated elements of all of the stages of program development - a needs assessment (e.g., how much do board members know about their programs and budget), a process evaluation (e.g., how well do the board members communicate with each other when they meet), and an outcome evaluation (e.g., how effective is their marketing plan for recruiting children and families for its programs)."
  •  
    Needs evaluation, process evaluation, outcomes evaluation -- all useful for improvement.
Gary Brown

News: Turning Surveys Into Reforms - Inside Higher Ed - 0 views

  • Molly Corbett Broad, president of the American Council on Education, warned those gathered here that they would be foolish to think that accountability demands were a thing of the past.
  • She said that while she is “impressed” with the work of NSSE, she thinks higher education is “not moving fast enough” right now to have in place accountability systems that truly answer the questions being asked of higher education. The best bet for higher education, she said, is to more fully embrace various voluntary systems, and show that they are used to promote improvements.
  • One reason NSSE data are not used more, some here said, was the decentralized nature of American higher education. David Paris, executive director of the New Leadership Alliance for Student Learning and Accountability, said that “every faculty member is king or queen in his or her classroom.” As such, he said, “they can take the lessons of NSSE” about the kinds of activities that engage students, but they don’t have to. “There is no authority or dominant professional culture that could impel any faculty member to apply” what NSSE teaches about engaged learning, he said.
  • ...4 more annotations...
  • She stressed that NSSE averages may no longer reflect any single reality of one type of faculty member. She challenged Paris’s description of powerful faculty members by noting that many adjuncts have relatively little control over their pedagogy, and must follow syllabuses and rules set by others. So the power to execute NSSE ideas, she said, may not rest with those doing most of the teaching.
  • Research presented here, however, by the Wabash College National Study of Liberal Arts Education offered concrete evidence of direct correlations between NSSE attributes and specific skills, such as critical thinking skills. The Wabash study, which involves 49 colleges of all types, features cohorts of students being analyzed on various NSSE benchmarks (for academic challenge, for instance, or supportive campus environment or faculty-student interaction) and various measures of learning, such as tests to show critical thinking skills or cognitive skills or the development of leadership skills.
  • The irony of the Wabash work with NSSE data and other data, Blaich said, was that it demonstrates the failure of colleges to act on information they get -- unless someone (in this case Wabash) drives home the ideas. “In every case, after collecting loads of information, we have yet to find a single thing that institutions didn’t already know. Everyone at the institution didn’t know -- it may have been filed away,” he said, but someone had the data. “It just wasn’t followed. There wasn’t sufficient organizational energy to use that data to improve student learning.”
  • “I want to try to make the point that there is a distinction between participating in NSSE and using NSSE," he said. "In the end, what good is it if all you get is a report?"
  •  
    An interesting discussion, exploring basic questions CTLT folks are familiar with: how to use survey data, and how to identify and address its limitations. Ten years after the launch of the National Survey of Student Engagement, many worry that colleges have been speedier to embrace giving the questionnaire than using its results, and some experts want changes in what the survey measures.

    I note these limitations, near the end of the article: Adrianna Kezar, associate professor of higher education at the University of Southern California, noted that NSSE's questions were drafted based on the model of students attending a single residential college. Indeed, many of the questions concern out-of-class experiences (both academic and otherwise) that suggest someone is living in a college community. Kezar noted that this is no longer a valid assumption for many undergraduates. Nor is the assumption that they have time to interact with peers and professors out of class, when many are holding down jobs. Nor is the assumption -- when students are "swirling" from college to college, or taking courses at multiple colleges at the same time -- that any single institution is responsible for their engagement.

    Further, Kezar noted an implicit assumption in NSSE that faculty are part of a stable college community. Questions about seeing faculty members outside of class, she said, don't necessarily work when adjunct faculty members may lack offices or the ability to interact with students from one semester to the next. Kezar said she thinks full-time adjunct faculty members may actually encourage more engagement than tenured professors, because adjuncts are focused on teaching and generally not on research. And she emphasized that concerns about the impact of part-time adjuncts on student engagement arise not out of criticism of those individuals, but of the system that assigns them teaching duties without much support.
  •  
    Repeat of highlighted resource, but merits revisiting.
Theron DesRosier

HOW TO 2008: How To Do Almost Anything With Social Media - 0 views

  •  
    HOW TO 2008: How To Do Almost Anything With Social Media
Gary Brown

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • Others say those who take the test have little motivation to do well, which makes it tough to draw conclusions from their performance.
  • "Everything that No Child Left Behind signified during the Bush administration—we operate 180 degrees away from that," says Roger Benjamin, president of the Council for Aid to Education, which developed and promotes the CLA. "We don't want this to be a high-stakes test. We're putting a stake in the ground on classic liberal-arts issues. I'm willing to rest my oar there. These core abilities, these higher-order skills, are very important, and they're even more important in a knowledge economy where everyone needs to deal with a surplus of information." Only an essay test, like the CLA, he says, can really get at those skills.
  • "The CLA is really an authentic assessment process," says Pedro Reyes, associate vice chancellor for academic planning and assessment at the University of Texas system.
  • ...20 more annotations...
  • "The Board of Regents here saw that it would be an important test because it measures analytical ability, problem-solving ability, critical thinking, and communication. Those are the skills that you want every undergraduate to walk away with." (Other large systems that have embraced the CLA include California State University and the West Virginia system.)
  • value added
  • We began by administering a retired CLA question, a task that had to do with analyzing crime-reduction strategies,
  • performance task that mirrors the CLA
  • Mr. Ernsting and Ms. McConnell are perfectly sincere about using CLA-style tasks to improve instruction on their campuses. But at the same time, colleges have a less high-minded motive for familiarizing students with the CLA style: It just might improve their scores when it comes time to take the actual test.
  • by 2012, the CLA scores of more than 100 colleges will be posted, for all the world to see, on the "College Portrait" Web site of the Voluntary System of Accountability, an effort by more than 300 public colleges and universities to provide information about life and learning on their campuses.
  • If familiarizing students with CLA-style tasks does raise their scores, then the CLA might not be a pure, unmediated reflection of the full range of liberal-arts skills. How exactly should the public interpret the scores of colleges that do not use such training exercises?
  • Trudy W. Banta, a professor of higher education and senior adviser to the chancellor for academic planning and evaluation at Indiana University-Purdue University at Indianapolis, believes it is a serious mistake to publicly release and compare scores on the test. There is too much risk, she says, that policy makers and the public will misinterpret the numbers.
  • most colleges do not use a true longitudinal model: That is, the students who take the CLA in their first year do not take it again in their senior year. The test's value-added model is therefore based on a potentially apples-and-oranges comparison.
  • freshman test-takers' scores are assessed relative to their SAT and ACT scores, and so are senior test-takers' scores. For that reason, colleges cannot game the test by recruiting an academically weak pool of freshmen and a strong pool of seniors.
  • students do not always have much motivation to take the test seriously
  • seniors, who are typically recruited to take the CLA toward the end of their final semester, when they can already taste the graduation champagne.
  • Of the few dozen universities that had already chosen to publish CLA data on that site, roughly a quarter of the reports appeared to include erroneous descriptions of the year-to-year value-added scores.
  • It is clear that CLA scores do reflect some broad properties of a college education.
  • Students' CLA scores improved if they took courses that required a substantial amount of reading and writing. Many students didn't take such courses, and their CLA scores tended to stay flat.
  • Colleges that make demands on students can actually develop their skills on the kinds of things measured by the CLA.
  • Mr. Shavelson believes the CLA's essays and "performance tasks" offer an unusually sophisticated way of measuring what colleges do, without relying too heavily on factual knowledge from any one academic field.
  • Politicians and consumers want easily interpretable scores, while colleges need subtler and more detailed data to make internal improvements.
  • The CLA is used at more than 400 colleges
  • Since its debut a decade ago, it has been widely praised as a sophisticated alternative to multiple-choice tests
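A minimal sketch of the cross-sectional "value added" comparison described in the notes above: freshman and senior CLA scores are each judged against an SAT-based expectation, and value added is the seniors' average surplus over the freshmen's. The simulated data and the simple linear expectation are assumptions for illustration only; the CLA's actual scoring model is more elaborate.

```python
# Sketch of a residual-based value-added comparison between two cohorts.
# Both groups are conditioned on SAT, which is why recruiting academically
# weak freshmen or strong seniors does not, by itself, move the result.
import numpy as np

rng = np.random.default_rng(1)

def simulate_cohort(n, boost):
    """Hypothetical cohort: SAT scores plus a test score that tracks SAT."""
    sat = rng.normal(1100, 150, n)
    score = 0.5 * sat + boost + rng.normal(0, 60, n)
    return sat, score

fr_sat, fr_score = simulate_cohort(100, boost=0)    # entering freshmen
sr_sat, sr_score = simulate_cohort(100, boost=80)   # graduating seniors

# One common SAT -> score expectation, fit on the freshman group.
slope, intercept = np.polyfit(fr_sat, fr_score, 1)

def mean_gap(sat, score):
    """Average distance of a cohort above or below the common expectation."""
    return float(np.mean(score - (slope * sat + intercept)))

value_added = mean_gap(sr_sat, sr_score) - mean_gap(fr_sat, fr_score)
print(f"estimated value added: {value_added:.0f} points")
```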