Academic Grants Foster Waste and Antagonism - Commentary - The Chronicle of Higher Educ... - 1 views
-
We think that our work is primarily organized by institutions of higher education, or by departments, or by conferences, but in reality those have become but appendages to a huge system of distributing resources through grants.
-
It's time we looked at this system—and at its costs: unpaid, anxiety-filled hours upon hours for a single successful grant; scholarship shaped, or misshaped, according to the demands of marketlike forces and the interests of nonacademic private foundations. All to uphold a distributive system that fosters antagonistic competition and increasing inequality.
-
Every hour spent working on or worrying about grants is an hour that could be better spent on research (or family life, or civic engagement, or sleep). But every hour not spent on a grant gives a competitive edge to other applicants.
- ...5 more annotations...
-
The grant is basically an outsourcing of assessment that could, in most situations, be carried out much better by paid professional staff members.
-
Meanwhile grant-receiving institutions, like universities, become increasingly dependent on grants, to the point that faculty members and other campus voices can scarcely be heard beneath the din of administrators exhorting them to get more and more grants.
-
Colleagues whose research may be equally valuable (based on traditional criteria of academic debate) could be denied resources and livelihoods because, instead of grant writing, they favor publishing, or public engagement, or teaching.
-
Grant applications normalize a mode of scholarly writing and thought that, whatever its merits, has not been chosen collectively by academe in the interests of good scholarship, but has been imposed from without, with the grant as its guide. And as application procedures grow more stringent, the quality of successful projects is likely to sink. Can we honestly expect good scholarship from scholars who must constantly concentrate on something other than their scholarship? Academic life is increasingly made up of a series of applications, while the applied-for work dwindles toward insignificance.
-
It's time, I think, to put an end to our rationalizations. My spine will not be straightened. The agony will not be wiped off my brain. My mind misshapen will not be pounded back, and I have to stop telling myself that everything will be OK. Months and years of my life have been taken away, and nothing short of systemic transformation will redeem them.
Empowerment Evaluation - 1 views
-
Empowerment evaluation provides a method for gathering, analyzing, and sharing data about a program and its outcomes and encourages faculty, students, and support personnel to actively participate in system changes.
-
It assumes that the more closely stakeholders are involved in reflecting on evaluation findings, the more likely they are to take ownership of the results and to guide curricular decision making and reform.
- ...8 more annotations...
Views: Why Are We Assessing? - Inside Higher Ed - 1 views
-
Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination.
-
Our destination, which is what we should be focusing on, is the purpose of assessment. Over the last decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.
-
The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education
- ...7 more annotations...
-
Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor.
-
And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need.
-
Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.
-
And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.
-
But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.
-
Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally-informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.
-
Yes, this article resonated with me too. Especially connecting assessment to teaching and learning. The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education.... today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results, and they don't know what to do to improve them, I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning. I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus. I wonder how many of the graduate programs they attended include the study and practice of contemporary research on effective higher education pedagogies. No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together.
Does testing for statistical significance encourage or discourage thoughtful ... - 1 views
-
Does testing for statistical significance encourage or discourage thoughtful data analysis? Posted by Patricia Rogers on October 20th, 2010
-
Epidemiology, 9(3): 333–337), which argues not only for thoughtful interpretation of findings, but for not reporting statistical significance at all.
-
We also would like to see the interpretation of a study based not on statistical significance, or lack of it, for one or more study variables, but rather on careful quantitative consideration of the data in light of competing explanations for the findings.
- ...6 more annotations...
-
we prefer a researcher to consider whether the magnitude of an estimated effect could be readily explained by uncontrolled confounding or selection biases, rather than simply to offer the uninspired interpretation that the estimated effect is significant, as if neither chance nor bias could then account for the findings.
-
Even worse, those two values often signal just the wrong interpretation. These misleading signals occur when a trivial effect is found to be ’significant’, as often happens in large studies, or when a strong relation is found ’nonsignificant’, as often happens in small studies.
-
Another useful paper on this issue is Kristin Sainani (2010), “Misleading Comparisons: The Fallacy of Comparing Statistical Significance,” Physical Medicine and Rehabilitation, Vol. 2 (June), 559–562, which discusses the need to look carefully at within-group differences as well as between-group differences, and at sub-group significance compared to interaction. She concludes: “Readers should have a particularly high index of suspicion for controlled studies that fail to report between-group comparisons, because these likely represent attempts to ‘spin’ null results.”
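Both pitfalls described in these annotations are easy to see numerically. Below is a minimal Python sketch (simulated data with made-up effect sizes and sample sizes, assuming only numpy and scipy; it is not drawn from the cited papers) showing how a trivial effect can test as "significant" in a large study while a strong effect tests as "nonsignificant" in a small one, and why "significant in one subgroup but not the other" is weaker evidence than a direct between-group (interaction) comparison.

```python
# Minimal simulation of the two pitfalls discussed above.
# All numbers are invented for illustration; requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Pitfall 1: a trivial effect becomes "significant" in a large study,
# while a strong effect stays "nonsignificant" in a small one.
large_ctrl = rng.normal(0.00, 1.0, 20000)   # control, n = 20,000
large_trt  = rng.normal(0.03, 1.0, 20000)   # tiny true effect (0.03 SD)
small_ctrl = rng.normal(0.00, 1.0, 10)      # control, n = 10
small_trt  = rng.normal(0.60, 1.0, 10)      # sizable true effect (0.6 SD)

_, p_large = stats.ttest_ind(large_ctrl, large_trt)
_, p_small = stats.ttest_ind(small_ctrl, small_trt)
print(f"large study: true effect 0.03 SD, p = {p_large:.4f}")  # usually < 0.05
print(f"small study: true effect 0.60 SD, p = {p_small:.4f}")  # often  > 0.05

# Pitfall 2 (Sainani's point): "significant in subgroup A but not in
# subgroup B" is not evidence that the subgroups differ; the honest test
# is the between-group (interaction) comparison.  Same true effect here.
a_ctrl = rng.normal(0.0, 1.0, 40); a_trt = rng.normal(0.5, 1.0, 40)
b_ctrl = rng.normal(0.0, 1.0, 25); b_trt = rng.normal(0.5, 1.0, 25)

p_a = stats.ttest_ind(a_ctrl, a_trt).pvalue    # may fall below 0.05
p_b = stats.ttest_ind(b_ctrl, b_trt).pvalue    # may not, due to smaller n

# Test whether the *difference between the two effects* differs from zero.
diff_a = a_trt.mean() - a_ctrl.mean()
diff_b = b_trt.mean() - b_ctrl.mean()
se = np.sqrt(a_trt.var(ddof=1) / 40 + a_ctrl.var(ddof=1) / 40 +
             b_trt.var(ddof=1) / 25 + b_ctrl.var(ddof=1) / 25)
z = (diff_a - diff_b) / se
p_interaction = 2 * stats.norm.sf(abs(z))
print(f"subgroup A p = {p_a:.3f}, subgroup B p = {p_b:.3f}, "
      f"interaction p = {p_interaction:.3f}")
```

The point of the sketch is the one Rogers and Sainani make: interpret effect magnitudes and direct between-group comparisons, not just whether a p-value crosses a threshold.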
Your source for university news - Some would add science literacy to six learning goals - 3 views
Cheating Scandal Snares Hundreds in U. of Central Florida Course - The Ticker - The Chr... - 1 views
-
I don’t condone cheating. But I think it is equally pathetic that faculty are put in situations where they feel the only option for an examination is an easy to grade multiple choice or true/false test
- ...3 more annotations...
-
Faculty all need to wake up, as virtually all test banks, and also all instructor’s manuals with homework answers, are widely available on the internet.
Video Chat with Education Author Alfie Kohn - 1 views
-
"The reality is that outcomes in education are determined in large part by the attitudes and goals and perspectives of the real living human beings, the learners in our classrooms,"
-
So if they regard homework as pointless, as frustrating, as unlikely to be beneficial, as something they thoroughly detest, it would be extraordinary to find research that finds an achievement effect despite the way they regard it, and in fact the research provides just what would be predicted from a non-behaviorist point of view, namely that it doesn't tend to be beneficial."
Views: Changing the Equation - Inside Higher Ed - 1 views
-
But each year, after some gnashing of teeth, we opted to set tuition and institutional aid at levels that would maximize our net tuition revenue. Why? We were following conventional wisdom that said that investing more resources translates into higher quality and higher quality attracts more resources
- ...19 more annotations...
-
those who control influential rating systems of the sort published by U.S. News & World Report -- define academic quality as small classes taught by distinguished faculty, grand campuses with impressive libraries and laboratories, and bright students heavily recruited. Since all of these indicators of quality are costly, my college’s pursuit of quality, like that of so many others, led us to seek more revenue to spend on quality improvements. And the strategy worked.
-
Based on those concerns, and informed by the literature on the “teaching to learning” paradigm shift, we began to change our focus from what we were teaching to what and how our students were learning.
-
No one wants to cut costs if their reputation for quality will suffer, yet no one wants to fall off the cliff.
-
When quality is defined by those things that require substantial resources, efforts to reduce costs are doomed to failure
-
some of the best thinkers in higher education have urged us to define quality in terms of student outcomes.
-
Faculty said they wanted to move away from giving lectures and then having students parrot the information back to them on tests. They said they were tired of complaining that students couldn’t write well or think critically, but not having the time to address those problems because there was so much material to cover. And they were concerned when they read that employers had reported in national surveys that, while graduates knew a lot about the subjects they studied, they didn’t know how to apply what they had learned to practical problems or work in teams or with people from different racial and ethnic backgrounds.
-
Our applications have doubled over the last decade and now, for the first time in our 134-year history, we receive the majority of our applications from out-of-state students.
-
We established what we call college-wide learning goals that focus on "essential" skills and attributes that are critical for success in our increasingly complex world. These include critical and analytical thinking, creativity, writing and other communication skills, leadership, collaboration and teamwork, and global consciousness, social responsibility and ethical awareness.
-
despite claims to the contrary, many of the factors that drive up costs add little value. Research conducted by Dennis Jones and Jane Wellman found that “there is no consistent relationship between spending and performance, whether that is measured by spending against degree production, measures of student engagement, evidence of high impact practices, students’ satisfaction with their education, or future earnings.” Indeed, they concluded that “the absolute level of resources is less important than the way those resources are used.”
-
After more than a year, the group had developed what we now describe as a low-residency, project- and competency-based program. Here students don’t take courses or earn grades. The requirements for the degree are for students to complete a series of projects, captured in an electronic portfolio,
-
Faculty spend their time coaching students, providing them with feedback on their projects and running two-day residencies that bring students to campus periodically to learn through intensive face-to-face interaction
-
At the very least, finding innovative ways to lower costs without compromising student learning is wise competitive positioning for an uncertain future
-
As the campus learns more about the demonstration project, other faculty are expressing interest in applying its design principles to courses and degree programs in their fields. They created a Learning Coalition as a forum to explore different ways to capitalize on the potential of the learning paradigm.
-
After a year and a half, the evidence suggests that students are learning as much as, if not more than, those enrolled in our traditional business program
-
the focus of student evaluations has changed noticeably. Instead of focusing almost 100% on the instructor and whether he/she was good, bad, or indifferent, our students' evaluations are now focusing on the students themselves - as to what they learned, how much they have learned, and how much fun they had learning.
The Atlantic Century: Benchmarking EU and U.S. Innovation and Competitiveness | The Inf... - 1 views
-
"ITIF uses 16 indicators to assess the global innovation-based competitiveness of 36 countries and 4 regions. This report finds that while the U.S. still leads the EU in innovation-based competitiveness, it ranks sixth overall. Moreover, the U.S. ranks last in progress toward the new knowledge-based innovation economy over the last decade."
The scientist and blogging - 1 views
-
Some suggestions for scientists about blogging. "So what should you put in your blog? (1) Talk about your research. What have you done in the past? What are you working on at the moment? There is some controversy as to how transparent you should be when talking about your research (OMG, someone is going to steal my idea if I write it down! No wait, if everyone knows I said it first, then they can't steal it!), so it's up to you to decide how comfortable you are about sharing your research ideas. I'm old-fashioned enough that I tend towards the side that thinks we should be discreet about the details of what we're working on, but I also understand the side that wants everything to be out there. (2) Talk about other people's research. Do you agree with their results? Do you think that they missed something important? You may feel unqualified to criticize somebody else's work, but science does not advance through groupthink. Remember, part of your job as a scientist will be to review other people's papers. Now is as good a time as any to start practicing. (3) Talk about issues related to your research. Are you working on smartphones? Talk about how they're being integrated into museum visits. Working on accessibility issues? Talk about some of the problems that the handicapped encounter during their daily routine. Just make sure you choose to talk about something that interests you so that you feel motivated to write to your blog."
AAC&U News | April 2010 | Feature - 1 views
-
First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU).
-
And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.
- ...13 more annotations...
-
outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
-
“The wonderful thing about this approach is that full-time faculty across the university are gathering data about how their students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
-
the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
-
The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project.
-
In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years. The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
-
“It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
-
There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
-
faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test.
-
“The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
-
Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.” Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
-
“We found no statistically significant correlation between the CLA scores and the portfolio scores,”
-
In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
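For readers wondering what the analyses mentioned above involve mechanically, here is a small hypothetical Python sketch (invented scores, numpy and scipy assumed; this is not the UC team's code or data). The first half checks interrater agreement of the kind used to "norm" rubric scorers; the second runs the sort of correlation test behind a finding of "no statistically significant correlation" between portfolio rubric scores and CLA-style scores.

```python
# Hypothetical sketch of rubric norming and rubric-vs-CLA correlation.
# The scores below are invented for illustration only.
import numpy as np
from scipy import stats

# --- Interrater reliability on a 4-point VALUE-style rubric ------------
rater_a = np.array([3, 2, 4, 3, 1, 2, 3, 4, 2, 3])
rater_b = np.array([3, 2, 3, 3, 1, 2, 4, 4, 2, 3])

exact_agreement = np.mean(rater_a == rater_b)

def cohens_kappa(x, y, categories=(1, 2, 3, 4)):
    """Unweighted Cohen's kappa for two raters scoring the same artifacts."""
    observed = np.mean(x == y)
    # chance agreement: product of the raters' marginal rates per category
    expected = sum(np.mean(x == c) * np.mean(y == c) for c in categories)
    return (observed - expected) / (1 - expected)

print(f"exact agreement = {exact_agreement:.2f}, "
      f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")

# --- Portfolio rubric scores vs. a CLA-style institutional score --------
portfolio = np.array([2.8, 3.1, 3.4, 2.5, 3.9, 3.0, 2.2, 3.6, 3.3, 2.9])
cla_like  = np.array([1050, 1190, 1010, 1120, 1080, 990, 1130, 1060, 1170, 1040])

r, p = stats.pearsonr(portfolio, cla_like)
print(f"r = {r:.2f}, p = {p:.3f}")   # with n = 10, even a moderate r is
                                     # usually not statistically significant
```

With samples this small, a non-significant correlation by itself says little about whether the two instruments measure the same thing, which is consistent with the article's conclusion that they are useful for different purposes.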
An Interview with Anil Dash, Director of Expert Labs | techPresident - 1 views
-
Expert Labs is a new, independent non-profit effort that's trying at its most ambitious to improve the decisions policy makers make, by giving them the tools to tap into crowdsourcing in the same way that private companies do every day. We're part of the American Association for the Advancement of Science (the folks who publish the journal Science) and we're backed by the MacArthur Foundation.
World Air Traffic Over A 24-Hour Period (VIDEO) - 1 views
As Colleges Switch to Online Course Evaluations, Students Stop Filling Them Out - The T... - 1 views
-
Colleges thought they were enhancing efficiency when they moved their course evaluations online, but an unintended consequence of the shift to evaluations not filled out in class is that students started skipping them altogether, The Boston Globe reported today.
News: No Letup From Washington - Inside Higher Ed - 1 views
-
Virtually all of the national higher education leaders who spoke to the country's largest accrediting group sent a version of the same message: The federal government is dead serious about holding colleges and universities accountable for their performance, and can be counted on to impose undesirable requirements if higher education officials don't make meaningful changes themselves.
-
"This is meant to be a wakeup call," Molly Corbett Broad, president of the American Council on Education, said in Monday's keynote address
-
I believe it’s wise for us to assume they will have little reservation about regulating higher education now that they know it is too important to fail."
- ...7 more annotations...
-
Obama administration will be tough on colleges because its officials value higher education and believe it needs to perform much better, and successfully educate many more students, to drive the American economy.
-
In her own speech to the Higher Learning Commission’s members on Sunday, Sylvia Manning, the group’s president, cited several signs that the new administration seemed willing to delve into territory that not long ago would have been viewed as off-limits to federal intrusion. Among them: A recently published “draft” of a guide to accreditation that many accrediting officials believe is overly prescriptive. A just-completed round of negotiations over proposed rules that deal with the definition of a “credit hour” and other issues that touch on academic quality -- areas that have historically been the province of colleges and their faculties. And, of special relevance for the Higher Learning Commission, a trio of critical letters from the Education Department’s inspector general challenging the association’s policies and those of two other regional accreditors on key matters -- and in North Central’s case, questioning its continued viability. With that stroke, Manning noted, the department’s newfound activism “has come to the doorstep, or into the living room, of HLC.”
-
Pressure to measure student learning -- to find out which tactics and approaches are effective, which create efficiency without lowering results -- is increasingly coming from what Broad called the Obama administration's "kitchen cabinet," foundations like the Lumina Foundation for Education (which she singled out) to which the White House and Education Department are increasingly looking for education policy help.
-
She cited an October speech in which the foundation's president, Jamie P. Merisotis, said that student learning should be recognized as the "primary measure of quality in higher education," and heralded the European Union's Bologna process as a potential path for making that so
-
we cannot lay low and hope that the glare of the spotlight will eventually fall on others," Broad told the Higher Learning Commission audience.
-
While higher ed groups have been warned repeatedly that they must act before Congress next renews the Higher Education Act -- a process that will begin in earnest in two or three years -- the reality is that politicians in Washington no longer feel obliged to hold off on major changes to higher education policy until that main law is reviewed. Congress has passed "seven major pieces of legislation" related to higher education in recent years, and "I wish I could tell you that the window is open" until the next reauthorization, Broad said. "But we cannot presume that we have the luxury of years within which to get our collective house in order. We must act quickly."
-
But where will such large-scale change come from? The regional accreditors acting together to align their standards? Groups of colleges working together to agree on a common set of learning outcomes for general education, building on the work of the American Association of Colleges and Universities? No answers here, yet.
Underground History of American Education - John Taylor Gatto - 1 views
How Colleges Could Better Prepare Students to Tackle Society's Problems - Students - Th... - 1 views
-
Employers increasingly want to hire students who are highly adaptive, who can work in a fast-paced environment, be creative and problem-solve—and these are not necessarily core skills universities focus on. Most universities focus on knowledge acquisition, but what the world requires is much more about learning how to work within a fast-changing environment and be a leader in that context.
-
We're not just bringing them into the classroom, but we're involving them in more research collaborations and conversations, so the learning students do is guided by that.
- ...3 more annotations...
-
I just heard from a faculty member at Cornell who has increased the amount of experiential learning she requires for class projects. More students are asking for it, and she's using every opportunity to get people out in the community or talking to people so they can engage in real-world experience.
-
Siloed disciplines are one of our biggest challenges. The world doesn't operate in disciplines—its problems and organizations are cross-cutting. The more interdisciplinary people can think and learn, the more equipped they will be to deal with the complexity of the real world.
Top Tags
- assessment (115)
- education (86)
- accountability (64)
- higher_education (57)
- learning (43)
- web2.0 (36)
- accreditation (35)
- google (24)
- web_2.0 (23)
- teaching (20)
- networked governance (17)
- outcomes (17)
- twitter (14)
- sharepoint (12)
- social networking (11)
- change (10)
- visualization (9)
- technology (9)
- iteams (9)
- collaboration (9)