Home/ CTLT and Friends/ Group items tagged survey

Gary Brown

News: Room for Improvement - Inside Higher Ed - 1 views

  • 302 private sector and non-profit employers who, by and large, say their employees need a broader set of skills and higher levels of knowledge than they ever have before. But, most surveyed said, colleges and universities have room for improvement in preparing students to be workers.
  • "It is time for us to match our ambitious goals for college attainment with an equally ambitious – and well-informed – understanding of what it means to be well-prepared," said Carol Geary Schneider, the association's president. "Quality has to become the centerpiece of this nation's postsecondary education."
  • Nearly across the board, employers said they expect more of their employees than they did in the past
  • ...4 more annotations...
  • Employers were largely pessimistic in their views of whether higher education is successful in preparing students for what was characterized as “today’s global economy.”
  • “All of us must focus more on what students are actually doing in college.”
  • Employers said colleges should place more emphasis on preparing students "to effectively communicate orally and in writing" (89 percent), to use "critical thinking and analytical reasoning skills" (81 percent) and to have "the ability to apply knowledge and skills to real-world settings through internships or other hands-on experiences" (79 percent). Fewer than half -- 45 percent and 40 percent, respectively -- thought colleges should do more to emphasize proficiency in a foreign language and knowledge of democratic institutions and values.
  • Eighty-four percent said that completion of "a significant project before graduation" like a capstone course or senior thesis would help a lot or a fair amount to prepare students for success, with 62 percent saying it would help "a lot." Expecting that students complete an internship or community-based field project was something that 66 percent of employers surveyed said would help a lot.
  •  
    Updating what we've heard.
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU), and the Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • ...13 more annotations...
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university are gathering data about how their students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased--there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: Of what use is the VSA?
Gary Brown

Evaluations That Make the Grade: 4 Ways to Improve Rating the Faculty - Teaching - The ... - 1 views

  • For students, the act of filling out those forms is sometimes a fleeting, half-conscious moment. But for instructors whose careers can live and die by student evaluations, getting back the forms is an hour of high anxiety
  • "They have destroyed higher education." Mr. Crumbley believes the forms lead inexorably to grade inflation and the dumbing down of the curriculum.
  • Texas enacted a law that will require every public college to post each faculty member's student-evaluation scores on a public Web site.
  • ...10 more annotations...
  • The IDEA Center, an education research group based at Kansas State University, has been spreading its particular course-evaluation gospel since 1975. The central innovation of the IDEA system is that departments can tailor their evaluation forms to emphasize whichever learning objectives are most important in their discipline.
  • (Roughly 350 colleges use the IDEA Center's system, though in some cases only a single department or academic unit participates.)
  • The new North Texas instrument that came from these efforts tries to correct for biases that are beyond an instructor's control. The questionnaire asks students, for example, whether the classroom had an appropriate size and layout for the course. If students were unhappy with the classroom, and if it appears that their unhappiness inappropriately colored their evaluations of the instructor, the system can adjust the instructor's scores accordingly.
  • Elaine Seymour, who was then director of ethnography and evaluation research at the University of Colorado at Boulder, was assisting with a National Science Foundation project to improve the quality of science instruction at the college level. She found that many instructors were reluctant to try new teaching techniques because they feared their course-evaluation ratings might decline.
  • "So the ability to do some quantitative analysis of these comments really allows you to take a more nuanced and effective look at what these students are really saying."
  • Mr. Frick and his colleagues found that his new course-evaluation form was strongly correlated with both students' and instructors' own measures of how well the students had mastered each course's learning goals.
  • The survey instrument, known as SALG, for Student Assessment of their Learning Gains, is now used by instructors across the country. The project's Web site contains more than 900 templates, mostly for courses in the sciences.
  • "Students are the inventory," Mr. Crumbley says. "The real stakeholders in higher education are employers, society, the people who hire our graduates. But what we do is ask the inventory if a professor is good or bad. At General Motors," he says, "you don't ask the cars which factory workers are good at their jobs. You check the cars for defects, you ask the drivers, and that's how you know how the workers are doing."
  • William H. Pallett, president of the IDEA Center, says that when course rating surveys are well-designed and instructors make clear that they care about them, students will answer honestly and thoughtfully.
  • In Mr. Bain's view, student evaluations should be just one of several tools colleges use to assess teaching. Peers should regularly visit one another's classrooms, he argues. And professors should develop "teaching portfolios" that demonstrate their ability to do the kinds of instruction that are most important in their particular disciplines. "It's kind of ironic that we grab onto something that seems fixed and fast and absolute, rather than something that seems a little bit messy," he says. "Making decisions about the ability of someone to cultivate someone else's learning is inherently a messy process. It can't be reduced to a formula."
  •  
    Old friends at the Idea Center, and an old but persistent issue.
Joshua Yeidel

Transparency By Design: College Choices for Adults - 0 views

  •  
    No learning outcomes here -- just engagement and satisfaction surveys.
Joshua Yeidel

Capella Learning and Career Outcomes - Program Outcomes - 0 views

  •  
    the site is called "Capellaresults", but I can't find any results data other than satisfaction surveys.
Joshua Yeidel

Social Networking on Intranets (Jakob Nielsen's Alertbox) - 0 views

  •  
    Summary: Community features are spreading from "Web 2.0" to "Enterprise 2.0." Research across 14 companies found that many are making productive use of social intranet features. Includes guidelines for implementation of "Enterprise 2.0" based on the experience of surveyed companies.
Gary Brown

An Expert Surveys the Assessment Landscape - Student Affairs - The Chronicle of Higher ... - 1 views

shared by Gary Brown on 29 Oct 09 - Cached
    • Gary Brown
       
      Illustration of a vision of assessment that separates assessment from teaching and learning.
  • If assessment is going to be required by accrediting bodies and top administrators, then we need administrative support and oversight of assessment on campus, rather than once again offloading more work onto faculty members squeezed by teaching & research inflation.
  • Outcomes assessment does not have to be in the form of standardized tests, nor does including assessment in faculty review have to translate into percentages achieving a particular score on such a test. What it does mean is that when the annual review comes along, one should be prepared to answer the question, "How do you know that what you're doing results in student learning?" We've all had the experience of realizing at times that students took in something very different from what we intended (if we were paying attention at all). So it's reasonable to be asked about how you do look at that question and how you decide when your current practice is successful or when it needs to be modified. That's simply being a reflective practitioner in the classroom which is the bare minimum students should expect from us. And that's all assessment is - answering that question, reflecting on what you find, and taking next steps to keep doing what works well and find better solutions for the things that aren't working well.
  • ...2 more annotations...
  • We need to really show HOW we use the results of assessment in the revamping of our curriculum, with real case studies. Each department should insist and be ready to demonstrate real case studies of this type of use of Assessment.
  • Socrates said "A life that is not examined is not worth living". Wonderful as this may be as a metaphor we should add to it - "and once examined - do something to improve it".
Nils Peterson

An Expert Surveys the Assessment Landscape - Student Affairs - The Chronicle of Higher ... - 2 views

  • Colleges and universities have plenty of tools, but they must learn to use them more effectively. That is how George D. Kuh describes the state of assessing what college students learn.
Theron DesRosier

An Expert Surveys the Assessment Landscape - The Chronicle of Higher Education - 2 views

  • What we want is for assessment to become a public, shared responsibility, so there should be departmental leadership.
  •  
    "What we want is for assessment to become a public, shared responsibility, so there should be departmental leadership." George Kuh director of the National Institute for Learning Outcomes Assessment.
  •  
    Kuh also says, "So we're going to spend some time looking at the impact of the Voluntary System of Accountability. It's one thing for schools to sign up, it's another to post the information and to show that they're actually doing something with it. It's not about posting a score on a Web site-it's about doing something with the data." He doesn't take the next step and ask if it is even possible for schools to actually do anything with the data collected from the CLA or ask who has access to the criteria: Students? Faculty? Anyone?
Joshua Yeidel

News: Are Today's Grads Unprofessional? - Inside Higher Ed - 0 views

  •  
    "The results of the survey [of employers], released Friday, suggest that colleges need to change how they prepare their students for the working world, particularly by reinforcing soft skills like honoring workplace etiquette and having a positive demeanor." -- Oh, yes, and get rid of the tattoos and piercings.
  •  
    Relevant to Gary's post about Career Services and Liberal Arts -- can "professionalism" be part of the curriculum?
Gary Brown

The Chimera of College Brands - Commentary - The Chronicle of Higher Education - 1 views

  • What you get from a college, by contrast, varies wildly from department to department, professor to professor, and course to course. The idea implicit in college brands—that every course reflects certain institutional values and standards—is mostly a fraud. In reality, there are both great and terrible courses at the most esteemed and at the most denigrated institutions.
  • With a grant from the nonprofit Lumina Foundation for Education, physics and history professors from a range of Utah two- and four-year institutions are applying the "tuning" methods developed as part of the sweeping Bologna Process reforms in Europe.
  • The group also created "employability maps" by surveying employers of recent physics graduates—including General Electric, Simco Electronics, and the Air Force—to find out what knowledge and skills are needed for successful science careers.
  • ...3 more annotations...
  • If a student finishes and can't do what's advertised, they'll say, 'I've been shortchanged.'
  • Kathryn MacKay, an associate professor of history at Weber State University, drew on recent work from the American Historical Association to define learning goals in historical knowledge, thinking, and skills.
  • In the immediate future, as the higher-education market continues to globalize and the allure of prestige continues to grow, the value of university brands is likely to rise. But at some point, the countervailing forces of empiricism will begin to take hold. The openness inherent to tuning and other, similar processes will make plain that college courses do not vary in quality in anything like the way that archaic, prestige- and money-driven brands imply. Once you've defined the goals, you can prove what everyone knows but few want to admit: From an educational standpoint, institutional brands are largely an illusion for which students routinely overpay.
  •  
    The argument for external stakeholders is underscored, among other implications.
Gary Brown

Colleges' Data-Collection Burdens Are Higher Than Official Estimates, GAO Finds - The T... - 0 views

  • The GAO recommended that Education officials reevaluate their official estimates of the time it takes for colleges to complete IPEDS surveys, communicate to a wider range of colleges the opportunities for training, and coordinate with education software providers to improve the quality and reliability of IPEDS reporting features.
  •  
    The "burden" of accountability mirrors in data what we encounter in spirit. It appears to take less time than universities report and, more to the parallel, a little training might be useful.
Peggy Collins

Professors use of Technology - 3 views

shared by Peggy Collins on 28 Jul 10 - Cached
  •  
    Survey from the Chronicle of university faculty in 2009.
Nils Peterson

The New Muscle: 5 Quality-of-Learning Projects That Didn't Exist 5 Years Ago - Special ... - 0 views

shared by Nils Peterson on 30 Aug 10 - Cached
  • Lumina Foundation for Education's Tuning USA. Year started: 2009. What it does: Supports statewide, faculty-led discussions, meetings, and surveys to define discipline-specific knowledge and skills that college and state officials, students, alumni, and employers can expect graduates of particular degree programs to have.
    • Nils Peterson
       
      That they lump VSA in here with the others suggests to me that the Chronicle's author doesn't distinguish the nuance.
Gary Brown

Ranking Employees: Why Comparing Workers to Their Peers Can Often Backfire - Knowledge@... - 2 views

  • We live in a world full of benchmarks and rankings. Consumers use them to compare the latest gadgets. Parents and policy makers rely on them to assess schools and other public institutions,
  • "Many managers think that giving workers feedback about their performance relative to their peers inspires them to become more competitive -- to work harder to catch up, or excel even more. But in fact, the opposite happens," says Barankay, whose previous research and teaching has focused on personnel and labor economics. "Workers can become complacent and de-motivated. People who rank highly think, 'I am already number one, so why try harder?' And people who are far behind can become depressed about their work and give up."
  • Among the companies that use Mechanical Turk are Google, Yahoo and Zappos.com, the online shoe and clothing purveyor.
  • ...12 more annotations...
  • Nothing is more compelling than data from actual workplace settings, but getting it is usually very hard."
  • Instead, the job without the feedback attracted more workers -- 254, compared with 76 for the job with feedback.
  • "This indicates that when people are great and they know it, they tend to slack off. But when they're at the bottom, and are told they're doing terribly, they are de-motivated," says Barankay.
  • In the second stage of the experiment
  • it seems that people would rather not know how they rank compared to others, even though when we surveyed these workers after the experiment, 74% said they wanted feedback about their rank."
  • Of the workers in the control group, 66% came back for more work, compared with 42% in the treatment group. The members of the treatment group who returned were also 22% less productive than the control group. This seems to dispel the notion that giving people feedback might encourage high-performing workers to work harder to excel, and inspire low-ranked workers to make more of an effort.
  • The aim was to determine whether giving people feedback affected their desire to do more work, as well as the quantity and quality of their work.
  • top performers move on to new challenges and low performers have no viable options elsewhere.
  • feedback about rank is detrimental to performance,"
  • it is well documented that tournaments, where rankings are tied to prizes, bonuses and promotions, do inspire higher productivity and performance.
  • "In workplaces where rankings and relative performance is very transparent, even without the intervention of management ... it may be better to attach financial incentives to rankings, as interpersonal comparisons without prizes may lead to lower effort," Barankay suggests. "In those office environments where people may not be able to assess and compare the performance of others, it may not be useful to just post a ranking without attaching prizes."
  • "The key is to devote more time to thinking about whether to give feedback, and how each individual will respond to it. If, as the employer, you think a worker will respond positively to a ranking and feel inspired to work harder, then by all means do it. But it's imperative to think about it on an individual level."
  •  
    The conflation of feedback with ranking confounds this. What is not done, and needs to be done, is to compare the motivational impact of providing constructive feedback. Presumably the study uses ranking in a strictly comparative context as well, so we do not see the influence of feedback relative to an absolute scale. Still, much in this piece to ponder....
Gary Brown

The MetLife Survey of the American Teacher: Collaborating for Success ~ Stephen's Web - 0 views

shared by Gary Brown on 13 Aug 10 - Cached
  • According to the study, teachers value collaboration, but do most of it outside the classroom. They believe they set high standards for students and believe core skills (mathematics and language, for example) are important. They believe all staff, rather than individual teachers, are accountable for student progress. They believe it would help a lot if students took responsibility for their own learning, but less than a third (compared to a very high percentage of students) believe students actually do.
  •  
    A majority of teachers and principals also believe that the following school- and classroom-centered factors would have a major impact on improving student achievement: Connecting classroom instruction to the real world; A school culture where students feel responsible and accountable for their own education; Addressing the individual needs of diverse learners; and Greater collaboration among teachers and school leaders.
Gary Brown

Texas A&M's Faculty Ratings: Right and Wrong - Commentary - The Chronicle of Higher Edu... - 0 views

  • "Academia is highly specialized. We don't mean to be exclusive. We are a public-serving group of people. But at the same time, that public isn't well-enough aware of what we do and who we are to evaluate us."
  • But the think tank is correct that taxpayers deserve to know how their money is being spent. Public-university operating costs in Texas have gone up more than 60 percent in the last two decades, even after adjusting for inflation, and professors are among the state's highest-paid public employees. The state needs accountability measures, and they must be enforced by a party other than the faculty, who, it could easily be charged, have a conflict of interest. That's what Texas A&M got right.
  • Moosally is right about one thing: The public isn't well aware of what she and many of her colleagues do. But they should be. That is not to say that the public will be able to understand what goes on in all of the chemistry laboratories in Texas. But Moosally teaches English at a college that is not exactly tasked with performing cutting-edge research. Houston-Downtown's mission is to provide "educational opportunities and access to students from a variety of backgrounds including many first-generation college students."
  • ...7 more annotations...
  • No doubt there is useful research coming out of the university system. But plenty could be omitted without a great deal of detriment to students' education. For instance, Hugill's most recent contributions have included a chapter on "Transitions in Hegemony: A Theory Based on State Type and Technology" and the article "German Great-Power Relations in the Pages of Simplicissimus, 1896-1914." Moosally's master's thesis was titled "Resumptive Pronouns in Modern Standard Arabic: A Head-Driven Phrase Structure Grammar Account," and her current research interests include "interactions between grammar knowledge and writing abilities/interest [and] cross-linguistic patterns of agreement."
  • Only 35 percent of respondents felt it was very important for colleges to "provide useful information to the public on issues affecting their daily lives."
  • According to a 2004 survey by The Chronicle, 71 percent of Americans thought it was very important for colleges to prepare undergraduates for careers, while only 56 percent thought it was very important for colleges to "discover more about the world through research."
  • What Texas A&M officials have also missed is that faculty members must be held accountable for what they teach.
  • Professors receive more credit for teaching higher-level students. But again, that is backward. The idea should be to give senior faculty members more credit for teaching introductory classes.
  • Moreover, the metric entirely ignores teaching quality. Who cares how many "student hours" professors put in if they are not particularly good teachers anyway?
  • Ultimately there needs to be a systemic solution to the problem of teacher quality. Someone—a grown-up, preferably—needs to get into the classroom and watch what is being done there.
  •  
    Another one in which the comments say more than I might--but the range of these accountability pieces underscores the work to do....
Gary Brown

Many College Boards Are at Sea in Assessing Student Learning, Survey Finds - Leadership... - 0 views

  • While oversight of educational quality is a critical responsibility of college boards of trustees, a majority of trustees and chief academic officers say boards do not spend enough time discussing student-learning outcomes, and more than a third say boards do not understand how student learning is assessed, says a report issued on Thursday by the Association of Governing Boards of Universities and Colleges.
  • While boards should not get involved in the details of teaching or ways to improve student-learning outcomes, they must hold the administration accountable for identifying needs in the academic programs and then meeting them, the report says. Boards should also make decisions on where to allocate resources based on what works or what should improve.
  • The most commonly received information by boards was college-ranking data
  • ...1 more annotation...
  • Boards should expect to receive useful high-level information on learning outcomes, the report says, and should make comparisons over time and to other institutions. Training in how to understand academic and learning assessments should also be part of orientation for new board members.
  •  
    This piece coupled with the usual commentary reveal again the profound identity crisis shaking education in this country.
Gary Brown

A Critic Sees Deep Problems in the Doctoral Rankings - Faculty - The Chronicle of Highe... - 1 views

  • This week he posted a public critique of the NRC study on his university's Web site.
  • "Little credence should be given" to the NRC's ranges of rankings.
  • There's not very much real information about quality in the simple measures they've got."
  • ...4 more annotations...
  • The NRC project's directors say that those small samples are not a problem, because the reputational scores were not converted directly into program assessments. Instead, the scores were used to develop a profile of the kinds of traits that faculty members value in doctoral programs in their field.
  • For one thing, Mr. Stigler says, the relationships between programs' reputations and the various program traits are probably not simple and linear.
  • if these correlations between reputation and citations were plotted on a graph, the most accurate representation would be a curved line, not a straight line. (The curve would occur at the tipping point where high citation levels make reputations go sky-high.)
  • Mr. Stigler says that it was a mistake for the NRC to so thoroughly abandon the reputational measures it used in its previous doctoral studies, in 1982 and 1995. Reputational surveys are widely criticized, he says, but they do provide a check on certain kinds of qualitative measures.
  •  
    What is not challenged is the validity and utility of the construct itself--reputation rankings.