CTLT and Friends: Group items matching "feedback" in title, tags, annotations, or URL

Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessments to Standardized Tests - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their e-portfolios, using a modified version of the VALUE rubrics developed by AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Gary Brown

Would You Protect Your Computer's Feelings? Clifford Nass Says Yes. - ProfHacker - The Chronicle of Higher Education - 2 views

  • why peer review processes often avoid, rather than facilitate, sound judgment
  • humans do not differentiate between computers and people in their social interactions.
  • no matter what "everyone knows," people act as if the computer secretly cares
  • ...4 more annotations...
  • users given completely random praise by a computer program liked it more than the same program without praise, even though they knew in advance the praise was meaningless.
  • Nass demonstrates, however, that people internalize praise and criticism differently—while we welcome the former, we really dwell on and obsess over the latter. In the criticism sandwich, then, "the criticism blasts the first list of positive achievements out of listeners' memory. They then think hard about the criticism (which will make them remember it better) and are on the alert to think even harder about what happens next. What do they then get? Positive remarks that are too general to be remembered"
  • And because we focus so much on the negative, having a similar number of positive and negative comments "feels negative overall"
  • The best strategy, he suggests, is "to briefly present a few negative remarks and then provide a long list of positive remarks...You should also provide as much detail as possible within the positive comments, even more than feels natural, because positive feedback is less memorable" (33).
  •  
    The implications for feedback issues are pretty clear.
Gary Brown

Critical friend - Wikipedia, the free encyclopedia - 2 views

  • The Critical Friend is a powerful idea, perhaps because it contains an inherent tension. Friends bring a high degree of unconditional positive regard. Critics are, at first sight at least, conditional, negative and intolerant of failure. Perhaps the critical friend comes closest to what might be regarded as 'true friendship' - a successful marrying of unconditional support and unconditional critique.
  •  
    I've been wrestling again with the tension between supporting programs to help them improve and rating them for the accountability charge we hold. So I've been looking into the concept and practice of the "Critical Friend." Some tensions are inherent. This quote helps clarify.
Gary Brown

Details | LinkedIn - 0 views

  • Although different members of the academic hierarchy take on different roles regarding student learning, student learning is everyone’s concern in an academic setting. As I specified in my article comments, universities would do well to use their academic support units, which often have evaluation teams (or a designated evaluator) to assist in providing boards the information they need for decision making. Perhaps boards are not aware of those serving in evaluation roles at the university or how those staff members can assist boards in their endeavors.
  • Gary Brown • We have been using the Internet to post program assessment plans and reports (the programs that support this initiative at least), our criteria (rubric) for reviewing them, and then inviting external stakeholders to join in the review process.
Lorena O'English

Effective Assessment in a Digital Age: A guide to technology-enhanced assessment and feedback - 1 views

  •  
    from JISC (pdf)
Gary Brown

Ranking Employees: Why Comparing Workers to Their Peers Can Often Backfire - Knowledge@Wharton - 2 views

  • We live in a world full of benchmarks and rankings. Consumers use them to compare the latest gadgets. Parents and policy makers rely on them to assess schools and other public institutions,
  • "Many managers think that giving workers feedback about their performance relative to their peers inspires them to become more competitive -- to work harder to catch up, or excel even more. But in fact, the opposite happens," says Barankay, whose previous research and teaching has focused on personnel and labor economics. "Workers can become complacent and de-motivated. People who rank highly think, 'I am already number one, so why try harder?' And people who are far behind can become depressed about their work and give up."
  • Among the companies that use Mechanical Turk are Google, Yahoo and Zappos.com, the online shoe and clothing purveyor.
  • ...12 more annotations...
  • Nothing is more compelling than data from actual workplace settings, but getting it is usually very hard."
  • Instead, the job without the feedback attracted more workers -- 254, compared with 76 for the job with feedback.
  • "This indicates that when people are great and they know it, they tend to slack off. But when they're at the bottom, and are told they're doing terribly, they are de-motivated," says Barankay.
  • In the second stage of the experiment
  • it seems that people would rather not know how they rank compared to others, even though when we surveyed these workers after the experiment, 74% said they wanted feedback about their rank."
  • Of the workers in the control group, 66% came back for more work, compared with 42% in the treatment group. The members of the treatment group who returned were also 22% less productive than the control group. This seems to dispel the notion that giving people feedback might encourage high-performing workers to work harder to excel, and inspire low-ranked workers to make more of an effort.
  • The aim was to determine whether giving people feedback affected their desire to do more work, as well as the quantity and quality of their work.
  • top performers move on to new challenges and low performers have no viable options elsewhere.
  • feedback about rank is detrimental to performance,"
  • it is well documented that tournaments, where rankings are tied to prizes, bonuses and promotions, do inspire higher productivity and performance.
  • "In workplaces where rankings and relative performance is very transparent, even without the intervention of management ... it may be better to attach financial incentives to rankings, as interpersonal comparisons without prizes may lead to lower effort," Barankay suggests. "In those office environments where people may not be able to assess and compare the performance of others, it may not be useful to just post a ranking without attaching prizes."
  • "The key is to devote more time to thinking about whether to give feedback, and how each individual will respond to it. If, as the employer, you think a worker will respond positively to a ranking and feel inspired to work harder, then by all means do it. But it's imperative to think about it on an individual level."
  •  
    The conflation of feedback with ranking confounds this. What still needs to be done is to compare the motivational impact of providing constructive feedback. Presumably the study uses ranking in a strictly comparative context as well, so we do not see the influence of feedback relative to an absolute scale. Still, much in this piece to ponder....
Nils Peterson

Through the Open Door: Open Courses as Research, Learning, and Engagement (EDUCAUSE Review) | EDUCAUSE - 0 views

  • openness in practice requires little additional investment, since it essentially concerns transparency of already planned course activities on the part of the educator.
    • Nils Peterson
       
      Search YouTube for "master class". Theron and I are looking at violin examples. The class is happening with student, master, and observers. What is added is video recording and posting to YouTube. YouTube provides additional community via comments and linked videos.
  • This second group of learners — those who wanted to participate but weren't interested in course credit — numbered over 2,300. The addition of these learners significantly enhanced the course experience, since additional conversations and readings extended the contributions of the instructors.
    • Nils Peterson
       
      These additional resources might also include peer reviews using a course rubric, or diverse feedback on the rubric itself.
  • Enough structure is provided by the course that if a learner is interested in the topic, he or she can build sufficient language and expertise to participate peripherally or directly.
  • ...4 more annotations...
  • Although courses are under pressure in the "unbundling" or fragmentation of information in general, the learning process requires coherence in content and conversations. Learners need some sense of what they are choosing to do, a sense of eventedness.5 Even in traditional courses, learners must engage in a process of forming coherent views of a topic.
    • Nils Peterson
       
      There is an assumption here that the learner needs kick-starting. It's an assumption that the learner is not a Margo Tamez making an Urgent Call for Help, where the learner owns the problem. Is it a way of inviting a community to a party?
  • The community-as-curriculum model inverts the position of curriculum: rather than being a prerequisite for a course, curriculum becomes an output of a course.
  • They are now able, sometimes through the open access noted above and sometimes through access to other materials and guidance, to engage in their own learning outside of a classroom structure.
    • Nils Peterson
       
      A key point is the creation of open learners. Impediments to open learners need to be understood and overcome. Identity management is likely to be an important skill here.
  • Educators continue to play an important role in facilitating interaction, sharing information and resources, challenging assertions, and contributing to learners' growth of knowledge.
Gary Brown

Education ambivalence : Nature : Nature Publishing Group - 1 views

  • Academic scientists value teaching as much as research — but universities apparently don't
  • Nature Education, last year conducted a survey of 450 university-level science faculty members from more than 30 countries. The first report from that survey, freely available at http://go.nature.com/5wEKij, focuses on 'postsecondary' university- and college-level education. It finds that more than half of the respondents in Europe, Asia and North America feel that the quality of undergraduate science education in their country is mediocre, poor or very poor.
  • 77% of respondents indicated that they considered their teaching responsibilities to be just as important as their research — and 16% said teaching was more important.
  • ...6 more annotations...
  • But the biggest barrier to improvement is the pervasive perception that academic institutions — and the prevailing rewards structure of science — value research far more than teaching
  • despite their beliefs that teaching was at least as important as research, many respondents said that they would choose to appoint a researcher rather than a teacher to an open tenured position.
  • To correct this misalignment of values, two things are required. The first is to establish a standardized system of teaching evaluation. This would give universities and professors alike the feedback they need to improve.
  • The second requirement is to improve the support and rewards for university-level teaching.
  • systematic training in how to teach well
  • But by showering so many rewards on research instead of on teaching, universities and funding agencies risk undermining the educational quality that is required for research to flourish in the long term.
  •  
    Attention to this issue from this resource--Nature--is a breakthrough in its own right. Note the focus on "flourish in the long term...".
Gary Brown

Postgraduate Wrath - Brainstorm - The Chronicle of Higher Education - 0 views

  • "So, what I want to know is, why are you wasting money on glossy fundraising brochures full of meaningless synonyms for the word 'Excellence'? And, why are you sending them to ME? Yes, I know that I got a master's degree at your fine institution, but that master's degree hasn't done jack ---- for me since I got it! I have been unemployed for the past TWO YEARS and I am now a professional resume-submitter, sending out dozens of resumes a month to employers, and the degree I received in your hallowed halls is at the TOP OF IT and it doesn't do a ----ing thing."
  • Who knows how smart and conscientious and skilled the graduate really is. He might falter in face-to-face interviews, or have an overly-thin resume. But that doesn't change the fact that the school in question admitted the student, put him through a public policy curriculum, and accredited him. If the writer is a klutz, then that, too, reflects upon the university that trained him.
  • Obviously, this student doesn't recall any non-vocational learning that happened, or doesn't respect it. He even terms the education he received "imaginary."
  •  
    As we wrestle with resistance to employer feedback in assessment, this side of the story gains a bit of press.
Theron DesRosier

Virtual-TA - 2 views

  • We also developed a technology platform that allows our TAs to electronically insert detailed, actionable feedback directly into student assignments
  • Your instructors give us the schedule of assignments, when student assignments are due, when we might expect to receive them electronically, when the scored assignments will be returned, the learning outcomes on which to score the assignments, the rubrics to be used and the weights to be applied to different learning outcomes. We can use your rubrics to score assignments or design rubrics for sign-off by your faculty members.
  • review and embed feedback using color-coded pushpins (each color corresponds to a specific learning outcome) directly onto the electronic assignments. Color-coded pushpins provide a powerful visual diagnostic.
  • ...5 more annotations...
  • We do not have any contact with your students. Instructors retain full control of the process, from designing the assignments in the first place, to specifying learning outcomes and attaching weights to each outcome. Instructors also review the work of our TAs through a step called the Interim Check, which happens after 10% of the assignments have been completed. Faculty provide feedback, offer any further instructions and eventually sign-off on the work done, before our TAs continue with the remainder of the assignments
  • Finally, upon the request of the instructor, the weights he/she specified for the learning outcomes are applied to the rubric-based scores to generate a composite score for each student assignment
  • As an added bonus, our Virtual-TAs provide a detailed, summative report for the instructor on the overall class performance on the given assignment, which includes a look at how the class fared on each outcome, where the students did well, where they stumbled and what concepts, if any, need reinforcing in class the following week.
  • We can also, upon request, generate reports by Student Learning Outcomes (SLOs). This report can be used by the instructor to immediately address gaps in learning at the individual or classroom level.
  • Think of this as a micro-closing-of-the-loop that happens each week.  Contrast this with the broader, closing-the-loop that accompanies program-level assessment of learning, which might happen at the end of a whole academic year or later!
  •  
    I went to Virtual-TA and highlighted their language describing how it works. A sketch of the weighted scoring appears below.
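The scoring scheme highlighted above, per-outcome rubric scores combined under instructor-specified weights into a composite, is in essence a weighted sum. A minimal Python sketch, with invented outcome names and weights (Virtual-TA does not publish its implementation):

    # Hypothetical instructor-specified weights, one per learning outcome.
    WEIGHTS = {"critical_thinking": 0.40, "communication": 0.35, "integration": 0.25}

    def composite_score(rubric_scores):
        """Combine per-outcome rubric scores (e.g., on a 4-point scale) into one composite."""
        return sum(WEIGHTS[outcome] * score for outcome, score in rubric_scores.items())

    # One student's assignment, scored on each outcome by the grader.
    print(composite_score({"critical_thinking": 3.0, "communication": 3.5, "integration": 2.5}))
    # prints 3.05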
Gary Brown

Outsourced Grading, With Supporters and Critics, Comes to College - Teaching - The Chronicle of Higher Education - 3 views

shared by Gary Brown on 06 Apr 10
  • Lori Whisenant knows that one way to improve the writing skills of undergraduates is to make them write more. But as each student in her course in business law and ethics at the University of Houston began to crank out—often awkwardly—nearly 5,000 words a semester, it became clear to her that what would really help them was consistent, detailed feedback.
  • She outsourced assignment grading to a company whose employees are mostly in Asia.
  • The graders working for EduMetry, based in a Virginia suburb of Washington, are concentrated in India, Singapore, and Malaysia, along with some in the United States and elsewhere. They do their work online and communicate with professors via e-mail.
  • ...8 more annotations...
  • The company argues that professors freed from grading papers can spend more time teaching and doing research.
  • "This is what they do for a living," says Ms. Whisenant. "We're working with professionals." 
  • Assessors are trained in the use of rubrics, or systematic guidelines for evaluating student work, and before they are hired are given sample student assignments to see "how they perform on those," says Ravindra Singh Bangari, EduMetry's vice president of assessment services.
  • Professors give final grades to assignments, but the assessors score the papers based on the elements in the rubric and "help students understand where their strengths and weaknesses are," says Tara Sherman, vice president of client services at EduMetry. "Then the professors can give the students the help they need based on the feedback."
  • The assessors use technology that allows them to embed comments in each document; professors can review the results (and edit them if they choose) before passing assignments back to students.
  • But West Hills' investment, which it wouldn't disclose, has paid off in an unexpected way. The feedback from Virtual-TA seems to make the difference between a student's remaining in an online course and dropping out.
  • Because Virtual-TA provides detailed comments about grammar, organization, and other writing errors in the papers, students have a framework for improvement that some instructors may not be able to provide, she says.
  • "People need to get past thinking that grading must be done by the people who are teaching," says Mr. Rajam, who is director of assurance of learning at George Washington University's School of Business. "Sometimes people get so caught up in the mousetrap that they forget about the mouse."
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • ...13 more annotations...
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university  are gathering data about how their  students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased: there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: of what use is the VSA?
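The "norming" of faculty assessors mentioned in the highlights is usually verified with an interrater-reliability statistic. A minimal sketch using Cohen's kappa; the rater scores are invented, and the article does not say which statistic UC actually used:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical rubric scores (1-4) from two trained raters on the
    # same ten honors proposals; not UC's actual norming data.
    rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
    rater_b = [3, 2, 4, 2, 1, 2, 3, 4, 3, 3]

    # Kappa corrects raw percent agreement for chance; values near 1.0
    # correspond to "suitably high" interrater reliability.
    print(cohen_kappa_score(rater_a, rater_b))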
Nils Peterson

Views: Changing the Equation - Inside Higher Ed - 1 views

  • But each year, after some gnashing of teeth, we opted to set tuition and institutional aid at levels that would maximize our net tuition revenue. Why? We were following conventional wisdom that said that investing more resources translates into higher quality and higher quality attracts more resources
  • ...19 more annotations...
  • those who control influential rating systems of the sort published by U.S. News & World Report -- define academic quality as small classes taught by distinguished faculty, grand campuses with impressive libraries and laboratories, and bright students heavily recruited. Since all of these indicators of quality are costly, my college’s pursuit of quality, like that of so many others, led us to seek more revenue to spend on quality improvements. And the strategy worked.
  • Based on those concerns, and informed by the literature on the “teaching to learning” paradigm shift, we began to change our focus from what we were teaching to what and how our students were learning.
  • No one wants to cut costs if their reputation for quality will suffer, yet no one wants to fall off the cliff.
  • When quality is defined by those things that require substantial resources, efforts to reduce costs are doomed to failure
  • some of the best thinkers in higher education have urged us to define the quality in terms of student outcomes.
  • Faculty said they wanted to move away from giving lectures and then having students parrot the information back to them on tests. They said they were tired of complaining that students couldn’t write well or think critically, but not having the time to address those problems because there was so much material to cover. And they were concerned when they read that employers had reported in national surveys that, while graduates knew a lot about the subjects they studied, they didn’t know how to apply what they had learned to practical problems or work in teams or with people from different racial and ethnic backgrounds.
  • Our applications have doubled over the last decade and now, for the first time in our 134-year history, we receive the majority of our applications from out-of-state students.
  • We established what we call college-wide learning goals that focus on "essential" skills and attributes that are critical for success in our increasingly complex world. These include critical and analytical thinking, creativity, writing and other communication skills, leadership, collaboration and teamwork, and global consciousness, social responsibility and ethical awareness.
  • despite claims to the contrary, many of the factors that drive up costs add little value. Research conducted by Dennis Jones and Jane Wellman found that “there is no consistent relationship between spending and performance, whether that is measured by spending against degree production, measures of student engagement, evidence of high impact practices, students’ satisfaction with their education, or future earnings.” Indeed, they concluded that “the absolute level of resources is less important than the way those resources are used.”
  • After more than a year, the group had developed what we now describe as a low-residency, project- and competency-based program. Here students don’t take courses or earn grades. The requirements for the degree are for students to complete a series of projects, captured in an electronic portfolio,
  • students must acquire and apply specific competencies
  • Faculty spend their time coaching students, providing them with feedback on their projects and running two-day residencies that bring students to campus periodically to learn through intensive face-to-face interaction
  • After a year and a half, the evidence suggests that students are learning as much as, if not more than, those enrolled in our traditional business program
  • As the campus learns more about the demonstration project, other faculty are expressing interest in applying its design principles to courses and degree programs in their fields. They created a Learning Coalition as a forum to explore different ways to capitalize on the potential of the learning paradigm.
  • a problem-based general education curriculum
  • At the very least, finding innovative ways to lower costs without compromising student learning is wise competitive positioning for an uncertain future
  • the focus of student evaluations has changed noticeably. Instead of focusing almost 100% on the instructor and whether he/she was good, bad, or indifferent, our students' evaluations are now focusing on the students themselves - as to what they learned, how much they have learned, and how much fun they had learning.
    • Nils Peterson
       
      Gary diigoed this article. This comment shines another light: the focus of the course evaluation shifted from the faculty member to the course and student learning when the focus shifted from teaching to learning.
  •  
    A must-read spotted by Jane Sherman. I've highlighted, as usual, much of it.
Nils Peterson

Facebook | Evoke - 1 views

  • Here’s how to become an EVOKE mentor: 1) Sign up for the EVOKE network 2) Make a promise to yourself to visit the EVOKE network as often as you can, between now and May 12. OKAY, I’M A MENTOR! NOW WHAT? Every time you visit the EVOKE network, try to complete at least one mentor mission. Each mission takes just a few minutes – but it can have a huge impact. Your feedback and words of advice can help an EVOKE agent stay motivated and optimistic. You can inspire an EVOKE agent to stick with the tough challenges of social innovation long enough to really make a difference.
    • Nils Peterson
       
      The concept of building a community by enlisting mentors.
  • MENTOR MISSIONS Here are some starter mentor missions. You can tackle them in any order, and complete them as many times as you want. Feel free to invent your own mentor missions – and share instructions here in the comments for others to adopt.
    • Nils Peterson
       
      BEFRIEND AN AGENT: Browse the EVOKE agent directory ... Add the agent as your friend.
      WORDS OF WISDOM: So share some words of wisdom.
      CHEER 'EM ON.
      HELPFUL RESOURCES: ... share links to articles.
      POWER UP: Check to see if your agent has uploaded any videos, photos, or blog posts.
      BRAG TIME: Tell the whole EVOKE network how proud you are; Tweet or Facebook status update about your agent.
      MAKE AN ALLIANCE: Introduce your agent to a friend or colleague who you think
Jayme Jacobson

Evaluating the effect of peer feedback on the quality of online discourse - 0 views

  • Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.
  •  
    Abstract: This study explores the effect on discussion quality of adding a feedback mechanism that presents users with an aggregate peer rating of the usefulness of the participant's contributions in online, asynchronous discussion. Participants in the study groups were able to specify the degree to which they thought any posted comment was useful to the discussion. Individuals were regularly presented with feedback (aggregated and anonymous) summarizing peers' assessment of the usefulness of their contribution, along with a summary of how the individuals rated their peers. Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.

    This is kind of a show-stopper. It's just one study, but when you look at the results there appears to be no effect whatsoever from peers giving feedback about the usefulness of discussion posts, nor any perceived improvement in the quality of the discussions as evaluated by faculty. It looks like we'll need to begin looking carefully at just what kinds of feedback will really make a difference. This follows up on Corinna's earlier post http://blogs.hbr.org/cs/2010/03/twitters_potential_as_microfee.html about short, immediate feedback being more effective than lengthier feedback, which can actually hinder performance. The trick will be to figure out just what kinds of feedback will actually work in embedded situations. It's interesting that an assessment of utility wasn't useful...? (The mechanism is sketched below.)
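The rating mechanism is easy to picture in code: each usefulness rating is stored against the post's author only, so the aggregate shown back is anonymous by construction. A minimal Python sketch with invented names, not the study's actual instrument:

    from collections import defaultdict
    from statistics import mean

    # ratings[author] collects the usefulness ratings (1-5) that peers
    # attached to that author's posts; raters are never recorded.
    ratings = defaultdict(list)

    def rate_post(author, usefulness):
        ratings[author].append(usefulness)

    def aggregate_feedback(author):
        """The anonymous, aggregated summary periodically shown to a participant."""
        scores = ratings[author]
        if not scores:
            return "No peer ratings yet."
        return f"Peers rated your posts {mean(scores):.1f}/5 on average ({len(scores)} ratings)."

    rate_post("alice", 4)
    rate_post("alice", 5)
    print(aggregate_feedback("alice"))  # Peers rated your posts 4.5/5 on average (2 ratings).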
Corinna Lo

Use Twitter to Collect Micro-Feedback - The Conversation - Harvard Business Review - 1 views

  •  
    Even though Twitter is in the headlines once again, the important message from the article is not about Twitter but about the way feedback is solicited and collected. Feedback is best when provided as close to the moment of performance as possible, as shown in studies involving everyone from medical students to athletes. But lengthy feedback forms discourage frequent and immediate responses. Enabling employees to solicit feedback in short, immediate bursts may actually be more effective than performance reviews or lengthy feedback systems, since excessive feedback can be overwhelming and hinder performance.
Nils Peterson

WSU Today Online - Real-life global experience … in the classroom - 3 views

  • “We’ve saved Boeing, for example, hundreds of thousands of dollars,” he said. “Sending a project to us runs about $8,000 to $10,000, the work gets done, and the students get an educational experience on top of that.”
    • Nils Peterson
       
      But they do not report asking Boeing for assessments of or feedback on the rubrics.
  • And the company mentors add tremendous value. In a class of 50, with 10 mentors, I’ve effectively reduced the student-instructor ratio to 5:1.”
Gary Brown

Struggling Students Can Improve by Studying Themselves, Research Shows - Teaching - The Chronicle of Higher Education - 3 views

  • "We're trying to document the role of processes that are different from standard student-outcome measures and standard ability measures,
  • We're interested in various types of studying, setting goals for oneself, monitoring one's progress as one goes through learning a particular topic."
  • Mr. Zimmerman has spent most of his career examining what can go wrong when people try to learn new facts and skills. His work centers on two common follies: First, students are often overconfident about their knowledge, assuming that they understand material just because they sat through a few lectures or read a few chapters. Second, students tend to attribute their failures to outside forces ("the teacher didn't like me," "the textbook wasn't clear enough") rather than taking a hard look at their own study habits.
  • ...14 more annotations...
  • That might sound like a recipe for banal lectures about study skills. But training students to monitor their learning involves much more than simple nagging, Mr. Zimmerman says. For one thing, it means providing constant feedback, so that students can see their own strengths and weaknesses.
  • "The first one is, Give students fast, accurate feedback about how they're doing. And the second rule, which is less familiar to most people, is, Now make them demonstrate that they actually understand the feedback that has been given."
  • "I did a survey in December," he says. "Only one instructor said they were no longer using the technique. Twelve people said they were using the technique 'somewhat,' and eight said 'a lot.' So we were pleased that they didn't forget about us after the program ended."
  • "Only one instructor said they were no longer using the technique. Twelve people said they were using the technique 'somewhat,' and eight said 'a lot.' So we were pleased that they didn't forget about us after the program ended."
  • And over time, we've realized that these methods have a much greater effect if they're embedded within the course content.
  • "Once we focus on noticing and correcting errors in whatever writing strategy we're working on, the students just become junkies for feedback,"
  • "Errors are part of the process of learning, and not a sign of personal imperfection," Mr. Zimmerman says. "We're trying to help instructors and students see errors not as an endpoint, but as a beginning point for understanding what they know and what they don't know, and how they can approach problems in a more effective way."
  • Self-efficacy" was coined by Albert Bandura in the 1970's
  • "Self-efficacy" was coined by Albert Bandura in the 1970's,
  • The 1990 paper (from _Educational Psychologist_ 25 (1), pp. 3-17), which is linked above, DOES include three citations to Bandura's work.
  • What I am particularly amazed by is that the idea of feedback, reflection, and explicitly demonstrated understanding (essentially a Socratic approach to teaching) is considered an innovation.
  •  
    Selected for the focus on feedback. The adoption by half or fewer of the instructors, depending on the measure, is also interesting, as the research is of the type we would presume to be compelling.
Nils Peterson

News: Assessment Disconnect - Inside Higher Ed - 7 views

  •  
    Theron left an interesting comment on this; the whole piece is a timely read and connects to OAI's staff workshop on 1/28/10.
Joshua Yeidel

Taking the sting out of the honeybee controversy - environmentalresearchweb - 1 views

  •  
    Researchers use "harvesting feedback" and a uncertainty scale to illuminate how stakeholders use evidence to explain honeybee declines in France.