
CTLT and Friends: Group items tagged "feedback"

Corinna Lo

Use Twitter to Collect Micro-Feedback - The Conversation - Harvard Business Review - 1 views

  •  
    Even though Twitter is in the headlines once again, the important message of the article is not about Twitter but about the way feedback is solicited and collected. Feedback is best when provided as close to the moment of performance as possible, as shown in studies involving everyone from medical students to athletes. But lengthy feedback forms discourage frequent and immediate responses. Enabling employees to solicit feedback in short, immediate bursts may actually be more effective than performance reviews or lengthy feedback systems, since excessive feedback can be overwhelming and hinder performance.
Gary Brown

Ranking Employees: Why Comparing Workers to Their Peers Can Often Backfire - Knowledge@... - 2 views

  • We live in a world full of benchmarks and rankings. Consumers use them to compare the latest gadgets. Parents and policy makers rely on them to assess schools and other public institutions,
  • "Many managers think that giving workers feedback about their performance relative to their peers inspires them to become more competitive -- to work harder to catch up, or excel even more. But in fact, the opposite happens," says Barankay, whose previous research and teaching has focused on personnel and labor economics. "Workers can become complacent and de-motivated. People who rank highly think, 'I am already number one, so why try harder?' And people who are far behind can become depressed about their work and give up."
  • Among the companies that use Mechanical Turk are Google, Yahoo and Zappos.com, the online shoe and clothing purveyor.
  • ...12 more annotations...
  • Nothing is more compelling than data from actual workplace settings, but getting it is usually very hard."
  • Instead, the job without the feedback attracted more workers -- 254, compared with 76 for the job with feedback.
  • "This indicates that when people are great and they know it, they tend to slack off. But when they're at the bottom, and are told they're doing terribly, they are de-motivated," says Barankay.
  • In the second stage of the experiment
  • The aim was to determine whether giving people feedback affected their desire to do more work, as well as the quantity and quality of their work.
  • Of the workers in the control group, 66% came back for more work, compared with 42% in the treatment group. The members of the treatment group who returned were also 22% less productive than the control group. This seems to dispel the notion that giving people feedback might encourage high-performing workers to work harder to excel, and inspire low-ranked workers to make more of an effort.
  • it seems that people would rather not know how they rank compared to others, even though when we surveyed these workers after the experiment, 74% said they wanted feedback about their rank."
  • top performers move on to new challenges and low performers have no viable options elsewhere.
  • feedback about rank is detrimental to performance,"
  • it is well documented that tournaments, where rankings are tied to prizes, bonuses and promotions, do inspire higher productivity and performance.
  • "In workplaces where rankings and relative performance is very transparent, even without the intervention of management ... it may be better to attach financial incentives to rankings, as interpersonal comparisons without prizes may lead to lower effort," Barankay suggests. "In those office environments where people may not be able to assess and compare the performance of others, it may not be useful to just post a ranking without attaching prizes."
  • "The key is to devote more time to thinking about whether to give feedback, and how each individual will respond to it. If, as the employer, you think a worker will respond positively to a ranking and feel inspired to work harder, then by all means do it. But it's imperative to think about it on an individual level."
  •  
    The conflation of feedback with ranking confounds this. What has not been done, and needs to be done, is to compare the motivational impact of providing constructive feedback. Presumably the study uses ranking in a strictly comparative context as well, so we do not see the influence of feedback relative to an absolute scale. Still, there is much in this piece to ponder....
Jayme Jacobson

Evaluating the effect of peer feedback on the quality of online discourse - 0 views

  • Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.
  •  
    Abstract: This study explores the effect on discussion quality of adding a feedback mechanism that presents users with an aggregate peer rating of the usefulness of the participant's contributions in online, asynchronous discussion. Participants in the study groups were able to specify the degree to which they thought any posted comment was useful to the discussion. Individuals were regularly presented with feedback (aggregated and anonymous) summarizing peers' assessment of the usefulness of their contribution, along with a summary of how the individuals rated their peers. Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.

    This is kind of a show-stopper. It's just one study, but when you look at the results there appears to be no effect whatsoever from peers giving feedback about the usefulness of discussion posts, nor any perceived improvement in the quality of the discussions as evaluated by faculty. It looks like we'll need to begin looking carefully at just what kinds of feedback will really make a difference. This follows up on Corinna's earlier post http://blogs.hbr.org/cs/2010/03/twitters_potential_as_microfee.html about short, immediate feedback being more effective than lengthier feedback, which can actually hinder performance. The trick will be to figure out just what kinds of feedback will actually work in embedded situations. It's interesting that an assessment of utility wasn't useful...?
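    The mechanism the abstract describes is simple enough to sketch. Below is a minimal illustration (not the study's actual system) of the anonymous aggregation step, assuming a flat list of (rater, author, post, score) records; the names and the 1-5 scale are hypothetical:

        # Aggregate anonymous peer "usefulness" ratings per author, plus a
        # summary of what each person gave, mirroring the report the study
        # describes. A sketch under assumed data shapes, not the study's code.
        from collections import defaultdict
        from statistics import mean

        ratings = [  # (rater, author, post_id, usefulness 1-5) -- hypothetical
            ("ann", "bob", "post-1", 4),
            ("cam", "bob", "post-1", 5),
            ("bob", "ann", "post-2", 2),
            ("cam", "ann", "post-2", 3),
        ]

        def aggregate_feedback(ratings):
            received = defaultdict(list)   # scores each author received
            given = defaultdict(list)      # scores each rater handed out
            for rater, author, _post, score in ratings:
                if rater != author:        # ignore self-ratings
                    received[author].append(score)
                    given[rater].append(score)
            people = set(received) | set(given)
            return {p: {"mean_received": round(mean(received[p]), 2) if received[p] else None,
                        "mean_given": round(mean(given[p]), 2) if given[p] else None}
                    for p in people}

        print(aggregate_feedback(ratings))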
Gary Brown

Struggling Students Can Improve by Studying Themselves, Research Shows - Teaching - The... - 3 views

  • "We're trying to document the role of processes that are different from standard student-outcome measures and standard ability measures,
  • We're interested in various types of studying, setting goals for oneself, monitoring one's progress as one goes through learning a particular topic."
  • Mr. Zimmerman has spent most of his career examining what can go wrong when people try to learn new facts and skills. His work centers on two common follies: First, students are often overconfident about their knowledge, assuming that they understand material just because they sat through a few lectures or read a few chapters. Second, students tend to attribute their failures to outside forces ("the teacher didn't like me," "the textbook wasn't clear enough") rather than taking a hard look at their own study habits.
  • ...14 more annotations...
  • That might sound like a recipe for banal lectures about study skills. But training students to monitor their learning involves much more than simple nagging, Mr. Zimmerman says. For one thing, it means providing constant feedback, so that students can see their own strengths and weaknesses.
  • "The first one is, Give students fast, accurate feedback about how they're doing. And the second rule, which is less familiar to most people, is, Now make them demonstrate that they actually understand the feedback that has been given."
  • "I did a survey in December," he says. "Only one instructor said they were no longer using the technique. Twelve people said they were using the technique 'somewhat,' and eight said 'a lot.' So we were pleased that they didn't forget about us after the program ended."
  • "Only one instructor said they were no longer using the technique. Twelve people said they were using the technique 'somewhat,' and eight said 'a lot.' So we were pleased that they didn't forget about us after the program ended."
  • And over time, we've realized that these methods have a much greater effect if they're embedded within the course content.
  • "Once we focus on noticing and correcting errors in whatever writing strategy we're working on, the students just become junkies for feedback,"
  • "Errors are part of the process of learning, and not a sign of personal imperfection," Mr. Zimmerman says. "We're trying to help instructors and students see errors not as an endpoint, but as a beginning point for understanding what they know and what they don't know, and how they can approach problems in a more effective way."
  • Self-efficacy" was coined by Albert Bandura in the 1970's
  • "Self-efficacy" was coined by Albert Bandura in the 1970's,
  • The 1990 paper from _Educational Psychologist_ 25(1), pp. 3-17, which is linked above DOES include three citations to Bandura's work.
  • What I am particularly amazed by is that the idea of feedback, reflection and explicitly demonstrated understanding (essentially a Socratic approach of teaching), is considered an innovation.
  •  
    Selected for the focus on feedback. The rate of adoption, by half the instructors or fewer depending on how one counts, is also interesting, as the research is of the type we would presume to be compelling.
Nils Peterson

Focus on Formative Feedback - 0 views

  • This paper reviews the corpus of research on feedback, with a particular focus on formative feedback—defined as information communicated to the learner that is intended to modify the learner’s thinking or behavior for the purpose of improving learning. According to researchers in the area, formative feedback should be multidimensional, nonevaluative, supportive, timely, specific, credible, infrequent, and genuine (e.g., Brophy, 1981; Schwartz & White, 2000). Formative feedback is usually presented as information to a learner in response to some action on the learner’s part. It comes in a variety of types (e.g., verification of response accuracy, explanation of the correct answer, hints, worked examples) and can be administered at various times during the learning process (e.g., immediately following an answer, after some period of time has elapsed). Finally, there are a number of variables that have been shown to interact with formative feedback’s success at promoting learning (e.g., individual characteristics of the learner and aspects of the task). All of these issues will be discussed in this paper. This review concludes with a set of guidelines for generating formative feedback.
  •  
    Educational Testing Service website hosting a literature review, ca. 2007, on formative feedback. The first 10 pages made it look promising enough to Diigo it.
Gary Brown

Would You Protect Your Computer's Feelings? Clifford Nass Says Yes. - ProfHacker - The ... - 2 views

  • why peer review processes often avoid, rather than facilitate, sound judgment
  • humans do not differentiate between computers and people in their social interactions.
  • no matter what "everyone knows," people act as if the computer secretly cares
  • ...4 more annotations...
  • users given completely random praise by a computer program liked it more than the same program without praise, even though they knew in advance the praise was meaningless.
  • Nass demonstrates, however, that people internalize praise and criticism differently—while we welcome the former, we really dwell on and obsess over the latter. In the criticism sandwich, then, "the criticism blasts the first list of positive achievements out of listeners' memory. They then think hard about the criticism (which will make them remember it better) and are on the alert to think even harder about what happens next. What do they then get? Positive remarks that are too general to be remembered"
  • And because we focus so much on the negative, having a similar number of positive and negative comments "feels negative overall"
  • The best strategy, he suggests, is "to briefly present a few negative remarks and then provide a long list of positive remarks...You should also provide as much detail as possible within the positive comments, even more than feels natural, because positive feedback is less memorable" (33).
  •  
    The implications for feedback issues are pretty clear; a toy sketch of Nass's suggested ordering follows.
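    Assuming feedback is represented as plain lists of comment strings (the function and all sample comments are invented for illustration, not from Nass's book):

        # Brief criticism first, then a longer, detailed list of positives,
        # since positive feedback is less memorable (per the excerpt above).
        def compose_feedback(negatives, positives):
            lines = ["To work on:"]
            lines += [f"- {n}" for n in negatives]     # keep this section brief
            lines += ["", "What worked well, in detail:"]
            lines += [f"- {p}" for p in positives]     # go long and specific here
            return "\n".join(lines)

        print(compose_feedback(
            ["The thesis is buried in paragraph 3."],
            ["Strong use of primary sources in section 2.",
             "Transitions between sections are smooth and purposeful.",
             "The conclusion ties the evidence back to the research question."]))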
Gary Brown

Outsourced Grading, With Supporters and Critics, Comes to College - Teaching - The Chro... - 3 views

shared by Gary Brown on 06 Apr 10
  • Lori Whisenant knows that one way to improve the writing skills of undergraduates is to make them write more. But as each student in her course in business law and ethics at the University of Houston began to crank out—often awkwardly—nearly 5,000 words a semester, it became clear to her that what would really help them was consistent, detailed feedback.
  • She outsourced assignment grading to a company whose employees are mostly in Asia.
  • The graders working for EduMetry, based in a Virginia suburb of Washington, are concentrated in India, Singapore, and Malaysia, along with some in the United States and elsewhere. They do their work online and communicate with professors via e-mail.
  • ...8 more annotations...
  • The company argues that professors freed from grading papers can spend more time teaching and doing research.
  • "This is what they do for a living," says Ms. Whisenant. "We're working with professionals." 
  • Assessors are trained in the use of rubrics, or systematic guidelines for evaluating student work, and before they are hired are given sample student assignments to see "how they perform on those," says Ravindra Singh Bangari, EduMetry's vice president of assessment services.
  • Professors give final grades to assignments, but the assessors score the papers based on the elements in the rubric and "help students understand where their strengths and weaknesses are," says Tara Sherman, vice president of client services at EduMetry. "Then the professors can give the students the help they need based on the feedback."
  • The assessors use technology that allows them to embed comments in each document; professors can review the results (and edit them if they choose) before passing assignments back to students.
  • But West Hills' investment, which it wouldn't disclose, has paid off in an unexpected way. The feedback from Virtual-TA seems to make the difference between a student's remaining in an online course and dropping out.
  • Because Virtual-TA provides detailed comments about grammar, organization, and other writing errors in the papers, students have a framework for improvement that some instructors may not be able to provide, she says.
  • "People need to get past thinking that grading must be done by the people who are teaching," says Mr. Rajam, who is director of assurance of learning at George Washington University's School of Business. "Sometimes people get so caught up in the mousetrap that they forget about the mouse."
Theron DesRosier

Virtual-TA - 2 views

  • We also developed a technology platform that allows our TAs to electronically insert detailed, actionable feedback directly into student assignments
  • Your instructors give us the schedule of assignments, when student assignments are due, when we might expect to receive them electronically, when the scored assignments will be returned, the learning outcomes on which to score the assignments, the rubrics to be used and the weights to be applied to different learning outcomes. We can use your rubrics to score assignments or design rubrics for sign-off by your faculty members.
  • review and embed feedback using color-coded pushpins (each color corresponds to a specific learning outcome) directly onto the electronic assignments. Color-coded pushpins provide a powerful visual diagnostic.
  • ...5 more annotations...
  • We do not have any contact with your students. Instructors retain full control of the process, from designing the assignments in the first place, to specifying learning outcomes and attaching weights to each outcome. Instructors also review the work of our TAs through a step called the Interim Check, which happens after 10% of the assignments have been completed. Faculty provide feedback, offer any further instructions and eventually sign-off on the work done, before our TAs continue with the remainder of the assignments
  • Finally, upon the request of the instructor, the weights he/she specified for the learning outcomes are applied to the rubric-based scores to generate a composite score for each student assignment
  • As an added bonus, our Virtual-TAs provide a detailed, summative report for the instructor on the overall class performance on the given assignment, which includes a look at how the class fared on each outcome, where the students did well, where they stumbled and what concepts, if any, need reinforcing in class the following week.
  • We can also, upon request, generate reports by Student Learning Outcomes (SLOs). This report can be used by the instructor to immediately address gaps in learning at the individual or classroom level.
  • Think of this as a micro-closing-of-the-loop that happens each week.  Contrast this with the broader, closing-the-loop that accompanies program-level assessment of learning, which might happen at the end of a whole academic year or later!
  •  
    I went to Virtual-TA and highlighted their language describing how it works; the composite-scoring step they describe is sketched below.
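    The step reduces to a weighted sum of rubric scores by outcome. A minimal sketch, assuming a 0-5 rubric scale and instructor-specified weights that sum to 1 (all numbers are hypothetical; the source does not publish Virtual-TA's actual scales):

        # Hypothetical outcome weights and rubric scores for one assignment.
        weights = {"critical_thinking": 0.4, "organization": 0.3, "grammar": 0.3}
        scores = {"critical_thinking": 4, "organization": 3, "grammar": 5}

        def composite_score(weights, scores):
            # Weighted composite of rubric-based outcome scores.
            assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
            return sum(weights[o] * scores[o] for o in weights)

        print(composite_score(weights, scores))  # 0.4*4 + 0.3*3 + 0.3*5 = 4.0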
Nils Peterson

How To Crowdsource Grading | HASTAC - 0 views

  • My colleagues and I at the University of Maine have pursued a similar course with The Pool, an online environment for sharing art and code that invites students to evaluate each other at various stages of their projects, from intent to approach to release.
    • Nils Peterson
       
      This is feedback on our Harvesting Gradebook and Crowdsourcing ideas. The Pool seems to be an implementation of the feedback mechanism with some ideas about reputation.
  • Like Slashdot's karma system, The Pool entrusts students who have contributed good work in the past with greater power to rate other students. In general students at U-Me have responded responsibly to this ethic; it may help that students are sometimes asked to evaluate students in other classes,
    • Nils Peterson
       
      While there is a notion of karma and peer feedback, there does not seem to be a notion of bringing in outside expertise, or, if it were to come in, of tracking its role
Nils Peterson

Change Magazine - The New Guys in Assessment Town - 0 views

  • if one of the institution’s general education goals is critical thinking, the system makes it possible to call up all the courses and programs that assess student performance on that outcome.
  • bringing together student learning outcomes data at the level of the institution, program, course, and throughout student support services so that “the data flows between and among these levels”
  • Like its competitors, eLumen maps outcomes vertically across courses and programs, but its distinctiveness lies in its capacity to capture what goes on in the classroom. Student names are entered into the system, and faculty use a rubric-like template to record assessment results for every student on every goal. The result is a running record for each student available only to the course instructor (and in some cases to the students themselves, who can go to the system to get feedback on recent assessments).
    • Nils Peterson
       
      Sounds like the harvesting gradebook: assess student work and roll it up. (A minimal roll-up sketch appears at the end of this entry.)
    • Joshua Yeidel
       
      This system has some potential for formative use at the per-student level.
  • ...7 more annotations...
  • “I’m a little wary.  It seems as if, in addition to the assessment feedback we are already giving to students, we might soon be asked to add a data-entry step of filling in boxes in a centralized database for all the student learning outcomes. This is worrisome to those of us already struggling under the weight of all that commenting and essay grading.”
    • Nils Peterson
       
      It's either double work, or it's not being understood that the grading and the assessment can be the same activity. I suspect the former -- grading is being done with different metrics
    • Joshua Yeidel
       
      I am in the unusual position of seeing many papers _after_ they have been graded by a wide variety of teachers. Many of these contain little "assessment feedback" -- many teachers focus on "correcting" the papers and finding some letter or number to assign as a value.
  • “This is where we see many institutions struggling,” Galvin says. “Faculty simply don’t have the time for a deeper involvement in the mechanics of assessment.” Many have never seen a rubric or worked with one, “so generating accurate, objective data for analysis is a challenge.”  
    • Nils Peterson
       
      Rather than faculty using the community to help with assessment, they are outsourcing to a paid assessor -- this is the result of undertaking this thinking while remaining at the institution-centric end of the spectrum we developed
  • I asked about faculty pushback. “Not so much,” Galvin says, “not after faculty understand that the process is not intended to evaluate their work.”
    • Nils Peterson
       
      red flag
  • the annual reports required by this process were producing “heaps of paper” while failing to track trends and developments over time. “It’s like our departments were starting anew every year,” Chaplot says. “We wanted to find a way to house the data that gave us access to what was done in the past,” which meant moving from discrete paper reports to an electronic database.
    • Joshua Yeidel
       
      It's not clear whether the "database" is housing measurements, narratives and reflections, or all of the above.
  • "Can eLumen represent student learning in language? No, but it can quantify the number of boxes checked against the number of boxes not checked."
  • developing a national repository of resources, rubrics, outcomes statements, and the like that can be reviewed and downloaded by users
    • Nils Peterson
       
      In building our repository we could well open-source these tools; no need to lock them up
  • “These solutions cement the idea that assessment is an administrative rather than an educational enterprise, focused largely on accountability. They increasingly remove assessment decision making from the everyday rhythm of teaching and learning and the realm of the faculty.
    • Nils Peterson
       
      Over-the-wall assessment; see the Transformative Assessment rubric for more detail
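    A minimal roll-up sketch in the spirit of the comments above: per-student, per-outcome rubric records aggregated into a course-level summary. The record format and outcome names are invented, not eLumen's actual schema:

        from collections import defaultdict
        from statistics import mean

        records = [  # (student, outcome, rubric score 0-4) -- hypothetical
            ("s1", "critical_thinking", 3),
            ("s2", "critical_thinking", 2),
            ("s1", "written_communication", 4),
            ("s2", "written_communication", 3),
        ]

        def roll_up(records):
            # Aggregate individual assessments into a per-outcome course view.
            by_outcome = defaultdict(list)
            for _student, outcome, score in records:
                by_outcome[outcome].append(score)
            return {o: {"n": len(s), "mean": round(mean(s), 2)}
                    for o, s in by_outcome.items()}

        print(roll_up(records))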
Lorena O'English

News: Online and Interpersonal - Inside Higher Ed - 0 views

  •  
    "Two professors from the University of Westminster in London explained research finding that use of educational technology such as blogs and online questionnaires, combined with personal tutors, could enhance the feedback loop while also making face-to-face communication more efficient."
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Nils Peterson

It's Time to Improve Academic, Not Just Administrative, Productivity - Chronicle.com - 0 views

  •  
    Kimberly said of this: The focus on activity deals directly with the learning process - one that pushes students to take a more active role - while assessment supplies faculty members with the feedback necessary to diagnose and correct learning problems. Technology allows such active learning processes to be expanded to large courses and, as learning software and databases become better, to use faculty time more effectively. This relates to clickers and Skylight learning activities/assessments in the large-class context, as well as the elusive LMS.
Joshua Yeidel

Taking the sting out of the honeybee controversy - environmentalresearchweb - 1 views

  •  
    Researchers use "harvesting feedback" and an uncertainty scale to illuminate how stakeholders use evidence to explain honeybee declines in France.
Theron DesRosier

Revolution in the Classroom - The Atlantic (August 12, 2009) - 0 views

  •  
    An article in The Atlantic today by Clayton Christensen discusses "Revolution in the Classroom." In a paragraph on data collection he says the following: "Creating effective methods for measuring student progress is crucial to ensuring that material is actually being learned. And implementing such assessments using an online system could be incredibly potent: rather than simply testing students all at once at the end of an instructional module, this would allow continuous verification of subject mastery as instruction was still underway. Teachers would be able to receive constant feedback about progress or the lack thereof and then make informed decisions about the best learning path for each student. Thus, individual students could spend more or less time, as needed, on certain modules. And as long as the end result - mastery - was the same for all, the process and time allotted for achieving it need not be uniform." The "module" focus is a little disturbing, but the rest is helpful.
Nils Peterson

Higher Ed/: TLT's Harvesting Feedback Project - 0 views

  • It's a fascinating project, and to me the most interesting design element is one not actually highlighted here, viz. that the plan is to be able to rate any kind of work anywhere on the Internet. The era of "enclosed garden" portfolio systems may be drawing (thankfully) to an end.
    • Nils Peterson
       
      Interesting that David picked up this implication from the work; it's something we didn't say but, I think, want to believe.
  • crowd-sourcing for assessment (you assess some of my students, I assess some of yours, for example). I wonder if the group has considered using Amazon's Mechanical Turk service as a cost-effective way of getting ratings from "the public."
    • Nils Peterson
       
      This is an interesting idea; I've started to follow up at Mechanical Turk and hope to develop a blog post. A rough sketch of the cross-class rating idea appears below.
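    One way the "you assess some of my students, I assess some of yours" exchange could be organized. A design sketch only: the class names, rater IDs, and the round-robin policy are assumptions, not TLT's implementation:

        import itertools

        def assign_raters(submissions, raters_by_class, k=2):
            # Round-robin each submission to k raters from *other* classes.
            assignments = {}
            for class_name, subs in submissions.items():
                eligible = [r for c, rs in raters_by_class.items()
                            if c != class_name for r in rs]
                pool = itertools.cycle(eligible)
                for sub in subs:
                    assignments[sub] = [next(pool) for _ in range(k)]
            return assignments

        submissions = {"econ101": ["essay-a", "essay-b"], "bio200": ["lab-1"]}
        raters = {"econ101": ["r1", "r2"], "bio200": ["r3", "r4"]}
        print(assign_raters(submissions, raters))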
Nils Peterson

Apple Apps Ahead - WSJ.com - 0 views

  • new health-related iPhone accessories. LifeScan Inc., of Milpitas, Calif., a Johnson & Johnson-owned company that makes glucose monitors, recently demonstrated a software program it hopes will help make it easier for diabetes patients to communicate their glucose levels to caregivers and family. The program, taking advantage of the iPhone's new ability to connect with accessories wirelessly, reads the patient's glucose level from the monitor, then transmits it through the phone.
    • Nils Peterson
       
      Or point the camera at one of those 2D bar codes and enter rubric-based feedback? The bar code could ID both the item and the feedback form to be used.
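    The payload idea is simple: one scannable code that names both the work item and the rubric form. A sketch using an invented URL scheme (example.edu and both IDs are placeholders; any QR library could render the URL as an image):

        from urllib.parse import urlencode, urlparse, parse_qs

        def make_payload(item_id, rubric_id):
            # Encode the item and the feedback form in one scannable URL.
            return "https://example.edu/feedback?" + urlencode(
                {"item": item_id, "rubric": rubric_id})

        def read_payload(url):
            # Decode a scanned payload back into (item, rubric).
            q = parse_qs(urlparse(url).query)
            return q["item"][0], q["rubric"][0]

        url = make_payload("glucose-monitor-42", "device-usability-v1")
        print(url)
        print(read_payload(url))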
S Spaeth

QuickTopic for Teachers - 0 views

  • "This free, web-based message board allows you to set up a web-based discussion board for your class where your students can post messages to one another, to students in another class, or to a parent "expert." These message areas are closed to outside users because they are set up by invitation."
  •  
    QuickTopic free message boards and the Quick Doc Review collaborative online document review service are excellent tools for all kinds of teachers. Below are a few examples of citations by teaching resource sites that we've found. * "...this amazingly easy site will also send you emails of newly posted messages. ... Also check out the Document Review tool for posting text and eliciting feedback generously provided by QuickTopic" NC State University - Teaching Literature for Young Adults - Resources for teachers.
Gary Brown

Education ambivalence : Nature : Nature Publishing Group - 1 views

  • Academic scientists value teaching as much as research — but universities apparently don't
  • Nature Education, last year conducted a survey of 450 university-level science faculty members from more than 30 countries. The first report from that survey, freely available at http://go.nature.com/5wEKij, focuses on 'postsecondary' university- and college-level education. It finds that more than half of the respondents in Europe, Asia and North America feel that the quality of undergraduate science education in their country is mediocre, poor or very poor.
  • 77% of respondents indicated that they considered their teaching responsibilities to be just as important as their research — and 16% said teaching was more important.
  • ...6 more annotations...
  • But the biggest barrier to improvement is the pervasive perception that academic institutions — and the prevailing rewards structure of science — value research far more than teaching
  • despite their beliefs that teaching was at least as important as research, many respondents said that they would choose to appoint a researcher rather than a teacher to an open tenured position.
  • To correct this misalignment of values, two things are required. The first is to establish a standardized system of teaching evaluation. This would give universities and professors alike the feedback they need to improve.
  • The second requirement is to improve the support and rewards for university-level teaching.
  • systematic training in how to teach well
  • But by showering so many rewards on research instead of on teaching, universities and funding agencies risk undermining the educational quality that is required for research to flourish in the long term.
  •  
    Attention to this issue from this resource--Nature--is a breakthrough in its own right. Note the focus on "flourish in the long term...".
Nils Peterson

Through the Open Door: Open Courses as Research, Learning, and Engagement (EDUCAUSE Rev... - 0 views

  • openness in practice requires little additional investment, since it essentially concerns transparency of already planned course activities on the part of the educator.
    • Nils Peterson
       
      Search YouTube for "master class"; Theron and I are looking at violin examples. The class happens with student, master, and observers. What is added is video recording and posting to YouTube. YouTube provides additional community via comments and linked videos.
  • This second group of learners — those who wanted to participate but weren't interested in course credit — numbered over 2,300. The addition of these learners significantly enhanced the course experience, since additional conversations and readings extended the contributions of the instructors.
    • Nils Peterson
       
      These additional resources might also include peer reviews using a course rubric, or diverse feedback on the rubric itself.
  • Enough structure is provided by the course that if a learner is interested in the topic, he or she can build sufficient language and expertise to participate peripherally or directly.
  • ...4 more annotations...
  • Although courses are under pressure in the "unbundling" or fragmentation of information in general, the learning process requires coherence in content and conversations. Learners need some sense of what they are choosing to do, a sense of eventedness.5 Even in traditional courses, learners must engage in a process of forming coherent views of a topic.
    • Nils Peterson
       
      An assumption here that the learner needs kick-starting. It's an assumption that the learner is not a Margo Tamez making an Urgent Call for Help, where the learner owns the problem. Is it a way of inviting a community to a party?
  • The community-as-curriculum model inverts the position of curriculum: rather than being a prerequisite for a course, curriculum becomes an output of a course.
  • They are now able, sometimes through the open access noted above and sometimes through access to other materials and guidance, to engage in their own learning outside of a classroom structure.
    • Nils Peterson
       
      A key point is the creation of open learners. Impediments to open learners need to be understood and overcome. Identity management is likely to be an important skill here.
  • Educators continue to play an important role in facilitating interaction, sharing information and resources, challenging assertions, and contributing to learners' growth of knowledge.