
Gary Brown

Disciplines Follow Their Own Paths to Quality - Faculty - The Chronicle of Higher Educa... - 2 views

  • But when it comes to the fundamentals of measuring and improving student learning, engineering professors naturally have more to talk about with their counterparts at, say, Georgia Tech than with the humanities professors at Villanova
    • Gary Brown
       
      Perhaps this is too bad....
  • But there is no nationally normed way to measure the particular kind of critical thinking that students of classics acquire
  • Her colleagues have created discipline-specific critical-reasoning tests for classics and political science
  • ...5 more annotations...
  • Political science cultivates skills that are substantially different from those in classics, and in each case those skills can't be measured with a general-education test.
  • he wants to use tests of reasoning that are appropriate for each discipline
  • I believe Richard Paul has spent a lifetime articulating the characteristics of discipline based critical thinking. But anyway, I think it is interesting that an attempt is being made to develop (perhaps) a "national standard" for critical thinking in classics. In order to assess anything effectively we need a standard. Without a standard there are no criteria and therefore no basis from which to assess. But standards do not necessarily have to be established at the national level. This raises the issue of scale. What is the appropriate scale from which to measure the quality and effectiveness of an educational experience? Any valid approach to quality assurance has to be multi-scaled and requires multiple measures over time. But to be honest the issues of standards and scale are really just the tip of the outcomes iceberg.
    • Gary Brown
       
      Missing the notion that the variance is in the activity more than the criteria.  We hear little of embedding nationally normed and weighted assignments and then assessing the implementation and facilitation variables.... mirror, not lens.
  • the UW Study of Undergraduate Learning (UW SOUL). Results from the UW SOUL show that learning in college is disciplinary; therefore, real assessment of learning must occur (with central support and resources) in the academic departments. Generic approaches to assessing thinking, writing, research, quantitative reasoning, and other areas of learning may be measuring something, but they cannot measure learning in college.
  • It turns out there is a six-week, or 210+ hour, serious reading exposure to two or more domains outside one's own that "turns on" cross-domain mapping as a robust capability. Some people just happen to have accumulated, usually by unseen and unsensed happenstance involvements (rooming with an engineer, being the son of a dad changing domains/careers, etc.), this minimum level of basics that allows robust metaphor-based mapping.
Joshua Yeidel

Wired Campus: Randy Bass and Bret Eynon: Still Moving From Teaching to Learni... - 0 views

  •  
    What emerged from this work was a picture of learning that drew our attention to a series of intermediate thinking processes that characterize flexible thinking, processes that digital media are especially good at making visible. This includes such things as how students work through difficulty, consider alternative pathways to solve problems, speculate about ideas, and argue with one another about meaning. These kinds of thinking processes turn out to be much more than just cognitive. Motivation, confidence, fear, one's sense of identity, experience, as well as formal knowledge all come to bear on them.
Theron DesRosier

Academic Evolution: The Open Scholar - 0 views

  •  
    Think of the many publicly funded institutions of higher education, then think of the way those colleges and universities only reward their scholars if they are willing to conceal their expertise from the broader public that funded the institutions they work at. It's as unethical as it is unnecessary, but it will continue until institutions learn to be more publicly responsible with their intellectual resources, or until scholars reject the restrictive identity they are held to through the traditional reward system.
Gary Brown

Ranking Employees: Why Comparing Workers to Their Peers Can Often Backfire - Knowledge@... - 2 views

  • We live in a world full of benchmarks and rankings. Consumers use them to compare the latest gadgets. Parents and policy makers rely on them to assess schools and other public institutions,
  • "Many managers think that giving workers feedback about their performance relative to their peers inspires them to become more competitive -- to work harder to catch up, or excel even more. But in fact, the opposite happens," says Barankay, whose previous research and teaching has focused on personnel and labor economics. "Workers can become complacent and de-motivated. People who rank highly think, 'I am already number one, so why try harder?' And people who are far behind can become depressed about their work and give up."
  • Among the companies that use Mechanical Turk are Google, Yahoo and Zappos.com, the online shoe and clothing purveyor.
  • ...12 more annotations...
  • "Nothing is more compelling than data from actual workplace settings, but getting it is usually very hard."
  • Instead, the job without the feedback attracted more workers -- 254, compared with 76 for the job with feedback.
  • "This indicates that when people are great and they know it, they tend to slack off. But when they're at the bottom, and are told they're doing terribly, they are de-motivated," says Barankay.
  • In the second stage of the experiment
  • The aim was to determine whether giving people feedback affected their desire to do more work, as well as the quantity and quality of their work.
  • Of the workers in the control group, 66% came back for more work, compared with 42% in the treatment group. The members of the treatment group who returned were also 22% less productive than the control group. This seems to dispel the notion that giving people feedback might encourage high-performing workers to work harder to excel, and inspire low-ranked workers to make more of an effort.
  • it seems that people would rather not know how they rank compared to others, even though when we surveyed these workers after the experiment, 74% said they wanted feedback about their rank."
  • top performers move on to new challenges and low performers have no viable options elsewhere.
  • feedback about rank is detrimental to performance,"
  • it is well documented that tournaments, where rankings are tied to prizes, bonuses and promotions, do inspire higher productivity and performance.
  • "In workplaces where rankings and relative performance is very transparent, even without the intervention of management ... it may be better to attach financial incentives to rankings, as interpersonal comparisons without prizes may lead to lower effort," Barankay suggests. "In those office environments where people may not be able to assess and compare the performance of others, it may not be useful to just post a ranking without attaching prizes."
  • "The key is to devote more time to thinking about whether to give feedback, and how each individual will respond to it. If, as the employer, you think a worker will respond positively to a ranking and feel inspired to work harder, then by all means do it. But it's imperative to think about it on an individual level."
  •  
    The conflation of feedback with ranking confounds this. What is not done, and needs to be done, is to compare the motivational impact of providing constructive feedback. Presumably the study uses ranking in a strictly comparative context as well, and we do not see the influence of feedback relative to an absolute scale. Still, much in this piece to ponder....
Gary Brown

News: Turning Surveys Into Reforms - Inside Higher Ed - 0 views

  • Molly Corbett Broad, president of the American Council on Education, warned those gathered here that they would be foolish to think that accountability demands were a thing of the past.
  • She said that while she is “impressed” with the work of NSSE, she thinks higher education is “not moving fast enough” right now to have in place accountability systems that truly answer the questions being asked of higher education. The best bet for higher education, she said, is to more fully embrace various voluntary systems, and show that they are used to promote improvements.
  • One reason NSSE data are not used more, some here said, was the decentralized nature of American higher education. David Paris, executive director of the New Leadership Alliance for Student Learning and Accountability, said that “every faculty member is king or queen in his or her classroom.” As such, he said, “they can take the lessons of NSSE” about the kinds of activities that engage students, but they don’t have to. “There is no authority or dominant professional culture that could impel any faculty member to apply” what NSSE teaches about engaged learning, he said.
  • ...4 more annotations...
  • She stressed that NSSE averages may no longer reflect any single reality of one type of faculty member. She challenged Paris’s description of powerful faculty members by noting that many adjuncts have relatively little control over their pedagogy, and must follow syllabuses and rules set by others. So the power to execute NSSE ideas, she said, may not rest with those doing most of the teaching.
  • Research presented here, however, by the Wabash College National Study of Liberal Arts Education offered concrete evidence of direct correlations between NSSE attributes and specific skills, such as critical thinking skills. The Wabash study, which involves 49 colleges of all types, features cohorts of students being analyzed on various NSSE benchmarks (for academic challenge, for instance, or supportive campus environment or faculty-student interaction) and various measures of learning, such as tests to show critical thinking skills or cognitive skills or the development of leadership skills.
  • The irony of the Wabash work with NSSE data and other data, Blaich said, was that it demonstrates the failure of colleges to act on information they get -- unless someone (in this case Wabash) drives home the ideas. “In every case, after collecting loads of information, we have yet to find a single thing that institutions didn’t already know. Everyone at the institution didn’t know -- it may have been filed away,” he said, but someone had the data. “It just wasn’t followed. There wasn’t sufficient organizational energy to use that data to improve student learning.”
  • “I want to try to make the point that there is a distinction between participating in NSSE and using NSSE," he said. "In the end, what good is it if all you get is a report?"
  •  
    An interesting discussion, exploring basic questions CTLT folks are familiar with: how to use survey data, and how to identify and address its limitations. Ten years after the launch of the National Survey of Student Engagement, many worry that colleges have been speedier to embrace giving the questionnaire than using its results, and some experts want changes in what the survey measures.

    I note these limitations, near the end of the article: Adrianna Kezar, associate professor of higher education at the University of Southern California, noted that NSSE's questions were drafted based on the model of students attending a single residential college. Indeed, many of the questions concern out-of-class experiences (both academic and otherwise) that suggest someone is living in a college community. Kezar noted that this is no longer a valid assumption for many undergraduates. Nor is the assumption that they have time to interact with peers and professors out of class when many are holding down jobs. Nor is the assumption -- when students are "swirling" from college to college, or taking courses at multiple colleges at the same time -- that any single institution is responsible for their engagement.

    Further, Kezar noted that there is an implicit assumption in NSSE of faculty being part of a stable college community. Questions about seeing faculty members outside of class, she said, don't necessarily work when adjunct faculty members may lack offices or the ability to interact with students from one semester to the next. Kezar said that she thinks full-time adjunct faculty members may actually encourage more engagement than tenured professors because the adjuncts are focused on teaching and generally not on research. And she emphasized that concerns about the impact of part-time adjuncts on student engagement arise not out of criticism of those individuals, but of the system that assigns them teaching duties without much support.
  •  
    Repeat of highlighted resource, but merits revisiting.
Ashley Ater Kranov

Course Reminds Budding Ph.D.'s of the Damage They Can Do - Teaching - The Chronicle of ... - 1 views

  •  
    I think most of us are aware of and agree with what this author is positing. The purpose in sharing this is related to our work with departments in rethinking how they think about teaching and how they evaluate it. "People often think that education works either to improve you or to leave you as you were," Mr. Cahn says. "But that's not right. An unsuccessful education can ruin you. It can kill your interest in a topic. It can make you a less-good thinker. It can leave you less open to rational argument. So we do good and bad as teachers -- it's not just good or nothing."
Nils Peterson

An emerging model for open courses @ Dave's Educational Blog - 0 views

  • if I was going to advise any *learner* about pursuing their interest (and by definition, in an “open” situation the set of learners is not prescribed), I’d urge them to find an *existing* robust community of people already talking about that subject, and then focus on helping them develop skills to engage, as a newcomer, with existing conversations and communities.
    • Nils Peterson
       
      Says Scott Leslie. I think we have been saying similar things.
  • Can the two ideas– open, networked learning communities and open courses affiliated with and/or products from institutions not only co-exist, but feed off of one another? I get the asymmetry aspect, I really do, but I’m not convinced that institutions have no worth or that the situation for continuing– maybe even increasing– that worth is hopeless
  • @Scott Leslie. Thanks for your comment on the language of ‘courses’, or in my case ‘modules’. It has helped me realise that my approach to open education, post my looming retirement, may be trapped in the wrong mindset. I have been trying to think of how I can convert a module I teach at Leeds Uni that dies when I retire into an OE resource ‘in the wild’. I have been thinking about how it can be packaged as an OE module that a community or network of open learners can engage with and exploit/re-purpose according to individual and collective needs. I assumed that I and others would somehow organically become mentors (open tutors?) and flexibly help out as required. Perhaps I should be trying to develop links with existing communities engaged in discussions and projects around the discipline of my module and try to contribute there somehow. I think your comment illustrates the difficult transition in moving between open education as content (based on a formal education model) and open education as process that engages disparate audiences with varied agendas and objectives.
    • Nils Peterson
       
      Seems to be someone who wants to explore the fine line of releasing his modules into the wild. It might be interesting to engage him
Nils Peterson

Two Bits » Modulate This Book - 0 views

  • Free Software is good to think with… How does one re-mix scholarship? One of the central questions of this book is how Free Software and Free Culture think about re-using, re-mixing, modifying and otherwise building on the work of others. It seems obvious that the same question should be asked of scholarship. Indeed the idea that scholarship is cumulative and builds on the work of others is a bit of a platitude even. But how?
    • Nils Peterson
       
      This is Chris Kelty's site for his book Two Bits: The Cultural Significance of Free Software. I learned about the idea of a "recursive public" at the P2PU event, and from that found Kelty. This quote leads off the page inviting readers to "modulate" the book. The page before gives a free download in PDF and HTML, the CC license, and an invitation to remix, use, etc., and to "modulate" -- so I came to see what that term might mean.
  • I think Free Software is “good to think with” in the classic anthropological sense.  Part of the goal of launching Two Bits has been to experiment with “modulations” of the book–and of scholarship more generally–a subject discussed at length in the text. Free Software has provided a template, and a kind of inspiration for people to experiment with new modes of reuse, remixing, modulating and transducing collaboratively created objects.
  • As such, “Modulations” is a project, concurrent with the book, but not necessarily based on it, which is intended to explore the questions raised there, but in other works, with and by other scholars, a network of researchers and projects on free and open source software, on “recursive publics,” on publics and public sphere theory generally, and on new projects and problems confronted by Free Software and its practices…
Joshua Yeidel

Wired Campus: Lev Gonick: How Technology Will Reshape Academe After the Econo... - 0 views

  •  
    Where will higher education be the day after the current global economic crisis passes? If you think things will simply go back to the way they were once the economy recovers in a year or two, think again.
Gary Brown

(How) Would You Use This Critical Thinking Video? at Beyond School - 3 views

  • This “Critical Thinking” video is worth a watch.
  •  
    it is worth a watch, and as a resource
  •  
    This is well done - many potential applications - a self check, for one, and for use in the myriad of teaching situations we find ourselves in both in work and outside of work.
Matthew Tedder

A New School Teaches Students Through Videogames | Popular Science - 1 views

  •  
    Nothing engages students more powerfully than video games. It has just been very difficult to find ways to exploit this for educational purposes without destroying that effect in the process.

    My own best idea for the holy grail of a truly addictive game useful for very general and comprehensive educational purposes is an RTS game from an FPS perspective, beginning in neolithic times, in a persistent world. A student would begin as a primitive man and gradually work his way toward inventing all the technologies of the modern world while building his civilization. He'd invent each tool by learning its physics and usefulness, then add it to the village he founds to expand it. The village and eventual civilization, along with its annals, would be an e-portfolio (which is why the world needs to be persistent, not starting fresh each time the student logs on -- he must always be building upon the foundations already established). The student would design the economic system, etc., and his "subjects" would follow the rules he stipulates. He could trade with the villages of others for items he needs to get ahead but cannot produce himself until he learns the principles behind the technology. The population might also be given needs for entertainment, thus poetry, etc., for a more pacified people.

    Many ideas can be added within this framework. It's a student's own world in which he can feel safe, and for which he should develop more interest as it continues to operate even when he is offline (to increase engagement). Being multiplayer can also provide the social aspect and teamwork for shared goals... like, say, building a trading route and defending it from bandits, or investing materials in the construction of a dam and irrigation, etc. I have a basic design to build the infrastructure for this. There wouldn't by chance be any grants out there that might apply?
  •  
    I really like this game idea. Seems like it would be a monster of an undertaking not just for the game engine itself, but more so for the content. Let me know if you get this one off the ground.
  •  
    I realized while writing this that it would be difficult for education professionals to understand this concept. I should have known Shirey would get it.

    After so much experience in software, one starts to see two personality types: those who design software from a philosophical perspective and those who do so from an immediate, practical point of view. The philosophicals enjoy designing and writing new kinds of software. They are also the kind of people who tend to enjoy RTS games. The immediates struggle to write software from scratch, except where they understand some pre-known framework for writing software of the particular class. They are more often relegated to debugging and tweaking software. These people tend to prefer FPS games. Systems administrators tend to fall more into this category as well. It's a good complement, I think: I design and they maintain. Philosophicals tend not to be such good maintainers. Immediates tend to make good systems administrators, too.

    What this all suggests to me is that non-philosophicals (the particular type I mean -- don't use the term too generally) are unlikely to "get" the concept until they can see and use it. I would love to be proven wrong.

    I designed a framework that I think would make building this not so difficult or time-consuming. But yes, building content is a chore. Therefore, the way I designed the framework is to allow run-time additions and modifications. That is, you can start simple and gradually add content over time. I think this makes sense in any case, because as knowledge changes, so should educational content. Educational methods may also evolve. So I think it is very important that the mechanism for adding and editing be as easy to use as possible. This is where you want the input of non-software engineers... even non-gamers.
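The run-time content mechanism described in the comments above can be sketched as data-driven design: if technologies are just records with prerequisites, the engine stays fixed while content grows. This is a minimal, hypothetical sketch -- the `TechTree` class and the technology names are illustrative, not part of any actual framework:

```python
# Hypothetical sketch: a data-driven tech tree. Technologies are plain
# records (name + prerequisites), so new content can be registered at run
# time while the persistent world keeps running; the engine itself only
# checks prerequisites.

class TechTree:
    def __init__(self):
        self.techs = {}        # technology name -> set of prerequisite names
        self.unlocked = set()  # what this student's civilization has invented

    def register(self, name, prerequisites=()):
        """Add or update content without restarting the world."""
        self.techs[name] = set(prerequisites)

    def can_invent(self, name):
        return name in self.techs and self.techs[name] <= self.unlocked

    def invent(self, name):
        if not self.can_invent(name):
            missing = self.techs.get(name, set()) - self.unlocked
            raise ValueError(f"prerequisites not met: {sorted(missing)}")
        self.unlocked.add(name)

tree = TechTree()
tree.register("fire")
tree.register("pottery", ["fire"])
tree.register("bronze", ["fire", "pottery"])

tree.invent("fire")
tree.invent("pottery")
assert tree.can_invent("bronze")

# Content added later, mid-game, without touching the engine:
tree.register("irrigation", ["pottery"])
assert tree.can_invent("irrigation")
```

The point of the sketch is only the separation of concerns: educators could add or edit the records without going near the engine code.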
Theron DesRosier

The scientist and blogging - 1 views

  •  
    Some suggestions for scientists about blogging. "So what should you put in your blog? (1) Talk about your research. What have you done in the past? What are you working on at the moment? There is some controversy as to how transparent you should be when talking about your research (OMG, someone is going to steal my idea if I write it down! No wait, if everyone knows I said it first, then they can't steal it!), so it's up to you to decide how comfortable you are about sharing your research ideas. I'm old-fashioned enough that I tend towards the side that thinks we should be discreet about the details of what we're working on, but I also understand the side that wants everything to be out there. (2) Talk about other people's research. Do you agree with their results? Do you think that they missed something important? You may feel unqualified to criticize somebody else's work, but science does not advance through groupthink. Remember, part of your job as a scientist will be to review other people's papers. Now is as good a time as any to start practicing. (3) Talk about issues related to your research. Are you working on smartphones? Talk about how they're being integrated into museum visits. Working on accessibility issues? Talk about some of the problems that the handicapped encounter during their daily routine. Just make sure you choose to talk about something that interests you so that you feel motivated to write to your blog."
Nils Peterson

The World Question Center 2010 - 0 views

  • This year's Question is "How is the Internet changing the way YOU think?" Not "How is the Internet changing the way WE think?" We spent a lot of time going back and forth on "YOU" vs. "WE" and came to the conclusion to go with "YOU", the reason being that Edge is a conversation.
    • Nils Peterson
       
      EDGE question for 2010.
  • We wanted people to think about the "Internet", which includes, but is a much bigger subject than, the Web, an application on the Internet, or search, browsing, etc., which are apps on the Web. Back in 1996, computer scientist and visionary Danny Hillis pointed out that when it comes to the Internet, "Many people sense this, but don't want to think about it because the change is too profound."
Gary Brown

New test measures students' digital literacy | eCampus News - 0 views

  • Employers are looking for candidates who can navigate, critically evaluate, and make sense of the wealth of information available through digital media—and now educators have a new way to determine a student’s baseline digital literacy with a certification exam that measures the test-taker’s ability to assess information, think critically, and perform a range of real-world tasks.
  • iCritical Thinking Certification, created by the Educational Testing Service and Certiport, reveals whether or not a person is able to combine technical skills with experiences and knowledge.
  • Monica Brooks, Marshall University’s assistant vice president for Information Technology: Online Learning and Libraries, said her school plans to use iCritical Thinking beginning in the fall.
  •  
    the alternate universe, a small step away...
Nils Peterson

Accreditation and assessment in an Open Course - an opening proposal | Open Course in E... - 1 views

  • A good example of this may be a learning portfolio created by a student and reviewed by an instructor. The instructor might be looking for higher orders of learning... evidence of creative thinking, of the development of complex concepts, or looking for things like improvement.
    • Nils Peterson
       
      He starts with a portfolio reviewed by the instructor, but it gets better
  • There is a simple sense in which assessing people for this course involves tracking their willingness to participate in the discussion. I have claimed in many contexts that in fields in which the canon is difficult to identify, where what is 'true' is not possible to identify, knowledge becomes a negotiation. This will certainly be true in this course, so I think the most important part of the assessment will be whether the learner in question has collaborated, has participated, has ENGAGED with the material and with other participants of the course.
  • What we need, then, is a peer review model for assessment. We need people to take it as their responsibility to review the work of others, to confirm their engagement, and form community/networks of assessment that monitor and help each other.
  • ...4 more annotations...
  • (say... 3-5 other participants are willing to sign off on your participation)
    • Nils Peterson
       
      peer credentialling.
  • Evidence of contribution on course projects
    • Nils Peterson
       
      I would prefer he say "projects" where the learner has latitude to define the project, rather than a 'course project' where the agency seems to be outside the learner. See our diagram of last April, the learner should be working their problem in their community
  • I think for those that are looking for PD credit we should be able to use the proposed assessment model (once you guys make it better) for accreditation. You would end up with an email that said "i was assessed based on this model and was not found wanting" signed by facilitators (or other participants, as surely given the quality of the participants i've seen, they would qualify as people who could guarantee such a thing).
    • Nils Peterson
       
      Peer accreditation. It depends on the credibility of those signing off see also http://www.nilspeterson.com/2010/03/21/reimagining-both-learning-learning-institutions/
  • I think the Otago model would work well here. I call it the Otago model as Leigh Blackall's course at Otago was the first time i actually heard of someone doing it. In this model you do all the work in a given course, and then are assessed for credit AFTER the course by, essentially, challenging for PLAR. It's a nice distributed model, as it allows different people to get different credit for the same course.
    • Nils Peterson
       
      Challenging for a particular credit in an established institutional system, or making the claim that you have a useful solution to a problem and the solution merits "credit" in a particular system's procedures.
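The sign-off mechanism floated above ("3-5 other participants are willing to sign off on your participation") reduces to a simple threshold check over peer attestations. A minimal sketch, assuming a threshold of three and that self sign-offs are excluded -- both the threshold and all names here are my illustrative assumptions, not part of the proposal:

```python
# Hypothetical sketch of peer credentialling: a participant is credentialed
# once enough peers have vouched for their engagement. The threshold and the
# names are illustrative assumptions.

SIGNOFF_THRESHOLD = 3

def is_credentialed(participant, signoffs):
    """signoffs maps each participant to the set of peers who signed off."""
    vouchers = signoffs.get(participant, set())
    # Self sign-offs don't count toward the threshold.
    return len(vouchers - {participant}) >= SIGNOFF_THRESHOLD

signoffs = {
    "alice": {"bob", "carol", "dave"},  # three peers vouch for alice
    "bob": {"alice", "bob"},            # one peer plus a self sign-off
}
assert is_credentialed("alice", signoffs)
assert not is_credentialed("bob", signoffs)
```

As the comments above note, the credibility of such a scheme rests on who the signers are, not on the arithmetic; the code only shows how mechanically light the check itself is.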
Matthew Tedder

East Bay Express : Print This Story - 0 views

  •  
    It's not the African American aspect of this story that interests me. It is the aspect of attitudes--whether they be ethnically correlated or not. Politically problematic but I think this includes, at its core, crucial factors to consider. I think this research would have been better conducted not in consideration of ethnicity but rather groups as determined by criteria derived from factor analysis. To me, the point is that memes matter. Both behavioral and belief memes can characterize groups of friends (a better unit of study than a nebulous ethnicity) and provide them with a baseline of comparative likelinesses in achievements of various kinds.
Gary Brown

Best Colleges: The Real Rankings - CBS MoneyWatch.com - 2 views

  • Ultimately, though, the usefulness of any college ranking will depend on what criteria matters most to you and your teen. The best strategy: Use a few of the rankings to amass quantifiable and
  •  
    key advice for prospective college students--and a way to think about providing models that engage authentic learning opportunities as critical benchmark.
Nils Peterson

Half an Hour: Open Source Assessment - 0 views

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson
       
      I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion the gulf between different models of assessment became apparent. Designers of learning resources, for example, have only the vaguest of indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson
       
      assumes a test as the form of assessment, rather than something more open ended.
  • ...8 more annotations...
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson
       
      See our Where are you on the spectrum, Assessment is locked vs open
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes)
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives. Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them. Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say. Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less defined community, a person may achieve a composite score called a 'reputation'. This will vary from community to community.
  •  
    Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
Joshua Yeidel

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • "The CLA is really an authentic assessment process,"
    • Joshua Yeidel
       
      What is the meaning of "authentic" in this statement? It certainly isn't "situated in the real world" or "of intrinsic value".
  • it measures analytical ability, problem-solving ability, critical thinking, and communication.
  • the CLA typically reports scores on a "value added" basis, controlling for the scores that students earned on the SAT or ACT while in high school.
    • Joshua Yeidel
       
      If SAT and ACT are measuring the same things as CLA, then why not just use them? If they are measuring different things, why "control for" their scores?
  • ...5 more annotations...
  • improved models of instruction.
  • add CLA-style assignments to their liberal-arts courses.
    • Joshua Yeidel
       
      Maybe the best way to prepare for the test, but is it the best way to develop analytical ability, et al.?
  • "If a college pays attention to learning and helps students develop their skills—whether they do that by participating in our programs or by doing things on their own—they probably should do better on the CLA,"
    • Joshua Yeidel
       
      Just in case anyone missed the message: pay attention to learning, and you'll _probably_ do better on the CLA. Get students to practice CLA tasks, and you _will_ do better on the CLA.
  • "Standardized tests of generic skills—I'm not talking about testing in the major—are so much a measure of what students bring to college with them that there is very little variance left out of which we might tease the effects of college," says Ms. Banta, who is a longtime critic of the CLA. "There's just not enough variance there to make comparative judgments about the comparative quality of institutions."
    • Joshua Yeidel
       
      It's not clear what "standardized tests" means in this comment. Does the "lack of variance" apply to all assessments (including, e.g., e-portfolios)?
  • Can the CLA fill both of those roles?
  •  
    A summary of the current state of "thinking" with regard to CLA. Many fallacies and contradictions are (unintentionally) exposed. At least CLA appears to be more about skills than content (though the question of how it is graded isn't even raised), but the "performance task" approach is the smallest possible step in that direction.
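An aside on mechanics: the "value added" reporting described above amounts, conceptually, to a regression adjustment, scoring each student by how far their CLA result sits above or below what their SAT score alone would predict. A minimal numpy sketch with invented scores (none of these numbers come from the CLA or any real cohort):

```python
import numpy as np

# Hypothetical data for 6 students (all numbers invented for illustration):
# incoming SAT scores and later CLA scores.
sat = np.array([1050, 1120, 1200, 1280, 1350, 1450], dtype=float)
cla = np.array([1080, 1100, 1230, 1260, 1400, 1420], dtype=float)

# Fit CLA ~ SAT by ordinary least squares.
slope, intercept = np.polyfit(sat, cla, 1)

# "Value added" for each student: actual CLA score minus the score
# predicted from SAT alone.
predicted = intercept + slope * sat
value_added = cla - predicted

print(np.round(value_added, 1))
```

One property worth noting: with an intercept in the model, these residuals sum to zero by construction, so the measure is purely relative to the fitted cohort, not an absolute gauge of learning.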
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared student's CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AACU. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?