CTLT and Friends: Group items tagged "critical"

Gary Brown

Critical friend - Wikipedia, the free encyclopedia - 2 views

  • The Critical Friend is a powerful idea, perhaps because it contains an inherent tension. Friends bring a high degree of unconditional positive regard. Critics are, at first sight at least, conditional, negative and intolerant of failure. Perhaps the critical friend comes closest to what might be regarded as 'true friendship' - a successful marrying of unconditional support and unconditional critique.
  •  
    I've been wrestling again with the tension between supporting programs to help them improve and rating them for the accountability charge we hold. So I've been looking into the concept and practice of the "Critical Friend." Some tensions are inherent. This quote helps clarify.
Theron DesRosier

GothamSchools - 0 views

  •  
    Stanford professor Linda Darling-Hammond will chair Obama's transition team studying education policy. This sounds unremarkable, but just as Michelle Rhee and Joel Klein are lightning rods, so is Darling-Hammond. The main reason is that Darling-Hammond has been consistently skeptical of the nameless movement's efforts to shake up public schools. She has criticized Teach For America, the alternative certification program for teachers; high-stakes testing; and No Child Left Behind for narrowing the curriculum.
Gary Brown

Disciplines Follow Their Own Paths to Quality - Faculty - The Chronicle of Higher Educa... - 2 views

  • But when it comes to the fundamentals of measuring and improving student learning, engineering professors naturally have more to talk about with their counterparts at, say, Georgia Tech than with the humanities professors at Villanova
    • Gary Brown
       
      Perhaps this is too bad....
  • But there is no nationally normed way to measure the particular kind of critical thinking that students of classics acquire
  • [H]er colleagues have created discipline-specific critical-reasoning tests for classics and political science
  • ...5 more annotations...
  • Political science cultivates skills that are substantially different from those in classics, and in each case those skills can't be measured with a general-education test.
  • he wants to use tests of reasoning that are appropriate for each discipline
  • I believe Richard Paul has spent a lifetime articulating the characteristics of discipline-based critical thinking. But anyway, I think it is interesting that an attempt is being made to develop (perhaps) a "national standard" for critical thinking in classics. In order to assess anything effectively we need a standard. Without a standard there are no criteria and therefore no basis from which to assess. But standards do not necessarily have to be established at the national level. This raises the issue of scale. What is the appropriate scale from which to measure the quality and effectiveness of an educational experience? Any valid approach to quality assurance has to be multi-scaled and requires multiple measures over time. But to be honest, the issues of standards and scale are really just the tip of the outcomes iceberg.
    • Gary Brown
       
      Missing the notion that the variance is in the activity more than the criteria.  We hear little of embedding nationally normed and weighted assignments and then assessing the implementation and facilitation variables.... mirror, not lens.
  • the UW Study of Undergraduate Learning (UW SOUL). Results from the UW SOUL show that learning in college is disciplinary; therefore, real assessment of learning must occur (with central support and resources) in the academic departments. Generic approaches to assessing thinking, writing, research, quantitative reasoning, and other areas of learning may be measuring something, but they cannot measure learning in college.
  • It turns out there is a six-week (210+ hour) serious reading exposure to two or more domains outside one's own that "turns on" cross-domain mapping as a robust capability. Some people just happen to have accumulated this minimum level of basics, usually by unseen and unsensed happenstance involvements (rooming with an engineer, being the son of a dad changing domains/careers, etc.), which allows robust metaphor-based mapping.
Gary Brown

Would You Protect Your Computer's Feelings? Clifford Nass Says Yes. - ProfHacker - The ... - 2 views

  • why peer review processes often avoid, rather than facilitate, sound judgment
  • humans do not differentiate between computers and people in their social interactions.
  • no matter what "everyone knows," people act as if the computer secretly cares
  • ...4 more annotations...
  • users given completely random praise by a computer program liked it more than the same program without praise, even though they knew in advance the praise was meaningless.
  • Nass demonstrates, however, that people internalize praise and criticism differently—while we welcome the former, we really dwell on and obsess over the latter. In the criticism sandwich, then, "the criticism blasts the first list of positive achievements out of listeners' memory. They then think hard about the criticism (which will make them remember it better) and are on the alert to think even harder about what happens next. What do they then get? Positive remarks that are too general to be remembered"
  • And because we focus so much on the negative, having a similar number of positive and negative comments "feels negative overall"
  • The best strategy, he suggests, is "to briefly present a few negative remarks and then provide a long list of positive remarks... You should also provide as much detail as possible within the positive comments, even more than feels natural, because positive feedback is less memorable" (33).
  •  
    The implications for feedback issues are pretty clear.
Gary Brown

(How) Would You Use This Critical Thinking Video? at Beyond School - 3 views

  • This “Critical Thinking” video is worth a watch.
  •  
    It is worth a watch, and useful as a resource.
  •  
    This is well done - many potential applications - a self-check, for one, and for use in the myriad teaching situations we find ourselves in, both at work and outside of it.
Joshua Yeidel

Scholar Raises Doubts About the Value of a Test of Student Learning - Research - The Ch... - 3 views

  • Beginning in 2011, the 331 universities that participate in the Voluntary System of Accountability will be expected to publicly report their students' performance on one of three national tests of college-level learning.
  • But at least one of those three tests—the Collegiate Learning Assessment, or CLA—isn't quite ready to be used as a tool of public accountability, a scholar suggested here on Tuesday during the annual meeting of the Association for Institutional Research.
  • Students' performance on the test was strongly correlated with how long they spent taking it.
  • ...6 more annotations...
  • Besides the CLA, which is sponsored by the Council for Aid to Education, other tests that participants in the voluntary system may use are the Collegiate Assessment of Academic Proficiency, from ACT Inc., and the Measure of Academic Proficiency and Progress, offered by the Educational Testing Service.
  • The test has sometimes been criticized for relying on a cross-sectional system rather than a longitudinal model, in which the same students would be tested in their first and fourth years of college.
  • there have long been concerns about just how motivated students are to perform well on the CLA.
  • Mr. Hosch suggested that small groups of similar colleges should create consortia for measuring student learning. For example, five liberal-arts colleges might create a common pool of faculty members that would evaluate senior theses from all five colleges. "That wouldn't be a national measure," Mr. Hosch said, "but it would be much more authentic."
  • Mr. Shavelson said: "The challenge confronting higher education is for institutions to address the recruitment and motivation issues if they are to get useful data. From my perspective, we need to integrate assessment into teaching and learning as part of students' programs of study, thereby raising the stakes a bit while enhancing motivation of both students and faculty."
  • "I do agree with his central point that it would not be prudent to move to an accountability system based on cross-sectional assessments of freshmen and seniors at an institution," said Mr. Arum, who is an author, with Josipa Roksa, of Academically Adrift: Limited Learning on College Campuses, forthcoming from the University of Chicago Press
  •  
    CLA debunking, but the best item may be the forthcoming book on "Limited Learning on College Campuses."
  •  
    "Micheal Scriven and I spent more than a few years trying to apply his multiple-ranking item tool (a very robust and creative tool, I recommend it to others when the alternative is multiple-choice items) to the assessment of critical thinking in health care professionals. The result might be deemed partially successful, at best. I eventually abandoned the test after about 10,000 administrations because the scoring was so complex we could not place it in non-technical hands."
  •  
    In comments on an article about CLA, Scriven's name comes up...
Gary Brown

Best Colleges: The Real Rankings - CBS MoneyWatch.com - 2 views

  • Ultimately, though, the usefulness of any college ranking will depend on what criteria matters most to you and your teen. The best strategy: Use a few of the rankings to amass quantifiable and
  •  
    key advice for prospective college students--and a way to think about providing models that engage authentic learning opportunities as a critical benchmark.
Gary Brown

A Critic Sees Deep Problems in the Doctoral Rankings - Faculty - The Chronicle of Highe... - 1 views

  • This week he posted a public critique of the NRC study on his university's Web site.
  • "Little credence should be given" to the NRC's ranges of rankings.
  • "There's not very much real information about quality in the simple measures they've got."
  • ...4 more annotations...
  • The NRC project's directors say that those small samples are not a problem, because the reputational scores were not converted directly into program assessments. Instead, the scores were used to develop a profile of the kinds of traits that faculty members value in doctoral programs in their field.
  • For one thing, Mr. Stigler says, the relationships between programs' reputations and the various program traits are probably not simple and linear.
  • if these correlations between reputation and citations were plotted on a graph, the most accurate representation would be a curved line, not a straight line. (The curve would occur at the tipping point where high citation levels make reputations go sky-high.)
  • Mr. Stigler says that it was a mistake for the NRC to so thoroughly abandon the reputational measures it used in its previous doctoral studies, in 1982 and 1995. Reputational surveys are widely criticized, he says, but they do provide a check on certain kinds of qualitative measures.
  •  
    What is not challenged is the validity and utility of the construct itself--reputation rankings.
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Gary Brown

Researchers Criticize Reliability of National Survey of Student Engagement - Students -... - 3 views

  • "If each of the five benchmarks does not measure a distinct dimension of engagement and includes substantial error among its items, it is difficult to inform intervention strategies to improve undergraduates' educational experiences,"
  • [O]nly one benchmark, enriching educational experiences, had a significant effect on the seniors' cumulative GPA.
  • Other critics have asserted that the survey's mountains of data remain largely ignored.
  •  
    If the results are largely ignored, the psychometric integrity matters little.  There is no indication it is ignored because it lacks psychometric integrity.
Gary Brown

The Ticker - The Chronicle of Higher Education - 1 views

  • The U.S. Education Department today released a report critical of the Middle States Commission on Higher Education, saying the regional accrediting organization did not set minimum standards for its member institutions on program length or credit hours.
  • The accreditor responded that "the fundamental concern of higher education's constituencies is whether students graduate with appropriate knowledge, skills, and competencies, not how many hours they spend in a classroom."
  •  
    A critical indicator of why I see our work as work with accreditors rather than for accreditors.
Nils Peterson

Views: Changing the Equation - Inside Higher Ed - 1 views

  • But each year, after some gnashing of teeth, we opted to set tuition and institutional aid at levels that would maximize our net tuition revenue. Why? We were following conventional wisdom that said that investing more resources translates into higher quality and higher quality attracts more resources
  • ...19 more annotations...
  • those who control influential rating systems of the sort published by U.S. News & World Report -- define academic quality as small classes taught by distinguished faculty, grand campuses with impressive libraries and laboratories, and bright students heavily recruited. Since all of these indicators of quality are costly, my college’s pursuit of quality, like that of so many others, led us to seek more revenue to spend on quality improvements. And the strategy worked.
  • Based on those concerns, and informed by the literature on the “teaching to learning” paradigm shift, we began to change our focus from what we were teaching to what and how our students were learning.
  • No one wants to cut costs if their reputation for quality will suffer, yet no one wants to fall off the cliff.
  • When quality is defined by those things that require substantial resources, efforts to reduce costs are doomed to failure
  • some of the best thinkers in higher education have urged us to define the quality in terms of student outcomes.
  • Faculty said they wanted to move away from giving lectures and then having students parrot the information back to them on tests. They said they were tired of complaining that students couldn’t write well or think critically, but not having the time to address those problems because there was so much material to cover. And they were concerned when they read that employers had reported in national surveys that, while graduates knew a lot about the subjects they studied, they didn’t know how to apply what they had learned to practical problems or work in teams or with people from different racial and ethnic backgrounds.
  • Our applications have doubled over the last decade and now, for the first time in our 134-year history, we receive the majority of our applications from out-of-state students.
  • We established what we call college-wide learning goals that focus on "essential" skills and attributes that are critical for success in our increasingly complex world. These include critical and analytical thinking, creativity, writing and other communication skills, leadership, collaboration and teamwork, and global consciousness, social responsibility and ethical awareness.
  • despite claims to the contrary, many of the factors that drive up costs add little value. Research conducted by Dennis Jones and Jane Wellman found that “there is no consistent relationship between spending and performance, whether that is measured by spending against degree production, measures of student engagement, evidence of high impact practices, students’ satisfaction with their education, or future earnings.” Indeed, they concluded that “the absolute level of resources is less important than the way those resources are used.”
  • After more than a year, the group had developed what we now describe as a low-residency, project- and competency-based program. Here students don’t take courses or earn grades. The requirements for the degree are for students to complete a series of projects, captured in an electronic portfolio,
  • students must acquire and apply specific competencies
  • Faculty spend their time coaching students, providing them with feedback on their projects and running two-day residencies that bring students to campus periodically to learn through intensive face-to-face interaction
  • At the very least, finding innovative ways to lower costs without compromising student learning is wise competitive positioning for an uncertain future
  • As the campus learns more about the demonstration project, other faculty are expressing interest in applying its design principles to courses and degree programs in their fields. They created a Learning Coalition as a forum to explore different ways to capitalize on the potential of the learning paradigm.
  • a problem-based general education curriculum
  • After a year and a half, the evidence suggests that students are learning as much as, if not more than, those enrolled in our traditional business program
  • the focus of student evaluations has changed noticeably. Instead of focusing almost 100% on the instructor and whether he/she was good, bad, or indifferent, our students' evaluations are now focusing on the students themselves - as to what they learned, how much they have learned, and how much fun they had learning.
    • Nils Peterson
       
      Gary diigoed this article. This comment shines another light -- the focus of the course eval shifted from faculty member to course & student learning when the focus shifted from teaching to learning.
  •  
    A must-read spotted by Jane Sherman--I've highlighted, as usual, much of it.
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • ...13 more annotations...
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university  are gathering data about how their  students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased--there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: Of what use is the VSA?
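    The rubric "norming" mentioned in the highlights above checks that trained raters agree before their scores are trusted. As a hedged sketch (hypothetical ratings, not the UC procedure), interrater reliability is often quantified with a statistic such as Cohen's kappa:

        # Hypothetical rubric ratings (1-4) from two trained faculty raters
        # scoring the same ten student proposals.
        from sklearn.metrics import cohen_kappa_score

        rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
        rater_b = [3, 2, 4, 2, 1, 2, 3, 4, 2, 4]

        kappa = cohen_kappa_score(rater_a, rater_b)
        print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement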
Gary Brown

New test measures students' digital literacy | eCampus News - 0 views

  • Employers are looking for candidates who can navigate, critically evaluate, and make sense of the wealth of information available through digital media—and now educators have a new way to determine a student’s baseline digital literacy with a certification exam that measures the test-taker’s ability to assess information, think critically, and perform a range of real-world tasks.
  • iCritical Thinking Certification, created by the Educational Testing Service and Certiport, reveals whether or not a person is able to combine technical skills with experiences and knowledge.
  • Monica Brooks, Marshall University’s assistant vice president for Information Technology: Online Learning and Libraries, said her school plans to use iCritical Thinking beginning in the fall.
  •  
    the alternate universe, a small step away...
Gary Brown

News: Turning Surveys Into Reforms - Inside Higher Ed - 0 views

  • Molly Corbett Broad, president of the American Council on Education, warned those gathered here that they would be foolish to think that accountability demands were a thing of the past.
  • She said that while she is “impressed” with the work of NSSE, she thinks higher education is “not moving fast enough” right now to have in place accountability systems that truly answer the questions being asked of higher education. The best bet for higher education, she said, is to more fully embrace various voluntary systems, and show that they are used to promote improvements.
  • One reason NSSE data are not used more, some here said, was the decentralized nature of American higher education. David Paris, executive director of the New Leadership Alliance for Student Learning and Accountability, said that “every faculty member is king or queen in his or her classroom.” As such, he said, “they can take the lessons of NSSE” about the kinds of activities that engage students, but they don’t have to. “There is no authority or dominant professional culture that could impel any faculty member to apply” what NSSE teaches about engaged learning, he said.
  • ...4 more annotations...
  • She stressed that NSSE averages may no longer reflect any single reality of one type of faculty member. She challenged Paris’s description of powerful faculty members by noting that many adjuncts have relatively little control over their pedagogy, and must follow syllabuses and rules set by others. So the power to execute NSSE ideas, she said, may not rest with those doing most of the teaching.
  • Research presented here, however, by the Wabash College National Study of Liberal Arts Education offered concrete evidence of direct correlations between NSSE attributes and specific skills, such as critical thinking skills. The Wabash study, which involves 49 colleges of all types, features cohorts of students being analyzed on various NSSE benchmarks (for academic challenge, for instance, or supportive campus environment or faculty-student interaction) and various measures of learning, such as tests to show critical thinking skills or cognitive skills or the development of leadership skills.
  • The irony of the Wabash work with NSSE data and other data, Blaich said, was that it demonstrates the failure of colleges to act on information they get -- unless someone (in this case Wabash) drives home the ideas.“In every case, after collecting loads of information, we have yet to find a single thing that institutions didn’t already know. Everyone at the institution didn’t know -- it may have been filed away,” he said, but someone had the data. “It just wasn’t followed. There wasn’t sufficient organizational energy to use that data to improve student learning.”
  • “I want to try to make the point that there is a distinction between participating in NSSE and using NSSE," he said. "In the end, what good is it if all you get is a report?"
  •  
    An interesting discussion, exploring basic questions CTLT folks are familiar with, grappling with the question of how to use survey data and how to identify and address limitations. Ten years after the launch of the National Survey of Student Engagement, many worry that colleges have been speedier to embrace giving the questionnaire than using its results. And some experts want changes in what the survey measures. I note these limitations, near the end of the article: Adrianna Kezar, associate professor of higher education at the University of Southern California, noted that NSSE's questions were drafted based on the model of students attending a single residential college. Indeed many of the questions concern out-of-class experiences (both academic and otherwise) that suggest someone is living in a college community. Kezar noted that this is no longer a valid assumption for many undergraduates. Nor is the assumption that they have time to interact with peers and professors out of class when many are holding down jobs. Nor is the assumption -- when students are "swirling" from college to college, or taking courses at multiple colleges at the same time -- that any single institution is responsible for their engagement. Further, Kezar noted that there is an implicit assumption in NSSE of faculty being part of a stable college community. Questions about seeing faculty members outside of class, she said, don't necessarily work when adjunct faculty members may lack offices or the ability to interact with students from one semester to the next. Kezar said that she thinks full-time adjunct faculty members may actually encourage more engagement than tenured professors because the adjuncts are focused on teaching and generally not on research. And she emphasized that concerns about the impact of part-time adjuncts on student engagement arise not out of criticism of those individuals, but of the system that assigns them teaching duties without much support.
  •  
    Repeat of highlighted resource, but merits revisiting.
Gary Brown

News: Assessing the Assessments - Inside Higher Ed - 0 views

  • In other words, a college that ranked in the 95th percentile for critical thinking using one of the tests would rank in roughly the same place using the critical thinking component of one of the other two tests, and vice versa.
    • Gary Brown
       
      A stellar example of critical thinking, this sentence.
  • "diversity in measurement" to satisfy faculty
Peggy Collins

The enterprise implications of Google Wave | Enterprise Web 2.0 | ZDNet.com - 1 views

  •  
    "What Google has done with the Wave protocol is essentially create a new kind of social media format that is distinctively different from blogs, wikis, activity streams, RSS, or most familiar online communication models except possibly IM. Both blogs and wikis were created in the era of page-oriented Web applications and haven't changed much since. In contrast, Google Wave is designed for real-time participation and editing of shared conversations and documents and is more akin to the simultaneous multiuser experience of Google Docs than with traditional blogs and wiki editing. Though Google is sometimes criticized for missing the social aspect of the Web, that is patently not the case with waves, which are fundamentally social in nature. Participants can be added in real-time, new conversations forked off (via private replies), social media sharing is assumed to be the norm, and connection with a user's contextual server-side data is also a core feature including location, search, and more. The result is stored in a persistent document known as a wave, access to which can be embedded anywhere that HTML can be embedded, whether that's a Web page or an enterprise portal. Users can then discover and interact with the wave, joining the conversation, adding more information, etc. Google has also leveraged its investments in Google Gadgets and OpenSocial, two key technologies for spreading online services beyond the original boundaries of the sites they came from. All in all, Google Wave is a smart and well-constructed bundle of collaborative capabilities with many of the modern sensibilities we've come to expect in the Web 2.0 era including an acutely social nature, rapid interaction, and community-based technology."
Joshua Yeidel

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • "The CLA is really an authentic assessment process,"
    • Joshua Yeidel
       
      What is the meaning of "authentic" in this statement? It certainly isn't "situated in the real world" or "of intrinsic value".
  • it measures analytical ability, problem-solving ability, critical thinking, and communication.
  • the CLA typically reports scores on a "value added" basis, controlling for the scores that students earned on the SAT or ACT while in high school.
    • Joshua Yeidel
       
      If SAT and ACT are measuring the same things as CLA, then why not just use them? If they are measuring different things, why "control for" their scores?
  • ...5 more annotations...
  • improved models of instruction.
  • add CLA-style assignments to their liberal-arts courses.
    • Joshua Yeidel
       
      Maybe the best way to prepare for the test, but is it the best way to develop analytical ability, et al.?
  • "If a college pays attention to learning and helps students develop their skills—whether they do that by participating in our programs or by doing things on their own—they probably should do better on the CLA,"
    • Joshua Yeidel
       
      Just in case anyone missed the message: pay attention to learning, and you'll _probably_ do better on the CLA. Get students to practice CLA tasks, and you _will_ do better on the CLA.
  • "Standardized tests of generic skills—I'm not talking about testing in the major—are so much a measure of what students bring to college with them that there is very little variance left out of which we might tease the effects of college," says Ms. Banta, who is a longtime critic of the CLA. "There's just not enough variance there to make comparative judgments about the comparative quality of institutions."
    • Joshua Yeidel
       
      It's not clear what "standardized tests" means in this comment. Does the "lack of variance" apply to all assessments (including, e.g., e-portfolios)?
  • Can the CLA fill both of those roles?
  •  
    A summary of the current state of "thinking" with regard to CLA. Many fallacies and contradictions are (unintentionally) exposed. At least CLA appears to be more about skills than content (though the question of how it is graded isn't even raised), but the "performance task" approach is the smallest possible step in that direction.
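    One highlight above notes that the CLA reports scores on a "value added" basis by controlling for SAT or ACT scores. As a minimal, hedged sketch (ordinary least squares on made-up numbers; CLA's actual adjustment model is not described here), "controlling for" typically means regressing the outcome on the entering score and reading the residual as value added:

        # Hypothetical data, for illustration only.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        sat = np.array([[1150], [1300], [1000], [1250], [1100]])  # entering SAT scores
        cla = np.array([1080, 1210, 990, 1130, 1120])             # senior CLA scores

        model = LinearRegression().fit(sat, cla)
        value_added = cla - model.predict(sat)  # positive residual = above expectation
        print(value_added.round(1))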
Gary Brown

A Final Word on the Presidents' Student-Learning Alliance - Measuring Stick - The Chron... - 1 views

  • I was very pleased to see the responses to the announcement of the Presidents’ Alliance as generally welcoming (“commendable,” “laudatory initiative,” “applaud”) the shared commitment of these 71 founding institutions to do more—and do it publicly and cooperatively—with regard to gathering, reporting, and using evidence of student learning.
  • establishing institutional indicators of educational progress that could be valuable in increasing transparency may not suggest what needs changing to improve results
  • As Adelman’s implied critique of the CLA indicates, we may end up with an indicator without connections to practice.
  • ...6 more annotations...
  • The Presidents’ Alliance’s focus on and encouragement of institutional efforts is important to making these connections and steps in a direct way supporting improvement.
  • Second, it is hard to disagree with the notion that ultimately evidence-based improvement will occur only if faculty members are appropriately trained and encouraged to improve their classroom work with undergraduates.
  • Certainly there has to be some connection between and among various levels of assessment—classroom, program, department, and institution—in order to have evidence that serves both to aid improvement and to provide transparency and accountability.
  • Presidents’ Alliance is setting forth a common framework of “critical dimensions” that institutions can use to evaluate and extend their own efforts, efforts that would include better reporting for transparency and accountability and greater involvement of faculty.
  • there is wide variation in where institutions are in their efforts, and we have a long way to go. But what is critical here is the public commitment of these institutions to work on their campuses and together to improve the gathering and reporting of evidence of student learning and, in turn, using evidence to improve outcomes.
  • The involvement of institutions of all types will make it possible to build a more coherent and cohesive professional community in which evidence-based improvement of student learning is tangible, visible, and ongoing.
Joshua Yeidel

Shaping Strategy in a World of Constant Disruption | BNET - 0 views

  •  
    "Hammered by relentless technological change, many companies take a reactive stance: They focus solely on keeping up, protecting their existing markets, and improving their performance. But a few companies take a proactive stance by executing shaping strategies: They use technology changes to create new business ecosystems that benefit themselves and other participants. Take Google's AdSense: It has reinvented the advertising business by enabling advertisers, content providers, and potential customers to connect with one another quickly, easily, and cheaply. To succeed, a shaping strategy needs a critical mass of participants, say Hagel, Brown, and Davison. Shapers can attract them by:
    * Convincingly articulating opportunities available to participants
    * Defining standards and practices that make participation easy and affordable
    * Demonstrating they have the conviction and resources for success and won't compete against participants
    Well-executed shaping strategies mobilize masses of players to learn from and share risk with one another - creating a profitable future for all."