CTLT and Friends: Group items tagged "tests"

Gary Brown

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • Others say those who take the test have little motivation to do well, which makes it tough to draw conclusions from their performance.
  • "Everything that No Child Left Behind signified during the Bush administration—we operate 180 degrees away from that," says Roger Benjamin, president of the Council for Aid to Education, which developed and promotes the CLA. "We don't want this to be a high-stakes test. We're putting a stake in the ground on classic liberal-arts issues. I'm willing to rest my oar there. These core abilities, these higher-order skills, are very important, and they're even more important in a knowledge economy where everyone needs to deal with a surplus of information." Only an essay test, like the CLA, he says, can really get at those skills.
  • "The CLA is really an authentic assessment process," says Pedro Reyes, associate vice chancellor for academic planning and assessment at the University of Texas system.
  • "The Board of Regents here saw that it would be an important test because it measures analytical ability, problem-solving ability, critical thinking, and communication. Those are the skills that you want every undergraduate to walk away with." (Other large systems that have embraced the CLA include California State University and the West Virginia system.)
  • value added
  • We began by administering a retired CLA question, a task that had to do with analyzing crime-reduction strategies,
  • performance task that mirrors the CLA
  • Mr. Ernsting and Ms. McConnell are perfectly sincere about using CLA-style tasks to improve instruction on their campuses. But at the same time, colleges have a less high-minded motive for familiarizing students with the CLA style: It just might improve their scores when it comes time to take the actual test.
  • by 2012, the CLA scores of more than 100 colleges will be posted, for all the world to see, on the "College Portrait" Web site of the Voluntary System of Accountability, an effort by more than 300 public colleges and universities to provide information about life and learning on their campuses.
  • If familiarizing students with CLA-style tasks does raise their scores, then the CLA might not be a pure, unmediated reflection of the full range of liberal-arts skills. How exactly should the public interpret the scores of colleges that do not use such training exercises?
  • Trudy W. Banta, a professor of higher education and senior adviser to the chancellor for academic planning and evaluation at Indiana University-Purdue University at Indianapolis, believes it is a serious mistake to publicly release and compare scores on the test. There is too much risk, she says, that policy makers and the public will misinterpret the numbers.
  • most colleges do not use a true longitudinal model: That is, the students who take the CLA in their first year do not take it again in their senior year. The test's value-added model is therefore based on a potentially apples-and-oranges comparison.
  • freshman test-takers' scores are assessed relative to their SAT and ACT scores, and so are senior test-takers' scores. For that reason, colleges cannot game the test by recruiting an academically weak pool of freshmen and a strong pool of seniors. (A minimal sketch of this adjustment follows this list.)
  • students do not always have much motivation to take the test seriously
  • seniors, who are typically recruited to take the CLA toward the end of their final semester, when they can already taste the graduation champagne.
  • Of the few dozen universities that had already chosen to publish CLA data on that site, roughly a quarter of the reports appeared to include erroneous descriptions of the year-to-year value-added scores.
  • It is clear that CLA scores do reflect some broad properties of a college education.
  • Students' CLA scores improved if they took courses that required a substantial amount of reading and writing. Many students didn't take such courses, and their CLA scores tended to stay flat.
  • Colleges that make demands on students can actually develop their skills on the kinds of things measured by the CLA.
  • Mr. Shavelson believes the CLA's essays and "performance tasks" offer an unusually sophisticated way of measuring what colleges do, without relying too heavily on factual knowledge from any one academic field.
  • Politicians and consumers want easily interpretable scores, while colleges need subtler and more detailed data to make internal improvements.
  • The CLA is used at more than 400 colleges
  • Since its debut a decade ago, it has been widely praised as a sophisticated alternative to multiple-choice tests
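
A minimal sketch of the cross-sectional "value added" logic described above, assuming a simple linear SAT-to-CLA relation; the function name, coefficients, and cohort sizes are illustrative assumptions, not CAE's actual procedure:

```python
# Hypothetical illustration: freshman and senior cohorts are each scored
# against what their SAT scores alone would predict, so a college gains
# nothing by recruiting weak freshmen and strong seniors.
import numpy as np

rng = np.random.default_rng(42)

def expected_cla(sat):
    # Assumed linear relation between entering ability (SAT) and CLA score.
    return 200 + 0.9 * sat

sat_fr = rng.normal(1100, 150, 100)   # freshman SAT scores
sat_sr = rng.normal(1150, 150, 100)   # a deliberately stronger senior pool
cla_fr = expected_cla(sat_fr) + rng.normal(0, 80, 100)
cla_sr = expected_cla(sat_sr) + rng.normal(60, 80, 100)   # true gain ~60

# Residual = performance above or below the SAT-based expectation.
value_added = (cla_sr - expected_cla(sat_sr)).mean() \
            - (cla_fr - expected_cla(sat_fr)).mean()
print(f"estimated value added: {value_added:.1f} points")
```

Because both cohorts are compared against the same SAT-based expectation, the stronger senior pool by itself contributes nothing; only the roughly 60-point residual gain shows up as value added.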
Joshua Yeidel

Scholar Raises Doubts About the Value of a Test of Student Learning - Research - The Ch... - 3 views

  • Beginning in 2011, the 331 universities that participate in the Voluntary System of Accountability will be expected to publicly report their students' performance on one of three national tests of college-level learning.
  • But at least one of those three tests—the Collegiate Learning Assessment, or CLA—isn't quite ready to be used as a tool of public accountability, a scholar suggested here on Tuesday during the annual meeting of the Association for Institutional Research.
  • Students' performance on the test was strongly correlated with how long they spent taking it.
  • Besides the CLA, which is sponsored by the Council for Aid to Education, other tests that participants in the voluntary system may use are the Collegiate Assessment of Academic Proficiency, from ACT Inc., and the Measure of Academic Proficiency and Progress, offered by the Educational Testing Service.
  • The test has sometimes been criticized for relying on a cross-sectional system rather than a longitudinal model, in which the same students would be tested in their first and fourth years of college.
  • there have long been concerns about just how motivated students are to perform well on the CLA.
  • Mr. Hosch suggested that small groups of similar colleges should create consortia for measuring student learning. For example, five liberal-arts colleges might create a common pool of faculty members that would evaluate senior theses from all five colleges. "That wouldn't be a national measure," Mr. Hosch said, "but it would be much more authentic."
  • Mr. Shavelson said. "The challenge confronting higher education is for institutions to address the recruitment and motivation issues if they are to get useful data. From my perspective, we need to integrate assessment into teaching and learning as part of students' programs of study, thereby raising the stakes a bit while enhancing motivation of both students and faculty."
  • "I do agree with his central point that it would not be prudent to move to an accountability system based on cross-sectional assessments of freshmen and seniors at an institution," said Mr. Arum, who is an author, with Josipa Roksa, of Academically Adrift: Limited Learning on College Campuses, forthcoming from the University of Chicago Press
  •  
    CLA debunking, but the best item may be the forthcoming book on "limited learning on college campuses."
  •  
    "Micheal Scriven and I spent more than a few years trying to apply his multiple-ranking item tool (a very robust and creative tool, I recommend it to others when the alternative is multiple-choice items) to the assessment of critical thinking in health care professionals. The result might be deemed partially successful, at best. I eventually abandoned the test after about 10,000 administrations because the scoring was so complex we could not place it in non-technical hands."
  •  
    In comments on an article about CLA, Scriven's name comes up...
Nils Peterson

Half an Hour: Open Source Assessment - 0 views

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson
       
      I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum, I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion the gulf between different models of assessment became apparent. Designers of learning resources, for example, have only the vaguest of indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson
       
      assumes a test as the form of assessment, rather than something more open ended.
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson
       
      See our "Where are you on the spectrum?" item: Assessment is locked vs. open.
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes)
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives. Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them. Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say. Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less defined community, a person may achieve a composite score called a 'reputation'. This will vary from community to community. (A toy sketch of such a composite follows this entry.)
  •  
    Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
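
A toy sketch of the "reputation" idea in the excerpt above: a composite score built from weighted community inputs. The input names, weights, and 0-1 scale are invented for illustration; real systems use richer, community-specific rules.

```python
# Toy reputation aggregator: a composite score from community signals.
WEIGHTS = {"peer_endorsements": 0.5, "artifact_ratings": 0.3, "citations": 0.2}

def reputation(inputs: dict[str, float]) -> float:
    """Weighted average of normalized (0-1) community signals."""
    return sum(WEIGHTS[k] * inputs.get(k, 0.0) for k in WEIGHTS)

print(reputation({"peer_endorsements": 0.8,
                  "artifact_ratings": 0.6,
                  "citations": 0.4}))
# 0.66 -- a score whose meaning would vary from community to community
```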
Gary Brown

Want Students to Take an Optional Test? Wave 25 Bucks at Them - Students - The Chronicl... - 0 views

  • cash, appears to be the single best approach for colleges trying to recruit students to volunteer for institutional assessments and other low-stakes tests with no bearing on their grades.
  • American Educational Research Association
  • A college's choice of which incentive to offer does not appear to have a significant effect on how students end up performing, but it can have a big impact on colleges' ability to round up enough students for the assessments, the study found.
  • "I cannot provide you with the magic bullet that will help you recruit your students and make sure they are performing to the maximum of their ability," Mr. Steedle acknowledged to his audience at the Denver Convention Center. But, he said, his study results make clear that some recruitment strategies are more effective than others, and also offer some notes of caution for those examining students' scores.
  • The study focused on the council's Collegiate Learning Assessment, or CLA, an open-ended test of critical thinking and writing skills which is annually administered by several hundred colleges. Most of the colleges that use the test try to recruit 100 freshmen and 100 seniors to take it, but doing so can be daunting, especially for colleges that administer it in the spring, right when the seniors are focused on wrapping up their work and graduating.
  • The incentives that spurred students the least were the opportunity to help their college as an institution assess student learning, the opportunity to compare themselves to other students, a promise they would be recognized in some college publication, and the opportunity to put participation in the test on their resume.
  • The incentives which students preferred appeared to have no significant bearing on their performance. Those who appeared most inspired by a chance to earn 25 dollars did not perform better on the CLA than those whose responses suggested they would leap at the chance to help out a professor.
  • What accounted for differences in test scores? Students' academic ability going into the test, as measured by characteristics such as their SAT scores, accounted for 34 percent of the variation in CLA scores among individual students. But motivation, independent of ability, accounted for 5 percent of the variation in test scores—a finding that, the paper says, suggests it is "sensible" for colleges to be concerned that students with low motivation are not posting scores that can allow valid comparisons with other students or valid assessments of their individual strengths and weaknesses. (A minimal sketch of this variance split follows this entry.)
  • A major limitation of the study was that Mr. Steedle had no way of knowing how the students who took the test were recruited. "If many of them were recruited using cash and prizes, it would not be surprising if these students reported cash and prizes as the most preferable incentives," his paper concedes.
  •  
    Since it is not clear whether the incentives offered influenced students' decisions to participate in this study, it remains similarly unclear whether incentives to participate correlate with performance.
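
A minimal sketch, on simulated data, of the variance-partitioning claim in this entry: the incremental R-squared from adding a motivation measure to a regression of CLA scores on SAT. The data, coefficients, and noise level are made-up assumptions tuned to land near the study's reported shares, not Steedle's data.

```python
# Incremental R^2: how much variance motivation explains beyond SAT.
import numpy as np

rng = np.random.default_rng(1)
n = 500
sat = rng.normal(1100, 150, n)
motivation = rng.normal(0, 1, n)              # e.g. self-reported effort
cla = 400 + 0.6 * sat + 34 * motivation + rng.normal(0, 120, n)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_sat = r_squared([sat], cla)
r2_both = r_squared([sat, motivation], cla)
print(f"R^2, SAT alone:          {r2_sat:.2f}")           # near 0.34 by construction
print(f"R^2 added by motivation: {r2_both - r2_sat:.2f}")  # near 0.05
```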
Theron DesRosier

Ethics in Assessment. ERIC Digest. - 2 views

  •  
    "Those who are involved with assessment are unfortunately not immune to unethical practices. Abuses in preparing students to take tests as well as in the use and interpretation of test results have been widely publicized. Misuses of test data in high-stakes decisions, such as scholarship awards, retention/promotion decisions, and accountability decisions, have been reported all too frequently. Even claims made in advertisements about the success rates of test coaching courses have raised questions about truth in advertising. Given these and other occurrences of unethical behavior associated with assessment, the purpose of this digest is to examine the available standards of ethical practice in assessment and the issues associated with implementation of these standards. "
Joshua Yeidel

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • "The CLA is really an authentic assessment process,"
    • Joshua Yeidel
       
      What is the meaning of "authentic" in this statement? It certainly isn't "situated in the real world" or "of intrinsic value".
  • add CLA-style assignments to their liberal-arts courses.
    • Joshua Yeidel
       
      Maybe the best way to prepare for the test, but is it the best way to develop analytical ability, etc.?
  • the CLA typically reports scores on a "value added" basis, controlling for the scores that students earned on the SAT or ACT while in high school.
    • Joshua Yeidel
       
      If SAT and ACT are measuring the same things as CLA, then why not just use them? If they are measuring different things, why "control for" their scores?
  • improved models of instruction.
  • it measures analytical ability, problem-solving ability, critical thinking, and communication.
  • "If a college pays attention to learning and helps students develop their skills—whether they do that by participating in our programs or by doing things on their own—they probably should do better on the CLA,"
    • Joshua Yeidel
       
      Just in case anyone missed the message: pay attention to learning, and you'll _probably_ do better on the CLA. Get students to practice CLA tasks, and you _will_ do better on the CLA.
  • "Standardized tests of generic skills—I'm not talking about testing in the major—are so much a measure of what students bring to college with them that there is very little variance left out of which we might tease the effects of college," says Ms. Banta, who is a longtime critic of the CLA. "There's just not enough variance there to make comparative judgments about the comparative quality of institutions."
    • Joshua Yeidel
       
      It's not clear what "standardized tests" means in this comment. Does the "lack of variance" apply to all assessments (including, e.g., e-portfolios)?
  • Can the CLA fill both of those roles?
  •  
    A summary of the current state of "thinking" with regard to CLA. Many fallacies and contradictions are (unintentionally) exposed. At least CLA appears to be more about skills than content (though the question of how it is graded isn't even raised), but the "performance task" approach is the smallest possible step in that direction.
Gary Brown

New test measures students' digital literacy | eCampus News - 0 views

  • Employers are looking for candidates who can navigate, critically evaluate, and make sense of the wealth of information available through digital media—and now educators have a new way to determine a student’s baseline digital literacy with a certification exam that measures the test-taker’s ability to assess information, think critically, and perform a range of real-world tasks.
  • iCritical Thinking Certification, created by the Educational Testing Service and Certiport, reveals whether or not a person is able to combine technical skills with experiences and knowledge.
  • Monica Brooks, Marshall University’s assistant vice president for Information Technology: Online Learning and Libraries, said her school plans to use iCritical Thinking beginning in the fall.
  •  
    the alternate universe, a small step away...
Joshua Yeidel

Performance Assessment | The Alternative to High Stakes Testing - 0 views

  •  
    " The New York Performance Standards Consortium represents 28 schools across New York State. Formed in 1997, the Consortium opposes high stakes tests arguing that "one size does not fit all." Despite skepticism that an alternative to high stakes tests could work, the New York Performance Standards Consortium has done just that...developed an assessment system that leads to quality teaching, that enhances rather than compromises our students' education. Consortium school graduates go on to college and are successful."
Gary Brown

Does testing for statistical significance encourage or discourage thoughtful ... - 1 views

  • Does testing for statistical significance encourage or discourage thoughtful data analysis? Posted by Patricia Rogers on October 20th, 2010
  • Epidemiology, 9(3): 333–337), which argues not only for thoughtful interpretation of findings, but for not reporting statistical significance at all.
  • We also would like to see the interpretation of a study based not on statistical significance, or lack of it, for one or more study variables, but rather on careful quantitative consideration of the data in light of competing explanations for the findings.
  • we prefer a researcher to consider whether the magnitude of an estimated effect could be readily explained by uncontrolled confounding or selection biases, rather than simply to offer the uninspired interpretation that the estimated effect is significant, as if neither chance nor bias could then account for the findings.
  • Many data analysts appear to remain oblivious to the qualitative nature of significance testing.
  • statistical significance is itself only a dichotomous indicator.
  • it cannot convey much useful information
  • Even worse, those two values often signal just the wrong interpretation. These misleading signals occur when a trivial effect is found to be ’significant’, as often happens in large studies, or when a strong relation is found ’nonsignificant’, as often happens in small studies.
  • Another useful paper on this issue is Kristin Sainani (2010), "Misleading Comparisons: The Fallacy of Comparing Statistical Significance," Physical Medicine and Rehabilitation, Vol. 2 (June), 559-562, which discusses the need to look carefully at within-group differences as well as between-group differences, and at sub-group significance compared to interaction. She concludes: "Readers should have a particularly high index of suspicion for controlled studies that fail to report between-group comparisons, because these likely represent attempts to 'spin' null results."
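
A minimal sketch of the dichotomy criticized above, using a two-sample t-test on simulated data (numpy and scipy assumed; all numbers are illustrative): a trivial effect comes out "significant" in a large sample, while a strong effect in a small sample often does not.

```python
# Significance is a function of sample size as much as of effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Large study, trivial true effect (0.05 SD): usually "significant".
a = rng.normal(0.00, 1, 10_000)
b = rng.normal(0.05, 1, 10_000)
print("trivial effect, n=10,000/group: p =",
      round(stats.ttest_ind(a, b).pvalue, 4))

# Small study, strong true effect (0.8 SD): frequently "nonsignificant",
# because power at n=12 per group is under roughly 50%.
c = rng.normal(0.0, 1, 12)
d = rng.normal(0.8, 1, 12)
print("strong effect, n=12/group:      p =",
      round(stats.ttest_ind(c, d).pvalue, 4))
```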
Theron DesRosier

In Honor of the Standardized Testing Season… « Let's Play Math! - 0 views

  • — Jonathan Kozol at Westfield State College’s 157th Commencement
  •  
    If you could lead through testing, the U.S. would lead the world in all education categories. When are people going to understand you don't fatten your lambs by weighing them? - Jonathan Kozol at Westfield State College's 157th Commencement
Joshua Yeidel

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 0 views

  •  
    High School Principal George Wood eloquently contrasts standardized NCLB-style testing and his school's term-end performance testing.
Gary Brown

No Tests, No Grades = More Graduates? - 0 views

  • At an alternative high school in Newark, students will make presentations instead of taking tests and receive written progress reports instead of grades. They will use few textbooks and divide their school weeks between the classroom and an internship,
  •  
    inch by inch new models make the news and subsequently make progress
Gary Brown

Saving Public Universities - 0 views

  • Many public universities do offer online courses while primarily maintaining traditional ones. But the public higher-education model for the future may already exist: the completely online Western Governors University (WGU), launched in 1998. Back then, it was described as highly controversial. Now WGU is the largest virtual university in the United States, using technology to offer a flexible structure and reasonable pricing to meet adult learners’ needs.
  • keeps its costs down by relying heavily on technology and independent learning resources, and by using a student-centric model versus a professor-centric approach
  • Additionally WGU is the first and only system that gives students credit for what they know rather than the courses they complete.
  • “As you take a course at WGU, you pass it by passing certain tests along the way,” Thomasian said. “Your tests aren’t on a set schedule in terms of, ‘You have to take it this month or that month.’ You can start moving those tests ahead, passing that competency and moving to the end of the course, and passing the competency for that.”
  • “It was fun to cross the 10,000 student threshold about two years ago,” Partridge said, “and we’re right at the door of 20,000 right now.”
  • Now he said the university enrolls approximately 1,000 new students each month.
  •  
    The rise of the faculty-free institution--should we worry?
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Gary Brown

Disciplines Follow Their Own Paths to Quality - Faculty - The Chronicle of Higher Educa... - 2 views

  • But when it comes to the fundamentals of measuring and improving student learning, engineering professors naturally have more to talk about with their counterparts at, say, Georgia Tech than with the humanities professors at Villanova
    • Gary Brown
       
      Perhaps this is too bad....
  • But there is no nationally normed way to measure the particular kind of critical thinking that students of classics acquire
  • Her colleagues have created discipline-specific critical-reasoning tests for classics and political science
  • Political science cultivates skills that are substantially different from those in classics, and in each case those skills can't be measured with a general-education test.
  • he wants to use tests of reasoning that are appropriate for each discipline
  • I believe Richard Paul has spent a lifetime articulating the characteristics of discipline based critical thinking. But anyway, I think it is interesting that an attempt is being made to develop (perhaps) a "national standard" for critical thinking in classics. In order to assess anything effectively we need a standard. Without a standard there are no criteria and therefore no basis from which to assess. But standards do not necessarily have to be established at the national level. This raises the issue of scale. What is the appropriate scale from which to measure the quality and effectiveness of an educational experience? Any valid approach to quality assurance has to be multi-scaled and requires multiple measures over time. But to be honest the issues of standards and scale are really just the tip of the outcomes iceberg.
    • Gary Brown
       
      Missing the notion that the variance is in the activity more than the criteria.  We hear little of embedding nationally normed and weighted assignments and then assessing the implementation and facilitation variables.... mirror, not lens.
  • the UW Study of Undergraduate Learning (UW SOUL). Results from the UW SOUL show that learning in college is disciplinary; therefore, real assessment of learning must occur (with central support and resources) in the academic departments. Generic approaches to assessing thinking, writing, research, quantitative reasoning, and other areas of learning may be measuring something, but they cannot measure learning in college.
  • It turns out there is a six-week, or 210+ hour, serious reading exposure to two or more domains outside one's own that "turns on" cross-domain mapping as a robust capability. Some people just happen to have accumulated, usually by unseen and unsensed happenstance involvements (rooming with an engineer, son of a dad changing domains/careers, etc.), this minimum level of basics that allows robust metaphor-based mapping.
Gary Brown

Cross-Disciplinary Grading Techniques - ProfHacker - The Chronicle of Higher Education - 1 views

  • So far, the most useful tool to me, in physics, has been the rubric, which is used widely in grading open-ended assessments in the humanities.
  • This method has revolutionized the way I grade. No longer do I have to keep track of how many points are deducted from which type of misstep on what problem for how many students. In the past, I often would get through several tests before I realized that I wasn’t being consistent with the deduction of points, and then I’d have to go through and re-grade all the previous tests. Additionally, the rubric method encourages students to refer to a solution, which I post after the test is administered, and they are motivated to meet with me in person to discuss why they got a 2 versus a 3 on a given problem, for example.
  • This opens up the opportunity to talk with them personally about their problem-solving skills and how they can better them. The emphasis is moved away from point-by-point deductions and is redirected to a more holistic view of problem solving. (A small sketch of the rubric idea follows this entry.)
  •  
    In the heart of the home of the concept inventory--Physics
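
A minimal sketch of the rubric approach this entry describes: each problem receives an ordinal 0-3 level with a fixed descriptor, instead of ad-hoc point deductions. The descriptors and problem names are invented for illustration.

```python
# Fixed descriptors replace ad-hoc point deductions; scores are ordinal.
RUBRIC = {
    3: "complete, correct reasoning and result",
    2: "correct approach, minor execution errors",
    1: "relevant ideas, major gaps in reasoning",
    0: "no meaningful progress",
}

def grade(scores_by_problem: dict[str, int]) -> float:
    """Average rubric level across problems, on the 0-3 scale."""
    return sum(scores_by_problem.values()) / len(scores_by_problem)

print(grade({"kinematics": 3, "energy": 2, "momentum": 2}))  # ~2.33
```

Because every test is scored against the same descriptors, the grader stays consistent across the stack, and a student disputing "a 2 versus a 3" can be pointed at the descriptor rather than at a tally of deducted points.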
Joshua Yeidel

Op-Ed Contributor - Why Charter Schools Fail the Test - NYTimes.com - 1 views

  •  
    Charles Murray of the American Enterprise Institute waves a conservative flag for _abandoning_ standardized tests in education -- from a consumer's (parent's) standpoint
Joshua Yeidel

The Answer Sheet - A principal on standardized vs. teacher-written tests - 0 views

  •  
    High school principal George Wood eloquently contrasts standardized NCLB-style testing with his school's performance assessments.
Nils Peterson

HASTAC welcomes Howard Rheingold for a discussion on participatory learning | HASTAC - 0 views

  • I was eager to hear Howard Rheingold's thoughts on participatory learning and to learn more about his new course. In the video thread above, Howard goes into detail about the ways that "student-led collaborative inquiry and involvement... enlists their enthusiasm in ways that even very good lectures and texts don't." He details a loose set of what he calls meta-skills, which include: critical inquiry, pathfinding, balancing individual and collective voice, and attention-to-attention.
    • Nils Peterson
       
      I was looking for stuff on Participatory Learning in HASTAC and came across this. The meta-skills Rheingold cites are an interesting list
  • In a follow up video, Howard bemoans the quickness with which students tend to ask the question "what will be on the test?" His solution has increasingly been to have students decide collaboratively what material is important enough to merit this distinction. Here, the ability to make decisions collectively about the accountability of a group seems to call forth another meta-skill: balancing individual and collective voice.
    • Nils Peterson
       
      Here Rheingold seems to miss the mark, he has students learning things that are tested in class, rather than being tested in the world.
Kimberly Green

News: Class Advantage - Inside Higher Ed - 0 views

  •  
    Re SAT scores and college admissions: … Parents of all economic classes want their children to succeed, but the wealthier ones "better understand the postsecondary landscape and competitive admission process and they invest in resources to promote college attendance," she [Alon] writes. As a result test score gaps of high school seniors -- grouped by economic background -- have grown during recent years. Alon writes that as long as college admissions remains competitive, such trends will continue -- with wealthier parents finding ways to improve performance for their children, no matter what measures colleges use to sort applicants.