CTLT and Friends: Group items tagged "highlight"

Gary Brown

Sincerity in evaluation - highlights and lowlights « Genuine Evaluation - 3 views

  • Principles of Genuine Evaluation: When we set out to explore the notion of ‘Genuine Evaluation’, we identified 5 important aspects of it: VALUE-BASED – transparent and defensible values (criteria of merit and worth and standards of performance); EMPIRICAL – credible evidence about what has happened and what has caused this; USABLE – reported in such a way that it can be understood and used by those who can and should use it (which doesn’t necessarily mean it’s used or used well, of course); SINCERE – a commitment by those commissioning evaluation to respond to information about both success and failure (those doing evaluation can influence this but not control it); HUMBLE – acknowledges its limitations. From now until the end of the year, we’re looking at each of these principles and collecting some of the highlights and lowlights from 2010 (and previously).
  • Sincerity of evaluation is something that is often not talked about in evaluation reports, scholarly papers, or formal presentations, only discussed in the corridors and bars afterwards.  And yet it poses perhaps the greatest threat to the success of individual evaluations and to the whole enterprise of evaluation.
Jayme Jacobson

The Learning in Informal and Formal Environments (LIFE) Center » Blog Archive... - 1 views

  •  
    This looks like it might be something we would want to follow up on. I would like to see this in action.
Theron DesRosier

Virtual-TA - 2 views

  • We also developed a technology platform that allows our TAs to electronically insert detailed, actionable feedback directly into student assignments
  • Your instructors give us the schedule of assignments, when student assignments are due, when we might expect to receive them electronically, when the scored assignments will be returned, the learning outcomes on which to score the assignments, the rubrics to be used and the weights to be applied to different learning outcomes. We can use your rubrics to score assignments or design rubrics for sign-off by your faculty members.
  • review and embed feedback using color-coded pushpins (each color corresponds to a specific learning outcome) directly onto the electronic assignments. Color-coded pushpins provide a powerful visual diagnostic.
  • We do not have any contact with your students. Instructors retain full control of the process, from designing the assignments in the first place, to specifying learning outcomes and attaching weights to each outcome. Instructors also review the work of our TAs through a step called the Interim Check, which happens after 10% of the assignments have been completed. Faculty provide feedback, offer any further instructions and eventually sign-off on the work done, before our TAs continue with the remainder of the assignments
  • Finally, upon the request of the instructor, the weights he/she specified for the learning outcomes will be applied to the rubric-based scores, which are used to generate a composite score for each student assignment
  • As an added bonus, our Virtual-TAs provide a detailed, summative report for the instructor on the overall class performance on the given assignment, which includes a look at how the class fared on each outcome, where the students did well, where they stumbled and what concepts, if any, need reinforcing in class the following week.
  • We can also, upon request, generate reports by Student Learning Outcomes (SLOs). This report can be used by the instructor to immediately address gaps in learning at the individual or classroom level.
  • Think of this as a micro-closing-of-the-loop that happens each week. Contrast this with the broader closing-of-the-loop that accompanies program-level assessment of learning, which might happen at the end of a whole academic year or later!
  •  
    I went to Virtual-TA and highlighted their language describing how it works; a rough sketch of the weighted composite scoring they describe follows below.
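
Where the excerpt above mentions instructor-specified weights on learning outcomes being combined with rubric-based scores into a composite score, a minimal sketch of that arithmetic might look like the following. The outcome names, rubric scale, and weights are illustrative assumptions, not Virtual-TA's actual implementation.

```python
# Minimal sketch of weighted composite scoring from rubric scores.
# Outcome names, the 6-point scale, and the weights are illustrative assumptions.

def composite_score(outcome_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of rubric-based scores across learning outcomes."""
    total_weight = sum(weights.values())
    return sum(outcome_scores[o] * w for o, w in weights.items()) / total_weight

# Example: rubric scores on a 6-point scale, weighted toward critical thinking.
scores = {"critical_thinking": 5.0, "communication": 4.0, "integration": 3.5}
weights = {"critical_thinking": 0.5, "communication": 0.3, "integration": 0.2}
print(round(composite_score(scores, weights), 2))  # 4.4
```
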
Theron DesRosier

Google and WPP Marketing Research Awards - 0 views

  •  
    "Google and the WPP Group have teamed up to create a new research program to improve understanding and practices in online marketing, and to better understand the relationship between online and offline media. The Google and WPP Marketing Research Awards Program expects to support up to 12 awards in the range from $50,000 to $70,000. Awards will be in the form of unrestricted gifts to academic institutions, under the names of the researchers who submitted the proposal. Award recipients will be invited to participate in a meeting highlighting work in this area and will be encouraged to make their results available online and in professional publications."
Nils Peterson

The Huffington Post Allows Top Commenters To Become Bloggers - Publishing 2.0 - 0 views

  • they took a middle path, opening up an opportunity for ANYONE who actively comments on Huffington Post to become a blogger — but with one caveat…they have to EARN it. Or put another way — they are leveraging the power of the network, while still creating boundaries to channel value.
    • Nils Peterson
       
      How to become a HuffPost blogger. Gives insight into assessment scales
  • Since launching in May 2005, we’ve received more than 2.7 million comments, posted by over 115,000 commenters.
  • Our decision will be based on how many fans a commenter has, how often their comment is selected as a Favorite, and our moderators’ preferences. Every comment now has an “I’m A Fan Of” link and a “Favorite” link, so start voting for the comments and commenters you like best.
  • By using a “groupsourcing” method to highlight well-received commenters — from whom we’ll be able to choose new bloggers — we’re leveraging the power of the HuffPost community to serve as a filter, highlighting strong writers who have something to add to our group blog mix.
    • Nils Peterson
       
    So this is the crux of the issue for Cathy Davidson. Her syllabus proposes using a single criterion, "satisfactory," and it appears that it might work if the volume of voters is large and their demographics are sufficiently distributed. Also note that it is voting for the cream of the crop, not just the satisfactory. In a smaller setting, a scale with more than two values plus comments, as CTLT proposes, gives more chance for discrimination and value in the feedback. A rough sketch of the kind of groupsourced ranking the excerpt describes follows below.
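
The "groupsourcing" filter described above combines fan counts, Favorite selections, and moderator preferences. A hypothetical sketch of such a ranking follows; the signal weights and field names are illustrative assumptions, not HuffPost's actual method.

```python
# Hypothetical ranking of commenters by community and moderator signals.
from dataclasses import dataclass

@dataclass
class Commenter:
    name: str
    fans: int             # people who clicked "I'm A Fan Of"
    favorites: int        # times their comments were selected as a Favorite
    moderator_pick: bool  # flagged by a moderator

def rank_commenters(commenters: list[Commenter]) -> list[Commenter]:
    """Order commenters best-first by an assumed weighting of the three signals."""
    return sorted(
        commenters,
        key=lambda c: c.fans + 2 * c.favorites + (50 if c.moderator_pick else 0),
        reverse=True,
    )

candidates = [
    Commenter("alice", fans=120, favorites=30, moderator_pick=False),
    Commenter("bob", fans=40, favorites=10, moderator_pick=True),
]
print([c.name for c in rank_commenters(candidates)])  # ['alice', 'bob']
```
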
Nils Peterson

E-Portfolios for Learning: Limitations of Portfolios - 1 views

  • Today, Shavelson, Klein & Benjamin published an online article on Inside Higher Ed entitled, "The Limitations of Portfolios." The comments to that article are even more illuminating, and highlight the debate about electronic portfolios vs. accountability systems... assessment vs. evaluation. These arguments highlight what I think is a clash in philosophies of learning and assessment, between traditional, behaviorist models and more progressive, cognitive/constructivist models. How do we build assessment strategies that bridge these two approaches? Or is the divide too wide? Do these different perspectives support the need for multiple measures and triangulation?
    • Nils Peterson
       
      Helen responds to CLA proponents
Nils Peterson

EVOKE -- When spider webs unite, they can tie up a lion | A World Bank Blog on ICT use ... - 2 views

  • Question 4 – What happens when you bring 10,000 players together in an open innovation platform? A lot!  There have been many highlights these past 14 days and below are some of the more outstanding unexpected outcomes of how the game has taken on a life of its own
    • Nils Peterson
       
      So the game context mediated some peer-to-peer learning around authentic problems
Theron DesRosier

Debate Over P vs. NP Proof Highlights Web Collaboration - NYTimes.com - 1 views

  • The potential of Internet-based collaboration was vividly demonstrated this month when complexity theorists used blogs and wikis to pounce on a claimed proof for one of the most profound and difficult problems facing mathematicians and computer scientists.
  • “The proof required the piecing together of principles from multiple areas within mathematics. The major effort in constructing this proof was uncovering a chain of conceptual links between various fields and viewing them through a common lens.”
  • In this case, however, the significant breakthrough may not be in the science, but rather in the way science is practiced.
  • What was highly significant, however, was the pace of discussion and analysis, carried out in real time on blogs and a wiki that had been quickly set up for the purpose of collectively analyzing the paper.
  • Several of the researchers said that until now such proofs had been hashed out in colloquiums that required participants to be physically present at an appointed time. Now, with the emergence of Web-connected software programs it is possible for such collaborative undertakings to harness the brainpower of the world’s best thinkers on a continuous basis.
  • collaborative tools is paving the way for a second scientific revolution in the same way the printing press created a demarcation between the age of alchemy and the age of chemistry.
  • “The difference between the alchemists and the chemists was that the printing press was used to coordinate peer review,” he said. “The printing press didn’t cause the scientific revolution, but it wouldn’t have been possible without it.”
  • “It’s not just, ‘Hey, everybody, look at this,’ ” he said, “but rather a new set of norms is emerging about what it means to do mathematics, assuming coordinated participation.”
  •  
    "The difference between the alchemists and the chemists was that the printing press was used to coordinate peer review," he said. "The printing press didn't cause the scientific revolution, but it wouldn't have been possible without it." "The difference between the alchemists and the chemists was that the printing press was used to coordinate peer review," he said. "The printing press didn't cause the scientific revolution, but it wouldn't have been possible without it."
Gary Brown

Conference Highlights Contradictory Attitudes Toward Global Rankings - International - ... - 2 views

  • He emphasized, however, that "rankings are only useful if the indicators they use don't just measure things that are easy to measure, but the things that need to be measured."
  • "In Malaysia we do not call it a ranking exercise," she said firmly, saying that the effort was instead a benchmarking exercise that attempts to rate institutions against an objective standard.
  • "If Ranking Is the Disease, Is Benchmarking the Cure?" Jamil Salmi, tertiary education coordinator at the World Bank, said that rankings are "just the tip of the iceberg" of a growing accountability agenda, with students, governments, and employers all seeking more comprehensive information about institutions
  • "Rankings are the most visible and easy to understand" of the various measures, but they are far from the most reliable,
  • Jamie P. Merisotis
  • He described himself as a longtime skeptic of rankings, but noted that "these kinds of forums are useful, because you have to have conversations involving the producers of rankings, consumers, analysts, and critics."
Matthew Tedder

New studies highlight needs of boys in K-12, higher education - 1 views

  •  
    I've long suspected as much. We hear so much about how women's issues need addressing (and that's true), but let's not neglect male issues... as much as we males don't like to even think they exist.
Gary Brown

Mini-Digest of Education Statistics, 2009 - 0 views

  • This publication is a pocket-sized compilation of statistical information covering the broad field of American education from kindergarten through graduate school. The statistical highlights are excerpts from the Digest of Education Statistics, 2009.
  •  
    just released for 2009, great resource
Nils Peterson

Higher Ed/: TLT's Harvesting Feedback Project - 0 views

  • It's a fascinating project, and to me the most interesting design element is one not actually highlighted here, viz. that the plan is to be able to rate any kind of work anywhere on the Internet. The era of "enclosed garden" portfolio systems may be drawing (thankfully) to an end.
    • Nils Peterson
       
    Interesting that David picked up this implication from the work; it's something we didn't say but, I think, want to believe.
  • crowd-sourcing for assessment (you assess some of my students, I assess some of yours, for example) I wonder if the group has considered using Amazon's Mechanical Turk service as a cost-effective way of getting ratings from "the public."
    • Nils Peterson
       
    This is an interesting idea. I've started to follow up at Mechanical Turk and hope to develop a blog post.
Gary Brown

Top News - School of the Future: Lessons in failure - 0 views

  • School of the Future: Lessons in failure. How Microsoft's and Philadelphia's innovative school became an example of what not to do. By Meris Stansbury, Associate Editor. When it opened its doors in 2006, Philadelphia's School of the Future (SOF) was touted as a high school that would revolutionize education: It would teach at-risk students critical 21st-century skills needed for college and the work force by emphasizing project-based learning, technology, and community involvement. But three years, three superintendents, four principals, and countless problems later, experts at a May 28 panel discussion hosted by the American Enterprise Institute (AEI) agreed: The Microsoft-inspired project has been a failure so far. Microsoft points to the school's rapid turnover in leadership as the key reason for this failure, but other observers question why the company did not take a more active role in translating its vision for the school into reality. Regardless of where the responsibility lies, the project's failure to date offers several cautionary lessons in school reform--and panelists wondered if the school could use these lessons to succeed in the future.
  •  
    The discussion about Microsoft's Philadelphia School of the Future, which is failing so far. (Partial access to the article only.)
  •  
    I highlight this as a model where faculty and their teaching beliefs appear not to have been addressed.
Gary Brown

News: Turning Surveys Into Reforms - Inside Higher Ed - 0 views

  • Molly Corbett Broad, president of the American Council on Education, warned those gathered here that they would be foolish to think that accountability demands were a thing of the past.
  • She said that while she is “impressed” with the work of NSSE, she thinks higher education is “not moving fast enough” right now to have in place accountability systems that truly answer the questions being asked of higher education. The best bet for higher education, she said, is to more fully embrace various voluntary systems, and show that they are used to promote improvements.
  • One reason NSSE data are not used more, some here said, was the decentralized nature of American higher education. David Paris, executive director of the New Leadership Alliance for Student Learning and Accountability, said that “every faculty member is king or queen in his or her classroom.” As such, he said, “they can take the lessons of NSSE” about the kinds of activities that engage students, but they don’t have to. “There is no authority or dominant professional culture that could impel any faculty member to apply” what NSSE teaches about engaged learning, he said.
  • She stressed that NSSE averages may no longer reflect any single reality of one type of faculty member. She challenged Paris’s description of powerful faculty members by noting that many adjuncts have relatively little control over their pedagogy, and must follow syllabuses and rules set by others. So the power to execute NSSE ideas, she said, may not rest with those doing most of the teaching.
  • Research presented here, however, by the Wabash College National Study of Liberal Arts Education offered concrete evidence of direct correlations between NSSE attributes and specific skills, such as critical thinking skills. The Wabash study, which involves 49 colleges of all types, features cohorts of students being analyzed on various NSSE benchmarks (for academic challenge, for instance, or supportive campus environment or faculty-student interaction) and various measures of learning, such as tests to show critical thinking skills or cognitive skills or the development of leadership skills.
  • The irony of the Wabash work with NSSE data and other data, Blaich said, was that it demonstrates the failure of colleges to act on information they get -- unless someone (in this case Wabash) drives home the ideas.“In every case, after collecting loads of information, we have yet to find a single thing that institutions didn’t already know. Everyone at the institution didn’t know -- it may have been filed away,” he said, but someone had the data. “It just wasn’t followed. There wasn’t sufficient organizational energy to use that data to improve student learning.”
  • “I want to try to make the point that there is a distinction between participating in NSSE and using NSSE," he said. "In the end, what good is it if all you get is a report?"
  •  
    An interesting discussion, exploring basic questions CTLT folks are familiar with and grappling with how to use survey data and how to identify and address its limitations. Ten years after the launch of the National Survey of Student Engagement, many worry that colleges have been speedier to embrace giving the questionnaire than using its results, and some experts want changes in what the survey measures.
    I note these limitations, near the end of the article: Adrianna Kezar, associate professor of higher education at the University of Southern California, noted that NSSE's questions were drafted based on the model of students attending a single residential college. Indeed many of the questions concern out-of-class experiences (both academic and otherwise) that suggest someone is living in a college community. Kezar noted that this is no longer a valid assumption for many undergraduates. Nor is the assumption that they have time to interact with peers and professors out of class when many are holding down jobs. Nor is the assumption -- when students are "swirling" from college to college, or taking courses at multiple colleges at the same time -- that any single institution is responsible for their engagement.
    Further, Kezar noted that there is an implicit assumption in NSSE of faculty being part of a stable college community. Questions about seeing faculty members outside of class, she said, don't necessarily work when adjunct faculty members may lack offices or the ability to interact with students from one semester to the next. Kezar said that she thinks full-time adjunct faculty members may actually encourage more engagement than tenured professors because the adjuncts are focused on teaching and generally not on research. And she emphasized that concerns about the impact of part-time adjuncts on student engagement arise not out of criticism of those individuals, but of the system that assigns them teaching duties without much support.
  •  
    Repeat of highlighted resource, but merits revisiting.
Gary Brown

WSU Today Online - Current Article List - 0 views

  • the goal of the program is for students to submit their portfolios at the start of their junior year, and only about 34 percent are managing to do that.
  • Writing Assessment Program received the 2009 “Writing Program Certificate of Excellence”
  • If students delay completing their portfolio until late in their junior year, or into their senior year, she said, “it undermines the instructional integrity of the assessment.”
  • 70 percent of students submitted a paper as part of their portfolio that had been completed in a non-WSU course
  •  
    I ponder these highlights.