CTLT and Friends / Group items tagged

Gary Brown

Want Students to Take an Optional Test? Wave 25 Bucks at Them - Students - The Chronicl... - 0 views

  • Cash appears to be the single best approach for colleges trying to recruit students to volunteer for institutional assessments and other low-stakes tests with no bearing on their grades.
  • American Educational Research Association
  • A college's choice of which incentive to offer does not appear to have a significant effect on how students end up performing, but it can have a big impact on colleges' ability to round up enough students for the assessments, the study found.
  • "I cannot provide you with the magic bullet that will help you recruit your students and make sure they are performing to the maximum of their ability," Mr. Steedle acknowledged to his audience at the Denver Convention Center. But, he said, his study results make clear that some recruitment strategies are more effective than others, and also offer some notes of caution for those examining students' scores.
  • The study focused on the council's Collegiate Learning Assessment, or CLA, an open-ended test of critical thinking and writing skills which is annually administered by several hundred colleges. Most of the colleges that use the test try to recruit 100 freshmen and 100 seniors to take it, but doing so can be daunting, especially for colleges that administer it in the spring, right when the seniors are focused on wrapping up their work and graduating.
  • The incentives that spurred students the least were the opportunity to help their college as an institution assess student learning, the opportunity to compare themselves to other students, a promise they would be recognized in some college publication, and the opportunity to put participation in the test on their resume.
  • The incentives which students preferred appeared to have no significant bearing on their performance. Those who appeared most inspired by a chance to earn 25 dollars did not perform better on the CLA than those whose responses suggested they would leap at the chance to help out a professor.
  • What accounted for differences in test scores? Students' academic ability going into the test, as measured by characteristics such as their SAT scores, accounted for 34 percent of the variation in CLA scores among individual students. But motivation, independent of ability, accounted for 5 percent of the variation in test scores—a finding that, the paper says, suggests it is "sensible" for colleges to be concerned that students with low motivation are not posting scores that can allow valid comparisons with other students or valid assessments of their individual strengths and weaknesses.
  • A major limitation of the study was that Mr. Steedle had no way of knowing how the students who took the test were recruited. "If many of them were recruited using cash and prizes, it would not be surprising if these students reported cash and prizes as the most preferable incentives," his paper concedes.
  •  
    Since it is not clear whether the incentive offered influenced students' decisions to participate in this study, it remains equally unclear whether incentives to participate correlate with performance.
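Steedle's variance figures (ability ~34%, motivation ~5%) are the kind of result an incremental-R² comparison produces: fit the score on ability alone, then on ability plus motivation, and attribute the gain to motivation. A minimal sketch with simulated data — the coefficients and sample below are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Illustrative data: ability and motivation independently influence scores.
ability = rng.normal(size=n)
motivation = rng.normal(size=n)
score = 0.7 * ability + 0.3 * motivation + rng.normal(scale=0.6, size=n)

def r_squared(X, y):
    # R^2 of an ordinary least-squares fit with an intercept term
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_ability = r_squared(ability.reshape(-1, 1), score)
r2_both = r_squared(np.column_stack([ability, motivation]), score)

# Motivation's share is the R^2 it adds over ability alone
print(f"ability alone: {r2_ability:.2f}")
print(f"motivation adds: {r2_both - r2_ability:.2f}")
```

With real data, one would substitute SAT scores and a motivation survey for the simulated columns; the logic of the comparison is the same.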
Theron DesRosier

P2PU - Peer 2 Peer University / FrontPage - 0 views

shared by Theron DesRosier on 13 Aug 09
  • The Peer 2 Peer University (P2PU) is an online community of open study groups for short university-level courses. Think of it as online book clubs for open educational resources. The P2PU helps you navigate the wealth of open education materials that are out there, creates small groups of motivated learners, and supports the design and facilitation of courses. Students and tutors get recognition for their work, and we are building pathways to formal credit as well.
Theron DesRosier

University of the people - 0 views

  • One vision for the school of the future comes from the United Nations. Founded this year by the UN’s Global Alliance for Information and Communication Technology and Development (GAID), the University of the People is a not-for-profit institution that aims to offer higher education opportunities to people who generally couldn’t afford it by leveraging social media technologies and ideas. The school is a one hundred percent online institution, and utilizes open source courseware and peer-to-peer learning to deliver information to students without charging tuition. There are some costs, however. Students must pay an application fee (though the idea is to accept everyone who applies that has a high school diploma and speaks English), and when they’re ready, students must pay to take tests, which they are required to pass in order to continue their education. All fees are set on a sliding scale based on the student’s country of origin, and never exceed $100.
Gary Brown

Matthew Lombard - 0 views

  • 5. Which measure(s) of intercoder reliability should researchers use? There are literally dozens of different measures, or indices, of intercoder reliability. Popping (1988) identified 39 different "agreement indices" for coding nominal categories, which excludes several techniques for interval and ratio level data. But only a handful of techniques are widely used. In communication the most widely used indices are: percent agreement, Holsti's method, Scott's pi (π), Cohen's kappa (κ), and Krippendorff's alpha (α). Just some of the indices proposed, and in some cases widely used, in other fields are Perreault and Leigh's (1989) Ir measure; Tinsley and Weiss's (1975) T index; Bennett, Alpert, and Goldstein's (1954) S index; Lin's (1989) concordance coefficient; Hughes and Garrett's (1990) approach based on Generalizability Theory; and Rust and Cooil's (1994) approach based on "Proportional Reduction in Loss" (PRL). It would be nice if there were one universally accepted index of intercoder reliability. But despite all the effort that scholars, methodologists and statisticians have devoted to developing and testing indices, there is no consensus on a single, "best" one. While there are several recommendations for Cohen's kappa (e.g., Dewey (1983) argued that despite its drawbacks, kappa should still be "the measure of choice") and this index appears to be commonly used in research that involves the coding of behavior (Bakeman, 2000), others (notably Krippendorff, 1978, 1987) have argued that its characteristics make it inappropriate as a measure of intercoder agreement.
  •  
    for our formalizing of assessment work
  •  
    inter-rater reliability
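Two of the indices named above — percent agreement and Cohen's kappa — are simple to compute for a pair of coders. A minimal sketch, using made-up category codes:

```python
from collections import Counter

def percent_agreement(a, b):
    # share of units the two coders categorized identically
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    # chance-corrected agreement: (p_o - p_e) / (1 - p_e), where p_e is
    # the agreement expected from each coder's marginal frequencies
    n = len(a)
    p_o = percent_agreement(a, b)
    count_a, count_b = Counter(a), Counter(b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

coder1 = ["x", "x", "y", "y", "z", "z", "x", "y"]
coder2 = ["x", "y", "y", "y", "z", "x", "x", "y"]
print(percent_agreement(coder1, coder2))       # 0.75
print(round(cohens_kappa(coder1, coder2), 2))  # 0.61
```

Scott's pi differs only in computing expected agreement from the pooled marginals of both coders, and Krippendorff's alpha generalizes further to multiple coders, missing data, and other levels of measurement.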
Gary Brown

Views: The White Noise of Accountability - Inside Higher Ed - 2 views

  • We don’t really know what we are saying
  • “In education, accountability usually means holding colleges accountable for the learning outcomes produced.” One hopes Burck Smith, whose paper containing this sentence was delivered at an American Enterprise Institute conference last November, held a firm tongue-in-cheek with the core phrase.
  • Our adventure through these questions is designed as a prodding to all who use the term to tell us what they are talking about before they otherwise simply echo the white noise.
  • when our students attend three or four schools, the subject of these sentences is considerably weakened in terms of what happens to those students.
  • Who or what is one accountable to?
  • For what?
  • Why that particular “what” -- and not another “what”?
  • To what extent is the relationship reciprocal? Are there rewards and/or sanctions inherent in the relationship? How continuous is the relationship?
  • In the Socratic moral universe, one is simultaneously witness and judge. The Greek syneidesis (“conscience” and “consciousness”) means to know something with, so to know oneself with oneself becomes an obligation of institutions and systems -- to themselves.
  • Obligation becomes self-reflexive.
  • There are no external authorities here. We offer, we accept, we provide evidence, we judge. There is nothing wrong with this: it is indispensable, reflective self-knowledge. And provided we judge without excuses, we hold to this Socratic moral framework. As Peter Ewell has noted, the information produced under this rubric, particularly in the matter of student learning, is “part of our accountability to ourselves.”
  • But is this “accountability” as the rhetoric of higher education uses the white noise -- or something else?
  • in response to shrill calls for “accountability,” U.S. higher education has placed all its eggs in the Socratic basket, but in a way that leaves the basket half-empty. It functions as the witness, providing enormous amounts of information, but does not judge that information.
  • Every single “best practice” cited by Aldeman and Carey is subject to measurement: labor market histories of graduates, ratios of resource commitment to various student outcomes, proportion of students in learning communities or taking capstone courses, publicly-posted NSSE results, undergraduate research participation, space utilization rates, licensing income, faculty patents, volume of non-institutional visitors to art exhibits, etc. etc. There’s nothing wrong with any of these, but they all wind up as measurements, each at a different concentric circle of putatively engaged acceptees of a unilateral contract to provide evidence. By the time one plows through Aldeman and Carey’s banquet, one is measuring everything that moves -- and even some things that don’t.
  • Sorry, but basic capacity facts mean that consumers cannot vote with their feet in higher education.
  • If we glossed the Socratic notion on provision-of-information, the purpose is self-improvement, not comparison. The market approach to accountability implicitly seeks to beat Socrates by holding that I cannot serve as both witness and judge of my own actions unless the behavior of others is also on the table. The self shrinks: others define the reference points. “Accountability” is about comparison and competition, and an institution’s obligations are only to collect and make public those metrics that allow comparison and competition. As for who judges the competition, we have a range of amorphous publics and imagined authorities.
  • There are no formal agreements here: this is not a contract, it is not a warranty, it is not a regulatory relationship. It isn’t even an issue of becoming a Socratic self-witness and judge. It is, instead, a case in which one set of parties, concentrated in places of power, asks another set of parties, diffuse and diverse, “to disclose more and more about academic results,” with the second set of parties responding in their own terms and formulations. The environment itself determines behavior.
  • Ewell is right about the rules of the information game in this environment: when the provider is the institution, it will shape information “to look as good as possible, regardless of the underlying performance.”
  • U.S. News & World Report’s rankings
  • The messengers become self-appointed arbiters of performance, establishing themselves as the second party to which institutions and aggregates of institutions become “accountable.” Can we honestly say that the implicit obligation of feeding these arbiters constitutes “accountability”?
  • But if the issue is student learning, there is nothing wrong with -- and a good deal to be said for -- posting public examples of comprehensive examinations, summative projects, capstone course papers, etc. within the information environment, and doing so irrespective of anyone requesting such evidence of the distribution of knowledge and skills. Yes, institutions will pick what makes them look good, but if the public products resemble AAC&U’s “Our Students’ Best Work” project, they set off peer pressure for self-improvement and very concrete disclosure. The other prominent media messengers simply don’t engage in constructive communication of this type.
  • Ironically, a “market” in the loudest voices, the flashiest media productions, and the weightiest panels of glitterati has emerged to declare judgment on institutional performance in an age when student behavior has diluted the very notion of an “institution” of higher education. The best we can say is that this environment casts nothing but fog over the specific relationships, responsibilities, and obligations that should be inherent in something we call “accountability.” Perhaps it is about time that we defined these components and their interactions with persuasive clarity. I hope that this essay will invite readers to do so.
  • Clifford Adelman is senior associate at the Institute for Higher Education Policy. The analysis and opinions expressed in this essay are those of the author, and do not necessarily represent the positions or opinions of the institute, nor should any such representation be inferred.
  •  
    Perhaps the most important piece I've read recently. Yes must be our answer to Adelman's last challenge: It is time for us to disseminate what and why we do what we do.
Joshua Yeidel

Using Outcome Information: Making Data Pay Off - 1 views

  •  
    Sixth in a series on outcome management for nonprofits. Grist for the mill for any Assessment Handbook we might make. "Systematic use of outcome data pays off. In an independent survey of nearly 400 health and human service organizations, program directors agreed or strongly agreed that implementing program outcome measurement had helped their programs:
      * focus staff on shared goals (88%);
      * communicate results to stakeholders (88%);
      * clarify program purpose (86%);
      * identify effective practices (84%);
      * compete for resources (83%);
      * enhance record keeping (80%); and
      * improve service delivery (76%)."
Theron DesRosier

Disaggregate power not people - Part two: now with more manifesto @ Dave's Educational ... - 2 views

  •  
    "Definition 2 - disaggregating power There is a very different power relationship between being given a space which 'enables contexts' and 'allows supports' for a user and a space that you build and support for yourself. It dodges those institutionally created problems of student mobility, of losing the connections formed in your learning and gives you a professional 'place' from which you can start to make long term knowledge network connections that form the higher end of the productive learning/knowing that is possible on the web. The power is disaggregated in the sense that while attending an institution of learning you are still under the dominance of the instructor or the regulations surrounding accreditation, but coming to your learning space is not about that dominance. The power held (and, i should probably add, that you've given to that institution in applying for accreditation/learning it's not (necessarily) a power of tyranny) by the institution only touches some of your work, and it need not impede any work you choose to do. Here's where I get to the part about the 'personal' that's been bothering me The danger in taking definition two as our definition for PLE is that we lose sight of the subtle, complex dance of person and ecology so eloquently described by Keith Hamon in his response to my post. Maybe more dangerously, we might get taken up as thinking that learning is something that happens to the person, and not as part of a complex rhizome of connections that form the basis of the human experience. Learning (and I don't mean definitions or background) and the making of connections of knowledge is something that is steeped in complexity. At each point we are structured in the work (written in a book, sung in a song, spoken in a web session) of others that constantly tests our own connections and further complexifies our understanding. This is the pattern of knowledge as i understand it. It is organic, and messy, and su
Corinna Lo

YouTube - Tim Berners-Lee: The next Web of open, linked data - 0 views

shared by Corinna Lo on 14 Mar 09
  •  
    Tim Berners-Lee invented the World Wide Web. For his next project, he's building a web for open, linked data that could do for numbers what the Web did for words, pictures, video: unlock our data and reframe the way we use it together.
Nils Peterson

Edge 313 - 1 views

  • So what's the point? It's a culture. Call it the algorithmic culture. To get it, you need to be part of it, you need to come out of it. Otherwise, you spend the rest of your life dancing to the tune of other people's code. Just look at Europe where the idea of competition in the Internet space appears to focus on litigation, legislation, regulation, and criminalization.
    • Nils Peterson
       
      US vs Euro thinking about the Internet
  • TIME TO START TAKING THE INTERNET SERIOUSLY 1.  No moment in technology history has ever been more exciting or dangerous than now. The Internet is like a new computer running a flashy, exciting demo. We have been entranced by this demo for fifteen years. But now it is time to get to work, and make the Internet do what we want it to.
  • Wherever computers exist, nearly everyone who writes uses a word processor. The word processor is one of history's most successful inventions. Most people call it not just useful but indispensable. Granted that the word processor is indeed indispensable, what good has it done? We say we can't do without it; but if we had to give it up, what difference would it make? Have word processors improved the quality of modern writing? What has the indispensable word processor accomplished? 4. It has increased not the quality but the quantity of our writing — "our" meaning society's as a whole. The Internet for its part has increased not the quality but the quantity of the information we see. Increasing quantity is easier than improving quality. Instead of letting the Internet solve the easy problems, it's time we got it to solve the important ones.
  • Modern search engines combine the functions of libraries and business directories on a global scale, in a flash: a lightning bolt of brilliant engineering. These search engines are indispensable — just like word processors. But they solve an easy problem. It has always been harder to find the right person than the right fact. Human experience and expertise are the most valuable resources on the Internet — if we could find them. Using a search engine to find (or be found by) the right person is a harder, more subtle problem than ordinary Internet search.
  • Will you store your personal information on your own personal machines, or on nameless servers far away in the Cloud, or both? Answer: in the Cloud. The Cloud (or the Internet Operating System, IOS — "Cloud 1.0") will take charge of your personal machines. It will move the information you need at any given moment onto your own cellphone, laptop, pad, pod — but will always keep charge of the master copy. When you make changes to any document, the changes will be reflected immediately in the Cloud. Many parts of this service are available already.
  • The Internet will never create a new economy based on voluntary instead of paid work — but it can help create the best economy in history, where new markets (a free market in education, for example) change the world. Good news! — the Net will destroy the university as we know it (except for a few unusually prestigious or beautiful campuses).
  • In short: it's time to think about the Internet instead of just letting it happen.
  • The traditional web site is static, but the Internet specializes in flowing, changing information. The "velocity of information" is important — not just the facts but their rate and direction of flow. Today's typical website is like a stained glass window, many small panels leaded together. There is no good way to change stained glass, and no one expects it to change. So it's not surprising that the Internet is now being overtaken by a different kind of cyberstructure. 14. The structure called a cyberstream or lifestream is better suited to the Internet than a conventional website because it shows information-in-motion, a rushing flow of fresh information instead of a stagnant pool.
    • Nils Peterson
       
      jayme will like this for her timeline portfolios
  • There is no clear way to blend two standard websites together, but it's obvious how to blend two streams. You simply shuffle them together like two decks of cards, maintaining time-order — putting the earlier document first. Blending is important because we must be able to add and subtract in the Cybersphere. We add streams together by blending them. Because it's easy to blend any group of streams, it's easy to integrate stream-structured sites so we can treat the group as a unit, not as many separate points of activity; and integration is important to solving the information overload problem. We subtract streams by searching or focusing. Searching a stream for "snow" means that I subtract every stream-element that doesn't deal with snow. Subtracting the "not snow" stream from the mainstream yields a "snow" stream. Blending streams and searching them are the addition and subtraction of the new Cybersphere.
    • Nils Peterson
       
      is Yahoo Pipes a precursor? Theron sent me an email, subject: "let me pipe that for you"
    • Nils Peterson
       
      Google Buzz might also be a version of this. It brings together items from your (multiple) public streams.
  • Internet culture is a culture of nowness. The Internet tells you what your friends are doing and the world news now, the state of the shops and markets and weather now, public opinion, trends and fashions now. The Internet connects each of us to countless sites right now — to many different places at one moment in time.
  • Once we understand the inherent bias in an instrument, we can correct it. The Internet has a large bias in favor of now. Using lifestreams (which arrange information in time instead of space), historians can assemble, argue about and gradually refine timelines of historical fact. Such timelines are not history, but they are the raw material of history.
  • Before long, all personal, familial and institutional histories will take visible form in streams.   A lifestream is tangible time:  as life flashes past on waterskis across time's ocean, a lifestream is the wake left in its trail. Dew crystallizes out of the air along cool surfaces; streams crystallize out of the Cybersphere along veins of time. As streams begin to trickle and then rush through the spring thaw in the Cybersphere, our obsession with "nowness" will recede
    • Nils Peterson
       
      Barrett has been using lifestream. This guy claims to have coined it long ago... in any event, it is a very different picture of portfolio -- more like "not your father's" than like AAEEBL.
  • The Internet today is, after all, a machine for reinforcing our prejudices. The wider the selection of information, the more finicky we can be about choosing just what we like and ignoring the rest. On the Net we have the satisfaction of reading only opinions we already agree with, only facts (or alleged facts) we already know. You might read ten stories about ten different topics in a traditional newspaper; on the net, many people spend that same amount of time reading ten stories about the same topic. But again, once we understand the inherent bias in an instrument, we can correct it. One of the hardest, most fascinating problems of this cyber-century is how to add "drift" to the net, so that your view sometimes wanders (as your mind wanders when you're tired) into places you hadn't planned to go. Touching the machine brings the original topic back. We need help overcoming rationality sometimes, and allowing our thoughts to wander and metamorphose as they do in sleep.
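Gelernter's blend-and-subtract arithmetic on streams maps naturally onto merging time-ordered sequences and filtering them. A minimal sketch, assuming a simple (timestamp, text) item model (the streams and field names are hypothetical):

```python
from heapq import merge

def blend(*streams):
    # "shuffle the decks together": merge time-ordered streams,
    # earliest item first
    return list(merge(*streams, key=lambda item: item[0]))

def subtract(stream, term):
    # "searching" as subtraction: drop every item not mentioning term
    return [item for item in stream if term in item[1].lower()]

# Hypothetical (timestamp, text) streams
weather = [(1, "Snow falling downtown"), (4, "Snow tapering off")]
news = [(2, "Markets open higher"), (3, "Snow closes schools")]

combined = blend(weather, news)           # one stream, time-ordered
snow_stream = subtract(combined, "snow")  # the "snow" sub-stream
print([t for t, _ in combined])  # [1, 2, 3, 4]
print(len(snow_stream))          # 3
```

Because blending preserves time order, any group of stream-structured sites can be treated as one unit, which is the integration property the manifesto emphasizes.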
Gary Brown

Evaluations That Make the Grade: 4 Ways to Improve Rating the Faculty - Teaching - The ... - 1 views

  • For students, the act of filling out those forms is sometimes a fleeting, half-conscious moment. But for instructors whose careers can live and die by student evaluations, getting back the forms is an hour of high anxiety
  • "They have destroyed higher education." Mr. Crumbley believes the forms lead inexorably to grade inflation and the dumbing down of the curriculum.
  • Texas enacted a law that will require every public college to post each faculty member's student-evaluation scores on a public Web site.
  • The IDEA Center, an education research group based at Kansas State University, has been spreading its particular course-evaluation gospel since 1975. The central innovation of the IDEA system is that departments can tailor their evaluation forms to emphasize whichever learning objectives are most important in their discipline.
  • (Roughly 350 colleges use the IDEA Center's system, though in some cases only a single department or academic unit participates.)
  • The new North Texas instrument that came from these efforts tries to correct for biases that are beyond an instructor's control. The questionnaire asks students, for example, whether the classroom had an appropriate size and layout for the course. If students were unhappy with the classroom, and if it appears that their unhappiness inappropriately colored their evaluations of the instructor, the system can adjust the instructor's scores accordingly.
  • The survey instrument, known as SALG, for Student Assessment of their Learning Gains, is now used by instructors across the country. The project's Web site contains more than 900 templates, mostly for courses in the sciences.
  • "So the ability to do some quantitative analysis of these comments really allows you to take a more nuanced and effective look at what these students are really saying."
  • Mr. Frick and his colleagues found that his new course-evaluation form was strongly correlated with both students' and instructors' own measures of how well the students had mastered each course's learning goals.
  • Elaine Seymour, who was then director of ethnography and evaluation research at the University of Colorado at Boulder, was assisting with a National Science Foundation project to improve the quality of science instruction at the college level. She found that many instructors were reluctant to try new teaching techniques because they feared their course-evaluation ratings might decline.
  • "Students are the inventory," Mr. Crumbley says. "The real stakeholders in higher education are employers, society, the people who hire our graduates. But what we do is ask the inventory if a professor is good or bad. At General Motors," he says, "you don't ask the cars which factory workers are good at their jobs. You check the cars for defects, you ask the drivers, and that's how you know how the workers are doing."
  • William H. Pallett, president of the IDEA Center, says that when course rating surveys are well-designed and instructors make clear that they care about them, students will answer honestly and thoughtfully.
  • In Mr. Bain's view, student evaluations should be just one of several tools colleges use to assess teaching. Peers should regularly visit one another's classrooms, he argues. And professors should develop "teaching portfolios" that demonstrate their ability to do the kinds of instruction that are most important in their particular disciplines. "It's kind of ironic that we grab onto something that seems fixed and fast and absolute, rather than something that seems a little bit messy," he says. "Making decisions about the ability of someone to cultivate someone else's learning is inherently a messy process. It can't be reduced to a formula."
  •  
    Old friends at the Idea Center, and an old but persistent issue.
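The article does not spell out how the North Texas instrument adjusts scores, but the description matches a standard covariate adjustment: regress ratings on the nuisance factor (here, classroom satisfaction) and keep the residual. A hypothetical sketch with simulated ratings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical data: raw ratings mix instructor quality with classroom
# satisfaction, a factor outside the instructor's control.
quality = rng.normal(size=n)
room = rng.normal(size=n)
rating = quality + 0.5 * room + rng.normal(scale=0.3, size=n)

# Regress ratings on the classroom covariate and keep the residual:
# the adjusted score removes the variation the covariate explains.
X = np.column_stack([np.ones(n), room])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
adjusted = rating - X @ beta

print(np.corrcoef(quality, rating)[0, 1])    # raw ratings vs. quality
print(np.corrcoef(quality, adjusted)[0, 1])  # adjusted tracks quality better
```

The design choice is the same one the article describes: the instructor's score should not rise or fall with conditions the instructor cannot change.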
Theron DesRosier

Revolution in the Classroom - The Atlantic (August 12, 2009) - 0 views

  •  
    An article in the Atlantic today by Clayton Christensen discusses "Revolution in the Classroom." In a paragraph on data collection he says the following: "Creating effective methods for measuring student progress is crucial to ensuring that material is actually being learned. And implementing such assessments using an online system could be incredibly potent: rather than simply testing students all at once at the end of an instructional module, this would allow continuous verification of subject mastery as instruction was still underway. Teachers would be able to receive constant feedback about progress or the lack thereof and then make informed decisions about the best learning path for each student. Thus, individual students could spend more or less time, as needed, on certain modules. And as long as the end result - mastery - was the same for all, the process and time allotted for achieving it need not be uniform." The "module" focus is a little disturbing but the rest is helpful.
Theron DesRosier

Come for the Content, Stay for the Community | Academic Commons - 0 views

  •  
    The Evolution of a Digital Repository and Social Networking Tool for Inorganic Chemistry From Post: "It is said that teaching is a lonely profession. In higher education, a sense of isolation can permeate both teaching and research, especially for academics at primarily undergraduate institutions (PUIs). In these times of doing more with less, new digital communication tools may greatly attenuate this problem--for free. Our group of inorganic chemists from PUIs, together with technologist partners, have built the Virtual Inorganic Pedagogical Electronic Resource Web site (VIPEr, http://www.ionicviper.org) to share teaching materials and ideas and build a sense of community among inorganic chemistry educators. As members of the leadership council of VIPEr, we develop and administer the Web site and reach out to potential users. "
Gary Brown

For Accreditation, a Narrow Window of Opportunity - Commentary - The Chronicle of Highe... - 4 views

  • After two years as president of the American Council on Education, I feel compelled to send a wake-up call to campus executives: If federal policy makers are now willing to bail out the nation's leading banks and buy equity stakes in auto makers because those companies are "too big to fail," they will probably have few reservations about regulating an education system that they now understand is "too important to fail."
  • Regardless of party, policy makers are clearly aware of the importance of education and are demanding improved performance and more information, from preschool to graduate school. In this environment, we should expect college accreditation to come under significant scrutiny.
  • It has also clearly signaled its interest in using data to measure institutional performance and student outcomes, and it has invested in state efforts to create student-data systems from pre-kindergarten through graduate school.
  • ...8 more annotations...
  • Higher education has so far navigated its way through the environment of increased regulatory interest without substantial changes to our system of quality assurance or federally mandated outcomes assessment. But that has only bought us time. As we look ahead, we must keep three facts in mind: Interest in accountability is bipartisan, and the pendulum has swung toward more regulation in virtually all sectors. The economic crisis is likely to spur increased calls from policy makers to control college prices and demonstrate that students are getting value for the dollar. The size of the federal budget deficit will force everyone who receives federal support to produce more and better evidence that an investment of federal funds will pay dividends for individuals and society.
  • If we do not seize the opportunity to strengthen voluntary peer accreditation as a rigorous test of institutional quality, grounded in appropriate measures of student learning, we place at risk a precious bulwark against excessive government intervention, a bulwark that has allowed American higher education to flourish. When it comes to safeguarding the quality, diversity, and independence of American higher education, accreditors hold the keys to the kingdom.
  • all accreditors now require colleges and universities to put more emphasis on measuring student-learning outcomes. They should be equally vigilant about ensuring that those data are used to achieve improvements in outcomes
  • share plain-language results of accreditation reviews with the public.
  • It takes very little close reading to see through the self-serving statements here: namely, that higher education institutions must do a better PR job pretending they are interested in meaningful reform so as to head off any real reform that might come from the federal authorities.
  • THEREFORE, let me voice a wakeup call for those who are really interested in reform--not that there are many. 1. There will never be any meaningful reform unless we have a centralized and nationalized higher educational system. Leaving higher education in the hands of individual institutions is no longer effective and is in fact what has led to the present state we find ourselves in. Year after countless year we have been promised changes in higher education and year after year nothing changes. IF CHANGE IS TO COME IT MUST BE FORCED ONTO HIGHER EDUCATION FROM THE OUTSIDE.
  • Higher education in America can no longer afford to be organized around the useless market capitalism that forces too many financially marginalized institutions to compete for less and less.
  • "Keeping Quiet" by Pablo Neruda
    If we were not so single-minded
    about keeping our lives moving,
    and for once could do nothing,
    perhaps a huge silence
    might interrupt this sadness
    of never understanding ourselves
    and of threatening ourselves with death.
  •  
    It is heating up again
Nils Peterson

Half an Hour: Open Source Assessment - 0 views

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson
       
      I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion, the gulf between different models of assessment became apparent. Designers of learning resources, for example, have only the vaguest of indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson
       
      assumes a test as the form of assessment, rather than something more open ended.
  • ...8 more annotations...
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson
       
      See our Where are you on the spectrum, Assessment is locked vs open
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes)
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives. Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them. Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say. Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less define community, a person may achieve a composite score called a 'reputation'. This will vary from community to community.
  •  
    Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
Joshua Yeidel

Jim Dudley on Letting Go of Rigid Adherence to What Evaluation Should Look Like | AEA365 - 1 views

  •  
    "Recently, in working with a board of directors of a grassroots organization, I was reminded of how important it is to "let go" of rigid adherence to typologies and other traditional notions of what an evaluation should look like. For example, I completed an evaluation that incorporated elements of all of the stages of program development - a needs assessment (e.g., how much do board members know about their programs and budget), a process evaluation (e.g., how well do the board members communicate with each other when they meet), and an outcome evaluation (e.g., how effective is their marketing plan for recruiting children and families for its programs)."
  •  
    Needs evaluation, process evaluation, outcomes evaluation -- all useful for improvement.
Joshua Yeidel

Strategic Directives for Learning Management System Planning | EDUCAUSE - 1 views

  •  
    A largely sensible strategic look at LMS in general. "The LMS, because of its integration with other major institutional technology systems, has itself become an enterprise-wide system. As such, higher education leaders must closely monitor the possible tendency for LMSs to contribute only to maintaining the educational status quo. The most radical suggestion for future LMS use would dissolve the commercially enforced 'course-based' model of LMS use entirely, allowing for the creation of either larger (departmental) or smaller (study groups) units of LMS access, as the case may require. This ability to cater to context awareness is perhaps the feature most lacking in most LMS products. As noted in a study in which mobile or handheld devices were used to assemble ad hoc study groups, this sort of implementation is entirely possible in ways that don't necessarily require interaction through an LMS interface." Requires EDUCAUSE login (free to WSU)
  •  
    The EDUCAUSE paper suggests "dissolv[ing] the commercially enforced 'course-based' model of LMS". How about dissolving the "course-based" model of higher education on which the commercial LMS is based?
Theron DesRosier

Government Innovators Network: A Portal for Democratic Governance and Innovation - 0 views

  •  
    A Portal for Innovative Ideas. This portal is produced by the Ash Institute for Democratic Governance and Innovation at Harvard Kennedy School, and is a marketplace of ideas and examples of government innovation. Browse or search to access news, documents, descriptions of award-winning programs, and information on events in your area of interest related to innovation.
    * RSS Feeds are available for each individual topic area.
    * We invite you to register to access online events, and to receive the biweekly Innovators Insights newsletter.
    * And, we encourage you to visit the Ash Institute YouTube Channel.
    The Ash Institute's Innovations in American Government Awards Program, and its affiliated international programs, are integral to the Government Innovators Network. Learn about the IAG program and how to apply.
Gary Brown

Learning to Hate Learning Objectives - The Chronicle Review - The Chronicle of Higher E... - 1 views

shared by Gary Brown on 16 Dec 09 - Cached
  • Perhaps learning objectives make sense for most courses outside the humanities, but for me—as, no doubt, for many others—they bear absolutely no connection to anything that happens in the classroom.
    • Gary Brown
       
      The homeopathic fallacy, debunked by volumes of research...
  • The problem is, this kind of teaching does not correlate with the assumption of my local accreditation body, which sees teaching—as perhaps it is, in many disciplines—as passing on a body of knowledge and skills to a particular audience.
    • Gary Brown
       
      A profoundly dangerous misperception of accreditation and its role.
  • We talked about the ways in which the study of literature can help to develop and nurture observation, analysis, empathy, and self-reflection, all of which are essential for the practice of psychotherapy,
    • Gary Brown
       
      Reasonable outcomes, with a bit of educational imagination and an understanding of assessment obviously underdeveloped.
  • ...4 more annotations...
  • They will not achieve any "goals or outcomes." Indeed, they will not have "achieved" anything, except, perhaps, to doubt the value of terms like "achievement" when applied to reading literature.
    • Gary Brown
       
      good outcome
  • To describe this as a learning objective is demeaning and reductive to all concerned.
    • Gary Brown
       
      Only in the sense Ralph Tyler criticized, and he is the one who coined the term and developed the concept.
  • except to observe certain habits of mind, nuances of thinking, an appreciation for subtleties and ambiguities of argument, and an appreciation of the capacity for empathy, as well as the need, on certain occasions, to resist this capacity. There is no reason for anyone to take the course except a need to understand more about the consciousness of others, including nonhuman animals.
Matthew Tedder

Turning Work into Play with Online Games | h+ Magazine - 0 views

  •  
    This is about games to improve employee work.
  •  
    This is about using online games to engage employees in work. This is very much the core of what I was talking about, except for education. For those more knowledgeable about software design: my thoughts so far were on using Google's V8 Javascript engine and a 3D engine, such as Ogre3D, perhaps wrapped by Qt or SDL. The worlds would be managed on the server but code for each object type shipped to the clients and run, synchronized there. Everything would be an "object"--even a world itself (a container object). Each object would comprise four code modules plus its media files (3D models, sounds, etc.):
    Affector -- receives sensory input, then filters and translates it into the object's internal properties.
    Intrinsor -- the object's behavioral programming. For example, it can be the functions of an espresso machine object, or the AI and/or user-interface connector of an animate object.
    Mitigator -- controls the internal environment of any objects contained within this object, mediating the effects of contained objects on one another. For example: a world object, a boat, a house, or a bag. The physics, weather, and such will depend on what an object is contained within. The mitigator may also initiate effects on contained objects.
    Effector -- provides the API for action attempts to the Intrinsor. It then filters and translates the action attempts into the object's actual action attempts, which are read by this object's containor.
    (Yes--I mispelled some words above intentionally.) This design provides a framework for a persistent world with decent individual performance. The V8 engine compiles to fast machine code (and caches). Only very minimal data need be communicated over the network. Objects can run in parallel precisely the same, so if a client fails the system continues. Updates of virtually any kind can be made on-the-fly without stopping the game. New object types will be relatively easy to b
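The four-module object design described in the comment above could be prototyped as follows. This is a minimal illustrative sketch in plain JavaScript (the comment's proposed host language); the class, method names, and one-tick loop are assumptions drawn from the comment, not an API of V8, Ogre3D, Qt, or SDL.

```javascript
// Minimal sketch of the Affector/Intrinsor/Mitigator/Effector design.
// All names and signatures here are illustrative assumptions.
class GameObject {
  constructor(name) {
    this.name = name;
    this.props = {};     // internal properties, written only by the affector
    this.contents = [];  // contained objects (a world is just a container)
  }
  // Affector: filter/translate sensory input into internal properties
  affect(input) { this.props.lastInput = input; }
  // Intrinsor: the object's own behavior (AI, device logic, UI hooks)
  update() { return { attempt: "idle" }; }
  // Mitigator: mediate an action among this container's contents
  mitigate(action, target) { target.affect(action); }
  // Effector: translate behavior into an action attempt the container reads
  effect() { return this.update(); }
}

// One simulation tick: the container reads each object's action attempt
// through its effector, then mediates it back out through its mitigator.
const world = new GameObject("world");
const boat = new GameObject("boat");
world.contents.push(boat);
for (const obj of world.contents) {
  world.mitigate(obj.effect(), obj);
}
```

Because each module only touches its own side of the object (input, behavior, mediation, output), identical code could in principle run on server and clients in step, which is the synchronization property the comment relies on.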
Gary Brown

Educators Mull How to Motivate Professors to Improve Teaching - Curriculum - The Chroni... - 4 views

  • "Without an unrelenting focus on quality—on defining and measuring and ensuring the learning outcomes of students—any effort to increase college-completion rates would be a hollow effort indeed."
  • If colleges are going to provide high-quality educations to millions of additional students, they said, the institutions will need to develop measures of student learning that can assure parents, employers, and taxpayers that no one's time and money are being wasted.
  • "Effective assessment is critical to ensure that our colleges and universities are delivering the kinds of educational experiences that we believe we actually provide for students," said Ronald A. Crutcher, president of Wheaton College, in Massachusetts, during the opening plenary. "That data is also vital to addressing the skepticism that society has about the value of a liberal education."
  • ...13 more annotations...
  • But many speakers insisted that colleges should go ahead and take drastic steps to improve the quality of their instruction, without using rigid faculty-incentive structures or the fiscal crisis as excuses for inaction.
  • Handing out "teacher of the year" awards may not do much for a college
  • W.E. Deming argued, quality has to be designed into the entire system and supported by top management (that is, every decision made by CEOs and Presidents, and support systems as well as operations) rather than being made the responsibility solely of those delivering 'at the coal face'.
  • I see as a certain cluelessness among those who think one can create substantial change based on volunteerism
  • Current approaches to broaden the instructional repertoires of faculty members include faculty workshops, summer leave, and individual consultations, but these approaches work only for those relatively few faculty members who seek out opportunities to broaden their instructional methods.
  • The approach that makes sense to me is to engage faculty members at the departmental level in a discussion of the future and the implications of the future for their field, their college, their students, and themselves. You are invited to join an ongoing discussion of this issue at http://innovate-ideagora.ning.com/forum/topics/addressing-the-problem-of
  • Putting pressure on professors to improve teaching will not result in better education. The primary reason is that they do not know how to make real improvements. The problem is that in many fields of education there is either not enough research, or they do not have good ways of evaluating the results of their teaching.
  • Then there needs to be a research based assessment that can be used by individual professors, NOT by the administration.
  • Humanities educators either have to learn enough statistics and cognitive science so they can make valid scientific comparisons of different strategies, or they have to work with cognitive scientists and statisticians
  • good teaching takes time
  • On the measurement side, about half of the assessments constructed by faculty fail to meet reasonable minimum standards for validity. (Interestingly, these failures leave the door open to a class action lawsuit. Physicians are successfully sued for failing to apply scientific findings correctly; commerce is replete with lawsuits based on measurement errors.)
  • The elephant in the corner of the room --still-- is that we refuse to measure learning outcomes and impact, especially proficiencies generalized to one's life outside the classroom.
  • until universities stop playing games to make themselves look better because they want to maintain their comfortable positions, and actually look at what they can do to improve, nothing is going to change.
  •  
    our work, our friends (Ken and Jim), and more context that shapes our strategy.
  •  
    How about using examples of highly motivational lecture and teaching techniques, like the Richard Dawkins video I presented on this forum recently? Even if teachers do not consciously try to adopt good working techniques, there is at least a strong subconscious human tendency to mimic behaviors. I think that if teachers see more effective techniques, they will automatically begin to adopt them.