
CTLT and Friends: Group items tagged "ratings"


Joshua Yeidel

The City Where Diploma Dreams Go to Die - Commentary - The Chronicle of Higher Education - 0 views

  •  
    Another single-measure assessment with political implications. The comments unpack some of the complexities behind "graduation rates".
Nils Peterson

New Grilling of For-Profits Could Turn Up the Heat for All of Higher Education - Govern... - 1 views

shared by Nils Peterson on 25 Jun 10
  • Congress plans to put for-profit colleges under the microscope on Thursday, asking whether a higher-education model that consumes more than double its proportionate share of federal student aid is an innovation worthy of duplication or a recipe for long-term economic disaster.
  • The evaluation threatens new headaches for an industry that is sometimes exalted by government policy makers as a lean results-oriented example for the rest of academe, and other times caricatured as an opportunistic outlier that peddles low-value education to unprepared high school dropouts.
  • Economic bubbles such as the unsustainable surge in housing prices "typically are built on ignorance and borrowed money," says one prominent pessimist on the matter, Glenn Harlan Reynolds, a professor of law at the University of Tennessee at Knoxville. "And the reason you've got a higher-education bubble is ignorance and borrowed money," Mr. Reynolds said.
  • Congress and colleges still lack a firm sense of "what our higher education system is producing," said Jamie P. Merisotis, president of the Lumina Foundation for Education.
  • Colleges of all types have been raising tuition for years as the government offers ever-growing amounts of grant aid and loan money, Mr. Reynolds said. The price inflation is driven by the fact that a government-backed loan, while offering students only a slight break from market interest rates, "looks cheap because you don't have to make payments for a while."
  • That determination to expand the distribution of federal tuition assistance has left Congress and the White House seeking other ways to ensure that students get quality for their money. Just last week, the House education committee held a hearing in which Democratic members joined the Education Department's inspector general in pressing accrediting agencies to more clearly define the "credit hour" measurement used in student-aid allocations. Some colleges have objected, wanting more flexibility in defining their educational missions.
  • Whether it involves defining credit hours or setting accreditation standards, the root of the problem may be that the government is looking for better ways to ensure that its money is spent on worthwhile educational ventures, and yet it doesn't want to challenge the right of each college to define its own mission. So far that has proven to be a fundamental contradiction in judging the overall value of higher education, said Mr. Merisotis, of the Lumina Foundation. "There's got to be a third way," he said. "We don't have it yet."
Gary Brown

Views: The White Noise of Accountability - Inside Higher Ed - 2 views

  • We don’t really know what we are saying
  • “In education, accountability usually means holding colleges accountable for the learning outcomes produced.” One hopes Burck Smith, whose paper containing this sentence was delivered at an American Enterprise Institute conference last November, held a firm tongue-in-cheek with the core phrase.
  • Our adventure through these questions is designed as a prodding to all who use the term to tell us what they are talking about before they otherwise simply echo the white noise.
  • when our students attend three or four schools, the subject of these sentences is considerably weakened in terms of what happens to those students.
  • Who or what is one accountable to?
  • For what?
  • Why that particular “what” -- and not another “what”?
  • To what extent is the relationship reciprocal? Are there rewards and/or sanctions inherent in the relationship? How continuous is the relationship?
  • In the Socratic moral universe, one is simultaneously witness and judge. The Greek syneidesis (“conscience” and “consciousness”) means to know something with, so to know oneself with oneself becomes an obligation of institutions and systems -- to themselves.
  • Obligation becomes self-reflexive.
  • There are no external authorities here. We offer, we accept, we provide evidence, we judge. There is nothing wrong with this: it is indispensable, reflective self-knowledge. And provided we judge without excuses, we hold to this Socratic moral framework. As Peter Ewell has noted, the information produced under this rubric, particularly in the matter of student learning, is “part of our accountability to ourselves.”
  • But is this “accountability” as the rhetoric of higher education uses the white noise -- or something else?
  • in response to shrill calls for “accountability,” U.S. higher education has placed all its eggs in the Socratic basket, but in a way that leaves the basket half-empty. It functions as the witness, providing enormous amounts of information, but does not judge that information.
  • Every single “best practice” cited by Aldeman and Carey is subject to measurement: labor market histories of graduates, ratios of resource commitment to various student outcomes, proportion of students in learning communities or taking capstone courses, publicly-posted NSSE results, undergraduate research participation, space utilization rates, licensing income, faculty patents, volume of non-institutional visitors to art exhibits, etc. etc. There’s nothing wrong with any of these, but they all wind up as measurements, each at a different concentric circle of putatively engaged acceptees of a unilateral contract to provide evidence. By the time one plows through Aldeman and Carey’s banquet, one is measuring everything that moves -- and even some things that don’t.
  • Sorry, but basic capacity facts mean that consumers cannot vote with their feet in higher education.
  • If we glossed the Socratic notion on provision-of-information, the purpose is self-improvement, not comparison. The market approach to accountability implicitly seeks to beat Socrates by holding that I cannot serve as both witness and judge of my own actions unless the behavior of others is also on the table. The self shrinks: others define the reference points. “Accountability” is about comparison and competition, and an institution’s obligations are only to collect and make public those metrics that allow comparison and competition. As for who judges the competition, we have a range of amorphous publics and imagined authorities.
  • There are no formal agreements here: this is not a contract, it is not a warranty, it is not a regulatory relationship. It isn’t even an issue of becoming a Socratic self-witness and judge. It is, instead, a case in which one set of parties, concentrated in places of power, asks another set of parties, diffuse and diverse, “to disclose more and more about academic results,” with the second set of parties responding in their own terms and formulations. The environment itself determines behavior.
  • Ewell is right about the rules of the information game in this environment: when the provider is the institution, it will shape information “to look as good as possible, regardless of the underlying performance.”
  • U.S. News & World Report’s rankings
  • The messengers become self-appointed arbiters of performance, establishing themselves as the second party to which institutions and aggregates of institutions become “accountable.” Can we honestly say that the implicit obligation of feeding these arbiters constitutes “accountability”?
  • But if the issue is student learning, there is nothing wrong with -- and a good deal to be said for -- posting public examples of comprehensive examinations, summative projects, capstone course papers, etc. within the information environment, and doing so irrespective of anyone requesting such evidence of the distribution of knowledge and skills. Yes, institutions will pick what makes them look good, but if the public products resemble AAC&U’s “Our Students’ Best Work” project, they set off peer pressure for self-improvement and very concrete disclosure. The other prominent media messengers simply don’t engage in constructive communication of this type.
  • Ironically, a “market” in the loudest voices, the flashiest media productions, and the weightiest panels of glitterati has emerged to declare judgment on institutional performance in an age when student behavior has diluted the very notion of an “institution” of higher education. The best we can say is that this environment casts nothing but fog over the specific relationships, responsibilities, and obligations that should be inherent in something we call “accountability.” Perhaps it is about time that we defined these components and their interactions with persuasive clarity. I hope that this essay will invite readers to do so.
  • Clifford Adelman is senior associate at the Institute for Higher Education Policy. The analysis and opinions expressed in this essay are those of the author, and do not necessarily represent the positions or opinions of the institute, nor should any such representation be inferred.
  •  
    Perhaps the most important piece I've read recently. Yes must be our answer to Adelman's last challenge: It is time for us to disseminate what and why we do what we do.
Gary Brown

Ranking Employees: Why Comparing Workers to Their Peers Can Often Backfire - Knowledge@... - 2 views

  • We live in a world full of benchmarks and rankings. Consumers use them to compare the latest gadgets. Parents and policy makers rely on them to assess schools and other public institutions,
  • "Many managers think that giving workers feedback about their performance relative to their peers inspires them to become more competitive -- to work harder to catch up, or excel even more. But in fact, the opposite happens," says Barankay, whose previous research and teaching has focused on personnel and labor economics. "Workers can become complacent and de-motivated. People who rank highly think, 'I am already number one, so why try harder?' And people who are far behind can become depressed about their work and give up."
  • Among the companies that use Mechanical Turk are Google, Yahoo and Zappos.com, the online shoe and clothing purveyor.
  • Nothing is more compelling than data from actual workplace settings, but getting it is usually very hard."
  • Instead, the job without the feedback attracted more workers -- 254, compared with 76 for the job with feedback.
  • "This indicates that when people are great and they know it, they tend to slack off. But when they're at the bottom, and are told they're doing terribly, they are de-motivated," says Barankay.
  • In the second stage of the experiment
  • The aim was to determine whether giving people feedback affected their desire to do more work, as well as the quantity and quality of their work.
  • Of the workers in the control group, 66% came back for more work, compared with 42% in the treatment group. The members of the treatment group who returned were also 22% less productive than the control group. This seems to dispel the notion that giving people feedback might encourage high-performing workers to work harder to excel, and inspire low-ranked workers to make more of an effort.
  • it seems that people would rather not know how they rank compared to others, even though when we surveyed these workers after the experiment, 74% said they wanted feedback about their rank."
  • top performers move on to new challenges and low performers have no viable options elsewhere.
  • feedback about rank is detrimental to performance,"
  • it is well documented that tournaments, where rankings are tied to prizes, bonuses and promotions, do inspire higher productivity and performance.
  • "In workplaces where rankings and relative performance is very transparent, even without the intervention of management ... it may be better to attach financial incentives to rankings, as interpersonal comparisons without prizes may lead to lower effort," Barankay suggests. "In those office environments where people may not be able to assess and compare the performance of others, it may not be useful to just post a ranking without attaching prizes."
  • "The key is to devote more time to thinking about whether to give feedback, and how each individual will respond to it. If, as the employer, you think a worker will respond positively to a ranking and feel inspired to work harder, then by all means do it. But it's imperative to think about it on an individual level."
  •  
    the conflation of feedback with ranking confounds this. What is not done and needs to be done is to compare the motivational impact of providing constructive feedback. Presumably the study uses ranking in a strictly comparative context as well, and we do not see the influence of feedback relative to an absolute scale. Still, much in this piece to ponder....
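
A quick way to see why the reported return-rate gap (66% of the control group versus 42% of the treatment group) is unlikely to be noise is a two-proportion z-test. A minimal sketch, assuming hypothetical group sizes of 100 workers each; only the 66% and 42% figures come from the article:

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_ztest(x1, n1, x2, n2):
        """Two-sided z statistic for a difference between two proportions."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)                        # pooled rate under H0
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error under H0
        return (p1 - p2) / se

    # Hypothetical group sizes (the article reports percentages, not counts).
    z = two_proportion_ztest(66, 100, 42, 100)
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    print(f"z = {z:.2f}, p = {p_value:.4f}")  # z is about 3.4: rarely chance
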
Gary Brown

Brainless slime mould makes decisions like humans | Not Exactly Rocket Science | Discov... - 0 views

  • These results strongly suggest that, like humans, Physarum doesn’t attach any intrinsic value to the options that are available to it. Instead, it compares its alternatives. Add something new into the mix, and its decisions change.
  • But how does Physarum make decisions at all without a brain?  The answer is deceptively simple – it does so by committee. Every plasmodium is basically a big sac of fluid, where each part rhythmically contracts and expands, pushing the fluid inside back-and-forth. The rate of the contractions depends on what neighbouring parts of the sac are doing, and by the local environment. They happen faster when the plasmodium touches something attractive like food, and they slow down when repellent things like sunlight are nearby.
  • It’s the ultimate in collective decision-making and it allows Physarum to perform remarkable feats of “intelligence”, including simulating Tokyo’s transport network, solving mazes, and even driving robots.
  •  
    This probably also applies to change theory....
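
The committee mechanism described above lends itself to a toy simulation: each patch of the plasmodium adjusts its contraction rate toward its neighbours' average while local stimuli nudge it up (food) or down (light). A minimal sketch with invented coupling and stimulus values; nothing here comes from the article beyond the qualitative rule:

    import random

    def step(rates, stimulus, coupling=0.2):
        """One update on a ring of plasmodium patches: each patch drifts toward
        the mean rate of its two neighbours (the 'committee' effect) and is
        nudged by its local stimulus (positive near food, negative near light)."""
        n = len(rates)
        return [rates[i]
                + coupling * ((rates[i - 1] + rates[(i + 1) % n]) / 2 - rates[i])
                + stimulus[i]
                for i in range(n)]

    rates = [random.uniform(0.9, 1.1) for _ in range(10)]
    stimulus = [0.0] * 10
    stimulus[3] = 0.05    # food speeds contractions here
    stimulus[7] = -0.05   # light slows them here
    for _ in range(50):
        rates = step(rates, stimulus)
    print([round(r, 2) for r in rates])  # rates peak near the food patch
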
Gary Brown

Theoretical Expertise Rankings - ProCon.org - 3 views

  • Evaluating the credibility of one person's statements is difficult if not impossible, especially without knowing, for example, each person's background, training, affiliations, education, or experience. However, we feel that a guide to a person's theoretical expertise can be helpful, so we have built theoretical expertise ranking charts for each ProCon.org website to help differentiate the theoretical expertise of the various sources on our sites.
  •  
    An old site worth pondering.
Gary Brown

Conference Highlights Contradictory Attitudes Toward Global Rankings - International - ... - 2 views

  • He emphasized, however, that "rankings are only useful if the indicators they use don't just measure things that are easy to measure, but the things that need to be measured."
  • "In Malaysia we do not call it a ranking exercise," she said firmly, saying that the effort was instead a benchmarking exercise that attempts to rate institutions against an objective standard.
  • "If Ranking Is the Disease, Is Benchmarking the Cure?" Jamil Salmi, tertiary education coordinator at the World Bank, said that rankings are "just the tip of the iceberg" of a growing accountability agenda, with students, governments, and employers all seeking more comprehensive information about institutions
  • "Rankings are the most visible and easy to understand" of the various measures, but they are far from the most reliable,
  • Jamie P. Merisotis
  • He described himself as a longtime skeptic of rankings, but noted that "these kinds of forums are useful, because you have to have conversations involving the producers of rankings, consumers, analysts, and critics."
Gary Brown

Reviewers Unhappy with Portfolio 'Stuff' Demand Evidence -- Campus Technology - 1 views

  • An e-mail comment from one reviewer: “In reviewing about 100-some-odd accreditation reports in the last few months, it has been useful in our work here at Washington State University to distinguish ‘stuff’ from evidence. We have adopted an understanding that evidence is material or data that has been analyzed and that can be used, as dictionary definitions state, as ‘proof.’ A student gathers ‘stuff’ in the ePortfolio, selects, reflects, etc., and presents evidence that makes a case (or not)… The use of this distinction has been indispensable here. An embarrassing amount of academic assessment work culminates in the presentation of ‘stuff’ that has not been analyzed--student evaluations, grades, pass rates, retention, etc. After reading these ‘self studies,’ we ask the stumping question--fine, but what have you learned? Much of the ‘evidence’ we review has been presented without thought or with the general assumption that it is somehow self-evident… But too often that kind of evidence has not focused on an issue or problem or question. It is evidence that provides proof of nothing.
  •  
    a bit of a context shift, but....
Joshua Yeidel

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 2 views

  •  
    So here it is: by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students' learning. And given the imprecision of the value-added scores, just by chance some teachers will be categorized as ineffective or minimally effective two years in a row. The system is rigged to label teachers as ineffective or minimally effective as a precursor to firing them.
  •  
    How assessment of value-added actually works in one setting: the Washington, D.C. public schools. This article actually works the numbers to show that the system is set up to put teachers in the firing zone. Note the tyranny of numerical ratings (some of them subjective) converted into meanings like "minimally effective".
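
The "just by chance" point is easy to make concrete with a simulation. A minimal sketch, with invented numbers: every teacher is truly average, scores are pure measurement noise, and (per the 50 percent design described above) the bottom half is labeled ineffective each year:

    import random

    random.seed(1)
    N = 10_000  # simulated teachers, all with a true effect of zero

    def labels():
        """Label the bottom half of one year's noisy value-added scores."""
        scores = [random.gauss(0.0, 1.0) for _ in range(N)]  # noise only
        cutoff = sorted(scores)[N // 2]
        return [s < cutoff for s in scores]

    year1, year2 = labels(), labels()
    both = sum(a and b for a, b in zip(year1, year2))
    print(f"{both / N:.1%} flagged two years in a row")  # about 25%, by chance alone
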
Gary Brown

Views: Asking Too Much (and Too Little) of Accreditors - Inside Higher Ed - 1 views

  • Senators want to know why accreditors haven’t protected the public interest.
  • Congress shouldn’t blame accreditors: it should blame itself. The existing accreditation system has neither ensured quality nor ferreted out fraud. Why? Because Congress didn’t want it to. If Congress truly wants to protect the public interest, it needs to create a system that ensures real accountability.
  • But turning accreditors into gatekeepers changed the picture. In effect, accreditors now held a gun to the heads of colleges and universities since federal financial aid wouldn’t flow unless the institution received “accredited” status.
  • Congress listened to higher education lobbyists and designated accreditors -- teams made up largely of administrators and faculty -- to be “reliable authorities” on educational quality. Intending to protect institutional autonomy, Congress appropriated the existing voluntary system by which institutions differentiated themselves.
  • A gatekeeping system using peer review is like a penal system that uses inmates to evaluate eligibility for parole. The conflicts of interest are everywhere -- and, surprise, virtually everyone is eligible!
  • accreditation is "premised upon collegiality and assistance; rather than requirements that institutions meet certain standards (with public announcements when they don't)."
  • Meanwhile, there is ample evidence that many accredited colleges are adding little educational value. The 2006 National Assessment of Adult Literacy revealed that nearly a third of college graduates were unable to compare two newspaper editorials or compute the cost of office items, prompting the Spellings Commission and others to raise concerns about accreditors’ attention to productivity and quality.
  • But Congress wouldn’t let them. Rather than welcoming accreditors’ efforts to enhance their public oversight role, Congress told accreditors to back off and let nonprofit colleges and universities set their own standards for educational quality.
  • Accreditation is nothing more than an outdated industrial-era monopoly whose regulations prevent colleges from cultivating the skills, flexibility, and innovation that they need to ensure quality and accountability.
  • there is a much cheaper and better way: a self-certifying regimen of financial accountability, coupled with transparency about graduation rates and student success. (See some alternatives here and here.)
  • Such a system would prioritize student and parent assessment over the judgment of institutional peers or the educational bureaucracy. And it would protect students, parents, and taxpayers from fraud or mismanagement by permitting immediate complaints and investigations, with a notarized certification from the institution to serve as Exhibit A
  • The only way to protect the public interest is to end the current system of peer review patronage, and demand that colleges and universities put their reputation -- and their performance -- on the line.
  • Anne D. Neal is president of the American Council of Trustees and Alumni. The views stated herein do not represent the views of the National Advisory Committee on Institutional Quality and Integrity, of which she is a member.
  •  
    The ascending view of accreditation.
Theron DesRosier

pagi: eLearning - 0 views

  • ePortfolio; ePortfolios, the Harvesting Gradebook, Accountability, and Community (!!!); Harvesting gradebook; Learning from the transformative grade book; Implementing the transformed grade book; Transformed gradebook worked example (!!); Best example: Calaboz ePortfolio (!!); Guide to Rating Integrative & Critical Thinking (!!!); Grant Wiggins, Authentic Education; Hub and spoke model of course design (!!!); ePortfolio as the core learning application; Case Studies of Electronic Portfolios for Learning
  •  
    Nils found this. It is a Spanish concept map on eLearning that includes CTLT and the Harvesting Gradebook.
Gary Brown

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 2 views

  •  
    The Forbes take--more vocationalism, more quality, less focus on quality, contain costs. Not interesting except as an example of how the work of higher ed is perceived in the business press.
Gary Brown

Educators Mull How to Motivate Professors to Improve Teaching - Curriculum - The Chroni... - 4 views

  • "Without an unrelenting focus on quality—on defining and measuring and ensuring the learning outcomes of students—any effort to increase college-completion rates would be a hollow effort indeed."
  • If colleges are going to provide high-quality educations to millions of additional students, they said, the institutions will need to develop measures of student learning that can assure parents, employers, and taxpayers that no one's time and money are being wasted.
  • "Effective assessment is critical to ensure that our colleges and universities are delivering the kinds of educational experiences that we believe we actually provide for students," said Ronald A. Crutcher, president of Wheaton College, in Massachusetts, during the opening plenary. "That data is also vital to addressing the skepticism that society has about the value of a liberal education."
  • But many speakers insisted that colleges should go ahead and take drastic steps to improve the quality of their instruction, without using rigid faculty-incentive structures or the fiscal crisis as excuses for inaction.
  • Handing out "teacher of the year" awards may not do much for a college
  • W.E. Deming argued, quality has to be designed into the entire system and supported by top management (that is, every decision made by CEOs and Presidents, and support systems as well as operations) rather than being made the responsibility solely of those delivering 'at the coal face'.
  • I see a certain cluelessness among those who think one can create substantial change based on volunteerism
  • Current approaches to broaden the instructional repertoires of faculty members include faculty workshops, summer leave, and individual consultations, but these approaches work only for those relatively few faculty members who seek out opportunities to broaden their instructional methods.
  • The approach that makes sense to me is to engage faculty members at the departmental level in a discussion of the future and the implications of the future for their field, their college, their students, and themselves. You are invited to join an ongoing discussion of this issue at http://innovate-ideagora.ning.com/forum/topics/addressing-the-problem-of
  • Putting pressure on professors to improve teaching will not result in better education. The primary reason is that they do not know how to make real improvements. The problem is that in many fields of education there is either not enough research, or they do not have good ways of evaluating the results of their teaching.
  • Then there needs to be a research based assessment that can be used by individual professors, NOT by the administration.
  • Humanities educators either have to learn enough statistics and cognitive science so they can make valid scientific comparisons of different strategies, or they have to work with cognitive scientists and statisticians
  • good teaching takes time
  • On the measurement side, about half of the assessments constructed by faculty fail to meet reasonable minimum standards for validity. (Interestingly, these failures leave the door open to a class action lawsuit. Physicians are successfully sued for failing to apply scientific findings correctly; commerce is replete with lawsuits based on measurement errors.)
  • The elephant in the corner of the room --still-- is that we refuse to measure learning outcomes and impact, especially proficiencies generalized to one's life outside the classroom.
  • until universities stop playing games to make themselves look better because they want to maintain their comfortable positions and actually look at what they can do to improve nothing is going to change.
  •  
    our work, our friends (Ken and Jim), and more context that shapes our strategy.
  •  
    How about using examples of highly motivational lecture and teaching techniques like the Richard Dawkins video I presented on this forum, recently. Even if teachers do not consciously try to adopt good working techniques, there is at least a strong subconscious human tendency to mimic behaviors. I think that if teachers see more effective techniques, they will automatically begin to adopt them.
Nils Peterson

Views: Changing the Equation - Inside Higher Ed - 1 views

  • But each year, after some gnashing of teeth, we opted to set tuition and institutional aid at levels that would maximize our net tuition revenue. Why? We were following conventional wisdom that said that investing more resources translates into higher quality and higher quality attracts more resources
  • those who control influential rating systems of the sort published by U.S. News & World Report -- define academic quality as small classes taught by distinguished faculty, grand campuses with impressive libraries and laboratories, and bright students heavily recruited. Since all of these indicators of quality are costly, my college’s pursuit of quality, like that of so many others, led us to seek more revenue to spend on quality improvements. And the strategy worked.
  • Based on those concerns, and informed by the literature on the “teaching to learning” paradigm shift, we began to change our focus from what we were teaching to what and how our students were learning.
  • No one wants to cut costs if their reputation for quality will suffer, yet no one wants to fall off the cliff.
  • When quality is defined by those things that require substantial resources, efforts to reduce costs are doomed to failure
  • some of the best thinkers in higher education have urged us to define the quality in terms of student outcomes.
  • Faculty said they wanted to move away from giving lectures and then having students parrot the information back to them on tests. They said they were tired of complaining that students couldn’t write well or think critically, but not having the time to address those problems because there was so much material to cover. And they were concerned when they read that employers had reported in national surveys that, while graduates knew a lot about the subjects they studied, they didn’t know how to apply what they had learned to practical problems or work in teams or with people from different racial and ethnic backgrounds.
  • Our applications have doubled over the last decade and now, for the first time in our 134-year history, we receive the majority of our applications from out-of-state students.
  • We established what we call college-wide learning goals that focus on "essential" skills and attributes that are critical for success in our increasingly complex world. These include critical and analytical thinking, creativity, writing and other communication skills, leadership, collaboration and teamwork, and global consciousness, social responsibility and ethical awareness.
  • despite claims to the contrary, many of the factors that drive up costs add little value. Research conducted by Dennis Jones and Jane Wellman found that “there is no consistent relationship between spending and performance, whether that is measured by spending against degree production, measures of student engagement, evidence of high impact practices, students’ satisfaction with their education, or future earnings.” Indeed, they concluded that “the absolute level of resources is less important than the way those resources are used.”
  • After more than a year, the group had developed what we now describe as a low-residency, project- and competency-based program. Here students don’t take courses or earn grades. The requirements for the degree are for students to complete a series of projects, captured in an electronic portfolio,
  • students must acquire and apply specific competencies
  • Faculty spend their time coaching students, providing them with feedback on their projects and running two-day residencies that bring students to campus periodically to learn through intensive face-to-face interaction
  • After a year and a half, the evidence suggests that students are learning as much as, if not more than, those enrolled in our traditional business program
  • As the campus learns more about the demonstration project, other faculty are expressing interest in applying its design principles to courses and degree programs in their fields. They created a Learning Coalition as a forum to explore different ways to capitalize on the potential of the learning paradigm.
  • a problem-based general education curriculum
  • At the very least, finding innovative ways to lower costs without compromising student learning is wise competitive positioning for an uncertain future
  • the focus of student evaluations has changed noticeably. Instead of focusing almost 100% on the instructor and whether he/she was good, bad, or indifferent, our students' evaluations are now focusing on the students themselves - as to what they learned, how much they have learned, and how much fun they had learning.
    • Nils Peterson
       
      gary diigoed this article. this comment shines another light -- the focus of the course eval shifted from faculty member to course & student learning when the focus shifted from teaching to learning
  •  
    A must-read spotted by Jane Sherman--I've highlighted, as usual, much of it.
Nils Peterson

Edge 313 - 1 views

  • So what's the point? It's a culture. Call it the algorithmic culture. To get it, you need to be part of it, you need to come out of it. Otherwise, you spend the rest of your life dancing to the tune of other people's code. Just look at Europe where the idea of competition in the Internet space appears to focus on litigation, legislation, regulation, and criminalization.
    • Nils Peterson
       
      US vs Euro thinking about the Internet
  • TIME TO START TAKING THE INTERNET SERIOUSLY 1.  No moment in technology history has ever been more exciting or dangerous than now. The Internet is like a new computer running a flashy, exciting demo. We have been entranced by this demo for fifteen years. But now it is time to get to work, and make the Internet do what we want it to.
  • Wherever computers exist, nearly everyone who writes uses a word processor. The word processor is one of history's most successful inventions. Most people call it not just useful but indispensable. Granted that the word processor is indeed indispensable, what good has it done? We say we can't do without it; but if we had to give it up, what difference would it make? Have word processors improved the quality of modern writing? What has the indispensable word processor accomplished? 4. It has increased not the quality but the quantity of our writing — "our" meaning society's as a whole. The Internet for its part has increased not the quality but the quantity of the information we see. Increasing quantity is easier than improving quality. Instead of letting the Internet solve the easy problems, it's time we got it to solve the important ones.
  • Modern search engines combine the functions of libraries and business directories on a global scale, in a flash: a lightning bolt of brilliant engineering. These search engines are indispensable — just like word processors. But they solve an easy problem. It has always been harder to find the right person than the right fact. Human experience and expertise are the most valuable resources on the Internet — if we could find them. Using a search engine to find (or be found by) the right person is a harder, more subtle problem than ordinary Internet search.
  • Will you store your personal information on your own personal machines, or on nameless servers far away in the Cloud, or both? Answer: in the Cloud. The Cloud (or the Internet Operating System, IOS — "Cloud 1.0") will take charge of your personal machines. It will move the information you need at any given moment onto your own cellphone, laptop, pad, pod — but will always keep charge of the master copy. When you make changes to any document, the changes will be reflected immediately in the Cloud. Many parts of this service are available already.
  • The Internet will never create a new economy based on voluntary instead of paid work — but it can help create the best economy in history, where new markets (a free market in education, for example) change the world. Good news! — the Net will destroy the university as we know it (except for a few unusually prestigious or beautiful campuses).
  • In short: it's time to think about the Internet instead of just letting it happen.
  • The traditional web site is static, but the Internet specializes in flowing, changing information. The "velocity of information" is important — not just the facts but their rate and direction of flow. Today's typical website is like a stained glass window, many small panels leaded together. There is no good way to change stained glass, and no one expects it to change. So it's not surprising that the Internet is now being overtaken by a different kind of cyberstructure. 14. The structure called a cyberstream or lifestream is better suited to the Internet than a conventional website because it shows information-in-motion, a rushing flow of fresh information instead of a stagnant pool.
    • Nils Peterson
       
      jayme will like this for her timeline portfolios
  • There is no clear way to blend two standard websites together, but it's obvious how to blend two streams. You simply shuffle them together like two decks of cards, maintaining time-order — putting the earlier document first. Blending is important because we must be able to add and subtract in the Cybersphere. We add streams together by blending them. Because it's easy to blend any group of streams, it's easy to integrate stream-structured sites so we can treat the group as a unit, not as many separate points of activity; and integration is important to solving the information overload problem. We subtract streams by searching or focusing. Searching a stream for "snow" means that I subtract every stream-element that doesn't deal with snow. Subtracting the "not snow" stream from the mainstream yields a "snow" stream. Blending streams and searching them are the addition and subtraction of the new Cybersphere.
    • Nils Peterson
       
      is Yahoo Pipes a precursor? Theron sent me an email, subject: "let me pipe that for you"
    • Nils Peterson
       
      Google Buzz might also be a version of this. It brings together items from your (multiple) public streams.
  • Internet culture is a culture of nowness. The Internet tells you what your friends are doing and the world news now, the state of the shops and markets and weather now, public opinion, trends and fashions now. The Internet connects each of us to countless sites right now — to many different places at one moment in time.
  • Once we understand the inherent bias in an instrument, we can correct it. The Internet has a large bias in favor of now. Using lifestreams (which arrange information in time instead of space), historians can assemble, argue about and gradually refine timelines of historical fact. Such timelines are not history, but they are the raw material of history.
  • Before long, all personal, familial and institutional histories will take visible form in streams.   A lifestream is tangible time:  as life flashes past on waterskis across time's ocean, a lifestream is the wake left in its trail. Dew crystallizes out of the air along cool surfaces; streams crystallize out of the Cybersphere along veins of time. As streams begin to trickle and then rush through the spring thaw in the Cybersphere, our obsession with "nowness" will recede
    • Nils Peterson
       
      barrett has been using lifestream. this guy claims to have coined it long ago...in any event, it is a very different picture of portfolio -- more like "not your father's" than like AAEEBL.
  • The Internet today is, after all, a machine for reinforcing our prejudices. The wider the selection of information, the more finicky we can be about choosing just what we like and ignoring the rest. On the Net we have the satisfaction of reading only opinions we already agree with, only facts (or alleged facts) we already know. You might read ten stories about ten different topics in a traditional newspaper; on the net, many people spend that same amount of time reading ten stories about the same topic. But again, once we understand the inherent bias in an instrument, we can correct it. One of the hardest, most fascinating problems of this cyber-century is how to add "drift" to the net, so that your view sometimes wanders (as your mind wanders when you're tired) into places you hadn't planned to go. Touching the machine brings the original topic back. We need help overcoming rationality sometimes, and allowing our thoughts to wander and metamorphose as they do in sleep.
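
Gelernter's blending and subtraction map directly onto ordinary operations on time-ordered sequences. A minimal sketch, assuming each stream item is a (timestamp, text) pair (an invented representation): blending is a time-ordered merge, subtraction is a filter:

    import heapq

    def blend(*streams):
        """Blend time-ordered streams into one, earliest item first."""
        return heapq.merge(*streams)  # items are (timestamp, text) pairs

    def focus(stream, term):
        """'Subtract' every item that doesn't mention the term."""
        return ((t, text) for t, text in stream if term in text.lower())

    blog   = [(1, "First snow of the year"), (4, "Course redesign notes")]
    tweets = [(2, "Grading rubrics posted"), (3, "More snow expected")]

    merged = blend(blog, tweets)          # one stream, time order preserved
    print(list(focus(merged, "snow")))    # the "snow" sub-stream: items 1 and 3
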
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU), and the Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university  are gathering data about how their  students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased--there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: Of what use is the VSA?
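
The finding of "no statistically significant correlation" is a specific claim about a correlation coefficient and its p-value. A minimal sketch of that kind of check, with invented scores (the study's data are not published here); only the method, not the numbers, is real:

    from scipy.stats import pearsonr

    # Invented scores for 8 students.
    cla_scores    = [1050, 1190, 980, 1230, 1100, 1010, 1160, 1080]
    rubric_scores = [3.0, 2.5, 3.5, 3.0, 2.0, 3.5, 2.5, 3.0]  # VALUE-style rubric

    r, p = pearsonr(cla_scores, rubric_scores)
    print(f"r = {r:.2f}, p = {p:.3f}")
    # A p-value above the usual 0.05 threshold is what "no statistically
    # significant correlation" means: the association could easily be chance.
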
Gary Brown

Accountability Effort for Community Colleges Pushes Forward, and Other Meeting Notes - ... - 1 views

  • A project led by the American Association of Community Colleges to develop common, voluntary standards of accountability for two-year institutions is moving forward, and specific performance measures are being developed, an official at the association said.
  • financed by the Lumina Foundation for Education and the Bill & Melinda Gates Foundation, is now in its second phase
  • The project's advocates have begun pushing a public-relations campaign to build support for the accountability effort among colleges.
  • common reporting formats and measures that are appropriate to their institutions
  • Mr. Phillippe said one area of college performance the voluntary accountability system will measure is student persistence and completion, including retention and transfer rates. Student progress toward completion may also be measured by tracking how many students reach certain credit milestones. Other areas that will be measured include colleges' contributions to the work force and economic and community development.
  •  
    Footsteps....
Joshua Yeidel

Evaluations That Make the Grade: 4 Ways to Improve Rating the Faculty - Teaching - The ... - 0 views

  •  
    Four (somewhat) different types of course evals, closing with a tip of the hat to multiple measures.
Gary Brown

Professors Who Focus on Honing Their Teaching Are a Distinct Breed - Research - The Chr... - 1 views

  • Professors who are heavily focused on learning how to improve their teaching stand apart as a very distinct subset of college faculties, according to a new study examining how members of the professoriate spend their time.
  • those who are focused on tackling societal problems stand apart as their own breed. Other faculty members, it suggests, are pretty much mutts, according to its classification scheme.
  • 1,000 full-time faculty members at four-year colleges and universities gathered as part of the Faculty Professional Performance Survey administered by Mr. Braxton and two Vanderbilt doctoral students in 1999. That survey had asked the faculty members how often they engaged in each of nearly 70 distinct scholarly activities, such as experimenting with a new teaching method, publishing a critical book review in a journal, or being interviewed on a local television station. All of the faculty members examined in the new analysis were either tenured or tenure-track and fell into one of four academic disciplines: biology, chemistry, history, or sociology.
  • cluster analysis,
  • nearly two-thirds of those surveyed were involved in the full range of scholarly activity they examined
  • Just over a third, however, stood out as focused almost solely on one of two types of scholarship: on teaching practices, or on using knowledge from their discipline to identify or solve societal problems.
  • pedagogy-focused scholars were found mainly at liberal-arts colleges and, compared with the general population surveyed, tended to be younger, heavily represented in history departments, and more likely to be female and untenured
  • Those focused on problem-solving were located mainly at research and doctoral institutions, and were evenly dispersed across disciplines and more likely than others surveyed to be male and tenured.
  • how faculty members rate those priorities are fairly consistent across academic disciplines,
  • The study was conducted by B. Jan Middendorf, acting director of Kansas State University's office of educational innovation and evaluation; Russell J. Webster, a doctoral student in psychology at Kansas State; and Steve Benton, a senior research officer at the IDEA Center
  •  
    Another study that documents the challenge and suggests confirmation of the 50% figure of faculty who are not focused on either research or teaching.
Gary Brown

(the teeming void): This is Data? Arguing with Data Baby - 3 views

  • Making that call - defining what data is - is a powerful cultural gesture right now, because as I've argued before data as an idea or a figure is both highly charged and strangely abstract.
  • In other words data here is not gathered, measured, stored or transmitted - or not that we can see. It just is, and it seems to be inherent in the objects it refers to; Data Baby is "generating" data as easily as breathing.
  • This vision of material data is also frustrating because it has all the ingredients of a far more interesting idea: data is material, or at least it depends on material substrates, but the relationship between data and matter is just that, a relationship, not an identity. Data depends on stuff; always in it, and moving transmaterially through it, but it is precisely not stuff in itself.
  • Data does not just happen; it is created in specific and deliberate ways. It is generated by sensors, not babies; and those sensors are designed to measure specific parameters for specific reasons, at certain rates, with certain resolutions. Or more correctly: it is gathered by people, for specific reasons, with a certain view of the world in mind, a certain concept of what the problem or the subject is. The people use the sensors, to gather the data, to measure a certain chosen aspect of the world.
  • If we come to accept that data just is, it's too easy to forget that it reflects a specific set of contexts, contingencies and choices, and that crucially, these could be (and maybe should be) different. Accepting data shaped by someone else's choices is a tacit acceptance of their view of the world, their notion of what is interesting or important or valid. Data is not inherent or intrinsic in anything: it is constructed, and if we are going to work intelligently with data we must remember that it can always be constructed some other way.
  • We need real, broad-based, practical and critical data skills and literacies, an understanding of how to make data and do things with it.
  •  
    A discussion that coincides with reports this morning that again homeland security had the data; they just failed to understand the meaning of the data.