CTLT and Friends: Group items tagged "question"

Corinna Lo

News: The Challenge of Comparability - Inside Higher Ed - 0 views

  •  
    But when it came to defining sets of common learning outcomes for specific degree programs -- Transparency by Design's most distinguishing characteristic -- commonality was hard to come by. Questions to apply to any institution could be: 1) For any given program, what specific student learning outcomes are graduates expected to demonstrate? 2) By what standards and measurements are students being evaluated? 3) How well have graduating students done relative to these expectations? Comparability of results (the 3rd question) depends on transparency of goals and expectations (the 1st question) and transparency of measures (the 2nd question).
Gary Brown

Change Management 101: A Primer - 1 views

shared by Gary Brown on 13 Jan 10
  • To recapitulate, there are at least four basic definitions of change management:
    1. The task of managing change (from a reactive or a proactive posture)
    2. An area of professional practice (with considerable variation in competency and skill levels among practitioners)
    3. A body of knowledge (consisting of models, methods, techniques, and other tools)
    4. A control mechanism (consisting of requirements, standards, processes, and procedures)
  • the problems found in organizations, especially the change problems, have both a content and a process dimension.
  • The process of change has been characterized as having three basic stages: unfreezing, changing, and re-freezing. This view draws heavily on Kurt Lewin’s adoption of the systems concept of homeostasis or dynamic stability.
  • ...10 more annotations...
  • The Change Process as Problem Solving and Problem Finding
  • What is not useful about this framework is that it does not allow for change efforts that begin with the organization in extremis.
  • What is useful about this framework is that it gives rise to thinking about a staged approach to changing things.
  • Change as a “How” Problem
  • Change as a “What” Problem
  • Change as a “Why” Problem
  • The Approach taken to Change Management Mirrors Management's Mindset
  • People in core units, buffered as they are from environmental turbulence and with a history of relying on adherence to standardized procedures, typically focus on “how” questions.
  • To summarize: Problems may be formulated in terms of “how,” “what” and “why” questions. Which formulation is used depends on where in the organization the person posing the question or formulating the problem is situated, and where the organization is situated in its own life cycle. “How” questions tend to cluster in core units. “What” questions tend to cluster in buffer units. People in perimeter units tend to ask “what” and “how” questions. “Why” questions are typically the responsibility of top management.
  • One More Time: How do you manage change? The honest answer is that you manage it pretty much the same way you’d manage anything else of a turbulent, messy, chaotic nature; that is, you don’t really manage it, you grapple with it. It’s more a matter of leadership ability than management skill.
    - The first thing to do is jump in. You can’t do anything about it from the outside.
    - A clear sense of mission or purpose is essential. The simpler the mission statement the better. “Kick ass in the marketplace” is a whole lot more meaningful than “Respond to market needs with a range of products and services that have been carefully designed and developed to compare so favorably in our customers’ eyes with the products and services offered by our competitors that the majority of buying decisions will be made in our favor.”
    - Build a team. “Lone wolves” have their uses, but managing change isn’t one of them. On the other hand, the right kind of lone wolf makes an excellent temporary team leader.
    - Maintain a flat organizational team structure and rely on minimal and informal reporting requirements.
    - Pick people with relevant skills and high energy levels. You’ll need both.
    - Toss out the rulebook. Change, by definition, calls for a configured response, not adherence to prefigured routines.
    - Shift to an action-feedback model. Plan and act in short intervals. Do your analysis on the fly. No lengthy up-front studies, please. Remember the hare and the tortoise.
    - Set flexible priorities. You must have the ability to drop what you’re doing and tend to something more important.
    - Treat everything as a temporary measure. Don’t “lock in” until the last minute, and then insist on the right to change your mind.
    - Ask for volunteers. You’ll be surprised at who shows up. You’ll be pleasantly surprised by what they can do.
    - Find a good “straw boss” or team leader and stay out of his or her way.
    - Give the team members whatever they ask for, except authority. They’ll generally ask only for what they really need in the way of resources. If they start asking for authority, that’s a signal they’re headed toward some kind of power-based confrontation and that spells trouble. Nip it in the bud!
    - Concentrate dispersed knowledge. Start and maintain an issues logbook. Let anyone go anywhere and talk to anyone about anything. Keep the communications barriers low, widely spaced, and easily hurdled.
    - Initially, if things look chaotic, relax: they are. Remember, the task of change management is to bring order to a messy situation, not pretend that it’s already well organized and disciplined.
  •  
    Note the "why" challenge and the role of leadership
Theron DesRosier

BigDialog: Ask the President-Elect - 0 views

  •  
    BigDialog: Ask the President-Elect. 1. Think of questions you'd like to ask the President-Elect. 2. We fly the top questioners to MIT and ask their questions. 3. You rate the President-Elect's answers.
Joshua Yeidel

The Tower and The Cloud | EDUCAUSE - 0 views

  •  
    "The emergence of the networked information economy is unleashing two powerful forces. On one hand, easy access to high-speed networks is empowering individuals. People can now discover and consume information resources and services globally from their homes. Further, new social computing approaches are inviting people to share in the creation and edification of information on the Internet. Empowerment of the individual -- or consumerization -- is reducing the individual's reliance on traditional brick-and-mortar institutions in favor of new and emerging virtual ones. Second, ubiquitous access to high-speed networks along with network standards, open standards and content, and techniques for virtualizing hardware, software, and services is making it possible to leverage scale economies in unprecedented ways. What appears to be emerging is industrial-scale computing -- a standardized infrastructure for delivering computing power, network bandwidth, data storage and protection, and services. Consumerization and industrialization beg the question "Is this the end of the middle?"; that is, what will be the role of "enterprise" IT in the future? Indeed, the bigger question is what will become of all of our intermediating institutions? This volume examines the impact of IT on higher education and on the IT organization in higher education."
Gary Brown

News: Turning Surveys Into Reforms - Inside Higher Ed - 0 views

  • Molly Corbett Broad, president of the American Council on Education, warned those gathered here that they would be foolish to think that accountability demands were a thing of the past.
  • She said that while she is “impressed” with the work of NSSE, she thinks higher education is “not moving fast enough” right now to have in place accountability systems that truly answer the questions being asked of higher education. The best bet for higher education, she said, is to more fully embrace various voluntary systems, and show that they are used to promote improvements.
  • One reason NSSE data are not used more, some here said, was the decentralized nature of American higher education. David Paris, executive director of the New Leadership Alliance for Student Learning and Accountability, said that “every faculty member is king or queen in his or her classroom.” As such, he said, “they can take the lessons of NSSE” about the kinds of activities that engage students, but they don’t have to. “There is no authority or dominant professional culture that could impel any faculty member to apply” what NSSE teaches about engaged learning, he said.
  • ...4 more annotations...
  • She stressed that NSSE averages may no longer reflect any single reality of one type of faculty member. She challenged Paris’s description of powerful faculty members by noting that many adjuncts have relatively little control over their pedagogy, and must follow syllabuses and rules set by others. So the power to execute NSSE ideas, she said, may not rest with those doing most of the teaching.
  • Research presented here, however, by the Wabash College National Study of Liberal Arts Education offered concrete evidence of direct correlations between NSSE attributes and specific skills, such as critical thinking skills. The Wabash study, which involves 49 colleges of all types, features cohorts of students being analyzed on various NSSE benchmarks (for academic challenge, for instance, or supportive campus environment or faculty-student interaction) and various measures of learning, such as tests to show critical thinking skills or cognitive skills or the development of leadership skills.
  • The irony of the Wabash work with NSSE data and other data, Blaich said, was that it demonstrates the failure of colleges to act on information they get -- unless someone (in this case Wabash) drives home the ideas. “In every case, after collecting loads of information, we have yet to find a single thing that institutions didn’t already know. Everyone at the institution didn’t know -- it may have been filed away,” he said, but someone had the data. “It just wasn’t followed. There wasn’t sufficient organizational energy to use that data to improve student learning.”
  • “I want to try to make the point that there is a distinction between participating in NSSE and using NSSE," he said. "In the end, what good is it if all you get is a report?"
  •  
    An interesting discussion, exploring basic questions CTLT folks are familiar with: how to use survey data, and how to identify and address its limitations. Ten years after the launch of the National Survey of Student Engagement, many worry that colleges have been speedier to embrace giving the questionnaire than using its results. And some experts want changes in what the survey measures.
    I note these limitations, near the end of the article: Adrianna Kezar, associate professor of higher education at the University of Southern California, noted that NSSE's questions were drafted based on the model of students attending a single residential college. Indeed many of the questions concern out-of-class experiences (both academic and otherwise) that suggest someone is living in a college community. Kezar noted that this is no longer a valid assumption for many undergraduates. Nor is the assumption that they have time to interact with peers and professors out of class when many are holding down jobs. Nor is the assumption -- when students are "swirling" from college to college, or taking courses at multiple colleges at the same time -- that any single institution is responsible for their engagement.
    Further, Kezar noted that there is an implicit assumption in NSSE of faculty being part of a stable college community. Questions about seeing faculty members outside of class, she said, don't necessarily work when adjunct faculty members may lack offices or the ability to interact with students from one semester to the next. Kezar said that she thinks full-time adjunct faculty members may actually encourage more engagement than tenured professors because the adjuncts are focused on teaching and generally not on research. And she emphasized that concerns about the impact of part-time adjuncts on student engagement arise not out of criticism of those individuals, but of the system that assigns them teaching duties without much support.
  •  
    Repeat of highlighted resource, but merits revisiting.
Nils Peterson

Half an Hour: Open Source Assessment - 0 views

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson
       
      I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum, I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion the gulf between different models of assessment became apparent. Designers of learning resources, for example, have only the vaguest of indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson
       
      assumes a test as the form of assessment, rather than something more open ended.
  • ...8 more annotations...
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson
       
      See our Where are you on the spectrum, Assessment is locked vs open
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes)
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives.
    Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them.
    Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say. Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less defined community, a person may achieve a composite score called a 'reputation'. This will vary from community to community.
  •  
    Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
Theron DesRosier

CDC Evaluation Working Group: Framework - 2 views

  • Framework for Program Evaluation
  • Purposes. The framework was developed to:
    - Summarize and organize the essential elements of program evaluation
    - Provide a common frame of reference for conducting evaluations
    - Clarify the steps in program evaluation
    - Review standards for effective program evaluation
    - Address misconceptions about the purposes and methods of program evaluation
  • Assigning value and making judgments regarding a program on the basis of evidence requires answering the following questions:
    - What will be evaluated? (i.e., what is "the program" and in what context does it exist?)
    - What aspects of the program will be considered when judging program performance?
    - What standards (i.e., type or level of performance) must be reached for the program to be considered successful?
    - What evidence will be used to indicate how the program has performed?
    - What conclusions regarding program performance are justified by comparing the available evidence to the selected standards?
    - How will the lessons learned from the inquiry be used to improve public health effectiveness?
  • ...3 more annotations...
  • These questions should be addressed at the beginning of a program and revisited throughout its implementation. The framework provides a systematic approach for answering these questions.
  • Steps in Evaluation Practice:
    - Engage stakeholders: those involved, those affected, primary intended users
    - Describe the program: need, expected effects, activities, resources, stage, context, logic model
    - Focus the evaluation design: purpose, users, uses, questions, methods, agreements
    - Gather credible evidence: indicators, sources, quality, quantity, logistics
    - Justify conclusions: standards, analysis/synthesis, interpretation, judgment, recommendations
    - Ensure use and share lessons learned: design, preparation, feedback, follow-up, dissemination
    Standards for "Effective" Evaluation:
    - Utility: serve the information needs of intended users
    - Feasibility: be realistic, prudent, diplomatic, and frugal
    - Propriety: behave legally, ethically, and with due regard for the welfare of those involved and those affected
    - Accuracy: reveal and convey technically accurate information
  • The challenge is to devise an optimal — as opposed to an ideal — strategy.
  •  
    Framework for Program Evaluation by the CDC This is a good resource for program evaluation. Click through "Steps and Standards" for information on collecting credible evidence and engaging stakeholders.
Gary Brown

The Quality Question - Special Reports - The Chronicle of Higher Education - 1 views

shared by Gary Brown on 30 Aug 10
  • Few reliable, comparable measures of student learning across colleges exist. Standardized assessments like the Collegiate Learning Assessment are not widely used—and many experts say those tests need refinement in any case.
    • Gary Brown
       
      I am hoping the assumptions underlying this sentence do not frame the discussion. The extent to which it has in the past parallels the lack of progress. Standardized comparisons evince nothing but the wrong questions.
  • "We are the most moribund field that I know of," Mr. Zemsky said in an interview. "We're even more moribund than county government."
  • Robert Zemsky
Nils Peterson

Two Bits » Modulate This Book - 0 views

  • Free Software is good to think with… How does one re-mix scholarship? One of the central questions of this book is how Free Software and Free Culture think about re-using, re-mixing, modifying and otherwise building on the work of others. It seems obvious that the same question should be asked of scholarship. Indeed the idea that scholarship is cumulative and builds on the work of others is a bit of a platitude even. But how?
    • Nils Peterson
       
      This is Chris Kelty's site for his book Two Bits: The Cultural Significance of Free Software. Learned about the idea "recursive public" at the P2PU event, and from that found Kelty. This quote leads off the page that is inviting readers to "modulate" the book. The page before gives a free download in PDF and HTML, the CC license, and an invitation to remix, use, etc., and to "Modulate", so I came to see what that term might mean.
  • I think Free Software is “good to think with” in the classic anthropological sense.  Part of the goal of launching Two Bits has been to experiment with “modulations” of the book–and of scholarship more generally–a subject discussed at length in the text. Free Software has provided a template, and a kind of inspiration for people to experiment with new modes of reuse, remixing, modulating and transducing collaboratively created objects.
  • As such, “Modulations” is a project, concurrent with the book, but not necessarily based on it, which is intended to explore the questions raised there, but in other works, with and by other scholars, a network of researchers and projects on free and open source software, on “recursive publics,” on publics and public sphere theory generally, and on new projects and problems confronted by Free Software and its practices…
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Nils Peterson

The World Question Center 2010 - 0 views

  • This year's Question is "How is the Internet changing the way YOU think?" Not "How is the Internet changing the way WE think?" We spent a lot of time going back and forth on "YOU" vs. "WE" and came to the conclusion to go with "YOU", the reason being that Edge is a conversation.
    • Nils Peterson
       
      EDGE question for 2010.
  • We wanted people to think about the "Internet", which includes, but is a much bigger subject than, the Web, an application on the Internet, or search, browsing, etc., which are apps on the Web. Back in 1996, computer scientist and visionary Danny Hillis pointed out that when it comes to the Internet, "Many people sense this, but don't want to think about it because the change is too profound."
Gary Brown

An Expert Surveys the Assessment Landscape - Student Affairs - The Chronicle of Higher ... - 1 views

shared by Gary Brown on 29 Oct 09
    • Gary Brown
       
      Illustration of a vision of assessment that separates assessment from teaching and learning.
  • If assessment is going to be required by accrediting bodies and top administrators, then we need administrative support and oversight of assessment on campus, rather than once again offloading more work onto faculty members squeezed by teaching & research inflation.
  • Outcomes assessment does not have to be in the form of standardized tests, nor does including assessment in faculty review have to translate into percentages achieving a particular score on such a test. What it does mean is that when the annual review comes along, one should be prepared to answer the question, "How do you know that what you're doing results in student learning?" We've all had the experience of realizing at times that students took in something very different from what we intended (if we were paying attention at all). So it's reasonable to be asked about how you do look at that question and how you decide when your current practice is successful or when it needs to be modified. That's simply being a reflective practitioner in the classroom which is the bare minimum students should expect from us. And that's all assessment is - answering that question, reflecting on what you find, and taking next steps to keep doing what works well and find better solutions for the things that aren't working well.
  • ...2 more annotations...
  • We need to really show HOW we use the results of assessment in the revamping of our curriculum, with real case studies. Each department should insist and be ready to demonstrate real case studies of this type of use of Assessment.
  • Socrates said, "A life that is not examined is not worth living." Wonderful as this may be as a metaphor, we should add to it: "and once examined, do something to improve it."
Kimberly Green

Movie Clips and Copyright - 0 views

  •  
    Video clips -- sometimes the copyright question comes up, so this green light is good news. Video clips may lend themselves to scenario-based assessments: instead of reading a long article, students could look at a digitally presented case to analyze and critique, which might open up a lot of possibilities for assessment activities.
    The latest round of rule changes, issued Monday by the U.S. Copyright Office, deals with what is legal and what is not as far as decrypting and repurposing copyrighted content. One change in particular is making waves in academe: an exemption that allows professors in all fields and "film and media studies students" to hack encrypted DVD content and clip "short portions" into documentary films and "non-commercial videos." (The agency does not define "short portions.") This means that any professors can legally extract movie clips and incorporate them into lectures, as long as they are willing to decrypt them, a task made relatively easy by widely available programs known as "DVD rippers." The exemption also permits professors to use ripped content in non-classroom settings that are similarly protected under "fair use," such as presentations at academic conferences.
Kimberly Green

Strategic National Arts Alumni Project (SNAAP) - 0 views

  •  
    WSU is participating in this survey. Looks interesting: a follow-up on students who graduate with an arts degree. Could be useful in program assessment in a number of ways (a model, sample questions, as well as ways to leverage nationally collected data).
    "Welcome to the Strategic National Arts Alumni Project (SNAAP), an annual online survey, data management, and institutional improvement system designed to enhance the impact of arts-school education. SNAAP partners with arts high schools, art and design colleges, conservatories and arts programs within colleges and universities to administer the survey to their graduates. SNAAP is a project of the Indiana University Center for Postsecondary Research in collaboration with the Vanderbilt University Curb Center for Art, Enterprise, and Public Policy. Lead funding is provided by the Surdna Foundation, with major partnership support from the Houston Endowment, Barr Foundation, Cleveland Foundation, Educational Foundation of America and the National Endowment for the Arts."
Joshua Yeidel

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • "The CLA is really an authentic assessment process,"
    • Joshua Yeidel
       
      What is the meaning of "authentic" in this statement? It certainly isn't "situated in the real world" or "of intrinsic value".
  • it measures analytical ability, problem-solving ability, critical thinking, and communication.
  • the CLA typically reports scores on a "value added" basis, controlling for the scores that students earned on the SAT or ACT while in high school.
    • Joshua Yeidel
       
      If SAT and ACT are measuring the same things as CLA, then why not just use them? If they are measuring different things, why "control for" their scores?
  • ...5 more annotations...
  • improved models of instruction.
  • add CLA-style assignments to their liberal-arts courses.
    • Joshua Yeidel
       
      Maybe the best way to prepare for the test, but is it the best way to develop analytical ability, et al.?
  • "If a college pays attention to learning and helps students develop their skills—whether they do that by participating in our programs or by doing things on their own—they probably should do better on the CLA,"
    • Joshua Yeidel
       
      Just in case anyone missed the message: pay attention to learning, and you'll _probably_ do better on the CLA. Get students to practice CLA tasks, and you _will_ do better on the CLA.
  • "Standardized tests of generic skills—I'm not talking about testing in the major—are so much a measure of what students bring to college with them that there is very little variance left out of which we might tease the effects of college," says Ms. Banta, who is a longtime critic of the CLA. "There's just not enough variance there to make comparative judgments about the comparative quality of institutions."
    • Joshua Yeidel
       
      It's not clear what "standardized tests" means in this comment. Does the "lack of variance" apply to all assessments (including, e.g., e-portfolios)?
  • Can the CLA fill both of those roles?
  •  
    A summary of the current state of "thinking" with regard to CLA. Many fallacies and contradictions are (unintentionally) exposed. At least CLA appears to be more about skills than content (though the question of how it is graded isn't even raised), but the "performance task" approach is the smallest possible step in that direction.
Gary Brown

The Potential Impact of Common Core Standards - 2 views

  • According to the Common Core State Standards Initiative (CCSSI), the goal “is to ensure that academic expectations for students are of high quality and consistent across all states and territories.” To educators across the nation, this means they now have to sync up all curriculum in math and language arts for the benefit of the students.
  • “They are evidence based, aligned with college and work expectations, include rigorous content and skills, and are informed by other top performing countries.”
  • “Educational standards help teachers ensure their students have the skills and knowledge they need to be successful by providing clear goals for student learning.” They are really just guidelines for students, making sure they are on the right track with their learning.
  • ...2 more annotations...
  • When asked the simple question of what school standards are, most students are unable to answer the question. When the concept is explained, however, they really do not know if having common standards would make a difference or not. Codie Allen, a senior in the Vail School District, says, “I think that things will pretty much stay stagnant; people aren’t really going to change because of standards.”
  • Council of Chief State School Officers. Common Core State Standards Initiative, 2010.
Theron DesRosier

#3m10p - 1 views

  •  
    "3M10P is a university project in which 10 students work for 3 months with the goal of writing 10 academic journal papers. The project started on 2010-09-01 and will run until 2010-12-01. On the way, we will need to upset the academic publishing applecart quite a bit: attracting peer commentary on the drafts as they get written, pushing the limits of text re-use between papers and questioning the status of author. This is play, but this is very serious play."
Gary Brown

Online Colleges and States Are at Odds Over Quality Standards - Wired Campus - The Chro... - 1 views

  • the group called for a more uniform accreditation standard across state lines as well as a formal framework for getting a conversation on regulation started.
  • College officials claim that what states really mean when they discuss quality in online education is the credibility of online education in general. John F. Ebersole, president of Excelsior College, said “there is a bit of a double standard” when it comes to regulating online institutions; states, he feels, apply stricter standards to the online world.
  •  
    I note the underlying issue of "credibility" as the core of accreditation. It raises the question, again: why would standardized tests be presumed, as Excelsior presumes, to be a better indicator than a model of stakeholder endorsement?
Gary Brown

Reviewers Unhappy with Portfolio 'Stuff' Demand Evidence -- Campus Technology - 1 views

  • An e-mail comment from one reviewer: “In reviewing about 100-some-odd accreditation reports in the last few months, it has been useful in our work here at Washington State University to distinguish ‘stuff’ from evidence. We have adopted an understanding that evidence is material or data that has been analyzed and that can be used, as dictionary definitions state, as ‘proof.’ A student gathers ‘stuff’ in the ePortfolio, selects, reflects, etc., and presents evidence that makes a case (or not)… The use of this distinction has been indispensable here. An embarrassing amount of academic assessment work culminates in the presentation of ‘stuff’ that has not been analyzed--student evaluations, grades, pass rates, retention, etc. After reading these ‘self studies,’ we ask the stumping question--fine, but what have you learned? Much of the ‘evidence’ we review has been presented without thought or with the general assumption that it is somehow self-evident… But too often that kind of evidence has not focused on an issue or problem or question. It is evidence that provides proof of nothing.
  •  
    a bit of a context shift, but....
Joshua Yeidel

Sharepoint and Enterprise 2.0: The good, the bad, and the ugly | Enterprise Web 2.0 | Z... - 0 views

  •  
    ...due to the fact that the single most frequently asked question I get about Enterprise 2.0 is whether SharePoint is a suitable platform for it (short answer: it definitely depends), I've spent the last few weeks taking a hard look at SharePoint the product itself, talked extensively with both SharePoint and Enterprise 2.0 practitioners, and created the resulting analysis.