
CTLT and Friends: Group items tagged "alignment"


Gary Brown

Change Management 101: A Primer - 1 views

shared by Gary Brown on 13 Jan 10
  • To recapitulate, there are at least four basic definitions of change management:
    1. The task of managing change (from a reactive or a proactive posture)
    2. An area of professional practice (with considerable variation in competency and skill levels among practitioners)
    3. A body of knowledge (consisting of models, methods, techniques, and other tools)
    4. A control mechanism (consisting of requirements, standards, processes and procedures)
  • the problems found in organizations, especially the change problems, have both a content and a process dimension.
  • The process of change has been characterized as having three basic stages: unfreezing, changing, and re-freezing. This view draws heavily on Kurt Lewin’s adoption of the systems concept of homeostasis or dynamic stability.
  • ...10 more annotations...
  • The Change Process as Problem Solving and Problem Finding
  • What is not useful about this framework is that it does not allow for change efforts that begin with the organization in extremis.
  • What is useful about this framework is that it gives rise to thinking about a staged approach to changing things.
  • Change as a “How” Problem
  • Change as a “What” Problem
  • Change as a “Why” Problem
  • The Approach taken to Change Management Mirrors Management's Mindset
  • People in core units, buffered as they are from environmental turbulence and with a history of relying on adherence to standardized procedures, typically focus on “how” questions.
  • To summarize: Problems may be formulated in terms of “how,” “what” and “why” questions. Which formulation is used depends on where in the organization the person posing the question or formulating the problem is situated, and where the organization is situated in its own life cycle. “How” questions tend to cluster in core units. “What” questions tend to cluster in buffer units. People in perimeter units tend to ask “what” and “how” questions. “Why” questions are typically the responsibility of top management.
  • One More Time: How do you manage change? The honest answer is that you manage it pretty much the same way you’d manage anything else of a turbulent, messy, chaotic nature, that is, you don’t really manage it, you grapple with it. It’s more a matter of leadership ability than management skill.
    - The first thing to do is jump in. You can’t do anything about it from the outside.
    - A clear sense of mission or purpose is essential. The simpler the mission statement the better. “Kick ass in the marketplace” is a whole lot more meaningful than “Respond to market needs with a range of products and services that have been carefully designed and developed to compare so favorably in our customers’ eyes with the products and services offered by our competitors that the majority of buying decisions will be made in our favor.”
    - Build a team. “Lone wolves” have their uses, but managing change isn’t one of them. On the other hand, the right kind of lone wolf makes an excellent temporary team leader.
    - Maintain a flat organizational team structure and rely on minimal and informal reporting requirements.
    - Pick people with relevant skills and high energy levels. You’ll need both.
    - Toss out the rulebook. Change, by definition, calls for a configured response, not adherence to prefigured routines.
    - Shift to an action-feedback model. Plan and act in short intervals. Do your analysis on the fly. No lengthy up-front studies, please. Remember the hare and the tortoise.
    - Set flexible priorities. You must have the ability to drop what you’re doing and tend to something more important.
    - Treat everything as a temporary measure. Don’t “lock in” until the last minute, and then insist on the right to change your mind.
    - Ask for volunteers. You’ll be surprised at who shows up. You’ll be pleasantly surprised by what they can do.
    - Find a good “straw boss” or team leader and stay out of his or her way.
    - Give the team members whatever they ask for — except authority. They’ll generally ask only for what they really need in the way of resources. If they start asking for authority, that’s a signal they’re headed toward some kind of power-based confrontation and that spells trouble. Nip it in the bud!
    - Concentrate dispersed knowledge. Start and maintain an issues logbook. Let anyone go anywhere and talk to anyone about anything. Keep the communications barriers low, widely spaced, and easily hurdled.
    - Initially, if things look chaotic, relax — they are. Remember, the task of change management is to bring order to a messy situation, not pretend that it’s already well organized and disciplined.
  •  
    Note the "why" challenge and the role of leadership
Gary Brown

Matthew Lombard - 0 views

  • 5. Which measure(s) of intercoder reliability should researchers use? There are literally dozens of different measures, or indices, of intercoder reliability. Popping (1988) identified 39 different "agreement indices" for coding nominal categories, which excludes several techniques for interval and ratio level data. But only a handful of techniques are widely used. In communication the most widely used indices are:
    - Percent agreement
    - Holsti's method
    - Scott's pi (π)
    - Cohen's kappa (κ)
    - Krippendorff's alpha (α)
    Just some of the indices proposed, and in some cases widely used, in other fields are Perreault and Leigh's (1989) Ir measure; Tinsley and Weiss's (1975) T index; Bennett, Alpert, and Goldstein's (1954) S index; Lin's (1989) concordance coefficient; Hughes and Garrett’s (1990) approach based on Generalizability Theory; and Rust and Cooil's (1994) approach based on "Proportional Reduction in Loss" (PRL). It would be nice if there were one universally accepted index of intercoder reliability. But despite all the effort that scholars, methodologists and statisticians have devoted to developing and testing indices, there is no consensus on a single, "best" one. While there are several recommendations for Cohen's kappa (e.g., Dewey (1983) argued that despite its drawbacks, kappa should still be "the measure of choice") and this index appears to be commonly used in research that involves the coding of behavior (Bakeman, 2000), others (notably Krippendorff, 1978, 1987) have argued that its characteristics make it inappropriate as a measure of intercoder agreement. (A worked example of two of these indices appears after the notes below.)
  •  
    for our formalizing of assessment work
  •  
    inter-rater reliability
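The indices named above are easier to keep apart with a concrete computation. The Python sketch below is our own, not from Lombard's page: it computes two of the communication-field staples, percent agreement and Cohen's kappa, from scratch. The coder labels are invented, and for real studies a maintained implementation (e.g., scikit-learn's cohen_kappa_score) is preferable.

```python
# Percent agreement and Cohen's kappa for two coders over nominal categories.
# Illustrative only; the labels below are invented.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of units on which the two coders chose the same category."""
    assert len(coder_a) == len(coder_b) and coder_a
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected chance agreement from each coder's marginal category frequencies.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() | freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

a = ["pos", "pos", "neg", "neg", "pos", "neu", "neg", "pos"]
b = ["pos", "neg", "neg", "neg", "pos", "neu", "pos", "pos"]
print(f"percent agreement = {percent_agreement(a, b):.3f}")  # 0.750
print(f"Cohen's kappa     = {cohens_kappa(a, b):.3f}")       # ~0.579
```

The gap between the two numbers is the substance of the debate quoted above: raw percent agreement counts agreement that would occur by chance, which is what kappa (and, by a different route, Krippendorff's alpha) tries to correct for.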
Gary Brown

Top News - School of the Future: Lessons in failure - 0 views

  • School of the Future: Lessons in failure. How Microsoft's and Philadelphia's innovative school became an example of what not to do. By Meris Stansbury, Associate Editor. Primary Topic Channel: Tech Leadership. [Image: Students at the School of the Future when it first opened in 2006.]
    When it opened its doors in 2006, Philadelphia's School of the Future (SOF) was touted as a high school that would revolutionize education: It would teach at-risk students critical 21st-century skills needed for college and the work force by emphasizing project-based learning, technology, and community involvement. But three years, three superintendents, four principals, and countless problems later, experts at a May 28 panel discussion hosted by the American Enterprise Institute (AEI) agreed: The Microsoft-inspired project has been a failure so far.
    Microsoft points to the school's rapid turnover in leadership as the key reason for this failure, but other observers question why the company did not take a more active role in translating its vision for the school into reality. Regardless of where the responsibility lies, the project's failure to date offers several cautionary lessons in school reform--and panelists wondered if the school could use these lessons to succeed in the future.
  •  
    The discussion of Microsoft's Philadelphia School of the Future, which has failed so far. (Partial access to article only.)
  •  
    I highlight this as a model where faculty and their teaching beliefs appear not to have been addressed.
Theron DesRosier

CDC Evaluation Working Group: Framework - 2 views

  • Framework for Program Evaluation
  • Purposes: The framework was developed to:
    - Summarize and organize the essential elements of program evaluation
    - Provide a common frame of reference for conducting evaluations
    - Clarify the steps in program evaluation
    - Review standards for effective program evaluation
    - Address misconceptions about the purposes and methods of program evaluation
  • Assigning value and making judgments regarding a program on the basis of evidence requires answering the following questions:
    - What will be evaluated? (i.e., what is "the program" and in what context does it exist?)
    - What aspects of the program will be considered when judging program performance?
    - What standards (i.e., type or level of performance) must be reached for the program to be considered successful?
    - What evidence will be used to indicate how the program has performed?
    - What conclusions regarding program performance are justified by comparing the available evidence to the selected standards?
    - How will the lessons learned from the inquiry be used to improve public health effectiveness?
  • ...3 more annotations...
  • These questions should be addressed at the beginning of a program and revisited throughout its implementation. The framework provides a systematic approach for answering these questions.
  • Steps in Evaluation Practice:
    - Engage stakeholders: those involved, those affected, primary intended users
    - Describe the program: need, expected effects, activities, resources, stage, context, logic model
    - Focus the evaluation design: purpose, users, uses, questions, methods, agreements
    - Gather credible evidence: indicators, sources, quality, quantity, logistics
    - Justify conclusions: standards, analysis/synthesis, interpretation, judgment, recommendations
    - Ensure use and share lessons learned: design, preparation, feedback, follow-up, dissemination
    Standards for "Effective" Evaluation:
    - Utility: serve the information needs of intended users
    - Feasibility: be realistic, prudent, diplomatic, and frugal
    - Propriety: behave legally, ethically, and with due regard for the welfare of those involved and those affected
    - Accuracy: reveal and convey technically accurate information
  • The challenge is to devise an optimal — as opposed to an ideal — strategy.
  •  
    Framework for Program Evaluation by the CDC. This is a good resource for program evaluation. Click through "Steps and Standards" for information on collecting credible evidence and engaging stakeholders. (A toy rendering of the steps in code follows.)
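Because the framework's six steps and four standards are a fixed, enumerable structure, they can be carried around as data. The Python sketch below is our own toy rendering (the CDC publishes prose, not code), with an invented program name and output format:

```python
# The CDC framework's six steps and four standards as plain data,
# plus a helper that emits a planning checklist for a named program.

STEPS = [
    ("Engage stakeholders", "those involved, those affected, primary intended users"),
    ("Describe the program", "need, expected effects, activities, resources, stage, context, logic model"),
    ("Focus the evaluation design", "purpose, users, uses, questions, methods, agreements"),
    ("Gather credible evidence", "indicators, sources, quality, quantity, logistics"),
    ("Justify conclusions", "standards, analysis/synthesis, interpretation, judgment, recommendations"),
    ("Ensure use and share lessons learned", "design, preparation, feedback, follow-up, dissemination"),
]
STANDARDS = ["utility", "feasibility", "propriety", "accuracy"]

def checklist(program: str) -> str:
    """Render the six steps as a revisitable checklist for one program."""
    lines = [f"Evaluation plan for: {program}"]
    for i, (step, detail) in enumerate(STEPS, start=1):
        lines.append(f"  {i}. [ ] {step} ({detail})")
    lines.append("Judge the plan itself against: " + ", ".join(STANDARDS))
    return "\n".join(lines)

print(checklist("campus writing program"))
```

Keeping the steps as data rather than prose makes it easy to do what the framework asks: revisit the same questions throughout implementation, not just at the start.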
Gary Brown

More thinking about the alignment project « The Weblog of (a) David Jones - 0 views

  • The dominant teaching experience for academics is teaching an existing course, generally one the academic has taught previously. In such a setting, academics spend most of their time fine-tuning a course or making minor modifications to material or content (Stark, 2000).
  • “many academic staff continue to employ inappropriate, teacher-centered, content focused strategies”. If the systems and processes of university teaching and learning practice do not encourage and enable everyday consideration of alignment, is it surprising that many academics don’t consider alignment?
  • student learning outcomes are significantly higher when there are strong links between those learning outcomes, assessment tasks, and instructional activities and materials.
  • ...11 more annotations...
  • Levander and Mikkola (2009) describe the full complexity of managing alignment at the degree level which makes it difficult for the individual teacher and the program coordinator to keep connections between courses in mind.
  • Make explicit the quality model.
  • Build in support for quality enhancement.
  • Institute a process for quality feasibility.
  • Cohen (1987) argues that limitations in learning are not mainly caused by ineffective teaching, but are instead mostly the result of a misalignment between what teachers teach, what they intend to teach, and what they assess as having been taught.
  • Raban (2007) observes that the quality management systems of most universities employ procedures that are retrospective and weakly integrated with long term strategic planning. He continues to argue that the conventional quality management systems used by higher education are self-defeating as they undermine the commitment and motivation of academic staff through an apparent lack of trust, and divert resources away from the core activities of teaching and research (Raban, 2007, p. 78).
  • Ensure participation of formal institutional leadership and integration with institutional priorities.
  • Action research perspective, flexible responsive.
  • Having a scholarly, not bureaucratic focus.
  • Modifying an institutional information system.
  • A fundamental enabler of this project is the presence of an information system that is embedded into the everyday practice of teaching and learning (for both students and staff) that encourages and enables consideration of alignment.
  •  
    A long blog post, but the underlying principles align with the Guide to Effective Assessment on many levels. (A toy alignment check in code follows.)
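Cohen's misalignment point, together with the post's closing call for an information system that makes alignment an everyday consideration, suggests a mechanical check. The Python sketch below is our own illustration, not code from the blog; the outcomes, activities, and assessments are invented, and a real system would pull them from curriculum data:

```python
# Toy alignment check: map each outcome to the activities that teach it and
# the assessments that measure it, then flag the gaps. All data is invented.

course = {
    "O1: explain Lewin's change model": {
        "activities": ["lecture 3", "reading 2"],
        "assessments": ["quiz 1"],
    },
    "O2: design a program evaluation": {
        "activities": ["workshop 1"],
        "assessments": [],  # taught but never assessed
    },
    "O3: critique a scoring rubric": {
        "activities": [],   # assessed but never taught
        "assessments": ["peer-review task"],
    },
}

def alignment_report(course):
    """Print one alignment verdict per learning outcome."""
    for outcome, links in course.items():
        taught = bool(links["activities"])
        assessed = bool(links["assessments"])
        if taught and assessed:
            verdict = "aligned"
        elif taught:
            verdict = "taught but never assessed"
        elif assessed:
            verdict = "assessed but never taught"
        else:
            verdict = "orphaned outcome"
        print(f"{outcome}: {verdict}")

alignment_report(course)
```

Even a report this crude surfaces exactly the Cohen-style gaps (outcomes taught but never assessed, and vice versa), which is the kind of everyday consideration of alignment the post argues current systems fail to encourage.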
Gary Brown

Encyclopedia of Educational Technology - 1 views

  • The revised taxonomy (Anderson and Krathwohl, 2001) incorporates both the kind of knowledge to be learned (knowledge dimension) and the process used to learn (cognitive process), allowing for the instructional designer to efficiently align objectives to assessment techniques. Both dimensions are illustrated in the following table that can be used to help write clear, focused objectives.
  • Teachers may also use the new taxonomy dimensions to examine current objectives in units, and to revise the objectives so that they will align with one another, and with assessments.
  • Anderson and Krathwohl also list specific verbs that can be used when writing objectives for each column of the cognitive process dimension.
  •  
    Bloom has not gone away, and this revision helps delimit the nominalist implications. (A toy rendering of the two-dimensional table follows.)
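Since the revised taxonomy is a two-dimensional table, each objective and each assessment can be treated as a coordinate in it, and misalignment shows up as a coordinate mismatch. The Python sketch below is our own toy rendering, not Anderson and Krathwohl's notation, and the example objectives are invented:

```python
# The revised taxonomy as a grid: knowledge dimension x cognitive process.
# Objectives and assessments are classified into cells; a mismatch flags an
# objective assessed at a different level than intended. Data is invented.

KNOWLEDGE = ("factual", "conceptual", "procedural", "metacognitive")
PROCESS = ("remember", "understand", "apply", "analyze", "evaluate", "create")

def cell(knowledge: str, process: str) -> tuple:
    """Validate and return one (knowledge, process) cell of the table."""
    assert knowledge in KNOWLEDGE and process in PROCESS
    return (knowledge, process)

objectives = {
    "Summarize Lewin's three-stage model": cell("conceptual", "understand"),
    "Compute an intercoder reliability index": cell("procedural", "apply"),
}
assessments = {
    "Summarize Lewin's three-stage model": cell("factual", "remember"),
    "Compute an intercoder reliability index": cell("procedural", "apply"),
}

for objective, intended in objectives.items():
    measured = assessments[objective]
    status = "aligned" if measured == intended else f"assessed at {measured}, not {intended}"
    print(f"{objective}: {status}")
```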
Gary Brown

Office of the President: Elson S. Floyd's Blog - 2 views

  • WSU research directly supports the economies of so many of our communities statewide – communities and economies many of our graduates will soon take part in when they enter the workforce. Therefore, it is imperative we align WSU to work closely and strategically with the state, and more specifically, very intimately with the counties, communities and businesses to maximize the efficiency and value of our extension, educational, outreach and research programs.
  •  
    The president speaks about community engagement and aligning our goals...
Gary Brown

The Potential Impact of Common Core Standards - 2 views

  • According to the Common Core State Standards Initiative (CCSSI), the goal “is to ensure that academic expectations for students are of high quality and consistent across all states and territories.” To educators across the nation, this means they now have to sync up all curriculum in math and language arts for the benefit of the students.
  • They are evidence based, aligned with college and work expectations, include rigorous content and skills, and are informed by other top performing countries.”
  • “Educational standards help teachers ensure their students have the skills and knowledge they need to be successful by providing clear goals for student learning.” They are really just guidelines for students, making sure they are on the right track with their learning.
  • ...2 more annotations...
  • When asked the simple question of what school standards are, most students are unable to answer the question. When the concept is explained, however, they really do not know if having common standards would make a difference or not. Codie Allen, a senior in the Vail School District, says, “I think that things will pretty much stay stagnant; people aren’t really going to change because of standards.”
  • Council of Chief State School Officers. Common Core State Standards Initiative, 2010.
Joshua Yeidel

ACM Ubiquity - An Interview with Michael Schrage - 0 views

  • I learn about the organization's innovation culture as follows: I say that, when someone comes up with an idea you think is a good one and people say, "We can't do that because..." then whatever follows the words "we can't do that because... " is your innovation culture.
  • UBIQUITY: What turns people into such dolts?
    SCHRAGE: Internal imperatives.
  •  
    An MIT Media Lab expert on innovation says it's about outcomes, not ideas.
Nils Peterson

National Institute for Learning Outcomes Assessment - 1 views

  • Of the various ways to assess student learning outcomes, many faculty members prefer what are called “authentic” approaches that document student performance during or at the end of a course or program of study. Authentic assessments typically ask students to generate rather than choose a response to demonstrate what they know and can do. In their best form, such assessments are flexible and closely aligned with teaching and learning processes, and represent some of students’ more meaningful educational experiences. In this paper, assessment experts Trudy Banta, Merilee Griffin, Theresa Flateby, and Susan Kahn describe the development of several promising authentic assessment approaches.
  • Educators and policy makers in postsecondary education are interested in assessment processes that improve student learning, and at the same time provide comparable data for the purpose of demonstrating accountability.
  • First, ePortfolios provide an in-depth, long-term view of student achievement on a range of skills and abilities instead of a quick snapshot based on a single sample of learning outcomes. Second, a system of rubrics used to evaluate student writing and depth of learning has been combined with faculty learning and team assessments, and is now being used at multiple institutions. Third, online assessment communities link local faculty members in collaborative work to develop shared norms and teaching capacity, and then link local communities with each other in a growing system of assessment.
    • Nils Peterson
       
      hey, does this sound familiar? i'm guessing the portfolios are not anywhere on the Internet, but we're otherwise in good company
  • ...1 more annotation...
  • Three Promising Alternatives for Assessing College Students' Knowledge and Skills
    • Nils Peterson
       
      I'm not sure they are 'alternatives' so much as 3 elements we would combine into a single strategy
Gary Brown

News: No Letup From Washington - Inside Higher Ed - 1 views

  • Virtually all of the national higher education leaders who spoke to the country's largest accrediting group sent a version of the same message: The federal government is dead serious about holding colleges and universities accountable for their performance, and can be counted on to impose undesirable requirements if higher education officials don't make meaningful changes themselves.
  • "This is meant to be a wakeup call," Molly Corbett Broad, president of the American Council on Education, said in Monday's keynote address
  • “I believe it’s wise for us to assume they will have little reservation about regulating higher education now that they know it is too important to fail.”
  • ...7 more annotations...
  • Obama administration will be tough on colleges because its officials value higher education and believe it needs to perform much better, and successfully educate many more students, to drive the American economy.
  • In her own speech to the Higher Learning Commission’s members on Sunday, Sylvia Manning, the group’s president, cited several signs that the new administration seemed willing to delve into territory that not long ago would have been viewed as off-limits to federal intrusion. Among them:
    - A recently published “draft” of a guide to accreditation that many accrediting officials believe is overly prescriptive.
    - A just-completed round of negotiations over proposed rules that deal with the definition of a “credit hour” and other issues that touch on academic quality -- areas that have historically been the province of colleges and their faculties.
    - And, of special relevance for the Higher Learning Commission, a trio of critical letters from the Education Department’s inspector general challenging the association’s policies and those of two other regional accreditors on key matters -- and in North Central’s case, questioning its continued viability.
    With that stroke, Manning noted, the department’s newfound activism “has come to the doorstep, or into the living room, of HLC.”
  • Pressure to measure student learning -- to find out which tactics and approaches are effective, which create efficiency without lowering results -- is increasingly coming from what Broad called the Obama administration's "kitchen cabinet," foundations like the Lumina Foundation for Education (which she singled out) to which the White House and Education Department are increasingly looking for education policy help.
  • She cited an October speech in which the foundation's president, Jamie P. Merisotis, said that student learning should be recognized as the "primary measure of quality in higher education," and heralded the European Union's Bologna process as a potential path for making that so.
  • “We cannot lay low and hope that the glare of the spotlight will eventually fall on others,” Broad told the Higher Learning Commission audience.
  • While higher ed groups have been warned repeatedly that they must act before Congress next renews the Higher Education Act -- a process that will begin in earnest in two or three years -- the reality is that politicians in Washington no longer feel obliged to hold off on major changes to higher education policy until that main law is reviewed. Congress has passed "seven major pieces of legislation" related to higher education in recent years, and "I wish I could tell you that the window is open" until the next reauthorization, Broad said. "But we cannot presume that we have the luxury of years within which to get our collective house in order. We must act quickly."
  • But where will such large-scale change come from? The regional accreditors acting together to align their standards? Groups of colleges working together to agree on a common set of learning outcomes for general education, building on the work of the American Association of Colleges and Universities? No answers here, yet.
  •  
    Note the positions of the participants
Gary Brown

It's the Learning, Stupid - Lumina Foundation: Helping People Achieve Their Potential - 3 views

  • My thesis is this. We live in a world where much is changing, quickly. Economic crises, technology, ideological division, and a host of other factors have all had a profound influence on who we are and what we do in higher education. But when all is said and done, it is imperative that we not lose sight of what matters most. To paraphrase the oft-used maxim of the famous political consultant James Carville, it's the learning, stupid.
  • We believe that, to significantly increase higher education attainment rates, three intermediate outcomes must first occur:
    1. Higher education must use proven strategies to move students to completion.
    2. Quality data must be used to improve student performance and inform policy and decision-making at all levels.
    3. The outcomes of student learning must be defined, measured, and aligned with workforce needs.
    To achieve these outcomes (and thus improve success rates), Lumina has decided to pursue several specific strategies. I'll cite just a few of these many different strategies:
    - We will advocate for the redesign, rebranding and improvement of developmental education.
    - We will explore the development of alternative pathways to degrees and credentials.
    - We will push for smoother systems of transferring credit so students can move more easily between institutions, including from community colleges to bachelor's degree programs.
  • "Lumina defines high-quality credentials as degrees and certificates that have well-defined and transparent learning outcomes which provide clear pathways to further education and employment."
  • ...4 more annotations...
  • And—as Footnote One softly but incessantly reminds us—quality, at its core, must be a measure of what students actually learn and are able to do with the knowledge and skills they gain.
  • and yet we seem reluctant or unable to discuss higher education's true purpose: equipping students for success in life.
  • Research has already shown that higher education institutions vary significantly in the value they add to students in terms of what those students actually learn. Various tools and instruments tell us that some institutions add much more value than others, even when looking at students with similar backgrounds and abilities.
  • The idea with tuning is to take various programs within a specific discipline—chemistry, history, psychology, whatever—and agree on a set of learning outcomes that a degree in the field represents. The goal is not for the various programs to teach exactly the same thing in the same way or even for all of the programs to offer the same courses. Rather, programs can employ whatever techniques they prefer, so long as their students can demonstrate mastery of an agreed-upon body of knowledge and set of skills. To use the musical terminology, the various programs are not expected to play the same notes, but to be "tuned" to the same key.
Gary Brown

At Colleges, Assessment Satisfies Only Accreditors - Letters to the Editor - The Chroni... - 2 views

  • Some of that is due to the influence of the traditional academic freedom that faculty members have enjoyed. Some of it is ego. And some of it is lack of understanding of how it can work. There is also a huge disconnect between satisfying outside parties, like accreditors and the government, and using assessment as a quality-improvement system.
  • We are driven by regional accreditation and program-level accreditation, not by quality improvement. At our institution, we talk about assessment a lot, and do just enough to satisfy the requirements of our outside reviewers.
  • Standardized direct measures, like the Major Field Test for M.B.A. graduates?
  • ...5 more annotations...
  • The problem with the test is that it does not directly align with our program's learning outcomes and it does not yield useful information for closing the loop. So why do we use it? Because it is accepted by accreditors as a direct measure and it is less expensive and time-consuming than more useful tools.
  • Without exception, the most useful information for improving the program and student learning comes from the anecdotal and indirect information.
  • We don't have the time and the resources to do what we really want to do to continuously improve the quality of our programs and instruction. We don't have a culture of continuous improvement. We don't make changes on a regular basis, because we are trapped by the catalog publishing cycle, accreditation visits, and the entrenched misunderstanding of the purposes of assessment.
  • The institutions that use it are ones that have adequate resources to do so. The time necessary for training, whole-system involvement, and developing the programs for improvement is daunting. And it is only being used by one regional accrediting body, as far as I know.
  • Until higher education as a whole is willing to look at changing its approach to assessment, I don't think it will happen
  •  
    The challenge, and another piece of evidence that the nuances of assessment as it relates to teaching and learning remain elusive.