
Educational Analytics: Group items tagged "evaluation"


George Bradford

Seeking Evidence of Impact: Opportunities and Needs (EDUCAUSE Review) | EDUCAUSE - 0 views

  • Conversations with CIOs and other senior IT administrators reveal a keen interest in the results of evaluation in teaching and learning to guide fiscal, policy, and strategic decision-making. Yet those same conversations reveal that this need is not being met.
  • gain a wider and shared understanding of “evidence” and “impact” in teaching and learning
  • establish a community of practice
  • provide professional-development opportunities
  • explore successful institutional and political contexts
  • establish evidence-based practice
  • The most important reason is that in the absence of data, anecdote can become the primary basis for decision-making. Rarely does that work out very well.
  • autocatalytic evaluation process—one that builds its own synergy.
  • We live by three principles: uncollected data cannot be analyzed; the numbers are helped by a brief and coherent summary; and good graphs beat tables every time.
  • Survey respondents reported drawing on these types of evidence (plotted in the sketch after this list):
    - Reports and testimonies from faculty and students (57%)
    - Measures of student and faculty satisfaction (50%)
    - Measures of student mastery (learning outcomes) (41%)
    - Changes in faculty teaching practice (35%)
    - Measures of student and faculty engagement (32%)
  • The survey results also indicate a need for support in undertaking impact-evaluation projects.
  • Knowing where to begin to measure the impact of technology-based innovations in teaching and learning
  • Knowing which measurement and evaluation techniques are most appropriate
  • Knowing the most effective way to analyze evidence
  • The challenge of persuasion is what ELI has been calling the last mile problem. There are two interrelated components to this issue: (1) influencing faculty members to improve instructional practices at the course level, and (2) providing evidence to help inform key strategic decisions at the institutional level.
  • Broadly summarized, our results reveal a disparity between the keen interest in research-based evaluation and the level of resources that are dedicated to it—prompting a grass-roots effort to support this work.
  •  
    The SEI program is working with the teaching and learning community to gather evidence of the impact of instructional innovations and current practices and to help evaluate the results. The calls for more accountability in higher education, the shrinking budgets that often force larger class sizes, and the pressures to increase degree-completion rates are all raising the stakes for colleges and universities today, especially with respect to the instructional enterprise. As resources shrink, teaching and learning is becoming the key point of accountability. The evaluation of instructional practice would thus seem to be an obvious response to such pressures, with institutions implementing systematic programs of evaluation in teaching and learning, especially of instructional innovations.
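A minimal sketch of the "good graphs beat tables" principle quoted above, using the evidence-type percentages from the survey results (Python with matplotlib assumed; the abbreviated axis labels are mine):

    # Plot the evidence-type percentages quoted in the survey results above.
    import matplotlib.pyplot as plt

    evidence = {
        "Faculty/student reports": 57,
        "Satisfaction measures": 50,
        "Student mastery (outcomes)": 41,
        "Teaching practice changes": 35,
        "Engagement measures": 32,
    }

    fig, ax = plt.subplots(figsize=(7, 3))
    ax.barh(list(evidence), list(evidence.values()))
    ax.invert_yaxis()  # largest share on top
    ax.set_xlabel("Respondents citing this evidence (%)")
    ax.set_title("Evidence used to gauge impact (ELI survey)")
    fig.tight_layout()
    plt.show()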
George Bradford

Program Evaluation Standards « Joint Committee on Standards for Educational E... - 0 views

  •  
    "   Welcome to the Program Evaluation Standards, 3rd Edition   Standards Names and Statements Errata Sheet for the book   After seven years of systematic effort and much study, the 3rd edition of the Program Evaluation Standards was published this fall by Sage Publishers: http://www.sagepub.com/booksProdDesc.nav?prodId=Book230597&_requestid=255617. The development process relied on formal and informal needs assessments, reviews of existing scholarship, and the involvement of more than 400 stakeholders in national and international reviews, field trials, and national hearings. It's the first revision of the standards in 17 years. This third edition is similar to the previous two editions (1981, 1994) in many respects, for example, the book is organized into the same four dimensions of evaluation quality (utility, feasibility, propriety, and accuracy). It also still includes the popular and useful "Functional Table of Standards," a glossary, extensive documentation, information about how to apply the standards, and numerous case applications."
George Bradford

QUT | Learning and Teaching Unit | REFRAME - 0 views

  •  
    REFRAME is a university-wide project fundamentally reconceptualising QUT's overall approach to evaluating learning and teaching. Our aim is to develop a sophisticated risk-based system to gather, analyse and respond to data, along with a broader set of user-centered resources. The objective is to provide individuals and teams with the tools, support and reporting they need to meaningfully reflect upon, review and improve teaching, student learning and the curriculum. The approach will be informed by feedback from the university community, practices at other institutions and the literature, and will, as far as possible, be 'future-proofed' through awareness of emergent evaluation trends and tools. Central to REFRAME is consideration of the purpose of evaluation and the features that a future approach should include.
George Bradford

Measuring Teacher Effectiveness - DataQualityCampaign.Org - 0 views

  •  
    Measuring Teacher Effectiveness
    Significant state data capacity is required to measure and improve teacher effectiveness.
    States increasingly focus on improving teacher effectiveness: there is significant activity at the local, state, and federal levels to measure and improve teacher effectiveness, with an unprecedented focus on the use of student achievement as a primary indicator of effectiveness.
    > 23 states require that teacher evaluations include evidence of student learning in the form of student growth and/or value-added data (NCTQ, 2011).
    > 17 states and DC have adopted legislation or regulations that specifically require student achievement and/or student growth to "significantly" inform or be the primary criterion in teacher evaluations (NCTQ, 2011).
    States need significant data capacity to do this work: these policy changes have significant data implications.
    > The linchpin of all these efforts is that states must reliably link students and teachers in ways that capture the complex connections that exist in schools (a minimal sketch of such a linkage follows this excerpt).
    > If such data are to be used for high-stakes decisions, such as hiring, firing, and tenure, they must be accepted as valid, reliable, and fair.
    > Teacher effectiveness data can be leveraged to target professional development, inform staffing assignments, tailor classroom instruction, reflect on practice, support research, and otherwise support teachers.
    Federal policies are accelerating state and local efforts: federal policies increasingly support states' efforts to use student achievement data to measure teacher effectiveness.
    > Various competitive grant funds, including the Race to the Top grants and the Teacher Incentive Fund, require states to implement teacher and principal evaluation systems that take student data into account.
    > States applying for NCLB waivers, including the 11 that submitted requests in November 2011, must commit to implementing teacher and principal evaluation and support systems.
    > P
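To make the student-teacher linkage concrete, here is an illustrative sketch of the simplest possible version: joining a roster to test scores and summarizing growth per teacher. All table and column names are hypothetical, and real value-added models add much more (prior-achievement controls, shrinkage, co-teaching weights).

    # Illustrative only: link students to teachers and compute mean score
    # growth per teacher. Real value-added models are far more elaborate.
    import pandas as pd

    roster = pd.DataFrame({            # hypothetical student-teacher links
        "student_id": [1, 2, 3, 4],
        "teacher_id": ["T1", "T1", "T2", "T2"],
    })
    scores = pd.DataFrame({            # hypothetical fall and spring scores
        "student_id": [1, 2, 3, 4],
        "score_fall":   [610, 580, 640, 600],
        "score_spring": [650, 630, 655, 640],
    })

    linked = roster.merge(scores, on="student_id")   # the critical linkage
    linked["growth"] = linked["score_spring"] - linked["score_fall"]
    print(linked.groupby("teacher_id")["growth"].agg(["mean", "count"]))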
George Bradford

ScienceDirect - The Internet and Higher Education : A course is a course is a course: F... - 0 views

  •  
    "Abstract The authors compared the underlying student response patterns to an end-of-course rating instrument for large student samples in online, blended and face-to-face courses. For each modality, the solution produced a single factor that accounted for approximately 70% of the variance. The correlations among the factors across the class formats showed that they were identical. The authors concluded that course modality does not impact the dimensionality by which students evaluate their course experiences. The inability to verify multiple dimensions for student evaluation of instruction implies that the boundaries of a typical course are beginning to dissipate. As a result, the authors concluded that end-of-course evaluations now involve a much more complex network of interactions. Highlights ► The study models student satisfaction in the online, blended, and face-to-face course modalities. ► The course models vary technology involvement. ► Image analysis produced single dimension solutions. ► The solutions were identical across modalities. Keywords: Student rating of instruction; online learning; blended learning; factor analysis; student agency"
George Bradford

SpringerLink - Abstract - Dr. Fox Rocks: Using Data-mining Techniques to Examine Studen... - 0 views

  •  
    Abstract
    Few traditions in higher education evoke more controversy, ambivalence, criticism, and, at the same time, support than student evaluation of instruction (SEI). Ostensibly, results from these end-of-course survey instruments serve two main functions: they provide instructors with formative input for improving their teaching, and they serve as the basis for summative profiles of professors' effectiveness through the eyes of their students. In the academy, instructor evaluations also can play out in the high-stakes environments of tenure, promotion, and merit salary increases, making this information particularly important to the professional lives of faculty members. At the research level, the volume of the literature for student ratings impresses even the most casual observer, with well over 2,000 studies referenced in the Education Resources Information Center (ERIC) alone (Centra, 2003) and an untold number of additional studies published in educational, psychological, psychometric, and discipline-related journals. There have been numerous attempts at summarizing this work (Algozzine et al., 2004; Gump, 2007; Marsh & Roche, 1997; Pounder, 2007; Wachtel, 1998). Student ratings gained such notoriety that in November 1997 the American Psychologist devoted an entire issue to the topic (Greenwald, 1997). The issue included student ratings articles focusing on stability and reliability, validity, dimensionality, usefulness for improving teaching and learning, and sensitivity to biasing factors, such as the Dr. Fox phenomenon that describes eliciting high student ratings with strategies that reflect little or no relationship to effective teaching practice (Ware & Williams, 1975; Williams & Ware, 1976, 1977).
George Bradford

Open Research Online - Learning analytics to identify exploratory dialogue within synch... - 0 views

  •  
    While generic web analytics tend to focus on easily harvested quantitative data, Learning Analytics will often seek qualitative understanding of the context and meaning of this information. This is critical in the case of dialogue, which may be employed to share knowledge and jointly construct understandings, but which also involves many superficial exchanges. Previous studies have validated a particular pattern of "exploratory dialogue" in learning environments to signify sharing, challenge, evaluation and careful consideration by participants. This study investigates the use of sociocultural discourse analysis to analyse synchronous text chat during an online conference. Key words and phrases indicative of exploratory dialogue were identified in these exchanges, and peaks of exploratory dialogue were associated with periods set aside for discussion and keynote speakers. Fewer individuals posted at these times, but meaningful discussion outweighed trivial exchanges. If further analysis confirms the validity of these markers as learning analytics, they could be used by recommendation engines to support learners and teachers in locating dialogue exchanges where deeper learning appears to be taking place.
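A minimal sketch of the marker-phrase approach described above: flag chat messages containing indicators of exploratory dialogue and count them per time window to locate the peaks. The phrase list here is a hypothetical stand-in for the validated markers used in the study.

    # Flag exploratory-dialogue messages and count them per time window.
    from collections import Counter

    MARKERS = ("because", "i think", "what if", "do you agree", "for example")

    def is_exploratory(message: str) -> bool:
        """True if the message contains any exploratory-dialogue marker."""
        text = message.lower()
        return any(marker in text for marker in MARKERS)

    chat = [  # (minute into session, message) - invented sample data
        (2, "hi everyone"),
        (14, "I think this works because the cohorts differ"),
        (15, "what if we compared the two conference sessions?"),
        (31, "lol"),
    ]

    peaks = Counter(minute // 10 for minute, msg in chat if is_exploratory(msg))
    print(peaks)  # exploratory messages per 10-minute window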
George Bradford

About | SNAPP - Social Networks Adapting Pedagogical Practice - 3 views

  •  
    "The Social Networks Adapting Pedagogical Practice (SNAPP) tool performs real-time social network analysis and visualization of discussion forum activity within popular commercial and open source Learning Management Systems (LMS). SNAPP essentially serves as a diagnostic instrument, allowing teaching staff to evaluate student behavioral patterns against learning activity design objectives and intervene as required a timely manner. Valuable interaction data is stored within a discussion forum but from the default threaded display of messages it is difficult to determine the level and direction of activity between participants. SNAPP infers relationship ties from the post-reply data and renders a social network diagram below the forum thread. The social network visualization can be filtered based upon user activity and social network data can be exported for further analysis in NetDraw. SNAPP integrates seamlessly with a variety of Learning Management Systems (Blackboard, Moodle and Desire2Learn) and must be triggered while a forum thread is displayed in a Web browser."