
Educational Analytics: group items tagged "accountability"


George Bradford

Seeking Evidence of Impact: Opportunities and Needs (EDUCAUSE Review) | EDUCAUSE

  • Conversations with CIOs and other senior IT administrators reveal a keen interest in the results of evaluation in teaching and learning to guide fiscal, policy, and strategic decision-making. Yet those same conversations reveal that this need is not being met.
  • gain a wider and shared understanding of “evidence” and “impact” in teaching and learning
  • establish a community of practice
  • provide professional-development opportunities
  • explore successful institutional and political contexts
  • establish evidence-based practice
  • The most important reason is that in the absence of data, anecdote can become the primary basis for decision-making. Rarely does that work out very well.
  • autocatalytic evaluation process—one that builds its own synergy.
  • We live by three principles: uncollected data cannot be analyzed; the numbers are helped by a brief and coherent summary; and good graphs beat tables every time.
  • Reports and testimonies from faculty and students (57%); measures of student and faculty satisfaction (50%); measures of student mastery (learning outcomes) (41%); changes in faculty teaching practice (35%); measures of student and faculty engagement (32%)
  • The survey results also indicate a need for support in undertaking impact-evaluation projects.
  • Knowing where to begin to measure the impact of technology-based innovations in teaching and learning; knowing which measurement and evaluation techniques are most appropriate; knowing the most effective way to analyze evidence
  • The challenge of persuasion is what ELI has been calling the last mile problem. There are two interrelated components to this issue: (1) influencing faculty members to improve instructional practices at the course level, and (2) providing evidence to help inform key strategic decisions at the institutional level.
  • Broadly summarized, our results reveal a disparity between the keen interest in research-based evaluation and the level of resources that are dedicated to it—prompting a grass-roots effort to support this work.
  •  
    The SEI program is working with the teaching and learning community to gather evidence of the impact of instructional innovations and current practices and to help evaluate the results. The calls for more accountability in higher education, the shrinking budgets that often force larger class sizes, and the pressures to increase degree-completion rates are all raising the stakes for colleges and universities today, especially with respect to the instructional enterprise. As resources shrink, teaching and learning is becoming the key point of accountability. The evaluation of instructional practice would thus seem to be an obvious response to such pressures, with institutions implementing systematic programs of evaluation in teaching and learning, especially of instructional innovations.
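The survey's first principle, that uncollected data cannot be analyzed, cuts both ways: once responses are collected, tallies like the evidence-type percentages quoted above are a few lines of code. The respondents and category labels below are invented for illustration, not the actual ELI survey data:

```python
from collections import Counter

# Hypothetical raw survey responses: each respondent checks the kinds of
# evidence they rely on (invented labels, not the ELI dataset).
responses = [
    {"faculty/student reports", "satisfaction", "learning outcomes"},
    {"faculty/student reports", "satisfaction"},
    {"faculty/student reports", "engagement"},
    {"learning outcomes", "teaching practice"},
]

# Count how many respondents selected each evidence type.
counts = Counter(item for r in responses for item in r)
n = len(responses)
for category, count in counts.most_common():
    print(f"{category}: {100 * count / n:.0f}%")
```

A brief, coherent summary like this (and, per the third principle, a graph built from the same counts) is usually more persuasive than the raw table of responses.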

Open Research Online - Social Learning Analytics: Five Approaches

  •  
    This paper proposes that Social Learning Analytics (SLA) can be usefully thought of as a subset of learning analytics approaches. SLA focuses on how learners build knowledge together in their cultural and social settings. In the context of online social learning, it takes into account both formal and informal educational environments, including networks and communities. The paper introduces the broad rationale for SLA by reviewing some of the key drivers that make social learning so important today. Five forms of SLA are identified, including those which are inherently social, and others which have social dimensions. The paper goes on to describe early work towards implementing these analytics on SocialLearn, an online learning space in use at the UK's Open University, and the challenges that this is raising. This work takes an iterative approach to analytics, encouraging learners to respond to and help to shape not only the analytics but also their associated recommendations.

Measuring Teacher Effectiveness - DataQualityCampaign.Org

  •  
    Measuring Teacher Effectiveness
    Significant State Data Capacity Is Required to Measure and Improve Teacher Effectiveness
    States Increasingly Focus on Improving Teacher Effectiveness: There is significant activity at the local, state, and federal levels to measure and improve teacher effectiveness, with an unprecedented focus on the use of student achievement as a primary indicator of effectiveness.
    > 23 states require that teacher evaluations include evidence of student learning in the form of student growth and/or value-added data (NCTQ, 2011).
    > 17 states and DC have adopted legislation or regulations that specifically require student achievement and/or student growth to "significantly" inform or be the primary criterion in teacher evaluations (NCTQ, 2011).
    States Need Significant Data Capacity to Do This Work: These policy changes have significant data implications.
    > The linchpin of all these efforts is that states must reliably link students and teachers in ways that capture the complex connections that exist in schools.
    > If such data are to be used for high-stakes decisions, such as hiring, firing, and tenure, they must be accepted as valid, reliable, and fair.
    > Teacher effectiveness data can be leveraged to target professional development, inform staffing assignments, tailor classroom instruction, reflect on practice, support research, and otherwise support teachers.
    Federal Policies Are Accelerating State and Local Efforts: Federal policies increasingly support states' efforts to use student achievement data to measure teacher effectiveness.
    > Various competitive grant funds, including the Race to the Top grants and the Teacher Incentive Fund, require states to implement teacher and principal evaluation systems that take student data into account.
    > States applying for NCLB waivers, including the 11 that submitted requests in November 2011, must commit to implementing teacher and principal evaluation and support systems.
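The "linchpin" requirement above, reliably linking students to teachers while capturing co-teaching and partial-year enrollment, can be sketched minimally as a weighted roster join. Everything below (roster, dosage weights, growth scores) is hypothetical illustration, not a real value-added model:

```python
from collections import defaultdict

# Hypothetical student-teacher links with a "dosage" weight that captures
# co-teaching and partial-year enrollment -- the complex connections the
# brief says states must record reliably. All values are invented.
roster = [
    {"student": "s1", "teacher": "t1", "dosage": 1.0},
    {"student": "s2", "teacher": "t1", "dosage": 0.5},  # co-taught
    {"student": "s2", "teacher": "t2", "dosage": 0.5},
]
growth = {"s1": 4.0, "s2": 2.0}  # hypothetical student growth scores

# Dosage-weighted mean growth per teacher (a crude proxy only; real
# value-added models also adjust for prior achievement and context).
totals = defaultdict(float)
weights = defaultdict(float)
for link in roster:
    w = link["dosage"]
    totals[link["teacher"]] += w * growth[link["student"]]
    weights[link["teacher"]] += w

effect = {t: totals[t] / weights[t] for t in totals}
```

The point of the sketch is the data requirement, not the statistics: without the dosage-weighted link table, the per-teacher aggregation cannot be computed at all, let alone defended as valid, reliable, and fair for high-stakes use.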

ScienceDirect - The Internet and Higher Education : A course is a course is a course: F...

  •  
    "Abstract The authors compared the underlying student response patterns to an end-of-course rating instrument for large student samples in online, blended and face-to-face courses. For each modality, the solution produced a single factor that accounted for approximately 70% of the variance. The correlations among the factors across the class formats showed that they were identical. The authors concluded that course modality does not impact the dimensionality by which students evaluate their course experiences. The inability to verify multiple dimensions for student evaluation of instruction implies that the boundaries of a typical course are beginning to dissipate. As a result, the authors concluded that end-of-course evaluations now involve a much more complex network of interactions. Highlights ► The study models student satisfaction in the online, blended, and face-to-face course modalities. ► The course models vary technology involvement. ► Image analysis produced single dimension solutions. ► The solutions were identical across modalities. Keywords: Student rating of instruction; online learning; blended learning; factor analysis; student agency"