Educational Analytics: Group items matching "learning" in title, tags, annotations or url

George Bradford

What Google Learned From Its Quest to Build the Perfect Team - The New York Times - 1 views

  •  
    "What Google Learned From Its Quest to Build the Perfect Team"
George Bradford

NSSE Home - 0 views

  •  
    National Survey of Student Engagement

    What is student engagement? Student engagement represents two critical features of collegiate quality. The first is the amount of time and effort students put into their studies and other educationally purposeful activities. The second is how the institution deploys its resources and organizes the curriculum and other learning opportunities to get students to participate in activities that decades of research studies show are linked to student learning.

    What does NSSE do? Through its student survey, The College Student Report, NSSE annually collects information at hundreds of four-year colleges and universities about student participation in programs and activities that institutions provide for their learning and personal development. The results provide an estimate of how undergraduates spend their time and what they gain from attending college. NSSE provides participating institutions a variety of reports that compare their students' responses with those of students at self-selected groups of comparison institutions. Comparisons are available for individual survey questions and the five NSSE Benchmarks of Effective Educational Practice. Each November, NSSE also publishes its Annual Results, which reports topical research and trends in student engagement results. NSSE researchers also present and publish research findings throughout the year.
George Bradford

Many Eyes - 0 views

  •  
    Site sections: Explore (Visualizations, Data sets, Comments, Topic centers); Participate (Create a visualization, Upload a data set, Create a topic center); Learn more (Quick start, Visualization types, About Many Eyes, Privacy, Blog).
George Bradford

Student Learning and Analytics at Michigan (SLAM) | CRLT - 0 views

  •  
    "Student Learning and Analytics at Michigan (SLAM)"
George Bradford

Office of Student Learning Assessment: Examples of Direct and Indirect Measures - Cleveland State University - 0 views

  •  
    "Examples of Direct and Indirect Measures Examples of Direct Measures of Student Learning"
George Bradford

Introducing #pLASMA: project on Learning Analytics in the Social Medi… - 0 views

  •  
    SlideShare by Caroline Haythornthwaite, Rafa Absar, and Drew Paulin.
George Bradford

University builds 'course recommendation engine' to steer students toward completion | Inside Higher Ed - 0 views

  •  
    "Recommended for You" - March 16, 2012, by Steve Kolowich. Completing assignments and sitting through exams can be stressful. But when it comes to being graded, the waiting is often the hardest part. This is perhaps most true at the end of a semester, as students wait for their instructors to reduce months of work into a series of letter grades that will stay on the books forever. But at Austin Peay State University, students do not have to wait for the end of a semester to learn their grade averages. Thanks to a new technology, pioneered by the university's provost, they do not even have to wait for the semester to start.
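
The engine described above predicts a student's likely grades before a semester starts. The article does not describe the university's actual algorithm, so the following Python example is only a rough sketch of the underlying idea: predict a grade from the most similar past students. All data, names, and the similarity rule are hypothetical.

```python
# Minimal sketch of grade prediction for course recommendation, in the spirit
# of the system described above. All data and names are hypothetical; the
# article does not describe the university's actual algorithm.
from collections import defaultdict

# Transcript records: (student_id, course_id, grade_points) -- invented data.
records = [
    ("s1", "MATH101", 4.0), ("s1", "STAT200", 3.7),
    ("s2", "MATH101", 3.0), ("s2", "STAT200", 2.7),
    ("s3", "MATH101", 3.7), ("s3", "STAT200", 3.3),
]

def predict_grade(target_grades, course, records, k=2):
    """Predict a grade in `course` as the mean grade earned by the k past
    students whose grades in shared courses are closest to the target's."""
    by_student = defaultdict(dict)
    for sid, cid, grade in records:
        by_student[sid][cid] = grade
    neighbors = []
    for grades in by_student.values():
        if course not in grades:
            continue
        shared = [c for c in grades if c in target_grades]
        if not shared:
            continue
        # Mean absolute grade gap on shared courses = distance between students.
        dist = sum(abs(grades[c] - target_grades[c]) for c in shared) / len(shared)
        neighbors.append((dist, grades[course]))
    top = sorted(neighbors)[:k]
    return sum(g for _, g in top) / len(top) if top else None

# A new student with one known grade; a recommender could suggest courses
# whose predicted grade clears a completion-friendly threshold.
print(predict_grade({"MATH101": 3.8}, "STAT200", records))  # -> 3.5
```
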
George Bradford

Mirror Solution - 0 views

  •  
    Reflective learning at work
George Bradford

Measuring Teacher Effectiveness - DataQualityCampaign.Org - 0 views

  •  
    Measuring Teacher Effectiveness
    Significant State Data Capacity is Required to Measure and Improve Teacher Effectiveness

    States Increasingly Focus on Improving Teacher Effectiveness: There is significant activity at the local, state, and federal levels to measure and improve teacher effectiveness, with an unprecedented focus on the use of student achievement as a primary indicator of effectiveness.
    > 23 states require that teacher evaluations include evidence of student learning in the form of student growth and/or value-added data (NCTQ, 2011).
    > 17 states and DC have adopted legislation or regulations that specifically require student achievement and/or student growth to "significantly" inform or be the primary criterion in teacher evaluations (NCTQ, 2011).

    States Need Significant Data Capacity to Do This Work: These policy changes have significant data implications.
    > The linchpin of all these efforts is that states must reliably link students and teachers in ways that capture the complex connections that exist in schools.
    > If such data is to be used for high-stakes decisions, such as hiring, firing, and tenure, it must be accepted as valid, reliable, and fair.
    > Teacher effectiveness data can be leveraged to target professional development, inform staffing assignments, tailor classroom instruction, reflect on practice, support research, and otherwise support teachers.

    Federal Policies Are Accelerating State and Local Efforts: Federal policies increasingly support states' efforts to use student achievement data to measure teacher effectiveness.
    > Various competitive grant funds, including the Race to the Top grants and the Teacher Incentive Fund, require states to implement teacher and principal evaluation systems that take student data into account.
    > States applying for NCLB waivers, including the 11 that submitted requests in November 2011, must commit to implementing teacher and principal evaluation and support systems.
    > P
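
The "value-added data" cited above comes from models that compare each student's actual achievement to a predicted score and credit the average residual to the teacher. Real value-added models add student covariates, multiple years of data, and statistical shrinkage; the sketch below shows only that core residual calculation, with invented data.

```python
# Minimal sketch of a value-added style growth score. Real models add student
# covariates, multiple years, and shrinkage; data and names here are invented.
from collections import defaultdict

# (student, teacher, prior_score, current_score) -- invented records.
rows = [
    ("a", "T1", 50, 62), ("b", "T1", 70, 78),
    ("c", "T2", 55, 57), ("d", "T2", 65, 66),
]

# 1. Fit a one-variable least-squares line: current ~ prior.
n = len(rows)
mean_x = sum(r[2] for r in rows) / n
mean_y = sum(r[3] for r in rows) / n
slope = (sum((r[2] - mean_x) * (r[3] - mean_y) for r in rows)
         / sum((r[2] - mean_x) ** 2 for r in rows))
intercept = mean_y - slope * mean_x

# 2. A teacher's value-added score = mean residual (actual - predicted)
#    across that teacher's students.
residuals = defaultdict(list)
for _, teacher, prior, current in rows:
    residuals[teacher].append(current - (intercept + slope * prior))
for teacher, res in sorted(residuals.items()):
    print(teacher, round(sum(res) / len(res), 2))  # T1 4.25, T2 -4.25
```
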
George Bradford

SpringerLink - Abstract - Dr. Fox Rocks: Using Data-mining Techniques to Examine Student Ratings of Instruction - 0 views

  •  
    Abstract Few traditions in higher education evoke more controversy, ambivalence, criticism, and, at the same time, support than student evaluation of instruction (SEI). Ostensibly, results from these end-of-course survey instruments serve two main functions: they provide instructors with formative input for improving their teaching, and they serve as the basis for summative profiles of professors' effectiveness through the eyes of their students. In the academy, instructor evaluations also can play out in the high-stakes environments of tenure, promotion, and merit salary increases, making this information particularly important to the professional lives of faculty members. At the research level, the volume of the literature for student ratings impresses even the most casual observer with well over 2,000 studies referenced in the Education Resources Information Center (ERIC) alone (Centra, 2003) and an untold number of additional studies published in educational, psychological, psychometric, and discipline-related journals. There have been numerous attempts at summarizing this work (Algozzine et al., 2004; Gump, 2007; Marsh & Roche, 1997; Pounder, 2007; Wachtel, 1998). Student ratings gained such notoriety that in November 1997 the American Psychologist devoted an entire issue to the topic (Greenwald, 1997). The issue included student ratings articles focusing on stability and reliability, validity, dimensionality, usefulness for improving teaching and learning, and sensitivity to biasing factors, such as the Dr. Fox phenomenon that describes eliciting high student ratings with strategies that reflect little or no relationship to effective teaching practice (Ware & Williams, 1975; Williams & Ware, 1976, 1977).
George Bradford

Formation and Learning Analytics? - 0 views

  •  
    Simon Buckingham Shum
George Bradford

Semantic Technologies in Learning Environments - Promises and Challe… - 0 views

  •  
    Dragan Gasevic
George Bradford

Trialling the CLAToolkit - Dashboards and Adventures with IFN614 - 0 views

  •  
    CLA Toolkit - Learning Analytics
George Bradford

UTS Case Study - Ascilite2015 Learning Analytics Workshop - 0 views

  •  
    Simon Shum - Introducing Analytics at UTS - Academic Writing Analytics (slideshare)
bcby c

ELI 2012 Sensemaking Analytics - 0 views

  •  
    George Siemens's PPT at ELI focus group
George Bradford

Using Big Data to Predict Online Student Success | Inside Higher Ed - 0 views

  • Researchers have created a database that measures 33 variables for the online coursework of 640,000 students – a whopping 3 million course-level records.
  • Project Participants American Public University System Community College System of Colorado Rio Salado College University of Hawaii System University of Illinois-Springfield University of Phoenix
  • “What the data seem to suggest, however, is that for students who seem to have a high propensity of dropping out of an online course-based program, the fewer courses they take initially, the better-off they are.”
  • Phil Ice, vice president of research and development for the American Public University System and the project’s lead investigator.
  • Predictive Analytics Reporting Framework
  • Rio Salado, for example, has used the database to create a student performance tracking system.
  • The two-year college, which is based in Arizona, has a particularly strong online presence for a community college – 43,000 of its students are enrolled in online programs. The new tracking system allows instructors to see a red, yellow or green light for each student’s performance. And students can see their own tracking lights.
  • It measures student engagement through their Web interactions, how often they look at textbooks and whether they respond to feedback from instructors, all in addition to their performance on coursework.
  • The data set has the potential to give institutions sophisticated information about small subsets of students – such as which academic programs are best suited for a 25-year-old male Latino with strength in mathematics
  •  
    New students are more likely to drop out of online colleges if they take full courseloads than if they enroll part time, according to findings from a research project that is challenging conventional wisdom about student success. But perhaps more important than that potentially game-changing nugget, researchers said, is how the project has chipped away at skepticism in higher education about the power of "big data." Researchers have created a database that measures 33 variables for the online coursework of 640,000 students - a whopping 3 million course-level records. While the work is far from complete, the variables help track student performance and retention across a broad range of demographic factors. The data can show what works at a specific type of institution, and what doesn't. That sort of predictive analytics has long been embraced by corporations, but not so much by the academy. The ongoing data-mining effort, which was kicked off last year with a $1 million grant from the Bill and Melinda Gates Foundation, is being led by WCET, the WICHE Cooperative for Educational Technologies.
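
As a toy illustration of the red/yellow/green tracking lights described above: the PAR framework's actual 33-variable model is not given in the article, so the engagement signals, weights, and thresholds below are invented for the sketch.

```python
# Toy version of a red/yellow/green tracking light. The PAR framework's real
# 33-variable model is not public in the article; signals, weights, and
# thresholds here are invented for illustration.

def risk_score(logins_per_week, textbook_opens, responded_to_feedback):
    """Return a crude 0..1 risk of non-completion (higher = riskier)."""
    score = 1.0
    score -= 0.10 * min(logins_per_week, 5)   # frequent logins lower risk
    score -= 0.05 * min(textbook_opens, 4)    # engagement with materials
    score -= 0.20 if responded_to_feedback else 0.0
    return max(score, 0.0)

def tracking_light(score):
    """Map a risk score onto the light an instructor (or student) sees."""
    if score >= 0.6:
        return "red"
    if score >= 0.3:
        return "yellow"
    return "green"

s = risk_score(logins_per_week=2, textbook_opens=1, responded_to_feedback=False)
print(s, tracking_light(s))  # -> 0.75 red
```
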
George Bradford

From the Semantic Web to social machines: A research challenge for AI on the World Wide Web (doi: 10.1016/j.artint.2009.11.010) - Artificial Intelligence | ScienceDirect.com - 0 views

  •  
    From the Semantic Web to social machines: A research challenge for AI on the World Wide Web
    Jim Hendler, Tim Berners-Lee

    Abstract: The advent of social computing on the Web has led to a new generation of Web applications that are powerful and world-changing. However, we argue that we are just at the beginning of this age of "social machines" and that their continued evolution and growth requires the cooperation of Web and AI researchers. In this paper, we show how the growing Semantic Web provides necessary support for these technologies, outline the challenges we see in bringing the technology to the next level, and propose some starting places for the research.
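
For readers unfamiliar with the Semantic Web machinery the abstract assumes, here is a minimal sketch of machine-readable social data using Python's rdflib library and the FOAF vocabulary. It illustrates the kind of representation the authors build on; it is not code from the paper, and the example URIs are hypothetical.

```python
# Minimal sketch of machine-readable social data with rdflib and FOAF.
# Illustrates the Semantic Web representation the paper builds on; it is
# not code from the paper, and the example URIs are hypothetical.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
alice = URIRef("http://example.org/alice")
bob = URIRef("http://example.org/bob")

g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((bob, RDF.type, FOAF.Person))
g.add((bob, FOAF.name, Literal("Bob")))
g.add((alice, FOAF.knows, bob))  # the social link applications can traverse

# Any FOAF-aware application can now answer: whom does Alice know?
for _, _, friend in g.triples((alice, FOAF.knows, None)):
    print(g.value(friend, FOAF.name))  # -> Bob
```
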