
Learning Analytics: group items tagged e-learning


Dianne Rees

Nuts and Bolts: How to Evaluate e-Learning by Jane Bozarth : Learning Solutions Magazine

  • The linearity and causality implied within the taxonomy (for instance, the assumption that passing a test at Level 2 will result in improved job performance at Level 3) mask the reality of transferring training into measurable results. Many factors enable — or hinder — the transfer of training to on-the-job behavior change, including support from supervisors, rewards for improved performance, the culture of the work unit, issues with procedures and paperwork, and political concerns.
  • Robert Brinkerhoff takes a systems view of evaluation of training, believing it should focus on sustained performance rather than attempting to isolate the training effort:
  • learn best from the outliers
  • The method asks evaluators to: (1) identify individuals or teams that have been most successful in using some new capability or method provided through the training; (2) document the nature of the success; and (3) compare to instances of nonsuccess.
  • Stufflebeam
  • training as part of a system
  • a means of formative as well as summative evaluation.
  • Evaluating training: some alternatives to the Kirkpatrick method
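The Success Case Method steps annotated above (identify the most successful users of a new capability, document the success, compare to nonsuccess) can be sketched as a short script. This is only an illustration: the trainee records, the `sales_lift` metric, and the `success_case_split` helper are hypothetical names, not part of Brinkerhoff's published method.

```python
# Hypothetical sketch of the Success Case Method's first step:
# rank trainees by a post-training performance metric and pull out
# the extreme cases (outliers) for follow-up documentation.

def success_case_split(trainees, metric, top_n=3, bottom_n=3):
    """Return (most successful, least successful) cases by `metric`."""
    ranked = sorted(trainees, key=lambda t: t[metric], reverse=True)
    return ranked[:top_n], ranked[-bottom_n:]

# Illustrative data: change in a job-performance measure after training.
trainees = [
    {"name": "A", "sales_lift": 0.32},
    {"name": "B", "sales_lift": 0.05},
    {"name": "C", "sales_lift": 0.27},
    {"name": "D", "sales_lift": -0.02},
    {"name": "E", "sales_lift": 0.18},
]

successes, nonsuccesses = success_case_split(trainees, "sales_lift",
                                             top_n=2, bottom_n=2)
# The evaluator would then interview both groups to document what
# enabled (or blocked) the transfer of training to the job.
```

The point of the comparison step is qualitative, not statistical: the script only selects whom to study, echoing the annotation that we "learn best from the outliers".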
Vanessa Vaile

LAK11: Big Data Small Data « Viplav Baxi's Meanderings

  • which data is more appropriate - BIG or small
  • most discussion about big data centres on quantity
  • other elements you mention – implication, new models, new decision making approaches – all flow from this abundance of data.
  • Increased data quantity requires new approaches
  • Is small beautiful? Look at the following links: Big Data, Small Data; New Age of Innovation (Prahalad); So you like Big Data
  • reading on insurers, and the work done by Levitt and Dubner in Freakonomics, tells us clearly that data not earlier thought relevant or causal can be an efficient predictor
  • Secondly, strategies designed on BIG data may overpower small data strategies
  • Thirdly, BIG data also has BIG impacting factors.
  • Fourthly, actions taken on BIG data will have big consequences,
  • Lastly, if everybody, big or small, started using BIG analytics to make decisions, companies would anyway lose the competitive differentiator that analytics brings to them
  • Corresponding to the question of how big BIG needs to be, the question I have is: how small really is small?
  • defining patterns that emerge from very small pieces of data (e.g. synchronicity)
  • how tools for SNA and analysis of BIG data can apply to Learning and Knowledge Analytics
  • at the other end it embraces how small changes can cause long term variations
  • not easy to analyze the small data
  • data that is small enough not to be generalizable