
CTLT and Friends: Group items tagged "developer"


Gary Brown

Discussion: Higher Education Teaching and Learning | LinkedIn - 2 views

  • Do you have ideas or examples of good practice of working with employers to promote workforce development? UK universities and colleges are under pressure to do "employer engagement" and some are finding it really difficult. This is sometimes due to the university administrative systems not welcoming non-traditional students, and sometimes because we use "university speak" rather than "employer speak". All ideas very welcome. Thanks.
  •  
    We should respond to this query with examples from a few of our programs--and write a baseball card or two in the process.
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • ...13 more annotations...
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university are gathering data about how their students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased--there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: Of what use is the VSA?
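The "no statistically significant correlation" finding above rests on a paired significance test. As a rough sketch of how such a comparison might be run--with invented paired scores, not UC's actual data--a permutation test needs nothing beyond NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired scores for the same students (illustrative only)
cla = np.array([1050.0, 1120, 980, 1200, 1010, 1150, 1090, 960])
rubric = np.array([3.2, 2.8, 3.5, 3.0, 2.9, 3.4, 2.7, 3.1])

# Observed Pearson correlation between the two assessments
r_obs = np.corrcoef(cla, rubric)[0, 1]

# Permutation test: shuffle one variable to estimate how often a
# correlation this large would arise if the scores were unrelated.
perm_r = np.array([
    np.corrcoef(cla, rng.permutation(rubric))[0, 1]
    for _ in range(10_000)
])
p = np.mean(np.abs(perm_r) >= abs(r_obs))
print(f"r = {r_obs:.2f}, permutation p = {p:.3f}")
```

A high p here would mean the data cannot distinguish the observed correlation from chance--which is a statement about the two instruments, not about either one's usefulness on its own.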
Joshua Yeidel

THINK Global School Blog - 3 views

  •  
    "A recent experiment we did asked the question: What happens if you combine lessons from web 2.0 and social media to the process of developing a rubric? The result? We've built what we call "Social Rubrics". Essentially this tool facilitates the process of building a rubric for teachers (and students) in a much more open and collaborative way." A plug-in for Elgg.
Nils Peterson

News & Broadcast - World Bank Frees Up Development Data - 0 views

  • April 20, 2010—The World Bank Group said today it will offer free access to more than 2,000 financial, business, health, economic and human development statistics that had mostly been available only to paying subscribers.
  • Hans Rosling, Gapminder Foundation co-founder and vigorous advocate of open data at the World Bank, said, “It’s the right thing to do, because it will foster innovation. That is the most important thing.” He said he hoped the move would inspire more tools for visualizing data and set an example for other international institutions.
  • The new website at data.worldbank.org offers full access to data from 209 countries, with some of the data going back 50 years. Users will be able to download entire datasets for a particular country or indicator, quickly access raw data, click a button to comment on the data, email and share data with social media sites, says Neil Fantom, a senior statistician at the World Bank.
Joshua Yeidel

Performance Assessment | The Alternative to High Stakes Testing - 0 views

  •  
    " The New York Performance Standards Consortium represents 28 schools across New York State. Formed in 1997, the Consortium opposes high stakes tests arguing that "one size does not fit all." Despite skepticism that an alternative to high stakes tests could work, the New York Performance Standards Consortium has done just that...developed an assessment system that leads to quality teaching, that enhances rather than compromises our students' education. Consortium school graduates go on to college and are successful."
Gary Brown

Educational Malpractice: Making Colleges Accountable - Commentary - The Chronicle of Hi... - 0 views

  • It is crucial that we also develop a wider and deeper body of scientifically valid higher-learning theory. The boom years actually put colleges behind elementary and secondary schools in the development of learning science: how the brain functions, how students learn, what teaching tools work best, how to help all students—not just those who are already academically accomplished—succeed, and the like. I hear calls everywhere for better teaching in higher education, but that is hard to accomplish when the science of higher learning remains relatively primitive.
Joshua Yeidel

Google Fixes IE6 with Chrome Frame - 0 views

  •  
    Chrome Frame is a new open-source product from Google that promises to answer web developer dreams. It's a free plug-in for IE6, IE7 and IE8 that turns Internet Explorer into Google's Chrome browser!
Theron DesRosier

Come for the Content, Stay for the Community | Academic Commons - 0 views

  •  
    The Evolution of a Digital Repository and Social Networking Tool for Inorganic Chemistry From Post: "It is said that teaching is a lonely profession. In higher education, a sense of isolation can permeate both teaching and research, especially for academics at primarily undergraduate institutions (PUIs). In these times of doing more with less, new digital communication tools may greatly attenuate this problem--for free. Our group of inorganic chemists from PUIs, together with technologist partners, have built the Virtual Inorganic Pedagogical Electronic Resource Web site (VIPEr, http://www.ionicviper.org) to share teaching materials and ideas and build a sense of community among inorganic chemistry educators. As members of the leadership council of VIPEr, we develop and administer the Web site and reach out to potential users. "
Joshua Yeidel

Educating the Net Generation : The University of Melbourne - 0 views

  •  
    Educating the Net Generation is a collaborative project involving the University of Melbourne, the University of Wollongong, and Charles Sturt University. The project, funded by the Australian Learning and Teaching Council, began in June 2006. It involved an investigation into students' and teachers' use of new technologies and the development of eight case studies in which emerging technologies were implemented in learning settings across the three participating universities.
Theron DesRosier

www.courseportflio.org - an international repository for documenting student learning - 0 views

  •  
    Home page: "The Peer Review of Teaching Project (PRTP) provides faculty with a structured and practical model that combines inquiry into the intellectual work of a course, careful investigation of student understanding and performance, and faculty reflection on teaching effectiveness. Begun in 1994, the PRTP has engaged hundreds of faculty members from numerous universities. In 2005, the project was awarded a TIAA-CREF Theodore M. Hesburgh Award Certificate of Excellence in recognition of it being an exceptional faculty development program designed to enhance undergraduate student achievement. "
Joshua Yeidel

Joel Oleson's Blog - SharePoint Land : File Servers and SharePoint Doc Libraries... To... - 0 views

  •  
    A list of arguments against SharePoint as a file server, with rebuttals from an MS SharePoint developer. The benefits Joel points to are real, but his handwaving about "it does require training" actually helps the other side of the argument.
Gary Brown

Matthew Lombard - 0 views

  • 5. Which measure(s) of intercoder reliability should researchers use? There are literally dozens of different measures, or indices, of intercoder reliability. Popping (1988) identified 39 different "agreement indices" for coding nominal categories, which excludes several techniques for interval and ratio level data. But only a handful of techniques are widely used. In communication the most widely used indices are: percent agreement, Holsti's method, Scott's pi (π), Cohen's kappa (κ), and Krippendorff's alpha (α). Just some of the indices proposed, and in some cases widely used, in other fields are Perreault and Leigh's (1989) Ir measure; Tinsley and Weiss's (1975) T index; Bennett, Alpert, and Goldstein's (1954) S index; Lin's (1989) concordance coefficient; Hughes and Garrett's (1990) approach based on Generalizability Theory; and Rust and Cooil's (1994) approach based on "Proportional Reduction in Loss" (PRL). It would be nice if there were one universally accepted index of intercoder reliability. But despite all the effort that scholars, methodologists, and statisticians have devoted to developing and testing indices, there is no consensus on a single "best" one. While there are several recommendations for Cohen's kappa (e.g., Dewey (1983) argued that despite its drawbacks, kappa should still be "the measure of choice"), and this index appears to be commonly used in research that involves the coding of behavior (Bakeman, 2000), others (notably Krippendorff, 1978, 1987) have argued that its characteristics make it inappropriate as a measure of intercoder agreement.
  •  
    for our formalizing of assessment work
  •  
    inter-rater reliability
Nils Peterson

Stolen Knowledge - 3 views

  • This is certainly not a trivial challenge--particularly for schools. The workplace, where our work has been concentrated, is perhaps the easiest place to design because, despite the inevitable contradictions and conflict, it is rich with inherently authentic practice--with a social periphery that, as Orr's (1990) or Shaiken's (1990) work shows, can even supersede attempts to impoverish understanding. Consequently, people often learn complex work skills despite didactic practices that are deliberately designed to deskill. Workplace designers (and managers) should be developing technology to honor that learning ability, not to circumvent it.
    • Nils Peterson
       
      Another John Seely Brown piece on Legitimate Peripheral Participation with interesting implications for our needs of professional development as OAI evolves. It also leads me back to Lave and Wenger so that I stop crediting JSB with the term.
Peggy Collins

Northwestern U Creates Integration Utility To Link Blackboard and Google Apps -- Campus... - 1 views

  •  
    Users at Northwestern University will be able to log into both Blackboard Learn and Google Apps with a single signon thanks to the efforts of the institution's IT development team. The code created by the team as a Blackboard Building Block and named Bboogle has also been released as open source to let other institutions use or build on the technology at no cost.
Joshua Yeidel

European Journal of Open, Distance and E-Learning - 0 views

  •  
    "This paper describes the implementation of a quantitative cost effectiveness analyzer for Web-supported academic instruction that was developed in Tel Aviv University during a long term study."
  •  
    The king of indirect measures, putting the "count" in accountability via web log analysis.
Gary Brown

News: Defining Accountability - Inside Higher Ed - 0 views

  • they should do so in ways that reinforce the behaviors they want to see -- and avoid the kinds of perverse incentives that are so evident in many policies today.
  • This is especially true, several speakers argued, on the thorniest of higher education accountability questions -- those related to improving student outcomes.
  • Oh, and one or two people actually talked about how nice it would be if policy makers still envisioned college as a place where people learn about citizenship or just become educated for education's sake.)
  • ...6 more annotations...
  • only if the information they seek to collect is intelligently framed, which the most widely used current measure -- graduation rates -- is not
  • "work force ready"
  • "Accountability is not quite as straightforward as we think," said Rhoades, who described himself as "not a 'just say no' guy" about accountability. "It's not a question of whether [colleges and faculty should be held accountable], but how, and by whom," he said. "It's about who's developing the measures, and what behaviors do they encourage?"
  • federal government needs to be the objective protector of taxpayers' dollars,"
  • Judith Eaton, president of the Council for Higher Education Accreditation, said that government regulation would be a major mistake, but said that accreditors needed to come to agreement on "community-driven, outcomes-based standards" to which colleges should be held.
  • But while they complain when policy makers seek to develop measures that compare one institution against another, colleges "keep lists of peers with which they compare themselves" on many fronts, Miller said.
  •  
    High-level debates again.
Gary Brown

At Colleges, Assessment Satisfies Only Accreditors - Letters to the Editor - The Chroni... - 2 views

  • Some of that is due to the influence of the traditional academic freedom that faculty members have enjoyed. Some of it is ego. And some of it is lack of understanding of how it can work. There is also a huge disconnect between satisfying outside parties, like accreditors and the government, and using assessment as a quality-improvement system.
  • We are driven by regional accreditation and program-level accreditation, not by quality improvement. At our institution, we talk about assessment a lot, and do just enough to satisfy the requirements of our outside reviewers.
  • Standardized direct measures, like the Major Field Test for M.B.A. graduates?
  • ...5 more annotations...
  • The problem with the test is that it does not directly align with our program's learning outcomes and it does not yield useful information for closing the loop. So why do we use it? Because it is accepted by accreditors as a direct measure and it is less expensive and time-consuming than more useful tools.
  • Without exception, the most useful information for improving the program and student learning comes from the anecdotal and indirect information.
  • We don't have the time and the resources to do what we really want to do to continuously improve the quality of our programs and instruction. We don't have a culture of continuous improvement. We don't make changes on a regular basis, because we are trapped by the catalog publishing cycle, accreditation visits, and the entrenched misunderstanding of the purposes of assessment.
  • The institutions that use it are ones that have adequate resources to do so. The time necessary for training, whole-system involvement, and developing the programs for improvement is daunting. And it is only being used by one regional accrediting body, as far as I know.
  • Until higher education as a whole is willing to look at changing its approach to assessment, I don't think it will happen
  •  
    The challenge, and another piece of evidence that the nuances of assessment as it relates to teaching and learning remain elusive.
Kimberly Green

http://sites.google.com/site/podnetwork/ - 0 views

  •  
    POD's wiki for sharing and discussion
Gary Brown

The Chimera of College Brands - Commentary - The Chronicle of Higher Education - 1 views

  • What you get from a college, by contrast, varies wildly from department to department, professor to professor, and course to course. The idea implicit in college brands—that every course reflects certain institutional values and standards—is mostly a fraud. In reality, there are both great and terrible courses at the most esteemed and at the most denigrated institutions.
  • With a grant from the nonprofit Lumina Foundation for Education, physics and history professors from a range of Utah two- and four-year institutions are applying the "tuning" methods developed as part of the sweeping Bologna Process reforms in Europe.
  • The group also created "employability maps" by surveying employers of recent physics graduates—including General Electric, Simco Electronics, and the Air Force—to find out what knowledge and skills are needed for successful science careers.
  • ...3 more annotations...
  • If a student finishes and can't do what's advertised, they'll say, 'I've been shortchanged.'
  • Kathryn MacKay, an associate professor of history at Weber State University, drew on recent work from the American Historical Association to define learning goals in historical knowledge, thinking, and skills.
  • In the immediate future, as the higher-education market continues to globalize and the allure of prestige continues to grow, the value of university brands is likely to rise. But at some point, the countervailing forces of empiricism will begin to take hold. The openness inherent to tuning and other, similar processes will make plain that college courses do not vary in quality in anything like the way that archaic, prestige- and money-driven brands imply. Once you've defined the goals, you can prove what everyone knows but few want to admit: From an educational standpoint, institutional brands are largely an illusion for which students routinely overpay.
  •  
    The argument for external stakeholders is underscored, among other implications.
Theron DesRosier

performance.learning.productivity: ID - Instructional Design or Interactivity Design in... - 1 views

  • The vast majority of structured learning is content-rich and interaction-poor. That’s understandable in the context of a 20th century mindset and how learning professionals have been taught to develop ‘learning’ events. But it simply isn’t appropriate for today’s world.
  • Dr Ebbinghaus’ experiment revealed we suffer an exponential ‘forgetting curve’ and that about 50% of context-free information is lost in the first hour after acquisition if there is no opportunity to reinforce it with practice.
  • The need to become Interactivity Designers. That’s what they need to do.
  • ...1 more annotation...
  • We need designers who understand that learning comes from experience, practice, conversations and reflection, and are prepared to move away from massaging content into what they see as good instructional design. Designers need to get off the content bus and start thinking about, using, designing and exploiting learning environments full of experiences and interactivity.
  •  
    "Dr Ebbinghaus' experiment revealed we suffer an exponential 'forgetting curve' and that about 50% of context-free information is lost in the first hour after acquisition if there is no opportunity to reinforce it with practice."
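The Ebbinghaus claim quoted above can be sketched as simple exponential decay, calibrated so that retention falls to 50% after one hour. Both the functional form and the calibration are illustrative assumptions here, not Ebbinghaus's exact fit:

```python
import math

# Forgetting curve R(t) = exp(-t / s), with the stability constant s
# chosen so that R(1 hour) = 0.5, matching the "50% lost in the
# first hour" figure from the post.
s = 1 / math.log(2)  # stability, in hours

def retention(t_hours):
    """Fraction of context-free information retained after t hours,
    with no reinforcing practice."""
    return math.exp(-t_hours / s)

for t in (0, 1, 2, 24):
    print(f"after {t:>2} h: {retention(t):.1%} retained")
```

The steep early drop is the designer's argument in miniature: without practice opportunities soon after acquisition, most content-only instruction decays before it can be used.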