Dogs-to-Stars Enterprises: Group items tagged "assessment"

Steve King

UC Berkeley, Management of Technology (MOT) Program Course: Human and Organizational Fa... - 0 views

  • This course advances the concept that humans and their organizations are an integral part of the engineering paradigm, and that it is up to engineering to learn how to better integrate considerations of people into engineered systems of all types. The course applies this concept to the assessment and management of the risks associated with engineered systems over their life cycle (concept development through decommissioning). Risks (likelihoods and consequences) are addressed in the context of the qualities desired from an engineered system, including serviceability (fitness for purpose), safety (freedom from undue exposure to harm), compatibility (on time, on budget, with happy customers, including the environment), and durability (freedom from unexpected degradation in the other quality characteristics). Reliability is introduced to enable assessment of the wide variety of hazards, uncertainties, and variabilities present during the life cycle of an engineered system. Proactive (get ahead of the challenges), reactive (learn the lessons of successes and failures), and interactive (real-time assessment and management of unknown knowables and unknown unknowables) strategies are advanced and illustrated to assist engineers in assessing and managing risk.
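The course's likelihood-times-consequence framing of risk lends itself to a small worked example. The sketch below is a hypothetical illustration of that arithmetic only, not course material; every phase name, probability, and severity score in it is invented.

```python
# A minimal sketch of the life-cycle risk framing described above: risk as
# likelihood x consequence, tallied per (phase, quality attribute) pair.
# All probabilities and consequence scores below are hypothetical.

# Likelihood of a quality shortfall and its consequence (0-10 severity),
# keyed by (life-cycle phase, quality attribute).
likelihood = {
    ("design", "serviceability"): 0.10,
    ("operation", "safety"): 0.02,
    ("decommissioning", "durability"): 0.05,
}
consequence = {
    ("design", "serviceability"): 4.0,
    ("operation", "safety"): 9.0,
    ("decommissioning", "durability"): 3.0,
}

# Expected risk = likelihood * consequence, reported worst-first.
risks = {k: likelihood[k] * consequence[k] for k in likelihood}
for (phase, attribute), r in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{phase}/{attribute}: expected risk = {r:.2f}")
```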
dhtobey Tobey

Putting organizational complexity in its place - McKinsey Quarterly - Organization - St... - 0 views

  • The goal? To identify where institutional complexity is an issue, where complexity caused by factors such as a lack of role clarity or poor processes is a problem, and what’s responsible for the complexity in each area. Companies can then boost organizational effectiveness through a combination of two things: removing complexity that doesn’t add value and channeling what’s left to employees who can either handle it naturally or be trained to cope with it.
  • In this article, we review the experience of a multinational consumer goods manufacturer that applied this approach in several regions and functions and consequently halved the time it needed to make decisions in critical processes.
  • Armed with the survey data, the manufacturer constructed several “heat maps” to help senior managers pinpoint where, and why, complexity was causing trouble for employees. Each map showed a particular breakdown—a region or function, for example—and how much complexity of various kinds was occurring there, as well as the level of coping skills employees possessed.
    • dhtobey Tobey
       
      Heat maps would be a nice tool for the CD. We should begin to create a catalog of these visualizations that support decision analysis, as opposed to the simple graphical displays in basic analytics applications, which don't naturally lead to transformations that yield insight. (A minimal plotting sketch appears after this item.)
    • dhtobey Tobey
       
      Additionally, each of these "temperatures" should have a gradient to indicate the degree of consensus associated with each map. The graphic below implies there is only one view that all share -- preposterous!
  • A regional map, reproduced here (Exhibit 1), highlighted confusion over accountability between the company’s headquarters and a country office in the same region.
  • Another map showed how the manufacturer’s supply chain employees were struggling with duplication that stemmed from confusing sales forecasting and from ordering processes that required decisions to pass through multiple loops (including time-consuming iterations with regional offices) prior to approval.
  • Of course, managers must be mindful that not all complexity is equally manageable, and proceed accordingly (Exhibit 2). Exhibit 2 distinguishes four types of complexity:
    • Imposed complexity includes laws, industry regulations, and interventions by nongovernmental organizations. It is not typically manageable by companies.
    • Inherent complexity is intrinsic to the business and can only be jettisoned by exiting a portion of the business.
    • Designed complexity results from choices about where the business operates, what it sells, to whom, and how. Companies can remove it, but this could mean simplifying valuable wrinkles in their business model.
    • Unnecessary complexity arises from growing misalignment between the needs of the organization and the processes supporting it. It is easily managed once identified.
  • Whenever companies tackle complexity, they will ultimately find some individuals who seem less troubled by it than others. This is not surprising. People are different: some freeze like deer in the headlights in the face of ambiguity, uncertainty, complex roles, and unclear accountabilities; others are able to get their work done regardless.
    • dhtobey Tobey
       
      Differences in the ability to handle complexity may be due to thinkLets and may be assessable using the Bivariate Emotion Indicator I developed in my dissertation. This could become an assessment for a "CIP CMM" that we offer NEPCO through Assante's new non-profit.
  •  
    wow great stuff.. fully concur.. IMO a catalog of visualizations is very much in line with our mantra of METHODOLOGY, not TECHNOLOGY :)
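As a first entry for the visualization catalog proposed above, here is a minimal sketch of a complexity "heat map" of the kind the article describes, including the consensus gradient suggested in the annotation: each cell shows the mean survey score plus the response spread (low spread = high consensus). All region names, complexity types, and scores are hypothetical placeholders, not data from the McKinsey study.

```python
import numpy as np
import matplotlib.pyplot as plt

regions = ["HQ", "Country office A", "Country office B", "Supply chain"]
types = ["Imposed", "Inherent", "Designed", "Unnecessary"]

scores = np.array([  # mean complexity rating per cell (1-5 survey scale)
    [2.1, 3.0, 2.5, 4.2],
    [3.8, 2.7, 3.1, 4.6],
    [2.9, 2.4, 3.9, 3.3],
    [3.2, 3.5, 2.8, 4.8],
])
spread = np.array([  # standard deviation per cell, a crude consensus measure
    [0.4, 0.9, 0.5, 0.3],
    [1.2, 0.6, 0.8, 0.4],
    [0.5, 0.7, 1.1, 0.9],
    [0.6, 0.5, 0.7, 0.2],
])

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="YlOrRd", vmin=1, vmax=5)
ax.set_xticks(range(len(types)))
ax.set_xticklabels(types)
ax.set_yticks(range(len(regions)))
ax.set_yticklabels(regions)
# Annotate each cell with mean score and disagreement, so one map carries
# both "where complexity hurts" and "how much respondents agree".
for i in range(len(regions)):
    for j in range(len(types)):
        ax.text(j, i, f"{scores[i, j]:.1f}\n±{spread[i, j]:.1f}",
                ha="center", va="center", fontsize=8)
fig.colorbar(im, label="Mean complexity score")
ax.set_title("Complexity by region and type (mean ± disagreement)")
plt.tight_layout()
plt.show()
```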
dhtobey Tobey

HSI Journal of Homeland Security - 2 views

  • Generic training that can aid in dealing with unanticipated complex terrorist activities is needed. Terrorist acts can create stressful situations involving volatility, uncertainty, complexity, ambiguity, and delayed feedback and information flow (“VUCAD”). Strategic management simulation technology, based on complexity theory, can be used to assess and train personnel who must deal with the threat of terrorism.
  • Yet we also need more generic training to handle the VUCAD of terrorism.
  • A more applicable technology is known as “quasi-experimental simulation.”[17] While the quasi-experimental approach is a compromise between the free and experimental simulation methods, it combines the advantages of both and largely eliminates the disadvantages of each. In a quasi-experimental simulation, preprogrammed information is restricted to only part of the information: incoming messages that ensure all participants experience the same flow of events. On the other hand, many additional computer-generated responses (typically one-half of the incoming information) to participant actions allow realism (and maintenance of high motivation levels). Yet, because the constant flow of preprogrammed information keeps significant events and timing constant for all participants, performance can be numerically scored against established criteria of excellence or compared between different participants (or participating teams). The observer (who was necessary in the free simulation) becomes obsolete: performance is computer scored, both in terms of how a participant processes information (for example, is a strategy developed?) and in terms of the appropriateness of the actions taken to deal with scenario-generated events. (A minimal scoring sketch follows this list.)
  • The strategic management simulation allows for the assessment (and training) of contextual content knowledge, but—more significantly—it permits the analysis and training or teaching of thought and action processes.
  • Process analysis and training are based on complexity theory.[21, 22, 23] While complexity theory recognizes the importance of thought and action content (that is, what people do and think), it places major emphasis on the more generic thought and action process (that is, how people think and act). The “how” of thought and action applies to multiple facets of experience, and thus potentially transfers from one content area to another. Measuring and training the “how” of thought and action allows the complexity-based strategic management simulation technology to be applied to the VUCAD of terrorism.
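To make the scoring idea concrete, here is a toy sketch of the quasi-experimental mechanism described above: a fixed script of incoming events keeps timing identical for every participant, so actions can be scored automatically against preset criteria. All event names, actions, and the scoring rule are invented for illustration, not taken from the cited simulations.

```python
from dataclasses import dataclass

@dataclass
class ScriptedEvent:
    minute: int              # simulated time the message arrives (fixed)
    message: str
    appropriate: frozenset   # actions judged appropriate for this event

SCRIPT = [  # identical for all participants
    ScriptedEvent(5,  "bomb threat called in",    frozenset({"evacuate", "notify_police"})),
    ScriptedEvent(20, "power outage at site B",   frozenset({"dispatch_crew", "notify_ops"})),
    ScriptedEvent(40, "media requests statement", frozenset({"brief_pio"})),
]

def score_run(actions_by_minute: dict) -> float:
    """Fraction of scripted events met with at least one appropriate action.
    Real systems also score process measures (e.g., whether a strategy
    emerges across events), not just action appropriateness."""
    hits = sum(
        1 for ev in SCRIPT
        if set(actions_by_minute.get(ev.minute, ())) & ev.appropriate
    )
    return hits / len(SCRIPT)

# One participant's log: appropriate at minutes 5 and 40, not at 20.
print(score_run({5: {"evacuate"}, 20: {"call_hq"}, 40: {"brief_pio"}}))  # ~0.67
```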
dhtobey Tobey

Evidence-based medicine - Wikipedia, the free encyclopedia - 1 views

  • The systematic review of published research studies is a major method used for evaluating particular treatments. The Cochrane Collaboration is one of the best-known, respected examples of systematic reviews. Like other collections of systematic reviews, it requires authors to provide a detailed and repeatable plan of their literature search and evaluations of the evidence. Once all the best evidence is assessed, treatment is categorized as "likely to be beneficial", "likely to be harmful", or "evidence did not support either benefit or harm".
    • dhtobey Tobey
       
      We need to find access to the Cochrane Collaboration -- this is obviously a large, extant community socializing the vetting of clinical evidence.  We should find out more about their methodology and supporting technology, if any.
  • Evidence-based medicine categorizes different types of clinical evidence and ranks them according to the strength of their freedom from the various biases that beset medical research. For example, the strongest evidence for therapeutic interventions is provided by systematic review of randomized, double-blind, placebo-controlled trials involving a homogeneous patient population and medical condition. In contrast, patient testimonials, case reports, and even expert opinion have little value as proof because of the placebo effect, the biases inherent in observation and reporting of cases, difficulties in ascertaining who is an expert, and more.
    • dhtobey Tobey
       
      Is this ranking an emergent process supported by some type of knowledge exchange platform? What about consensus/dissensus analysis? Seems ripe for groupthink and manipulation or paradigm traps.
  • This process can be very human-centered, as in a journal club, or highly technical, using computer programs and information techniques such as data mining.
  • Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.
    • dhtobey Tobey
       
      Need for LivingSurvey, LivingPapers, and LivingAnalysis.
  • Despite the differences between systems, the purposes are the same: to guide users of clinical research information about which studies are likely to be most valid. However, the individual studies still require careful critical appraisal.
    • dhtobey Tobey
       
      In other words, there are wide differences of opinion (dissensus) that must be managed and used to inform decision-making.
  • The U.S. Preventive Services Task Force uses:[9]
    • Level A: Good scientific evidence suggests that the benefits of the clinical service substantially outweigh the potential risks. Clinicians should discuss the service with eligible patients.
    • Level B: At least fair scientific evidence suggests that the benefits of the clinical service outweigh the potential risks. Clinicians should discuss the service with eligible patients.
    • Level C: At least fair scientific evidence suggests that there are benefits provided by the clinical service, but the balance between benefits and risks is too close for making general recommendations. Clinicians need not offer it unless there are individual considerations.
    • Level D: At least fair scientific evidence suggests that the risks of the clinical service outweigh the potential benefits. Clinicians should not routinely offer the service to asymptomatic patients.
    • Level I: Scientific evidence is lacking, of poor quality, or conflicting, such that the risk-versus-benefit balance cannot be assessed. Clinicians should help patients understand the uncertainty surrounding the clinical service.
    • dhtobey Tobey
       
      Relates well to Scott's idea of common problem being one of risk management.
  • AUC-ROC The area under the receiver operating characteristic curve (AUC-ROC) reflects the relationship between sensitivity and specificity for a given test. High-quality tests will have an AUC-ROC approaching 1, and high-quality publications about clinical tests will provide information about the AUC-ROC. Cutoff values for positive and negative tests can influence specificity and sensitivity, but they do not affect AUC-ROC.
    • dhtobey Tobey
       
      ROC curves are similar to PPT, though they address a different and less impactful issue (system sensitivity and specificity) rather than the reliability (consistency) determined by PPT. (A minimal AUC-ROC computation is sketched below.)
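The excerpt's claim that cutoff values influence sensitivity and specificity but not AUC-ROC can be checked directly. The sketch below uses synthetic test scores (all numbers invented) and scikit-learn's roc_auc_score to show that the AUC is threshold-free while sensitivity and specificity trade off as the cutoff moves.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic diagnostic-test scores: diseased patients tend to score higher.
y_true = np.concatenate([np.ones(100), np.zeros(100)])
scores = np.concatenate([rng.normal(2.0, 1.0, 100), rng.normal(0.0, 1.0, 100)])

print("AUC:", roc_auc_score(y_true, scores))  # near 1 for a high-quality test

for cutoff in (0.5, 1.0, 1.5):
    y_pred = scores >= cutoff
    sens = (y_pred & (y_true == 1)).sum() / 100   # sensitivity varies...
    spec = (~y_pred & (y_true == 0)).sum() / 100  # ...and so does specificity
    print(f"cutoff={cutoff}: sensitivity={sens:.2f}, specificity={spec:.2f}")
# ...but the AUC computed from the raw scores is unchanged by the cutoff.
```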
Steve King

Summary - 1 views

  • CHAPTER 4: FRAMEWORK FOR SCADA UTILITY SURVIVABILITY MODELING
    • 4.1 Risk Modeling
    • 4.2 Internet Survey
    • 4.3 Survivability
    • 4.4 Taxonomy for Assessing Computer Security
    • 4.5 Definitions and Terms for a Taxonomy
    • 4.6 Understanding the Taxonomy
    • 4.7 Hierarchical Holographic Modeling (HHM)
    • 4.8 Recent Uses of the HHM in Identifying Risks
    • 4.9 Risk Modeling Using HHM
    • 4.10 Goal Development and Indices of Performance
    • 4.11 Event Tree and Fault Tree Analysis
    • 4.12 Distributions from Event Tree Analysis
    • 4.13 Partitioned Multiobjective Risk Method
    • 4.14 Multiobjective Tradeoff Analysis
    • 4.15 Evaluation
Steve King

BS 25999 Business continuity - 1 views

  • On June 15, 2010, DHS Secretary Janet Napolitano announced the adoption of BS 25999 for the PS-Prep program. BS 25999 (which comes in two parts) is one of three standards approved for use in the Voluntary Private Sector Preparedness Accreditation and Certification Program (PS-Prep). PS-Prep is directed by Title IX of the Implementing Recommendations of the 9/11 Commission Act of 2007.
dhtobey Tobey

Pentagon: Boost Training With Computer-Troop Mind Meld | Danger Room | Wired.com - 0 views

  • The Pentagon is looking to better train its troops — by scanning their minds as they play video games. Adaptive, mind-reading computer systems have been a work-in-progress among military agencies for at least a decade. In 2000, far-out research agency Darpa launched “Augmented Cognition,” a program that sought to develop computers that used EEG scans to adjust how they displayed information — visually, orally, or otherwise — to avoid overtaxing one realm of a troop’s cognition. The Air Force also took up the idea, by trying to use EEGs to “assess the operator’s actual cognitive state”  and “avoid cognitive bottlenecks before they occur.”
  • Now, the Office of the Secretary of Defense (OSD) is soliciting small business proposals for an even more immersive trainer, one that includes voice-recognition technology, and picks up on vocal tone and facial gestures. The game would then react and adapt to a war-fighter’s every action. For example, if a player’s gesture “insults the local tribal leader,” the trainee would “find that future interactions with the population are more difficult and more hostile.” And, most importantly, the new programs would react to the warrior’s own physiological and neurological cues. They’d be monitored using an EEG, eye tracking, heart and respiration rate, and other physiological markers. Based on the metrics, the game would adapt in difficulty and “keep trainees in an optimal state of learning.”
    • dhtobey Tobey
       
      Could this be an application of the immersive training system being developed at Raytheon? Ironically, they use the name "Mind-Meld" in the title of this article. We should get Guilded Skilled Performance copyrighted and trademarked, as DARPA seems to be heading in this direction. Could be a source of future grant-related funding. (A toy adaptation loop is sketched at the end of this item.)
  • The OSD isn’t ready to use neuro-based systems in the war zone, but the agency does want to capitalize on advances in neuroscience that have assigned meaningful value to intuitive decision-making. As the OSD solicitation points out, troops often need to make fast-paced decisions in high-stress environments, with limited information and context. Well-reasoned, analytic decisions are rarely possible.
  • ...1 more annotation...
  • That’s where neuroscience comes in. OSD wants simulated games that use EEGs to monitor the cognitive patterns of trainees, particularly at what’s thought to be the locus of neurally based, intuitive decision-making — the basal ganglia. In his seminal paper on the neuroscience of intuition, Harvard’s Matthew Lieberman notes that the ganglia can “learn temporal patterns that are predictive of events of significance, regardless of conscious intent … as long as exposure is repeatedly instantiated.”
    • dhtobey Tobey
       
      The basal ganglia are where I hypothesized the command neurons that trigger thinkLets are located -- the source of intuitive decision making, according to this research.
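The solicitation's "keep trainees in an optimal state of learning" amounts to a feedback loop from physiological markers to game difficulty. The toy sketch below shows the shape of such a loop; the workload formula, the 0-1 normalization, and the optimal band limits are invented placeholders, since a real system would derive them from validated EEG and psychophysiology models.

```python
def estimate_workload(eeg_engagement: float, heart_rate: float) -> float:
    """Collapse two (hypothetical, 0-1 normalized) markers into one score."""
    return 0.7 * eeg_engagement + 0.3 * heart_rate

def adapt_difficulty(difficulty: float, workload: float,
                     low: float = 0.4, high: float = 0.7) -> float:
    """Simple proportional rule: back off when the trainee is overloaded,
    push harder when they are under-challenged, stay put in the band."""
    if workload > high:
        difficulty -= 0.1 * (workload - high)
    elif workload < low:
        difficulty += 0.1 * (low - workload)
    return min(max(difficulty, 0.0), 1.0)

# Simulated session: the trainee starts stressed, then disengages.
difficulty = 0.5
for eeg, hr in [(0.9, 0.8), (0.85, 0.7), (0.5, 0.4), (0.2, 0.3)]:
    difficulty = adapt_difficulty(difficulty, estimate_workload(eeg, hr))
    print(f"workload-driven difficulty: {difficulty:.3f}")
```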