
Group items tagged: data

dhtobey Tobey

Byte Size Biology » genomics - 1 views

  • Metadata is the “data about the data”: all the habitat data, SOPs and abiotic data that is in dire need of the standardization Kyrpides writes about.
  • In 2005 the Genomics Standards Consortium was formed to address this problem. Renzo Kottman from the Max-Planck Institute for Marine Microbiology in Bremen, Germany talked about software development within the GSC, and specifically about his own project: the Genomic Contextual Data Markup Language, or GCDML. GCDML is an XML-based standard for describing everything associated with a genomic or a metagenomic sample: where it was taken from, under what conditions, which protocols were used to extract, sequence, assemble, finish and analyze the metagenome.
    • dhtobey Tobey
       
      Standards organizations are community desktops waiting to happen. More specifically, note the reference to "protocols" with a five-step process similar to our technology transfer framework. If we could get a copy of this protocol we could develop a diagram and a community site around the five "research cycle" stages: extract, sequence, assemble, finish and analyze (a rough sketch of a GCDML-style record follows this entry). What we need is a similar structure for the tissue sourcing process. Scott, can you think of who might have such a protocol documented?
  •  
    Excerpt from further down the article that Steve sent via email. Note the embedded presentation on "Software development by the Genomics Standards Consortium."
  •  
    wow... this page is a tour de force for bio science info issues, and I very much like where you are going with the extract, sequence, assemble, finish and analyze pattern... similar to the NIST model we are using at NERC... hopefully
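
Since GCDML is XML-based, a tiny record makes the "data about the data" idea concrete. A rough sketch of a GCDML-style record is below; the element names are invented for illustration and are not taken from the actual GCDML schema.

```python
# Minimal sketch of a GCDML-style metadata record.
# Element names are illustrative assumptions, not the real GCDML schema.
import xml.etree.ElementTree as ET

sample = ET.Element("genomeSample", id="metagenome-001")

habitat = ET.SubElement(sample, "habitat")
ET.SubElement(habitat, "location").text = "North Sea, surface water"
ET.SubElement(habitat, "temperatureCelsius").text = "12.5"

# One protocol entry per stage of the research cycle mentioned above.
protocols = ET.SubElement(sample, "protocols")
for stage in ["extract", "sequence", "assemble", "finish", "analyze"]:
    ET.SubElement(protocols, "protocol", stage=stage, sop="SOP-" + stage)

print(ET.tostring(sample, encoding="unicode"))
```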
Steve King

Data.gov - 0 views

  • With so much government data to work with, developers are creating a wide variety of applications, mashups, and visualizations. From crime statistics by neighborhood to the best towns to find a job to seeing the environmental health of your community–these applications arm citizens with the information they need to make decisions every day. Enjoy these highlights of the hundreds of applications available.
dhtobey Tobey

The Rise of Crowd Science - Technology - The Chronicle of Higher Education - 0 views

  • Alexander S. Szalay is a well-regarded astronomer, but he hasn't peered through a telescope in nearly a decade. Instead, the professor of physics and astronomy at the Johns Hopkins University learned how to write software code, build computer servers, and stitch millions of digital telescope images into a sweeping panorama of the universe.
  • Today, data sharing in astronomy isn't just among professors. Amateurs are invited into the data sets through friendly Web interfaces, and a schoolteacher in Holland recently made a major discovery, of an unusual gas cloud that might help explain the life cycle of quasars—bright centers of distant galaxies—after spending part of her summer vacation gazing at the objects on her computer screen. Crowd Science, as it might be called, is taking hold in several other disciplines, such as biology, and is rising rapidly in oceanography and a range of environmental sciences. "Crowdsourcing is a natural solution to many of the problems that scientists are dealing with that involve massive amounts of data," says Haym Hirsh, director of the Division of Information and Intelligent Systems at the National Science Foundation.
    • dhtobey Tobey
       
      Crowdsourcing should be added to our pitch on collective intelligence and included as a primary benefit in NSF and related grants for university development of our code base.
  • Mr. Szalay's unusual career began with a stint as a rock star. While in graduate school in Hungary, he played lead guitar in the band Panta Rhei, which released two albums and several singles in the 1970s.
    • dhtobey Tobey
       
      Hey, this guy might "get" our publishing/producer metaphor for LivingMethods. Perhaps he might be a collaborator on the NSF solicitation for coordinated science applications?
  • In 2007 tragedy ended their long partnership. Mr. Gray set out from San Francisco on a solo trip on his 40-foot sailboat and did not return.
    • dhtobey Tobey
       
      Oops... looks like the guy needs a new systems partner!
  • A couple of years after Mr. Szalay joined the project, a colleague introduced him to Jim Gray, who was a kind of rock star himself—in the computer-science world. Wired magazine once wrote that the programmer's work had made possible ATM machines, electronic tickets, and other wonders of modern life. When Mr. Szalay met him, Mr. Gray was a technical fellow at Microsoft Research and was looking for enormous sets of numbers to place in the databases he was designing.
    • dhtobey Tobey
       
      Nice link with Microsoft Research Labs.
  • in 1992 came the project that would change his career. Johns Hopkins joined the Sloan Digital Sky Survey project, a computerized snapshot of the heavens.
  • The scientists, along with tech-industry leaders whom Mr. Gray had mentored in the past, offered to help the Coast Guard search the open sea using any technology they could think of. Google executives and others helped provide fresh satellite images of the area. And an official at Amazon used the company's servers to send those satellite images to volunteers—more than 12,000 of them stepped forward—who scanned them for any sign of the lost researcher.
  • But Jim Gray was never found. Some of the techniques that the astronomer learned from the search effort, though, have now been incorporated into a Web site that invites anyone to help categorize images from the Sloan Digital Sky Survey.
  • The number of volunteers surprised the organizers. "The server caught fire a couple of hours after we opened it" in July 2007, he said, burning out from overuse. More than 270,000 people have signed up to classify galaxies so far.
  • Gene Wikis
  • It started under the name of GenMAPP, or Gene Map Annotator and Pathway Profiler. Participation rates were low at first because researchers had little incentive to format their findings and add them to the project. Tenure decisions are made by the number of articles published, not the amount of helpful material placed online. "The academic system is not set up to reward the sharing of the most usable aspects of the data," said Alexander Pico, bioinformatics group leader and software engineer at the Gladstone Institute of Cardiovascular Disease. In 2007, Mr. Pico, a developer for GenMAPP, and his colleagues added an easy-to-edit Wiki to the project (making it less time-consuming to participate) and allowed researchers to mark their gene pathways as private until they had published their findings in academic journals (alleviating concerns that they would be pre-empting their published research). Since then, participation has grown quickly, in part because more researchers—and even some pharmaceutical companies—are realizing that genetic information is truly useful only when aggregated.
Steve King

NEJM -- What's Keeping Us So Busy in Primary Care? A Snapshot from One Practice - 0 views

  • Primary care practices typically measure productivity according to the number of visits, which also drives payment.
    • dhtobey Tobey
       
      This study is directly related to the TrustNetMD mission, but could also be useful for other EBM-related and OBM-related community desktop solutions.
  • Several studies have estimated the amount of time that primary care physicians devote to nonvisit work.[1,2] To provide a more detailed description, my colleagues and I used our electronic health record to count units of primary care work during the course of a year.
  • Greenhouse Internists is a community-based internal medicine practice employing five physicians in Philadelphia. In 2008, we had an active caseload of 8440 patients between 15 and 99 years of age.
  • Our payer mix included 7.2% of payments from Medicaid (exclusively through Medicaid health maintenance organizations), 21.5% from Medicare (of which 14.0% were fee-for-service and 7.5% capitated), 64.7% from commercial insurers (34.5% fee-for-service and 30.2% capitated), and 6.5% from pay-for-performance programs.
    • dhtobey Tobey
       
      I wonder how this breakdown compares with national/urban averages? Also how are these trending? Is the pay-for-performance increasing dramatically? I would think so based on what we are hearing.
  • Throughout 2008, our physicians provided 118.5 scheduled visit-hours per week, ranging from 15 to 31 weekly hours each. We regard this schedule as equivalent to the work of four full-time physicians, with physicians typically working 50 to 60 hours per week. Our staff included four medical assistants, five front-desk staff, one business manager, one billing manager, one health educator (hired midyear), and two full-time clerical staff. Our staffing ratio was approximately 3.5 full-time support staff per full-time physician. We had no nurses or midlevel practitioners.
    • dhtobey Tobey
       
      From the little I know this is a typical primary care scenario - very poor leverage of professional staff, meaning no use of nurses or midlevel practitioners to leverage physician time and expertise.
  • We use an electronic health record, which we adopted in July 2004[3] and use exclusively to store, retrieve, and manage clinical information. Our electronic system came with 24 "document types" that function like tabs in a paper chart to organize documents, dividing clinical information into categories such as "office visit," "phone note," "lab report," and "imaging." Since all data about patients is stored in the electronic record (either as structured data or as scanned PDFs) and each document is signed electronically by a physician, we are able to measure accurately the volume of documents, which serve as proxies for clinical activities, in a given time period.
    • dhtobey Tobey
       
      Each of these document types could become a "LivingPaper," creating a "LivingRecord" vs. the current EHR... Steve, have you discussed something like this with TNMD? (A rough sketch of tallying these document types follows this entry.)
  • The volume and types of documents that we receive, process, and create are listed in Table 1
  • Each physician reviewed 19.5 laboratory reports per day, including those ordered through our office (which are delivered to us through an electronic interface and are automatically posted to the database of the electronic health record as numerical values) and those ordered outside our office (which enter our chart as scanned PDFs and are not posted as numerical values). The work cycle of responding to a laboratory result includes interpretation by telephone, letter, or e-mail. (Our office sent 12,541 letters communicating test results, about a third of which were sent by e-mail.) For noninterfaced laboratories, we must decide which values need to be entered manually into the electronic health record by a staff person; the values of scanned results cannot be graphed or searched without this step. Laboratory results frequently trigger a review or adjustment of a medication, which requires access to accurate, current medication lists with doses.
    • dhtobey Tobey
       
      How difficult would it be to integrate LivingPaper with existing EHRs and/or lab systems? Since EHRs are still in the "early adopter" phase, perhaps we can address some of the most critical needs, making EHR use unnecessary; or perhaps this is a HUGE joint opportunity with Microsoft's healthcare division.
  • Of these calls, 35.7% were for an acute problem, 26.0% were for administrative purposes
  • Physicians averaged 16.8 e-mails per day. Of these electronic communications, 59.3% were for the interpretation of test results, 21.7% were for response to patients (either initiated by patients through the practice's interactive Web site or as part of an e-mail dialogue with patients), 9.3% were for administrative problems, 5.0% were for acute problems, 2.8% were for proactive outreach to patients, and 1.9% were for discussions with consultants.
    • dhtobey Tobey
       
      Nearly 60% for interpretation of test results!!! Opinion management ranks as the highest use of electronic communications. THIS IS OUR SWEET SPOT! We need to find this type of data for research scientists.
    • Steve King
       
      this is a perfect source document for HC CD
  • Telephone calls that were determined to be of sufficient clinical import to engage a physician averaged 23.7 per physician per day, with 79.7% of such calls handled directly by physicians.
    • dhtobey Tobey
       
      Wow! I never would have guessed that telephone calls were such a significant part of the physician day. Does the EHR provide a CRM for call-logging?
  • Each physician reviewed 11.1 imaging reports per day, which usually required communication with patients for interpretation. Such review may require updating problem lists (e.g., a new diagnosis of a pulmonary nodule) or further referral (e.g., fine-needle aspiration for a cold thyroid nodule), which generates additional work, since results and recommendations are communicated to patients and consultants.
  • Each physician reviewed 13.9 consultation reports per day. Such reports from specialists may require adjustments to a medication list (if a specialist added or changed a medication), changes to a problem list, or a call or e-mail to a patient to explain or reinforce a specialist's recommendation. Some consultation or diagnostic reports relate to standard quality metrics (e.g., eye examinations for patients with diabetes) and need to be recorded in a different manner to support ongoing quality reporting and improvement.[5]
  • Before our practice had an electronic health record, we employed a registered nurse. After the implementation of the electronic health record system, much of the work that the nurse performed could be done by staff who did not have nursing skills, and by 2008, we no longer employed a registered nurse. However, on the basis of the analysis described here, we have hired a registered nurse to do "information triage" of incoming laboratory reports, telephone calls, and consultation notes — a completely different job description than what we had before.
    • dhtobey Tobey
       
      Most interesting! This is the conclusion we came to and presented to TNMD as a business plan concept -- become the triage service through outsourcing/insourcing RNs supported by the community desktop system.
  • Our practice is participating in a multipayer Patient Centered Medical Home demonstration project[7] (which allowed us to hire our health educator). This project is overseen by the Pennsylvania governor's office and funded by the three largest commercial insurers and all three Medicaid insurers in our region
    • dhtobey Tobey
       
      Monetization is with the insurers -- just as we expected.
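
The counting approach described in this article is easy to picture as code: every signed document in the EHR carries a type, and tallying types per physician per day yields workload figures like those quoted above. A minimal sketch follows; the record fields are assumptions for illustration, not the practice's actual EHR export.

```python
# Minimal sketch: tally EHR documents per physician per day by document type.
# Record fields are illustrative assumptions, not the actual EHR export format.
from collections import Counter, defaultdict

documents = [
    {"physician": "A", "date": "2008-03-14", "doc_type": "lab report"},
    {"physician": "A", "date": "2008-03-14", "doc_type": "phone note"},
    {"physician": "B", "date": "2008-03-14", "doc_type": "office visit"},
    {"physician": "A", "date": "2008-03-14", "doc_type": "lab report"},
]

tallies = defaultdict(Counter)  # (physician, date) -> Counter of document types
for doc in documents:
    tallies[(doc["physician"], doc["date"])][doc["doc_type"]] += 1

for (physician, date), counts in sorted(tallies.items()):
    print(physician, date, dict(counts))
```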
dhtobey Tobey

AutoMap: Project | CASOS - 0 views

  • AutoMap is a text mining tool developed by CASOS at Carnegie Mellon that enables the extraction of network data from texts. AutoMap can extract content analytic data (words and frequencies), semantic networks, and meta-networks from unstructured texts. Pre-processors for handling PDFs and other text formats exist, and post-processors for linking to gazetteers and belief inference also exist. The main functions of AutoMap are to extract, analyze, and compare texts in terms of concepts, themes, sentiment, semantic networks, and the meta-networks extracted from the texts. AutoMap exports data in DyNetML and can be used interoperably with *ORA. AutoMap uses parts-of-speech tagging and proximity analysis to do computer-assisted Network Text Analysis (NTA). NTA encodes the links among words in a text and constructs a network of the linked words. AutoMap subsumes classical Content Analysis by analyzing the existence, frequencies, and covariance of terms and themes. AutoMap has been implemented in Java 1.5.0_07 and can operate in both a GUI front-end mode and a backend mode. Its main functionalities are to extract, analyze, and compare the mental models of individuals and groups, and to reveal the structure of social and organizational systems from texts. AutoMap also offers a variety of natural-language pre-processing techniques: named-entity recognition; stemming (Porter, KStem); collocation (bigram) detection; extraction routines for dates, events, and parts of speech; deletion; thesaurus development and application; flexible ontology usage; and parts-of-speech tagging.
  •  
    Could this tool be useful for the knowledge exchange to develop automatic tagging and taxonomy creation?
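
To get a feel for what Network Text Analysis does, here is a rough sketch of the proximity idea only: words that co-occur within a small window become linked, and the links form a network. This is plain Python, not AutoMap itself, and it skips the ontology, thesaurus, and part-of-speech machinery.

```python
# Rough sketch of proximity-based Network Text Analysis (not AutoMap itself):
# words co-occurring within a sliding window become linked nodes in a network.
from collections import Counter
from itertools import combinations

text = "metadata standards link habitat data protocols and genomic data"
window = 3  # words within this many positions of each other are linked

tokens = text.lower().split()
edges = Counter()
for i in range(len(tokens)):
    for a, b in combinations(sorted(set(tokens[i:i + window])), 2):
        edges[(a, b)] += 1

# The most frequent word-word links approximate the text's semantic network.
for (a, b), weight in edges.most_common(5):
    print(f"{a} -- {b}: {weight}")
```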
dhtobey Tobey

Atigeo - 0 views

  • Technology has given us access to endless amounts of information from every corner of the world. But the real challenge is to make sense of all this seemingly disconnected data. To cut through the clutter, make relevant associations, and transform raw information into true knowledge. Atigeo™ is solving this complex challenge with xPatterns,™ a new breed of compassionate technology that allows users to derive insight and wisdom from data. Based upon an advanced platform of artificial intelligence and machine learning, it effectively interprets unstructured data to arrive at new and unexpected connections. Connections that can personalize individual interactions, enhance consumer privacy, improve business intelligence, advance research development, and foster a greater understanding of the world around us.
  •  
    Company referenced today by Jeff. While this uses pattern clustering algorithms, it does so based on AI techniques that are inherently counter to our collective intelligence solutions.
Steve King

The next wave of change for US health care payments - McKinsey Quarterly - Health Care ... - 0 views

  • We estimate that by 2012, about 80 percent of the projected eight billion core US health care transactions will be in electronic formats,
  • The complexity of clinical data should not be underestimated—a typical patient-level clinical data set can include more than 800 discrete fields, compared with only about 20 to 30 for a financial transaction.
  • Resulting in part from this systemwide complexity, industry administrative costs will grow by about 10 percent annually over the coming years
  • Cross-industry collaboration could finally spur the creation of payment utilities such as full-cycle-payment automation (described in our 2007 article). As noted there, we believe in the potential for cross-industry collaboration to create an at-scale payment-settlement utility that knits together health care transaction processing through clearinghouses, the automated clearinghouse payment network, and card network payments for retail payments.
  • By applying CBO data, we estimate that 55 percent of hospitals and 85 percent of physician practices will reach the basic stages of meaningful use by 2014.
Steve King

Virtual Strategy Magazine - PC Hypervisors Virtually Change Everything - 0 views

  • With VDI, virtual desktop images are stored in a data center and provided to a client via the network. The virtual machines will include the entire desktop stack, from operating system to applications to user preferences, and management is provided centrally through the backend virtual desktop infrastructure.   The promise is that VDI will replace the need for myriad systems management and security tools that are currently deployed. No more demands for traditional desktop management tools for OS deployment, patch management, anti-virus, personal firewalls, encryption, software distribution and so on. In fact, many are suggesting that we can return to thin client computing models
  •  
    Not sure exactly how this applies to VW internal IT infrastructure and client-facing apps... but I'm sure it does! Especially if we could have client VWsuite VMs running in our data center, so that we abstract all the different GME/OnP/LP/KE/CRM platforms into a single VM client interface that anyone can log into with no complexity.
Steve King

Download details: Sharing Data across Microsoft Dynamics CRM Deployments - 1 views

  • Sharing Data across Microsoft Dynamics CRM Deployments
dhtobey Tobey

Google Prediction API - Google Code - 0 views

  • The Prediction API enables access to Google's machine learning algorithms to analyze your historic data and predict likely future outcomes. Upload your data to Google Storage for Developers, then use the Prediction API to make real-time decisions in your applications
  •  
    Potential analytic toolkit for analyzing behavior trends in best practices
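
The workflow the excerpt describes, train on uploaded historical data and then ask for real-time predictions, can be mocked up locally. The sketch below uses scikit-learn as a stand-in for the hosted service, since the excerpt does not show the Prediction API's actual calls; the feature names and data are invented.

```python
# Local stand-in for the train-then-predict workflow described above,
# using scikit-learn rather than the hosted Google Prediction API.
from sklearn.linear_model import LogisticRegression

# Hypothetical historic data: [visits_per_week, items_shared] -> adopted best practice?
X_train = [[1, 0], [2, 1], [8, 5], [9, 7], [3, 1], [10, 9]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# "Real-time" decision for a new behavior record.
print(model.predict([[7, 4]]))        # predicted class
print(model.predict_proba([[7, 4]]))  # class probabilities
```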
Steve King

Technology Review: The Semantic Web Goes Mainstream - 0 views

  • Another technique that Twine uses is graph analysis. This idea, explains Spivack, is similar to the thinking behind the "social graph" that Mark Zuckerberg, the founder of Facebook, extols: connections between people exist in the real world, and online social-networking tools simply collect those connections and make them visible. In the same way, Spivack says, Twine helps make the connections between people and their information more accessible. When data is tagged, it essentially becomes a node in a network. The connections that each node has to other nodes (which could be other data, people, places, organizations, projects, events, et cetera) depend on their tags and the statistical relevance they have to the tags of other nodes. This is how Twine determines relevance when a person searches through his or her information. The farther away a node is, the less relevant it is to a user's search
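
The relevance mechanism described above, where tagged items become nodes and relevance falls with graph distance, can be sketched in a few lines. The example below uses the networkx library and invented tags; it is not Twine's actual algorithm.

```python
# Sketch of tag-graph relevance: items sharing a tag are linked, and relevance
# to a query item falls with shortest-path distance. Tags are invented;
# this is not Twine's actual algorithm.
import networkx as nx
from itertools import combinations

items = {
    "genomics-paper": {"genomics", "metadata"},
    "ehr-study": {"healthcare", "metadata"},
    "crowd-science": {"genomics", "crowdsourcing"},
    "sankey-tool": {"visualization"},
}

G = nx.Graph()
G.add_nodes_from(items)
for a, b in combinations(items, 2):
    if items[a] & items[b]:  # shared tag => edge
        G.add_edge(a, b)

# Nodes closer to the query item are more relevant; unreachable nodes are not listed.
distances = nx.single_source_shortest_path_length(G, "genomics-paper")
for node, dist in sorted(distances.items(), key=lambda kv: kv[1]):
    print(node, "distance", dist)
```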
dhtobey Tobey

Evidence-based medicine - Wikipedia, the free encyclopedia - 1 views

  • The systematic review of published research studies is a major method used for evaluating particular treatments. The Cochrane Collaboration is one of the best-known, respected examples of systematic reviews. Like other collections of systematic reviews, it requires authors to provide a detailed and repeatable plan of their literature search and evaluations of the evidence. Once all the best evidence is assessed, treatment is categorized as "likely to be beneficial", "likely to be harmful", or "evidence did not support either benefit or harm".
    • dhtobey Tobey
       
      We need to find access to the Cochrane Collaboration -- this is obviously a large, extant community socializing the vetting of clinical evidence.  We should find out more about their methodology and supporting technology, if any.
  • Evidence-based medicine categorizes different types of clinical evidence and ranks them according to the strength of their freedom from the various biases that beset medical research. For example, the strongest evidence for therapeutic interventions is provided by systematic review of randomized, double-blind, placebo-controlled trials involving a homogeneous patient population and medical condition. In contrast, patient testimonials, case reports, and even expert opinion have little value as proof because of the placebo effect, the biases inherent in observation and reporting of cases, difficulties in ascertaining who is an expert, and more.
    • dhtobey Tobey
       
      Is this ranking an emergent process supported by some type of knowledge exchange platform? What about consensus/dissensus analysis? Seems ripe for groupthink and manipulation or paradigm traps.
  • This process can be very human-centered, as in a journal club, or highly technical, using computer programs and information techniques such as data mining.
  • Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.
    • dhtobey Tobey
       
      Need for LivingSurvey, LivingPapers, and LivingAnalysis.
  • Despite the differences between systems, the purposes are the same: to guide users of clinical research information about which studies are likely to be most valid. However, the individual studies still require careful critical appraisal.
    • dhtobey Tobey
       
      In other words, there are wide differences of opinion (dissensus) that must be managed and used to inform decision-making.
  • The U.S. Preventive Services Task Force uses:[9]
    Level A: Good scientific evidence suggests that the benefits of the clinical service substantially outweigh the potential risks. Clinicians should discuss the service with eligible patients.
    Level B: At least fair scientific evidence suggests that the benefits of the clinical service outweigh the potential risks. Clinicians should discuss the service with eligible patients.
    Level C: At least fair scientific evidence suggests that there are benefits provided by the clinical service, but the balance between benefits and risks is too close for making general recommendations. Clinicians need not offer it unless there are individual considerations.
    Level D: At least fair scientific evidence suggests that the risks of the clinical service outweigh potential benefits. Clinicians should not routinely offer the service to asymptomatic patients.
    Level I: Scientific evidence is lacking, of poor quality, or conflicting, such that the risk versus benefit balance cannot be assessed. Clinicians should help patients understand the uncertainty surrounding the clinical service.
    • dhtobey Tobey
       
      Relates well to Scott's idea of the common problem being one of risk management.
  • AUC-ROC The area under the receiver operating characteristic curve (AUC-ROC) reflects the relationship between sensitivity and specificity for a given test. High-quality tests will have an AUC-ROC approaching 1, and high-quality publications about clinical tests will provide information about the AUC-ROC. Cutoff values for positive and negative tests can influence specificity and sensitivity, but they do not affect AUC-ROC.
    • dhtobey Tobey
       
      ROC curves are similar to PPT, though addressing a different and less impactful issue of system sensitivity and specificity, rather than reliability (consistency) as determined by PPT.
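
The AUC-ROC point in the excerpt is easy to verify numerically: moving the positive/negative cutoff trades sensitivity against specificity, but the AUC, which is computed from the scores themselves, stays the same. A small sketch with made-up test scores, using scikit-learn:

```python
# Sketch: changing the cutoff shifts sensitivity and specificity, but AUC-ROC,
# computed from the raw scores, is unaffected. Data are made up.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 0, 1, 1, 1, 1, 0]
scores = [0.1, 0.3, 0.35, 0.4, 0.45, 0.6, 0.7, 0.8, 0.9, 0.2]

print("AUC-ROC:", roc_auc_score(y_true, scores))  # independent of any cutoff

for cutoff in (0.3, 0.5, 0.7):
    preds = [1 if s >= cutoff else 0 for s in scores]
    tp = sum(p and t for p, t in zip(preds, y_true))
    tn = sum((not p) and (not t) for p, t in zip(preds, y_true))
    sensitivity = tp / sum(y_true)
    specificity = tn / (len(y_true) - sum(y_true))
    print(f"cutoff={cutoff}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```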
dhtobey Tobey

Online Learning Environment Survey (OLES) - 0 views

  • The Online Learning Environment Survey (OLES) is an online survey instrument for evaluating e-learning environments. The data collected and the resultant statistics depict the actual and preferred learning environment of individuals and groups of learners giving valuable feedback to educators working in these environments. Using the OLES educators can gather valuable pre-course and post-course data to evaluate the effectiveness of the e-learning environment. Adjustments can then be made accordingly to improve or adjust the learning environment.
  •  
    Survey to use in the Critical Intelligence validation phase
Steve King

.:: iSec Consulting ::. - 0 views

shared by Steve King on 04 Jul 10
  • Complex Event Processing (CEP) is a technology which has been used for many years in the Aerospace and Defence Industry for Situational Awareness and Data Fusion modules in Command, Control, Communications, Computing and Intelligence Systems (aka C4I). Currently CEP is being rediscovered as a foundation for a new class of extremely effective Business Intelligence, Security and System/Network/SCADA Monitoring solutions in industries like Financial Services, Telecommunications, Oil and Gas, Manufacturing, Logistics etc.
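
As a rough illustration of what CEP means in practice (this is not the vendor's product), the sketch below watches a stream of simple events and raises a composite alert when one source produces several failed logins inside a short window; the event fields and thresholds are invented.

```python
# Rough sketch of a Complex Event Processing rule: raise a composite alert when
# one source produces 3+ failed logins within 60 seconds. Event fields are invented.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 3

recent = defaultdict(deque)  # source -> timestamps of recent failed logins

def process(event):
    if event["type"] != "login_failed":
        return None
    times = recent[event["source"]]
    times.append(event["time"])
    while times and event["time"] - times[0] > WINDOW_SECONDS:
        times.popleft()  # drop events outside the sliding window
    if len(times) >= THRESHOLD:
        return f"ALERT: {event['source']} had {len(times)} failed logins in {WINDOW_SECONDS}s"
    return None

stream = [
    {"type": "login_failed", "source": "10.0.0.5", "time": 0},
    {"type": "login_ok",     "source": "10.0.0.9", "time": 10},
    {"type": "login_failed", "source": "10.0.0.5", "time": 20},
    {"type": "login_failed", "source": "10.0.0.5", "time": 45},
]

for ev in stream:
    alert = process(ev)
    if alert:
        print(alert)
```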
Steve King

AchillesINSIDE™ - 0 views

  • By leveraging the proprietary data in Delphi™, the world’s largest database of industrial system vulnerabilities, Wurldtech has created a solution specifically designed to help reduce the cost and complexity of mitigation activities for process control networks by integrating specific vulnerability intelligence into common security enforcement devices such as firewalls and intrusion detection systems. This allows common IT infrastructure to be tailored for industrial network environments and continuously updated with specific rule-sets and signatures, protecting control systems immediately, substantially reducing the frequency of patching activities and reducing overall costs. This update and support service is called AchillesINSIDE™.
Steve King

Verizon Business Security Blog » Blog Archive » Verizon Incident Metrics Fram... - 0 views

  • Today we’re making a version of that framework, the Verizon Incident Sharing Framework (VerIS), available for you to use. In the document that you can download here, you’ll find the first release of the VerIS framework. You can also find a shorter executive summary here. Our goal for our customers, friends, and anyone responsible for incident response, is to be able to create data sets that can be used and compared because of their commonality. Together, we can work to eliminate both equivocality and uncertainty, and help defend the organizations we serve.
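
The point of a shared framework is that every incident gets recorded with the same fields, so data sets from different organizations can be merged and compared. The sketch below illustrates that idea with a deliberately simplified record; the field names are assumptions for illustration, not the actual VerIS schema.

```python
# Simplified sketch of a common incident record that makes data sets comparable.
# Field names are illustrative assumptions, not the actual VerIS schema.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Incident:
    actor: str         # who caused it (e.g., "external", "internal")
    action: str        # what they did (e.g., "malware", "misuse")
    asset: str         # what was affected (e.g., "database server")
    records_lost: int

incidents = [
    Incident("external", "malware", "database server", 12000),
    Incident("internal", "misuse", "laptop", 300),
    Incident("external", "hacking", "web application", 45000),
]

# Because every record shares the same fields, simple cross-incident comparisons work.
print(Counter(i.actor for i in incidents))
print("total records lost:", sum(i.records_lost for i in incidents))
```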
dhtobey Tobey

GroupMind Express - Collaboration Software and Consulting for Decision Support - 2 views

  • We provide web-based tools and consulting services to support organizations and consultants. Our purpose is to help teams make decisions based on shared data, resulting in increased alignment and faster implementation. Here are several standard organization needs (your need / our value-add) and how we can add value to your work:
    Surveys: Shared results lead to group learning. Identify your areas of strength and weakness.
    Meetings: Interactive meetings provide opportunities for buy-in and for gathering the group's intelligence. Hear from everyone.
    Brainstorm or Delphi process: Create better solutions and build improvement by using fast-cycle brainstorming to increase group understanding.
  •  
    Steve and I looked at this platform this evening in prep for tomorrow's walk-through, and after reviewing the KE capabilities and customization limitations, this may be a better option. We should therefore postpone tomorrow's walk-through and see about getting a trial version of GroupMind to try out for Raytheon.
Steve King

Sankey Helper 2.4.1 by G.Doka - 0 views

  • Sankey Helper v2.4 helps you design Sankey diagrams from Excel data ... in Excel!
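
For readers outside Excel, the same kind of diagram can be drawn in Python with matplotlib's Sankey class; this is just an alternative sketch with made-up flow values, not the Sankey Helper add-in itself.

```python
# Alternative sketch: a simple Sankey diagram with made-up flow values,
# using matplotlib's Sankey class (not the Excel-based Sankey Helper add-in).
import matplotlib.pyplot as plt
from matplotlib.sankey import Sankey

# One input split into two outputs and a small loss (flows must sum to zero).
Sankey(
    flows=[1.0, -0.6, -0.3, -0.1],
    labels=["input", "output A", "output B", "loss"],
    orientations=[0, 0, 1, -1],
).finish()
plt.title("Minimal Sankey diagram")
plt.show()
```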
dhtobey Tobey

Cybersecurity panel: Federal CISOs must focus on worker training - FierceGovernmentIT - 0 views

  • Only 12 percent of federal CISOs worry about poorly trained users. According to an April 2010 study by the Ponemon Institute, 40 percent of all data breaches in the United States are the result of negligence; however, a comparable statistic for the federal space is unavailable.
  • The Computer Security Act of 1987 requires federal agencies to "provide for the mandatory periodic training in computer security awareness and accepted computer security practices of all persons who are involved with the management, use, or operation of each Federal computer system within or under the supervision of that agency." At the NIST event, Hord Tipton, executive director of (ISC)², estimated that most federal employees only get an hour of training per year, under FISMA requirements.
  •  
    This points to a significant opportunity for deployment of the Critical Intelligence cybersecurity course, but also for other eLearning systems that fulfill the requirements of the Computer Security Act.
Steve King

GIAC Security Expert (GSE) - 1 views

shared by Steve King on 24 Aug 10
  • The GSE exam is given in two parts. The first part is a multiple choice exam which may be taken at a proctored location just like any other GIAC exam. The current version of the GSE multiple choice exam has the passing score set at 75%, and the time limit is 3 hours. Passing this exam qualifies a person to sit for the GSE hands-on lab. The first day of the two-day GSE lab consists of a rigorous battery of hands-on exercises drawn from all of the domains listed below. The second day consists of an Incident Response Scenario that requires the candidate to analyze data and report their results in a written incident report as well as an oral report.