
Human-Aided Computer Cognition for e-Discovery: A Computational Framework (open source)


Presentation Transcript


  1. Human-Aided Computer Cognition for e-Discovery: A Computational Framework (open source). Chris Hogan, R. S. Bauer, Ira Woodhead, Dan Brassil, Nick Pendar @H5.com. DESI III – June 8, 2009

  2. Accurate Human Assessment is No Longer Possible
  • Unstructured Information Retrieval with High P @ High R
  • Avoid Underproduction (high R) with Confidentiality & Usability (high P)
  • The e-Discovery task is one of Sensemaking (iterative learning-loops)
  • Today's Reviews start with Millions of (varied media) docs
  • Humans can no longer be the final assessors of relevance
  • 3 Steps to successful Human-Aided Computer e-Discovery
    • MODEL Relevance as defined by the User
    • MATCH Relevance criteria to Corpus
    • MEASURE and Iterate to Train a Classification module
  • System & Computational Framework for e-Discovery
    • including Massive Scales
    • with Unmatched Accuracy
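
The three-step MODEL / MATCH / MEASURE process above can be read as an iterative training loop. The sketch below is a minimal illustration of that loop, not the HACC implementation; every helper name (elicit_relevance_criteria, assess_documents, measure, refine_criteria) is a hypothetical placeholder.

```python
# Minimal sketch of the MODEL -> MATCH -> MEASURE loop (illustrative only;
# the helper methods below are hypothetical, not part of the HACC system).

def hacc_iteration(corpus, user, proxy, classifier,
                   target_precision=0.9, target_recall=0.8):
    # MODEL: the Proxy co-constructs relevance criteria with the User.
    criteria = proxy.elicit_relevance_criteria(user)

    while True:
        # MATCH: apply the criteria to a document sample and train/update
        # the supervised classification module, then classify the corpus.
        assessments = proxy.assess_documents(corpus.sample(), criteria)
        classifier.train(assessments)
        predictions = classifier.classify(corpus)

        # MEASURE: estimate precision/recall on a judged sample and
        # course-correct (refine criteria, re-assess, retrain) as needed.
        precision, recall = proxy.measure(predictions, corpus.sample())
        if precision >= target_precision and recall >= target_recall:
            return predictions
        criteria = proxy.refine_criteria(criteria, predictions)
```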

  3. #s Require a NEW Assessment Imperative (diagram: Human Cognition)

  4. #s Require a NEW Assessment Imperative (diagram: Human Cognition → Computer-Assisted Human Cognition)

  5. #s Require a NEW Assessment Imperative (diagram: Human Cognition → Computer-Assisted Human Cognition → Human-Assisted Computer Cognition)

  6. System: Human-Assisted Computer Cognition
  • User: External user of the System with an information need
  • Proxy: Internal agent who co-constructs relevance with the User & provides guidance to assessors
  • Classification: Supervised classification module
  e-Discovery is Relevance Sensemaking utilizing an "iterative learning-loop complex": Measure → Iterate → Course-Correct
  http://eprints.ucl.ac.uk/9131/

  7. Performance: Human-Assisted Computer Cognition (chart: HACC vs. TREC 2008 and Kershaw 2005)

  8. Training: Human-Assisted Computer Cognition
  • User Modeling: The process by which criteria of relevance are co-determined by the Proxy with the User, using 4 methods: Use Case, Scope, Nuance, Linguistic Variability
  • Assessment: Machine Learning methods enable relevance determinations for the most ambiguous documents, with measures to ensure maximal document diversity
  • Measurement: Iterating to increase the likelihood of relevance within the system's output (Measure → Iterate → Course-Correct)
  Precision and Recall from sampled corpus tests for 12 topics in 1 case during HACC system training
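
The precision and recall figures above come from human-judged samples of the system's output. Below is a self-contained illustration of how such sample-based measurements (and the F1 reported in the TREC results later) can be computed; this is standard IR arithmetic, not H5's measurement code.

```python
def precision_recall_f1(assessed):
    """Compute precision, recall, and F1 from (predicted_relevant, truly_relevant)
    pairs drawn from a judged sample. Illustrative only."""
    tp = sum(1 for pred, truth in assessed if pred and truth)
    fp = sum(1 for pred, truth in assessed if pred and not truth)
    fn = sum(1 for pred, truth in assessed if not pred and truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 4 sampled documents -> P = 2/3, R = 1.0, F1 = 0.8
print(precision_recall_f1([(True, True), (True, True), (True, False), (False, False)]))
```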

  9. Adaptation to Change: HACC
  • Quality Control
    • Re-review errorful assessments
    • Re-classify
  • Adaptive HACC System can deal with Relevance changes
    • Iterative Learning Loop Complexes: Measure → Iterate → Course-Correct
  • Case Studies
    • TREC: Training documents (63) re-assessed per scope change
    • Human review suffers from inconsistency (40% inter-annotator agreement is typical)
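
The 40% figure above is raw inter-annotator agreement. The snippet below illustrates how raw agreement (and its chance-corrected counterpart, Cohen's kappa) can be computed from two reviewers' R/NR calls; it is illustrative only, not the HACC quality-control code.

```python
def raw_agreement(labels_a, labels_b):
    """Fraction of documents on which two reviewers give the same R/NR call."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement for two binary annotators (Cohen's kappa)."""
    n = len(labels_a)
    observed = raw_agreement(labels_a, labels_b)
    p_a = sum(labels_a) / n          # reviewer A's rate of "R" calls
    p_b = sum(labels_b) / n          # reviewer B's rate of "R" calls
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected) if expected != 1 else 1.0

a = [1, 1, 0, 0, 1, 0, 1, 0]   # reviewer A: 1 = Relevant, 0 = Not Relevant
b = [1, 0, 0, 1, 1, 0, 0, 0]   # reviewer B
print(raw_agreement(a, b), cohens_kappa(a, b))   # 0.625, 0.25
```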

  10. TREC-2008 Legal Track Results (F1)

  11. HACC: Meeting the Processing Challenge
  • Case Studies
    • Weblog processing: 50 million postings, 130 million comments
    • Parsing: 13 million sentences
    • Large Production: 35 million documents, 10 TB
  • RobotArmy: Massive Distributed Processing Solves the Scale Problem
    • Management of massive corpora
    • Map-Reduce auto-parallelization
    • Simple administration, simple operation
    • Distributed operation on commodity hardware
    • Made available by H5 as Open Source: http://github.com/bulletsweetp/robotarmy/
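
RobotArmy itself is the open-source tool named above; the slide does not show its API, so the sketch below only illustrates the map-reduce pattern it auto-parallelizes, using single-machine Python multiprocessing as a stand-in for its distributed operation on commodity hardware.

```python
# Illustration of the map-reduce pattern referenced above (NOT RobotArmy's API).
from collections import Counter
from multiprocessing import Pool

def map_tokens(document):
    """Map step: emit token counts for one document (e.g. a weblog posting)."""
    return Counter(document.lower().split())

def reduce_counts(partial_counts):
    """Reduce step: merge per-document counts into corpus-wide counts."""
    total = Counter()
    for counts in partial_counts:
        total.update(counts)
    return total

if __name__ == "__main__":
    corpus = ["Responsive contract amendment", "Contract draft for review"]
    with Pool() as pool:
        print(reduce_counts(pool.map(map_tokens, corpus)))
```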

  12. HACC: Today's e-Discovery Imperative
  • Solves challenges of scale
  • Enables improved
    • QC
    • Course Correction
    • Measurement
  • Download RobotArmy NOW! http://github.com/bulletsweetp/robotarmy/

  13. Summary: Human-Assisted Computer Cognition
  • Unstructured Information Retrieval with High P @ High R
  • Avoid Underproduction (high R) with Confidentiality & Usability (high P)
  • The e-Discovery task is one of Sensemaking (iterative learning-loops)
  • Accurate Human assessment of relevance is not possible
  • 3 Steps to successful Human-Aided Computer e-Discovery
    • MODEL Relevance as defined by the User
    • MATCH Relevance criteria to Corpus
    • MEASURE and Iterate to Train a Classification module
  • NEW elements for Sensemaking-like Computer Classification
    • User Modeling
    • Expert Proxy
    • Rigorous Measurement, Assessment, and Iteration
    • Scalable, Massively Distributed Computational Architecture

  14. SUPPLEMENTAL

  15. System Architecture
  A system for e-Discovery must replicate & automate 'sensemaking':
  • MODEL: Shared understanding of Relevance with the Senior Litigator
  • MATCH: Corpus characterization that represents the shared understanding of relevance
    • Systematic, Replicable Document Assessments
    • Relevance Feedback
  • MEASURE: A mechanism for iterating and correcting the system
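
The Relevance Feedback step amounts to folding newly assessed documents back into the training set and re-fitting the supervised classifier before re-ranking the corpus. Below is a minimal sketch, assuming a TF-IDF plus logistic-regression classifier; the slides do not specify which model or features the HACC classification module actually uses.

```python
# Minimal relevance-feedback sketch (assumes scikit-learn; illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def retrain_with_feedback(train_texts, train_labels, new_texts, new_labels, corpus_texts):
    """Fold newly assessed documents into the training set, re-fit, re-rank the corpus."""
    texts = train_texts + new_texts
    labels = train_labels + new_labels
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(texts)
    model = LogisticRegression(max_iter=1000)
    model.fit(X, labels)
    # Score every corpus document by its predicted probability of relevance.
    scores = model.predict_proba(vectorizer.transform(corpus_texts))[:, 1]
    return sorted(zip(corpus_texts, scores), key=lambda pair: -pair[1])
```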

  16. Assessment by Intelligence Analysts
  • Sensemaking: Iterative development of a mental model from the schema that fits the evidence (Top-Down, Sr. Analyst)
  • Foraging: Reading & extracting information into schemas (Bottom-Up, Jr. Analyst)
  L. Takayama, S. K. Card, "Tracing the Microstructure of Sensemaking," Sensemaking Workshop @ CHI 2008, Florence, Italy, April 6, 2008.
  P. Pirolli & S. Card, "Sensemaking Processes of Intelligence Analysts and Possible Leverage Points as Identified Through Cognitive Task Analysis," in Proceedings of the 2005 International Conference on Intelligence Analysis, McLean, VA.

  17. User Modeling Questionnaire (example)

  18. Assessment
  • Detailed Assessment Guide minimizes misinterpretation
  • Mismatches are resolved by the Proxy for consistency
  • Initial Assessment: R/NR
  • Relevant Passage Identification from Rs
  • Cross Check: NRs double-checked
  • Other Quality Controls
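
One way to represent the workflow above (initial R/NR call, passage identification for R documents, cross-check of NR documents, Proxy resolution of mismatches) is sketched below; the data model and field names are hypothetical, not taken from the HACC system.

```python
# Hypothetical representation of the assessment workflow (field names illustrative).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Assessment:
    doc_id: str
    initial_call: str                                    # "R" or "NR"
    passages: List[str] = field(default_factory=list)    # relevant passages, for R docs
    cross_check_call: Optional[str] = None                # second look at NR docs

def needs_proxy_resolution(a: Assessment) -> bool:
    """Mismatched calls are escalated to the Proxy for a consistent final decision."""
    return a.cross_check_call is not None and a.cross_check_call != a.initial_call

docs = [Assessment("D1", "NR", cross_check_call="R"), Assessment("D2", "R", passages=["..."])]
print([a.doc_id for a in docs if needs_proxy_resolution(a)])   # -> ['D1']
```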

  19. 3 Technical Challenges → 3 Case Studies
  • Unstructured Information Retrieval in the Legal Context
    • The e-Discovery Task is one of Sensemaking (HCI research)
    • Iterative learning-loop complex is central to e-Discovery
  • MODEL Relevance - Case 1: Legal Strategy
    • A Senior Litigator defines retrieval goals (top down)
    • Studies of the intelligence community: Sr. vs. Jr. analyst IR
  • MATCH to Corpus - Case 2: Document Retention
    • Document Assessment (bottom up)
    • Systematic and replicable
  • MEASURE and Iterate - Case 3: Litigation Production
    • Relevance feedback to classification model
    • Deterministic and reproducible outcome

  20. MODEL: Case 1 - Legal Strategy. Corpus Goal: High P

  21. MATCH: Case 2 - Document Retention. Corpus Goal: High R

  22. MEASURE: Case 3 - Litigation Production. Corpus Goal: High P and High R

  23. Summary Treating the information retrieval task as one of classification has been shown to be the most effective way to achieve high performance on a particular task. In this paper, we describe a hybrid human-computer system that addresses the problem of achieving high performance on IR tasks by systematically and replicably creating large numbers of document assessments. We demonstrate how User Modeling, Document Assessment and Measurement combine to provide a shared understanding of relevance, a means for representing that understanding to an automated system, and a mechanism for iterating and correcting such a system so as to converge on a desired result.
