

  1. August 2012 Update, August 9, 2012. Andrew J. Buckler, MS, Principal Investigator, QI-Bench. With funding support provided by the National Institute of Standards and Technology.

  2. Agenda for Today
  • Second development iteration (set to start September 1)

  3. Second development iteration: content and priorities
  • Theoretical Base: Domain Specific Language; Executable Specifications; Computational Model; Enterprise vocabulary / data service registry
  • Functionality: End-to-end Specify -> Package workflows; Curation pipeline workflows; DICOM (segmentation objects, query/retrieve, structured reporting, worklist for scripted reader studies); Improved query / search tools (including the link between Formulate and Execute); Continued expansion of the Analyze tool box
  • Test Beds: Further analysis of the 1187/4140, 1C, and other data sets using LSTK and/or the API to other algorithms; Support for more 3A-like challenges; Integration of detection into the pipeline; Meta-analysis of reported results using Analyze; False-positive reduction in lung cancer screening; Other biomarkers

  4. Unifying Goal of 2nd Development Iteration
  • Perform end-to-end characterization of vCT, including meta-analysis of the literature, incorporation of QIBA results, and "scaling up" using automated detection and a reference volumetry method.
  • Integrate characterization across the QIBA, FDA, LIDC/RIDER, Give-a-scan, and Open Science data sets (e.g., biopsy cases) through analysis modules, rolling the results up to an i_ file in a zip archive.
  • Specifically, have people like Jovanna, Ganesh, and Adele use it (as opposed to only Gary, Mike/Patrick, and Kjell).

  5. Execute
  • Batchmake UI generalized to run on any folder, with a GUI for selecting the interfaced algorithm: a menu showing interfaced algorithms, including at least Adele's detection and LSTK, but also, for example, DCE-MRI for QIBA/RIC.
  • Adele's code wrapped by Alden so as to integrate with both ClearCanvas and batchmake (see the sketch following this slide).
  • ClearCanvas RIS worklist and Midas DICOM Q/R to support semi-automated workflows from batchmake scripts.
  • The scripted localization, and the use of ClearCanvas for detection, need to support the type of workflows and tools already used by Ganesh so that it all integrates seamlessly.
  • In ClearCanvas, the app should initially display localizations or segmentations that already exist (if any, and regardless of how they got there), and then provide modify, add, and delete functions. Manual and automated workflows can then be used together.
  • The customized ClearCanvas is a component of Execute, with an SDD.
  • DICOM segmentation objects, and formal support for AIM 4.0 (which meshes well with the triple store design from Specify).
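
  The wrapping described above amounts to putting each interfaced algorithm behind a uniform command-line interface so a batch script can sweep it across a folder of series. The Python sketch below illustrates the idea only; the executable name, its flags, and the folder layout are assumptions for illustration, not the actual QI-Bench interfaces.

      # Hypothetical batch wrapper: invoke an interfaced algorithm once per
      # series sub-folder, the way a batchmake-driven pipeline might call it.
      import argparse
      import subprocess
      from pathlib import Path

      def run_on_folder(input_root: Path, output_root: Path, algorithm_cmd: str) -> None:
          """Invoke the wrapped algorithm on every series folder under input_root."""
          for series_dir in sorted(p for p in input_root.iterdir() if p.is_dir()):
              out_dir = output_root / series_dir.name
              out_dir.mkdir(parents=True, exist_ok=True)
              # The wrapped executable is assumed to accept --input/--output flags.
              subprocess.run(
                  [algorithm_cmd, "--input", str(series_dir), "--output", str(out_dir)],
                  check=True,
              )

      if __name__ == "__main__":
          parser = argparse.ArgumentParser(description="Batch wrapper for an interfaced algorithm")
          parser.add_argument("input_root", type=Path)
          parser.add_argument("output_root", type=Path)
          parser.add_argument("--algorithm", default="detect_lesions", help="hypothetical executable name")
          args = parser.parse_args()
          run_on_folder(args.input_root, args.output_root, args.algorithm)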

  6. Analyze
  • Continued enhancement of, and additions to, the core modules.
  • Document the core analysis modules as an Analyze SDD, indicating that they may be called either directly from Iterate or indirectly through MVT (see the sketch following this slide).
  • Improve the Analyze link to point to something more interesting than the note about using MVT on bartok.
  • Update the current text to add a phrase such as "or, call analysis modules from workflows in Iterate", with a button that vectors there.
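
  To make the "callable directly from Iterate or through MVT" point concrete, here is a minimal sketch of what a core analysis module's interface could look like, assuming plain functions over paired measurements. The statistic, function name, and numbers are illustrative, not the actual Analyze modules.

      # Illustrative analysis module: bias and 95% limits of agreement between
      # measured and reference volumes, exposed as a plain function so either a
      # workflow step or a validation tool can call it.
      from statistics import mean, stdev

      def volume_agreement(measured, reference):
          """Return bias and 95% limits of agreement for paired volume measurements."""
          if len(measured) != len(reference) or len(measured) < 2:
              raise ValueError("need two equal-length series with at least two values")
          diffs = [m - r for m, r in zip(measured, reference)]
          bias = mean(diffs)
          sd = stdev(diffs)
          return {"bias": bias, "loa_lower": bias - 1.96 * sd, "loa_upper": bias + 1.96 * sd}

      # Example call with made-up volumes (mm^3):
      print(volume_agreement([510.0, 498.2, 730.5], [500.0, 505.0, 725.0]))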

  7. Specify
  • Create a triple store a la the worked example from earlier this year, linked to the i_ and s_ levels as the example indicated (see the sketch following this slide).
  • Connect Specify to Formulate by automatically populating the query formulation form from the triples created by Specify. The form can thus be filled manually or by "import" from Specify, allowing seamless integration between manual and ontology-assisted workflows.
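
  As a rough sketch of the triple-store idea, the Python/rdflib fragment below records an i_ level investigation linked to an s_ level study. The namespace, class names, and predicates are placeholders rather than the project's actual vocabulary.

      # Minimal triple-store sketch with rdflib; identifiers are placeholders.
      from rdflib import Graph, Literal, Namespace, RDF

      QIB = Namespace("http://example.org/qibench#")

      g = Graph()
      g.bind("qib", QIB)

      investigation = QIB["i_vCT_characterization"]   # i_ level entry
      study = QIB["s_QIBA_3A"]                        # s_ level entry

      g.add((investigation, RDF.type, QIB.Investigation))
      g.add((study, RDF.type, QIB.Study))
      g.add((investigation, QIB.hasStudy, study))
      g.add((study, QIB.measures, Literal("tumor volume change")))

      # Serialize so Formulate can load the triples and pre-populate its query form.
      print(g.serialize(format="turtle"))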

  8. Formulate
  • Apply the query generation of caB2B to query not only NBIA but also a list of Midas instances, e.g., QI-Bench instances, Give-a-scan, Open Science collections (e.g., biopsy cases), QIBA/RIC, NLM, and others, through a source listing that extends the current one.
  • As in the worked example, not only does Formulate initialize the query form from the triples out of Specify, it also represents its output as triples linking to the discovered data. These triples then drive a workflow that actually brings the data from the linked external archives into the RDSM. It should do so while giving the user control over whether data is added or merged, the latter with sub-options to duplicate or share.
  • This function of Formulate adds the triples to the biomarker db (see the sketch following this slide):
    • "discovered data" triples for each discovered item
    • additional triples with the discovered item as subject, the predicate "storedAs", and the uuid in the RDSM as object, for data actually imported
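
  Continuing the same sketch (same hypothetical namespace as above), the fragment below shows how a discovered-data triple and a follow-on "storedAs" triple carrying the RDSM uuid might be recorded once an item is actually imported. The URL, predicate spellings, and uuid handling are assumptions.

      # Sketch of Formulate's output triples: mark a discovered item, then link
      # it to the RDSM uuid once the data has been imported.
      import uuid
      from rdflib import Graph, Literal, Namespace, URIRef

      QIB = Namespace("http://example.org/qibench#")

      g = Graph()
      g.bind("qib", QIB)

      # A data set returned by the generated query (URL is illustrative).
      discovered = URIRef("http://example.org/collections/give-a-scan/case-0042")
      g.add((discovered, QIB.discoveredBy, QIB["query_001"]))

      # After the series is pulled into the RDSM, record where it was stored.
      rdsm_uuid = Literal(str(uuid.uuid4()))
      g.add((discovered, QIB.storedAs, rdsm_uuid))

      print(g.serialize(format="turtle"))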

  9. Package
  • Create the first Package, as a simple workflow to extract ISA files hierarchically from a starting folder in the RDSM into a zip file (see the sketch following this slide).
  • Include the i_ file too, as a serialized version of the triple store.
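
  A minimal sketch of that packaging workflow, assuming ISA-style file names (i_*, s_*, a_*) under a starting folder; the paths and naming convention are illustrative, not the actual RDSM layout.

      # Walk a starting folder, collect ISA-style files, and zip them while
      # preserving the folder hierarchy.
      import zipfile
      from pathlib import Path

      def package_isa(start_folder: str, archive_path: str) -> None:
          root = Path(start_folder)
          with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
              for path in root.rglob("*"):
                  if path.is_file() and path.name.startswith(("i_", "s_", "a_")):
                      # Store entries relative to the start folder to keep the hierarchy.
                      zf.write(path, arcname=str(path.relative_to(root)))

      # Example: package_isa("/data/rdsm/vCT_characterization", "vCT_package.zip")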

  10. Project Management
  • First, get the traceability matrix right.
  • Then, prioritize the list of bug fixes from iteration 1's V&V.
  • Then, propagate down to changes at the SDD level.
  • Then, form "workpackages" for each enhancement, and link the workpackages to Jira issues.
  • Assign workpackages to engineers for estimation. Ensure that estimates explicitly address documentation and testing.
  • Frame engineering assignments in terms of updates to the Execute ASD, AAS, and SDDs. This does two things: it reinforces the process, including documentation and testing, while simplifying communication with engineering for efficiency with constrained resources.
  • Release workpackages for implementation based on priorities and secured funding. Funding sources include the NIST grant budget, SBIR(s), IR&D, and other funded projects, say from RSNA for QIBA/RIC, NCI/CIP, etc.
  • This way, the full vision is in place for context and integration, but we constrain commitment by funding.
  • Track both estimates and actuals for continuous process improvement.

  11.

  12. Value proposition of QI-Bench
  • Efficiently collect and exploit evidence establishing standards for optimized quantitative imaging:
    • Users want confidence in the read-outs
    • Pharma wants to use them as endpoints
    • Device/SW companies want to market products that produce them without huge costs
    • The public wants to trust the decisions that they contribute to
  • By providing a verification framework to develop precompetitive specifications and support test harnesses to curate and utilize reference data
  • Doing so as an accessible and open resource facilitates collaboration among diverse stakeholders

  13. Summary: QI-Bench Contributions
  • We make it practical to increase the magnitude of data for increased statistical significance.
  • We provide practical means to grapple with massive data sets.
  • We address the problem of efficient use of resources to assess limits of generalizability.
  • We make formal specification accessible to diverse groups of experts that are not skilled or interested in knowledge engineering.
  • We map both medical as well as technical domain expertise into representations well suited to emerging capabilities of the semantic web.
  • We enable a mechanism to assess compliance with standards or requirements within specific contexts for use.
  • We take a "toolbox" approach to statistical analysis.
  • We provide the capability in a manner which is accessible to varying levels of collaborative models, from individual companies or institutions, to larger consortia or public-private partnerships, to fully open public access.

  14. QI-Bench Structure / Acknowledgements
  • Prime: BBMSC (Andrew Buckler, Gary Wernsing, Mike Sperling, Matt Ouellette, Kjell Johnson, Jovanna Danagoulian)
  • Co-Investigators
    • Kitware (Rick Avila, Patrick Reynolds, Julien Jomier, Mike Grauer)
    • Stanford (David Paik)
  • Financial support as well as technical content: NIST (Mary Brady, Alden Dima, John Lu)
  • Collaborators / Colleagues / Idea Contributors
    • Georgetown (Baris Suzek)
    • FDA (Nick Petrick, Marios Gavrielides)
    • UMD (Eliot Siegel, Joe Chen, Ganesh Saiprasad, Yelena Yesha)
    • Northwestern (Pat Mongkolwat)
    • UCLA (Grace Kim)
    • VUmc (Otto Hoekstra)
  • Industry
    • Pharma: Novartis (Stefan Baumann), Merck (Richard Baumgartner)
    • Device/Software: Definiens, Median, Intio, GE, Siemens, Mevis, Claron Technologies, …
  • Coordinating Programs
    • RSNA QIBA (e.g., Dan Sullivan, Binsheng Zhao)
    • Under consideration: CTMM TraIT (Andre Dekker, Jeroen Belien)
