
Reporting structures for Image Cytometry: Context and Challenges


Presentation Transcript


  1. Reporting structures for Image Cytometry: Context and Challenges
  Chris Taylor, EMBL-EBI & NEBC (chris.taylor@ebi.ac.uk)
  MIBBI [www.mibbi.org]
  HUPO Proteomics Standards Initiative [psidev.sf.net]
  Research Information Network [www.rin.ac.uk]

  2. Mechanisms of scientific advance

  3. Well-oiled cogs meshing perfectly (would be nice)
  “Publicly-funded research data are a public good, produced in the public interest.”
  “Publicly-funded research data should be openly available to the maximum extent possible.”
  How well are things working?
  • Cue the Tower of Babel analogy…
  • The situation is improving with respect to standards
  • But there are few tools, and fewer carrots (though some sticks)
  Why do we care about that?
  • Data exchange
  • Comprehensibility (and quality) of work
  • Scope for reuse (parallel or orthogonal)

  4. Increased efficiency
  • Methods remain properly associated with the results generated
  • Data sets generated with specific techniques/materials can be retrieved from repositories (or excluded from results sets)
  • No need to repeatedly construct sets of contextualizing information
  • Facilitates the sharing of data with collaborators
  • Avoids the risk of loss of information through staff turnover
  • Enables time-efficient handover of projects
  • For industry specifically (in the light of 21 CFR Part 11):
    • The relevance of data can be assessed through summaries, without wasting time wading through full data sets in diverse proprietary formats (‘business intelligence’)
    • Public data can be leveraged as ‘commercial intelligence’

  5. Enhanced confidence in data
  • Enables fully-informed assessment of results (methods used etc.)
  • Supports the assessment of results that may have been generated months or even years ago (e.g. for referees or regulators)
  • Facilitates better-informed comparisons of data sets
  • Increases the likelihood of discovering the factors (controlled and uncontrolled) that might differentiate those data sets
  • Supports the discovery of sources of systematic or random error, by correlating errors with metadata features such as the date or the operator concerned
  • Provides sufficient information to support the design of appropriate parallel or orthogonal studies to confirm or refute a given result

  6. Added value, tool development
  • Re-using existing data sets for a purpose significantly different to that for which the data were generated
  • Building aggregate data sets containing (similar) data from different sources (including standards-compliant public repositories)
  • Integrating data from different domains
    • For example, correlating changes in mRNA abundances, protein turnover and metabolic fluxes in response to a stimulus
  • Design requirements become both explicit and stable
    • MIAPE modules as driving use cases (tools, formats, CV, DBs)
  • Promotes the development of sophisticated analysis algorithms
  • Presentation of information can be ‘tuned’ appropriately
    • Makes for a more uniform experience

  7. Credit where credit’s due
  • Data sharing is more or less a given now, and tools are emerging
  • Lots of sticks, but sticks only get you the bare minimum
  • How to get the best out of data generators?
    • Need standards- and user-friendly tools, and meaningful credit
    • Central registries of data sets that can record reuse
  • Well-presented, detailed papers get cited more frequently; the same principle should apply to data sets
    • ISNIs for people, DOIs for data: http://www.datacite.org/
  • Side-benefits and challenges
    • Would also clear up problems around paper authorship
    • Would enable other kinds of credit (training, curation, etc.)
    • Community policing: researchers ‘own’ their credit portfolio (an enforcement body would be useful; more likely through review)
    • Problem of ‘micro data sets’ and legacy data

  8. ProteoRED’s MIAPE satisfaction survey
  • Spanish multi-site collaboration: provision of proteomics services
  • MIAPE customer satisfaction survey (compiled November 2008)
    • http://www.proteored.org/MIAPE_Survey_Results_Nov08.html
  • Responses from 31 proteomics experts representing 17 labs
  • Yes: 95%; No: 5%

  9. So what (/why) is a standards body again..?
  Consider the three main ‘omics standards bodies’
  • What defines a (candidate) standards-generating body?
    • “A beer and an airline” (Zappa)
    • Requirements, formats, vocabulary
    • Regular full-blown open-attendance meetings, lists, etc.
    • PSI (proteomics), GSC (genomics), MGED (transcriptomics)
  Hugely dependent on their respective communities
  • Requirements (What are we doing and why are we doing it?)
  • Development (By the people, for the people. Mostly.)
  • Testing (No it isn’t finished, but yes I’d like you to use it…)
  • Uptake, by all of the various kinds of stakeholder:
    • Publishers, funders, vendors, tool/database developers
    • The user community (capture, store, search, analyse)

  10. Ingredients for MI pie
  Domain specialists & IT types (initial drafts, evolution)
  Journals
  • The real issue for any MI project is getting enough people to comment on what you have (this distinguishes a toy project from something to be taken seriously: community buy-in)
  • Having journals help in garnering reviews is great (editorials, web site links, even mail shots). Their motive, of course, is that fuller reporting = better content = higher citation index.
  Funders
  • MI projects can claim to be slightly outside of ‘normal’ science; they may form funding policy components (arguments about maximum value)
  • Funders therefore have a motive (similar to journals) to ensure that MI guidelines, which they may endorse down the line, are representative and mature
  • They can help by allocating slots at (appropriate) meetings of their award holders for you to show your stuff. Things like that.

  11. Ingredients for MI pie (continued)
  Vendors
  • The cost of MIs in person-hours will be the major objection
  • Vendors can implement parameter export to an appropriate file format, ideally using some helpful CV (somebody else’s problem)
  • Vendors also have engineers (and some sales staff) who really know their kit and make for great contributors/reviewers
  • For some standards bodies (like PSI, MGED) their sponsorship has been very helpful too (believe it or not, it would seem possible to monetise a standards body)
  Food / pharma
  • Already used to better, if rarely perfect, data capture and management; for example, 21 CFR Part 11 (MI = exec summary…)
  Trainers
  • There is a small army of individuals training scientists, especially in relation to IT (the EBI does a lot of this, but I mean commercial training providers) → ‘resource packs’

  12. Modelling the biosciences
  Biologically-delineated views of the world: A: plant biology; B: epidemiology; C: microbiology; …
  Generic features (‘common core’):
  • Description of source biomaterial
  • Experimental design components
  Technologically-delineated views of the world: A: transcriptomics; B: proteomics; C: metabolomics; …
  [Diagram: technology platforms (MS, gels, NMR, arrays, columns, FTIR, scanning, and combinations thereof)]
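  The modular structure this slide describes (one generic common core, composed with biologically- and technologically-delineated views) can be made concrete with a short sketch. All class and field names below are illustrative assumptions, not any project's actual schema:

```python
# A minimal sketch of the 'common core plus modules' modelling idea;
# every name here is hypothetical, not an actual MIBBI/checklist schema.
from dataclasses import dataclass, field

@dataclass
class CommonCore:
    """Generic features shared by every study description."""
    source_biomaterial: str          # description of source biomaterial
    experimental_design: list[str]   # experimental design components

@dataclass
class TechnologyView:
    """A technologically-delineated module (e.g. proteomics)."""
    platform: str                    # e.g. "MS", "gels", "NMR", "arrays"
    parameters: dict[str, str] = field(default_factory=dict)

@dataclass
class StudyDescription:
    """A report = common core + a biological view + technology views."""
    core: CommonCore
    biological_domain: str           # e.g. "plant biology", "epidemiology"
    technologies: list[TechnologyView] = field(default_factory=list)

# For example, the 'eco-toxico-proteomics' study from slide 14 composes
# one core description with two technology modules:
study = StudyDescription(
    core=CommonCore("mussel gill tissue", ["dose series", "controls"]),
    biological_domain="ecotoxicology",
    technologies=[TechnologyView("MS"), TechnologyView("gels")],
)
```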

  13. Modelling the biosciences (slightly differently)

  14. Reporting guidelines: a case in point
  • MIAME, MIAPE, MIAPA, MIACA, MIARE, MIFACE, MISFISHIE, MIGS, MIMIx, MIQAS, MIRIAM, (MIAFGE, MIAO), My Goodness…
  • ‘MI’ checklists are usually developed independently, by groups working within particular biological or technological domains
    • It is difficult to obtain an overview of the full range of checklists
    • Tracking the evolution of a single checklist is non-trivial
  • Checklists are inevitably partially redundant one against another
    • Where they overlap, arbitrary decisions on wording and sub-structuring make integration difficult
  • Significant difficulties for those who routinely combine information from multiple biological domains and technology platforms
    • Example: an investigation looking at the impact of toxins on a sentinel species using proteomics (‘eco-toxico-proteomics’)
    • What reporting standard(s) should they be using?

  15. The MIBBI Project (mibbi.org)
  • International collaboration between communities developing ‘Minimum Information’ (MI) checklists
  • Two distinct goals (Portal and Foundry)
    • Raise awareness of various minimum reporting specifications
    • Promote gradual integration of checklists
  • Lots of enthusiasm (drafters, users, funders, journals)
  • 31 projects committed (to the portal) to date, including:
    • MIGS, MINSEQE & MINIMESS (genomics, sequencing)
    • MIAME (μarrays), MIAPE (proteomics), CIMR (metabolomics)
    • MIGen & MIQAS (genotyping), MIARE (RNAi), MISFISHIE (in situ)

  16. Nature Biotechnol 26(8), 889–896 (2008) http://dx.doi.org/10.1038/nbt.1411

  17. The MIBBI Project (www.mibbi.org)

  18. The MIBBI Project (www.mibbi.org)

  19. The MIBBI Project (www.mibbi.org) Interaction graph for projects (line thickness & colour saturation show similarity)
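  One way to read that graph: each edge weight is a similarity score between the content of two checklists. A minimal sketch of how such scores could be computed, using Jaccard overlap of item sets as an assumed stand-in for MIBBI's actual measure (the checklists and items below are toy data):

```python
# Pairwise checklist similarity via Jaccard overlap of their item sets.
# The metric and the toy data are assumptions, not MIBBI's real method.
from itertools import combinations

checklists = {
    "MIAME":     {"design", "samples", "hybridisation", "array", "data processing"},
    "MIAPE":     {"design", "samples", "instrument", "data processing"},
    "MIFlowCyt": {"samples", "instrument", "fluorochromes", "gating"},
}

for (a, items_a), (b, items_b) in combinations(checklists.items(), 2):
    jaccard = len(items_a & items_b) / len(items_a | items_b)
    # edge weight drives line thickness / colour saturation in the graph
    print(f"{a} -- {b}: {jaccard:.2f}")
```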

  20. The MIBBI Project (www.mibbi.org)

  21. ‘Pedro’ tool → XML → (via XSLT) wiki code (etc.)
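  A minimal sketch of the XML-to-wiki step in that pipeline, using lxml to apply an XSLT stylesheet. The element and attribute names ("checklist", "item", "name") are hypothetical stand-ins for whatever the Pedro tool actually emits:

```python
# Transform a (hypothetical) checklist XML document into wiki markup
# via XSLT, mirroring the Pedro tool -> XML -> wiki pipeline above.
from lxml import etree

# Hypothetical stylesheet: renders each checklist item as a wiki bullet.
xslt_root = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/checklist">
    <xsl:text>== </xsl:text><xsl:value-of select="@name"/>
    <xsl:text> ==&#10;</xsl:text>
    <xsl:for-each select="item">
      <xsl:text>* </xsl:text><xsl:value-of select="."/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>""")

doc = etree.XML(b"""\
<checklist name="MIAPE-GE">
  <item>Gel matrix and percentage</item>
  <item>Staining protocol</item>
</checklist>""")

transform = etree.XSLT(xslt_root)
print(str(transform(doc)))  # wiki markup, ready to paste into a portal page
```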

  22. MICheckout: Supporting Users

  23. Minimum Information guidelines: progress on uptake
  • MIAME is the earliest of the ‘new generation’ of guidelines
    • Supported by ArrayExpress/GEO
    • Required by many journals
  • CONSORT (www.consort-statement.org & www.equator-network.org)
    • Required by many journals (N.B., no databases per se)
  • Other guidelines recommended for consideration
    • Individually (e.g., MIMIx, MIFlowCyt [NPG])
    • Via MIBBI (BMC, Science [soon], OMICS, others coming too)
  • Many funders recommend use of ‘accepted’ community standards
  • But… uptake is closer to nil for projects lacking supporting resources
    • Case in point: MIAPE (no usage until a web tool appeared)

  24. ICS: overlapping guidelines registered at the MIBBI Portal
  • The study sample (potentially described in header metadata)
    • CIMR (human samples, cell culture)
    • MIFlowCyt (cell counting/sorting)
    • MIACA, MIATA (cell-based assays)
  • The assay
    • MIACA (cell-based assays)
    • Some general overlap (software, processing)
  • Image analysis
    • Some general overlap (image data [MIAPE] & statistics)

  25. Tools? OBO

  26. From theory to practice: tools for the community
  Java standalone components, for local installation, that can work independently or as a unified system
  [Diagram: experimentalist workflow (experiments; download similar studies)]

  27. Example of guiding the experimentalist to search and select a term from the EnvO ontology, to describe the ‘habitat’ of a sample. Ontologies are accessed in real time via the Ontology Lookup Service and BioPortal.
  [Slide credit: Susanna-Assunta Sansone, ICSB, 22–28 August 2008, www.ebi.ac.uk/net-project]
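  A minimal sketch of such a real-time term lookup. It assumes the present-day OLS REST search endpoint at the EBI (the service shown on the 2008 slide predates this API) and the third-party `requests` library:

```python
# Query the EBI Ontology Lookup Service for candidate EnvO 'habitat'
# terms; endpoint and response layout assume the current OLS REST API.
import requests

def search_envo(query: str, rows: int = 5):
    """Search the EnvO ontology for terms matching the query string."""
    resp = requests.get(
        "https://www.ebi.ac.uk/ols4/api/search",
        params={"q": query, "ontology": "envo", "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    # Each hit carries a label, a stable IRI and (usually) an OBO id.
    return [
        (doc.get("obo_id"), doc["label"], doc["iri"])
        for doc in resp.json()["response"]["docs"]
    ]

if __name__ == "__main__":
    for obo_id, label, iri in search_envo("temperate forest"):
        print(f"{obo_id}\t{label}\t{iri}")
```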

  28. Spreadsheet functionalities, including: move, add, copy, paste, undo, redo and right-click options

  29. Groups of samples are colour-coded

  30. Public instance deployed @ EBI
