
Presentation Transcript


  1. Hidden Markov Models That Use Predicted Local Structure for Fold Recognition: Alphabets of Backbone Geometry
  R Karchin, M Cline, Y Mandel-Gutfreund, K Karplus

  2. Problem • The parent fold may have low sequence similarity to the query • Most often, the query is sequence-only

  3. Proposed Solution • Try to predict local structure of target • Use local structure prediction in fold recognition

  4. Simple SAM Overview [diagram: query sequence → search → threshold]

  5. Profile HMM [diagram: model states between Start and Stop]

  6. Two-Track Profile HMM [diagram: states between Start and Stop, each emitting an amino-acid (AA) symbol and a local-structure (Str) symbol]

  7. SAM with Local Structure [diagram: query sequence plus predicted local structure scored against the two-track HMM]

  8. Scoring • Profile HMM: P(residue|state) = θ(AA, state) • Two-track profile HMM: P(residue|state) = θ(AA, state) · Φ(local, state) • SAM two-track profile HMM: P(residue|state) = θ(AA, state) · Φ(local, state)^0.3
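A minimal sketch of how the two emission tracks combine per state. The function and table names are illustrative, not SAM's actual code; the 0.3 exponent is the down-weighting of the noisier predicted-structure track from the slide above.

```python
import math

# Sketch of the per-state emission score in a two-track profile HMM.
# theta: P(amino acid | state); phi: P(local-structure letter | state).
# Names and data layout are illustrative, not SAM's actual API.

def emission_log_prob(aa, local, theta, phi, structure_weight=0.3):
    """Log emission probability of one (amino acid, local structure) pair.

    structure_weight = 1.0 gives the plain two-track product
    theta(AA, state) * phi(local, state); SAM raises phi to the 0.3 power
    to down-weight the predicted-structure track.
    """
    return math.log(theta[aa]) + structure_weight * math.log(phi[local])

# Toy match state with amino-acid and local-structure emission tables.
theta = {"A": 0.4, "G": 0.3, "V": 0.3}   # amino-acid track
phi = {"H": 0.7, "E": 0.2, "L": 0.1}     # local-structure track (e.g. an EHL-style alphabet)

print(emission_log_prob("A", "H", theta, phi))        # SAM weighting (0.3)
print(emission_log_prob("A", "H", theta, phi, 1.0))   # unweighted two-track product
```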

  9. Local Structure Alphabets

  10. Predicting Local Structure of the Query • The alphabets are very dissimilar • Predict with a neural network • Input: a window of the multiple alignment • Output: structure probabilities for a single residue
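A toy sketch of the input/output shape of such a window-based predictor: a profile (per-column amino-acid frequencies) over an alignment window centred on one residue goes in, a probability distribution over the structure alphabet for that residue comes out. The window size, encoding, and single-layer network here are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

WINDOW = 7                   # alignment-profile columns fed to the network (assumed)
N_AA = 20                    # amino-acid frequencies per column
ALPHABET = ["E", "H", "L"]   # e.g. an EHL-style local-structure alphabet

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(len(ALPHABET), WINDOW * N_AA))  # untrained toy weights
b = np.zeros(len(ALPHABET))

def predict_structure(profile_window):
    """profile_window: (WINDOW, N_AA) array of column frequencies from the alignment."""
    x = profile_window.reshape(-1)
    logits = W @ x + b
    p = np.exp(logits - logits.max())          # softmax over the structure alphabet
    return dict(zip(ALPHABET, p / p.sum()))

# Example: a uniform profile window for the residue in the centre column.
window = np.full((WINDOW, N_AA), 1.0 / N_AA)
print(predict_structure(window))
```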

  11. Good Structure Alphabets • Intuitively, we want • Predictability • Conservation • Better fold recognition • Better alignment

  12. Predictability • Evaluated on • Exact predictions (QN) • Overlap of structure segments (SOV) • Information gain • Winners: • *-EHL for precision • STR and PB for most information
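A sketch of two of these predictability measures: Q_N (the fraction of residues whose predicted letter matches the true one) and an information-gain style score, computed here as the mutual information between predicted and true letters. The exact definitions in the paper may differ, and SOV (the segment-overlap score) is omitted.

```python
from collections import Counter
from math import log2

def q_n(predicted, true):
    """Fraction of residues predicted exactly right."""
    return sum(p == t for p, t in zip(predicted, true)) / len(true)

def information_gain(predicted, true):
    """Mutual information between predicted and true letters, in bits per residue."""
    n = len(true)
    joint = Counter(zip(predicted, true))
    p_pred = Counter(predicted)
    p_true = Counter(true)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * log2(pxy / ((p_pred[x] / n) * (p_true[y] / n)))
    return mi

pred = list("HHHEELLLHH")
true = list("HHHEEELLHH")
print(q_n(pred, true), information_gain(pred, true))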

  13. Conservation • Used FSSP structural alignments • Calculated mutual information between the local-structure strings of aligned proteins • Used only alignments with low sequence similarity • Winners: • STR, PB
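A sketch of the conservation measure: mutual information between the local-structure strings of two structurally aligned proteins (e.g. an FSSP alignment pair), skipping gapped columns. The restriction to low-sequence-identity pairs is assumed to have been applied before calling this function.

```python
from collections import Counter
from math import log2

def structure_mutual_information(str_a, str_b, gap="-"):
    """Mutual information (bits per aligned position) between two aligned structure strings."""
    pairs = [(a, b) for a, b in zip(str_a, str_b) if a != gap and b != gap]
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        mi += (c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
    return mi   # higher = the alphabet is better conserved across remote homologs

# Toy aligned local-structure strings for two remote homologs.
print(structure_mutual_information("HHHEEL-LHH", "HHHEELLL-H"))
```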

  14. Fold Recognition • Winners: all alphabets except PB (STRIDE-EHL leading, STR lagging)

  15. Alignment • Compared against DALI and CE reference alignments • Evaluated with a shift score • 1.0 = perfect match • -0.2 = worst possible (adjustable floor) • Winners: • STR (with one odd result for STRIDE) • All alphabets improve alignment
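A toy rendering of a shift-style alignment score: each residue pair in the test alignment is scored by how far it sits from its position in the reference (DALI or CE) alignment. The per-pair form (1 + eps) / (1 + |shift|) - eps is an assumption here, as is the handling of unmatched pairs; the paper's exact normalization may differ. With eps = 0.2, a perfectly reproduced pair scores 1.0 and badly shifted pairs approach the adjustable floor of -0.2.

```python
def shift_score(test_pairs, ref_pairs, eps=0.2):
    """test_pairs / ref_pairs: dicts mapping query position -> template position."""
    scores = []
    for q, t in test_pairs.items():
        if q in ref_pairs:
            shift = abs(t - ref_pairs[q])
            scores.append((1 + eps) / (1 + shift) - eps)   # 1.0 when shift == 0
        else:
            scores.append(-eps)                            # aligned in test, not in reference
    return sum(scores) / len(scores) if scores else 0.0

ref = {1: 10, 2: 11, 3: 12, 4: 13}
test = {1: 10, 2: 11, 3: 14, 4: 13}   # residue 3 shifted by two positions
print(shift_score(test, ref))
```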

  16. Conclusions • Local structure improves: • Fold recognition • Alignment when there’s little sequence similarity • Alignment and fold recognition are very different problems
