
A Computational Model of Accelerated Future Learning through Feature Recognition






Presentation Transcript


  1. Building an intelligent agent that simulates human-level learning using machine learning techniques A Computational Model of Accelerated Future Learning through Feature Recognition Nan Li Computer Science Department Carnegie Mellon University

  2. Accelerated Future Learning • Accelerated future learning: learning more effectively because of prior learning • Widely observed in human learning • How does it arise? • Expert vs. novice • Expert → deep functional feature (e.g., -3x → -3) • Novice → shallow perceptual feature (e.g., -3x → 3)

  3. A Computational Model • Models accelerated future learning • Uses machine learning techniques • Acquires deep features • Integrated into a machine-learning agent

  4. An Example in Algebra

  5. Feature Recognition as PCFG Induction • Underlying structure in the problem → grammar • Feature → intermediate symbol in a grammar rule • Feature learning task → grammar induction • Error → incorrect parsing

  6. Problem Statement • Input is a set of feature recognition records consisting of • An original problem (e.g. -3x) • The feature to be recognized (e.g. -3 in -3x) • Output • A PCFG • An intermediate symbol in a grammar rule
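To make this input/output contract concrete, here is a minimal Python sketch of one feature recognition record and a toy PCFG. All names (FeatureRecord, toy_pcfg, SignedNumber) are illustrative assumptions, not from the original system.

```python
from dataclasses import dataclass

@dataclass
class FeatureRecord:
    problem: str   # the original problem, e.g. "-3x"
    feature: str   # the substring to be recognized, e.g. "-3"

# A PCFG as {lhs: {rhs_tuple: probability}}; terminals are single
# characters of the input. "SignedNumber" plays the role of the
# intermediate symbol that denotes the target feature.
toy_pcfg = {
    "Expression":   {("SignedNumber", "Variable"): 1.0},
    "SignedNumber": {("Minus", "Number"): 1.0},
    "Minus":        {("-",): 1.0},
    "Number":       {("3",): 1.0},
    "Variable":     {("x",): 1.0},
}

records = [FeatureRecord("-3x", "-3")]
```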

  7. Accelerated Future Learning through Feature Recognition • Extended a PCFG Learning Algorithm (Li et al., 2009) • Feature Learning • Stronger Prior Knowledge: • Transfer Learning Using Prior Knowledge • Better Learning Strategy: • Effective Learning Using Bracketing Constraint

  8. A Two-Step Algorithm • Greedy structure hypothesizer: • Hypothesizes the schema structure • Viterbi training phase: • Refines schema probabilities • Removes redundant schemas • Generalizes the inside-outside algorithm (Lari & Young, 1990)
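As a runnable skeleton, the pipeline has this shape; the two phases are only stubs here (concrete sketches follow slides 9 and 10), and the parameter names are assumptions.

```python
def learn_pcfg(sequences,
               hypothesize_structure=lambda seqs: {},    # step 1 stub
               refine_probabilities=lambda g, seqs: g):  # step 2 stub
    # Step 1 (GSH): hypothesize the schema structure bottom-up.
    grammar = hypothesize_structure(sequences)
    # Step 2 (Viterbi training): refine probabilities, drop redundant schemas.
    grammar = refine_probabilities(grammar, sequences)
    return grammar
```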

  9. Greedy Structure Hypothesizer • Structure learning • Bottom-up • Prefers recursive to non-recursive structures
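A minimal bottom-up structure learner in the spirit of this slide: it repeatedly merges the most frequent adjacent pair of symbols into a fresh nonterminal, byte-pair-encoding style. This is only an approximation; the actual GSH also prefers recursive rules, which this sketch omits.

```python
from collections import Counter

def greedy_structure_hypothesizer(sequences, max_rules=10):
    seqs = [list(s) for s in sequences]
    rules = {}  # new nonterminal -> (left, right)
    for i in range(max_rules):
        pairs = Counter()
        for seq in seqs:
            pairs.update(zip(seq, seq[1:]))
        if not pairs:
            break
        (left, right), count = pairs.most_common(1)[0]
        if count < 2:
            break
        sym = f"N{i}"
        rules[sym] = (left, right)
        for seq in seqs:              # rewrite each occurrence of the pair
            j = 0
            while j < len(seq) - 1:
                if seq[j] == left and seq[j + 1] == right:
                    seq[j:j + 2] = [sym]
                else:
                    j += 1
    return rules, seqs

rules, reduced = greedy_structure_hypothesizer(["-3x", "-3y", "-3z"])
print(rules)    # {'N0': ('-', '3')}
print(reduced)  # [['N0', 'x'], ['N0', 'y'], ['N0', 'z']]
```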

  10. EM Phase • Step one: • Parse tree computation • Find the most probable parse tree for each observation sequence • Step two: • Selection probability update for rules s of the form a_i → p or a_i → a_j a_k
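A sketch of these two steps, assuming a CNF grammar stored as {lhs: {rhs: prob}} as above: a CYK-style search for the most probable parse, then re-estimating each rule's probability from its relative usage frequency in those parses. Function names are illustrative, not from the paper.

```python
def viterbi_parse(grammar, tokens):
    # Chart: (i, j, symbol) -> (probability, backpointer).
    n = len(tokens)
    best = {}
    for i, tok in enumerate(tokens):
        for lhs, rhss in grammar.items():
            p = rhss.get((tok,))
            if p:
                best[(i, i + 1, lhs)] = (p, (tok,))
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for lhs, rhss in grammar.items():
                    for rhs, p in rhss.items():
                        if len(rhs) == 2:
                            l = best.get((i, k, rhs[0]))
                            r = best.get((k, j, rhs[1]))
                            if l and r and p * l[0] * r[0] > best.get((i, j, lhs), (0,))[0]:
                                best[(i, j, lhs)] = (p * l[0] * r[0], (k, rhs))
    return best

def count_rules(best, i, j, sym, counts):
    # Walk the backpointers of the best parse, counting rule uses.
    _, bp = best[(i, j, sym)]
    if len(bp) == 1:                       # terminal rule: sym -> token
        counts[(sym, bp)] = counts.get((sym, bp), 0) + 1
    else:
        k, rhs = bp                        # binary rule: sym -> rhs[0] rhs[1]
        counts[(sym, rhs)] = counts.get((sym, rhs), 0) + 1
        count_rules(best, i, k, rhs[0], counts)
        count_rules(best, k, j, rhs[1], counts)

def viterbi_training(grammar, sequences, start):
    counts = {}
    for seq in sequences:
        best = viterbi_parse(grammar, list(seq))
        if (0, len(seq), start) in best:
            count_rules(best, 0, len(seq), start, counts)
    # Re-estimate each rule as count(rule) / count(lhs); unused rules drop out.
    new = {}
    for (lhs, rhs), c in counts.items():
        new.setdefault(lhs, {})[rhs] = c
    for rhss in new.values():
        total = sum(rhss.values())
        for rhs in rhss:
            rhss[rhs] /= total
    return new

g = {"S": {("A", "B"): 1.0}, "A": {("-",): 1.0}, "B": {("3",): 1.0}}
print(viterbi_training(g, ["-3"], "S"))
```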

  11. Feature Learning • Build Most Probable Parse Trees • For all observation sequences • Select an Intermediate Symbol that • Matches the most training records as the target feature
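A sketch of the selection step: given the spans each intermediate symbol covers in a record's most probable parse (e.g., read off the viterbi_parse chart above), vote for the symbol that coincides with the annotated feature most often. The span encoding here is an assumption.

```python
from collections import Counter

def select_feature_symbol(records, parse_spans):
    # records: list of (problem, feature_start, feature_end)
    # parse_spans: per record, {symbol: set of (start, end) spans it covers}
    votes = Counter()
    for (_, start, end), spans in zip(records, parse_spans):
        for symbol, covered in spans.items():
            if (start, end) in covered:
                votes[symbol] += 1
    return votes.most_common(1)[0][0] if votes else None

records = [("-3x", 0, 2)]                              # "-3" is the feature
spans = [{"SignedNumber": {(0, 2)}, "Minus": {(0, 1)}}]
print(select_feature_symbol(records, spans))           # -> SignedNumber
```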

  12. Transfer Learning Using Prior Knowledge • GSH phase: • Build parse trees based on previously acquired grammar • Then call the original GSH • Viterbi training: • Add rule frequencies from the previous task to the current task
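One plausible reading of the Viterbi-training transfer step, as a sketch: rule-usage counts carried over from the previous task act as pseudocounts when re-estimating probabilities on the current task. The weighting scheme and all names are assumptions.

```python
def reestimate_with_transfer(current_counts, previous_counts, weight=1.0):
    # counts: {(lhs, rhs): rule-usage count}
    merged = dict(current_counts)
    for rule, c in previous_counts.items():
        merged[rule] = merged.get(rule, 0) + weight * c
    # Normalize per left-hand side, as in plain Viterbi training.
    totals = {}
    for (lhs, _), c in merged.items():
        totals[lhs] = totals.get(lhs, 0) + c
    return {rule: c / totals[rule[0]] for rule, c in merged.items()}

prev = {("SignedNumber", ("Minus", "Number")): 4}
curr = {("SignedNumber", ("Minus", "Number")): 1,
        ("SignedNumber", ("Number",)): 1}
print(reestimate_with_transfer(curr, prev))
# transferred rule gets 5/6, the other 1/6
```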

  13. Effective Learning Using Bracketing Constraint • Forces the grammar to generate a feature symbol • Learn a subgrammar for the feature • Learn a grammar for the whole trace • Combine the two grammars
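A sketch of this strategy, parameterized by any learner with the learn_pcfg interface sketched after slide 8: learn a subgrammar for the bracketed feature alone, collapse the feature to a single forced symbol in each trace, learn a grammar for the reduced traces, and merge the two rule sets.

```python
def learn_with_bracketing(records, learn_pcfg, feature_symbol="Feature"):
    # records: (problem, feature) string pairs, feature a substring.
    # 1) Learn a subgrammar for the bracketed feature alone.
    sub = learn_pcfg([list(feat) for _, feat in records])
    # 2) Collapse the feature to one forced symbol; learn the whole trace.
    reduced = []
    for prob, feat in records:
        i = prob.index(feat)
        reduced.append(list(prob[:i]) + [feature_symbol] + list(prob[i + len(feat):]))
    whole = learn_pcfg(reduced)
    # 3) Combine the two rule sets.
    whole.update(sub)
    return whole

# With a stub learner, to show the control flow:
stub = lambda seqs: {f"rules<{''.join(map(str, seqs[0]))}>": {}}
print(learn_with_bracketing([("-3x", "-3")], stub))
# {'rules<Featurex>': {}, 'rules<-3>': {}}
```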

  14. Experiment Design in Algebra

  15. Experiment Result in Algebra • Both stronger prior knowledge and a better learning strategy can yield accelerated future learning • Stronger prior knowledge produces faster learning outcomes than a better learning strategy • L00 generated human-like errors • [Fig. 2: curriculum one; Fig. 3: curriculum two; Fig. 4: curriculum three]

  16. Learning Speed in Synthetic Domains • Both stronger prior knowledge and a better learning strategy yield faster learning • Stronger prior knowledge produces faster learning with small amounts of training data, but not with large amounts • Learning with subtask transfer shows a larger difference, in both 1) the training process and 2) the low-level symbols

  17. Score with Increasing Domain Sizes • The base learner, L00, shows the fastest drop in score • Average time spent per training record: • Less than 1 millisecond, except for L10 (266 milliseconds) • L10 must maintain previously acquired knowledge and does not separate the trace into smaller traces • Conciseness: transfer learning doubled the size of the schema

  18. Integrating Accelerated Future Learning in SimStudent • A machine-learning agent that • Acquires production rules from • Examples and problem-solving experience • Integrates the acquired grammar into production rules • Requires only weak operators (non-domain-specific knowledge) • Needs fewer operators
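A toy illustration of the integration, not SimStudent's actual rule format: the condition part of a production rule calls a feature recognizer (a regex stand-in for the learned grammar, purely an assumption) to extract the deep feature, and the action part applies a weak, domain-general operator.

```python
import re

def recognize_coefficient(expr):
    # Stand-in for the grammar-based feature recognizer: extract the
    # signed coefficient of x, e.g. -3 from "-3x".
    m = re.fullmatch(r"(-?\d+)x", expr)
    return int(m.group(1)) if m else None

def solve_step(lhs, rhs):
    coef = recognize_coefficient(lhs)   # condition: deep feature recognized
    if coef is not None:
        return "x", rhs / coef          # action: weak operator (divide both sides)
    return lhs, rhs

print(solve_step("-3x", 6))  # ('x', -2.0)
```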

  19. Concluding Remarks • Presented a computational model of human learning that yields accelerated future learning. • Showed: • Both stronger prior knowledge and a better learning strategy improve learning efficiency. • Stronger prior knowledge produced faster learning outcomes than a better learning strategy. • Some models generated human-like errors, while others did not make any mistakes.

  20. Thank you!
