
Towards Lifelike Interfaces That Learn


Presentation Transcript


  1. Towards Lifelike Interfaces That Learn
  Jason Leigh, Andrew Johnson, Luc Renambot, Steve Jones, Maxine Brown

  2. The Electronic Visualization Laboratory
  • Established in 1973
  • Jason Leigh, Director; Tom DeFanti, Co-Director; Dan Sandin, Director Emeritus
  • 10 full-time staff
  • Interdisciplinary: Computer Science, Art & Communication
  • 30 students (15 funded)
  • Research in:
    • Advanced display systems
    • Visualization and virtual reality
    • High-speed networking
    • Collaboration & human-computer interaction
  • 34 years of collaboration with science, industry & the arts, applying new computer science techniques to these disciplines
  • Major support from NSF and ONR

  3. Goal in 3 Years
  • A life-sized avatar capable of reacting to speech input with naturalistic facial and gestural responses.
  • A methodology for capturing and translating human verbal and non-verbal communication into an interactive digital representation.
  • A deeper understanding of how to create believable, credible avatars.

  4. System Components
  (Component diagram; boxes shown: Knowledge Capture, Natural Language Processing, Speech Recognition, AlexDSS Knowledge Processing, Textual & Contextual Information, Facial Expression Recognition, Responsive Avatar Engine, Responsive Avatar, Eye-tracking, Speech Synthesis, Lip Synch, Gestural Articulation, Facial Articulation, Facial & Body Motion / Performance Capture, Phonetic Speech Sampling. A minimal sketch of how such a pipeline might be wired together follows.)
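The diagram's wiring is not recoverable from the transcript, so the following Python sketch is an illustration only of how components like these might exchange data for one conversational turn. Every name in it (recognize_speech, query_alexdss, text_to_visemes, AvatarResponse) is a hypothetical stand-in, not the project's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarResponse:
    """Verbal and non-verbal output for one conversational turn."""
    text: str                                    # words to speak
    visemes: list = field(default_factory=list)  # lip-synch mouth shapes
    gesture: str = "idle"                        # gestural articulation cue
    expression: str = "neutral"                  # facial articulation cue

def recognize_speech(audio):
    """Hypothetical stand-in for the speech-recognition component."""
    return "what plants grow well in deep shade?"

def query_alexdss(utterance):
    """Hypothetical stand-in for AlexDSS knowledge processing: maps the
    recognized question to textual/contextual information."""
    return "In deep shade, hostas and ferns are reliable choices."

def text_to_visemes(text):
    """Crude one-shape-per-word mapping; a real system would align
    phonemes to visemes from the phonetic speech samples."""
    return ["open" if word[0].lower() in "aeiou" else "closed"
            for word in text.split()]

def respond(audio, user_expression="neutral"):
    """One pass through the pipeline: input -> knowledge -> responsive avatar."""
    utterance = recognize_speech(audio)   # speech recognition
    answer = query_alexdss(utterance)     # knowledge processing
    # Non-verbal input (facial expression recognition, eye-tracking)
    # modulates the delivery rather than the content.
    expression = "concerned" if user_expression == "frustrated" else "friendly"
    return AvatarResponse(text=answer,
                          visemes=text_to_visemes(answer),
                          gesture="open_palms",
                          expression=expression)

if __name__ == "__main__":
    turn = respond(b"<raw microphone audio>", user_expression="frustrated")
    print(turn.text)
    print("gesture:", turn.gesture, "| expression:", turn.expression)
```

The separation mirrors the point the diagram makes: recognition, knowledge processing, and avatar rendering are independent components that exchange small, well-defined messages.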

  5. EVL Year 1
  • Digitize facial images and audio of Alex
  • Shadow Alex to capture information about his mannerisms
  • Create a 3D Alex, focusing largely on facial features
  • Prototype the initial Responsive Avatar Engine (RAE) and merge the initial avatar, speech recognition, AlexDSS, and pre-recorded voices
  • Validate the provision of non-verbal avatar cues; evaluate the efficacy of those cues

  6. EVL Year 2
  • Full-scale motion and performance capture to create gestural responses to AlexDSS
  • Speech synthesis using Alex's voice patterns to create verbal responses to AlexDSS
  • Use eye-tracking to begin experimenting with aspects of non-verbal communication (a gaze-based sketch follows this slide)
  • Evaluate the merging of verbal and non-verbal information in users' understanding of:
    • avatar believability and credibility (ethos)
    • information retrieved
    • avatar emotional appeals (pathos)
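The slide does not say how eye-tracking would drive the avatar, so purely as an assumed illustration, a gaze-based behavior selector might look like the sketch below. The screen region, thresholds, and behavior names are all invented for the example.

```python
# Region of the display occupied by the avatar's face, in pixels.
# (Assumed layout; a real system would obtain this from the renderer.)
FACE_REGION = (760, 200, 1160, 600)   # left, top, right, bottom

def gaze_on_face(x, y):
    """True if a tracked gaze point falls on the avatar's face."""
    left, top, right, bottom = FACE_REGION
    return left <= x <= right and top <= y <= bottom

def nonverbal_cue(gaze_samples):
    """Choose a non-verbal behavior from recent gaze samples: hold eye
    contact while the user attends, try to re-engage when attention drifts."""
    attending = sum(gaze_on_face(x, y) for x, y in gaze_samples) / len(gaze_samples)
    if attending > 0.6:
        return "hold_eye_contact"
    if attending > 0.2:
        return "follow_user_gaze"   # glance toward where the user is looking
    return "re_engage"              # lean forward or gesture to regain attention

# Example: gaze mostly off the avatar's face over the last second of samples.
samples = [(300, 900)] * 8 + [(900, 400)] * 2
print(nonverbal_cue(samples))       # -> "re_engage"
```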

  7. EVL Year 3
  (Diagram: life-sized projection with camera and microphone)
  • Utilize camera-based recognition of facial expressions as additional non-verbal input
  • Conduct user studies:
    • on believability and credibility (ethos)
    • to correlate attention to non-verbal communication with comprehension and retention
    • to assess the value of avatar emotional appeals (pathos)
    • to address the formation of longer-term relationships between avatar and user

  8. Thanks!
  • This project was supported by grants from the National Science Foundation, Awards CNS 0703916 and CNS 0420477
