
Conducting Scientifically-Based Research in Teaching with Technology, Part I

Conducting Scientifically-Based Research in Teaching with Technology, Part I. SITE Annual Meeting Symposium, Atlanta, Georgia. Gerald Knezek & Rhonda Christensen, University of North Texas; Charlotte Owens & Dale Magoun, University of Louisiana at Monroe. March 2, 2004.



Presentation Transcript


  1. Conducting Scientifically-Based Research in Teaching with Technology, Part I. SITE Annual Meeting Symposium, Atlanta, Georgia. Gerald Knezek & Rhonda Christensen, University of North Texas; Charlotte Owens & Dale Magoun, University of Louisiana at Monroe. March 2, 2004

  2. Our History of Scientifically-Based Research • Foundation: More than ten years of instrumentation development/validation • Research based on dissertation criteria • Large data sets analyzed (replication of findings) • Quantitative to tell us what is happening; Qualitative to tell us why it is happening

  3. Components for Evaluation with a Research Agenda • Plan for evaluation (when writing the grant, not after you get it) • Use reliable/valid instruments, and/or • Work on developing instruments the first year • Get baseline data: how can you know how far you have come if you don’t know where you started? • Use comparison groups, such as other PT3 grantees

  4. Common Instruments • Stages of Adoption of Technology • CBAM Levels of Use • Technology Proficiency Self-Assessment • Teachers’ Attitudes Toward Computers (TAC)

  5. Online Data Acquisition System • Provided by UNT • Unix/Linux Based • Stores Data in Files • Data Shared with Contributors

  6. Why are we gathering this data? • Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of Research on Teaching. Chicago: Rand McNally. (Reprinted separately in 1966 as Experimental and Quasi-Experimental Designs for Research.) Frequently references: • McCall, W. A. (1923). How to Experiment in Education.

  7. Adding Research Agendas to Evaluation • ‘By experiment we refer to that portion of research in which variables are manipulated and their effects upon other variables are observed.’ (Campbell & Stanley, 1963, p. 1) • Dependent = outcome variable; predicted or measured; we hope this ‘depends on’ something • Independent = predictor variable; one manipulated to make, or believed to make, a difference • Did changing x influence/impact/improve y? • y = f(x)
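
To make the y = f(x) framing concrete, here is a minimal sketch with invented numbers (not data from this project): the independent variable x is completion of a technology course, the dependent variable y is a pre/post skill score, and a paired t-test asks whether changing x moved y.

```python
# Minimal sketch of the y = f(x) framing with hypothetical pre/post scores.
# Independent variable (x): completing a technology-integration course.
# Dependent variable (y): a self-reported skill score, measured pre and post.
from scipy import stats

# Hypothetical 1-5 Likert-style scores for the same ten students.
pre_scores  = [2.1, 3.0, 2.4, 3.5, 2.8, 2.2, 3.1, 2.6, 2.9, 3.3]
post_scores = [3.4, 3.8, 3.1, 4.2, 3.6, 3.0, 3.9, 3.5, 3.7, 4.1]

# Did changing x influence y? A paired t-test compares each student's
# post score against his or her own pre score.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```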

  8. Longitudinal Designs • PT3/Univ. of North Texas: 1999-2003 - Baseline data year 1 - Pre-post course measures over multiple years - Trends in exit student survey data • PT3/University of Nevada, Reno: 2003-2006 - Best features of UNT plus comparisons w/UNT - Added random selection of 30-60 teachers to track retention through end of induction year

  9. Stages of Adoption of Technology - Fall 1998

  10. CECS 4100 Technology Skills - Pre and Post, Spring 1999

  11. What is the ‘Experiment’ here? • Dependent variables: Email, WWW, Integrated Applications, Teaching with Technology Competencies • Independent Variable: completion of content of course (CECS 4100, Computers in Education)

  12. Longitudinal Trends in Integration Abilities (Research Item)

  13. Growth in Technology Integration Course at Univ. of North Texas (Typical PT3 Evaluation Item)

  14. Data Sharing with PT3 Projects • Control groups are difficult • Comparisons within CE clusters are easy! • Similar trends are positive confirmations for each other

  15. Spring 2002: Snapshot Data • Univ. of North Texas • Texas A&M Univ. • St. Thomas of Miami • Univ. of Nevada, Reno • Northwestern Oklahoma State Univ. • Wichita State University (Kansas)

  16. Demographics Spring 2002 • 481 subjects from 6 schools for pretest • UNT = 179 • TAMU = 65 • Miami = 14 • Nevada = 91 • Oklahoma = 95 • Wichita St. = 37 • 157 subjects from 3 schools for post-test • UNT, TAMU, St. Thomas (2 times)

  17. Demographics Spring 2002 (cont.) • Age: Wichita State students are older • Mean = 28 years • Gender: UNT & TAMU have more females • 85% and 97% • Graduation: UNT, Nevada, and Oklahoma students expect to graduate later • Teaching Level: TAMU students are at the elementary level

  18. Educational Technology Preservice Courses

  19. Educational Technology Preservice Courses

  20. What is the ‘Experiment’ here? • Dependent Variable: Gains in technology integration proficiency • Independent Variables: • Completion of course content (as before) • Comparisons/contrasts among different environments/curricular models (value added)

  21. General Findings • Reliability of Stages is High • (r = .88 test-retest) • Reliability of Skill Self-Efficacy Data is High • (Alpha = .77 to .88 for 4 TPSA scales) • Gender: Females are higher in Web Access, Home Computer Use, and WWW Skills
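
The two reliability figures above can be reproduced on any data set with a few lines of code. Below is a minimal sketch using invented responses (not the project's data): test-retest reliability is the Pearson correlation between two administrations of the same scale, and internal consistency is Cronbach's alpha computed from item and total-score variances.

```python
# Minimal sketch of the two reliability indices cited above, on made-up data.
import numpy as np

# Test-retest reliability: Pearson r between two administrations of the same
# scale to the same (hypothetical) respondents.
stages_time1 = np.array([3, 4, 2, 5, 4, 3, 6, 2, 4, 5])
stages_time2 = np.array([3, 4, 3, 5, 4, 3, 6, 2, 5, 5])
r = np.corrcoef(stages_time1, stages_time2)[0, 1]

# Internal consistency (Cronbach's alpha) for a k-item scale:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
def cronbach_alpha(items):          # items: respondents x items matrix
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a 4-item self-efficacy subscale (1-5 Likert).
tpsa_items = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
              [2, 3, 2, 2], [4, 4, 5, 4], [3, 4, 3, 3]]
print(f"test-retest r = {r:.2f}, alpha = {cronbach_alpha(tpsa_items):.2f}")
```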

  22. Pre-Post Trends for TAMU: Two Teacher Preparation Courses

  23. Impact Across 2 Schools (Pre-Post, UNT & TAMU) • Stages: ES = .42 to .76 • CBAM LOU: ES = .73 to 1.15 • TPSA-IA: ES = .18 to .82 • TPSA-TT: ES = .33 to 1.12 • TPSA-WWW: ES = .05 to .49

  24. How to Interpret Effect Size • Cohen’s d vs. other indices • Small (.2) vs. medium (.5) vs. large (.8) • Compare to other common effect sizes • “As a quick rule of thumb, an effect size of 0.30 or greater is considered to be important in studies of educational programs.” (NCREL) • For example, an effect size of .1 is roughly one month of learning (NCREL) • Other estimates: SRI International. http://www.ncrel.org/tech/claims/measure.html
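
As a minimal sketch (with invented scores, and using the pooled-standard-deviation form of Cohen's d rather than whatever standardizer the projects above actually used), the computation looks like this:

```python
# Minimal sketch: Cohen's d = (mean difference) / pooled standard deviation.
import numpy as np

def cohens_d(group1, group2):
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                        / (n1 + n2 - 2))
    return (g2.mean() - g1.mean()) / pooled_sd

pre  = [2.8, 3.1, 2.5, 3.4, 2.9, 3.0, 2.7, 3.2]   # hypothetical pretest scores
post = [3.2, 3.6, 2.9, 3.9, 3.4, 3.3, 3.1, 3.8]   # hypothetical posttest scores
print(f"Cohen's d = {cohens_d(pre, post):.2f}")    # judge against .2 / .5 / .8
```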

  25. APA Guidelines for Effect Size The Publication Manual of the American Psychological Association (APA, 2001) strongly suggests that effect size statistics be reported in addition to the usual statistical tests. To quote from this venerable guide, "For the reader to fully understand the importance of your findings, it is almost always necessary to include some index of effect size or strength of relationship in your Results section" (APA, 2001, p. 25). This certainly sounds like reasonable advice, but authors have been reluctant to follow it and include the suggested effect sizes in their submissions. So, following the lead of several other journals, effect size statistics are now required for the primary findings presented in a manuscript.

  26. UNR Collaborative Exchange

  27. New PT3 Project • Univ. of Nevada, Reno is the lead, with IITTL at UNT as outside evaluator • One component: following teachers after they graduate from the teacher ed. program • Randomly select from a pool of 2004 graduates and contact them prior to graduation; pay a stipend to continue in the project by providing yearly data

  28. Procedure for Unbiased Selection • Locate prospective graduates to be certified to teach during spring 2004 • Number them consecutively • Use a random number table to select a preservice candidate from the list • Verify the student completed the technology integration course with a B or better • Invite the preservice candidate to participate during the induction year and possibly beyond • Repeat the process until 60 agree to participate
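
A minimal sketch of this selection procedure follows, using Python's random module in place of a printed random number table; the candidate names, pool size, and records check are hypothetical placeholders.

```python
# Minimal sketch of the unbiased-selection procedure described above.
import random

# Hypothetical numbered list of prospective spring 2004 graduates.
prospective_graduates = [f"candidate_{i:03d}" for i in range(1, 241)]
random.seed(2004)   # fixed seed so the draw can be documented and replicated

def completed_course_with_b_or_better(candidate):
    # Placeholder for the transcript check described on the slide.
    return True

selected, pool = [], list(prospective_graduates)
while len(selected) < 60 and pool:
    # Draw one candidate at random, without replacement.
    candidate = pool.pop(random.randrange(len(pool)))
    if completed_course_with_b_or_better(candidate):
        selected.append(candidate)   # in practice: invite, keep those who agree

print(f"{len(selected)} candidates selected for induction-year follow-up")
```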

  29. From Edwards, A. L. (1954). Statistical Methods for the Behavioral Sciences. NY: Rinehart.

  30. Maine 2003

  31. Maine Learning Technology Initiative (MLTI) • 2001-2002 Laptops for all 7th graders • 2002-2003 Laptops for all 7th and 8th graders in the whole state of Maine • Maine Learns is About Curriculum

  32. Interesting Aspects of Research • Sample or Population (all 17,000 students in the state) • Selection of Exploratory Schools (schools that wished to participate; one from each region) • Statistical measures of significance • Strong reliance on Effect Size

  33. Research Design • 9 Exploration schools (1 per region) • Compared with 214 others • Used 8th grade state-wide achievement • Examined trends over 3 years in math, science, social studies, and visual/performing arts • Intervention: • Extensive teacher preparation • Laptop and software for every 7th- and 8th-grade teacher and student • Some permitted to take laptops home, others not

  34. 2003 Findings • Evaluators’ reports • Achievement Effect Sizes • Student self-reports on • Attitudes toward school • Self Concept • Serendipitous findings are sometimes the most valuable • Home Access • Gender Equity

  35. Would Cohen Have Predicted This Effect? "Small Effect Size: d = .2. In new areas of research inquiry, effect sizes are likely to be small (when they are not zero!). This is because the phenomena under study are typically not under good experimental or measurement control or both. When phenomena are studied which cannot be brought into the laboratory, the influence of uncontrollable extraneous variables ('noise') makes the size of the effect small relative to these (makes the 'signal' difficult to detect)." (Cohen, 1977, p. 25)

  36. Exploratory - as Illustrated by:
