Who's Mentoring the Mentors?
Michael J. Karcher
About Michael Karcher
• Co-editor, with David DuBois, of the Handbook of Youth Mentoring
• Co-author, with Carla Herrera, of Lifting as we Climb: Achieving Positive Youth Development through School-based Mentoring (Harvard, 2011)
• Principal Investigator: Study of Mentoring in the Learning Environment (SMILE)
• Research Advisory Board Member: BBBSA SBM Study (Herrera, 2007); DOE SBM Evaluation (Bernstein, 2009)
Goals of Today's Talk
• To explain the findings from three school-based mentoring evaluations
• To explain how to interpret the effect sizes reported in each study
• To reveal the role that "statistical significance" testing played in the seemingly different findings
• To emphasize how we can magnify program effects by better mentoring the mentors!
Summary of the National BBBS School-Based Mentoring Impact Study
Public/Private Ventures: Carla Herrera, Jean Grossman, Tina Kauh, Amy Feldman, and Jennifer McMaken
BBBSA Findings: Impacts in Year One
Littles fared significantly better than controls in:
• Overall academic performance
• Written and oral language
• Science
• Quality of class work
• Number of assignments completed
• Fewer unexcused absences / less likely to start skipping school
• Less serious school misconduct
• Scholastic efficacy
From measures to meaning
• Table 13 of Herrera's 2007 impact study shows the BBBSA SBM impacts on absences, initiating skipping school, and school misconduct. The effect size is a quarter of a standard deviation, or d = .25. But what does that mean?
• Herrera is stating that, at the end of the school year, mentored kids are .25 of a standard deviation (SD) "better" (which in this case means lower) than the non-mentored kids.
• Whether .25 is meaningful or statistically significant depends on how much the actual scores of the Littles and the control group vary around their mean, that is, on how big the SD is.
• A d = .25 (or 1/4 of a standard deviation) is about the same "size" as tutoring's impact on reading and writing skills (see Ritter, 2009). Let's use the effect of tutoring on grades as an example, because grades reflect a meaningful scale.
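For reference, this is the familiar standardized mean difference (Cohen's d); the block below states the standard textbook formula, not anything specific to Herrera's Table 13:

```latex
d = \frac{\bar{X}_{\text{mentored}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}}
```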
Using tutoring "effects" to illustrate the meaning of BBBSA's SBM impact effect size
[Figure: normal curve of grades from 40 to 100, marking the 68% of scores that fall within +/- 1 SD of the mean]
• Consider an impact the "same size" as the BBBSA SBM program's, but of tutoring on academic skills, with grades as the outcome measure. If the kids' grade point average (GPA) in a school is 80 (a "B") before the program starts, and the standard deviation is one grade level (10 points), this means that 68% of all kids score within one grade level of 80: between a C and an A (or a 70 and a 90).
Making the analogy between tutoring and mentoring effects
• After they get tutored, the average tutored kid now has an 82.5 GPA, compared to the average non-tutored kid, who remains at an 80. That is where the .25 of a standard deviation comes in. Does that seem like a small or big effect?
• The overall impact of school-based mentoring in the BBBSA programs was .25 on some important psychosocial/behavioral outcomes (see Table 13).
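A minimal numeric sketch of that conversion, using the hypothetical GPA scale from the previous slide (mean 80, SD 10); the 68% figure is simply the normal-curve share within one SD of the mean:

```python
# Convert a standardized effect (d) into raw GPA points, using the
# hypothetical tutoring example: mean GPA 80, SD 10, d = .25.
from scipy import stats

mean_gpa, sd, d = 80, 10, 0.25
print(mean_gpa + d * sd)                        # 82.5: the tutored group's average
print(stats.norm.cdf(1) - stats.norm.cdf(-1))   # ~0.68: share within +/- 1 SD
```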
Smaller effect but still statistically significant
• The BBBSA SBM effect on quality of completed work and number of assignments turned in was half the size of the effect of mentoring on absences and misconduct. This improvement, about d = .13, is small (but was statistically significant).
• So if the standard deviation was 10 points on a 100-point scale, then the mentored kids averaged 81.3 on work quality and number of assignments turned in after being mentored. The effects on "overall academic performance" (Language and Science grades) were a bit smaller still. See below.
What does a small effect size look like?
• Mentoring improved scholastic (self-)efficacy by .11 of a standard deviation. The d = .11, and it was statistically significant. The difference in the actual means was .07, and the average SD was .72.
• Littles' mean = 3.81; comparison group's mean = 3.74.
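As a rough check, here is the simple two-group arithmetic behind this effect (the reported d = .11 presumably reflects the study's exact pooling or covariate adjustment; the naive version lands near .10):

```python
# Naive standardized mean difference for the scholastic-efficacy outcome.
# Means and SD are the values quoted on this slide; the study's pooled or
# adjusted computation may differ slightly (it reports d = .11).
littles_mean, control_mean, sd = 3.81, 3.74, 0.72
d = (littles_mean - control_mean) / sd
print(round(d, 2))  # ~0.10
```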
SMILE Study of the Communities In Schools SBM Program: Year 1 Effects (Karcher, 2008)
Mentees fared better than controls on:
• Connectedness to peers (d = .25); social support from friends (d = .18)
• Self-esteem: global (d = .16); connectedness to self-in-the-present (d = .25)
Elementary boys fared better than controls on:
• Connectedness to school (d = .81) and to culturally different peers (d = .55)
• Empathy (d = .75), cooperation (d = .70), and hopefulness (d = .73)
Given the positive effects reported by these prior two studies, why did the evaluation of the US Department of Education study report no significant effects of mentoring?
Two points about the report itself:
• First, Bernstein and his colleagues at Abt were instructed by IES to report and interpret the numbers in a specific and more conservative manner (than may have been appropriate given the state of SBM).
• But they worked hard to include the unadjusted findings, which allow a very different interpretation of the data.
To understand what happened, we have to consider different ways to view "statistical significance" and the "risks" related to using it
• Type 1: the risk of claiming an effect of mentoring exists when it does not
• Type 2: the risk of not finding an effect of mentoring that really does exist
• Abt emphasized the first risk for the DOE study; PPV emphasized the second risk for the BBBS study; Karcher considered both for the SMILE study
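In standard notation (generic textbook definitions, not specific to any of the three studies):

```latex
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true})    % type 1 risk
\beta  = P(\text{retain } H_0 \mid H_0 \text{ false})   % type 2 risk
\text{power} = 1 - \beta
```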
A story of "Statistical Goldilocks and the Three Bears": Why the difference?
• Papa bear approach: Abt was told to find out if the money the DOE was providing was making a difference, to be sure the DOE was not wasting money. They considered only probabilities < .01 significant, accepting no more than a 1-in-100 type 1 risk.
• Mama bear approach: PPV viewed BBBSA SBM as a new mentoring innovation and wanted to know if it had promise/potential and was worth pursuing. They used p < .10 (a 1-in-10 type 1 risk).
• Baby bear approach: SMILE just wondered if it worked (balancing type 1 and type 2 risk) and used p < .05 (a 1-in-20 type 1 risk).
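To make the three approaches concrete, here is a minimal sketch of how the chosen alpha changes the power to detect a small true effect (d = .10). It assumes equal treatment and control arms and uses rough halves of the total sample sizes quoted later in the talk, so the numbers are illustrative only:

```python
# Power to detect a true d = .10 under each study's alpha level,
# assuming equal arms; per-arm Ns are rough halves of the totals
# quoted later (SMILE 500, BBBSA 1200, DOE 2400).
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for label, n_per_arm, alpha in [("DOE/Abt (Papa bear)", 1200, 0.01),
                                ("BBBSA/PPV (Mama bear)", 600, 0.10),
                                ("SMILE (Baby bear)", 250, 0.05)]:
    p = power.power(effect_size=0.10, nobs1=n_per_arm, alpha=alpha, ratio=1.0)
    print(f"{label}: alpha = {alpha}, power = {p:.2f}")
```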
To know why they did what they did, a complex discussion of the different "significance" levels and corrections for false (positive) discovery is necessary. But your funders and principals care little about the Bonferroni or Benjamini-Hochberg adjustments.
• What does significant mean? Like Eric Clapton says, "It's in the way that you use it. It comes and it goes…"
Most useful may be to level the playing field so the studies can be compared clearly. To do this, look at the fine print: Appendix D.
• In the DOE study, see Appendix D for the findings that were not subjected to the Benjamini-Hochberg test and which used the scales in the manner they were intended (validated).
• When the DOE evaluation used the regular significance level (p < .05) in Appendix D, the findings (on the next slide) were actually stronger. These, I think, are the findings to point funders and principals to.
Let's use the standard significance level (the "Baby bear" p < .05) across all 3 studies
DOE findings, using a 1-in-20 chance of a false positive discovery (type 1 risk):
• Improved school efficacy (d = .09), p = .02
• Higher future orientation (d = .08), p = .04
• Lower truancy (d = .14), p = .02
• Lower absenteeism (d = .09), p = .04
• Better relationships with adults (d = .09), p = .02
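For the curious, here is a sketch of what the Benjamini-Hochberg correction does when applied to just these five p-values. This is purely illustrative: the actual DOE report adjusted across a much larger family of outcomes, which is far harder to survive:

```python
# Benjamini-Hochberg false-discovery-rate adjustment applied, for
# illustration only, to the five Appendix D p-values on this slide.
from statsmodels.stats.multitest import multipletests

pvals = [0.02, 0.04, 0.02, 0.04, 0.02]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, pa, r in zip(pvals, p_adj, reject):
    print(f"raw p = {p}, BH-adjusted p = {pa:.3f}, significant: {r}")
```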
The BBBS/PPV findings are less strong when p < .05 is used, but still show effects:
• Scholastic efficacy, p = .04
• Start to skip school, p = .04 (fewer unexcused absences, p = .06)
• Less likely to engage in serious school misconduct, p = .05
• Overall academic performance, p = .04 (higher Science class grades, p = .07)
The standardized effect size (the difference in standard deviation units) is a better estimate than statistical significance, which depends on sample size (SMILE = 500, BBBSA = 1200, DOE = 2400)
Average effect size (d) for variables reported as significant in the DOE, PPV, and SMILE studies at p < .05:
• d = .098 for the DOE Student Mentoring Program (Abt study)
• d = .132 for Big Brothers Big Sisters of America (PPV study)
• d = .197 for Communities In Schools (SMILE study)
• d = .14: effect size for school-based mentoring reported by DuBois et al. in 2002
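Here is a sketch of why the same small effect can be "significant" in one study and not another. It assumes a simple two-arm z-test with equal arms (the real analyses adjusted for covariates and clustering, so this shows only the sample-size mechanics):

```python
# Two-sided p-value for a fixed standardized effect (d = .10) at each
# study's approximate total N, assuming equal arms and a plain z-test.
# SE of d is roughly sqrt(1/n1 + 1/n2) = sqrt(4/N) when n1 = n2 = N/2.
from math import sqrt
from scipy import stats

d = 0.10
for label, total_n in [("SMILE", 500), ("BBBSA", 1200), ("DOE", 2400)]:
    z = d / sqrt(4 / total_n)
    p = 2 * stats.norm.sf(z)
    print(f"{label}: N = {total_n}, p = {p:.3f}")
```

Under these assumptions, the same d = .10 clears p < .05 only at the DOE's sample size, which is why comparing raw significance across studies of different sizes misleads.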
The impact of better mentoring the mentors
• As with other programs, mentoring program effects are larger, sometimes 2x larger, when programs better mentor the mentors through training, support, and program monitoring practices (DuBois et al., 2002).
• The same kind of effects emerge for tutoring: volunteer tutoring program impacts on reading skills differ substantially between programs that are unstructured (d = .14) and structured (d = .59).
• Ritter, G. W., Barnett, J. H., Denny, G. S., & Albin, G. R. (2009). The effectiveness of volunteer tutoring programs for elementary and middle school students: A meta-analysis. Review of Educational Research, 79(1), 3-38.
Durlak, J. A., & Weissberg, R. P. (2007). The impact of after-school programs that promote personal and social skills. Chicago, IL: Collaborative for Academic, Social, and Emotional Learning (CASEL).
Two examples from the SMILE school-based mentoring study reveal that our focus now needs to be on who's mentoring the mentors.
• SMILE study program staff participated in a focus group to provide explanations of what they felt affected the mentoring program's success. Of particular interest were those practices specific to the school Case Managers' "mentoring" of the mentors:
• 1. More time for mentor support by asking school personnel to share the administrative workload: "The data clerks really took over and did it all. It made things so much easier. We didn't have access to schedules. It was more timely because of the information they had that we didn't."
Staff suggestions borne out by the data
• 2. Established relationships with the school: "I think that the fact that we are already in the schools, we know the kids, we know the staff, so that made it easier." Implication: when school staff know the program, they are nicer and more helpful to the program's mentors.
• 3. Importance of program staff training: "We have not had training to provide us the tools to give to our mentors. Fortunately, I have been around a lot of mentoring programs and … that helps, but if we are told we have a mentoring program and here are the logs, and these are your mentors, [it] doesn't really help you on how to give extra support to the mentors."
When these staff variables were added to the statistical models, program effects increased
• When mentoring program coordinators had weekly vs. monthly contact with the front office, the impact of mentoring increased by .45 units (on a 1-to-4 scale).
• For every additional hour of staff training, the impact increased .09 units. So, for those who had one day's training (5 hours), the impact increased .45 units.
• On the self-in-the-future scales (like the Hopefulness scale), the SD was .55, so either of these ways of better mentoring the mentors could boost a mentoring program's impact by 80%, nearly doubling its effect.
Karcher, M. J. (2008, May). Multilevel modeling of setting-level program staff contributions to school-based mentoring program effectiveness. Poster presented at the Society for Prevention Research Conference, San Francisco.
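A sketch of the arithmetic behind these figures, using only the numbers quoted on this slide; the "80%" appears to come from the ratio of the .45-unit gain to the .55 SD:

```python
# Re-deriving the slide's figures from its own numbers (illustrative only).
gain_per_hour = 0.09              # impact gain per hour of staff training
hours_per_day = 5                 # "one day's training (5 hours)"
sd = 0.55                         # SD of the self-in-the-future scales
raw_gain = gain_per_hour * hours_per_day   # 0.45 units on the 1-to-4 scale
print(raw_gain, round(raw_gain / sd, 2))   # 0.45 units, ~0.82 SD
```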
Mentor the mentors by minimizing space and resource concerns: "Make way for mentors…"
• Mentors who felt the space and resources were adequate were 2.5 times more likely to return for a second year.
• Those with no complaints about space and resources also mentored more days than those who felt they did not have enough space or resources.
Karcher, M. J. (2008). The Study of Mentoring in the Learning Environment (SMILE): A randomized study of the effectiveness of school-based mentoring. Prevention Science, 9(2), 99-113.
How Program Staff support relates to mentors' experiences
[Figure: path model linking Program Staff Support, Mentor Feeling Important, Mentor-rated Relationship Quality, and mentors' feeling that mentoring benefited them socially and helped their careers]
Karcher, M. J. (2008). The Study of Mentoring in the Learning Environment (SMILE): A randomized study of the effectiveness of school-based mentoring. Prevention Science, 9(2), 99-113.
We can be confident school-based mentoring can have positive effects. These effects are small, but similar in size to the effects of tutoring and other prevention efforts.
The take-home message: we need to focus less on impacts alone and more on what we can do to increase impacts. One way is to better mentor the mentors by providing support for staff through training and time for more regular communication with school office staff. There are others. Our goal needs to be to focus funders' and principals' eyes on that issue, not on the discrepancies among SBM study findings.
W. T. Grant Foundation
SMILE staff: Chichi Allen, Kristi Benne, Debby Gil-Hernandez, Molly Gomez, Laura Roy-Carlson
The study was conducted through the Communities In Schools of San Antonio agency and would not have succeeded without the support of Patrick McDaniel, Nancy Reed, Jessica Weaver, the Case Managers and Cluster Leaders, Ed, Bob Frasier, and Ross Trevino, as well as Ilsa Garcia, who carries the torch.
Many thanks to Herrera and co-authors. The 10 agencies that participated in the BBBSA SBM study were Big Brothers Big Sisters of Central Ohio; Colorado, Inc.; Eastern Maine; Eastern Missouri; Greater Cleveland; Island County; North Texas; Northeastern Arizona; Northwest Georgia Mountains; and The Bridge, in Wilkes-Barre, PA.
Website: schoolbasedmentoring.com
Contact: michael.karcher@utsa.edu