Evaluation of campaigns Dr. L.R. Pol Tabula Rasa / Erasmus University Rotterdam
Tabula Rasa? • Consultancy for evidence-based and research-driven advice … • … in communication strategy and influence
Agenda • Background • Influencing Behaviour • Evaluation Practice
Background • Kinds of campaigns • Governmental campaign practice
What kind of campaigns? • Isolated mass media campaigns: • Ads, radio/TV spots, etc.
What kind of campaigns? (2) • Integrated campaigns: intervention programs that include, for example: • Interpersonal communication: information, counselling • Teaching material • Tailor-made counselling via the internet • Communication with intermediaries such as teachers, parents, physicians • Ads, radio/TV spots, outdoor, website • Free publicity
Governmental campaign practice in Holland • Increasing number of integrated campaigns (governmental campaigns) • Why: the effects of isolated mass media spots and ads on behaviour are quite small • That’s why ads and spots are increasingly part of a broader intervention program
Governmental campaign practice in Holland (2) • Isolated campaigns: 1 or 2% change in behaviour (meta-analysis, Derzon and Lipsey 2002) • Nevertheless: they can be very worthwhile, for instance in public health • Especially when the cost effectiveness is acceptable • Besides: new insights into behavioural change can increase these effects
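A minimal back-of-the-envelope sketch of why a 1–2% behaviour change can still be cost-effective. All numbers (budget, reach, effect size) are hypothetical illustrations, not figures from the slides.

```python
# Hypothetical cost-effectiveness arithmetic for a mass media campaign.
campaign_cost = 500_000        # assumed campaign budget in euros
target_group = 2_000_000       # assumed number of people reached
effect_size = 0.015            # 1.5% behaviour change (Derzon & Lipsey range)

people_changed = target_group * effect_size
cost_per_person_changed = campaign_cost / people_changed

print(f"People who changed behaviour: {people_changed:,.0f}")
print(f"Cost per person changed: EUR {cost_per_person_changed:.2f}")
# If the averted harm per person (e.g. in public health) exceeds this
# figure, the campaign is worthwhile despite the small effect size.
```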
Influencing Behaviour • Planned behaviour • Influencing automatic behaviour • Good practice
Crucial question when influencing behaviour • Which kind of behaviour is involved? • Planned behaviour • Automatic behaviour
Planned behaviour: Theory of Planned Behaviour (Ajzen) • Beliefs → Attitude • Attitude, Social norm and Self-efficacy → Intention → Behaviour
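A minimal sketch of the Theory of Planned Behaviour as a weighted model, to make the slide's arrows concrete. The weights, scales and belief scores below are purely illustrative assumptions, not estimates from the presentation.

```python
# Illustrative Theory of Planned Behaviour (Ajzen) calculation.
# Intention is modelled as a weighted sum of attitude, social norm
# and self-efficacy; behaviour then follows intention. All weights
# and scores below are hypothetical.

def intention(attitude, social_norm, self_efficacy,
              w_att=0.5, w_norm=0.3, w_eff=0.2):
    """Predicted intention on the same 1-7 scale as the inputs."""
    return w_att * attitude + w_norm * social_norm + w_eff * self_efficacy

# Attitude itself is built from beliefs weighted by their importance.
beliefs = {"healthier": (6, 0.6), "expensive": (2, 0.4)}  # (evaluation, weight)
attitude = sum(score * weight for score, weight in beliefs.values())

print(intention(attitude, social_norm=5, self_efficacy=4))
```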
Automatic behaviour • However: at least 95% of our behaviour is not planned behaviour, but … • … automatic behaviour • We do it just because we always do it, not because we are considering it
Automatic behaviour (2) • Knowledge and attitudes are not relevant determinants of automatic behaviour • Arguments aren’t even noticed! • Habits are the key determinants of automatic behaviour • But … automatic behaviour is changeable! • Not via arguments, though, but via social influence, heuristics and primes • There is a lot of recent research on this topic
Influencing automatic behaviour • Quite a few campaigns still try to influence automatic behaviour by changing attitudes • That’s useless and a waste of time and money • What needs to be done is either break through routines… • … or instead exploit the fact that the behaviour is automatic • In both, words play an important role, as do other stimuli: images, sounds or scents • They influence us very often, while we don’t even notice we were influenced
Influencing automatic behaviour: some techniques • Priming • Use (or: abuse) heuristics (rules of thumb) • Use the inclination to consistency • Use the inclination to reciprocity • Use authority • The liking principle: similarity, flattery • Social validation • The right example
Influencing automatic behaviour: alternative techniques • Alternative: break through routines: • Social network method • Using a negative approach • But not fear appeals!
What does good practice in governmental communication depend on? • Analysis of the behaviour is a condition for success: is it automatic or planned? • Empirical social-scientific research offers crucial assistance for analysis and interventions • When influencing behaviour, trusting hunches and creativity is dangerous: it is often counterproductive
There’s no harm in just following hunches? • A sloppy approach can result in the opposite of what you want to achieve: • The wrong images or words (aggression) • Activation of undesirable exemplary behaviour (alcohol abuse) • Activation of undesirable processes (voting)
Evaluation Practice • What do you want to know • What do you need to know • Issues and best practice
Why evaluate? • At least three good reasons: • Find out whether you caused a counterproductive effect • Learning effect: improve the campaign while it is running and use the insights in new campaigns • Account for the use of government money
Evaluation: what do you want to know? • Was the campaign effective? • But what counts as 'effective'? • What the target group knows? • What the target group thinks (attitude)? • What the target group does (behaviour)?
Evaluation: what do you need to know? • When the objective is to change knowledge or attitudes: • The direction of the change: positive or negative • The size of the change • Why the attitudes changed: • Changes in the beliefs that constitute the attitude and in the relative weight of those beliefs
Evaluation: what do you need to know? (2) • Planned behaviour: • Effect size of the total change in behaviour • Which determinants caused the change: • Change in the attitude towards the behaviour? • Change in vulnerability to what others think? • Change in the personal belief in the possibility of performing the desired behaviour (self-efficacy)?
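A minimal sketch of how one might check which determinant moved, assuming pre- and post-campaign survey means per determinant are available. The scores and the interpretation threshold are made up for illustration.

```python
# Hypothetical pre/post survey means (1-7 scales) for the TPB determinants.
pre  = {"attitude": 4.1, "social_norm": 3.8, "self_efficacy": 3.2}
post = {"attitude": 4.2, "social_norm": 3.9, "self_efficacy": 4.4}

for determinant in pre:
    change = post[determinant] - pre[determinant]
    print(f"{determinant:13s} changed by {change:+.1f}")
# Here the change is concentrated in self-efficacy, which would suggest
# the campaign worked by strengthening people's belief that they can
# perform the behaviour, not by changing attitudes or norms.
```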
Evaluation: what do you need to know? (3) • Automatic behaviour: • Change in the critical habits • Effect size of the change
Subjects of evaluation • Effects • Effect size • Why did the effect occur (effects on determinants of behaviour and attitudes)? • Process • Was the target group really confronted with the materials? If not: why not? • Did the target group understand the message? • How did they evaluate the likeability of the message? • Did they use the information? Did they do that in the intended way? • The same questions for the intermediaries (physicians, teachers, etc.) • Cost effectiveness
Evaluation practice • Everyday practice: great diversity: • No felt need to know the real effects • Not wanting to know the effects • Deficiencies in the evaluation because of: • Lack of theoretical knowledge: asking questions about attitudes and intentions in the case of automatic behaviour; confusing intentions with actual behaviour
Evaluation practice (2) • Lack of the necessary methodological knowledge • Lack of time and money for a genuine evaluation • Naïveté towards research agencies • And, happily, also good practice
Best practice • Derive the parameters you want to evaluate from adequate theory: • for instance the theory of planned behaviour and current insights into automatic behaviour • Pre-test the campaign materials (in a sound way) and adjust them if and where necessary • Evaluate the campaign materials in randomized and controlled experiments (lab experiments)
Best practice (2) • Evaluate the effect and the process in a field experiment
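A minimal sketch of the kind of analysis a randomized lab or field experiment makes possible: comparing the behaviour of an exposed group against a control group. The data and the use of scipy's t-test are assumptions for illustration, not part of the slides.

```python
# Hypothetical experiment: did the exposed group perform the desired
# behaviour more often than the control group?
from scipy import stats

control = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]   # 1 = behaviour performed
exposed = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]

t_stat, p_value = stats.ttest_ind(exposed, control)
effect = sum(exposed) / len(exposed) - sum(control) / len(control)

print(f"Difference in behaviour rate: {effect:.0%}, p = {p_value:.3f}")
# Random assignment is what allows the observed difference to be
# attributed to the campaign materials rather than to other factors.
```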
Serious problems in evaluation • No time and/or money for: • Lab experiments: therefore no sound insight into whether the campaign materials really cause the intended effects (on behaviour, attitudes, knowledge) • Field experiments: therefore no possibility of generalization and also no learning effect
I’d suggest the following solution • Don’t evaluate every individual campaign to the best-practice standard if there’s no money to do it as it should be done • Instead, evaluate one specimen of each kind of campaign extensively • Evaluate the other individual campaigns much less extensively: • Effect size • Process evaluation
More serious problems in evaluation • Methodological problems: • Control groups are not always possible in field experiments • In Holland they are never possible in the case of mass media campaigns • As a consequence, observed effects on behaviour etc. cannot be reliably attributed to the campaign • Alternative: an interrupted time series design
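A minimal sketch of the interrupted time series idea when no control group is possible: fit a segmented regression and look for a level shift at the moment the campaign starts. The monthly data and the use of statsmodels are assumptions for illustration.

```python
# Hypothetical interrupted time series: monthly behaviour rate before
# and after the campaign start.
import numpy as np
import statsmodels.api as sm

rate = np.array([0.20, 0.21, 0.19, 0.22, 0.20, 0.21,   # pre-campaign
                 0.26, 0.27, 0.25, 0.28, 0.27, 0.29])  # post-campaign
months = np.arange(len(rate))
after = (months >= 6).astype(int)        # campaign assumed to start in month 6

# Segmented regression: underlying trend plus a level shift at the start.
X = sm.add_constant(np.column_stack([months, after]))
model = sm.OLS(rate, X).fit()

print(model.params)   # [intercept, trend, level shift after campaign start]
# A clearly positive level-shift coefficient is evidence that the series
# changed when the campaign began, even without a control group.
```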
Serious problems in evaluation (3) • Pre- and post-tests of materials are often clumsy • Artificial: ‘what’s your opinion about this ad?’ results in test bias • Materials should be shown in a natural setting • Respondents shouldn’t know which specific item they are supposed to give their opinion on • Use of focus groups: group processes obstruct a clear view of what individuals think
Summarizing • Good evaluation presupposes good knowledge of the theory of influencing behaviour: • Otherwise there will be no learning effect • There’s a gap between how campaign evaluation should be done and how it’s often done. If time and money are the cause: • Don’t evaluate every single campaign extensively • Instead evaluate campaigns that are typical of a kind of campaign
Questions? • Ask me now • Mail me: bert.pol@tabularasa.nl • Call me: (+31) 70 3600 949 / (+31) 65323 1764 • Visit me: • Tabula Rasa • Anna Paulownastraat 107 • 2518 Den Haag • Holland