This presentation discusses the importance of providing specific information and avoiding misinformation when evaluating forensic science evidence for rational jury decisions and convictions. It emphasises the need for independent validation studies, indicative error rates and limitations, evidence of analyst proficiency, and efforts to reduce bias. It also cautions against relying on speculation, legal criteria, and claims of independence without supporting evidence. The example of forensic gait analysis is used to illustrate these principles.
Forensic science evidence and the conditions for rational (jury) decisions and convictions
Professor Gary Edmond
Director, Program in Expertise, Evidence & Law; ARC Future Fellow, School of Law
1. Introduction
The orthodox legal commitment is to the fact-finder being able to understand all of the evidence presented during the trial. There is no obligation to accept it, but the fact-finder should consider all of the admissible evidence. In accordance with this commitment, the fact-finder must be placed in a position conducive to understanding and evaluating the evidence, including expert opinion. Twining (1990); Allen and Miller (1994); Edmond & Roberts (2012).
2. The basic thesis
Specific types of information are required to evaluate (i.e. ‘weigh’) many, and perhaps most, types of expert opinion evidence. Specific information is required to understand and evaluate opinions linking an individual or object to a trace (i.e. pattern or comparison evidence). In the absence of this information, many types of forensic science evidence are not susceptible to rational evaluation.
Caveats
3. The kinds of information required
In the absence of information about the validity and reliability of a technique and/or information about the proficiency of the analyst, many types of forensic science evidence are not susceptible to rational evaluation. The claim is normative: it describes what ought to be presented to enable evaluation, in terms of trial aspirations and approaches to evaluating evidence. (I am not claiming that presenting this information will necessarily lead to rational evaluation.)
4. Specific information (e.g. for pattern recognition and comparison evidence)
• Independent validation studies conducted in conditions where the correct answer (i.e. ground truth) is known.
• Indicative error rate and other limitations and uncertainties (see the illustrative calculation after this list).
• Evidence of the analyst’s proficiency using the validated technique (i.e. genuine expertise).
• Frequency of the feature(s) in the relevant population and whether they are (in)dependent – empirically based.
• Whether standards are derived from formal evaluation and whether they were applied.
• Description of efforts to reduce or eliminate contextual bias – exposure, suggestion and process.
• Explanation of why particular expressions (i.e. the form of words) were selected and whether they were derived from independent research. [Also, how laypersons understand them.]
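To illustrate why an indicative error rate matters, here is a minimal sketch in Python (not part of the original presentation; all figures are hypothetical and not drawn from any validation study). It uses the standard likelihood-ratio framing: the weight of a reported ‘match’ is roughly the technique’s true positive rate divided by its false positive rate, so without a validated error rate the ratio – and hence the weight of the opinion – cannot be estimated.

    # Hypothetical sketch: how a validated error rate feeds into the weight of a
    # reported "match". All numbers are invented for illustration only.

    def likelihood_ratio(true_positive_rate: float, false_positive_rate: float) -> float:
        """LR = P(reported match | same source) / P(reported match | different source)."""
        return true_positive_rate / false_positive_rate

    sensitivity = 0.95          # technique reports a match when the sources really are the same (hypothetical)
    false_positive_rate = 0.01  # technique reports a match when the sources are different (hypothetical)

    lr = likelihood_ratio(sensitivity, false_positive_rate)
    print(f"Likelihood ratio: about {lr:.0f} to 1")
    # Output: about 95 to 1. The reported match is ~95 times more likely if the
    # sources are the same than if they are different. If no validation study has
    # measured the false positive rate, this ratio is simply unknown.

The arithmetic tracks the thesis above: without empirically derived figures of this kind, the fact-finder has no rational way to ‘weigh’ the reported match.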
5. Misinformation – what won’t work
• Bare concessions: e.g. that validation testing has not been done (see e.g. R v Atkins; R v Morgan and R v Honeysett).
• Speculation about the validity and reliability of opinions derived from untested techniques.
• Speculation about error and uncertainty.
• Most legal criteria and heuristics (e.g. formal qualifications, a ‘field’, ‘experience’, prior legal recognition and/or admission).
• Apparent independence or impartiality of the forensic analyst (especially when juxtaposed with the partiality of any defence witness).
• Allowing the jury to decide based on what transpires at trial (where the basic information is not provided).
• Deference, especially if limitations and uncertainties are not presented.
• Relying on other evidence and/or the strength of the case as a ‘makeweight’ – the masking problem.
• Relying on these factors, individually or in combination, may be irrational (unless they are used to negate probative value).
Example: an opinion about the identity of a person of interest in CCTV images, based upon a podiatrist’s comparison of that person’s gait with the gait of a suspect – from R v Aitken 2012 BCCA 134.
Forensic gait analysis
• No independent testing: no validation studies or proficiency evidence.
• No evidence of ability in image interpretation and comparison for the purposes of identification. Expertise as a podiatrist might not translate to this different task.
• Any standards or claims about error are declaratory and speculative.
• Experience looking at abnormal features in controlled (i.e. ideal) conditions is not the same as comparison for identification purposes. Therapy provides feedback via the patient and the podiatrist’s (ongoing) observations. Forensic casework does not provide feedback, validation or relevant experience.
• No information about whether those committing serious crimes alter their gait, or about the effects of disguise, clothing, carrying objects, footwear, etc.
• No information about the frequency or independence of any features ‘observed’ (see the illustrative arithmetic after this list).
• Exposure to patients in a clinical setting does not provide a basis to infer or estimate the incidence of gait features in the general (or a reference) population.
• Analysts are usually presented with only two sets of images and suggestive information about the suspect – contextual bias.
• Analysts are rarely familiar with methodological issues and problems – e.g. image interpretation, ‘identification’, appropriate expressions, statistics, etc.
• Admitted in Canada (Aitken) and England (Otway and Ferdinand).
• How are we to assess the analyst’s opinion?
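To make the frequency and independence point concrete, here is a short hypothetical sketch in Python (not from the presentation; the feature frequencies and the correlated figure are invented). Combining feature frequencies by simple multiplication is only defensible if the features are shown, empirically, to be independent; if they tend to co-occur, the combination is far less distinctive than the multiplied figure suggests.

    # Hypothetical illustration: why independence matters when combining gait features.
    # All frequencies are invented for the sake of the arithmetic.

    freq_a = 0.10  # say, out-toeing in ~10% of a reference population (hypothetical)
    freq_b = 0.10  # say, a particular arm swing in ~10% (hypothetical)

    joint_if_independent = freq_a * freq_b  # 0.01, i.e. "1 in 100"
    joint_if_correlated = 0.08              # plausible if the features tend to co-occur (hypothetical)

    print(f"Assuming independence: roughly 1 in {1 / joint_if_independent:.0f}")
    print(f"If strongly correlated: roughly 1 in {1 / joint_if_correlated:.0f}")
    # Without empirical data on frequency and dependence, neither figure can be defended.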
6. What about legal safeguards?
• Prosecutors (as ‘ministers of justice’) and disclosure obligations
• Expert witness duties, obligations and oaths
• Cross-examination
• Rebuttal experts
• Directions and warnings to the jury
• Appeals
• Admissibility standards
• Mandatory and discretionary exclusions
• (Also, the problem of guilty pleas influenced by ‘expert’ opinions of unknown value. About a quarter of DNA exonerations feature false confessions.)
7. Admissibility criteria and practice versus authoritative scientific advice
Admissibility criteria cast an undesirable pall over the trial and legal evaluation. (Table from ‘The admissibility of forensic science and medicine evidence under the Uniform Evidence Law’ (2014) 38 Criminal Law Journal 136.)
8. Jury competence? Studies involving scientific and technical evidence
• Judge-jury comparisons
• Jury (and trial) observations (including my own)
• Exit surveys and juror interviews
• Experimental mock jury studies
• Qualitative studies of the public understanding of science (PUS)
• Jury understanding of methods, probabilities and statistics
• Most of these studies don’t tell us much about actual jury abilities or their performance with ‘expert’ opinions (at trial). They are commonly over-read to support the competence of juries and the adequacy of criminal proceedings. Most do not address the missing information.
9. Jury competence? Judicial impressions, ‘the experience of The Law’ and judicial competence
• Judges assume that juries understand the evidence and the legal instructions and directions. (Reform tends to be based around ‘improved’ directions, instructions and warnings.)
• Lawyers and judges tend to support juries, and tend to report positive experiences and impressions. This is speculative (i.e. impressionistic), largely inattentive to independent research (much of which doesn’t address the issue), and lacks a credible empirical basis.
• Judges seem oblivious to the informational requirements, or have assumed that the parties and the trial will somehow resolve them (or that lawyers made ‘tactical’ decisions not to).
• Judicial responses reveal much about judges’ understanding of the issues, as well as the possibility of reform.
10. Avoiding deference and irrationality: Exclusion or education?
• Too much is expected of our lay fact-finders.
• Juries (and judges) are routinely placed in positions where they are expected to evaluate the state’s forensic science evidence but are not given the kinds of information required to make sense of it. (Instead, they are bombarded with epiphenomena.)
• Juries are expected to evaluate ‘expert’ opinions in conditions that are not conducive to a ‘neutral tutorial’ – i.e. anything presented takes place in an adversarial setting where actions appear motivated and there is other evidence (often part of a carefully assembled prosecution story) that appears independent and corroborative.
• If juries are not provided with the kinds of experimental information that enable them to make sense of the evidence, then the opinion should not be admitted. It is not ‘specialised knowledge’, but rather ipse dixit or an impression of unknown value – where an indicative value could have been ascertained.
11. Conclusions, implications & issues
• See also: ‘How to cross-examine forensic scientists: A guide for lawyers’ (2014) 39 Australian Bar Review 174.
Some further reading
• Edmond, Tangen and Thompson, ‘A guide to interpreting forensic testimony: Scientific approaches to fingerprint evidence’ (2013) 12 Law, Probability & Risk 1.
• Edmond, Searston, Tangen and Dror, ‘Contextual bias and cross-contamination in the forensic sciences: The corrosive implications for investigations, plea bargains, trials and appeals’ (2014) 13 Law, Probability & Risk.
• Edmond et al, ‘How to cross-examine forensic scientists: A guide for lawyers’ (2014) 39 Australian Bar Review 174.
• ‘What lawyers should know about the forensic “sciences”’ (2015) 36 Adelaide Law Review (forthcoming).
• ‘The admissibility of forensic science and medicine evidence under the Uniform Evidence Law’ (2014) 38 Criminal Law Journal 136.
• ‘(ad)Ministering justice: Expert evidence and the professional responsibilities of prosecutors’ (2013) 36 UNSW Law Journal 921.
• Edmond and San Roque, ‘The Cool Crucible: Forensic Science and the Frailty of the Criminal Trial’ (2012) 24 Current Issues in Criminal Justice 51.