How can you be sure your expert’s evidence is reliable, evidence that gives a solid footing to your case? In the area of mental illness there is a very broad range of opinion on many issues, ranging from soft psychoanalytic, pre-scientific theorising about unconscious motivations to the supposedly hard-edged science of brain MRIs. Every point along that spectrum has its weaknesses in terms of its reliability as evidence.

In high-profile cases there is often a resort to experts with high status, without recognising how hard it is to tell who got to the top through competence and who through blagging or bullying. Today the medical professions have less appetite for letting bullies get away with it, but the influence of the old boys’ clubs, noted in the Bristol Inquiry and since, remains strong. So, is your well-respected expert there because he or she is also a good scientist, or just because of having good professional colleagues? We see the effects of these problems in the examples of the physicians and professors Roy Meadow and David Southall, whose claims of scientific expertise, particularly in statistics, proved unfounded: mistakes were made through a complete lack of the training that would have qualified them as scientists. Shouldn’t the barristers and the Court have been aware of this risk?
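
Meadow’s statistical mistake is worth spelling out, since it shows how an impressive-sounding number can rest on an elementary error. In the Sally Clark case he took an estimated probability of roughly 1 in 8,543 for a single cot death in a family of the Clarks’ profile and simply squared it, claiming odds of about 1 in 73 million against two such deaths:

$$P(\text{two cot deaths}) = \left(\frac{1}{8543}\right)^{2} \approx \frac{1}{73\,000\,000}$$

The squaring is valid only if the two deaths are statistically independent, which siblings’ deaths are not: they share genetic and environmental risk factors. And even a correct figure for the probability of the evidence given innocence is not the probability of innocence given the evidence; conflating the two is the prosecutor’s fallacy, as the Royal Statistical Society pointed out publicly in 2001.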

For physicians, this sort of thing is all too common, for one reason: contrary to many physicians’ opinion, medicine is not a science, nor is doing a piece of independent research during undergraduate medical education equivalent to doing a PhD (a belief I have, rather staggeringly, come across several times). It is in translating from science to medicine, and back again, that many analytical and logical gaps open up, as happened in the above physicians’ testimonies. Britain may have an evident love of amateurism inherited from the gentlemen-scholars of the 18th and 19th centuries, but that does not alter the simple fact that the reliability of evidence depends on reliable methods properly applied by the fully trained, with a tightly-knit chain of reasoning. Blagging doesn’t qualify.

This example shows that, without the required training, and no matter how high one’s status as a ‘doctor’, the evaluation of information and the generation of opinion will be riddled with logical errors and cognitive biases, producing evidence that may look impressive but is fundamentally flawed. Good counsel working in tandem with a good consulting expert can gut such evidence methodically and thoroughly, and it has been my pleasure a couple of times to contribute to the gutting of such evidence and claimed expertise, which should never have been accepted by a Court. In all but one case, it was evidence offered by a physician acting outside his area of expertise. It is worth noting in passing that both Meadow and Southall were physicians who gave opinion on the causative behaviours of alleged perpetrators (i.e., on variables lying quite outside their qualified expertise in clinical physiology), opinion that was likely accepted because of their social and professional status: the same status that had also allowed both to conduct would-be scientific research for which neither was qualified. Hence the controversy over the so-called Münchausen syndrome by proxy, otherwise known as medical child abuse.

While examining expert witnesses is considered a tricky business, largely because they know far more about the topic than counsel does, there are some basic questions that can be used to test the reliability of their evidence.

Firstly, did the expert’s process of assessment accord with established guidelines? Each profession has its own ethical and practice guidelines against which the quality of an opinion can be measured; they constitute the floor below which a clinician’s practice should not venture. Yet some professions (notably in Britain) have remarkably low standards. If we are to ensure the quality of evidence provided in court, the highest minimum standard across professions is a reasonable starting point, regardless of the expert’s particular profession.

A working knowledge of the variety of standards is therefore useful. It is not reasonable to expect an expert from one profession to know, let alone adhere to, the standards of another; but where a lawyer has reason to prefer a different profession’s standard, that standard provides leverage for making the expert spell out the rationale for departing from it, and thus for testing the acceptability and reliability of the resulting evidence.

One can also determine:

  1. if the testing method used is standardised, and whether its standards are scientifically based
  2. if the tests have norms against which to compare results (just as laboratory tests show whether your blood count is high or low compared with others like you)
  3. if the test shows adequate reliability over time, as well as sensitivity, specificity, predictive value, and any of the seven or more recognised types of validity (see the numerical sketch after this list)
  4. if the method is published or private, perhaps with a hidden algorithm for calculating results; private knowledge is not reliable in Court
  5. if the test materials are uniform (be that a questionnaire or a blood-collection tube)
  6. if the test has standardised procedures to minimise variation in its use and in its resulting data
  7. if the test requires specific qualifications to use, and whether the clinician has them (or has lied about having them, as several psychiatric physicians have done in my experience)
  8. if there is a comprehensive manual for administration and interpretation providing objective evidence, or whether the test is subject to gnostic knowledge and subjective perceptions (as is often the case in psychiatric opinion)
  9. if the results are scored and interpreted automatically or by a person, and if the latter, by whom and with what qualifications
  10. if the test has been applied consistently to the appropriate type of person in the appropriate context (be that the time of day of a blood draw or the setting of a cognitive examination)
  11. if a blood test is affected by particular factors (alcohol use, rate of breathing, etc.) and whether the examinee was informed of, or coached on, them
  12. if a written test has a required reading level and whether the person was assessed as meeting it
  13. or even whether the examinee asked for clarification on written tests, and how the examiner answered (to determine the possibility of coaching)
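
To make item 3 concrete, here is a minimal numerical sketch, using invented figures rather than data from any real instrument, of why sensitivity and specificity alone do not settle a test’s evidential value: the predictive value of a positive result depends heavily on how common the condition is among the people being tested.

```python
# Minimal sketch with invented figures; no real test is represented.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(condition | positive result), by Bayes' theorem."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# A hypothetical test that looks accurate on its face:
sens, spec = 0.90, 0.90

# In a clinic where half of examinees have the condition,
# a positive result is right nine times out of ten...
print(positive_predictive_value(sens, spec, 0.50))  # 0.90

# ...but in a setting where only 1 in 20 examinees has it,
# most positive results are false alarms.
print(positive_predictive_value(sens, spec, 0.05))  # ~0.32
```

The same base-rate caution bears on items 10 and 11: a test validated in one population or context may mislead in another.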

Another issue I have come across is whether the expert sought solely to confirm an assumed or ‘working’ diagnosis, or made a thorough differential diagnosis. Some significantly disparate disorders share symptoms (e.g., PTSD, anxiety, depression, epilepsy, and dementia). Strangely, this is one of the problems I encounter with NHS clinicians, who are used to making a ‘working’ diagnosis on the assumption that there will be time to tidy it up later if it is not right, not least because the patient will not be cured quickly and will be seen again. But it is not enough that certain symptoms match a given diagnosis; ruling out the other possibilities is also necessary, not least as protection against confirmation bias.

Similarly, have only ‘clinical’ and subjective methods been used to reach a diagnostic opinion? Lawyers do not always appreciate that forensic clinical methods differ from standard clinical practice, precisely because the evidence needs to be more reliable. Standard techniques of clinical interview are subjective; they lack integral methods for assessing the examinee’s efforts to manage the clinician’s impression of him (i.e., malingering, or the hiding of material issues such as drug abuse). The estimation of malingering and of the credibility of the client or patient is often left to the clinician’s ‘experience’, which is little more than an informed personal bias. Objective methods of exploring malingering, hypochondriasis, or the less common disorders that can resemble them are required, but are rarely used because they take time, and the NHS does not give time.

When examined in court, clinicians will often cite their clinical acumen and then fall back on “well, I can tell because of X years of experience” or other overtly fallacious, ipse dixit statements. Methods that cannot be objectively verified as adequate are, for the purposes of reliability, inadequate. What objective (scientific) methods have been used to assess such possibilities? This is one reason why psychometrics, where available, are essential: a psychometric instrument can identify problems, particularly around manipulation and impression management, that often do not emerge in a clinical interview.
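
To make ‘objective method’ concrete, here is a deliberately toy sketch of the rare-symptom logic many psychometric validity scales use; the items and cutoff below are invented, not taken from any published instrument. Genuine patients seldom endorse certain implausible symptoms, so a high count of such endorsements flags possible over-reporting for further investigation, rather than leaving the question to a clinician’s gut feeling.

```python
# Toy sketch of a rare-symptom ("infrequency") validity check.
# The item identifiers and cutoff are invented for illustration;
# real instruments publish empirically derived items, norms, and
# cutoffs in their manuals.

RARE_ITEMS = {"item_12", "item_27", "item_44", "item_61"}  # hypothetical
CUTOFF = 3  # hypothetical threshold

def over_reporting_flag(endorsed: set) -> bool:
    """Flag protocols that endorse more rarely-reported symptoms than
    genuine patients typically do. A flag is a prompt for further
    inquiry, not a finding of malingering in itself."""
    return len(endorsed & RARE_ITEMS) >= CUTOFF

print(over_reporting_flag({"item_12", "item_27", "item_44"}))  # True
print(over_reporting_flag({"item_12"}))                        # False
```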

Again, not only should Meadow and Southall remind us that there is no substitute for good objective evidence and proper scientific training; so should the history of psychiatric abuse of women (including the lobotomising of independent women in the 1950s) and of ethnic minorities, and the efforts to “cure” people of variant sexual orientations or gender identities. Such abuse was rationalised solely by ‘expert opinion’ that was deeply wrong, factually and morally.

The key question is always: “how do you know what you know?” Thankfully, “because I’m a doctor” is usually no longer an adequate answer. If the expert cannot provide objective evidence with a tightly-knit chain of reasoning, leading from data produced by methods reliable in themselves, to the interpretation made, to the conclusions drawn, to the relevance of the substantiating references, and to the summary opinion, without evident analytical gaps, then the testimony should be regarded as inadequately reliable.

Of course, some opinion might seem inherently unreliable for the simple reason that the immediately relevant science has not yet been done. Can a crushed foot be linked causally to dementia (a case report is available on this website)? In extremely rare cases it may be, but the specific research demonstrating such a link in a population of such people has not been done. In that event, a careful scientific analysis of the relevant data using reliable techniques is required, along with the above-mentioned tightly-knit chain of reasoning.

In Britain it seems a rarity for a clinician-scientist to act as a consulting expert to barristers rather than solely as a testifying expert. Yet such a consultant can review the quality of reports in ways that lawyers, however excellent, cannot; and just as purely clinical evidence can be shredded under informed cross-examination, inadequate evidence can also be bolstered by requiring the expert, pre-hearing, to answer specific questions that elicit a substantiated chain of reasoning free of excessive analytical gaps. The expert may well have good reason for saying what he has said, but has not yet made the argument.

Further, a consultant can save the significant time and resources otherwise lost to inadequate or imprecise instructions, by helping to specify the scope of instruction most useful to a case, not solely by identifying unreliable methods, inadequate substantiation, and analytical gaps in existing testimony. This can yield far greater cost savings than instructing an opposing clinician, who might well produce a report of much the same poor quality. In several of my cases, the primary problems arose from ill-targeted instructions, through no fault of the barrister or solicitor in question.

Similarly, the ability to counter unreliable testimony spares opponents and the Courts the significant resources otherwise spent gathering and examining contrary evidence, while helping lawyers make strategic use of the limited time and resources with which they must advocate effectively.

Had these approaches been taken in the cases in which Meadow and Southall gave opinions, the testimonies and the outcomes would have been very different, and innocent people would not have been harmed, in some cases fatally. Getting reliable evidence on mental and behavioural issues should not be as difficult as it commonly is today, but it does require lawyers to understand some key issues in evaluating the quality of the evidence presented. Some of these issues are straightforward; others benefit from the assistance of a consulting expert. For the purposes of justice, effectiveness, efficiency, and cost savings, basic principles and practices like those above should be observed in the preparation and examination of experts’ evidence.