Forensic toxicology is often viewed as one of the most objective disciplines in forensic science. After all, the results come from analytical instruments like GC-MS or LC-MS. But as Dr. Hilary Hamnett explains in this episode of the podcast, the human decisions surrounding those instruments can still introduce bias.
Dr. Hamnett, an Associate Professor of Forensic Science at the University of Lincoln, spent years working in toxicology laboratories in both the UK and New Zealand before turning her attention to research on cognitive bias in forensic science.
Dr. Hamnett identifies four stages where bias can influence decisions:
choosing which tests to run
deciding whether a drug is present
determining the concentration
interpreting the results
Interestingly, Dr. Hamnett considers the earliest stage to be the most important. If the wrong tests are selected at the beginning, the rest of the case may be affected.
One example discussed during the episode involved a laboratory that knew a deceased individual had a history of heroin use. Based on that information, the lab immediately performed confirmatory testing for opiates using a limited blood sample. The result was negative. Only later did investigators learn that the individual had transitioned to methadone treatment, which had not been tested for. Because the entire sample had been used, no further testing was possible.
Cases like this highlight how contextual information can shape analytical decisions.
So what can labs do to reduce bias?
Dr. Hamnett suggests several steps, including standardized testing schemes for common case types, greater transparency in reporting, and clearly documenting analytical decisions. These measures help ensure that scientific decisions are driven by procedure rather than assumptions about the case.
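Dr. Hamnett's standardized testing scheme can be pictured as a simple lookup from case type to a fixed test panel, so that case circumstances never drive test selection, paired with her "near the cutoff, always confirm" rule for borderline screens. The sketch below is purely illustrative Python: the case types, panel contents, and the 20% margin are hypothetical, not drawn from any real laboratory's procedures.

```python
# Illustrative sketch of a defined testing scheme: the panel depends only
# on the case type, never on the case circumstances. All case types,
# panel contents, and numeric values here are hypothetical.

TESTING_SCHEME = {
    "post-mortem": ["ethanol", "opiates", "methadone", "benzodiazepines"],
    "drug-facilitated assault": ["ethanol", "benzodiazepines", "GHB"],
    "impaired driver": ["ethanol", "cannabinoids", "stimulants"],
}

def select_tests(case_type: str) -> list[str]:
    """Return the predefined panel for a case type.

    Raises ValueError for unknown case types instead of improvising,
    so any exception to the scheme is a documented, deliberate step.
    """
    if case_type not in TESTING_SCHEME:
        raise ValueError(f"no defined scheme for case type: {case_type!r}")
    return TESTING_SCHEME[case_type]

def needs_confirmation(screen_result: float, cutoff: float,
                       margin: float = 0.2) -> bool:
    """Apply a 'near the cutoff, always confirm' rule.

    Anything at or above (1 - margin) * cutoff goes to confirmatory
    testing, so the analyst never consults the case file to break a
    tie. The 20% margin is an arbitrary illustration.
    """
    return screen_result >= cutoff * (1.0 - margin)
```

With a scheme like this in place, a borderline immunoassay result such as `needs_confirmation(85.0, cutoff=100.0)` is sent to confirmation automatically, with no recourse to the case circumstances.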
Awareness of cognitive bias is growing in forensic science, and Dr. Hamnett believes the field is slowly changing. More laboratories are beginning to adopt bias-minimizing policies, and new guidance is being developed to help labs address human factors in their workflows.
The results may come from instruments, but the decisions around that science are still made by people.
Clips
Instruments aren’t purely objective
Decisions must be written down
Context can mislead in forensic toxicology
Reports must show uncertainty
Learn More
Papers discussed in the episode:
The effect of contextual information on decision-making in forensic toxicology
https://www.sciencedirect.com/science/article/pii/S2589871X20300449
The use of contextual information in forensic toxicology: An international survey
https://www.sciencedirect.com/science/article/pii/S1355030618301230
You can also hear Dr. Hamnett discuss this topic on The Tox Pod:
https://thetoxpod.buzzsprout.com/227318/episodes/8253367-cognitive-bias-with-hilary-hamnett
Automated Transcript (not checked for errors)
Aaron Olson: Welcome to the show. Today we’re joined by Dr. Hilary Hamnett, an Associate Professor in Forensic Science at the University of Lincoln and a leading expert in forensic toxicology. She has had a distinguished career spanning the Royal Society of Chemistry to senior scientist roles in New Zealand and the UK. Dr. Hamnett also serves as a specialized defense consultant navigating the complex intersection of drugs, toxicology, and cognitive bias. Dr. Hamnett, thanks so much for joining me on the show.

Hilary Hamnett: Thank you. It’s good to be here.
Aaron Olson: So, tell me a little bit about your background. Your work focuses on cognitive bias in forensic toxicology, at least in some of your recent papers. Tell me a little about how you got into this area of science.
Hilary Hamnett: Yeah. So I guess, much like yourself, I have a background as a practicing forensic toxicologist. I worked in four different tox labs around the world, in the UK and New Zealand, and everybody had their own way of doing things.
00:00:59
Hilary Hamnett: A lot of people thought that theirs was the way. So it’s very interesting when you come into contact with people who’ve only ever worked in one place, and to see how things get passed down, I guess, from generation to generation. It was really when I was in New Zealand that I came across this idea of cognitive bias. Someone from Australia came to give a talk about it, and it really struck a chord with me. Then, after about eight years of working in the field, I came to the conclusion that it was going to be very difficult to do anything from within. There are some structural reasons for that, particularly in the UK, where in England and Wales labs are competing with each other for work. We have a privatized system where labs compete with each other for police force work, so it’s very difficult to get people to change the way they’re doing things. And one of the problems with addressing cognitive bias is that it unfortunately often makes life harder.
00:01:53
Hilary Hamnett: Putting in processes to minimize bias doesn’t necessarily make things better in terms of efficiency and cost and so on. So it’s a hard sell to people who are trying to produce really cost-effective work to win tenders with police forces. So I went out into the world of academia, and it’s really since I’ve been out there that I’ve been able to start publishing about this and talking about it a little bit more, coming in from outside rather than trying to do it from within. I’ve got a couple of papers where I’ve done surveys. I did a survey at TIAFT, the International Association of Forensic Toxicologists, when they had a conference in Brisbane in 2016, and I redid the survey just last November, when the same conference was in New Zealand again, so I’ll be hoping to publish that soon. I also have one where I did a little experiment with my students looking at how contextual information affects decisions.
00:02:57
Hilary Hamnett: And then I have one in the TIAFT Bulletin, which is sort of a trade magazine for that organization that goes to lots of different labs, about more practical things that people can do. In cognitive forensics we like to talk a lot about the problem, and I think we probably talk less about what people can actually do about it. I was watching when you had Itiel Dror on your show, and he was very kindly saying that people’s minds are changing, which is great, and there is a lot more acceptance. But now people are moving on to the step of, well, what can we do? And so that’s really where my work focuses now: what can we do about it?
Aaron Olson: Okay. Yeah, we’ll get to that, what can we do. But first, let’s define what some of the problems are that you see with cognitive bias. How do you define it, and how does it apply to toxicology?
00:03:52
Hilary Hamnett: So cognitive bias is a mechanism in our brains that comes about because we take mental shortcuts all the time, and we don’t have any control over that. We take mental shortcuts basically to make our lives easier, and it’s hardwired into us to make quick decisions. A lot of the time, on a day-to-day basis, am I going to cross the road now, it’s really helpful. It’s not particularly helpful in forensic science. And in forensic toxicology in particular, there are certain points in a case where it really can become an issue. So I like to talk about the strategy, when you decide what tests you’re going to do. Actually deciding whether the drug is present is another one. Then deciding how much is there, and the final one is the interpretation. Those are the four points in a case that I think are most vulnerable to cognitive bias. And it’s not necessarily the same for every case.
00:04:49
Hilary Hamnett: Some cases, as I’m sure you have experienced, are simpler than others. In some cases you have better-quality evidence than others. And it’s really the cases where the evidence is ambiguous or poor quality that cognitive bias plays the biggest role. So when you’ve got to make a tricky decision, that’s when the bias can start to creep in.

Aaron Olson: Okay.
Aaron Olson: So you identified three major areas: the selection of the test, the identification of the substance, and the interpretation of the results.

Hilary Hamnett: Yep.

Aaron Olson: Of those three, where do you see the biggest issues with cognitive bias?
Hilary Hamnett: It’s definitely at the start, because if you pick the wrong test at the start, you can’t really salvage the rest of the case. Particularly if you use up all of the sample, for example, a biased decision at the beginning is the end of the case. So getting it right to begin with is, for me, the big thing.

00:05:42

Aaron Olson: Yeah, yeah.
Aaron Olson: So many toxicologists receive case details from investigators. At what point does contextual information become helpful or hurtful in a case?
Hilary Hamnett: When you’re setting up the tests, the advice I give to people, because I do some bits and pieces of training around tox labs, is to have a defined testing scheme. I don’t know if you’re familiar with this, but you can do it by case type. You can say, well, our postmortem cases are going to have these tests done on them, our alleged sexual assault cases are going to have this set of tests, our drivers are going to have this set. So you can have a scheme for broadly each type of case. When you get a driver in, they just go through that set of tests, and you don’t actually need to worry about what’s in the case circumstances. If you’re using the case circumstances to make a case-by-case decision, that’s when the problems can really begin, because the information that you receive may not actually be accurate. People don’t always tell the police the truth, right?
00:06:45
Hilary Hamnett: That won’t be a shock to anybody listening to this podcast. People aren’t always honest in the moment that something happens. People can’t remember what happened; maybe they just say something that springs to mind. And things like the scene where something happens: it’s typical to see something like, oh, there’s drug paraphernalia all over the place, maybe there are empty blister packets. They may not have anything to do with that person. They may be living in an environment where there are empty blister packets, or the packets could belong to someone else. Or the opposite: sometimes you get these cases where maybe it’s a teenager, and the parents come in and tidy up the scene because they don’t want it to get around that there was drug use involved. So actually what’s at the scene may not have any relevance at all to the case. Relying on that, we’re on quite shaky ground, because it may not be accurate.
00:07:35
Hilary Hamnett: It may be but but it may not be. So when you start making these case by case decisions you you bring in the context from the case but you also bring in your own experiences and your own um you know well I’ve seen this before or your own stereotypes about you know who might be using drugs and who might not be. So it could be based on age, it can be based on um ethnicity, on sexuality. So there’s lots of different um I like to call them rough guides or rules of thumb that people have. Um and I asked this question in one of my papers. I did a survey and said, “How do you decide what test to do to people from all over the world?” And they all came out with a totally different rule of thumb and it was often based on age. Um so it’s better to have a defined testing scheme for each of these types of case. Of course, there’ll always going to be the exception. There always going to be the the case where somebody’s taking something unusual or, you know, you need to do an extra test and and that’s not a problem.
00:08:31
Hilary Hamnett: The you know, having the odd extra thing that you do is fine, but having a routine set of tests for everybody on case type is the best way to deal with that situation.
Aaron Olson: Can you think of a concrete example where contextual information might influence the outcome or the decision about which tests are run?
Hilary Hamnett: There’s a really interesting example from the Forensic Science Regulator here in the UK; we have a UK-wide regulator. They talked about an example where a case came into the lab, and the person had a history of heroin misuse. That was known about the person. There was a very limited blood sample, which can be the case with someone who has collapsed veins if they’ve been injecting for a long period. So, a really limited sample. The choice of tests was so important to get right so as not to use all the sample up. The lab decided to go straight for an opiates confirmation.
00:09:27
Hilary Hamnett: They didn’t even do an immunoassay to check for opiates. They used up the whole sample, and the case was negative. That was because the deceased had moved on to methadone, and the lab hadn’t been given the information that the person had actually stopped using heroin when they’d gone on to methadone. So that decision right at the beginning, to do that test based on the information they had, that was the end of the case. There was no more blood. That’s just an example of how that can go wrong. It’s less of an issue if you do have a lot of sample; you can go back and do more tests or change tests, but that’s expensive and time-consuming. Going around doing test after test after test is time-consuming. So getting it right first time is better, whether you have a limited sample or not.
Aaron Olson: Yeah. So, one of the interesting findings of one of your studies was that toxicologists were aware of cognitive bias, but they had very few formal practices and procedures around addressing it.
00:10:24
Aaron Olson: Uh, you know why do these gaps exist and what kind of you talked about proposed solutions.
Hilary Hamnett: Yeah.
Aaron Olson: What kind of things do you think about as uh shoring up those gaps of policies?
Hilary Hamnett: I think the reason is that, as a field, we rely very heavily on analytical chemistry. That kind of gives us this idea that what we’re doing is objective, because it’s a machine that’s giving us the answer. And I think it’s taken a while for people to understand that actually there is quite a lot of subjectivity in our field. Even though we’re relying on a machine to give us an answer, what we put in the machine is subjective; how we think about the results is subjective. I think it was easier for the people who worked in things like fingerprints to see that their decisions were subjective, because they had two things and they were looking at them and comparing them. It’s been harder for us to come to terms with the idea that what we do is subjective.
00:11:16
Hilary Hamnett: In terms of things you can do: consistency, like I mentioned before, is really important. So treating cases the same, having a defined testing scheme up front. Transparency is really important as well. If you have to write down a decision and justify why you made it, you will often think twice about the decision, because you also have to think about that decision being disclosed to the family or to the other side, depending on the case. If you have to be happy for them to read what you’ve written, you will make a different decision than if you have free rein to make whatever decision you want based on whatever stereotype you like. If you have to write it down, then it’s different. And then, particularly at the end of the case, when we get to the interpretation side, being clear that there can be more than one reasonable explanation. We can have cases where we’ve got the same results and you could look at them maybe two or three ways, and all of those explanations are plausible.
00:12:35
Hilary Hamnett: And we often will go with just one and we’ll go with the one that we think is most likely or the one that seems to fit and I hate that phrase but seems to fit with what we know about the case. And we we’re not very clear with the person reading the report that maybe there are other options. Maybe something else could have been um at play here. And uh and we’re also not great at saying I don’t know. you know, here are the options and I don’t know which one is correct and and you know, we kind of we like certainty and that’s a sort of a just human nature. You know, it not many people like to leave things hanging. Um so that side of it as well is making it clear to the person reading the report that there are other other explanations and also that you what you knew. So when you did your interpretation what you knew about the circumstance of the case because if they that then changes it’s clear to the person reading the report that maybe the interpretation is going to change as well.
00:13:28
Aaron Olson: Okay. So, thinking about what you would put in a report: would things like alternative interpretations go in the formal report, or would those come up later during talks with prosecutors or defense attorneys? How would that practically play out?
Hilary Hamnett: I think it depends on how you do your reporting. I know some labs have a very simple report where basically everything is just produced by a computer; it’s just standard text that comes out of a computer. The problem with that sort of standard text is that it can very quickly become out of date. Somebody puts it in, and then people forget to update it. You can also lose the nuance if you just have these boilerplate text statements. So in some labs it would go in the report; in other labs it would be a case of saying, there are alternative explanations for these results, please contact us, and then you can talk it through with whoever submitted the sample.
00:14:27
Aaron Olson: As you’ve worked in the field, how eager are laboratories to do this type of extra work, to shield analysts from irrelevant case information or to put caveats about the analytical interpretation in reports?
Hilary Hamnett: It’s a mixed picture. I think there are some labs that are quite keen, particularly the ones where they’ve had a challenge in court. I’ve been contacted by labs who say, somebody went to court and they were asked, “What’s your bias-minimizing policy? What processes did you go through?” And they didn’t have an answer, and that really seems to focus the mind. So then people will get in touch with me and say, please can you help us with this? So it’s a mixed picture. Some people are quite far along, some people are really just beginning, and there are still people holding out, people who say, “We just don’t accept this.” But I think things are changing, and when I did my survey again, I did find that there were more people who say they have a policy now.
00:15:22
Hilary Hamnett: There are more people using uh bias minimizing procedures. So things have been changing in the field and OSAC as well. So I’m involved with OSAC. We’re currently writing a document about human factors and cognitive bias in um forensic talk. So hopefully when that comes out that will give people a little bit more guidance as to what they can be
Aaron Olson: Yeah. Yeah.
Hilary Hamnett: doing.
Aaron Olson: What do you think are some of the biggest changes we’ll see going forward?
Hilary Hamnett: I’d like to start to see everybody moving to defined testing, so people stop doing this case by case, which also helps your lab become more efficient, because you don’t have to sit there and think about what tests to do. And I think a big change will come when people start writing their reports differently, when they start saying, this is the information that I was given, and there are some caveats to this, and there are other explanations for this. I think that will be the big thing, when the courts start seeing this in black and white, if you like. That will be the biggest change.
00:16:21
Aaron Olson: Yeah. So, we’ve talked a lot about your work, the role of cognitive bias, interpretation, and testing procedures. What are some of the big ideas that I haven’t hit on yet that you think are important for people listening to consider?
Hilary Hamnett: I think that deciding whether a drug is actually there or not is one of the danger points, one of the bias zones, if you like. Depending on how you do your workflow, you may end up looking at two mass spectra and trying to decide whether there’s a match, right? Is this actually here or not? There are some labs who are really transparent about it: they’ll use set criteria and they’ll apply them consistently. And there are other labs who have an “it’s a match because I say it is” sort of approach. They’ll eyeball it and say, “Oh yeah, that looks like a match.” Or they’ll do the opposite of what I want them to do.
00:17:13
Hilary Hamnett: And they’ll say, “Oh, this is a tricky decision. I’ll go and get the case file and I’ll read the circumstances and that will help me help me inverted to make a decision. So, you know, I’ve got this this number on my immuno assay. It’s very very close to the cutoff. Should I confirm it or not? They’ll go and get the case. Depending on the case, they’ll then make a decision to confirm it or not. And actually, what they a better thing to do is to just have a rule that if you’re close to the cutoff, you always confirm and then you never have to look at the case circumstances um because you’ve got that kind of in place. So, yeah, that subjective decision about whether something is there and then also how much is there. So um manual integration is a real danger point. Um so and this can also tends to be a problem with the iffier samples, right? So the sample is dirty, limited. You can often find that the the machine doesn’t um manually integrate it very well.
00:18:06
Hilary Hamnett: You you go in and you fiddle with the integration, suddenly that person is over a significant limit, right? So they’re over a driving limit or they’re you know it was a QC sample and now it’s in and before it was out. And so if you know what that sample is and you know that changing the integration could could have quite a significant difference that’s a problem um in terms of motivations for doing that. And actually in the UK now in England and Wales um in driver cases so we have our uh per se limits for um drugs in drivers manual integration’s banned now. So you’re just not allowed to do it and if it doesn’t integrate you have to run the sample again. Um so that’s another danger point.
Aaron Olson: Yeah. And the manual integration ban, was that due to the fallout of the Randox testing scandal, or other reasons?

Hilary Hamnett: I mean, I think that’s probably one of the reasons behind it. But it is a problem not just with the QCs, which was the issue with Randox, but with case samples as well.
00:19:07
Hilary Hamnett: So, um you just generally it’s not a good not a good thing to be doing where you’ve got a a really um important limit.
Aaron Olson: Yeah, yeah. Well, tell me what you’re working on next. What do you have in the pipeline?
Hilary Hamnett: So, I’m hoping to get my follow-up survey published. There’s a special issue for the TIAFT conference, and I’m hoping to get it published there to show how much progress has actually been made, because it’s encouraging for me that I haven’t just been wasting my time for the last ten years, but also so people can see that this is changing, and that if this percentage of people are reporting having a policy, maybe we should have a policy as well. And I’m always on the lookout for any lab who wants to have a policy, to help them develop it, especially if we can then publish the text of it. I think one of the things that holds people back is they just don’t know where to start, so it’d be great if we could get an example policy that we could publish and make publicly available, and then people could use it as a starting point for their own work.
Aaron Olson: Dr. Hamnett, thanks so much for coming on the show today and sharing your work with us. Is there anything you want to leave listeners with before we end the call?
Hilary Hamnett: No. Well, sorry, I should have said yes: there is a rival podcast where I talk about this in a lot more detail, which is The Tox Pod. So if anybody wants a lot more of the nitty-gritty, you can head over there and hear me talk about it in a bit more detail. Thank you.










