Many years ago, I would have given a straightforward answer to this question: the application of the scientific method to produce information that is the basis for any intervention or action. More specifically, this evidence can be ranked in terms of the level of bias, so that better evidence comes from randomised controlled trials and worse evidence from case studies or case series (note the highly quantitative nature of my younger self's answer to the question). The best case scenario is a series of large randomised controlled trials that, when pulled together in a meta-analysis, would provide the evidence, and this evidence would be directly translated into practice. No other factors would come into play, so the best science could be implemented undiluted.
What is my answer now? It depends.
I have moved away from black and white thinking to a much greyer colour. If we are testing a medicine for use in a specific disease, the scientific model, with direct application of evidence to the real world use of the medicine, is the most appropriate approach.
What about in public health? I give much more consideration to context now than I used to. This comes from many years of learning about the complexity of the world in which we live. We are uniquely influenced by the social, cultural, political and economic parts of the world we find ourselves in. Our communities matter; they influence our everyday lives. Our family matters. When we try to apply an intervention that worked in one context, generally under trial conditions, to another community, we need to think about how relevant the intervention is to our community. This is where best practice would have us adapt the evidence to the local context, stepping away from a purist implementation towards what could work.
Then there is the issue of what is measured. Scientists, including epidemiologists, frequently measure outcomes they think are important. Outcomes may also be chosen according to the quality of data available, rather than being the outcomes we really want to measure. In every study an almost infinite number of outcomes go unmeasured. Each of these scenarios means the research can be more or less useful when considering application in the real world.
For example, it may be radical to suggest, but when we run a public health campaign, our aim is to change behaviour. We can tie ourselves up in knots about the knowledge-attitudes-behaviour continuum and pretend that we are “aiming to increase knowledge”, or worse still, measure a campaign’s effectiveness on “campaign awareness”. What we really want to know is: did the campaign lead to the behaviour change we wanted? The problem is, this is often very difficult or impossible to measure, because some outcomes cannot be neatly placed in a box with a clear definition of how they will be coded and analysed thereafter. When we evaluate campaigns on interim measures, we can convince ourselves of their success and then re-run a campaign on the basis that it is ‘evidence based’. But is it really successful?
There is also the issue of not measuring harms or unintended consequences. When a public health intervention is applied across a population, a mixed response is to be expected, even with a new successful intervention. This is true of medicine as well. Operations that are generally lifesaving or that enhance quality of life can, rarely, lead to more harm for some individuals. Medicine deals with this potential range of outcomes through a structured consent process, so the prospective patient can make a decision about whether to have surgery or not. There is less evidence that we do this in public health, including in how we measure outcomes in an intervention. Going back to the campaign example, scare campaigns may rate highly on campaign recall and may also change behaviour in some, but have we measured the real effect of the stigma that results? Almost certainly not.
Then we have the issue of research being conducted that matters to researchers but not to communities. There are untold examples of research driven by a particular interest of a researcher or policy maker that has nothing to do with community priorities. The problem here is that if those same researchers manage to attract funds and move to an implementation trial, the community is completely excluded from the decision-making process but may be subject to the intervention anyway. Sure, the research may be high quality, but should it be implemented as best practice evidence?
A major problem with evidence based practice is believing we can measure everything that counts and that scientific knowledge trumps all other knowledge. It was only in recent years that I came across the concept of ‘positivism’ in epidemiology. This is a philosophical position that emphasises the primacy of scientific methods for understanding how the world works. For many years primacy has been given to scientific knowledge over Aboriginal and Torres Strait Islander knowledges, for example. Many would argue this is still true, including in public health.
There is danger when epidemiological evidence is used in a black and white way. We need to think about context, about what matters to the community and their priorities, and about minimising harms rather than pretending that, because we didn’t measure them, they don’t exist. We need to understand the limitations of our methods and value other ways of knowing. Qualitative research can counter some of these issues, as can, perhaps more importantly, Aboriginal and Torres Strait Islander research methodologies.
Evidence based practice is still important, but nowadays I think the epidemiological evidence needs to be shared and prioritised with communities, considered in context, and interventions worked through and measured together. And sometimes we cannot measure what really counts.