Arguably the most important piece of the evidence-based-medicine puzzle is the question we ask ourselves:
"Is this evidence significant? – Is this relevant to my patients and my practice?"
When we talk about the 'quality' of a published research work we largely mean what the epidemiologists refer to as 'internal validity' – the extent to which the study's conclusions are actually warranted given the methodology and results. Internal validity looks only at the study's design, conduct and interpretation, and takes into account bias and confounders. While important, internal validity is not sufficient on its own.
The significance of a piece of evidence to medicine in general, along with its relevance to our own practice, is referred to as the external validity. I think that for your practice and mine this is often what matters most.
Really, external validity just describes how well the results and conclusions can be generalized to situations and people beyond those in the study.
I think of significance as the cumulative generalizability of a piece of evidence for the specialty and for wider medicine, integrated with how well the evidence agrees with what is already known. Relevance describes how applicable the evidence is to my hospital, my practice – and my patients.
It has significance for you, and relevance for me.
There are publications of such great significance that even though their relevance to your own practice is low, it is still very important that you know of them. For example, a critical care specialist with limited obstetric practice should still understand the importance of the Magpie Trial, which showed that magnesium sulphate halves the risk of eclampsia in women with pre-eclampsia. Most 'landmark' trials fall into this group of greatly-significant medical articles.
In contrast, an article might be of low general significance but have great relevance to your own practice, such as Tandoc's 2011 trial showing that adjuvant dexamethasone prolongs the duration of interscalene blocks. Not particularly relevant unless you have some regional anesthesia interest!
The challenge is that significance *and* relevance are imperfectly linked; in fact there is at times almost an inverse relationship – the most personally relevant articles are unlikely to be the most significant. While most clinicians could make a reasonable judgement of the significance of any individual piece of medical research, determining relevance is much, much harder.
Only you can determine the relevance of the evidence for your practice and your patients – and there lies the challenge of the third horseman. As medicine specializes with ever greater levels of distinction ("I only operate on mitral valves"), and the volume of publication accelerates, the probability that any single, random article you pick up will be personally relevant is falling exponentially.
The only way to combat this is by having a non-random way of finding personally relevant, practice-changing evidence, and simultaneously not missing those general articles of great significance.
More on that later.