Globally, there are numerous different sets of HIV treatment
guidelines, for different countries and different needs.1 But the
three most influential sets are probably the Department of Health and Human
Services (DHHS) guidelines, issued in the US; the European
AIDS Clinical Society (EACS)3 guidelines; and the BHIVA
guidelines, issued in the UK.
There’s no set global standard for the evidence upon which
guidelines are based. In theory they could simply be the opinion of a group of
experts sitting round a table. Expert opinion, however, is often fallible.
Doctors tend to base their opinions on their own patients, who may not be typical;
negative results and non-results are notoriously less likely to be published;
even people of integrity can be swayed by studies hyped by PR firms. What you
think you know ain’t always so.
For this reason, most guidelines attempt to ‘grade’
evidence: each piece of scientific evidence is assessed for how reliable it is,
and how crucial it is in health terms. Grading works along two axes: the
strength of the recommendation, and the reliability of the scientific evidence. There
are three grades of scientific reliability. Grade 1, the best, is results from
randomised trials that pit one treatment against another or against placebo.
Grade 2 is data from cohort or population studies; these report what happens in
large groups of patients, but results may be distorted by causes that aren’t
captured by the data. Grade 3 is expert opinion and case reports. There are
also, in the case of the US
guidelines, three different strengths of recommendation, A, B and C for ‘strong’,
‘moderate’ and ‘optional’.
So you could have a strong recommendation based on weak
evidence (A3). This might apply, say, where a potentially lethal side-effect
has been observed but where it’s difficult to say how common it is. Or you
could have an optional recommendation based on strong evidence (C1), as when a
rigorous scientific study establishes an outcome difference in something that
doesn’t crucially affect health, like a tendency to get headaches.
These grades are still vulnerable, however, to experts’
knowledge of trials and to their opinions of how important specific outcomes
are. One expert, for instance, might regard a (statistically significant) 5%
superiority for treatment A over treatment B, in terms of patients achieving an
undetectable viral load, as clinching evidence in favour of treatment A.
Another expert might note that, although only a small number of patients die of
heart attacks, 20% more do so on treatment A than on treatment B, and regard
that as an ironclad reason to favour treatment B.
In some cases, billions of pounds may depend on the result
of such disputes, so there may be bitter battles over evidence. HIV is no
stranger to this, especially when the cost of drugs is involved. BHIVA was well
aware, for instance, of the decision by the London Specialist Commissioning
Group to recommend Kivexa
(abacavir/3TC) over Truvada (tenofovir/FTC)
as first-line therapy for patients with a viral load under 100,000 copies/ml.
BHIVA accordingly stepped up the calibre of its evidence
grading for the most crucial recommendations, to the point where the new
guidelines may be the most rigorously evaluated anywhere. Firstly, doctors
writing a particular section voted on how important they thought particular
outcomes were (viral undetectability, speed of viral suppression, side-effects,
CD4 count, resistance and so on). They then employed a health researcher to
comb through every piece of evidence pertaining to the most crucial outcomes
and generate what are called ‘forest plots’ – diagrams that show the overall
strength of evidence across the range of available studies. In the case of two
of the most crucial decisions – firstly, the choice of nucleoside drugs, which
for most patients comes down to the Kivexa/Truvada decision, and secondly, the
choice of which third drug to put alongside them – the result was two
documents, one of 52 pages and one of 146.