In an ideal world, all patients receiving antiretroviral therapy (ART) would undergo regular viral load testing in order to detect viral load rebound above 50 copies/ml, the current limit of detection of tests in common use.
However, viral load testing is expensive and requires a dedicated laboratory with highly trained staff. Outside national reference laboratories and universities, few countries in sub-Saharan Africa and Asia can provide viral load testing for people on ART.
CD4 counts may be easier to measure, and World Health Organization (WHO) guidelines suggest that a CD4 count decline of 50% from the previous peak, or a one-third decline in the previous six months, should be considered a trigger for changing treatment.
But even CD4 counts are out of reach for many clinics, and until cheaper point-of-care tests for viral load and CD4 cells become available, clinical symptoms are the only indicator available in many resource-limited settings for deciding when to switch to second-line treatment.
Some clinicians and policy makers have misgivings about expanding access to antiretroviral treatment without these laboratory tests, fearing that patients will develop drug resistance due to prolonged periods of viral replication after a rebound occurs. This could lead to transmission of drug-resistant virus and a poor response to second-line therapy in a context where only a limited number of drugs are available for second-line treatment.
Researchers at the Royal Free and University College Medical School in London, the London School of Hygiene &amp; Tropical Medicine, the University of Copenhagen and the World Health Organization developed a mathematical model that simulated patients, in order to examine the long-term consequences of different types of treatment monitoring.
At the outset it was assumed that 58% of patients were women, the average age was 30, the median CD4 count was 66 cells/mm3, and the median viral load was 5.4 log10 copies/ml (250,000 copies/ml); 32% had a previous history of TB and all had previously been diagnosed with a WHO stage 4 illness (AIDS). Thirteen per cent had previously used single-dose nevirapine for prevention of mother-to-child transmission.
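The relationship between the log10 figure and its copies/ml equivalent is a simple exponentiation; a quick check (not part of the study itself) can be sketched as:

```python
# Convert a log10 viral load to copies/ml by exponentiation.
log10_vl = 5.4
copies_per_ml = 10 ** log10_vl  # about 251,000, conventionally rounded to 250,000
```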
These assumptions were based on data from existing cohorts.
After the patients started treatment, progression to second-line treatment was then governed by the interplay between baseline viral load, adherence and the number of active drugs in the regimen. The model also incorporated the increased risk of non-HIV related mortality in resource-limited settings.
These factors in the model were programmed based on data from resource-limited settings, derived from meta-analyses of treatment outcomes and adherence in sub-Saharan Africa, Brazil and South-East Asia.
Treatment monitoring was carried out every six months in this model, and patients were switched if:
- Viral load rebounded above 500 copies/ml or 10,000 copies/ml after at least six months of continuous ART.
- The CD4 cell count fell by 50% from its peak, or by 33% within the previous six months (confirmed by a repeat test), and in either case had fallen below 200 cells/mm3, after at least nine months of treatment.
- One new WHO stage 3 or 4 event, or two new stage 3 events, occurred at least six months after starting treatment.
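The three switching rules above can be sketched as a simple decision function. This is an illustrative reading of the criteria as described, not the study's actual model code; the function name and parameters are hypothetical, and the clinical rule is read as one new stage 4 event or two new stage 3 events:

```python
def should_switch(months_on_art, viral_load, cd4_now, cd4_peak,
                  cd4_six_months_ago, new_stage4_events, new_stage3_events,
                  vl_threshold=500):
    """Return True if any switching criterion is met (illustrative only).

    vl_threshold is 500 or 10,000 copies/ml, the two thresholds compared
    in the model.
    """
    # Virological criterion: rebound above the threshold after at least
    # six months of continuous ART.
    if months_on_art >= 6 and viral_load > vl_threshold:
        return True
    # Immunological criterion: a 50% fall from peak, or a 33% fall in the
    # past six months, with CD4 below 200 cells/mm3, after at least nine
    # months of treatment.
    if months_on_art >= 9 and cd4_now < 200:
        if cd4_now <= 0.5 * cd4_peak or cd4_now <= (2 / 3) * cd4_six_months_ago:
            return True
    # Clinical criterion: a new stage 4 event, or two new stage 3 events,
    # at least six months after starting treatment.
    if months_on_art >= 6 and (new_stage4_events >= 1 or new_stage3_events >= 2):
        return True
    return False
```

For example, a patient twelve months into treatment with a viral load of 100,000 copies/ml would meet the virological criterion, while the same viral load at three months would not trigger a switch under any rule.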
Virological failure (viral load above 500 copies/ml) was detected in 16% of patients after year 1, 28% by year 5, 37% by year 10 and 51% by year 20. Eighty-seven per cent of these virological failures would have been detected at year 1 if a viral load threshold of 10,000 copies/ml were used, but only 32% if a new WHO stage 3 or 4 event were used as the trigger for switching. A CD4 cell decline was even less reliable – only 25% of virological failures would have manifested a 33% decline in CD4 cell count in the previous six months.
A similar pattern held true at years 2, 3, 4 and 5.
However, when survival rather than detection of treatment failure was used as the outcome by which the success of the monitoring strategy was measured, there was no difference between any of the monitoring strategies.
Rather than calculating the proportion of patients alive at specific time points, the researchers expressed outcomes as the percentage of a possible 20 years of life that patients actually lived.
| Monitoring strategy | Survival at 5 years |
| --- | --- |
| WHO stage 3 or 4 symptoms | 61 - 68%* |

*Three different measures were compared – only at year 20 was substantial variance observed.