Further evidence has emerged that a substantial proportion of switches to second-line treatment in a resource-limited setting, triggered in the absence of viral load testing, are unnecessary and result in an avoidable inflation in drug costs as people switch to more expensive regimens.
The findings, published in the August 1st edition of Clinical Infectious Diseases, are likely to lend further support to calls for viral load testing to be made more accessible in resource-limited settings to confirm cases of suspected treatment failure.
In well-resourced settings everyone receiving treatment undergoes regular viral load testing in order to detect viral rebound and treatment failure. Treatment is switched if viral rebound is detected, since once rebound occurs the existing regimen loses effectiveness as drug resistance develops.
In resource-limited settings, viral load testing is rarely available due to cost and lack of well-equipped laboratories.
Failure of first-line treatment can be detected only by monitoring the CD4 count for declines or looking for the development of clinical symptoms.
It had been widely assumed that the chief drawback of CD4-based monitoring would be delayed detection of large numbers of cases of viral rebound, because of the time lag between rebound and the subsequent loss of CD4 cells caused by uncontrolled viral replication. The major feared consequence was that large numbers of patients would develop high-level resistance to some second-line drugs.
However, research presented at the Conference on Retroviruses and Opportunistic Infections in February this year showed that treatment switches made on the basis of CD4 counts were often unnecessary: patients frequently still had an undetectable viral load despite a decline in CD4 count. The researchers who conducted the study, in Uganda, suggested that infections such as malaria could be causing temporary dips in CD4 count.
They also estimated that in a cohort of 125 patients who experienced CD4 declines, 107 would have been switched to more expensive second-line treatment, adding $75,000 in drug costs to the treatment programme’s budget.
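For scale, an implied added cost per switched patient can be derived from the figures above; this is a back-of-envelope calculation, not a number reported by the researchers:

```python
# Back-of-envelope check of the Ugandan cost estimate quoted above.
# The per-patient figure is derived here, not reported in the study.
patients_switched = 107      # patients who would have been switched unnecessarily
added_cost_total = 75_000    # estimated extra drug cost (USD)

per_patient = added_cost_total / patients_switched
print(round(per_patient))    # roughly 700 USD of extra drug cost per patient
```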
Now, research from western Kenya has confirmed that the Ugandan observation is a common problem.
AMPATH, a service collaboration between Moi University and local clinics in the Eldoret region of western Kenya, carried out viral load tests on all patients receiving ART who had suspected immunologic signs of treatment failure (a CD4 cell decrease of at least 25% over the previous six months).
The retrospective study identified 149 patients with suspected treatment failure. Of these, 58% turned out still to have a viral load below 400 copies/ml, and even among the subset of 42 who had experienced a CD4 decline of more than 50% over the previous six months, 43% (18 patients) still had a viral load below 400 copies/ml, indicating that there was no need to switch treatment in those cases.
Among those with a CD4 cell count above 200 at the time of suspected treatment failure, two-thirds (66%) had a viral load below 400 copies, compared to 41% of those with a CD4 count below 100 cells/mm³.
When misclassification was analysed according to CD4 cell percentage rather than absolute number it became clear that the highest risk of 'true' treatment failure occurred in those with a CD4 cell percentage below 10 (65% had viral load above 400 copies, compared to only 26% of those with a CD4 percentage between 20 and 29).
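A quick arithmetic check shows the headline misclassification figures are internally consistent (all numbers are taken from the study as reported above):

```python
# Sanity check of the Kenyan misclassification figures.
# "Suppressed" means a viral load below 400 copies/ml, i.e. no true failure.

total_suspected = 149            # patients flagged by immunologic criteria
suppressed_fraction = 0.58       # share still suppressed despite CD4 decline
print(round(total_suspected * suppressed_fraction))   # about 86 unnecessary switches

# Subset whose CD4 count fell by more than 50% over six months:
steep_decline = 42
still_suppressed = 18
print(round(100 * still_suppressed / steep_decline))  # 43 (%) still suppressed
```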
Logistic regression analysis showed that misclassification of treatment failure was more likely if the patient had a higher CD4 count, a shorter duration of treatment and a smaller decline in CD4 cell percentage.
The authors write: “In our study, there was a high likelihood of failure if the patient had a CD4 cell count of <200 cells/uL and was on therapy for >20 months; there was a low likelihood of failure of therapy if the patient had a CD4 count of <300 and >200 cells/uL and was on therapy for <12 months.”
At AMPATH clinics, viral load testing is now mandatory in all cases of suspected treatment failure, but, say the authors: “We recognize the fact that … selective virological monitoring may not be instantly achievable. These results suggest the need to reconsider recommendations on immunological monitoring in resource-limited settings.”
They suggest that the use of CD4 percentages may improve the sensitivity of immunological monitoring for treatment failure, but say that their findings need to be evaluated in other populations before generalised conclusions can be drawn.
They also note that a previous simulation study by Professor Andrew Phillips, which found only a modest cost-effectiveness advantage for viral load and CD4 monitoring over clinical monitoring in resource-limited settings, was based on the assumption that misclassification of treatment failure occurred in no more than 19% of cases.
They note several limitations: they could not independently verify the viral load and CD4 measurements; there was an average delay of two months between the CD4 count and the viral load test; and they lacked information about seasonal variations in CD4 count or changes due to intercurrent illnesses such as malaria.
In an accompanying editorial, doctors from Kenya and South Africa say: “In 2008 Smith and Schooley referred to managing ART without viral load as ‘running with scissors’. The emerging data…suggest it is more akin to throwing these programs onto drawn swords.”
“The time has come to work towards the progressive introduction of appropriate viral load monitoring technology in these programs with the same sense of urgency and commitment as the world approached ART access. To do less is to abandon the early success of ART to global collapse.”
Kantor R et al. Misclassification of first-line antiretroviral treatment failure based on immunological monitoring of HIV infection in resource-limited settings. Clin Infect Dis 49: 454-462, 2009.
Sawe FE, McIntyre JA. Monitoring antiretroviral therapy in resource-limited settings: time to avoid costly outcomes. Clin Infect Dis 49: 463-464, 2009.