Microbicides 2006: Are the microbicide clinical efficacy studies big enough?


No surrogate markers for HIV seroconversion

Perhaps at the very root of the challenges is the fact that prevention studies must measure a clinical endpoint — HIV transmission. “We have no surrogate endpoints so we have to rely upon efficacy trials to tell us if the product works or not,” said Dr Elof Johansson of the Population Council, who presented an update on the Carraguard study, one of the furthest along, with efficacy results expected sometime next year.

In HIV treatment studies, treatment-related changes in surrogate markers such as viral load and CD4 cell count have been proven to effectively predict whether treatment prevents disease progression. With these surrogate markers, the trial investigator doesn’t have to wait for actual opportunistic infections or deaths to occur — which allows the studies to be shorter, smaller and much less of a burden on the participants.

HIV prevention studies, on the other hand, must enrol large numbers of often vulnerable women from resource-limited settings who are at risk of HIV infection, place some of them in a placebo (control) arm and wait for an event that no one wants to happen — for some of them to become HIV-infected.

Of course, the hope is that fewer women will become infected on the experimental microbicide, but the first crop of microbicides is expected to be, at best, only partially protective.



The clinical efficacy trials being conducted on microbicides include some of the largest HIV-related clinical studies ever performed — and issues of study design and power are paramount to their success. Some of the challenges in designing these studies were discussed during an update session of the Microbicides 2006 conference, held from April 22nd to 26th in Cape Town, South Africa.

HIV incidence

The key parameter for adequately powering these microbicide studies is a good estimate of the clinical endpoint — the incidence of HIV — expected in the control (placebo) group over the course of observation. To get some idea of what incidence to expect, pilot studies are often performed at each trial site.

For example, the researchers of the MDP301 trial first conducted feasibility studies to determine the incidence in the different populations at each trial site. They found HIV incidence rates ranging from a low of 3.5 per 100 person-years (py) in Mwanza, Tanzania to a high of 12.6 per 100 py in Mtubatuba, South Africa, giving a weighted average of 6.2 per 100 py.
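A weighted average of this kind is simply each site's rate weighted by its person-years of observation. In the sketch below, only the 3.5 and 12.6 rates come from the article; the third site's rate and all of the person-year weights are invented purely for illustration:

```python
def weighted_incidence(sites):
    """Person-year-weighted average of per-site HIV incidence rates.

    sites: list of (rate_per_100py, person_years) tuples.
    """
    total_py = sum(py for _, py in sites)
    weighted = sum(rate * py for rate, py in sites)
    return weighted / total_py

# Illustrative numbers only: the 3.5 and 12.6 rates appear in the article,
# but the person-year weights (and the 5.0 site) are made up for the example.
sites = [(3.5, 400), (12.6, 250), (5.0, 350)]
print(round(weighted_incidence(sites), 1))  # prints 6.3
```

Sites that contribute more follow-up time pull the average towards their own rate, which is why the trial's overall planning figure (6.2) sits well below the midpoint of the site range.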

However, as some of the sites with lower incidence rates are expected to feed more women into the trial, a conservative estimated incidence of 4 per 100 py was used to calculate how many women would be needed for the study. Then, to give the study 90% power to demonstrate a moderate (in this case, 40%) reduction in HIV incidence, the researchers calculated that the trial would need 7,875 person-years of follow-up which, with a 20% loss to follow-up factored in, means the study will need to enrol 9,673 women. This makes it by far the largest of the studies — but it also stands one of the best chances of yielding a clear result.

Powering the study — size counts

Four factors determine what size a clinical trial should be: the incidence rate (in this case, how many people are expected to get infected on the control arm), the size of the effect being measured, length of follow-up, and expected loss to follow-up. It’s probably also wise to build in some breathing room for poor adherence to the experimental product.
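These four factors map directly onto a back-of-the-envelope calculation. The sketch below uses a standard normal approximation for comparing two incidence rates; the actual trials use more sophisticated methods and different assumptions, so it will not reproduce figures such as MDP301's 9,673 exactly:

```python
import math

def required_enrolment(control_incidence, effect, follow_up_years,
                       loss_to_follow_up, power=0.90):
    """Rough enrolment estimate for a two-arm HIV incidence comparison.

    control_incidence : expected infections per person-year on placebo
    effect            : relative reduction to detect (e.g. 0.40 for 40%)
    follow_up_years   : planned follow-up per woman
    loss_to_follow_up : fraction expected to be lost (e.g. 0.20)
    """
    z_alpha = 1.959964                       # two-sided alpha = 0.05
    z_beta = {0.80: 0.841621, 0.90: 1.281552}[power]
    r0 = control_incidence
    r1 = r0 * (1 - effect)                   # incidence on the microbicide arm
    # Normal approximation for comparing two Poisson rates:
    # person-years needed in EACH arm.
    py_per_arm = (z_alpha + z_beta) ** 2 * (r0 + r1) / (r0 - r1) ** 2
    total_py = 2 * py_per_arm
    # Inflate for dropouts and convert person-years to women enrolled.
    women = total_py / (follow_up_years * (1 - loss_to_follow_up))
    return math.ceil(women)

# Assumed inputs loosely echoing MDP301's planning figures:
# 4 per 100 py, 40% reduction, 1 year of follow-up, 20% loss.
print(required_enrolment(0.04, 0.40, 1.0, 0.20))  # prints 6568
```

Even this simplified version makes the qualitative point: halving the effect you hope to detect roughly quadruples the person-years required, which is why trials chasing a modest 33-40% reduction dwarf those powered for a 50% one.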

Clinical trials must be quite large to have the power to conclusively demonstrate whether a moderately effective microbicide works or not. The Carraguard study, for example, has enrolled 5,620 women so far, and plans to accrue 6,639 women in order to measure a 33% reduction in HIV incidence. MIRA’s diaphragm/Replens gel study, fully enrolled with 5,042 women, is also trying to measure whether use of the diaphragm/buffer gel combination results in a third fewer HIV transmissions.

The smallest studies, conversely, are the cellulose sulphate (CS) trials, with 2,000 to 2,500 women each, but these studies are looking for a larger (50%) reduction in transmission (and also target higher-risk women).

Overall, the eight clinical efficacy studies reviewed at the conference have enrolled or plan to involve over 33,500 women, though some have only begun enrolment in the last several months.

These numbers must also account for a substantial proportion of these women who may drop out or be lost to follow-up over the course of observation for a variety of reasons, such as travel or pregnancy. Most of the studies expect about 20% of the participants to be lost to follow-up before observation is complete.

Shortening the length of follow-up might improve the chances of maintaining women in the study, and maintaining adherence to the microbicide since adherence drops off over time. However, shorter studies need even greater numbers of participants to reach a statistically significant result.
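The trade-off can be seen by holding the required person-years fixed (under a roughly constant incidence rate, the statistical information in the trial is driven by total person-years observed) and varying the follow-up per woman. The figures below are illustrative, loosely echoing MDP301's requirement:

```python
import math

# Assumed figures for illustration: suppose a trial needs roughly 7,900
# person-years of observation and expects 20% loss to follow-up.
REQUIRED_PY = 7900
LOSS = 0.20

def women_needed(follow_up_years):
    """Women to enrol so that completed follow-up still supplies REQUIRED_PY
    (assumes a constant incidence rate, so information scales with py)."""
    return math.ceil(REQUIRED_PY / (follow_up_years * (1 - LOSS)))

for years in (2.0, 1.5, 1.0):
    print(years, women_needed(years))
# prints:
# 2.0 4938
# 1.5 6584
# 1.0 9875
```

Halving the follow-up per woman roughly doubles the enrolment target, which is the tension the study designers describe: shorter trials keep women engaged and adherent, but demand far more of them.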

But what if HIV incidence rates are lower than expected?

And yet the fate of the Savvy Ghana trial overshadowed many of these calculations and projections of incidence.

Savvy is a surface-active microbicide which entered a phase III efficacy study in March 2004 at two sites in Ghana. The study enrolled 2,142 women who were considered to be at especially high risk of HIV (including many commercial sex workers). The trial had 80% power to detect a 50% reduction in the HIV infection rate, and planned for, at most, a 20% loss to follow-up at 12 months. The researchers estimated that there would be at least five infections per 100 person-years in the placebo group, and that they would observe at least 66 incident infections.

However, halfway through the study, an interim analysis found that only 17 seroconversions had occurred in total: nine on placebo and eight on Savvy. This translates into an HIV incidence of 1.0 per 100 py (95% confidence interval 0.3-1.7) for Savvy and 1.1 (0.4-1.8) for the placebo, for a risk ratio of 0.9 (0.3-2.3).
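These rates and intervals can be checked with a simple normal (Wald) approximation. The person-year denominators were not reported in this summary, so the ~815 per arm used below is back-calculated from the quoted rates and should be treated as an assumption; only the event counts come from the article:

```python
import math

def incidence_ci(events, person_years, z=1.96):
    """Incidence per 100 person-years with an approximate 95% CI
    (normal approximation; adequate for a back-of-envelope check)."""
    rate = events / person_years * 100
    se = rate / math.sqrt(events)          # SE of a Poisson rate estimate
    return rate, rate - z * se, rate + z * se

# Person-years (~815 per arm) are assumed, back-calculated from the
# reported rates; the event counts (8 and 9) are from the article.
for arm, events in (("Savvy", 8), ("placebo", 9)):
    rate, lo, hi = incidence_ci(events, 815)
    print(f"{arm}: {rate:.1f} ({lo:.1f}-{hi:.1f})")
# prints:
# Savvy: 1.0 (0.3-1.7)
# placebo: 1.1 (0.4-1.8)
```

With so few events, the intervals are wide and overlap almost entirely, which is exactly why no conclusion about efficacy was possible.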

This HIV incidence was dramatically lower than anyone anticipated, and the trial was closed on the recommendation of the Data Safety and Monitoring Board because HIV incidence in the study population was too low to demonstrate whether or not the microbicide had an effect. This doesn’t mean that Savvy doesn’t work.

“We cannot make any conclusion about product effectiveness using this protocol in the Ghana cohort,” stressed Dr Leigh Peterson of FHI, who presented these results.

So what happened? No one is exactly sure but there are a number of theories.

One is that the HIV epidemic in Ghana, or at least these parts of Ghana, has matured and that the incidence in the area is simply on the decline.

Another is that the high rate of pregnancy (about four times as common as HIV seroconversion in this study) decreased the likelihood that the study would reach a result, because the women who were most likely to become infected simply became pregnant first, and then either dropped out of the study or changed their sexual risk-taking behaviour for the sake of the pregnancy.

A final possibility (that might be even more problematic for the conduct of these studies) is that simply participating in an ethical patient-centred prevention trial reduces the risk of HIV acquisition dramatically.

It's important to remember that these women get the best available safer-sex counselling and support, which is reinforced at every clinic visit. The stress of being repeatedly tested for HIV may be a fairly effective motivator to reduce one’s risk-taking behaviour. And when is a placebo not a placebo? Answer #1: when it is a condom (at least in microbicide studies).

“For ethical reasons we have to promote condom use within the trial. In my 35 years of experience working with clinical trials, I’ve never been in such a difficult situation where you have to promote another treatment that will work as good, and probably better, than the product you are testing,” said Dr Johansson.

In fact, self-reported condom use has increased significantly in several of the microbicide studies. For example, at enrolment in Savvy Nigeria, participants reported that condoms were used in 66% of their last sex acts. During follow-up, however, participants reported using condoms in 88% of their sex acts within the last seven days (gel use, however, was not as high). In the CS #2 study, self-reported condom use during the last week went up from 58% at screening to 90% at follow-up.

There is also another answer to “when is a placebo not a placebo?”: when you receive free medical care to which you previously didn’t have access, including treatment of sexually transmitted infections (STIs). Treating STIs directly reduces the likelihood of HIV acquisition.

If simply conducting a good HIV prevention study dramatically lowers HIV incidence, it would be a happy outcome for the trial participants, but many of these studies could find that they are underpowered.

Adjusting sample size on the fly

Probably due to the Savvy Ghana experience, several of the ongoing clinical efficacy trials are keeping a closer eye on HIV incidence rates to see whether they may need to increase the sample sizes.

According to Dr Lut Van Damme, who reported on CS #1, the study will have an interim meeting after approximately 33 seroconversions because of concern that there could be a lower than expected HIV incidence. “It is still too early at this point to say... but we are monitoring the seroconversions very closely and in our interim analysis, we will look at sample size reassessment and decide whether we need a bigger trial or not,” she said.
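A crude version of such a sample-size reassessment can be sketched as follows. Real trials use formal, pre-specified re-estimation procedures overseen by a data safety and monitoring board, and all of the numbers here are hypothetical:

```python
import math

def reassessed_enrolment(planned_women, planned_incidence, observed_incidence):
    """Crude blinded sample-size re-estimation: if pooled HIV incidence is
    running below the planning assumption, scale enrolment up roughly in
    proportion, so the trial still accrues enough endpoint events.
    (Real trials use formal group-sequential/re-estimation methods.)"""
    if observed_incidence >= planned_incidence:
        return planned_women  # on track; no increase needed
    return math.ceil(planned_women * planned_incidence / observed_incidence)

# Hypothetical numbers: a trial planned around 4 infections per 100 py
# that observes only 2.5 per 100 py at the interim look.
print(reassessed_enrolment(2500, 4.0, 2.5))  # prints 4000
```

Because the analysis pools both arms without unblinding who seroconverted on which product, checking incidence this way does not compromise the trial's integrity.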

A number of studies are adopting this approach.

HPTN 035, on the other hand, will have no interim efficacy analysis but will simply continue until 192 HIV infections are observed. However, the observed HIV incidence rates are being monitored independently to make sure the sample size is adequate.


Ampofo W et al. Randomized controlled trial of SAVVY and HIV in Ghana: operational challenges of the Accra site. Microbicides 2006 Conference, Cape Town, abstract AB4, 2006.