Summary of Clarification on Precision Criteria to Derive Sample Size When Designing Pediatric Pharmacokinetic Studies

This article summarizes: Journal of Clinical Pharmacology, 2012; 52:1601-1606

Introduction
Paragraph 1:
Most development programs have one chance to obtain an informative set of trials; generally, thereafter, companies lose financial incentive.
PK information is useful because it helps select a dose range, assess drug exposure for efficacy and safety purposes (e.g., via matching adult exposures), and support dosing approval.
Guidances have been published to elucidate the role of pediatric studies in the context of ADME.
Despite this guidance, sample size selection for pediatric PK and safety studies has varied widely and has often lacked clear justification.
A uniform definition of study quality for pediatric PK studies is needed.

Paragraph 2:

An important goal is to ensure precise estimates of PK parameters, such as CL and Vd, which are used to justify the safe and effective dose.

Regulatory guidance has been as follows:

The study must be prospectively powered to target a 95% CI [confidence interval] within 60% and 140% of the geometric mean estimates of clearance and volume of distribution for DRUG NAME in each pediatric sub-group with at least 80% power.

Paragraph 3:

The article of interest reports how to do this using either a non-compartmental analysis (NCA) approach or a population PK (popPK) approach.

Note that for the NCA discussion, the authors assume the individual PK parameters can be robustly estimated (i.e., appropriate blood sampling at the patient level).

For safety objectives, a minimum number of participants may be required, and this number may exceed the number needed for the PK analysis objectives. Even when the safety requirement drives the sample size, the sponsor still reports to the regulatory authority using the same precision language.

Three pediatric clinical trials are presented to demonstrate the proposed approach while complying with the regulatory request.

Potential impacts of the quality standard on pediatric drug development are also discussed.

Method

Sample Size Calculation for Rich PK Sampling Design Intended for NCA Analysis

Step 1: Derive a reasonable estimate of variability.

The standard deviation (SD) of log-transformed individual clearance from adults, or from any related prior study, can be used. The same would also be done for volume of distribution (V).

Also, the SD of CL in pediatric patients can be predicted using allometry.

If the %CV of the untransformed individual CLi values is known to be M%, use the following:

SD = sqrt(log(CV^2 + 1)) = sqrt(log((M/100)^2 + 1)), where CV = M/100 is the coefficient of variation expressed as a fraction.

Use this for each of the pediatric patient age subgroups.

This works well for well-partitioned age subgroups, but not for a single group spanning the entire pediatric age range.
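For example (a hypothetical value), a reported 40% CV for untransformed CL converts to an SD of log(CL) of about 0.385:

percCV <- 40                              # hypothetical %CV of untransformed CL
sdLogCL <- sqrt(log((percCV/100)**2 + 1)) # SD of log(CL)
sdLogCL                                   # approximately 0.385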

Note that the reported BSV is usually a combination of true BSV and IOV (inter-occasion variability; within-subject variability).

If the true BSV and IOV have been estimated separately, the total variability (BSV plus IOV) should be used to avoid underestimating this SD value.
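As a minimal sketch with hypothetical values, if BSV and IOV are reported separately as variances of log(CL) (e.g., omega-squared estimates from a popPK model), the combined SD can be obtained by summing the variances:

omegaBSV2 <- 0.09                      # hypothetical between-subject variance of log(CL)
omegaIOV2 <- 0.04                      # hypothetical inter-occasion variance of log(CL)
sdTotal <- sqrt(omegaBSV2 + omegaIOV2) # combined SD of log(CL)
sdTotal                                # approximately 0.36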

If different SDs can be justified for different age subgroups, using subgroup-specific values would increase the probability of success; a small wrapper sketch for supplying subgroup-specific SDs is given after the R code below.

Step 2: Calculate the sample size needed to achieve the target. The 95% CI for the geometric mean CL (or V) in one age subgroup can be constructed as:

95% CI = CLbar_geo * exp( ± t_0.975,N-1 * S / sqrt(N) )

where:

CLbar_geo is the sample geometric mean of the individual CLi values.

S is the sample standard deviation of log(CLi).

N is the number of patients in that age subgroup.

t_0.975,N-1 is the t value corresponding to the 97.5th percentile of a Student t distribution with N-1 degrees of freedom (df).

To fulfill the requirement, the ratio of each CI limit to CLbar_geo must fall within the bounds (0.6, 1.4). Since log(1.4) ≈ 0.336 is smaller in magnitude than log(0.6) ≈ -0.511, the upper bound is the binding constraint.

Given the sampling distribution of S, a required N is needed to ensure that

t_0.975,N-1 * S / sqrt(N) <= log(1.4)

or, equivalently,

S <= log(1.4) * sqrt(N) / t_0.975,N-1

with a certain level of confidence (at least 80%). This is what is referred to as "power" in the requirement; note that no hypothesis testing is involved here.
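As a quick illustration with hypothetical numbers, the criterion can be checked for a single observed S and N; the power calculation below then accounts for the fact that S is itself random:

S <- 0.4   # hypothetical observed SD of log(CL)
N <- 10    # hypothetical subgroup size
qt(0.975, N - 1) * S / sqrt(N) <= log(1.4) # TRUE: the CI half-width on the log scale fits within log(1.4)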

R Code for this is as follows:

# probability density of the sample SD (S) of log-transformed data from n subjects when the true SD is sigma
f <- function(s, n, sigma){2*((n-1)/2/sigma**2)**((n-1)/2)/gamma(0.5*(n-1))*s**(n-2)*exp(-(n-1)*s**2/2/sigma**2)}

nmin <- 4
nmax <- 25
nsub <- nmax - nmin + 1
result <- rep(0, nsub)

sd <- 0.4 # SD of logX from adult data

for (i in 1:nsub) {
  n <- nmin + i - 1
  tv <- qt(0.975, n - 1)
  sup <- log(1.4)*sqrt(n)/tv # largest sample SD that still satisfies the criterion
  g <- function(x){f(x, n, sd)}
  poweri <- integrate(g, 0, sup, subdivisions = 100)
  result[i] <- poweri[[1]]
}
power <- data.frame(nsub = c(nmin:nmax), power = result)
power

For further convenience, you could also change the “sd <- 0.4” line above to:

percCV <- 35 # percent CV
sd <- sqrt(log((percCV/100)**2+1))

And that would allow you to directly enter the % CV values.

You can also add the following at the end of the script for convenience (also add library(dplyr) at the top if you do):

powerCutoff <- 0.8

powerConcise <- (power %>% filter(power > powerCutoff))[1,]
sprintf('The number of subjects that should be used for %f is %i', powerCutoff, powerConcise$nsub)

The resulting numbers are the required sample sizes per age subgroup.
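As a convenience, and only as a sketch that reuses the same density f() and loop logic shown above (the subgroup labels and SD values are hypothetical), the calculation can be wrapped in a helper so that a different SD can be supplied per age subgroup, as mentioned in Step 1:

# same density of the sample SD as defined above
f <- function(s, n, sigma){2*((n-1)/2/sigma**2)**((n-1)/2)/gamma(0.5*(n-1))*s**(n-2)*exp(-(n-1)*s**2/2/sigma**2)}

# smallest n whose probability of meeting the precision criterion exceeds the target
minN <- function(sd, nmin = 4, nmax = 25, target = 0.8){
  for (n in nmin:nmax) {
    sup <- log(1.4)*sqrt(n)/qt(0.975, n - 1)
    pw <- integrate(function(x) f(x, n, sd), 0, sup, subdivisions = 100)$value
    if (pw > target) return(n)
  }
  NA # target not reached by nmax
}

# hypothetical per-subgroup SDs of log(CL)
sapply(c(adolescents = 0.35, children = 0.40, infants = 0.50), minN)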

Sample Size Calculation for Sparse/Rich PK Sampling Design Intended for popPK Analysis

The script as printed in the article was not clear, so I have re-written it here.

#Assume covariate model CL = theta1*(wt/70)^theta2 * age/(age+theta3) and

#input parameter estimates (thetap1 is the estimate of log(theta1), since the
#covariate model is evaluated on the log scale below)

thetap1=3.7421
theta2=1.0078
theta3=4.8422

#input variance-covariance matrix for the 3 parameters above

covp3=matrix(
c(
0.29810, 0.05782, 1.27120,
0.05782, 0.02921, 0.02073,
1.27120, 0.02073, 8.42210
),nrow=3, byrow=T
)

#define the weight and age combination for SE estimation

wt=14
age=3

#define covariate model in R (f2 returns log(CL))

f2 <- function(x,y,z) x + y*log(wt/70) + log(age/(age+z))

#symbolic derivatives of log(CL) with respect to the three parameters
df <- deriv(body(f2), c("x","y","z"))

x=thetap1
y=theta2
z=theta3

out=eval(df)
dfout=attr(out,"gradient")

#delta method: variance of log(CL) at this wt/age combination
varlcl=dfout%*%covp3%*%t(dfout)

#SE of LCL 0.09436884
SElcl=sqrt(varlcl)
se <- as.numeric(SElcl)
se

library(dplyr)
f <- function(s, n, sigma){2*((n-1)/2/sigma**2)**((n-1)/2)/gamma(0.5*(n-1))*s**(n-2)*exp(-(n-1)*s**2/2/sigma**2)}

nmin <- 2 # > 1
nmax <- 25
nsub <- nmax - nmin + 1
result <- rep(0, nsub)

for (i in 1:nsub) {
n <- nmin + i - 1
tv <- qt(0.975, n - 1)
sup <- log(1.4)*sqrt(n)/tv

g <- function(x){f(x,n,se*sqrt(n))}
poweri <- integrate(g,0, sup, subdivisions = 100)
result[i] <- poweri[[1]]
}
power <- data.frame(nsub=c(nmin:nmax), power=result)
power

powerCutoff <- 0.8

powerConcise <- (power %>% filter(power > powerCutoff))[1,]
sprintf('The number of subjects that should be used for %f is %i', powerCutoff, powerConcise$nsub)

#End