This shows an example of an integrated workflow between xgxr, nlmixr, and ggPMX.
library(nlmixr)
library(xgxr)
library(readr)
library(ggplot2)
library(dplyr)
library(tidyr)
library(ggPMX)
pkpd_data <- case1_pkpd %>%
  arrange(DOSE) %>%
  select(-IPRED) %>%
  mutate(TRTACT_low2high = factor(TRTACT, levels = unique(TRTACT)),
         TRTACT_high2low = factor(TRTACT, levels = rev(unique(TRTACT))),
         DAY_label = paste("Day", PROFDAY),
         DAY_label = ifelse(DAY_label == "Day 0", "Baseline", DAY_label))
pk_data <- pkpd_data %>%
  filter(CMT == 2)
pk_data_cycle1 <- pk_data %>%
  filter(CYCLE == 1)
Often when exploring data it is worthwhile to plot by dose at each nominal time and add the 95% confidence interval. This typical plot can be cumbersome and lack some nice features that xgxr can help with. Note the following helper functions:
xgx_theme_set(): sets a black-and-white color theme and other xgxr best practices.
xgx_geom_ci(): creates the confidence-interval and mean plots through a simple interface.
xgx_scale_y_log10(): creates a log scale that includes the minor grid lines, immediately showing the viewer that the plot is a semi-log plot without careful examination of the y axis.
xgx_scale_x_time_units(): creates an appropriate scale based on the observed times and the units you use. It also lets you convert units easily for the right display.
xgx_annotate_status(): adds a DRAFT annotation, which is often considered best practice while the data or plots are draft.
xgx_theme_set() # Black-and-white theme based on xgxr best practices

# Flag for labeling figures as draft
status <- "DRAFT"
time_units_dataset <- "hours"
time_units_plot <- "days"
trtact_label <- "Dose"
dose_label <- "Dose (mg)"
conc_label <- "Concentration (ng/ml)"
auc_label <- "AUCtau (h.(ng/ml))"
concnorm_label <- "Normalized Concentration (ng/ml)/mg"
sex_label <- "Sex"
w100_label <- "WEIGHTB>100"
pd_label <- "FEV1 (mL)"
cens_label <- "Censored"
ggplot(data = pk_data_cycle1, aes(x = NOMTIME,
                                  y = LIDV,
                                  group = DOSE,
                                  color = TRTACT_high2low)) +
  xgx_geom_ci(conf_level = 0.95) + # Easy CI with xgxr
  xgx_scale_y_log10() + # semi-log plot with semi-log minor grid lines
  xgx_scale_x_time_units(units_dataset = time_units_dataset,
                         units_plot = time_units_plot) +
  # The line above creates an appropriate x scale based on the dataset
  # time units and the desired plot time units
  labs(y = conc_label, color = trtact_label) +
  xgx_annotate_status(status) # Adds draft status to the plot
With this plot you see the mean concentrations and their confidence intervals stratified by dose.
Beyond the mean concentrations, it is often useful to see how they relate to the actual individual profiles. Using ggplot coupled with the xgxr helper functions used above, we can easily create these plots as well:
ggplot(data = pk_data_cycle1, aes(x = TIME, y = LIDV)) +
  geom_line(aes(group = ID), color = "grey50", size = 1, alpha = 0.3) +
  geom_cens(aes(cens = CENS)) + # marks the censored observations
  xgx_geom_ci(aes(x = NOMTIME, color = NULL, group = NULL, shape = NULL), conf_level = 0.95) +
  xgx_scale_y_log10() +
  xgx_scale_x_time_units(units_dataset = time_units_dataset, units_plot = time_units_plot) +
  labs(y = conc_label, color = trtact_label) +
  theme(legend.position = "none") +
  facet_grid(. ~ TRTACT_low2high) +
  xgx_annotate_status(status)
The variability appears to be higher at higher doses and at later times.
A common way to explore dose linearity is to normalize the concentrations by dose. If the confidence intervals overlap, the data are often dose-linear.
ggplot(data = pk_data_cycle1,
       aes(x = NOMTIME,
           y = LIDV / as.numeric(as.character(DOSE)),
           group = DOSE,
           color = TRTACT_high2low)) +
  xgx_geom_ci(conf_level = 0.95, alpha = 0.5, position = position_dodge(1)) +
  xgx_scale_y_log10() +
  xgx_scale_x_time_units(units_dataset = time_units_dataset, units_plot = time_units_plot) +
  labs(y = concnorm_label, color = trtact_label) +
  xgx_annotate_status(status)
This example seems to be dose-linear, with the exception of the censored data. This is even clearer when the censored data are removed from the plot:
ggplot(data = pk_data_cycle1 %>% filter(CENS == 0),
       aes(x = NOMTIME,
           y = LIDV / as.numeric(as.character(DOSE)),
           group = DOSE,
           color = TRTACT_high2low)) +
  xgx_geom_ci(conf_level = 0.95, alpha = 0.5, position = position_dodge(1)) +
  xgx_scale_y_log10() +
  xgx_scale_x_time_units(units_dataset = time_units_dataset, units_plot = time_units_plot) +
  labs(y = concnorm_label, color = trtact_label) +
  xgx_annotate_status(status)
The lowest dose, which has the most censoring, appears to be the outlier; this is likely an artifact of censoring.
Other ways to explore the data include looking at dose-normalized Cmax and AUC values (which we skip in this vignette).
Using the xgxr helper functions with ggplot, you can explore the effect of high baseline weight. This particular plot is shown below:
ggplot(data = pk_data_cycle1, aes(x = NOMTIME,
                                  y = LIDV,
                                  group = WEIGHTB > 100,
                                  color = WEIGHTB > 100)) +
  xgx_geom_ci(conf_level = 0.95) +
  xgx_scale_y_log10() +
  xgx_scale_x_time_units(units_dataset = time_units_dataset, units_plot = time_units_plot) +
  facet_grid(. ~ DOSE) +
  labs(y = conc_label, color = w100_label) +
  xgx_annotate_status(status)
It seems that the weight effect is not extreme for either dose group.
First we need to subset to the PK-only data and rename LIDV to DV, as sketched below.
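This preparation step is not shown in the original code; here is a minimal sketch assuming the column names from the exploratory dataset above (your dataset may also need ID, TIME, AMT, and EVID columns):
dat <- pkpd_data %>%
  filter(CMT == 2) %>% # keep the PK observations only
  rename(DV = LIDV)    # nlmixr expects the dependent variable as DV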
Next, create a 2-compartment model:
## Use 2-compartment model
cmt2 <- function() {
  ini({
    lka <- log(0.1) # log Ka
    lv <- log(10)   # log Vc
    lcl <- log(4)   # log Cl
    lq <- log(10)   # log Q
    lvp <- log(20)  # log Vp
    eta.ka ~ 0.01
    eta.v ~ 0.1
    eta.cl ~ 0.1
    logn.sd = 10
  })
  model({
    ka <- exp(lka + eta.ka)
    cl <- exp(lcl + eta.cl)
    v <- exp(lv + eta.v)
    q <- exp(lq)
    vp <- exp(lvp)
    linCmt() ~ lnorm(logn.sd)
  })
}
## Check parsing
cmt2m <- nlmixr(cmt2)
print(cmt2m)
#> ▂▂ RxODE-based 2-compartment model with first-order absorption ▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂
#> ── Initialization: ─────────────────────────────────────────────────────────────
#> Fixed Effects ($theta):
#> lka lv lcl lq lvp
#> -2.302585 2.302585 1.386294 2.302585 2.995732
#>
#> Omega ($omega):
#> eta.ka eta.v eta.cl
#> eta.ka 0.01 0.0 0.0
#> eta.v 0.00 0.1 0.0
#> eta.cl 0.00 0.0 0.1
#> ── μ-referencing ($muRefTable): ────────────────────────────────────────────────
#> ┌─────────┬─────────┐
#> │ theta │ eta │
#> ├─────────┼─────────┤
#> │ lka │ eta.ka │
#> ├─────────┼─────────┤
#> │ lcl │ eta.cl │
#> ├─────────┼─────────┤
#> │ lv │ eta.v │
#> └─────────┴─────────┘
#>
#> ── Model: ──────────────────────────────────────────────────────────────────────
#> ka <- exp(lka + eta.ka)
#> cl <- exp(lcl + eta.cl)
#> v <- exp(lv + eta.v)
#> q <- exp(lq)
#> vp <- exp(lvp)
#> linCmt() ~ lnorm(logn.sd)
#> ▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂
## First try log-normal (since the variability seemed proportional to the concentration)
cmt2fit.logn <- nlmixr(cmt2m, dat, "saem",
                       control = list(print = 0),
                       table = tableControl(cwres = TRUE))
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#>
#>
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#>
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
## Now try proportional
cmt2fit.prop <- cmt2fit.logn %>%
  update(linCmt() ~ prop(prop.sd)) %>%
  nlmixr(est = "saem", control = list(print = 0),
         table = tableControl(npde = TRUE, cwres = TRUE))
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#>
#>
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#>
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
## Now try additive + proportional
cmt2fit.add.prop <- cmt2fit.prop %>%
  update(linCmt() ~ prop(prop.sd) + add(add.sd)) %>%
  nlmixr(est = "saem", control = list(print = 0),
         table = tableControl(npde = TRUE, cwres = TRUE))
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#>
#>
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#>
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
Now that we have fit three different residual error models, we can compare the results side by side:
library(huxtable)
huxreg("lognormal" = cmt2fit.logn, "proportional" = cmt2fit.prop, "add+prop" = cmt2fit.add.prop,
       statistics = c(N = "nobs", "logLik", "AIC"))
|            | lognormal | proportional | add+prop  |
|------------|-----------|--------------|-----------|
| lka        | -1.086    | -1.284       | -0.863    |
|            | (0.072)   | (0.071)      | (0.067)   |
| lv         | 1.612 *** | 1.261 ***    | 2.673 *** |
|            | (0.140)   | (0.162)      | (0.078)   |
| lcl        | 2.034 *** | 1.950 ***    | 2.148 *** |
|            | (0.068)   | (0.069)      | (0.061)   |
| lq         | 2.871 *** | 2.550 ***    | 2.810 *** |
|            | (0.058)   | (0.055)      | (0.049)   |
| lvp        | 4.740 *** | 4.530 ***    | 4.959 *** |
|            | (0.021)   | (0.022)      | (0.031)   |
| sd__eta.ka | 0.533     | 0.616        | 0.382     |
|            | (NA)      | (NA)         | (NA)      |
| sd__eta.v  | 1.379     | 1.653        | 0.232     |
|            | (NA)      | (NA)         | (NA)      |
| sd__eta.cl | 0.829     | 0.842        | 0.731     |
|            | (NA)      | (NA)         | (NA)      |
| logn.sd    | 0.379     |              |           |
|            | (NA)      |              |           |
| prop.sd    |           | 0.349        | 0.290     |
|            |           | (NA)         | (NA)      |
| add.sd     |           |              | 0.020     |
|            |           |              | (NA)      |
| N          | 3900      | 3900         | 3900      |
| logLik     | 5005.048  | -24.464      | 101.605   |
| AIC        | -9992.095 | 66.928       | -183.211  |

*** p < 0.001; ** p < 0.01; * p < 0.05.
Note that in the additive-plus-proportional model the additive component approaches zero. Comparing the objective functions, the log-normal model has the lowest AIC. (Since we modeled the log-normal error without transforming the data, it is appropriate to compare the AIC/objective function values directly.)
## Create a ggPMX controller from the nlmixr fit object
ctr <- pmx_nlmixr(cmt2fit.logn, conts = c("WEIGHTB"), cats = "TRTACT", vpc = TRUE)
## The controller can then be piped into a specific plot
ctr %>% pmx_plot_npde_pred
## Modify graphical options and remove DRAFT label:
ctr %>% pmx_plot_npde_time(smooth = list(color = "blue"), point = list(shape = 4), is.draft = FALSE,
                           labels = list(x = "Time after first dose (days)", y = "Normalized PDE"))
ctr %>% pmx_plot_dv_ipred(scale_x_log10 = TRUE, scale_y_log10 = TRUE, filter = IPRED > 0.001)
ctr %>% pmx_plot_dv_pred(scale_x_log10 = TRUE, scale_y_log10 = TRUE, filter = IPRED > 0.001)
ctr %>% pmx_plot_abs_iwres_ipred
## For this display, only show a 1x1 individual plot for ID 110 with time < 12
ctr %>% pmx_plot_individual(1, filter = ID == 110 & TIME > 0 & TIME < 12,
                            facets = list(nrow = 1, ncol = 1))
ctr %>% pmx_plot_iwres_dens
ctr %>% pmx_plot_eta_qq
ggPMX can also bundle these diagnostics into reports; a sketch of the call follows. This creates two reports with default settings, both a pdf and a word document. The report can be customized by editing the default template to include project specifics (changing labels, stratifications, filtering, etc.).
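The report-generation call is not shown in the text above; here is a minimal sketch, assuming the standard ggPMX pmx_report() interface (argument names may differ between ggPMX versions):
## Hypothetical call -- check ?pmx_report in your ggPMX version
ctr %>% pmx_report(name = "ggPMX_report", save_dir = ".", format = "both")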
ctr %>% pmx_plot_eta_box
ctr %>% pmx_plot_eta_hist
ctr %>% pmx_plot_eta_matrix
By creating event tables you can easily simulate a new scenario. Perhaps your drug development team wants to explore 100 mg dosed three times a day to see what happens with the PK. You can simulate from the nlmixr model using a new event table created with RxODE.
In this case we wish to simulate with some variability and see what happens at steady state:
# Start a new simulation
(ev <- et(amt=100, ii=8, ss=1))
| time | amt | ii | evid | ss |
|------|-----|----|------|----|
| 0    | 100 | 8  | 1    | 1  |
ev$add.sampling(seq(0, 8, length.out=50))
print(ev)
#> ────────────────────────── EventTable with 51 records ──────────────────────────
#>
#> 1 dosing records (see $get.dosing(); add with add.dosing or et)
#> 50 observation times (see $get.sampling(); add with add.sampling or et)
#> ── First part of : ─────────────────────────────────────────────────────────────
#> # A tibble: 51 × 5
#> time amt ii evid ss
#> <dbl> <dbl> <dbl> <evid> <int>
#> 1 0 NA NA 0:Observation NA
#> 2 0 100 8 1:Dose (Add) 1
#> 3 0.163 NA NA 0:Observation NA
#> 4 0.327 NA NA 0:Observation NA
#> 5 0.490 NA NA 0:Observation NA
#> 6 0.653 NA NA 0:Observation NA
#> 7 0.816 NA NA 0:Observation NA
#> 8 0.980 NA NA 0:Observation NA
#> 9 1.14 NA NA 0:Observation NA
#> 10 1.31 NA NA 0:Observation NA
#> # … with 41 more rows
An nlmixr model already includes information about the parameter estimates and can simulate without uncertainty in the population parameters or covariances, as is done for a VPC.
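For example, a VPC-style simulation at the point estimates only needs a number of subjects; a minimal sketch reusing the event table created above:
## Simulate 100 subjects at the point estimates (between-subject
## variability only; no uncertainty in the population parameters)
sim0 <- simulate(cmt2fit.logn, events = ev, nSub = 100)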
If you wish to simulate 100 patients in each of 100 theoretical studies, drawing from the uncertainty in the fixed-effect estimates and the covariances, you can do so very easily with nlmixr/RxODE:
set.seed(100)
sim1 <- simulate(cmt2fit.logn, events=ev, nSub=100, nStud=100)
print(sim1)
#> ▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂ Solved RxODE object ▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂
#> ── Parameters ($params): ───────────────────────────────────────────────────────
#> # A tibble: 10,000 × 9
#> sim.id eta.cl lcl eta.v lv lq lvp eta.ka lka
#> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 0.429 2.11 0.101 1.46 2.83 4.74 0.241 -1.02
#> 2 2 0.0925 2.11 -0.0205 1.46 2.83 4.74 -0.419 -1.02
#> 3 3 0.696 2.11 0.00909 1.46 2.83 4.74 0.162 -1.02
#> 4 4 -0.481 2.11 0.216 1.46 2.83 4.74 -0.356 -1.02
#> 5 5 0.525 2.11 -1.07 1.46 2.83 4.74 0.334 -1.02
#> 6 6 0.282 2.11 0.0938 1.46 2.83 4.74 -0.815 -1.02
#> 7 7 0.243 2.11 -0.792 1.46 2.83 4.74 -1.42 -1.02
#> 8 8 -0.704 2.11 0.215 1.46 2.83 4.74 -0.624 -1.02
#> 9 9 0.204 2.11 -0.0833 1.46 2.83 4.74 0.896 -1.02
#> 10 10 -0.430 2.11 -0.542 1.46 2.83 4.74 -0.375 -1.02
#> # … with 9,990 more rows
#> ── Initial Conditions ($inits): ────────────────────────────────────────────────
#> named numeric(0)
#>
#> Simulation with uncertainty in:
#> • parameters (sim1$thetaMat for changes)
#> • omega matrix (sim1$omegaList)
#> • sigma matrix (sim1$sigmaList)
#>
#> ── First part of data (object): ────────────────────────────────────────────────
#> # A tibble: 500,000 × 4
#> sim.id time ipred sim
#> <int> <dbl> <dbl> <dbl>
#> 1 1 0 0.548 0.677
#> 2 1 0.163 1.49 1.60
#> 3 1 0.327 1.77 1.84
#> 4 1 0.490 1.81 1.69
#> 5 1 0.653 1.78 2.16
#> 6 1 0.816 1.71 0.811
#> # … with 499,994 more rows
#> ▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂
You may examine the simulated study information easily, as shown in the RxODE printout:
head(sim1$thetaMat)
#> lka lv lcl lq lvp
#> [1,] 0.06804597 -0.14958839 0.073820512 -0.04441966 -0.002959164
#> [2,] 0.13844574 0.15744323 -0.069197598 0.10998123 0.025996318
#> [3,] 0.07723728 0.18447561 0.060867490 0.07178382 0.008246075
#> [4,] 0.04944842 -0.13097795 0.031735384 0.03362874 0.017450184
#> [5,] 0.03687544 0.07506332 -0.004122622 0.08505149 0.018399011
#> [6,] 0.01795239 -0.16514242 -0.038780240 -0.02887758 0.004205545
You can also see the simulated covariance matrices (note that they are drawn from an inverse-Wishart distribution):
head(sim1$omegaList)
#> [[1]]
#> [,1] [,2] [,3]
#> [1,] 0.348030401 -0.009258726 0.02309008
#> [2,] -0.009258726 1.749297508 -0.06284038
#> [3,] 0.023090081 -0.062840379 0.67659875
#>
#> [[2]]
#> [,1] [,2] [,3]
#> [1,] 0.31843998 -0.03933959 0.03600633
#> [2,] -0.03933959 1.83725983 -0.10232219
#> [3,] 0.03600633 -0.10232219 0.65008270
#>
#> [[3]]
#> [,1] [,2] [,3]
#> [1,] 0.31329329 -0.08803792 0.04564609
#> [2,] -0.08803792 2.03410491 -0.10428307
#> [3,] 0.04564609 -0.10428307 0.67570007
#>
#> [[4]]
#> [,1] [,2] [,3]
#> [1,] 0.25935493 0.04943556 0.02372883
#> [2,] 0.04943556 1.93885838 0.06716862
#> [3,] 0.02372883 0.06716862 0.73930475
#>
#> [[5]]
#> [,1] [,2] [,3]
#> [1,] 0.31401843 -0.02867440 0.02748918
#> [2,] -0.02867440 1.94908348 0.02233326
#> [3,] 0.02748918 0.02233326 0.73279617
#>
#> [[6]]
#> [,1] [,2] [,3]
#> [1,] 0.282509059 -0.06273188 0.007263501
#> [2,] -0.062731876 1.85964030 -0.117744614
#> [3,] 0.007263501 -0.11774461 0.628805639
head(sim1$sigmaList)
#> [[1]]
#> [,1]
#> [1,] 0.1425227
#>
#> [[2]]
#> [,1]
#> [1,] 0.146945
#>
#> [[3]]
#> [,1]
#> [1,] 0.1455487
#>
#> [[4]]
#> [,1]
#> [1,] 0.146411
#>
#> [[5]]
#> [,1]
#> [1,] 0.1404777
#>
#> [[6]]
#> [,1]
#> [1,] 0.1465778
It is also easy enough to create a plot to see what is going on with the simulation:
p1 <- plot(sim1) ## This returns a ggplot2 object
## You can tweak the plot with standard ggplot2 commands
p1 + xlab("Time (hr)") +
  ylab("Simulated concentrations at TID steady state")
# And put the same plot on a semi-log scale
p1 + xlab("Time (hr)") +
  ylab("Simulated concentrations at TID steady state") +
  xgx_scale_y_log10()
For more complex simulations with variability, you can also simulate dosing windows and sampling windows (see the sketch below) and summarize the results with any tool you wish.
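As an illustration, here is a sketch of time windows with RxODE's et(), assuming RxODE's window syntax in which a two-element vector inside a list defines a uniform window from which actual times are drawn at simulation time:
library(RxODE)
## Dose given sometime between 0 and 0.5 h, every 8 h for 24 h
evw <- et(time = list(c(0, 0.5)), amt = 100, ii = 8, until = 24) %>%
  ## Four sampling windows instead of fixed sampling times
  et(list(c(0, 2), c(2, 4), c(4, 6), c(6, 8)))
simw <- simulate(cmt2fit.logn, events = evw, nSub = 100)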