nlmixr

Changing models via piping

As in the running nlmixr vignette, let's start with a very simple PK example, using the single-dose theophylline dataset generously provided by Dr. Robert A. Upton of the University of California, San Francisco:


library(nlmixr)

## To allow nlmixr to reload runs without large run times
## To run the actual models on your system, take the save options off.
options(nlmixr.save=TRUE,
        nlmixr.save.dir=system.file(package="nlmixr"));

one.compartment <- function() {
    ini({
        tka <- 0.45 # Log Ka
        tcl <- 1 # Log Cl
        tv <- 3.45    # Log V
        eta.ka ~ 0.6
        eta.cl ~ 0.3
        eta.v ~ 0.1
        add.err <- 0.7
    })
    model({
        ka <- exp(tka + eta.ka)
        cl <- exp(tcl + eta.cl)
        v <- exp(tv + eta.v)
        d/dt(depot) = -ka * depot
        d/dt(center) = ka * depot - cl / v * center
        cp = center / v
        cp ~ add(add.err)
    })
}

We can try the First-Order Conditional Estimation with Interaction (FOCEi) method to find a good solution:

fit <- nlmixr(one.compartment, theo_sd, est="focei")
#> Loading model already run (/home/matt/R/x86_64-pc-linux-gnu-library/3.5/nlmixr/nlmixr-one.compartment-theo_sd-focei-cdab62d4795084a5a33ff1bba675543c.rds)

print(fit)
#> ── nlmixr FOCEi (outer: nlminb) fit ─────────────────────────────────────── 
#>           OBJF      AIC      BIC Log-likelihood Condition Number
#> FOCEi 116.8045 373.4043 393.5839      -179.7022         29.32241
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>            setup optimize covariance table    other
#> elapsed 0.401569 1.118783   1.118785 0.046 4.494863
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ────────────────────── 
#>         Parameter  Est.     SE %RSE Back-transformed(95%CI) BSV(CV%)
#> tka        Log Ka 0.469  0.203 43.3        1.6 (1.07, 2.38)     70.2
#> tcl        Log Cl  1.01  0.316 31.2       2.75 (1.48, 5.11)     26.8
#> tv          Log V  3.46 0.0728  2.1       31.8 (27.6, 36.7)     13.9
#> add.err           0.695                               0.695         
#>         Shrink(SD)%
#> tka          1.57% 
#> tcl          4.03% 
#> tv           10.3% 
#> add.err             
#> 
#>   Covariance Type ($covMethod): r,s
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#>   Minimization message ($message):  
#>     false convergence (8) 
#>   In an ODE system, false convergence may mean "useless" evaluations were performed.
#>   See https://stackoverflow.com/questions/40039114/r-nlminb-what-does-false-convergence-actually-mean
#>   It could also mean the convergence is poor, check results before accepting fit
#>   You may also try a good derivative free optimization:
#>     nlmixr(...,control=list(outerOpt="bobyqa"))
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 22
#>   ID     TIME    DV  EVID  PRED    RES   WRES IPRED   IRES  IWRES CPRED
#>   <fct> <dbl> <dbl> <int> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl> <dbl>
#> 1 1     0      0.74     0  0     0.74   0.391  0     0.74   1.07   0   
#> 2 1     0.25   2.84     0  3.27 -0.433 -0.229  3.85 -1.01  -1.45   3.23
#> 3 1     0.570  6.57     0  5.84  0.727  0.384  6.79 -0.215 -0.310  5.79
#> # … with 129 more rows, and 11 more variables: CRES <dbl>, CWRES <dbl>,
#> #   eta.ka <dbl>, eta.cl <dbl>, eta.v <dbl>, ka <dbl>, cl <dbl>, v <dbl>,
#> #   cp <dbl>, depot <dbl>, center <dbl>
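The Back-transformed(95%CI) column is obtained by exponentiating the log-scale estimate and its Wald interval (estimate ± 1.96·SE). A quick sanity check in plain R, using the tka values printed above, reproduces the table entries (assuming the usual normal-approximation interval):

```r
## Reproduce the Back-transformed(95%CI) entry for tka from the table above.
## est and se are copied from the printed output; 1.96 is the standard
## normal quantile for a 95% interval.
est <- 0.469   # Log Ka estimate
se  <- 0.203   # its standard error
round(exp(est), 2)               # 1.6  -> back-transformed estimate
round(exp(est - 1.96 * se), 2)   # 1.07 -> lower 95% bound
round(exp(est + 1.96 * se), 2)   # 2.38 -> upper 95% bound
```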

Changing and fixing parameter values in models

Something that you may want to do is change the initial estimates of a model. It is simple to edit the model definition and change them yourself, but you may also want to change them in a systematic way; for example, trying a range of starting values to see how the system behaves (either by full estimation or by a posthoc estimation). In these situations it can become tedious to modify the models by hand.

nlmixr provides the ability to:

  1. Change parameter estimates before or after running a model (i.e., update(tka=0.5)).
  2. Fix parameters to arbitrary values or to their estimated values (i.e., update(tka=fix(0.5)) or update(tka=fix)).

The easiest way to illustrate this is by showing a few examples of piping changes to the model:

## Example 1 -- Set initial estimate to 0.5 (shown w/posthoc)
one.ka.0.5 <- fit %>%
    update(tka=0.5) %>% 
    nlmixr(est="posthoc")

print(one.ka.0.5)
## Example 2 -- Fix tka to 0.5 and re-estimate.
one.ka.0.5 <- fit %>%
    update(tka=fix(0.5)) %>%
    nlmixr(est="focei")
#> Loading model already run (/home/matt/R/x86_64-pc-linux-gnu-library/3.5/nlmixr/nlmixr-one.compartment-theo_sd-focei-d62efb1b96bf7ef6ded9f8d993886f01.rds)

print(one.ka.0.5)
#> ── nlmixr FOCEi (outer: nlminb) fit ─────────────────────────────────────── 
#>           OBJF      AIC      BIC Log-likelihood Condition Number
#> FOCEi 116.8398 371.4396 388.7364      -179.7198         7.394079
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>            setup optimize covariance table   other
#> elapsed 0.376321 0.647163   0.647166 0.078 0.97935
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ────────────────────── 
#>         Parameter  Est.     SE  %RSE Back-transformed(95%CI) BSV(CV%)
#> tka        Log Ka   0.5  FIXED FIXED                    1.65     70.2
#> tcl        Log Cl  1.01 0.0763  7.54        2.75 (2.37, 3.2)     26.8
#> tv          Log V  3.46 0.0551  1.59       31.8 (28.6, 35.5)     13.9
#> add.err           0.695                                0.695         
#>         Shrink(SD)%
#> tka          1.48% 
#> tcl          4.06% 
#> tv           10.3% 
#> add.err             
#> 
#>   Covariance Type ($covMethod): r,s
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#>   Minimization message ($message):  
#>     false convergence (8) 
#>   In an ODE system, false convergence may mean "useless" evaluations were performed.
#>   See https://stackoverflow.com/questions/40039114/r-nlminb-what-does-false-convergence-actually-mean
#>   It could also mean the convergence is poor, check results before accepting fit
#>   You may also try a good derivative free optimization:
#>     nlmixr(...,control=list(outerOpt="bobyqa"))
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 22
#>   ID     TIME    DV  EVID  PRED    RES   WRES IPRED   IRES  IWRES CPRED
#>   <fct> <dbl> <dbl> <int> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl> <dbl>
#> 1 1     0      0.74     0  0     0.74   0.391  0     0.74   1.07   0   
#> 2 1     0.25   2.84     0  3.36 -0.517 -0.273  3.85 -1.01  -1.45   3.33
#> 3 1     0.570  6.57     0  5.95  0.616  0.325  6.79 -0.219 -0.315  5.91
#> # … with 129 more rows, and 11 more variables: CRES <dbl>, CWRES <dbl>,
#> #   eta.ka <dbl>, eta.cl <dbl>, eta.v <dbl>, ka <dbl>, cl <dbl>, v <dbl>,
#> #   cp <dbl>, depot <dbl>, center <dbl>
## Example 3 -- Fix tka to model estimated value and re-estimate.
one.ka.0.5 <- fit %>%
    update(tka=fix) %>%
    nlmixr(est="focei")
#> Loading model already run (/home/matt/R/x86_64-pc-linux-gnu-library/3.5/nlmixr/nlmixr-one.compartment-theo_sd-focei-8511f1da52ffb52a64046a9b12e4c314.rds)

print(one.ka.0.5)
#> ── nlmixr FOCEi (outer: nlminb) fit ─────────────────────────────────────── 
#>           OBJF      AIC      BIC Log-likelihood Condition Number
#> FOCEi 116.8049 371.4046 388.7015      -179.7023         7.971318
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>            setup optimize covariance table    other
#> elapsed 0.356382   0.6514   0.651403 0.042 0.750815
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ────────────────────── 
#>         Parameter  Est.     SE  %RSE Back-transformed(95%CI) BSV(CV%)
#> tka        Log Ka 0.469  FIXED FIXED                     1.6     70.2
#> tcl        Log Cl  1.01 0.0812  8.02       2.75 (2.35, 3.23)     26.8
#> tv          Log V  3.46 0.0611  1.77       31.8 (28.2, 35.9)     13.9
#> add.err           0.695                                0.695         
#>         Shrink(SD)%
#> tka          1.58% 
#> tcl          4.03% 
#> tv           10.4% 
#> add.err             
#> 
#>   Covariance Type ($covMethod): r,s
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#>   Minimization message ($message):  
#>     false convergence (8) 
#>   In an ODE system, false convergence may mean "useless" evaluations were performed.
#>   See https://stackoverflow.com/questions/40039114/r-nlminb-what-does-false-convergence-actually-mean
#>   It could also mean the convergence is poor, check results before accepting fit
#>   You may also try a good derivative free optimization:
#>     nlmixr(...,control=list(outerOpt="bobyqa"))
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 22
#>   ID     TIME    DV  EVID  PRED    RES   WRES IPRED   IRES  IWRES CPRED
#>   <fct> <dbl> <dbl> <int> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl> <dbl>
#> 1 1     0      0.74     0  0     0.74   0.391  0     0.74   1.07   0   
#> 2 1     0.25   2.84     0  3.27 -0.433 -0.229  3.85 -1.01  -1.45   3.23
#> 3 1     0.570  6.57     0  5.84  0.727  0.384  6.79 -0.216 -0.310  5.79
#> # … with 129 more rows, and 11 more variables: CRES <dbl>, CWRES <dbl>,
#> #   eta.ka <dbl>, eta.cl <dbl>, eta.v <dbl>, ka <dbl>, cl <dbl>, v <dbl>,
#> #   cp <dbl>, depot <dbl>, center <dbl>
## Example 4 -- Change tka to 0.7 in original model function and then estimate
one.ka.0.7 <- one.compartment %>% 
    update(tka=0.7) %>% 
    nlmixr(data=theo_sd,est="focei")
#> Loading model already run (/home/matt/R/x86_64-pc-linux-gnu-library/3.5/nlmixr/nlmixr-focei-f22f76916ff54c7252a88764f14d8b80.rds)

print(one.ka.0.7)
#> ── nlmixr FOCEi (outer: nlminb) fit ─────────────────────────────────────── 
#>           OBJF      AIC      BIC Log-likelihood Condition Number
#> FOCEi 116.8071 373.4068 393.5865      -179.7034         47.02366
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>            setup optimize covariance table    other
#> elapsed 0.348489 1.155553   1.155557 0.027 3.131401
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ────────────────────── 
#>         Parameter  Est.     SE %RSE Back-transformed(95%CI) BSV(CV%)
#> tka        Log Ka  0.46  0.198   43       1.58 (1.08, 2.33)     70.2
#> tcl        Log Cl  1.01 0.0728 7.19       2.75 (2.39, 3.17)     26.5
#> tv          Log V  3.46 0.0554  1.6       31.8 (28.5, 35.4)     14.1
#> add.err           0.695                               0.695         
#>         Shrink(SD)%
#> tka          1.55% 
#> tcl          3.47% 
#> tv           10.7% 
#> add.err             
#> 
#>   Covariance Type ($covMethod): r,s
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#>   Minimization message ($message):  
#>     false convergence (8) 
#>   In an ODE system, false convergence may mean "useless" evaluations were performed.
#>   See https://stackoverflow.com/questions/40039114/r-nlminb-what-does-false-convergence-actually-mean
#>   It could also mean the convergence is poor, check results before accepting fit
#>   You may also try a good derivative free optimization:
#>     nlmixr(...,control=list(outerOpt="bobyqa"))
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 22
#>   ID     TIME    DV  EVID  PRED    RES   WRES IPRED   IRES  IWRES CPRED
#>   <fct> <dbl> <dbl> <int> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl> <dbl>
#> 1 1     0      0.74     0  0     0.74   0.391  0     0.74   1.07   0   
#> 2 1     0.25   2.84     0  3.25 -0.413 -0.218  3.84 -1.00  -1.45   3.21
#> 3 1     0.570  6.57     0  5.82  0.752  0.397  6.78 -0.214 -0.308  5.76
#> # … with 129 more rows, and 11 more variables: CRES <dbl>, CWRES <dbl>,
#> #   eta.ka <dbl>, eta.cl <dbl>, eta.v <dbl>, ka <dbl>, cl <dbl>, v <dbl>,
#> #   cp <dbl>, depot <dbl>, center <dbl>

Changing model features

When developing models, you often add and remove between subject variability on parameters, add covariate effects, and/or change the residual errors. You can change lines in the model by piping the fit, or the nlmixr model specification function, through update().

Adding or Removing between subject variability

Often in developing a model you add and remove between subject variability on certain model parameters. For example, you could remove the between subject variability on the ka parameter by changing that line in the model.

For example, to remove an eta from a prior fit or a prior model specification function, simply pipe it through update() with the revised model line. You can then re-estimate by piping the result to the nlmixr function again.

## Remove eta.ka on ka
noEta <- fit %>%
    update(ka <- exp(tka)) %>%
    nlmixr(est="focei")
#> Loading model already run (/home/matt/R/x86_64-pc-linux-gnu-library/3.5/nlmixr/nlmixr-focei-9d74c5deda3a8cdbadb4090d613a7e00.rds)

print(noEta)
#> ── nlmixr FOCEi (outer: nlminb) fit ─────────────────────────────────────── 
#>           OBJF      AIC     BIC Log-likelihood Condition Number
#> FOCEi 176.5764 431.1762 448.473      -209.5881         33.66599
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>            setup optimize covariance table    other
#> elapsed 0.349456 0.596201   0.596205 0.016 1.968138
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ────────────────────── 
#>         Parameter  Est.     SE %RSE Back-transformed(95%CI) BSV(CV%)
#> tka        Log Ka 0.433  0.168 38.8       1.54 (1.11, 2.14)         
#> tcl        Log Cl  0.99 0.0743 7.51       2.69 (2.33, 3.11)     30.2
#> tv          Log V  3.48 0.0479 1.38       32.5 (29.5, 35.6)     15.4
#> add.err            1.02                                1.02         
#>         Shrink(SD)%
#> tka                
#> tcl          7.70% 
#> tv           7.12% 
#> add.err             
#> 
#>   Covariance Type ($covMethod): r,s
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#>   Minimization message ($message):  
#>     false convergence (8) 
#>   In an ODE system, false convergence may mean "useless" evaluations were performed.
#>   See https://stackoverflow.com/questions/40039114/r-nlminb-what-does-false-convergence-actually-mean
#>   It could also mean the convergence is poor, check results before accepting fit
#>   You may also try a good derivative free optimization:
#>     nlmixr(...,control=list(outerOpt="bobyqa"))
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 21
#>   ID     TIME    DV  EVID  PRED    RES   WRES IPRED   IRES  IWRES CPRED
#>   <fct> <dbl> <dbl> <int> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl> <dbl>
#> 1 1     0      0.74     0  0     0.74   0.709  0     0.74   0.725  0   
#> 2 1     0.25   2.84     0  3.12 -0.280 -0.268  3.57 -0.732 -0.717  3.09
#> 3 1     0.570  6.57     0  5.61  0.957  0.917  6.45  0.120  0.117  5.56
#> # … with 129 more rows, and 10 more variables: CRES <dbl>, CWRES <dbl>,
#> #   eta.cl <dbl>, eta.v <dbl>, ka <dbl>, cl <dbl>, v <dbl>, cp <dbl>,
#> #   depot <dbl>, center <dbl>

Of course, you could also add an eta on a parameter in the same way:

addBackKa <- noEta %>%
    update({ka <- exp(tka + bsv.ka)}) %>%
    ini(bsv.ka=0.1) %>%
    nlmixr(est="focei")
#> Loading model already run (/home/matt/R/x86_64-pc-linux-gnu-library/3.5/nlmixr/nlmixr-focei-d3a4f368901f6feadc5f9cd775e92f11.rds)

print(addBackKa)
#> ── nlmixr FOCEi (outer: nlminb) fit ─────────────────────────────────────── 
#>           OBJF      AIC      BIC Log-likelihood Condition Number
#> FOCEi 116.8129 373.4127 393.5923      -179.7063         62.15095
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>            setup optimize covariance table    other
#> elapsed 0.369516 1.109735   1.109738 0.069 3.513011
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ────────────────────── 
#>         Parameter  Est.     SE %RSE Back-transformed(95%CI) BSV(CV%)
#> tka        Log Ka 0.473  0.204 43.1        1.6 (1.08, 2.39)     70.5
#> tcl        Log Cl  1.01 0.0725 7.14       2.76 (2.39, 3.18)     27.2
#> tv          Log V  3.46 0.0419 1.21       31.8 (29.3, 34.5)     13.8
#> add.err           0.693                               0.693         
#>         Shrink(SD)%
#> tka          1.83% 
#> tcl          4.89% 
#> tv           10.3% 
#> add.err             
#> 
#>   Covariance Type ($covMethod): r,s
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#>   Minimization message ($message):  
#>     false convergence (8) 
#>   In an ODE system, false convergence may mean "useless" evaluations were performed.
#>   See https://stackoverflow.com/questions/40039114/r-nlminb-what-does-false-convergence-actually-mean
#>   It could also mean the convergence is poor, check results before accepting fit
#>   You may also try a good derivative free optimization:
#>     nlmixr(...,control=list(outerOpt="bobyqa"))
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 22
#>   ID     TIME    DV  EVID  PRED    RES   WRES IPRED   IRES  IWRES CPRED
#>   <fct> <dbl> <dbl> <int> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl> <dbl>
#> 1 1     0      0.74     0  0     0.74   0.375  0     0.74   1.07   0   
#> 2 1     0.25   2.84     0  3.29 -0.449 -0.227  3.85 -1.01  -1.45   3.25
#> 3 1     0.570  6.57     0  5.86  0.705  0.357  6.79 -0.216 -0.312  5.81
#> # … with 129 more rows, and 11 more variables: CRES <dbl>, CWRES <dbl>,
#> #   eta.cl <dbl>, eta.v <dbl>, bsv.ka <dbl>, ka <dbl>, cl <dbl>, v <dbl>,
#> #   cp <dbl>, depot <dbl>, center <dbl>

You can see the name change by examining the omega matrix:

addBackKa$omega
#>            eta.cl      eta.v  bsv.ka
#> eta.cl 0.07161515 0.00000000 0.00000
#> eta.v  0.00000000 0.01889769 0.00000
#> bsv.ka 0.00000000 0.00000000 0.40355
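Assuming the usual log-normal relationship, the BSV(CV%) column can be recovered from the omega diagonal as 100·sqrt(exp(ω) − 1); applying it to the bsv.ka variance above reproduces the 70.5% reported for tka in the addBackKa fit:

```r
## CV% for a log-normally distributed parameter, from its omega variance.
cv.pct <- function(omega) 100 * sqrt(exp(omega) - 1)
round(cv.pct(0.40355), 1)   # 70.5 -> matches the BSV(CV%) column
```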

Note that new between subject variability parameters are distinguished from other types of parameters (i.e., population parameters and individual covariates) by their names. Parameters whose names start or end with any of the following are assumed to be between subject variability parameters:

  • eta (from NONMEM convention)
  • ppv (per patient variability)
  • psv (per subject variability)
  • iiv (inter-individual variability)
  • bsv (between subject variability)
  • bpv (between patient variability)
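The naming rule above can be sketched in plain R (a simplified illustration only; nlmixr's actual internal matching may differ):

```r
## Does a parameter name mark a between subject variability (BSV) term?
## Simplified sketch of the naming convention listed above: a name
## qualifies if it starts or ends with one of these tags.
bsvTags <- c("eta", "ppv", "psv", "iiv", "bsv", "bpv")
isBsvName <- function(name) {
  any(vapply(bsvTags, function(tag)
    grepl(paste0("^", tag), name, ignore.case = TRUE) ||
      grepl(paste0(tag, "$"), name, ignore.case = TRUE),
    logical(1)))
}
isBsvName("eta.ka")   # TRUE  -- starts with "eta"
isBsvName("bsv.ka")   # TRUE  -- starts with "bsv"
isBsvName("tka")      # FALSE -- a population parameter
```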

Adding Covariate effects

## Note: currently the "cov" prefix is needed so nlmixr knows this is an
## estimated parameter, not a data covariate
wt70 <- fit %>% ## FIXME could be based on if it finds the covariate in the last used nlmixr data.
    update({cl <- exp(tcl + eta.cl)*(WT/70)^covWtPow}) %>%
    update(covWtPow=fix(0.75)) %>%
    nlmixr(est="focei")
#> Loading model already run (/home/matt/R/x86_64-pc-linux-gnu-library/3.5/nlmixr/nlmixr-focei-a0f9f608da09ddce9f6a5348fbbeadb8.rds)

print(wt70)
#> ── nlmixr FOCEi (outer: nlminb) fit ─────────────────────────────────────── 
#>           OBJF      AIC      BIC Log-likelihood Condition Number
#> FOCEi 116.1449 372.7447 392.9243      -179.3723         53.97154
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>            setup optimize covariance table    other
#> elapsed 0.378631 1.237841   1.237845 0.015 2.534683
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ────────────────────── 
#>          Parameter  Est.     SE  %RSE Back-transformed(95%CI) BSV(CV%)
#> tka         Log Ka 0.469  0.174  37.1        1.6 (1.14, 2.25)     69.5
#> tcl         Log Cl  1.03 0.0606   5.9       2.79 (2.48, 3.15)     26.4
#> tv           Log V  3.46  0.035  1.01         31.7 (29.6, 34)     13.7
#> add.err            0.696                                0.696         
#> covWtPow            0.75  FIXED FIXED                    0.75         
#>          Shrink(SD)%
#> tka           1.49% 
#> tcl           5.51% 
#> tv            11.8% 
#> add.err             
#> covWtPow             
#> 
#>   Covariance Type ($covMethod): r,s
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#>   Minimization message ($message):  
#>     false convergence (8) 
#>   In an ODE system, false convergence may mean "useless" evaluations were performed.
#>   See https://stackoverflow.com/questions/40039114/r-nlminb-what-does-false-convergence-actually-mean
#>   It could also mean the convergence is poor, check results before accepting fit
#>   You may also try a good derivative free optimization:
#>     nlmixr(...,control=list(outerOpt="bobyqa"))
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 23
#>   ID     TIME    DV  EVID    WT  PRED    RES   WRES IPRED   IRES  IWRES
#>   <fct> <dbl> <dbl> <int> <dbl> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl>
#> 1 1     0      0.74     0  79.6  0     0.74   0.391  0     0.74   1.06 
#> 2 1     0.25   2.84     0  79.6  3.28 -0.443 -0.234  3.84 -0.999 -1.44 
#> 3 1     0.570  6.57     0  79.6  5.85  0.722  0.381  6.78 -0.212 -0.305
#> # … with 129 more rows, and 12 more variables: CPRED <dbl>, CRES <dbl>,
#> #   CWRES <dbl>, eta.ka <dbl>, eta.cl <dbl>, eta.v <dbl>, ka <dbl>,
#> #   cl <dbl>, v <dbl>, cp <dbl>, depot <dbl>, center <dbl>

Changing residual errors

Changing the residual errors is just as easy: simply specify the new error model you wish to use:

## Since some observations in the data have DV == 0, these are changed to 0.0150 to show the proportional error change.
d <- theo_sd
d$DV[d$EVID == 0 & d$DV == 0] <- 0.0150

addPropModel <- fit %>%
    update({cp ~ add(add.err)+prop(prop.err)}) %>%
    update(prop.err=0.1) %>%
    nlmixr(d,est="focei")
#> Loading model already run (/home/matt/R/x86_64-pc-linux-gnu-library/3.5/nlmixr/nlmixr-focei-a009112ae5af2c031d874d07154bf324.rds)

print(addPropModel)
#> ── nlmixr FOCEi (outer: nlminb) fit ─────────────────────────────────────── 
#>           OBJF      AIC      BIC Log-likelihood Condition Number
#> FOCEi 104.3524 362.9522 386.0146      -173.4761         71.93474
#> 
#> ── Time (sec; $time): ───────────────────────────────────────────────────── 
#>            setup optimize covariance table    other
#> elapsed 0.384517 1.105729   1.105732 0.025 7.278022
#> 
#> ── Population Parameters ($parFixed or $parFixedDf): ────────────────────── 
#>          Parameter  Est.     SE %RSE Back-transformed(95%CI) BSV(CV%)
#> tka         Log Ka 0.389  0.243 62.5      1.47 (0.916, 2.37)     68.7
#> tcl         Log Cl  1.02 0.0795 7.76       2.79 (2.38, 3.26)     25.9
#> tv           Log V  3.46 0.0498 1.44         31.9 (29, 35.2)     12.5
#> add.err            0.275                               0.275         
#> prop.err           0.134                               0.134         
#>          Shrink(SD)%
#> tka           1.81% 
#> tcl           1.51% 
#> tv            15.5% 
#> add.err             
#> prop.err             
#> 
#>   Covariance Type ($covMethod): r,s
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance ($omega) or correlation ($omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in $shrink 
#>   Minimization message ($message):  
#>     false convergence (8) 
#>   In an ODE system, false convergence may mean "useless" evaluations were performed.
#>   See https://stackoverflow.com/questions/40039114/r-nlminb-what-does-false-convergence-actually-mean
#>   It could also mean the convergence is poor, check results before accepting fit
#>   You may also try a good derivative free optimization:
#>     nlmixr(...,control=list(outerOpt="bobyqa"))
#> 
#> ── Fit Data (object is a modified tibble): ──────────────────────────────── 
#> # A tibble: 132 x 22
#>   ID     TIME    DV  EVID  PRED    RES   WRES IPRED   IRES  IWRES CPRED
#>   <fct> <dbl> <dbl> <int> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl> <dbl>
#> 1 1     0      0.74     0  0     0.74   0.396  0     0.74   2.69   0   
#> 2 1     0.25   2.84     0  3.05 -0.214 -0.114  3.46 -0.624 -1.16   3.03
#> 3 1     0.570  6.57     0  5.54  1.03   0.541  6.23  0.337  0.383  5.51
#> # … with 129 more rows, and 11 more variables: CRES <dbl>, CWRES <dbl>,
#> #   eta.ka <dbl>, eta.cl <dbl>, eta.v <dbl>, ka <dbl>, cl <dbl>, v <dbl>,
#> #   cp <dbl>, depot <dbl>, center <dbl>

Simulating new scenarios with nlmixr and RxODE

Once you have a fit, you can easily simulate different dosing regimens by piping as well.

For example, you can simulate 100 subjects, taking only the variability defined by the between subject variability matrix (omega) and the unexplained variability matrix (sigma):

fit %>% 
    et(amt=300,ii=12,until=48) %>% # Now use amt=300 BID for 48 hrs
    rxSolve(nSub=100) %>% # Simulate 100 subjects
    confint() %>% # Create intervals (not exactly confidence intervals)
    plot() # Plot the intervals
#> Warning in rxSolve_(object, .ctl, .nms, .xtra, params, events, inits,
#> setupOnly = .setupOnly): 'thetaMat' is ignored since nStud <= 1.
#> Warning in confint.rxSolve(.): In order to put confidence bands around the
#> intervals, you need at least 2500 simulations.
#> Summarizing data
#> done.

You can also simulate by sampling from the uncertainty in the omega, sigma and theta matrices. The simulation needs:

  • the population covariance matrix for the (truncated) multivariate normal distribution (thetaMat),
  • the number of observations and the number of subjects in the model, for the covariance re-sampling.

These are all stored in the nlmixr fit object.

This means you may simply:

  • Specify the number of “studies” to sample, nStud. Each study is a single sample of the theta parameters as well as the omega and sigma matrices.
  • Specify the number of subjects to sample within each “study”, nSub.

In the end, it is simply done with nSub=x, nStud=y, as in the following example:

if (file.exists("modelPiping-sim0.rds")) {
    sim0 <- readRDS("modelPiping-sim0.rds")
} else {
    sim0 <- fit %>% 
        et(amt=300,ii=12,until=48) %>% # Now use amt=300 BID for 48 hrs
        rxSolve(nSub=50, nStud=50) %>% # Simulate with 50 subjects and 50 studies
        confint("sim")
    saveRDS(sim0, "modelPiping-sim0.rds")
}

sim0 %>% # Create intervals (not exactly confidence intervals)
    plot() # Plot the intervals