Control Options for dynmodel
dynmodelControl(
...,
ci = 0.95,
nlmixrOutput = FALSE,
digs = 3,
lower = -Inf,
upper = Inf,
method = c("bobyqa", "Nelder-Mead", "lbfgsb3c", "L-BFGS-B", "PORT", "mma",
"lbfgsbLG", "slsqp", "Rvmmin"),
maxeval = 999,
scaleTo = 1,
scaleObjective = 0,
normType = c("rescale2", "constant", "mean", "rescale", "std", "len"),
scaleType = c("nlmixr", "norm", "mult", "multAdd"),
scaleCmax = 1e+05,
scaleCmin = 1e-05,
scaleC = NULL,
scaleC0 = 1e+05,
atol = NULL,
rtol = NULL,
ssAtol = NULL,
ssRtol = NULL,
npt = NULL,
rhobeg = 0.2,
rhoend = NULL,
iprint = 0,
print = 1,
maxfun = NULL,
trace = 0,
factr = NULL,
pgtol = NULL,
abstol = NULL,
reltol = NULL,
lmm = NULL,
maxit = 100000L,
eval.max = NULL,
iter.max = NULL,
abs.tol = NULL,
rel.tol = NULL,
x.tol = NULL,
xf.tol = NULL,
step.min = NULL,
step.max = NULL,
sing.tol = NULL,
scale.init = NULL,
diff.g = NULL,
boundTol = NULL,
epsilon = NULL,
derivSwitchTol = NULL,
sigdig = 4,
covMethod = c("nlmixrHess", "optimHess"),
gillK = 10L,
gillStep = 4,
gillFtol = 0,
gillRtol = sqrt(.Machine$double.eps),
gillKcov = 10L,
gillStepCov = 2,
gillFtolCov = 0,
rxControl = NULL
)
...: Other arguments, including scaling factors for each compartment. This includes S# = numeric, which scales compartment # by dividing the compartment amount by the scale factor, as in NONMEM.
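For example (the scale factor 70 below is illustrative only), compartment 2 can be scaled by dividing its amounts by a constant:

library(nlmixr)
## NONMEM-style S2 scaling: divide compartment 2 amounts by 70
ctl <- dynmodelControl(S2 = 70)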
ci: Confidence level for some tables. By default this is 0.95 (95% confidence).
nlmixrOutput: Option to change the output style to nlmixr output. By default this is FALSE.
digs: Number of significant digits in the output. By default this is 3.
lower: Lower bounds on the parameters used in optimization. By default this is -Inf.
upper: Upper bounds on the parameters used in optimization. By default this is Inf.
method: The method for solving ODEs. Currently this supports:
"liblsoda" -- thread-safe lsoda. This supports parallel thread-based solving, and ignores user Jacobian specification.
"lsoda" -- LSODA solver. Does not support parallel thread-based solving, but allows user Jacobian specification.
"dop853" -- DOP853 solver. Does not support parallel thread-based solving nor user Jacobian specification.
"indLin" -- solving through inductive linearization. The RxODE dll must be set up specially to use this solving routine.
maxeval: Maximum number of iterations for the Nelder-Mead simplex search. By default this is 999.
scaleTo: Scale the initial parameter estimate to this value. By default this is 1. When zero or below, no scaling is performed.
scaleObjective: Scale the initial objective function to this value. By default this is 0; when zero or below, the objective function is not scaled.
normType: This is the type of parameter normalization/scaling used to get the scaled initial values for nlmixr. These are used with a scaleType of "nlmixr". With the exception of rescale2, these come from Feature Scaling. The rescale2 scaling is the same type described in the OptdesX software manual.

In general, all of the scaling formulas can be described by:

v_scaled = (v_unscaled - C_1)/C_2

where C_1 and C_2 are constants chosen by the normalization type below:
rescale2
This scales all parameters to the range (-1, 1). The relative differences between the parameters are preserved with this approach, and the constants are:
C_1 = (max(all unscaled values) + min(all unscaled values))/2
C_2 = (max(all unscaled values) - min(all unscaled values))/2

rescale
or min-max normalization. This rescales all parameters to the range (0, 1). As with rescale2, the relative differences are preserved. In this approach:
C_1 = min(all unscaled values)
C_2 = max(all unscaled values) - min(all unscaled values)

mean
or mean normalization. This centers the parameters around the mean, with the parameters ranging from 0 to 1. In this approach:
C_1 = mean(all unscaled values)
C_2 = max(all unscaled values) - min(all unscaled values)

std
or standardization. This standardizes by the mean and standard deviation. In this approach:
C_1 = mean(all unscaled values)
C_2 = sd(all unscaled values)

len
or unit-length scaling. This scales the parameters to unit length using the Euclidean length, that is:
C_1 = 0
C_2 = sqrt(v_1^2 + v_2^2 + ... + v_n^2)

constant
which does not perform data normalization. That is:
C_1 = 0
C_2 = 1
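As a concrete illustration (this is not nlmixr's internal code; the helper name normConstants is invented here), the constants above can be computed in base R for a vector of unscaled initial estimates:

## Compute C_1 and C_2 for the normType options described above;
## v is a numeric vector of unscaled initial estimates.
normConstants <- function(v, type = c("rescale2", "rescale", "mean",
                                      "std", "len", "constant")) {
  type <- match.arg(type)
  switch(type,
         rescale2 = c(C1 = (max(v) + min(v)) / 2, C2 = (max(v) - min(v)) / 2),
         rescale  = c(C1 = min(v),  C2 = max(v) - min(v)),
         mean     = c(C1 = mean(v), C2 = max(v) - min(v)),
         std      = c(C1 = mean(v), C2 = sd(v)),
         len      = c(C1 = 0,       C2 = sqrt(sum(v^2))),
         constant = c(C1 = 0,       C2 = 1))
}
## v_scaled = (v - C_1)/C_2; rescale2 maps these values onto (-1, 1):
cst <- normConstants(c(0.5, 2, 10), "rescale2")
(c(0.5, 2, 10) - cst[["C1"]]) / cst[["C2"]]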
scaleType: The scaling scheme for nlmixr. The supported types are:

nlmixr
In this approach the scaling is performed by the following equation:
v_scaled = (v_current - v_init)/scaleC[i] + scaleTo
The scaleTo parameter is specified by the normType, and the scales are specified by scaleC.

norm
This approach uses the simple scaling provided by the normType argument.

mult
This approach does not use the data normalization provided by normType, but rather uses multiplicative scaling to a constant provided by the scaleTo argument. In this case:
v_scaled = v_current/v_init * scaleTo

multAdd
This approach changes the scaling based on the parameter being specified. If a parameter is defined in an exponential block (ie exp(theta)), then it is scaled linearly, that is:
v_scaled = (v_current - v_init) + scaleTo
Otherwise the parameter is scaled multiplicatively:
v_scaled = v_current/v_init * scaleTo
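The nlmixr scaling above is easy to check by hand; the helper names scalePar and unscalePar below are invented for this sketch and are not part of the package:

## v_scaled = (v_current - v_init)/scaleC + scaleTo, and its inverse
scalePar   <- function(v, vInit, scaleC, scaleTo = 1) (v - vInit) / scaleC + scaleTo
unscalePar <- function(s, vInit, scaleC, scaleTo = 1) (s - scaleTo) * scaleC + vInit
## The initial estimate maps to scaleTo; a change of scaleC in the unscaled
## space corresponds to a unit step in the scaled space.
scalePar(c(0.5, 0.6), vInit = 0.5, scaleC = 0.1)  # 1 and 2
unscalePar(2, vInit = 0.5, scaleC = 0.1)          # back to 0.6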
scaleCmax: Maximum value of the scaleC to prevent overflow.
scaleCmin: Minimum value of the scaleC to prevent underflow.
scaleC: The scaling constant used with scaleType = nlmixr. When not specified, it is based on the type of parameter that is estimated. The idea is to keep the derivatives similar on a log scale so that the gradient sizes are similar. Hence parameters like log(exp(theta)) would have a scaling factor of 1 and log(theta) would have a scaling factor of ini_value (to scale by 1/value; ie d/dt(log(ini_value)) = 1/ini_value, or scaleC = ini_value).

For parameters in an exponential (ie exp(theta)), or parameters specifying powers, boxCox or yeoJohnson transformations, this is 1.
For additive, proportional, and lognormal error structures, these are given by 0.5*abs(initial_estimate).
Factorials are scaled by abs(1/digamma(initial_estimate+1)).
Parameters on a log scale (ie log(theta)) are scaled by log(abs(initial_estimate))*abs(initial_estimate).

These parameter scaling coefficients are chosen to try to keep similar slopes among parameters; that is, they all follow the slopes approximately on a log scale. While these are chosen in a logical manner, they may not always apply. You can specify each parameter's scaling factor with this argument if you wish.
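If the automatic choices are not suitable, a per-parameter vector can be supplied directly (the values below are purely illustrative, with nlmixr attached and one entry per estimated parameter in the model's parameter order):

## Supply explicit scaling constants instead of the automatic defaults
ctl <- dynmodelControl(scaleType = "nlmixr", scaleC = c(1, 1, 0.5))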
scaleC0: Number to adjust the scaling factor by if the initial gradient is zero.
atol: A numeric absolute tolerance (1e-8 by default) used by the ODE solver to determine if a good solution has been achieved. This is also used in the solved linear model to check that prior doses do not add anything to the solution.
rtol: A numeric relative tolerance (1e-6 by default) used by the ODE solver to determine if a good solution has been achieved. This is also used in the solved linear model to check that prior doses do not add anything to the solution.
ssAtol: Steady-state absolute tolerance (atol) convergence factor. Can be a vector, with one value per state.
ssRtol: Steady-state relative tolerance (rtol) convergence factor. Can be a vector, with one value per state.
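For example, tighter ODE and steady-state tolerances can be requested through these arguments (illustrative values, nlmixr attached):

ctl <- dynmodelControl(atol = 1e-10, rtol = 1e-8,
                       ssAtol = 1e-8, ssRtol = 1e-6)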
npt: The number of points used to approximate the objective function via a quadratic approximation for bobyqa. The value of npt must be in the interval [n+2, (n+1)(n+2)/2], where n is the number of parameters in par. Choices that exceed 2*n+1 are not recommended. If not defined, it will be set to 2*n + 1.
rhobeg: Beginning change in parameters for the bobyqa algorithm (trust region). By default this is 0.2, or 20% when the parameters are scaled to 1. rhobeg and rhoend must be set to the initial and final values of the trust-region radius, so both must be positive with 0 < rhoend < rhobeg. Typically rhobeg should be about one tenth of the greatest expected change to a variable. Note also that the smallest difference abs(upper - lower) should be greater than or equal to rhobeg*2. If this is not the case then rhobeg will be adjusted.
rhoend: The smallest value of the trust-region radius that is allowed. If not defined, 10^(-sigdig - 1) will be used.
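With the default sigdig = 4, the implied rhoend and its relationship to rhobeg can be checked directly:

sigdig <- 4; rhobeg <- 0.2
rhoend <- 10^(-sigdig - 1)  # 1e-5
rhobeg > rhoend             # TRUE, so 0 < rhoend < rhobeg as required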
iprint: Print option for the optimization. See bobyqa, lbfgsb3c, and lbfgs for more details. By default this is 0.
print: Integer controlling how often the outer step is printed. When this is 0, the iterations are not printed; 1 prints every function evaluation (the default); 5 prints every 5 evaluations.
maxfun: The maximum allowed number of function evaluations. If this is exceeded, the method will terminate. See bobyqa for more details. By default this value is NULL.
trace: Tracing information on the progress of the optimization is produced. See bobyqa, lbfgsb3c, and lbfgs for more details. By default this is 0.
factr: Controls the convergence of the "L-BFGS-B" method. Convergence occurs when the reduction in the objective is within this factor of the machine tolerance. The default is 1e10, which gives a tolerance of about 2e-6, approximately 4 significant digits. You can check your exact tolerance by multiplying this value by .Machine$double.eps.
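For instance, the tolerance implied by the default factr can be computed in R:

1e10 * .Machine$double.eps  # about 2.2e-6, i.e. roughly 4 significant digits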
pgtol: A double-precision variable. On entry pgtol >= 0 is specified by the user. The iteration will stop when:
max{|proj g_i| : i = 1, ..., n} <= pgtol
where proj g_i is the i-th component of the projected gradient. On exit pgtol is unchanged. This defaults to zero, in which case the check is suppressed.
abstol: Absolute tolerance for the nlmixr optimizer.
reltol: Relative tolerance for the nlmixr optimizer.
lmm: An integer giving the number of BFGS updates retained in the "L-BFGS-B" method. It defaults to 7.
maxit: Maximum number of iterations for lbfgsb3c. See lbfgsb3c for more details. By default this is 100000L.
eval.max: Maximum number of evaluations of the objective function allowed.
iter.max: Maximum number of iterations allowed.
abs.tol: Used in Nelder-Mead optimization and PORT optimization. Absolute tolerance. Defaults to 0 so the absolute convergence test is not used. If the objective function is known to be non-negative, the previous default of 1e-20 would be more appropriate.
rel.tol: Relative tolerance before nlminb stops.
x.tol: X tolerance for the nlmixr optimizers.
xf.tol: Used in Nelder-Mead optimization and PORT optimization. False-convergence tolerance. Defaults to 2.2e-14. See nlminb for more details.
step.min: Used in Nelder-Mead optimization and PORT optimization. Minimum step size. By default this is 1. See nlminb for more details.
step.max: Used in Nelder-Mead optimization and PORT optimization. Maximum step size. By default this is 1. See nlminb for more details.
sing.tol: Used in Nelder-Mead optimization and PORT optimization. Singular convergence tolerance; defaults to rel.tol. See nlminb for more details.
scale.init: Used in Nelder-Mead optimization and PORT optimization. See nlminb for more details.
diff.g: Used in Nelder-Mead optimization and PORT optimization. An estimated bound on the relative error in the objective function value. See nlminb for more details.
boundTol: Tolerance for boundary issues.
epsilon: Precision of the estimate for n1qn1 optimization.
derivSwitchTol: The tolerance at which to switch from forward to central differences.
sigdig: Optimization significant digits. This controls:
- The tolerance of the inner and outer optimization, which is 10^-sigdig.
- The tolerance of the ODE solvers, which is 0.5*10^(-sigdig-2); for the sensitivity equations and steady-state solutions the default is 0.5*10^(-sigdig-1.5) (the sensitivity change only applies to liblsoda).
- The tolerance of the boundary check, which is 5*10^(-sigdig+1).
- The significant figures that some tables are rounded to.
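As a quick check, the tolerances implied by the default sigdig = 4 are:

sigdig <- 4
10^-sigdig                 # optimization tolerance: 1e-4
0.5 * 10^(-sigdig - 2)     # ODE solver tolerance: 5e-7
0.5 * 10^(-sigdig - 1.5)   # sensitivity/steady-state tolerance: ~1.6e-6
5 * 10^(-sigdig + 1)       # boundary-check tolerance: 5e-4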
covMethod: Method for calculating the covariance. In this discussion, R is the Hessian matrix of the objective function, and the S matrix is the sum of the individual gradient cross-products (evaluated at the individual empirical Bayes estimates).
gillK: The total number of possible steps used to determine the optimal forward/central difference step size per parameter (by the Gill 1983 method). If 0, no optimal step size is determined; otherwise at most this many candidate step sizes are tried.
gillStep: When looking for the optimal forward-difference step size, this is the factor by which the step size grows at each iteration; that is, new step size = (prior step size)*gillStep.
gillFtol: The gradient error tolerance that is acceptable before issuing a warning/error about the gradient estimates.
gillRtol: The relative tolerance used for the Gill 1983 determination of the optimal step size.
gillKcov: The total number of possible steps used to determine the optimal forward/central difference step size per parameter (by the Gill 1983 method) during the covariance step. If 0, no optimal step size is determined; otherwise at most this many candidate step sizes are tried.
gillStepCov: When looking for the optimal forward-difference step size during the covariance step, this is the factor by which the step size grows at each iteration; that is, new step size = (prior step size)*gillStepCov.
gillFtolCov: The gradient error tolerance that is acceptable before issuing a warning/error about the gradient estimates during the covariance step.
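To visualize the search schedule implied by gillK and gillStep (a sketch only; the starting step size h0 below is arbitrary and is not nlmixr's internal choice):

gillK <- 10L; gillStep <- 4
h0 <- sqrt(.Machine$double.eps)  # arbitrary illustrative starting step
h0 * gillStep^(0:(gillK - 1))    # candidate step sizes, each 4x the last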
rxControl: This uses the RxODE family of objects, file, or model specification to solve an ODE system. See rxControl for more details. By default this is NULL.
A dynmodelControl list of options used during dynmodel optimization.
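A minimal usage sketch: build a control list with a few non-default settings (values illustrative) and pass it to dynmodel(); check the dynmodel help for the exact argument that accepts it:

library(nlmixr)
ctl <- dynmodelControl(method = "bobyqa", sigdig = 3, maxeval = 500)
names(ctl)  # inspect the stored options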