Greybox main vignette
Ivan Svetunkov
2024-08-27
There are three well-known notions of "boxes" in modelling:

1. White box - a model that is completely transparent and does not have any randomness. One can see exactly how the inputs are transformed into the specific outputs.
2. Black box - a model that does not have an apparent structure. One can only observe the inputs and outputs, without knowing what happens inside.
3. Grey box - a model in between the first two. We observe the inputs and outputs and have some information about the structure of the model, but a part of it remains unknown.
White boxes are usually used in optimisation (e.g. linear programming), while black boxes are popular in machine learning. Grey box models, in turn, are more often used in analysis and forecasting, and the greybox package contains models used for these purposes.
At the moment the package contains the augmented linear model function and several basic functions that implement model selection and combination using information criteria (IC). You won't find statistical tests in this package - there are plenty of them in other packages. Here we try to use modern techniques and methods that do not rely on hypothesis testing. This is the main philosophical point of greybox.
Main functions
The package includes the following functions for model construction:
- alm() - Augmented Linear Model. This is something similar to GLM, but with a focus on forecasting and on the use of information criteria for time series. It also supports mixture distribution models for intermittent data and allows adding a trend to the data via the formula.
- stepwise() - selects the linear model with the lowest IC from all the possible ones in the provided data. Uses partial correlations. Works fast.
- lmCombine() - combines the linear models into one using IC weights.
- lmDynamic() - produces a model with dynamic weights and time-varying parameters based on point IC weights.

Some of these functions are discussed in this vignette below.
Model evaluation functions
- ro() - produces forecasts with a specified function using rolling origin evaluation.
- measures() - returns a set of error measures for the provided forecast and holdout sample.
- rmcb() - regression on ranks of forecasting methods. This is a fast alternative to the classical Nemenyi / MCB test.
Methods
The following methods can be applied to the models produced by alm(), stepwise(), lmCombine() and lmDynamic():
- logLik() - extracts the log-likelihood of the model.
- AIC(), AICc(), BIC(), BICc() - calculate the respective information criteria.
- pointLik() - extracts the point likelihood.
- pAIC(), pAICc(), pBIC(), pBICc() - calculate the respective point information criteria, based on pointLik().
- actuals() - extracts the actual values of the response variable.
- coefbootstrap() - produces bootstrapped values of the parameters, taking nsim samples of size size from the data and reapplying the model.
- coef(), coefficients() - extract the parameters of the model.
- confint() - extracts the confidence intervals for the parameters.
- vcov() - extracts the variance-covariance matrix of the parameters.
- sigma() - extracts the standard deviation of the residuals.
- nobs() - returns the number of in-sample observations of the model.
- nparam() - returns the number of all the estimated parameters in the model.
- nvariate() - returns the number of variates (columns / dimensions) of the response variable.
- summary() - produces the summary of the model.
- predict() - produces predictions based on the model and the provided newdata. If newdata is not provided, the data already available in the model is used. Can also produce confidence and prediction intervals.
- forecast() - acts similarly to predict(), with a few differences. It has the parameter h (forecast horizon), which is NULL by default and is set to be equal to the number of rows in newdata. However, if newdata is not provided, the function produces forecasts of the explanatory variables to the horizon h and uses them as newdata. Finally, if both h and newdata are provided, the number of rows to use is regulated by h.
- plot() - produces several plots for the analysis of the residuals: Fitted over time, Standardised residuals vs Fitted, Absolute residuals vs Fitted, QQ plot with the specified distribution, Squared residuals vs Fitted, ACF of the residuals and PACF of the residuals. The choice of plot is regulated by the which parameter. See the documentation for more info: ?plot.greybox.
- detectdst() and detectleap() - methods that return the ids of the hour / date of the DST / leap year change.
- extract() - method needed to produce printable regression outputs using the texreg() function from the texreg package.
Distribution functions
- qlaplace(), dlaplace(), rlaplace(), plaplace() - functions for the Laplace distribution.
- qalaplace(), dalaplace(), ralaplace(), palaplace() - functions for the Asymmetric Laplace distribution.
- qs(), ds(), rs(), ps() - functions for the S distribution.
- qgnorm(), dgnorm(), rgnorm(), pgnorm() - functions for the Generalised Normal distribution.
- qfnorm(), dfnorm(), rfnorm(), pfnorm() - functions for the Folded Normal distribution.
- qtplnorm(), dtplnorm(), rtplnorm(), ptplnorm() - functions for the Three Parameter Log-Normal distribution.
- qbcnorm(), dbcnorm(), rbcnorm(), pbcnorm() - functions for the Box-Cox Normal distribution.
- qlogitnorm(), dlogitnorm(), rlogitnorm(), plogitnorm() - functions for the Logit-Normal distribution.
Additional functions
- graphmaker() - produces linear plots for the variable, its forecasts and fitted values.
xregExpander
The function xregExpander() is useful in cases when an exogenous variable may influence the response variable via some lags or leads. As an example, consider the BJsales.lead series from the datasets package. Let's assume that the BJsales variable is driven by today's value of the indicator and its values five and ten days ago. This means that we need to produce lags of BJsales.lead. This can be done using xregExpander():
BJxreg <- xregExpander(BJsales.lead,lags=c(-5,-10))
BJxreg is a matrix which contains the original data, the data with lag 5 and the data with lag 10. However, if we just move the original data several observations ahead or backwards, we will have missing values at the beginning / end of the series, so xregExpander() fills those values in with forecasts from the es() and iss() functions of the smooth package (depending on the type of variable we are dealing with). This also means that in the case of binary variables you may end up with weird averaged values as forecasts (e.g. 0.7812), so beware and look at the produced matrix. Maybe in your case it makes sense to just substitute these weird numbers with zeroes…
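If you decide to do that, one possible fix is simply to round the filled-in values back to zeroes and ones. A minimal sketch, where both xregMat and the column name "promoLag5" are hypothetical (use the names from your own expanded matrix):

```r
# Hypothetical example: 'xregMat' is a matrix produced by xregExpander() and
# "promoLag5" is a lagged binary column whose backcast / forecast fill-ins
# came out as fractions. Rounding snaps them back to 0/1.
xregMat[, "promoLag5"] <- round(xregMat[, "promoLag5"])
```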
You may also need leads instead of lags. This is regulated with the same lags parameter, but with positive values:
BJxreg <- xregExpander(BJsales.lead,lags=c(7,-5,-10))
Once again, the values are shifted, and now the first 7 values are backcasted. In order to simplify things, we can produce all the values from 10 lags to 10 leads, which returns a matrix with 21 variables:
BJxreg <- xregExpander(BJsales.lead,lags=c(-10:10))
stepwise
The function stepwise() does the selection based on an information criterion (specified by the user) and partial correlations. In order to run this function, the response variable needs to be in the first column of the provided matrix. The idea of the function is simple; it works iteratively the following way:
1. The basic model of the first variable and the constant is constructed (this corresponds to the simple mean). An information criterion is calculated;
2. The correlations of the residuals of the model with all the original exogenous variables are calculated;
3. The regression model of the response variable on all the variables in the previous model plus the new most correlated variable from (2) is constructed using the lm() function;
4. An information criterion is calculated and compared with the one from the previous model. If it is greater than or equal to the previous one, we stop and use the previous model. Otherwise we go to step 2.
This way we do not do a blind search going forwards or backwards, but follow a sort of "trace" of a good model: if the residuals contain a significant part of the variance that can be explained by one of the exogenous variables, then that variable is included in the model. Following partial correlations makes sure that we include only meaningful (from a technical point of view) variables in the model. In general, the function guarantees that you will have the model with the lowest information criterion. However, this does not guarantee that you will end up with a meaningful model or with a model that produces the most accurate forecasts. So analyse what you get as a result.
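The four steps above can be sketched in base R. This is a simplified illustration only, not the actual greybox implementation (which uses partial correlations and is considerably faster); the function and variable names here are mine:

```r
# Simplified sketch of the stepwise logic described above.
# 'data' is a data.frame with the response variable in the first column.
stepwiseSketch <- function(data) {
  response <- colnames(data)[1]
  included <- character(0)
  # Step 1: constant-only model (the simple mean)
  bestModel <- lm(as.formula(paste(response, "~ 1")), data = data)
  repeat {
    candidates <- setdiff(colnames(data)[-1], included)
    if (length(candidates) == 0) break
    # Step 2: correlations of the residuals with the remaining variables
    correlations <- abs(cor(residuals(bestModel),
                            data[, candidates, drop = FALSE]))
    newVariable <- candidates[which.max(correlations)]
    # Step 3: refit with the most correlated variable added
    newFormula <- paste(response, "~",
                        paste(c(included, newVariable), collapse = " + "))
    newModel <- lm(as.formula(newFormula), data = data)
    # Step 4: stop if the information criterion does not improve
    if (AIC(newModel) >= AIC(bestModel)) break
    included <- c(included, newVariable)
    bestModel <- newModel
  }
  bestModel
}
```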
Let's see how the function works with the Box-Jenkins data. First we expand the data and form the matrix with all the variables:
BJxreg <- as.data.frame(xregExpander(BJsales.lead,lags=c(-10:10)))
BJxreg <- cbind(as.matrix(BJsales),BJxreg)
colnames(BJxreg)[1] <- "y"
This way we have a nice data frame with nice names, not something weird with strange long names. It is important to note that the response variable should be in the first column of the resulting matrix. After that we use the stepwise function:
ourModel <- stepwise(BJxreg)
And here's what it returns (the object of class lm):
ourModel
#> Time elapsed: 0.08 seconds
#> Call:
#> alm(formula = y ~ xLag4 + xLag9 + xLag3 + xLag10 + xLag5 + xLag6 +
#>     xLead9 + xLag7 + xLag8, data = data, distribution = "dnorm")
#>
#> Coefficients:
#> (Intercept)       xLag4       xLag9       xLag3      xLag10       xLag5
#>  17.6448055   3.3712175   1.3724166   4.6781051   1.5412071   2.3213097
#>       xLag6      xLead9       xLag7       xLag8
#>   1.7075130   0.3766692   1.4024772   1.3370199
The variables in the model are listed in order from the most correlated with the response variable to the least correlated. The function works very fast, because it does not need to go through all the variables and their combinations in the dataset.
All the basic methods can be used together with the final model (e.g. predict(), forecast(), summary() etc.).
Furthermore, the greybox package implements the extract() method from the texreg package for the production of printable outputs from the regression. Here is an example:
texreg::htmlreg(ourModel)

Statistical models
======================================
              Model 1
--------------------------------------
(Intercept)   17.64 *  [16.05; 19.24]
xLag4          3.37 *  [ 2.75;  3.99]
xLag9          1.37 *  [ 0.75;  2.00]
xLag3          4.68 *  [ 4.10;  5.26]
xLag10         1.54 *  [ 0.98;  2.11]
xLag5          2.32 *  [ 1.68;  2.96]
xLag6          1.71 *  [ 1.06;  2.35]
xLead9         0.38 *  [ 0.12;  0.63]
xLag7          1.40 *  [ 0.75;  2.05]
xLag8          1.34 *  [ 0.69;  1.98]
--------------------------------------
Num. obs.    150.00
Num. param.   11.00
Num. df      139.00
AIC          416.74
AICc         418.66
BIC          449.86
BICc         454.65
======================================
* 0 outside the confidence interval.
Similarly, you can produce pdf tables via the texreg() function from that package. Alternatively, you can use the kable() function from the knitr package on the summary in order to get a table for LaTeX / HTML.
lmCombine
The lmCombine() function creates a pool of linear models using lm(), writes down the parameters, standard errors and information criteria, and then combines the models using IC weights. The resulting model is of the class "lm.combined". The speed of the function deteriorates exponentially with the increase of the number of variables \(k\) in the dataset, because the number of combined models is equal to \(2^k\). An advanced mechanism that uses stepwise() and removes a large chunk of redundant models is also implemented in the function and can be switched on and off via the bruteforce parameter.
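The IC weights themselves are straightforward: each model in the pool gets a weight proportional to exp(-0.5 Δ), where Δ is the difference between its IC and the lowest IC in the pool (the Burnham and Anderson construction). A minimal sketch in base R, using mtcars purely for illustration:

```r
# Sketch: IC (here AIC) weights for a toy pool of linear models.
models <- list(lm(mpg ~ wt, data = mtcars),
               lm(mpg ~ wt + hp, data = mtcars),
               lm(mpg ~ wt + hp + qsec, data = mtcars))
ICs <- sapply(models, AIC)
# exp(-0.5 * delta), normalised to sum to one
weights <- exp(-0.5 * (ICs - min(ICs)))
weights <- weights / sum(weights)
# A combined parameter estimate is then the weighted mean of that
# parameter's estimates across the models that contain it.
```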
Here's an example on the reduced data, with the combined model and the parameter bruteforce=TRUE:
ourModel <- lmCombine(BJxreg[,-c(3:7,18:22)],bruteforce=TRUE)
summary(ourModel)
#> The AICc combined model
#> Response variable: y
#> Distribution used in the estimation: Normal
#> Coefficients:
#>             Estimate Std. Error Importance Lower 2.5% Upper 97.5%
#> (Intercept)  20.9029     0.2327     1.0000    20.4429     21.3629 *
#> x            -0.0432     0.0286     0.2591    -0.0998      0.0134
#> xLag5         6.3973     0.0840     1.0000     6.2313      6.5633 *
#> xLag4         5.8467     0.0900     1.0000     5.6688      6.0245 *
#> xLag3         5.6857     0.0901     1.0000     5.5076      5.8638 *
#> xLag2         0.1251     0.0382     0.2876     0.0495      0.2006 *
#> xLag1        -0.0843     0.0342     0.2716    -0.1520     -0.0166 *
#> xLead1       -0.0906     0.0323     0.2780    -0.1545     -0.0267 *
#> xLead2       -0.0354     0.0257     0.2599    -0.0863      0.0154
#> xLead3       -0.1193     0.0342     0.2967    -0.1868     -0.0517 *
#> xLead4       -0.0067     0.0229     0.2585    -0.0520      0.0385
#> xLead5        0.1161     0.0300     0.3032     0.0568      0.1754 *
#>
#> Error standard deviation: 2.2077
#> Sample size: 150
#> Number of estimated parameters: 7.2146
#> Number of degrees of freedom: 142.7854
#> Approximate combined information criteria:
#>      AIC     AICc      BIC     BICc
#> 670.6810 671.5170 692.4015 694.4959
The summary() function provides a table with the parameters, their standard errors, their relative importance and the 95% confidence intervals. Relative importance indicates in how many cases the variable was included in the model with a high weight. So, in the example above, the variables xLag5, xLag4 and xLag3 were included in the models with the highest weights, while all the others were in the models with lower ones. This may indicate that only these variables are needed for the purposes of analysis and forecasting.
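In IC-weight terms, the relative importance of a variable is the sum of the weights of all the models in the pool that include it. A minimal sketch in base R (a toy pool on mtcars, for illustration only):

```r
# Sketch: importance of a variable = sum of IC weights of the models
# in the pool that contain that variable.
models <- list(lm(mpg ~ wt, data = mtcars),
               lm(mpg ~ hp, data = mtcars),
               lm(mpg ~ wt + hp, data = mtcars))
ICs <- sapply(models, AIC)
weights <- exp(-0.5 * (ICs - min(ICs)))
weights <- weights / sum(weights)
# Importance of "wt": weights of models whose coefficients include it
importanceWt <- sum(weights[sapply(models,
                                   function(m) "wt" %in% names(coef(m)))])
```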
A more realistic situation is when the number of variables is high. In the following example we use the data with 21 variables. So if we use brute force and estimate every model in the dataset, we will end up with \(2^{21}\) = 2,097,152 combinations of models, which is not possible to estimate in adequate time. That is why we use bruteforce=FALSE:
ourModel <- lmCombine(BJxreg,bruteforce=FALSE)
summary(ourModel)
#> The AICc combined model
#> Response variable: y
#> Distribution used in the estimation: Normal
#> Coefficients:
#>             Estimate Std. Error Importance Lower 2.5% Upper 97.5%
#> (Intercept)  17.6704     0.7766     1.0000    16.1349     19.2060 *
#> xLag4         3.3755     0.3023     1.0000     2.7779      3.9732 *
#> xLag9         1.3709     0.3031     0.9998     0.7717      1.9702 *
#> xLag3         4.6859     0.2811     1.0000     4.1302      5.2417 *
#> xLag10        1.5420     0.2751     1.0000     0.9981      2.0859 *
#> xLag5         2.3225     0.3120     1.0000     1.7056      2.9394 *
#> xLag6         1.7076     0.3147     1.0000     1.0854      2.3299 *
#> xLead9        0.3639     0.1248     0.9661     0.1172      0.6106 *
#> xLag7         1.4014     0.3154     0.9997     0.7778      2.0250 *
#> xLag8         1.3362     0.3135     0.9994     0.7164      1.9559 *
#>
#> Error standard deviation: 0.9369
#> Sample size: 150
#> Number of estimated parameters: 10.965
#> Number of degrees of freedom: 139.035
#> Approximate combined information criteria:
#>      AIC     AICc      BIC     BICc
#> 416.9994 418.9003 450.0110 454.7733
In this case, the stepwise() function is used first, and it finds the best model in the pool. Then each variable that is not in that model is iteratively added to the model and then removed. The IC, parameter values and standard errors are all written down for each of these expanded models. Finally, in a similar manner, each variable is removed from the optimal model and then added back. As a result, the pool of combined models becomes much smaller than it would be with brute force, but it contains only meaningful models that are close to the optimal one. The rationale for this is that the marginal contribution of variables deteriorates with the increase of the number of parameters in the stepwise function, and the IC weights become close to each other around the optimal model. So, whenever all the models are combined, there are a lot of redundant models with very low weights. The mechanism described above removes those redundant models.
There are several methods for the lm.combined class, including:

- predict.greybox() - returns the point and interval predictions.
- forecast.greybox() - a wrapper around predict(). The forecast horizon is defined by the length of the provided sample of newdata.
- plot.lm.combined() - plots actuals and fitted values.
- plot.predict.greybox() - uses the graphmaker() function from smooth in order to produce graphs of actuals and forecasts.
As an example, let's split the whole sample of the Box-Jenkins data into the in-sample and the holdout:
BJInsample <- BJxreg[1:130,];
BJHoldout <- BJxreg[-(1:130),];
ourModel <- lmCombine(BJInsample,bruteforce=FALSE)
A summary and a plot of the model:
summary(ourModel)
#> The AICc combined model
#> Response variable: y
#> Distribution used in the estimation: Normal
#> Coefficients:
#>             Estimate Std. Error Importance Lower 2.5% Upper 97.5%
#> (Intercept)  19.3889     0.8562     1.0000    17.6936     21.0843 *
#> xLag4         3.3491     0.2967     1.0000     2.7617      3.9366 *
#> xLag9         1.3338     0.2984     0.9990     0.7430      1.9246 *
#> xLag3         4.7577     0.2788     1.0000     4.2057      5.3098 *
#> xLag10        1.5362     0.2702     1.0000     1.0013      2.0712 *
#> xLag5         2.3213     0.3064     1.0000     1.7146      2.9280 *
#> xLag6         1.6612     0.3091     1.0000     1.0492      2.2732 *
#> xLead9        0.2944     0.1261     0.8910     0.0447      0.5442 *
#> xLag8         1.3690     0.3085     0.9989     0.7582      1.9799 *
#> xLag7         1.3270     0.3094     0.9982     0.7145      1.9396 *
#>
#> Error standard deviation: 0.9554
#> Sample size: 130
#> Number of estimated parameters: 10.887
#> Number of degrees of freedom: 119.113
#> Approximate combined information criteria:
#>      AIC     AICc      BIC     BICc
#> 368.1614 370.3528 399.3803 404.7136
Importance tells us how important the respective variable is in the combination: 1 means 100% important, 0 means not important at all.
And the forecast using the holdout sample:
ourForecast <- predict(ourModel,BJHoldout)
plot(ourForecast)
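To assess the accuracy of the forecast on the holdout, the measures() function mentioned earlier can be applied. A sketch, under the assumption that the point forecasts are stored in the $mean element of the prediction object:

```r
# Sketch: error measures on the holdout sample; the in-sample actuals are
# passed so that scaled measures can be computed as well.
measures(BJHoldout$y, ourForecast$mean, BJInsample$y)
```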
These are the main functions implemented in the package for now. If you want to read more about IC-based model selection and combinations, I would recommend the Burnham and Anderson (2004) textbook.
References
Burnham, Kenneth P., and David R. Anderson. 2004. Model Selection and Multimodel Inference. Springer New York. https://doi.org/10.1007/b97636.