The SuperCENT methodology simultaneously solves the centrality estimation and the regression, using k-fold cross-validation to choose the tuning parameter \(\lambda\).
```r
cv.supercent(
  A, X, y,
  l = NULL,
  lrange = 2^4,
  gap = 2,
  folds = 10,
  tol = 1e-04,
  max_iter = 200,
  weights = rep(1, length(y)),
  verbose = 0,
  ...
)
```
Argument | Description
---|---
`A` | The input network (adjacency matrix)
`X` | The design matrix
`y` | The response vector
`l` | The initial tuning parameter
`lrange` | The search range of the tuning parameter
`gap` | The search gap of the tuning parameter grid
`folds` | The number of folds for cross-validation
`tol` | The precision tolerance for stopping the iterations
`max_iter` | The maximum number of iterations
`weights` | The weight vector for each observation in (X, y)
`verbose` | The level of detail of progress messages
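As a rough illustration of the tuning controls above, the sketch below calls `cv.supercent` with non-default values; the toy data and the specific settings (`lrange = 2^8`, `gap = 4`, `folds = 5`) are arbitrary choices for demonstration, not recommendations.

```r
# Illustrative only: a wider search range, a coarser grid step,
# and 5-fold cross-validation instead of the defaults.
n <- 50; p <- 2
A <- matrix(rnorm(n^2), nrow = n)    # toy network
X <- matrix(rnorm(n * p), nrow = n)  # toy covariates
y <- rnorm(n)                        # toy response
fit <- cv.supercent(A, X, y, lrange = 2^8, gap = 4, folds = 5)
```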
Outputs a `cv.supercent` object, a list with the following components:
- The estimated \(d\)
- The estimated hub centrality
- The estimated authority centrality
- The scaled estimated regression coefficients
- The tuning parameter \(\lambda\)
- The residuals of the regression
- The predicted response
- The estimated \(\sigma_a\)
- The estimated \(\sigma_y\)
- The adjacency matrix of the input network
- The input design matrix
- The input response
- The grid of the tuning parameter
- The estimated regression coefficients for each value of `l_sequence`
- The cross-validation MSEs for each value of `l_sequence`
- The fold indices of (X, y)
- The maximum number of iterations
- The sequence of differences of \(\hat{u}\) between two consecutive iterations
- The estimation method: `supercent`
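The component names are not listed on this page; one way to discover them on a fitted object (here `fit` from the sketch above) is base R introspection, as in this minimal example. The exact names may differ across package versions.

```r
# List the slots of the fitted object; they correspond to the entries above.
names(fit)
# One-level summary of each component's type and size.
str(fit, max.level = 1)
```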
```r
n <- 100
p <- 3
sigmaa <- 1
sigmay <- 1e-5
A <- matrix(rnorm(n^2, sd = sigmaa), nrow = n)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
y <- rnorm(n, sd = sigmay)
ret <- cv.supercent(A, X, y)
#> [1] "SuperCENT"
```
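The example above draws `A` and `y` as pure noise. A sketch closer to the setting the description implies is to give the network a rank-one hub/authority structure \(A = d\,u v^\top + E\) and let the response depend on the centralities; the scaling of `u`, `v`, and `d` below is an assumption for illustration, not the package's convention.

```r
set.seed(1)
n <- 100; p <- 3
# Assumed rank-one structure: A = d * u v' + noise (scaling is illustrative).
u <- rnorm(n); u <- u / sqrt(sum(u^2)) * sqrt(n)  # hub centrality
v <- rnorm(n); v <- v / sqrt(sum(v^2)) * sqrt(n)  # authority centrality
d <- n
A <- d * tcrossprod(u, v) + matrix(rnorm(n^2), nrow = n)
X <- matrix(rnorm(n * p), nrow = n)
# Response depends on the covariates and on both centralities.
y <- drop(X %*% rep(1, p)) + u + v + rnorm(n, sd = 0.1)
ret2 <- cv.supercent(A, X, y)
```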