The SuperCENT methodology simultaneously solves the centrality estimation and the regression, using k-fold cross-validation to choose the tuning parameter \(\lambda\).

cv.supercent(
  A,
  X,
  y,
  l = NULL,
  lrange = 2^4,
  gap = 2,
  folds = 10,
  tol = 1e-04,
  max_iter = 200,
  weights = rep(1, length(y)),
  verbose = 0,
  ...
)
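
At a high level, the cross-validation step fits the model on \(k-1\) folds for each candidate \(\lambda\) and keeps the value that minimizes the held-out regression error. The sketch below is a conceptual illustration of that selection loop, not the package's internal code; fit_and_predict is a hypothetical stand-in for a SuperCENT fit at a fixed \(\lambda\) (the real procedure also involves the network A).

## Conceptual sketch of k-fold CV over lambda (illustrative only).
## `fit_and_predict` is a hypothetical helper that fits at a fixed lambda
## on the training rows and returns predictions for the held-out rows.
cv_select <- function(l_sequence, fold_id, fit_and_predict, X, y) {
  mse_cv <- sapply(l_sequence, function(l) {
    fold_mse <- sapply(unique(fold_id), function(k) {
      held_out <- fold_id == k
      pred <- fit_and_predict(l, X[!held_out, , drop = FALSE], y[!held_out],
                              X[held_out, , drop = FALSE])
      mean((y[held_out] - pred)^2)   # held-out regression MSE for fold k
    })
    mean(fold_mse)                   # average CV error for this lambda
  })
  l_sequence[which.min(mse_cv)]      # lambda minimizing the CV error
}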

Arguments

A

The input network

X

The design matrix

y

The response vector

l

The initial tuning parameter

lrange

The search range of the tuning parameter

gap

The search gap (step size) of the tuning parameter grid; see the sketch after this argument list

folds

The number of folds for cross-validation

tol

The convergence tolerance used to stop the iterations

max_iter

The maximum number of iterations

weights

The weight vector for the observations in (X, y)

verbose

Output detailed messages at different verbosity levels
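
This page does not spell out how the candidate grid is built from l, lrange, and gap. Given the defaults lrange = 2^4 and gap = 2, one plausible reading is a multiplicative grid around the initial value; the sketch below illustrates that assumption and may differ from the package internals.

## Hypothetical grid construction: multiplicative steps of size `gap`
## spanning a factor of `lrange` on each side of the initial value `l0`.
l0 <- 1
lrange <- 2^4
gap <- 2
l_sequence <- l0 * 2^seq(-log2(lrange), log2(lrange), by = log2(gap))
l_sequence
#> [1]  0.0625  0.1250  0.2500  0.5000  1.0000  2.0000  4.0000  8.0000 16.0000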

Value

A cv.supercent object with the following components:

d

The estimated \(d\)

u

The estimated hub centrality

v

The estimated authority centrality

beta

The scaled estimated regression coefficients

l

The selected tuning parameter \(\lambda\)

residuals

The residuals of the regression

fitted.values

The predicted response

epsa

The estimated \(\sigma_a\)

epsy

The estimated \(\sigma_y\)

A

The adjacency matrix of the input network

X

The input design matrix

y

The input response

l_sequence

The grid of the tuning parameter

beta_cvs

The estimated regression coefficients for each value in l_sequence

mse_cv

The cross-validation MSEs for each value in l_sequence

cv_index

The fold indices of the observations in (X, y)

iter

The number of iterations performed

max_iter

The maximum number of iterations

u_distance

The sequence of differences in \(\hat{u}\) between consecutive iterations

method

The estimation method: supercent

Examples

n <- 100
p <- 3
sigmaa <- 1
sigmay <- 1e-5
A <- matrix(rnorm(n^2, sd = sigmaa), nrow = n)  # input network
X <- matrix(rnorm(n * p), nrow = n, ncol = p)   # design matrix
y <- rnorm(n, sd = sigmay)                      # response vector
ret <- cv.supercent(A, X, y)
#> [1] "SuperCENT"
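
The fitted object can then be inspected through the components listed under Value; the follow-up below is illustrative (the component names come from this page, the plotting choices do not).

ret$l                                   # selected tuning parameter lambda
head(ret$beta)                          # scaled regression coefficients
ret$iter < ret$max_iter                 # stopped before the iteration cap?
plot(log2(ret$l_sequence), ret$mse_cv,  # CV error over the lambda grid
     type = "b", xlab = "log2(lambda)", ylab = "CV MSE")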