The SuperCENT methodology simultaneously estimates the hub and authority centralities of a network and fits the associated regression, given a fixed tuning parameter \(\lambda\).
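
As a rough sketch (an assumption based on the description above and not necessarily the package's exact parametrization), the joint criterion has the form

\[ \min_{d,\,u,\,v,\,\beta} \; \lVert A - d\,u v^{\top} \rVert_F^2 \;+\; \lambda\, \lVert y - X\beta_x - u\beta_u - v\beta_v \rVert_2^2, \]

up to scaling constants, where \(u\) and \(v\) are the hub and authority centralities, \(\beta_x\), \(\beta_u\), \(\beta_v\) are used here only as illustrative names for the coefficient blocks, and \(\lambda\) (the argument l) trades the network fit against the regression fit.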

supercent(
  A,
  X,
  y,
  l = NULL,
  tol = 1e-04,
  max_iter = 200,
  weights = rep(1, length(y)),
  verbose = 0,
  ...
)

Arguments

A

The input network

X

The design matrix

y

The response vector

l

The tuning parameter \(\lambda\) of the penalty; an example call that sets it explicitly follows this argument list

tol

The convergence tolerance for stopping the iterations

max_iter

The maximum number of iterations

weights

The weight vector for each observation in (X,y)

verbose

The level of verbosity; higher values output more detailed messages

folds

The number of folds for cross-validation
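
For illustration, a call that sets the tuning parameter and iteration controls explicitly might look like the following sketch (A, X and y stand in for the user's own data; the value of l is arbitrary):

## Sketch only: explicit tuning parameter, tighter tolerance, and equal weights
fit <- supercent(
  A, X, y,
  l        = 0.1,               # tuning parameter of the penalty (arbitrary value)
  tol      = 1e-6,              # convergence tolerance
  max_iter = 500,               # maximum number of iterations
  weights  = rep(1, length(y)), # equal observation weights (the default)
  verbose  = 1                  # print more detailed messages
)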

Value

A supercent object with the following components:

d

The estimated \(d\)

u

The estimated hub centrality

v

The estimated authority centrality

beta

The scaled estimated regression coefficients

l

The tuning parameter \(\lambda\)

residuals

The residuals of the regression

fitted.values

The predicted response

epsa

The estimated \(\sigma_a\)

epsy

The estimated \(\sigma_y\)

A

The adjacency matrix of the input network

X

The input design matrix

y

The input response

iter

The number of iterations performed

max_iter

The maximum number of iterations

u_distance

The sequence of differences in \(\hat{u}\) between consecutive iterations

method

The estimation method: supercent

Examples

n <- 100
p <- 3
sigmaa <- 1
sigmay <- 1e-5
A <- matrix(rnorm(n^2, sd = sigmaa), nrow = n)
X <- matrix(rnorm(n*p), nrow = n, ncol = p)
y <- rnorm(n, sd = sigmay)
ret <- supercent(A, X, y)
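
Assuming the returned supercent object behaves like a standard R list with the components documented under Value, the fit from the example above can be inspected along the following lines (a sketch; output not shown):

str(ret)           # overview of the returned components
ret$d              # estimated d
ret$u              # estimated hub centrality
ret$v              # estimated authority centrality
ret$beta           # scaled estimated regression coefficients
ret$fitted.values  # predicted response
ret$residuals      # regression residuals
ret$iter           # number of iterations performed
ret$u_distance     # differences in the estimated u between consecutive iterations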