Linear regression: confidence interval of the average of 10 slopes, propagation of error and error of the mean

I have 10 regression slopes in one group, and each slope has an associated standard error. I want to find the average slope of this group and its standard error, which I will use to compute the 95% confidence interval. I know the standard error of the mean is the sample standard deviation (computed from the squared deviations from the mean) divided by the square root of the sample size, which is 10 in this case. I can calculate the average slope and find the deviation of each slope by subtracting the mean. But since each slope has its own standard error, how do I take this into account when I calculate the standard error of the mean, or do I not need to include the error from each slope at all?

For the confidence interval of the average slope of this group, I am using 1.96 × the standard error of the mean.
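For concreteness, here is a minimal sketch (the slope and standard-error values are hypothetical) comparing the plain standard error of the mean with a fixed-effect inverse-variance weighted combination, one common way to fold the per-slope standard errors into the pooled estimate:

    import numpy as np

    slopes = np.array([1.20, 0.90, 1.10, 1.30, 1.00, 0.80, 1.20, 1.10, 0.95, 1.05])  # hypothetical slopes
    ses    = np.array([0.10, 0.12, 0.08, 0.15, 0.09, 0.11, 0.10, 0.13, 0.07, 0.12])  # their standard errors

    # Plain mean and standard error of the mean (ignores the per-slope errors)
    mean_slope = slopes.mean()
    sem = slopes.std(ddof=1) / np.sqrt(len(slopes))
    print("plain:    %.3f +/- %.3f" % (mean_slope, 1.96 * sem))

    # Inverse-variance weighted (fixed-effect) mean and its standard error
    w = 1.0 / ses**2
    wmean = np.sum(w * slopes) / np.sum(w)
    wse = np.sqrt(1.0 / np.sum(w))
    print("weighted: %.3f +/- %.3f" % (wmean, 1.96 * wse))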

Your help will be much appreciated!

With respect to differential privacy, what should the global sensitivity be for regression in non-interactive mode?

I need to make a dataset differentially private, on which regression (which in a more general sense could be extended to learning any model) is to be performed. I need to calculate the global sensitivity for adding noise. How do I calculate the global sensitivity in such cases?
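As a hedged illustration (not specific to regression), global sensitivity is the largest change in the released statistic when a single record changes; the sketch below shows how it feeds into the Laplace mechanism for a simple clipped mean:

    import numpy as np

    def dp_mean(x, B, epsilon, seed=0):
        # Global sensitivity of the mean: after clipping to [0, B], replacing one
        # record moves the sum by at most B, so the mean moves by at most B / n.
        x = np.clip(x, 0.0, B)
        sensitivity = B / len(x)
        noise = np.random.default_rng(seed).laplace(scale=sensitivity / epsilon)
        return x.mean() + noise

    x = np.random.default_rng(1).uniform(0, 10, size=1000)
    print(dp_mean(x, B=10.0, epsilon=0.5))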

Variant of ridge regression loss function

We can create variants of the loss function, especially of ridge regression, by adding more regularizer terms. One of the variants I saw in a book is given below:

$$ \min_{w \in \mathbf{R}^d} \;\; \alpha\,\|w\|^2 + (1-\alpha)\,\|w\|^4 + C\,\|y - X^\top w\|^2 $$

where $y \in \mathbf{R}^n$, $w \in \mathbf{R}^d$, $X \in \mathbf{R}^{d \times n}$, $C \in \mathbf{R}$ is a regularization parameter, and $\alpha \in [0,1]$.

My question is: how does a change in $\alpha$ affect our optimization problem? How does adding more regularizers generally help? Why is one regularizer not enough?
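For intuition, here is a minimal sketch (synthetic data and scipy's general-purpose minimizer, both my own choices) that minimizes the objective above for a few values of $\alpha$, so the effect of shifting weight between the $\|w\|^2$ and $\|w\|^4$ terms can be observed directly:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    d, n = 5, 50
    X = rng.normal(size=(d, n))            # columns are samples, matching X in R^{d x n}
    w_true = rng.normal(size=d)
    y = X.T @ w_true + 0.1 * rng.normal(size=n)
    C = 1.0

    def objective(w, alpha):
        reg = alpha * np.sum(w**2) + (1 - alpha) * np.sum(w**2) ** 2   # ||w||^2 and ||w||^4 terms
        fit = C * np.sum((y - X.T @ w) ** 2)                           # squared-error data term
        return reg + fit

    for alpha in (0.0, 0.5, 1.0):
        w_hat = minimize(objective, x0=np.zeros(d), args=(alpha,)).x
        print(alpha, np.linalg.norm(w_hat))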

Regression problem with trace regression model

I'm working on a regression task with a trace regression model that receives a matrix $X$ as input. Its formula is as follows:

$$ y = \mathrm{tr}(\beta_*^{\top} X) + \epsilon $$

where $\mathrm{tr}(\cdot)$ denotes the trace, $\beta_*$ is some unknown matrix of regression coefficients, and $\epsilon$ is random noise.

Can someone with knowledge of this model provide me with the steps to carry out the regression task, where we must calculate the trace (the sum of the diagonal elements) in the regression phase? I also need to know how to generate the regression coefficients of such a matrix.

I found the description of the model here: https://arxiv.org/pdf/0912.5338.pdf
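For what it's worth, here is a minimal sketch (my own synthetic data, not taken from the paper) that uses the identity $\mathrm{tr}(\beta_*^\top X) = \langle \mathrm{vec}(\beta_*), \mathrm{vec}(X) \rangle$, which turns trace regression into ordinary least squares on the vectorized matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    p, q, n = 4, 3, 200
    B_true = rng.normal(size=(p, q))               # unknown coefficient matrix beta_*
    Xs = rng.normal(size=(n, p, q))                # n input matrices X
    y = np.array([np.trace(B_true.T @ X) for X in Xs]) + 0.05 * rng.normal(size=n)

    # Vectorize each X and solve least squares for vec(B)
    A = Xs.reshape(n, p * q)
    vecB_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    B_hat = vecB_hat.reshape(p, q)
    print(np.max(np.abs(B_hat - B_true)))          # recovery error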

Posterior distribution of logistic regression coefficient

I have a binary logistic regression with the following properties.

Consider logistic regression for binary data $Y_i \in \{0,1\}$ with covariate vector $x_i = (x_{i,1}, x_{i,2}, \dots, x_{i,p})$. Under the logistic regression assumption, the sampling distribution of $Y_i$ is given by (1). We assume a normal prior for $\beta_j$ for $j = 1, \dots, p$ as in (2), and the $\beta_j$'s are independent of each other. The $(\mu_j, \sigma_j^2)$ are prior parameters and need to be specified.

$$ \mathrm{Prob}(Y_i = 1 \mid \beta) = \frac{\exp(x_i^\top \beta)}{1 + \exp(x_i^\top \beta)} \tag{1} $$

$$ \beta_j \sim N(\mu_j, \sigma_j^2) \tag{2} $$

I have to find the posterior distribution for $\beta$, i.e., $p(\beta \mid y, \mu_1, \dots, \mu_p, \sigma_1^2, \dots, \sigma_p^2)$.
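By Bayes' theorem, the posterior is proportional to the likelihood from (1) times the independent normal priors from (2); up to the normalizing constant, this gives the following sketch:

$$ p(\beta \mid y, \mu_1, \dots, \mu_p, \sigma_1^2, \dots, \sigma_p^2) \;\propto\; \prod_{i=1}^{n} \frac{\exp\!\left(y_i\, x_i^\top \beta\right)}{1 + \exp\!\left(x_i^\top \beta\right)} \;\prod_{j=1}^{p} \exp\!\left(-\frac{(\beta_j - \mu_j)^2}{2\sigma_j^2}\right) $$

This product does not simplify to a standard distribution, so in practice the posterior is usually explored with MCMC or approximated (e.g., with a normal/Laplace approximation).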

How do you calculate the training error and validation error of a linear regression model?

I have a linear regression model that I've implemented using gradient descent, and my cost function is mean squared error. I've split my full dataset into three sets: a training set, a validation set, and a testing set. I am not sure how to calculate the training error and validation error (and the difference between the two).

Is the training error the residual sum of squares calculated on the training dataset? Is the validation error the residual sum of squares calculated on the validation dataset? And what exactly is the test set for? (I've learned the model using the training set; from the textbooks I've read, I think that is the set to use to learn the model.)
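For reference, here is a minimal sketch (synthetic data, with a least-squares fit standing in for gradient descent, both my own choices) of how the three errors are usually computed:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

    # 60/20/20 split: train / validation / test
    X_train, y_train = X[:60], y[:60]
    X_val,   y_val   = X[60:80], y[60:80]
    X_test,  y_test  = X[80:],  y[80:]

    # Fit on the training set only (least squares stands in for gradient descent here)
    w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

    def mse(Xs, ys):
        return np.mean((ys - Xs @ w) ** 2)

    print("training error:  ", mse(X_train, y_train))  # error on the data used to fit
    print("validation error:", mse(X_val, y_val))       # error on held-out data used for tuning
    print("test error:      ", mse(X_test, y_test))     # reported once, at the very end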

Any help in clearing up these points is much appreciated.

Comparison of feature importance values in logistic regression and random forest in scikit-learn [closed]

I am trying to rank the features for binary classification based on their importance, using an ensemble method that combines the feature importances estimated by a random forest and a logistic regression. I know that logistic regression coefficients and random forest feature_importances_ are different kinds of values, and I'm looking for a method to make them comparable. Here is what I have in mind:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    X = features
    y = labels

    rf = RandomForestClassifier()
    rf.fit(X, y)
    RFfitIMP = rf.feature_importances_ / rf.feature_importances_.sum()  # normalize feature importances to sum to 1

    lr = LogisticRegression()
    lr.fit(X, y)
    # Take absolute values and normalize to sum to 1; ravel() flattens the (1, n_features) coefficient array
    lrfitIMP = np.absolute(lr.coef_).ravel() / np.absolute(lr.coef_).sum()

    ensembleFitIMP = np.mean([RFfitIMP, lrfitIMP], axis=0)  # average the two normalized importance vectors

What I think the code does is take the relative importances from both models, normalize them, and return the feature importances averaged over the two models. I was wondering whether this is a correct approach for this purpose or not?

Scikit-learn regression on power set of data

How do I run linear regression on every subset of a dataframe in a loop, using LinearRegression from scikit-learn?

    def sub_lists(list1):
        # builds all contiguous slices of list1 (note: not the full power set)
        sublist = [[]]
        for i in range(len(list1) + 1):
            for j in range(i + 1, len(list1) + 1):
                sublist.append(list1[i:j])
        return sublist  # return after both loops finish, not inside them

    X = sub_lists(df5)
    y = df4

I ran a regression on this, but it keeps throwing an error; the data comes from a .dta (Stata) file.
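As a hedged sketch (the file name, target column, and use of itertools.combinations are my own assumptions), looping a scikit-learn LinearRegression over every non-empty subset of predictor columns could look like this:

    from itertools import combinations

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    df = pd.read_stata("data.dta")   # hypothetical .dta (Stata) file
    target = "y"                      # hypothetical target column
    predictors = [c for c in df.columns if c != target]

    for r in range(1, len(predictors) + 1):
        for cols in combinations(predictors, r):   # true power set of the predictor columns
            model = LinearRegression().fit(df[list(cols)], df[target])
            print(cols, model.score(df[list(cols)], df[target]))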