Find the standard matrix for the orthogonal projection onto col(A), and find the least squares line y = ax + b of best fit to the points (0,1), (1,3), (2,4), and (3,4)

A is the 3×3 matrix

$$A=\begin{bmatrix}1 & 1 & 2\\ 2 & 1 & 3\\ -1 & 2 & 1\end{bmatrix}$$

(a) Find the standard matrix for the orthogonal projection onto col(A).

* Calculate the inverse of a matrix.
* Do not calculate the product of matrices.
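For reference, here is one common route for (a), which may or may not be the method your course intends. The third column of $A$ equals the sum of the first two, so $\operatorname{col}(A)$ is spanned by the first two columns. Collecting them in $B$:

$$B=\begin{bmatrix}1 & 1\\ 2 & 1\\ -1 & 2\end{bmatrix},\qquad B^{\mathsf T}B=\begin{bmatrix}6 & 1\\ 1 & 6\end{bmatrix},\qquad (B^{\mathsf T}B)^{-1}=\frac{1}{35}\begin{bmatrix}6 & -1\\ -1 & 6\end{bmatrix},$$

the standard matrix of the projection is $P=B\,(B^{\mathsf T}B)^{-1}B^{\mathsf T}$, which can be left in factored form, consistent with the hint: one $2\times 2$ inverse, no matrix products.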

(b) Find the least squares line y = ax + b of best fit to the four points (0,1), (1,3), (2,4), and (3,4).
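As a quick numerical cross-check for (b), here is a minimal sketch using numpy (assuming numpy is acceptable): build a design matrix with a column of x-values and a column of ones, then solve the least squares problem.

```python
import numpy as np

# The four data points (x, y)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 4.0, 4.0])

# Design matrix for y = a*x + b: one column of x's, one column of ones
M = np.column_stack([x, np.ones_like(x)])

# Least squares solution of M @ [a, b] ~= y
(a, b), *_ = np.linalg.lstsq(M, y, rcond=None)
print(f"a = {a:.4f}, b = {b:.4f}")  # expected: a = 1.0, b = 1.5
```

Solving the normal equations by hand gives the same answer: $14a+6b=23$ and $6a+4b=12$ yield $a=1$, $b=1.5$.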

Looking for views on random selection

I have to design the architecture of a system dashboard, and one of its components will report how many more students can take the quiz under the following conditions:

The quiz randomly selects 30 questions out of a 1000-question pool. Those 30 questions are divided into 5 categories and sometimes further into 3 subcategories. At any given time, a maximum of 50 students can take the quiz.

Example:

* Quiz name: Programming Languages
* Time: 60 mins
* No. of questions: 50

When a student takes this quiz, the system selects those 50 questions out of the 1000-question pool from the following categories:

JavaScript, Java, C, Swift, PHP

Each question in the above categories is further divided by difficulty:

Easy, Medium, Hard

When questions are selected at random out of the 1000-question pool, how many students will see one specific question? In other words, how many times will each question be shown to unique users? These could be cases with quizzes and questions:
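Under the simplifying assumption that each student's quiz is an independent, uniform random sample of $k$ questions from an $n$-question pool (ignoring the category split for a moment), a specific question lands on any one student's quiz with probability $k/n$, so among $s$ students the expected number who see it is

$$E = s\cdot\frac{k}{n}.$$

For example, 50 questions drawn from a 1000-question pool with 50 students gives $50\cdot 50/1000 = 2.5$ students per question on average. With the category split, the same formula applies within each category: $k_c/n_c$ for a category holding $n_c$ questions of which $k_c$ are drawn.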

https://prnt.sc/ns7csz (please check the image for more examples).

To write the JavaScript, or even a SELECT statement, I first need to come up with the logic, then move on to developing the code or designing the database queries.

I think some sort of probability formula will be needed to calculate all the fields, e.g. 10 questions out of a 100-question pool across 3 categories, or 50 questions out of a 500-question pool across 3 categories.
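In code, both the closed-form expectation and a quick Monte Carlo sanity check might look like the sketch below. The function names are mine, not from any library; the numbers are the examples from the question, with 50 students as the stated maximum, and the same independent-uniform-sampling assumption as above.

```python
import random

def expected_viewers(students: int, picked: int, pool: int) -> float:
    """Expected number of students who see one specific question,
    assuming each student gets an independent uniform random sample."""
    return students * picked / pool

def simulated_viewers(students: int, picked: int, pool: int,
                      trials: int = 2000) -> float:
    """Monte Carlo check of the same quantity, tracking question #0."""
    total = 0
    for _ in range(trials):
        total += sum(
            1 for _ in range(students)
            if 0 in random.sample(range(pool), picked)
        )
    return total / trials

print(expected_viewers(50, 10, 100))   # 10 of 100, 50 students -> 5.0
print(expected_viewers(50, 50, 500))   # 50 of 500, 50 students -> 5.0
print(simulated_viewers(50, 10, 100))  # should land near 5.0
```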

Can gradient descent be used to find the value of an exponent?

I’m experimenting with machine learning and I’m trying to develop a model that finds the exponent the input needs to be raised to in order to produce the output. For example, if input $=[0, 1, 2, 3]$ and output $=[0, 1, 8, 27]$, then the exponent is $3$.

The loss function I’m using is $L(g)=(k^g-k^3)^2$, where $k$ is an input value and $g$ is the model’s current guess. I found the derivative of this function to be $L'(g)=2(k^g-k^3)\cdot k^g \cdot \ln(k)$.

The guess is then improved by subtracting its derivative multiplied by the learning rate, i.e. $g_{n+1}=g_n-r\cdot L'(g_n)$, applied for each $k$ in the training data over some number of training cycles.
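For concreteness, here is a minimal sketch of that training loop as I read it (the function and variable names are mine, not from any library). Note that $k=0$ must be skipped because $\ln(k)$ is undefined there, and $k=1$ contributes a zero gradient.

```python
import math

def grad(g: float, k: float) -> float:
    """L'(g) = 2*(k^g - k^3) * k^g * ln(k)."""
    return 2.0 * (k**g - k**3) * k**g * math.log(k)

def fit_exponent(ks, r: float = 1e-4, cycles: int = 1000,
                 g0: float = 1.0) -> float:
    g = g0
    for _ in range(cycles):
        for k in ks:
            if k <= 0:            # ln(k) is undefined for k <= 0
                continue
            g -= r * grad(g, k)   # g_{n+1} = g_n - r * L'(g_n)
    return g

# Inputs 0 and 1 carry no usable gradient, so only 2 and 3 drive learning;
# with these small k's and this r the guess tends to settle near 3, but
# larger k or r triggers the overshoot described below.
print(fit_exponent([2.0, 3.0]))
```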

The problem I found is that even when $g$ is close to $3$, the derivative of the loss function is too extreme and ends up missing the zero, as seen in the picture:

The above picture is the graph of $r\cdot L'(g)$ where $r=0.0001$. It seems like for any $g$ even considerably greater than $3$, the gradient blows up and shoots the next guess way too far left. I’ve already given up on the idea of a constant learning rate. I tried basing the learning rate on the loss function, so that the smaller the error, the smaller the learning rate and the lower the chance of missing the zero. However, that did not work at all, and I’m wondering whether gradient descent can be used to solve this problem at all. Thank you in advance.
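To make the blow-up concrete: since $L'(g)\approx 2k^{2g}\ln k$ for $g$ above $3$, the raw step $r\cdot L'(g)$ grows exponentially in $g$. One standard remedy, my suggestion rather than anything from the question, is gradient clipping, which bounds each step regardless of the gradient's scale. A sketch:

```python
import math

def grad(g: float, k: float) -> float:
    return 2.0 * (k**g - k**3) * k**g * math.log(k)

r = 1e-4
# The raw step explodes as g moves above 3 (k = 3 here):
for g in [3.5, 4.0, 5.0, 6.0]:
    print(g, r * grad(g, 3.0))   # roughly 0.2, 1, 11, 112

def clipped_update(g: float, k: float, r: float,
                   max_step: float = 0.1) -> float:
    """Gradient descent step with the raw step clamped to [-max_step, max_step]."""
    step = r * grad(g, k)
    return g - max(-max_step, min(max_step, step))
```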