Will learning about integrated circuits help me be a better computer architect (long-term)?

I do not know if this is the right place to ask this type of question, but here I go: I'm thinking about learning integrated circuits as part of learning more about computer hardware in general (with a focus on architecture). Will I be wasting my time learning about integrated circuits? (I'm planning to read the book *Analysis and Design of Analog Integrated Circuits*, 5th edition.) Thanks for the answers.

Learning a specific functional form with machine learning

Suppose I have only three independent features (x, y, z) as the input to some machine learning routine (e.g. neural network). From some domain knowledge, I know that the output o(x, y, z) must have the specific functional form

o(x, y, z) = f(x)*g(y)*g(z)

where the two g(·) factors are the same function. The details of f(·) and g(·) are not known beforehand (except that f(·) is a decaying function of x). Given that there is no upper limit on the sample size, is it possible to incorporate such a functional form (or, in general, any specific functional form) into the machine learning routine?
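One way to bake such a structure in is architectural, sketched here with plain NumPy (the layer sizes and tanh activations are arbitrary choices, not from the question): build one small network for f and one for g, and reuse the same g network for both y and z so its weights are shared. Training (e.g. gradient descent on the product) is omitted; the sketch only shows how the constraint o = f(x)·g(y)·g(z) is enforced by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes, rng):
    """Random weights/biases for a small fully connected net."""
    return [(rng.normal(size=(m, n)), rng.normal(size=n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Forward pass: tanh hidden layers, linear output."""
    h = np.atleast_2d(x)
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b

# f and g are separate nets; g is instantiated only once.
f_params = mlp_init([1, 8, 1], rng)
g_params = mlp_init([1, 8, 1], rng)

def model(x, y, z):
    # Structural constraint: o = f(x) * g(y) * g(z), with the
    # SAME g (shared weights) applied to both y and z.
    return (mlp_forward(f_params, [[x]]) *
            mlp_forward(g_params, [[y]]) *
            mlp_forward(g_params, [[z]])).item()
```

Because the same g parameters are applied to y and z, the model is automatically symmetric in (y, z), which is one quick sanity check that the weight sharing works.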

Explain the equation to find the collaboration between neighbors in SOM in unsupervised learning

In the Kohonen SOM algorithm, the equation for the collaboration (neighborhood) term is:

$ h = \exp\left(-\dfrac{\mathrm{LDist}^2}{2\sigma^2}\right) $

I know that LDist is the lattice distance and sigma is the standard deviation. I am just wondering why they are squared. Can anyone help me visualize the equation or explain what is going on in it?
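Assuming the equation is the standard Gaussian neighborhood function $ h = \exp(-\mathrm{LDist}^2 / (2\sigma^2)) $, a quick numerical sketch shows why the squares are there: squaring LDist gives a smooth, symmetric bell shape (neurons equally far from the winner are updated equally), and squaring sigma makes it the variance, which sets the bell's width.

```python
import math

def neighborhood(lattice_dist, sigma):
    # Gaussian collaboration term: h = exp(-LDist^2 / (2*sigma^2)).
    # Squaring LDist makes h depend only on how far a neuron is,
    # not on which side of the winner it lies; sigma^2 (the variance)
    # controls how wide the updated neighborhood is.
    return math.exp(-lattice_dist ** 2 / (2 * sigma ** 2))

# The winning neuron (distance 0) gets the full update weight:
print(neighborhood(0, 1.0))  # 1.0
# The update strength decays smoothly with lattice distance:
print(neighborhood(1, 1.0) > neighborhood(2, 1.0))  # True
```

In SOM training this h multiplies the learning rate, so the winner's neighbors move toward the input proportionally to this bell curve, and shrinking sigma over time narrows the collaboration.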

Deep learning: how to represent 24 fraction images as 1 image?

Goal: Represent 24 fraction images as one image. These 24 fractions belong to one patient, and we want to represent each patient with a single image. How should the data be manipulated to achieve this? (The T.A. suggested dividing each picture into 100 key parts containing the crucial data and merging only those key parts.)

My take on this: I have tried the HOG and SIFT (Scale-Invariant Feature Transform) algorithms. As you can see in the attached outputs, HOG produces a black-and-white picture, while the colorful picture is the SIFT output. The problem with both is that the output image does not distinguish the red (cancer) from the blue (non-cancer) dots, i.e. the colors change.

My question: I am still learning deep learning as I go along, so please excuse the naive question. What steps are necessary to represent the 24 fraction images (the red-blue dotted ones) as one image without changing the colors?
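For the color-preservation problem specifically, one hedged starting point (before any keypoint extraction) is to merge the fractions per pixel in RGB space, so the red/blue information is never discarded the way it is by grayscale HOG gradients or SIFT keypoint overlays. The array shapes below are invented for illustration; real fraction images would need to be loaded and spatially aligned first.

```python
import numpy as np

# Hypothetical stand-in data: 24 fraction images per patient,
# each 64x64 RGB (replace with the real, aligned images).
rng = np.random.default_rng(0)
fractions = rng.integers(0, 256, size=(24, 64, 64, 3), dtype=np.uint8)

# Per-pixel mean over the 24 fractions keeps all three color
# channels, so red (cancer) and blue (non-cancer) regions survive.
merged = fractions.mean(axis=0).astype(np.uint8)  # shape (64, 64, 3)
```

A per-pixel maximum (`fractions.max(axis=0)`) is an alternative if faint dots should dominate rather than be averaged away; either way the operation stays in color space, unlike the feature-extraction approaches tried above.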

Is artificial intelligence simply making decisions on the basis of values produced by a machine learning model?

I am researching AI and how it works. Whenever I search for AI algorithms, ML algorithms come up. Then I read about the differences between ML and AI. One of the key points mentioned was "AI is decision making" and "machine learning is generating values and learning new things".

I came to the conclusion that ML allows us to generate some sort of values, and using AI we can make decisions with those values.

But I am confused by the weather-forecast problem. Our machine learning model directly generates the decision of whether it will rain or not. Does our ML model lie in the AI domain, or am I wrong? Help me!
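The split described above can be made concrete with a toy sketch (the model below is a made-up stand-in, not a real forecaster): the ML part produces a value, the probability of rain, and a separate decision rule turns that value into an action. A model that outputs the rain/no-rain label directly has simply folded a fixed threshold into itself, which is why the boundary between "ML" and "AI" feels blurry in the weather example.

```python
# Hypothetical stand-in for a trained ML model: it only produces a value.
def rain_probability(humidity, pressure_hpa):
    p = 0.9 * humidity - 0.001 * (pressure_hpa - 1000.0)
    return min(1.0, max(0.0, p))  # clamp to a valid probability

# The "decision-making" layer: it acts on the value the model produced.
def decide(p, threshold=0.5):
    return "rain" if p >= threshold else "no rain"

p = rain_probability(0.9, 1000.0)  # 0.81
print(decide(p))  # rain
```

Moving the threshold (say, to 0.8 for a costly outdoor event) changes the decision without retraining the model, which is exactly the separation the quoted definitions are gesturing at.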

How does Learn a Spell work with repertoires?

How does Learn a Spell work with repertoires? For example, my primal sorcerer obtained spell scrolls of heal and fear, neither of which were in their repertoire. From my understanding, I roll a DC 15 Nature skill check, spend 2 gp per spell, and 1 hour per level of the spell learned.

What happens next? From what I understand, these new spells would become available as spell choices once the sorcerer went up a level. There’s no immediate advantage to adding spells to the repertoire (unlike a wizard’s spellbook), but it expands available spell choices as the repertoire caster becomes more powerful.

For reference: Learn a Spell: https://2e.aonprd.com/Skills.aspx?ID=4&General=true

Clarification on “clause learning” in DPLL algorithm

I am struggling to understand the idea of conflict-driven clause learning; in particular, I cannot understand why the clause we 'learned' is substantially new (i.e. the clause database does not already contain it, nor any subset of it). Here is what Knuth says in his book:

[Image of the excerpt from Knuth's book omitted]

I can understand why the clause database has no subset of $ c'$ containing $ \overline{l'}$ (because $ \overline{l'}$ would have been forced, i.e. unit-propagated, at a level lower than $ d$ ), but what contradicts the existence of a clause such as $ \overline{b_1}\lor\overline{b_2}$ ?
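For intuition on where learned clauses come from (a generic sketch of the mechanism behind clause learning, not Knuth's specific argument): starting from the conflicting clause, the solver repeatedly resolves away literals using the "reason" clauses that forced them, and the final resolvent is the learned clause. Literals are encoded as signed integers, with -v denoting the negation of variable v.

```python
def resolve(c1, c2, var):
    """Resolve clauses c1 and c2 on variable `var`:
    c1 must contain var, c2 must contain its negation -var.
    The resolvent keeps all other literals of both clauses."""
    assert var in c1 and -var in c2
    return (c1 - {var}) | (c2 - {-var})

# A conflict clause, and the reason clause that propagated literal 2:
conflict = {-1, -2}      # x1' OR x2'
reason_of_2 = {-3, 2}    # x3' OR x2  (x2 was forced because x3 is true)

learned = resolve(reason_of_2, conflict, 2)  # {-1, -3}: x1' OR x3'
```

The resolvent {-1, -3} rules out the assignment (x1 = true, x3 = true) that led to the conflict, which neither input clause stated on its own; the subsumption argument in the quoted passage is what guarantees no existing clause already covers it.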

How are machine learning libraries created?

I would like to know how machine learning libraries (or, in general, libraries at large scale) are created. I mean, Python doesn't have a built-in array system, but C has. So how are such things made available to Python, and how do developers start such a project and develop it into the final product we know today (like NumPy)?
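As a rough illustration of the layering (the `Vector` class and its methods are invented for this sketch): libraries like NumPy implement the storage and hot loops in C via Python's C-extension mechanism, then expose a friendly Python-level API on top. The stdlib `array` type is itself written in C, so it can stand in for the C layer here without needing a compiler.

```python
from array import array  # C-backed typed storage from the stdlib

class Vector:
    """Tiny sketch of a NumPy-like wrapper: a Python-level API
    over C-backed storage (the stdlib `array` type is written in C)."""

    def __init__(self, data):
        self._buf = array('d', data)  # contiguous C doubles, not a Python list

    def __add__(self, other):
        # Element-wise add; a real library would do this loop in C.
        return Vector(a + b for a, b in zip(self._buf, other._buf))

    def tolist(self):
        return list(self._buf)

v = Vector([1.0, 2.0]) + Vector([3.0, 4.0])
print(v.tolist())  # [4.0, 6.0]
```

Real projects grow the same way: a thin C (or Cython/C++) core for speed and memory layout, a Python wrapper for usability, and years of iteration on both, which is roughly NumPy's history starting from the earlier Numeric package.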

P.S. Let me know if this is not the right community for asking general computing questions; there is significant overlap among the CS Stack Exchange forums, and if this is not the right place, please recommend the appropriate Stack Exchange site for such questions.

Also, I couldn't find relevant tags, so I had to tag it with machine learning.