## How do you discover what libraries to use when solving a problem?

I’ve read that experienced software developers tend to use libraries more often than less experienced developers. However, how does one find out about these libraries?

How does one even become aware that they have a problem that could be solved by a library?

After learning the syntax of a language, do developers tend to spend their time learning the most popular libraries for their specific language on GitHub, to add to their mental toolbox?

Or do they just Google “library for solving X” as they work on a project?

## Numerically solving a special singular integral equation

I am trying to solve the following integral equation numerically using Mathematica. In fact, the exact solution is x^2 (1 - x).

First we define the following functions:

phi[x_] := Piecewise[{{1, 0 <= x < 1}}, 0]
f[x_] := (1/1155) (112 (-1 + x)^(3/4) + x (144 (-1 + x)^(3/4) + x (1155 + 256 (-1 + x)^(3/4) - 1280 x^(3/4) - (1155 + 512 (-1 + x)^(3/4)) x + 1024 x^(7/4))))
exactsoln[x_] := x^2 (1 - x)

I am trying to solve the following integral equation for u[x] numerically:

u[x] - Integrate[(x - t)^(-1/4)*u[t], {t, 0, x}] - Integrate[(x - t)^(-1/4)*u[t], {t, 0, 1}] == f[x]

where f[x] is defined as above. Here is the numerical scheme. Our goal is to find the coefficients c[j, k]. We approximate the solution u by the approximate solution approxsoln[x, n], which can be written as

approxsoln[x_, n_] := Sum[c[j, k]*psijk[x, j, k], {j, -n, n}, {k, -2^n, 2^n - 1}]

If we plug approxsoln[x, n] into the integral equation, we end up with

Sum[c[j, k]*(psijk[x, j, k] - Integrate[(x - t)^(-1/4)*psijk[t, j, k], {t, 0, x}] - Integrate[(x - t)^(-1/4)*psijk[t, j, k], {t, 0, 1}]), {j, -n, n}, {k, -2^n, 2^n - 1}] == f[x]

Now everything is known except the coefficients c[j, k]. We need a suitable subdivision, perhaps dividing the interval [0, 1] into (2n+1) 2^(n+1) points (to match the size of the truncated sum in approxsoln[x, n]), to be used in the equation to construct a system of linear equations and find these coefficients, which in turn give the approximate solution approxsoln[x, n] defined above. Is there any way to code this problem in Mathematica? I think it is worth trying n = 10 first.
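I cannot test the wavelet scheme without a concrete psijk, but the collocation idea can be sketched. Below is a hedged Python sketch (my own, not Mathematica) that handles only the Volterra part, u(x) - ∫₀ˣ (x-t)^(-1/4) u(t) dt = g(x), with a manufactured right-hand side g chosen so that the exact solution is x^2 (1 - x); the Fredholm term and the wavelet basis are left out. The singular kernel is integrated exactly over each subinterval (product integration), and u is taken piecewise constant at midpoints:

```python
import numpy as np

def g(x):
    # Manufactured right-hand side so that u(x) = x^2 (1 - x) solves
    # u(x) - Integral_0^x (x - t)^(-1/4) u(t) dt = g(x).
    return x**2 * (1 - x) - (128/231) * x**2.75 + (512/1155) * x**3.75

def solve(N):
    h = 1.0 / N
    t = np.linspace(0.0, 1.0, N + 1)      # subinterval endpoints
    m = (t[:-1] + t[1:]) / 2              # midpoints = collocation points
    u = np.zeros(N)
    for j in range(N):
        # Exact kernel integrals over each full subinterval left of m[j]:
        # Integral_{t_i}^{t_{i+1}} (m_j - s)^(-1/4) ds
        w = (4/3) * ((m[j] - t[:j])**0.75 - (m[j] - t[1:j + 1])**0.75)
        wjj = (4/3) * (h / 2)**0.75       # partial piece [t_j, m_j]
        # Forward substitution: the system is lower triangular.
        u[j] = (g(m[j]) + np.dot(w, u[:j])) / (1 - wjj)
    return m, u

m, u = solve(400)
err = np.max(np.abs(u - m**2 * (1 - m)))  # error vs the exact solution
```

The same product-integration weights could be built symbolically in Mathematica with Integrate; the point of the sketch is only that the weakly singular kernel should be integrated in closed form per subinterval rather than fed to a generic quadrature rule.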

## Ignoring constant when solving Legendre equation of order 1 using reduction of order method

I’m studying a course on ODEs and there is an example given where Legendre’s equation is solved using reduction of order as follows:

$$(1-x^2)y''-2xy'+2y=0, \qquad P_1(x)=x$$

$$y(x)=xv,\quad y'=v+xv',\quad y''=2v'+xv'' \implies (1-x^2)(2v'+xv'')-2x(v+xv')+2xv=0 \implies \frac{v''}{v'}=\frac{-2}{x}+\frac{1}{1-x}-\frac{1}{1+x}$$

after rearranging and solving the partial fraction decomposition. From here the integral is taken of both sides to give

$$\log{v'} = -2\log{(x)}-\log{(1-x)}-\log{(1+x)}$$

and it is stated that constants of integration do not matter at this point, which I do not understand. On exponentiating and integrating again, the final answer is given as

$$v=\frac{-1}{x}+\frac{1}{2}\log{\left(\frac{1+x}{1-x}\right)}\left[ + C \right]$$

Similarly, here the constant is bracketed as if it were optional. If $$v$$ were solved with constants of integration, I believe the answer would be of the form

$$v=C_1\left[\frac{-1}{x}+\frac{1}{2}\log{\left(\frac{1+x}{1-x}\right)}\right]+C_2$$

Is this solution considered 'the same' because it is just a multiple of the previous solution (so not linearly independent)? Is it standard to ignore the constant of integration with this method, or would it be safer to leave it in?
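For what it's worth, here is a sketch (my own, not from the course) of why the constants are harmless here. Keeping the constant in the first integration gives

$$\log v' = \log\frac{1}{x^2(1-x^2)} + C_1 \implies v' = \frac{e^{C_1}}{x^2(1-x^2)},$$

so $$C_1$$ enters only as the positive overall factor $$e^{C_1}$$ multiplying $$v'$$ and hence $$v$$; since the second solution is $$y_2 = xv$$ and any nonzero scalar multiple of a solution is again a solution, one may set $$e^{C_1}=1$$. The additive constant $$C_2$$ from the second integration contributes $$C_2 x = C_2 P_1(x)$$ to $$y_2$$, a multiple of the solution already known, so it can be dropped as well.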

## Solving using the master theorem: T(n)=T(n/2)+n⋅log n and T(n)=T(n/8)+2n

Could someone help me with these 2 questions?

I do not understand case 3.

First: T(n) = T(n/2) + n log n

Second: T(n) = T(n/8) + 2n
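Not a proof, but a quick numeric sanity check (a Python sketch of my own, assuming a base case T(1) = 0 and log = log₂) that in both recurrences the non-recursive term dominates, which is the case-3 pattern — the ratios T(n)/(n log n) and T(n)/n settle toward constants:

```python
import math

def T1(n):
    # T(n) = T(n/2) + n*log2(n), assumed base case T(1) = 0
    return 0.0 if n <= 1 else T1(n // 2) + n * math.log2(n)

def T2(n):
    # T(n) = T(n/8) + 2n, assumed base case T(1) = 0
    return 0.0 if n <= 1 else T2(n // 8) + 2 * n

r1 = T1(2**20) / (2**20 * 20)   # T(n) / (n log2 n): approaches a constant near 2
r2 = T2(8**7) / 8**7            # T(n) / n: approaches the constant 16/7
```

Both ratios flattening to a constant is what Θ(n log n) and Θ(n), respectively, would predict.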

## Introduction

I've been developing an interest in complexity classes. I was unable to prove my problem is NP-hard, so I wanted to see if it is P-hard, and whether my puzzle solving is special. I have linked a previous question of mine to help clarify what is mentioned below: Explained here

## Random instance of 2-SAT used for an attempted reduction

I took an instance of my problem X

Let’s say my problem X is determining if a puzzle is valid for my language

(a∨¬b)∧(¬a∨b)∧(¬a∨¬b)∧(a∨¬c)

Instance of X

a = shift(L) puzzle

¬a = invalid puzzle

¬b = invalid puzzle

b = shift(L) puzzle

## Reduction to a shorter instance

(a∨¬b)∧(¬a∨b)

This Boolean expression is only checking 2-SAT satisfiability: True (valid puzzle) or False (invalid puzzle).
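As a side note, instances this small can be checked by brute force; here is a short Python sketch (my own, purely illustrative) that enumerates all assignments of the original instance:

```python
from itertools import product

def full(a, b, c):
    # (a ∨ ¬b) ∧ (¬a ∨ b) ∧ (¬a ∨ ¬b) ∧ (a ∨ ¬c)
    return (a or not b) and (not a or b) and (not a or not b) and (a or not c)

# Enumerate all 2^3 assignments and keep the satisfying ones.
sols = [(a, b, c) for a, b, c in product([False, True], repeat=3) if full(a, b, c)]
# sols == [(False, False, False)]: satisfiable, by the single assignment a = b = c = False
```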

My idea is that a deterministic machine will consider n² × n² puzzles invalid if the lower right-hand box is not filled. My puzzles are solved in quadratic time when given a puzzle with this n × n box. Any other puzzle will simply fail to be solved if it does not follow my language. I have an algorithm that can determine in poly-time that those failed puzzles are invalid. The solver will not know a puzzle is invalid, so it will try to map it out in my language, and this yields an invalid puzzle.

## Question

I have the algorithm and have shown that it works in poly-time; the problem is that I only know how to prove this by exhibiting the algorithm. I just don't know how to write it out mathematically. How would I properly do this?

## A question about solving the primal problem via the dual

I have a question from reading Convex Optimization by Boyd; it is about solving the primal problem via the dual (page 248):

Suppose we have strong duality and an optimal $$(\lambda^*,v^*)$$ is known, and suppose that the minimizer of $$L(x,\lambda^*,v^*)$$ is unique. Then why is it that "if the solution of $$\text{minimize}\quad L(x,\lambda^*,v^*)$$ is not feasible, then no primal optimal point can exist"? What does "no primal optimal point can exist" mean — does it mean that the primal problem is unsolvable?
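A toy instance (my own, not from the book) may make the statement concrete. Take: minimize x² subject to x ≥ 1. Strong duality holds, the dual optimum is λ* = 2, and the minimizer of L(x, λ*) = x² + λ*(1 − x) is unique; the sketch below locates it by grid search and checks that it is primal feasible:

```python
# Toy problem (illustrative, not Boyd's example):
#   minimize x^2  subject to  1 - x <= 0
# Dual function: g(lam) = min_x L(x, lam) = lam - lam^2/4, maximized at lam* = 2.
lam_star = 2.0
L = lambda x: x**2 + lam_star * (1 - x)        # Lagrangian at the dual optimum
xs = [i / 1000 for i in range(-3000, 3001)]    # grid over [-3, 3]
x_min = min(xs, key=L)                         # unique minimizer of L(., lam*)
feasible = (1 - x_min) <= 1e-12                # primal feasibility check
```

Here x_min = 1 is feasible, and it is indeed the primal optimum. Boyd's remark is the contrapositive: any primal optimum would itself have to minimize L(x, λ*), so if the unique minimizer of L(x, λ*) is infeasible, no primal optimal point can exist (the primal optimal value may still be finite but unattained).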

## Methods of solving linear system of equations, how to select the appropriate method

A linear system of equations Ax = b can be solved using various methods, namely the inverse method, Gauss/Gauss-Jordan elimination, LU factorization, EVD (eigenvalue decomposition), and SVD (singular value decomposition).
I know that there are several disadvantages to the inverse method; for example, with an ill-conditioned matrix A the solution cannot be computed reliably. Moreover, I know that when the vector b changes, LU factorization has an advantage over Gauss/Gauss-Jordan elimination.
How do I decide between LU, SVD, and EVD?
Is there any scenario where Gauss/Gauss-Jordan elimination has an advantage over LU, SVD, and EVD?
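To make the conditioning point concrete, here is a hedged NumPy sketch (my own example) using the Hilbert matrix, a standard ill-conditioned test case:

```python
import numpy as np

n = 10
# Hilbert matrix H[i, j] = 1/(i + j + 1): notoriously ill-conditioned.
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true

cond = np.linalg.cond(H)                 # roughly 1e13 for n = 10
x_inv = np.linalg.inv(H) @ b             # explicit inverse: generally discouraged
x_lu = np.linalg.solve(H, b)             # LU with partial pivoting
res_lu = np.linalg.norm(H @ x_lu - b)    # residual stays tiny despite the huge cond
```

Even when the residual is tiny, the forward error in x can be as large as roughly cond(H) times machine epsilon, which is why conditioning, rather than the choice between inverse, LU, or SVD, is often the limiting factor.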

## Different methods of solving a linear system of first order DE?

$$x'(t)=\begin{pmatrix} 1 & -\frac{2}{3} \\ 3 & 4 \end{pmatrix}x(t).$$

This is the method described in my book:

• finding the eigenvalues of matrix $$A = \begin{pmatrix} 1 & -\frac{2}{3} \\ 3 & 4 \end{pmatrix}$$: $$\begin{vmatrix} 1-\lambda & -\frac{2}{3} \\ 3 & 4-\lambda \end{vmatrix}=0\iff \lambda=3,\ \lambda=2.$$

• eigenvector corresponding to $$\lambda=3$$: $$\begin{pmatrix} -1/3 \\ 1 \end{pmatrix}$$, eigenvector corresponding to $$\lambda=2$$: $$\begin{pmatrix} -2/3 \\ 1 \end{pmatrix}$$.

• define $$C = \begin{pmatrix} -2/3 & -1/3 \\ 1 & 1 \end{pmatrix}$$, then $$C^{-1} = \begin{pmatrix} -3 & -1 \\ 3 & 2 \end{pmatrix}$$, and $$A = C\operatorname{diag}(\lambda_1,\lambda_2)C^{-1} = \begin{pmatrix} -2/3 & -1/3 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} -3 & -1 \\ 3 & 2 \end{pmatrix}.$$

• calculate $$e^{tA}$$. Using the previous expression for $$A$$ we get $$e^{tA}=\begin{pmatrix} -2/3 & -1/3 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} e^{2t} & 0 \\ 0 & e^{3t} \end{pmatrix}\begin{pmatrix} -3 & -1 \\ 3 & 2 \end{pmatrix} = \begin{pmatrix} 2e^{2t}-e^{3t} & \frac{2}{3}e^{2t}-\frac{2}{3}e^{3t} \\ -3e^{2t}+3e^{3t} & -e^{2t}+2e^{3t} \end{pmatrix}.$$

• the solution of the given system, with initial condition $$x_0=(x_1,x_2)^t$$, equals $$e^{tA}x_0$$: $$e^{tA}x_0 = \dots = x_1\begin{pmatrix} 2e^{2t}-e^{3t} \\ -3e^{2t}+3e^{3t} \end{pmatrix}+x_2\begin{pmatrix} \frac{2}{3}e^{2t}-\frac{2}{3}e^{3t} \\ -e^{2t}+2e^{3t} \end{pmatrix}$$

Now, I have looked up some extra exercises online, but these seem to solve such systems in a shorter way: the solution of the system above would be given by $$c_1e^{2t}\begin{pmatrix} -2/3 \\ 1 \end{pmatrix} + c_2e^{3t}\begin{pmatrix} -1/3 \\ 1 \end{pmatrix}.$$

What is the difference between the two approaches? The method used by my book seems to have more coefficients (and is therefore maybe a little more detailed/exact?). Are these solution methods equivalent? Which one would you use?
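A quick NumPy check (my own sketch, with an arbitrary initial condition) shows the two approaches produce the same trajectory; the shorter form is the same diagonalization, with the constants fixed by $$c = C^{-1}x_0$$:

```python
import numpy as np

A = np.array([[1.0, -2/3], [3.0, 4.0]])
w, V = np.linalg.eig(A)        # eigenvalues 2 and 3; eigenvectors as columns of V
x0 = np.array([1.0, -2.0])     # an arbitrary initial condition
t = 0.7

# Book's route: x(t) = e^{tA} x0, with e^{tA} = V diag(e^{wt}) V^{-1}.
etA = V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)
sol_book = etA @ x0

# Shorter route: x(t) = c1 e^{2t} v1 + c2 e^{3t} v2, with c = V^{-1} x0.
c = np.linalg.inv(V) @ x0
sol_short = V @ (c * np.exp(w * t))
```

The two results coincide for every x0 and t, so the methods are equivalent; the shorter one simply leaves the constants c1, c2 to be determined from the initial condition instead of carrying the full matrix e^{tA}.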

Thanks.

## Solving an equation in a noncommutative ring

Suppose $$R$$ is a noncommutative ring with unit that has the properties necessary for a right and left skew field of fractions to exist (i.e. $$R$$ has no zero divisors, is left and right Noetherian, and satisfies the left and right Ore condition). Let $$x,y\in R$$ be nonzero elements such that $$x,y$$ are prime and $$xy=yx$$ in $$R$$.

I am interested in showing that the solutions to the equation $$xp+yq=0$$ in $$R$$ must be of the form $$p=yt, q=-xt$$ for some $$t\in R$$.

Here is my reasoning:

Since there are no zero divisors in $$R$$, we know both $$p$$ and $$q$$ are nonzero. In the skew field of fractions we can rewrite the equation as $$p=-x^{-1}yq$$, but since $$x$$ and $$y$$ commute, we have $$p=-yx^{-1}q.$$ Since the left side resides in $$R$$, it seems like $$q=xt$$ for some $$t\in R$$. By a similar argument, we can conclude that $$p=ys$$ for some $$s\in R$$. Hence the equation becomes $$xys+yxt=0.$$ Multiplying by $$x^{-1}y^{-1}$$ yields $$s=-t,$$ proving the result.

I feel I may be making a mistake; some of the steps may not work in such a general setting. Thanks.

## Solving for the exponent in a congruence

I thought of another (in my opinion) cool idea in number theory: exponential congruence equations such as

$$10^n \equiv 73\pmod {729}$$

But I do not know how to solve this by hand. I've tried taking the Carmichael lambda of 729, which yields 486, but I don't know how to apply it in this situation. I have knowledge of CRT and how to do modular exponentiation using more than one method. I used WolframAlpha to get the solution $$n=80 + 81x$$, where $$x$$ is a nonnegative integer, but it's no use if I don't know the process.
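A brute-force Python check (my own sketch, not the by-hand method being asked for) confirms the pattern: since λ(729) = 486, every solution appears among the exponents 0 ≤ n < 486, and the spacing 81 is the multiplicative order of 10 modulo 729. Note also that 73 is exactly the inverse of 10 mod 729, since 10 · 73 = 730 ≡ 1, so the equation is really 10^n ≡ 10^(-1):

```python
# Brute-force search over one period of exponents; lambda(729) = 486.
sols = [n for n in range(486) if pow(10, n, 729) == 73]
# sols == [80, 161, 242, 323, 404, 485], i.e. n ≡ 80 (mod 81)

# Multiplicative order of 10 modulo 729 (smallest k with 10^k ≡ 1).
order = next(k for k in range(1, 487) if pow(10, k, 729) == 1)  # 81
```

This matches WolframAlpha's n = 80 + 81x: since the order is 81 and 73 ≡ 10^(-1), the solutions are exactly n ≡ −1 ≡ 80 (mod 81).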

If someone could please help me by explaining the process to solve the example problem, I would be very thankful.