Plotting a small gaussian | Small values and dealing with Machine Precision

I’ve defined the following:

k := 1.38*10^-16
kev := 6.242*10^8
q := 4.8*10^-10
g := 1.66*10^-24
h := 6.63*10^-27

and

b = ((2^(3/2)) (\[Pi]^2)*1*6*(q^2)*(((1*g*12*g)/(1*g + 12*g))^(1/2)))/h
T6 := 20
T := T6*10^6
e0 := ((b k T6*10^6)/2)^(2/3)
\[CapitalDelta] := 4/\[Sqrt]3 (e0 k T6*10^6)^(1/2)
\[CapitalDelta]kev = \[CapitalDelta]*kev
e0kev = e0*kev
bkev = b*kev^(1/2)

Then, I want to plot these functions:

fexp1[x_] = E^(-bkev*(x*kev)^(-1/2))
fexp2[x_] = E^(-x/(k*T))
fexp3[x_] = fexp1[x]*fexp2[x]

and check that this Taylor expansion works:

fgauss[x_] = Exp[-3 (bkev^2/(4 k T*kev))^(1/3)]*
  Exp[-((x*kev - e0kev)^2/(\[CapitalDelta]kev/2)^2)]

which should reproduce, e.g., the expected plot:

Figure 10.1

This plot comes from the "Stellar Astrophysics notes" of Edward Brown (the approximation itself is also well known).

I used this command to plot:

Plot[{fexp1[x], fexp2[x], fexp3[x], fgauss[x]}, {x, 0, 50},
 PlotStyle -> {{Blue, Dashed}, {Dashed, Green}, {Thick, Red}, {Thick, Black, Dashed}},
 PlotRange -> Automatic, PlotTheme -> "Detailed",
 GridLines -> {{{-1, Blue}, 0, 1}, {-1, 0, 1}},
 AxesLabel -> {Automatic}, Frame -> True,
 FrameLabel -> {Style["Energy E", FontSize -> 25, Black],
   Style["f(E)", FontSize -> 25, Black]}, ImageSize -> Large,
 PlotLegends ->
  Placed[LineLegend[{"", "", "", ""}, Background -> Directive[White, Opacity[.9]],
    LabelStyle -> {15}, LegendLayout -> {"Column", 1}], {0.35, 0.75}]]

but it seems that Mathematica doesn't like huge negative exponentials: at machine precision they underflow to zero. I know I can compute this using Python, but it's a surprise to think that Mathematica can't deal with the problem somehow. Could you help me?
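
Since Python is mentioned anyway, here is the underflow itself cross-checked there (my own snippet, using mpmath as one arbitrary-precision option; the analogous Mathematica move would be switching from machine precision to arbitrary precision):

import math
import mpmath as mp

# double precision: exp(-745) is already subnormal, exp(-746) flushes to 0.0,
# so any product involving it is lost for good
print(math.exp(-745.0))    # ~5e-324
print(math.exp(-746.0))    # 0.0

# arbitrary precision keeps tiny magnitudes alive
mp.mp.dps = 30                      # work with 30 significant digits
tiny = mp.exp(-2000)
print(tiny)                         # ~2.6e-869, still a usable number
print(tiny * mp.exp(1990))          # e^(-10) ~ 4.5e-5, recovered where the
                                    # double-precision pipeline returns 0.0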

Gaussian distribution with condition?



I am reading a research paper and found the following expression (Eq. 28 in the paper, linked below).

[image: Eq. 28 from the paper]

It denotes a Gaussian distribution, but the mean component looks like a conditional-probability expression, $\mathbf{s}_t \mid \mathbf{m}_{b,t,m}^{(j)}$. I have never seen this notation before and cannot find any information about it.

The variables $\mathbf{s}_t$ and $\mathbf{m}_{b,t,m}^{(j)}$ are both vectors, and $\mathbf{\Sigma}_b$ is a covariance matrix.
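
For reference, under the notation convention of, e.g., Bishop's Pattern Recognition and Machine Learning (which I am assuming the paper follows), the bar inside $\mathcal{N}(\cdot)$ separates the point where the density is evaluated from the distribution's parameters, rather than denoting conditioning:

$$\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{d/2}\,\lvert\boldsymbol{\Sigma}\rvert^{1/2}} \exp\!\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right)$$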

Does anybody have an idea of what this expression means?

The original paper where the expression appears can be found here: https://eprints.soton.ac.uk/437941/1/08340823.pdf

Thanks in advance.

When does Gaussian elimination solve exact 1-in-3 SAT?

Terms:

A literal is a variable or its negation.

A clause is a set of literals.

An exact 1-in-3 clause is satisfied if an assignment of values to variables results in exactly 1 true literal and 2 false literals.

Exact 1-in-3 SAT is the problem: given a set of exact 1-in-3 clauses, is there an assignment of variables that satisfies all clauses?

Question:

This corresponds to a linear algebra problem, sort of:

Let true be 1 and false be -1.

For each variable v and its negation w, add the equation:

v + w = 0

(This is because 1 + (-1) = 0)

For each exact 1-in-3 clause (a b c), add the equation:

a + b + c = -1

(This is because two -1's and one 1 will add up to -1.)

It's possible that solving the equations yields values other than 1 or -1. However, if the solution to the system of equations consists only of 1's and -1's, I suspect it is a valid solution to the original exact 1-in-3 problem.

So, when does Gaussian elimination solve exact 1-in-3 SAT?

Here’s an example when it does:

These clauses:

(1 2 3) (2 3 -2) (2 3 -3)

Correspond to this matrix:

1, 0, 0, 1, 0, 0,  0
0, 1, 0, 0, 1, 0,  0
0, 0, 1, 0, 0, 1,  0
1, 1, 1, 0, 0, 0, -1
0, 1, 1, 0, 1, 0, -1
0, 1, 1, 0, 0, 1, -1

Reduced row echelon:

1, 0, 0, 0, 0, 0,  1
0, 1, 0, 0, 0, 0, -1
0, 0, 1, 0, 0, 0, -1
0, 0, 0, 1, 0, 0, -1
0, 0, 0, 0, 1, 0,  1
0, 0, 0, 0, 0, 1,  1

Therefore the solution (read from the far-right column) is: (1 -2 -3)

Does this always work on larger matrices with 2*n rows and 2*n+1 columns where n is the number of variables? (I think it may need non-redundant (linearly independent?) rows.)
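
Here is a small sketch of the construction (my own code; sympy's exact rref stands in for hand Gaussian elimination, and literals are signed integers as in the clauses above):

from sympy import Matrix

def one_in_three_rref(clauses, n):
    """Build the +-1 system: columns are v1..vn, w1..wn, then the RHS."""
    def col(lit):
        return lit - 1 if lit > 0 else n + (-lit - 1)

    rows = []
    for v in range(1, n + 1):          # v + w = 0 for each variable/negation pair
        row = [0] * (2 * n + 1)
        row[col(v)] = row[col(-v)] = 1
        rows.append(row)
    for clause in clauses:             # a + b + c = -1 for each clause
        row = [0] * (2 * n + 1)
        for lit in clause:
            row[col(lit)] += 1
        row[-1] = -1
        rows.append(row)
    return Matrix(rows).rref()         # exact reduced row echelon form

rref, pivots = one_in_three_rref([(1, 2, 3), (2, 3, -2), (2, 3, -3)], 3)
print(rref)   # far-right column is the candidate assignment when all 6 columns pivot

On the example above this should print the identity block with far-right column (1, -1, -1, -1, 1, 1), i.e. the assignment (1 -2 -3).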

Finding all rows of 2 variables using Gaussian Elimination

Suppose I have a system of linear equations. Using Gaussian elimination, I can determine whether a solution exists, and even find a valid solution.

During elimination, I can combine rows to produce new rows with different numbers of variables. Is there a method to find all possible rows that contain exactly 2 variables? For example, I may want to find all equalities between variables, which is equivalent to finding all derivable rows that contain exactly 2 variables. Is it possible to do this without trying all (exponentially many) combinations of rows?

For example if I have:

Row 1: A xor B xor C = 1

Row 2: A xor B xor D = 1

I can combine row 1 and row 2 to say that C xor D = 0

If I have a large number of rows, and producing small rows requires combining many large ones, is it trivial or hard to find all rows of size 2? Can I do better than adding random pairs to the system and checking that it still has a solution?
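
One polynomial-time sketch (my own approach, not a standard named algorithm): reduce the XOR system to echelon form over GF(2) once, then test each of the O(n^2) variable pairs for membership in the row space; a pair row is derivable exactly when it reduces to at most an RHS bit:

import itertools

def reduce_row(basis, row):
    """XOR row against basis rows until its leading variable bit is new or gone."""
    while row > 1:                      # bit 0 is the RHS; bits >= 1 are variables
        pivot = row.bit_length() - 1
        if pivot not in basis:
            return row
        row ^= basis[pivot]
    return row                          # 0 = derivable, 1 would mean "0 = 1"

def all_two_variable_rows(equations, n):
    """equations: list of (set_of_variable_indices, rhs_bit); variables are 1..n."""
    basis = {}
    for vars_, rhs in equations:
        row = rhs
        for v in vars_:
            row |= 1 << v
        row = reduce_row(basis, row)
        if row > 1:
            basis[row.bit_length() - 1] = row
    pairs = []
    for i, j in itertools.combinations(range(1, n + 1), 2):
        residue = reduce_row(basis, (1 << i) | (1 << j))
        if residue <= 1:                # every variable bit was eliminated
            pairs.append((i, j, residue))   # means: x_i xor x_j = residue
    return pairs

# The two example rows: A xor B xor C = 1 and A xor B xor D = 1 (A..D -> 1..4)
print(all_two_variable_rows([({1, 2, 3}, 1), ({1, 2, 4}, 1)], 4))

On the example it should print [(3, 4, 0)], i.e. C xor D = 0.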

land-cover classification (matlab) (maximum likelihood) Gaussian models [closed]

Remotely sensed data are provided as 6 images showing an urban area, with the ground-truth information. These images have already been registered. You are required to implement the Maximum Likelihood (ML) algorithm to classify the given data into four classes, 1 – building; 2 – vegetation; 3 – car; 4 – ground. By doing so, each pixel in the images will be assigned a class. There are four objectives to achieve the final classification goal.

To select training samples from given source data based on information in the given ground truth (at least 20 training samples for each class);

To establish Gaussian models for each class with the training samples (a small sketch of this and the next objective follows the list);

To apply maximum likelihood to the testing data (measured data) and classify each pixel into a class;

To evaluate the classification accuracy by using a confusion matrix and visual aid (colour coded figures).
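
Not the required MATLAB, but a minimal numpy sketch of objectives 2 and 3 under my own assumptions (full covariance per class, each pixel flattened to a row of a 6-band feature matrix):

import numpy as np

def fit_gaussians(X_train, y_train, classes=(1, 2, 3, 4)):
    """Objective 2: one Gaussian (mean, covariance) per class."""
    models = {}
    for c in classes:
        Xc = X_train[y_train == c]         # >= 20 training pixels per class
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)
        models[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return models

def classify_ml(X, models):
    """Objective 3: assign each pixel vector to the class of maximum likelihood."""
    labels, scores = list(models), []
    for mu, prec, logdet in (models[c] for c in labels):
        d = X - mu
        # log N(x | mu, cov) up to an additive constant shared by all classes
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, prec, d) + logdet))
    return np.array(labels)[np.argmax(scores, axis=0)]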

Gaussian noise: Image Processing

As we all know, Gaussian noise follows the Gaussian (normal) distribution, whose density is a bell curve.

[image: bell curve of the normal distribution]

As we can see, most of the values are centered around the mean.

Now consider this image

[image: original test image]

When I add Gaussian noise to this image, I get something like this:

[image: the same image with Gaussian noise added]

As we can see, the noise appears to be uniformly distributed throughout the image. There is no region where we can say the noise is concentrated around the mean value.

So how can this be called Gaussian noise?

The code that I used in Octave is given below:

pkg load image;
I = imread('C:\Users\Hirak\Desktop\apple.jpg');
I = rgb2gray(I);
J = imnoise(I, 'gaussian', 0.02);
K = medfilt2(J);
imshow(J);
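
As a side check (my own snippet, in Python rather than Octave): "Gaussian" describes the histogram of the noise values, not their spatial placement, so an i.i.d. Gaussian field looks uniform across the image while its amplitude histogram is a bell curve:

import numpy as np
import matplotlib.pyplot as plt

noise = np.random.normal(loc=0.0, scale=0.1, size=(256, 256))  # i.i.d. per pixel

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.imshow(noise, cmap='gray')     # looks spatially uniform: every pixel draws
ax1.set_title('noise field')       # independently from the same distribution
ax2.hist(noise.ravel(), bins=100)  # but the values themselves follow the bell curve
ax2.set_title('amplitude histogram')
plt.show()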

Interpretability of feature weights from Gaussian process classifier

Suppose I trained a Gaussian process classifier with a linear kernel (using GPML toolbox) and got some feature weights for each input feature.

My question is then:

Does it make sense (and if so, when) to interpret the weights as indicating the real-life importance of each feature, or to interpret, at the group level, the average over the weights of a group of features?

Are neural network latent representations fitting a Gaussian distribution?

Neural network latent (or pre-activation) representations often fit a Gaussian-like bell-shaped distribution:

[image: histogram of pre-activations with a Gaussian-like shape (image source)]

The representations are weighted sums of the inputs to the neurons. Within a layer, before batch normalization, it would be interesting to know whether the pre-activations are distributed in a particular way.
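
A toy check of that intuition (my own snippet): a weighted sum of many roughly independent inputs tends toward a bell shape by the central limit theorem, even when the inputs themselves are far from Gaussian:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(10_000, 512))        # batch of non-Gaussian inputs
W = rng.normal(0, 1 / np.sqrt(512), size=(512, 1))
pre_activations = X @ W                           # one neuron's pre-activations

plt.hist(pre_activations.ravel(), bins=80)        # approximately bell-shaped
plt.title('pre-activation histogram')
plt.show()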

My question is about finding an explanation or mention of this observation in the literature. Is there any reference supporting it, or explaining what a non-Gaussian-like distribution indicates?

Mathematical Techniques to Reduce the Width of a Gaussian Peak

In chemical analysis by instruments, the signals of several molecules overlap, which makes it difficult to determine the true area of each peak, such as those shown in red in the figure at the end of this post. I simulated this as a sum of six Gaussians (with some tailing elements).

  1. One of the simplest techniques is to raise the discrete signal values to a positive power (n > 1). The standard deviations of the Gaussians become smaller and smaller (C, in blue). The big drawback is that we lose all the original peak-area information: the transformed data is highly resolved, but at the cost of losing the true areas.

  2. Alternatively, we can add a first derivative of the signal to, and subtract a second derivative from, the original signal, i.e., Sharpened signal = Original signal + K*(first derivative) - J*(second derivative)

K and J are small positive real numbers. This neat "trick" maintains the true area because the area under the derivatives is negligible (zero in the ideal case). A sketch of this transformation follows below.
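
A minimal numpy sketch of technique 2 (my own toy data, two overlapping Gaussians; K and J are hand-tuned and would need rescaling for a different x-grid):

import numpy as np

x = np.linspace(0, 10, 2001)
signal = np.exp(-(x - 4.8)**2 / 0.18) + 0.8 * np.exp(-(x - 5.4)**2 / 0.18)

d1 = np.gradient(signal, x)
d2 = np.gradient(d1, x)
K, J = 0.0, 0.02      # K is often 0 for symmetric peaks; J sets the narrowing
sharpened = signal + K * d1 - J * d2

dx = x[1] - x[0]      # the derivative terms integrate to ~0, so area survives
print(signal.sum() * dx, sharpened.sum() * dx)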

Do mathematicians use any other transformations that can make each overlapping peak very narrow yet maintain the original peak areas? I am not interested in curve-fitting techniques at the moment. Any pointers to similar "peak sharpening" transformations that can resolve overlapping signals would be appreciated.

Thanks.

[image: simulated sum of six overlapping Gaussian peaks (red) and transformed signals (C, in blue)]

How do I jump directly from level 0 to level 5 in a Gaussian pyramid without computing levels 1, 2, 3 and 4?

I am currently studying "The Laplacian Pyramid as a Compact Image Code" by Peter J. Burt and Edward H. Adelson.

I understood all the concepts, but I am having trouble with the equivalent weighting function: I understand how it works, but I can't find the relation between h(n, m) and w(n, m).

Here is the link to the paper; the equations are at the end of page 2: http://persci.mit.edu/pub_pdfs/pyramid83.pdf
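
For what it's worth, here is a 1-D numpy sketch (my own construction, not code from the paper) of how h and w relate: cascading l REDUCE steps with the generating kernel w equals a single convolution with an equivalent kernel h_l followed by downsampling by 2^l, where h_l is w convolved with progressively upsampled copies of itself. That is what lets you jump from level 0 to level 5 in one step (border handling ignored; the 2-D case is the separable product):

import numpy as np

def upsample(kernel, factor):
    """Insert factor-1 zeros between kernel taps."""
    up = np.zeros((len(kernel) - 1) * factor + 1)
    up[::factor] = kernel
    return up

def equivalent_kernel(w, levels):
    """h_l = w * up2(w) * up4(w) * ... , one upsampled copy per extra level."""
    h = np.asarray(w, dtype=float)
    for l in range(1, levels):
        h = np.convolve(h, upsample(w, 2 ** l))
    return h

w = np.array([0.05, 0.25, 0.4, 0.25, 0.05])   # the generating kernel, a = 0.4
h5 = equivalent_kernel(w, 5)
print(len(h5))   # 125 taps; convolve g0 with h5, then keep every 32nd sample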