Creating a list that shows the numerical values, not just Boolean results

Based on the equation I used in the code below, I found how many values are divisible by 5. However, the result is in Boolean form: I only know how many are divisible by 5. I need to formulate a conjecture by exploring the values, but I'm having trouble figuring out how to create a list that gives me the numerical values that are divisible by 5 based on the given function. Here is my program.

    expn = Flatten[Table[1^n + 2^n + 3^n + 4^n, {n, 1, 10000}]];
    sumpowern = Mod[Total /@ expn, 5];
    Count[sumpowern, 0]
    posints = Length[Select[Divisible[Total /@ expn, 5], TrueQ]]
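To make the desired output concrete, here is a quick sketch of the same computation in Python (purely illustrative, not Mathematica): it collects the exponents n for which the sum is divisible by 5, using modular exponentiation so the intermediate numbers stay small.

    # Sketch: exponents n for which 1^n + 2^n + 3^n + 4^n is divisible by 5.
    # pow(b, n, 5) computes b^n mod 5 without building huge integers.
    divisible_n = [
        n for n in range(1, 10001)
        if (pow(1, n, 5) + pow(2, n, 5) + pow(3, n, 5) + pow(4, n, 5)) % 5 == 0
    ]
    print(len(divisible_n))   # 7500
    print(divisible_n[:8])    # [1, 2, 3, 5, 6, 7, 9, 10]

Swapping n for the summed value inside the comprehension would give the values themselves rather than the exponents.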

Best fit giving wrong parameter values

I've been trying to find the fit for my data using NonlinearModelFit.

    impedance[R_, L_, C1_, C2_, f_] :=
      14.03 + ((R + 2 Pi I f L - I/(2 Pi f C1)) 1/(I 2 Pi f C2))/
        (R + I 2 Pi f L + 1/(I 2 Pi f C1) + 1/(I 2 Pi f C2));

    absimp[R_, L_, C1_, C2_, f_] =
      Simplify[ComplexExpand[Abs[impedance[R, L, C1, C2, f]]]];

    NonlinearModelFit[vacamp,
      (14.03*400)/absimp[R, L, C1, C2, \[Omega]],
      {{R, 200}, {L, 10^6}, {C1, 2*10^-12}, {C2, 2*10^-9}},
      \[Omega]]

But this gives me the wrong fit. The data I have looks approximately like a Gaussian in the range \[Omega] = 32740 to 32800. I tried different starting points, but no luck. Would appreciate some help. TIA!
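As a sanity check outside Mathematica, the same model can be sketched with Python/SciPy; the freqs and amps arrays below are synthetic stand-ins for vacamp (an assumption, since the real data isn't shown), so this only demonstrates the shape of the fit:

    import numpy as np
    from scipy.optimize import curve_fit

    def model(f, R, L, C1, C2):
        # series R-L-C1 branch in parallel with C2, plus the 14.03 offset;
        # returns the amplitude (14.03*400)/|Z| used as the fit model above
        w = 2 * np.pi * f
        zs = R + 1j * w * L + 1 / (1j * w * C1)
        zp = 1 / (1j * w * C2)
        return (14.03 * 400) / np.abs(14.03 + zs * zp / (zs + zp))

    freqs = np.linspace(32740, 32800, 200)
    amps = model(freqs, 200.0, 1e6, 2e-12, 2e-9)  # synthetic "data"
    popt, _ = curve_fit(model, freqs, amps, p0=[200.0, 1e6, 2e-12, 2e-9])
    print(popt)

With real, noisy data, rescaling the parameters so they are all of order 1 usually helps, since R, L, C1, and C2 differ by many orders of magnitude.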

AWS RDS is showing very high wait/synch/mutex/sql/ values and EXPLAIN statements in Performance Insights

I'm running a CRON script which checks the database for work and executes anything that needs to be done. It does this across ~500 customers per minute. We are using AWS RDS with a 16 vCPU machine which, until recently, has been more than enough to keep things happy (normally plugging along under 20% CPU).

This weekend we updated customers to the latest version of the code and implemented some tooling, and since then we've started seeing these huge waits: [screenshot of the wait spike in Performance Insights]

Further, I'm seeing that about half of our busiest queries are EXPLAIN statements, as illustrated here: [screenshot of the top-queries list]

Nowhere in our code base is an "EXPLAIN" performed (though we are using AWS RDS Performance Insights, ProxySQL, and New Relic for monitoring). I did notice that in the past week our number of DB connections, previously baselined around 10, has climbed closer to 90. [screenshot of the connection count]

Any ideas on where I should be digging to find the cause of these waits and EXPLAIN statements? And whether they could explain the large number of open connections?

Match two distinct values from a single column in a joined table

I have three tables:

    class:          class_id, class_name
    student:        student_id, student_name
    class_schedule: class_id, student_id

I want to select all the classes where studentA and studentB are in the same class, using the student names. I can use a subquery to pull all the classes studentA is in, and then from that subset pull the classes studentB is in; that works, but it is terribly inefficient. I have tried a number of solutions, including joining the same table twice, once for each value I want to find, but I always get an empty result set.

For testing and prototyping purposes I am using SQLite, but the data will reside on DB2 long term.
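Since SQLite is what I'm prototyping on, here is a minimal sqlite3 sketch of the double-join shape with a GROUP BY ... HAVING filter (the schema follows the tables above; the names studentA/studentB and the sample rows are placeholders):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE class (class_id INTEGER, class_name TEXT);
        CREATE TABLE student (student_id INTEGER, student_name TEXT);
        CREATE TABLE class_schedule (class_id INTEGER, student_id INTEGER);
        INSERT INTO class VALUES (1, 'Math'), (2, 'Art');
        INSERT INTO student VALUES (1, 'studentA'), (2, 'studentB');
        INSERT INTO class_schedule VALUES (1, 1), (1, 2), (2, 1);
    """)

    # Keep only classes in which both names appear.
    rows = conn.execute("""
        SELECT c.class_id, c.class_name
        FROM class c
        JOIN class_schedule cs ON cs.class_id = c.class_id
        JOIN student s ON s.student_id = cs.student_id
        WHERE s.student_name IN ('studentA', 'studentB')
        GROUP BY c.class_id, c.class_name
        HAVING COUNT(DISTINCT s.student_name) = 2
    """).fetchall()
    print(rows)  # [(1, 'Math')]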

Are there heuristics in CSP to help avoid redundancy while checking constraints for inconsistent values?

I'm a complete beginner, so please forgive my ignorance. Trying to learn about CSP online, I noticed a lot of focus on search methods and heuristics that tell you which variable to expand next (e.g. most constrained variable) and which value to try first (e.g. least constraining value), but I've yet to see heuristics that relate to the ordering of constraints.

Since I'm doing everything by hand, I notice a lot of redundancy when eliminating values from variable domains. How do you go about checking for violated constraints in a way that is efficient? Say constraint A will have me eliminate the odd numbers from 1 to 1000, and constraint B will have me wipe out everything above 250. Intuitively, it feels like order matters: I would waste my time cherry-picking even numbers above 250 only to find out later that anything above 250 was not consistent in the first place. A small worked sketch of this is below.

I apologize for lacking the proper terminology; my understanding is mostly intuitive. I hope it makes sense. Thanks in advance! I'm mostly looking to acquire a conceptual understanding of selected topics in computer science, so if you have book recommendations or any resource that would be appropriate for an interested layman, please don't hesitate!
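To make that intuition concrete, here is a tiny hypothetical sketch (Python): the same domain is filtered by the two example constraints in both orders, counting how many candidate values each pass has to examine.

    # Filter a domain through constraints in order, counting work done.
    def filter_counting(domain, constraints):
        examined = 0
        for ok in constraints:
            examined += len(domain)               # values touched this pass
            domain = [v for v in domain if ok(v)]
        return domain, examined

    domain = list(range(1, 1001))
    is_even     = lambda v: v % 2 == 0  # constraint A: eliminate odd numbers
    at_most_250 = lambda v: v <= 250    # constraint B: wipe out values above 250

    d1, work_ab = filter_counting(domain, [is_even, at_most_250])
    d2, work_ba = filter_counting(domain, [at_most_250, is_even])
    print(d1 == d2, work_ab, work_ba)   # True 1500 1250

The final domain is identical either way; applying the more aggressive (or cheaper) constraint first simply reduces the number of checks, which is the idea behind ordering constraints by expected pruning power.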

List of tuples without duplicates & repeated values

Given some number n and a set of values vals, I want to obtain all the tuples of size n over the values in vals, but without tuples that duplicate one another up to ordering. For example:

    n = 2, vals = {0,1}   --> { {0,0}, {0,1}, {1,1} }
    n = 2, vals = {0,1,2} --> { {0,0}, {0,1}, {0,2}, {1,1}, {1,2}, {2,2} }
    n = 3, vals = {0,1}   --> { {0,0,0}, {0,0,1}, {0,1,1}, {1,1,1} }
    n = 3, vals = {0,1,2} --> { {0,0,0}, {0,0,1}, {0,0,2}, {0,1,1}, {0,1,2}, {0,2,2}, {1,1,1}, {1,1,2}, {1,2,2}, {2,2,2} }

I’ve tried the following commands:

    n    = 2;
    vals = {0, 1};
    Tuples[vals, {n}]        (* gives { {0, 0}, {0, 1}, {1, 0}, {1, 1} } *)
    Permutations[vals, {n}]  (* gives { {0, 1}, {1, 0} } *)
    Subsets[vals, {n}]       (* gives { {0, 1} } *)

Permutations and Subsets are incomplete. Tuples contains all the right combinations, but also contains duplicates like {0, 1} and {1, 0}. Since I do not care about order, I’d like to remove those.

How do I achieve the behavior of Tuples, but without duplicates?
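For what it's worth, the behavior I'm describing matches what Python's itertools calls combinations with replacement; a quick sketch there, purely to pin down the expected output:

    from itertools import combinations_with_replacement

    # Order-insensitive tuples of length n over vals.
    def tuples_no_duplicates(vals, n):
        return list(combinations_with_replacement(sorted(vals), n))

    print(tuples_no_duplicates({0, 1}, 2))
    # [(0, 0), (0, 1), (1, 1)]
    print(tuples_no_duplicates({0, 1, 2}, 3))
    # [(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 1, 1), (0, 1, 2),
    #  (0, 2, 2), (1, 1, 1), (1, 1, 2), (1, 2, 2), (2, 2, 2)]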

I want to count the number of occurrences of certain values in my TagName column

I want to count the number of occurrences of different sensor rows in SQL, but I seem to be doing it wrong, and apparently I am not visualizing it correctly.

If I were doing this in pseudocode in a C-style language, I would do it like this:

    from collections import Counter

    # taglist: the list of TagName values
    # print every distinct tag together with how many times it occurs
    for tag, count in Counter(taglist).items():
        print(tag, count)
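Sketched with Python's sqlite3 and some made-up rows (my real table is [A2ALMDB].[dbo].[AlarmMaster] on SQL Server), I believe that pseudocode corresponds to a GROUP BY:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE AlarmMaster (TagName TEXT, GroupName TEXT);
        INSERT INTO AlarmMaster VALUES
            ('Sensor1', 'Sensors'), ('Sensor1', 'Sensors'),
            ('Sensor2', 'Sensors');
    """)

    # One row per distinct TagName, with its number of occurrences.
    rows = conn.execute("""
        SELECT TagName, COUNT(*) AS occurrences
        FROM AlarmMaster
        WHERE TagName LIKE '%Sensor%' OR GroupName LIKE '%Sensors%'
        GROUP BY TagName
    """).fetchall()
    print(rows)  # [('Sensor1', 2), ('Sensor2', 1)]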

I have been trying this:

    SELECT Count(a)
    FROM (
        SELECT DISTINCT [TagName]
        FROM [A2ALMDB].[dbo].[AlarmMaster]
        WHERE (TagName LIKE '%Sensor%' OR GroupName LIKE '%Sensors%')
    ) a

It returns '66', but I want it to return the count of each of the distinct TagNames returned by the inner SELECT.

Can anyone help me with how I should get the individual counts of my different sensor occurrences instead of a single count of all the distinct TagNames?

Thanks for the help!

Simplifying common data values (parts of a string) in an array by merging identical attributes (data values)

I have arrays composed of string elements, as in the following two examples. Each line below is one string element of the array.

example1:

    aaa bbb cc1 dd1
    aaa bbb cc1 dd2
    aaa bbb cc2 dd1
    aaa bbb cc2 dd2
    aaa bbb cc3 dd1
    aaa bbb cc3 dd2

example2:

    bbb rrr nnn ttt ooo eee ddd fff contr
    bbb sss nnn ttt ppp eee contr
    bbb sss nnn aaa ooo eee ddd fff contr
    bbb rrr nnn ttt yyy eee ddd fff contr

I want to simplify and remove "redundant" lines by merging duplicate attributes into a single line. The results should be:

example1:

    aaa bbb cc1 dd1/dd2
    aaa bbb cc2 dd1/dd2
    aaa bbb cc3 dd1/dd2

example2:

    bbb rrr nnn ttt ooo/yyy eee ddd fff contr
    bbb sss nnn ttt ppp eee contr
    bbb sss nnn aaa ooo eee ddd fff contr

(The results are an array as well.)

My current approach goes like this: remove the first column, then compare all elements; if equal lines are found, merge them. Then remove the second column and compare again, and so on; a rough sketch of this idea follows below. It becomes somewhat complicated, though, as not all lines have the same number of space-separated data values.
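Here is a rough Python sketch of that per-column idea (names made up; this is just to make the logic concrete). Lines too short to have the given column are left untouched:

    # Merge lines that are identical except in column `col`, joining the
    # differing values with "/".
    def merge_on_column(lines, col):
        groups, order = {}, []
        for line in lines:
            parts = line.split()
            key = tuple(parts[:col] + parts[col + 1:]) if col < len(parts) else (line,)
            if key not in groups:
                groups[key] = parts
                order.append(key)
            elif col < len(parts) and parts[col] not in groups[key][col].split("/"):
                groups[key][col] += "/" + parts[col]
        return [" ".join(groups[k]) for k in order]

    example1 = ["aaa bbb cc1 dd1", "aaa bbb cc1 dd2",
                "aaa bbb cc2 dd1", "aaa bbb cc2 dd2",
                "aaa bbb cc3 dd1", "aaa bbb cc3 dd2"]
    print(merge_on_column(example1, 3))
    # ['aaa bbb cc1 dd1/dd2', 'aaa bbb cc2 dd1/dd2', 'aaa bbb cc3 dd1/dd2']

Running this once per column index would approximate the full merge, but as noted, the varying line lengths are where it gets complicated.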

I’m stuck here. Any help would be welcome.

How to calculate probability of values under Weibull distribution?

I have genomic data showing interactions between genomic regions, and I would like to understand which interactions are statistically significant.

The dataset looks like:

    chr  start1  end1  start2  end2  normalized count
    1    500     1000  2000    3000  1.5
    1    500     1000  4500    5000  3.2
    1    2500    3500  1000    2000  4

So I selected a random subset of the data (as background) and fitted the normalized counts to a Weibull distribution using the fitdistrplus R package, estimating parameters such as scale and shape for those sets of data (PD = fitdist(data$normalized_count, 'weibull')).

Now I would like to calculate the probability of each observation (like a p-value for each data point) under the fitted Weibull distribution.

But I do not know how to do that. Can I calculate the mean of the distribution, then compute a Z-statistic for each observation and convert it to a p-value?

For example:

The random background was fitted to a Weibull with the parameters below:

    scale: 0.12
    shape: 0.23
    mean:  20
    var:   12

How can I calculate the probability of a set of data points like (1.2, 2.3, 4.5, 5.0, 6.1)?
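For a right-tail probability there is no need for a Z-statistic: the Weibull survival function gives it directly, P(X > x) = exp(-(x/scale)^shape). In R that is pweibull(x, shape, scale, lower.tail = FALSE); the same idea sketched in Python/SciPy with the (hypothetical) parameters above:

    from scipy.stats import weibull_min

    shape, scale = 0.23, 0.12          # fitted parameters from the example
    data = [1.2, 2.3, 4.5, 5.0, 6.1]

    # sf(x) = 1 - cdf(x): probability of seeing a value at least this large
    pvals = weibull_min.sf(data, c=shape, scale=scale)
    print(pvals)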

Compute duration of an event based on consecutive values in another column

I need to compute the sum of durations of multiple events based on consecutive values in another column.

                ts              | w
    ----------------------------+---
    2020-07-27 15:40:04.045+00  | t
    2020-07-27 15:41:04.045+00  | t
    2020-07-27 15:41:14.045+00  | f
    2020-07-27 15:42:14.045+00  | t
    2020-07-27 15:43:14.045+00  | t

An event is considered active as long as column w holds consecutive true values.

The duration of the first event would be 60 seconds ('2020-07-27 15:41:04.045+00' - '2020-07-27 15:40:04.045+00'). The second event has the same duration, so the sum of both would be 120 seconds.
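In procedural terms, the per-row logic is just this (a Python sketch over (ts, w) tuples, included only to pin down the semantics; the real computation should stay in SQL):

    from datetime import datetime, timedelta

    # Sum the gaps between consecutive rows where the previous and the
    # current row are both true; a false row closes the running event.
    def total_active(rows):
        total, prev_ts = timedelta(), None
        for ts, w in rows:
            if w and prev_ts is not None:
                total += ts - prev_ts
            prev_ts = ts if w else None
        return total

    rows = [
        (datetime(2020, 7, 27, 15, 40, 4), True),
        (datetime(2020, 7, 27, 15, 41, 4), True),
        (datetime(2020, 7, 27, 15, 41, 14), False),
        (datetime(2020, 7, 27, 15, 42, 14), True),
        (datetime(2020, 7, 27, 15, 43, 14), True),
    ]
    print(total_active(rows).total_seconds())  # 120.0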

What would be the best/most performant approach to computing these values? The longest time range we'll be looking at will probably be half a year, involving about 30 million rows.

I wrote a custom aggregate function that computes the duration, but it takes about 16 seconds for only 1.5 million rows:

    Aggregate  (cost=444090.03..444090.04 rows=1 width=4) (actual time=16290.826..16290.828 rows=1 loops=1)
      ->  Seq Scan on discriminator0  (cost=0.00..57289.03 rows=1547203 width=9) (actual time=0.016..1723.178 rows=1547229 loops=1)
    Planning Time: 0.196 ms
    JIT:
      Functions: 3
      Options: Inlining false, Optimization false, Expressions true, Deforming true
      Timing: Generation 0.889 ms, Inlining 0.000 ms, Optimization 0.508 ms, Emission 6.472 ms, Total 7.870 ms
    Execution Time: 16291.836 ms

I’m new to SQL and basically just got this working through trial and error, so I’m sure there is room for improvement. Maybe even a whole different approach. Here’s a fiddle with the aggregate function.

I'm not sure if I should include the code here because it's quite long.