## Rackoff’s coverability bounds when the addition vectors of the VAS and the target vector are from {-1, 0, +1}?

In “The covering and boundedness problems for vector addition systems”, Rackoff considers a VAS $$(v,A)$$ of dimension $$k$$ and size $$n$$ and derives an upper bound of $$2^{2^{(\log_2 3)n(\log_2 n)}}$$ on the length of nonnegative covering executions.

Let us consider the case $$A\subseteq \{-1,0,+1\}^k$$, $$v\in\mathbb{N}_{\geqslant 0}^k$$ and the vector to cover being from $$\{0,1\}^k$$.

What would be a good upper bound on the length of covering nonnegative executions in terms of $$k$$? Using Rackoff’s Thm. 3.5, $$\lvert A\rvert \leqslant 3^k$$ and $$n=\mathcal{O}(3^k+\|v\|_1)$$ (where $$\|\cdot\|_1$$ returns the 1-norm of a vector) would yield an upper bound of $$2^{2^{\mathcal{O}(3^k+\|v\|_1)\log_2(3^k+\|v\|_1)}}$$. We need to remove the dependency of the bound on $$v$$ and tighten the bound.

It seems to me that a better bound would result from bounding $$f(k)$$ in terms of $$k$$ (rather than of $$n$$), where $$f(0)=1$$ and $$f(i+1)\leqslant (2^n f(i))^{i+1} + f(i)\qquad\text{for } i\in\{0,\dots,k-1\}.$$ Any ideas on how to bound $$f(k)$$ by an expression in $$k$$ (where $$n$$ is derived from our setup)? If I interpret the proof of Thm. 3.5 correctly, we could probably get something like $$2^{(3k)^k}$$ as an upper bound on the length of covering nonnegative executions. Can you confirm or reject this, or provide further ideas or literature citations?
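My own attempt at unrolling the recurrence (a sketch only, assuming I am reading the recursion correctly): write $$g(i):=\log_2 f(i)$$. Since $$f(i)\geqslant 1$$, the additive $$f(i)$$ term at most doubles the dominant term, so $$g(i+1)\leqslant (i+1)(n+g(i))+1\leqslant (i+1)(n+1)+(i+1)\,g(i).$$ Unrolling from $$g(0)=0$$ gives $$g(k)\leqslant (n+1)\sum_{j=1}^{k}\frac{k!}{(k-j)!}\leqslant e\,(n+1)\,k!,$$ i.e. $$f(k)\leqslant 2^{e(n+1)k!}=2^{n\cdot 2^{\mathcal{O}(k\log_2 k)}}.$$ This still carries the dependency on $$n$$; only if the factor $$2^n$$ in the recurrence could be replaced by a quantity polynomial in $$k$$ in the unit-update setting (which the restricted alphabet suggests, but which would need proof) would the same unrolling yield a bound of the shape $$2^{k^{\mathcal{O}(k)}}$$, consistent with the conjectured $$2^{(3k)^k}$$.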

## Disjoint Set Union-Find Special Case

I was reading about weighted quick-union with path compression. I have a clear understanding of why union and find are effectively $$O(1)$$ amortized (inverse-Ackermann, to be precise), but I do not understand how to address this special case.

Suppose we start with $$x$$ items (all disjoint) and perform a sequence of $$m$$ operations such that all calls to find come after all calls to union. I am trying to determine what kind of data structure could be used, because in my previous understanding of disjoint sets, the union operation always depends on the find operation.

Despite having this information about the order of operations, I do not quite see how it is useful, or how it can be used to achieve an amortized analysis of $$O(1)$$. I am also not sure how to approach the union operation without relying on calls to find.
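For concreteness, here is a minimal Python sketch (my own illustration, not a scheme from the literature): union still locates roots internally as usual, but because every find comes after every union, a single O(x) flattening pass between the two phases makes each later find a single lookup.

```python
class DSU:
    """Weighted quick-union with path compression, plus a one-time
    flatten() to exploit the union-phase / find-phase split."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, a):
        # locate the root, then compress the path behind us
        root = a
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[a] != root:
            self.parent[a], a = root, self.parent[a]
        return root

    def union(self, a, b):
        # union-by-size; the internal find calls are unavoidable here
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

    def flatten(self):
        # run once after the last union: afterwards parent[v] is the
        # root for every v, so every subsequent find is O(1) worst case
        for v in range(len(self.parent)):
            self.parent[v] = self.find(v)
```

After `flatten()`, the total cost is O(x) for the pass plus O(1) per find, which is one way the phase ordering can be cashed in.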

## Do you know case studies of applications created by converting Excel sheets to web apps?

I want to prepare for my new project. My client currently works in Excel and wants a web app instead. I’m doing research to get inspired, but I can’t find cases like that. Do you know of any case studies where Excel sheets were converted into a web app?

## What is the difference between a Use Case and a User Journey? Is there any?

I was wondering what the difference is between a use case and a user journey. To me, they seem to serve the same purpose.

## Case inside case in an Oracle query

Can anyone help me?

I want to write a query using CASE WHEN with the following logic: on Monday through Friday, when the current time is before 03:00 PM, the query should select transactions from the previous business day’s 03:00 PM (Friday’s, if today is Monday) until today’s 03:00 PM.

    SELECT SUBSTR (A.TXT_TXN_DESC, 7, 15)               AS billing_no,
           TO_CHAR (dat_txn, 'dd/mm/yyyy hh24:mi:ss')   AS waktu,
           ''                                           AS note
      FROM ncbshost.v_ch_nobook A
     WHERE cod_drcr = 'C'
       AND (CASE
               WHEN (SELECT TO_CHAR (SYSDATE, 'dd/mm/yyyy hh24:mi:ss') FROM DUAL) <=
                    TO_CHAR (SYSDATE, 'dd/mm/yyyy') || ' 15:00:00'
               THEN
                   dat_txn BETWEEN (CASE
                                       WHEN (SELECT TRIM (TO_CHAR (SYSDATE, 'DAY')) FROM DUAL) = 'MONDAY'
                                       THEN TO_DATE (TO_CHAR (SYSDATE - 3, 'dd/mm/yyyy') || ' 15:00:00',
                                                     'dd/mm/yyyy hh24:mi:ss')
                                       ELSE TO_DATE (TO_CHAR (SYSDATE - 1, 'dd/mm/yyyy') || ' 15:00:00',
                                                     'dd/mm/yyyy hh24:mi:ss')
                                    END)
                               AND TO_DATE (TO_CHAR (SYSDATE, 'dd/mm/yyyy') || ' 15:00:00',
                                            'dd/mm/yyyy hh24:mi:ss')
               ELSE
                   a.dat_txn BETWEEN TO_DATE (TO_CHAR (SYSDATE, 'dd/mm/yyyy') || ' 15:00:00',
                                              'dd/mm/yyyy hh24:mi:ss')
                                 AND TO_DATE (TO_CHAR (SYSDATE, 'dd/mm/yyyy') || ' 23:59:59',
                                              'dd/mm/yyyy hh24:mi:ss')
            END)
     ORDER BY SUBSTR (A.TXT_TXN_DESC, 7, 15) ASC

## Worst-case running time of the minimum vertex cover approximation algorithm

Consider this factor-$$2$$ minimum vertex cover approximation algorithm:

Repeat while there is an edge:

Arbitrarily pick an uncovered edge $$e=(u,v)$$ and add $$u$$ and $$v$$ to the solution. Delete $$u$$ and $$v$$ from the graph. Finally output the candidate cover.
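For concreteness, a minimal Python rendering of the algorithm above (my own sketch; the "delete $$u$$ and $$v$$" step is realized implicitly by skipping edges that already have a covered endpoint):

```python
def vertex_cover_2approx(edges):
    """Factor-2 approximation: repeatedly take both endpoints
    of an arbitrary uncovered edge."""
    cover = set()
    for u, v in edges:
        # an edge counts as "deleted" once either endpoint is covered
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover
```

With an edge list, each edge is inspected exactly once, so this rendering makes a single pass over the $$m$$ edges; no explicit delete operations are needed.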

I want to find the worst-case running time of this algorithm. Since a fully connected graph has $$O(n^2)$$ edges, the loop will run at most $$O(n^2)$$ times. What I am not sure about is the maximal number of delete operations, or whether, for fewer than $$O(n^2)$$ edges, there is some scenario with a large number of delete operations.

Any insights appreciated.

## Generalization of code is slower than particular case

I wrote the following Mathematica module:

    QNlalternative[NN_, l_, f_] := Module[{s, wz, w, z, j, lvec},
       s = 0;
       Do[
          wz = Table[weightsNodesQ1l@lvec@i, {i, NN}];
          w = Table[wz[[i]][[1, All]], {i, NN}];
          z = Table[wz[[i]][[2, All]], {i, NN}];
          s = s + Function[
                  Sum[(f @@ (Table[z[[i]][[j[i]]], {i, NN}])) *
                      (Times @@ (Table[w[[i]][[j[i]]], {i, NN}])), ##]
                ] @@ Table[{j[k], 2^lvec[k] + 1}, {k, NN}],
          ##
       ] & @@ Table[{lvec[i], l + NN - 1 - Total@Table[lvec[k], {k, i - 1}]}, {i, NN}];
       Return[s]
    ];

This module calls another module:

    sumPrime[v_List] := First[v]/2 + Total[Delete[v, 1]]

    weightsNodes[NN_] := Module[{w, z},
       w = Table[4/NN*sumPrime[Table[1/(1 - n^2)*Cos[n*k*Pi/NN], {n, 0., NN, 2}]], {k, 0., NN}];
       z = Table[Cos[k*Pi/NN], {k, 0., NN}];
       Return[{w, z}]
    ];

    weightsNodesQ1l[l_] := weightsNodes[2^l]

This code is related to a mathematical problem I am solving (it is a modification). When I first thought about how to write the module QNlalternative, I wrote the particular case NN = 5 in a sloppy manner, with repeated statements, as follows:

    Q5l[l_, f_] := Module[{s, wzl1, wzl2, wzl3, wzl4, wzl5,
        wl1, zl1, wl2, zl2, wl3, zl3, wl4, zl4, wl5, zl5},
       s = 0;
       Do[
        wzl1 = weightsNodesQ1l[l1];
        wzl2 = weightsNodesQ1l[l2];
        wzl3 = weightsNodesQ1l[l3];
        wzl4 = weightsNodesQ1l[l4];
        wzl5 = weightsNodesQ1l[l5];
        wl1 = wzl1[[1, All]]; zl1 = wzl1[[2, All]];
        wl2 = wzl2[[1, All]]; zl2 = wzl2[[2, All]];
        wl3 = wzl3[[1, All]]; zl3 = wzl3[[2, All]];
        wl4 = wzl4[[1, All]]; zl4 = wzl4[[2, All]];
        wl5 = wzl5[[1, All]]; zl5 = wzl5[[2, All]];
        s = s + Sum[f[zl1[[i1]], zl2[[i2]], zl3[[i3]], zl4[[i4]], zl5[[i5]]]*
             wl1[[i1]]*wl2[[i2]]*wl3[[i3]]*wl4[[i4]]*wl5[[i5]],
            {i1, 1, 2^l1 + 1}, {i2, 1, 2^l2 + 1}, {i3, 1, 2^l3 + 1},
            {i4, 1, 2^l4 + 1}, {i5, 1, 2^l5 + 1}],
        {l1, 1, l + 5 - 1}, {l2, 1, l + 5 - 1 - l1},
        {l3, 1, l + 5 - 1 - l1 - l2}, {l4, 1, l + 5 - 1 - l1 - l2 - l3},
        {l5, 1, l + 5 - 1 - l1 - l2 - l3 - l4}
       ];
       Return[s]
    ];

The module Q5l is much faster than QNlalternative:

    AbsoluteTiming[QNlalternative[5, 6, Sin[Plus[##]]^2 &]]
    (* {19.4634, 6213.02} *)

    AbsoluteTiming[Q5l[6, Sin[Plus[##]]^2 &]]
    (* {6.64357, 6213.02} *)

Why is QNlalternative slower? Which step of the generalization of Q5l to an arbitrary NN is too slow?

## Should captcha or verification codes be case sensitive?

Having come across many verification codes, I find the most annoying thing is when you type all the letters of the code in lower case and it is rejected as invalid. Some codes are accepted and some are not, and the user has no idea whether they are case sensitive before actually submitting. Given the curvy, hardly recognisable characters in these codes, not making users aware of their case sensitivity can annoy them further and bring down the UX.

So what would be the ideal way to deal with this?
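One common mitigation, sketched here in Python (a hypothetical illustration, not a complete captcha system): generate codes from an alphabet that omits visually ambiguous characters, and validate case-insensitively so users never have to guess the casing.

```python
import secrets
import string

# alphabet without visually ambiguous glyphs (0/O, 1/I/L)
ALPHABET = "".join(c for c in string.ascii_uppercase + string.digits
                   if c not in "0O1IL")

def generate_code(length=6):
    """Generate a random verification code from the safe alphabet."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def check_code(expected, submitted):
    """Case-insensitive comparison, ignoring surrounding whitespace."""
    return expected.strip().upper() == submitted.strip().upper()
```

Whether case-insensitive validation is acceptable depends on how much entropy the code needs; dropping case distinctions roughly halves the per-letter alphabet, which can be compensated with one extra character.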

## Update multiple columns in a table with CASE WHEN and multiple conditions

I would like to create a query that updates multiple columns at once in a specific table, using CASE WHEN with multiple conditions and an INNER JOIN with another table. Can anyone help me? Thanks in advance.