## Laser Plane Estimation for Laser-Camera system?

I have to set up a system consisting of a laser line/plane projector and a web camera in order to localize the 3D position of the laser in the camera image. I have read several resources, but the idea is still not quite concrete in my head.

My intuition is this: since we have a setup of a laser projector and a camera, and we want to find the 3D position of a laser point seen in the image, we have to find the "correct" laser plane that the camera's viewing rays intersect. I am confused about how we find the relative pose of this plane with respect to the camera, and how we can then use it to recover the 3D coordinates.
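Once the plane's pose is known (typically from a one-time calibration, e.g. with a checkerboard), recovering the 3D point for each laser pixel is a ray-plane intersection. A minimal sketch assuming a pinhole camera model; every numeric value below is a placeholder, not a calibrated parameter:

```python
# Minimal ray-plane intersection sketch. All numeric values below are
# placeholder assumptions, not calibrated parameters.
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0   # assumed pinhole intrinsics (pixels)
N_PLANE = (0.0, -1.0, 0.2)                    # assumed laser-plane normal, camera frame
D_PLANE = 0.5                                 # plane offset: n . X = d

def pixel_to_3d(u, v):
    """Back-project pixel (u, v) into a viewing ray, then intersect it with the plane."""
    ray = ((u - CX) / FX, (v - CY) / FY, 1.0)          # ray direction through the pixel
    denom = sum(n * r for n, r in zip(N_PLANE, ray))   # n . ray
    if abs(denom) < 1e-9:
        raise ValueError("viewing ray is parallel to the laser plane")
    t = D_PLANE / denom                                # n . (t * ray) = d  =>  t = d / (n . ray)
    return tuple(t * r for r in ray)                   # 3D point in the camera frame
```

The returned point satisfies the plane equation by construction, which is exactly why a calibrated plane resolves the depth ambiguity of a single camera.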


## Simple question about epsilon and estimation Turing machines

I am getting really confused by this. For an optimization problem, I reached a point where I had to calculate a fairly simple limit: $$\lim_{n \rightarrow \infty} \left(3-\frac{7}{n}\right).$$

Now I used $$3-\epsilon$$ and I am trying to show that there cannot be any $$\epsilon>0$$ such that the approximation ratio of the algorithm is $$3-\epsilon$$, because there exists a "bigger estimate". This is the part I am not sure about: what is the correct direction of the inequality, $$3-\frac{7}{n} > 3-\epsilon$$ or the opposite? I am trying to show that the approximation ratio gets arbitrarily close to 3.

I think what I wrote is the correct direction, but I am not sure. I would appreciate knowing which is correct in this case. Thanks.
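For what it is worth, the direction can be checked by solving the inequality directly (a short derivation from the expression above):

```latex
3 - \frac{7}{n} > 3 - \epsilon
\;\iff\; \frac{7}{n} < \epsilon
\;\iff\; n > \frac{7}{\epsilon}
```

So for every fixed $$\epsilon > 0$$, the inequality $$3-\frac{7}{n} > 3-\epsilon$$ holds for all $$n > \frac{7}{\epsilon}$$, which is the direction needed to show the ratio approaches 3 from below.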


## Estimation of vertex cover within a constant bound

I would really appreciate your assistance with this:

Consider the following function: $$f(G,v) = \text{the size of the minimal vertex cover that } v \text{ belongs to}.$$

The function gets an undirected graph G and a vertex v and returns a natural number, which is the size of the smallest vertex cover in G that v belongs to.

Problem: prove that if it is possible to estimate $$f$$ within a constant bound of 5 in polynomial time, then P=NP. That is, if it is possible to compute in polynomial time a function $$g(G,v)$$ that is guaranteed to satisfy $$f(G,v)-5 \leq g(G,v) \leq f(G,v) + 5$$, then P=NP.

I don't understand why this holds: why, if $$g(G,v)$$ can be computed in polynomial time and $$f(G,v)-5 \leq g(G,v) \leq f(G,v) + 5$$, does P=NP follow?
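For intuition, here is a sketch of one standard amplification idea (a hedged sketch, not necessarily the intended proof): build $$G'$$ from 11 disjoint copies of $$G$$, identifying the 11 copies of $$v$$ into a single vertex $$v^*$$. Since the copies share only $$v^*$$, and $$v^*$$ covers all edges incident to $$v$$ in every copy, minimum covers add up:

```latex
f(G', v^*) \;=\; 1 + 11\,\bigl(f(G,v) - 1\bigr) \;=\; 11\,f(G,v) - 10
```

An additive-5 estimate then determines $$f(G,v)$$ exactly: if $$\lvert g(G',v^*) - f(G',v^*) \rvert \leq 5$$, then $$\frac{g(G',v^*)+10}{11}$$ is within $$\frac{5}{11} < \frac{1}{2}$$ of $$f(G,v)$$, so rounding recovers $$f(G,v)$$ exactly, and computing $$f$$ exactly in polynomial time would solve an NP-hard vertex cover problem.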


## Criteria-based estimation of a task's due date

Basically, we have a ticket system. Each day we get multiple tickets with different categorizations (Question, Change, Error) and importance levels (High, Mid, Low) for different customers (A, B, C). My job is to create some kind of system that determines the due dates of these "tasks" based on:

• The state of the previous ones, completed or not.
• The availability of the developers.
• Criteria over the previously mentioned attributes (categorization, importance and customer type).

How can one achieve that?
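One simple starting point is a weight table over the three attributes, with backlog and developer availability folded in as an extra delay. A minimal sketch in Python; every weight, name, and multiplier below is an assumption chosen for illustration, not a recommended policy:

```python
from datetime import date, timedelta

# Illustrative weight tables. Every number here is an assumption for the
# sketch, not a recommended policy.
CATEGORY_DAYS = {"Error": 1, "Question": 3, "Change": 5}    # base lead time in days
IMPORTANCE_FACTOR = {"High": 0.5, "Mid": 1.0, "Low": 2.0}   # urgency multiplier
CUSTOMER_FACTOR = {"A": 0.8, "B": 1.0, "C": 1.2}            # customer-tier multiplier

def due_date(created, category, importance, customer, backlog_days=0):
    """Turn the three ticket attributes into a lead time, then push the due
    date back by a backlog/availability delay supplied by the caller."""
    lead = (CATEGORY_DAYS[category]
            * IMPORTANCE_FACTOR[importance]
            * CUSTOMER_FACTOR[customer])
    return created + timedelta(days=round(lead) + backlog_days)
```

The first two bullets (state of previous tasks, developer availability) enter here only through `backlog_days`; a fuller system would compute that value from the current queue state rather than take it as an argument.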

## Starting with SQL Server 2019, does compatibility level no longer influence cardinality estimation?

In SQL Server 2017 & prior versions, if you wanted to get cardinality estimations that matched a prior version of SQL Server, you could set a database’s compatibility level to an earlier version.

For example, in SQL Server 2017, if you wanted execution plans whose estimates matched SQL Server 2012, you could set the compatibility level to 110 (SQL 2012), and get execution plan estimates that matched SQL Server 2012.

This is reinforced by the documentation, which states:

> Changes to the Cardinality Estimator released on SQL Server and Azure SQL Database are enabled only in the default compatibility level of a new Database Engine version, but not on previous compatibility levels.
>
> For example, when SQL Server 2016 (13.x) was released, changes to the cardinality estimation process were available only for databases using SQL Server 2016 (13.x) default compatibility level (130). Previous compatibility levels retained the cardinality estimation behavior that was available before SQL Server 2016 (13.x).
>
> Later, when SQL Server 2017 (14.x) was released, newer changes to the cardinality estimation process were available only for databases using SQL Server 2017 (14.x) default compatibility level (140). Database Compatibility Level 130 retained the SQL Server 2016 (13.x) cardinality estimation behavior.

However, in SQL Server 2019, that doesn’t seem to be the case. If I take the Stack Overflow 2010 database, and run this query:

    CREATE INDEX IX_LastAccessDate_Id ON dbo.Users(LastAccessDate, Id);
    GO
    ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
    GO
    SELECT LastAccessDate, Id, DisplayName, Age
    FROM dbo.Users
    WHERE LastAccessDate > '2018-09-02 04:00'
    ORDER BY LastAccessDate;

I get an execution plan with 1,552 rows estimated coming out of the index seek operator. But if I take the same database and run the same query on SQL Server 2019, it estimates a different number of rows coming out of the index seek, even though the database is still at compatibility level 140. And if I set the compatibility level to 150 (SQL Server 2019), I get that same estimate of 1,566 rows.

So, in summary: starting with SQL Server 2019, does compatibility level no longer influence cardinality estimation the way it did in SQL Server 2014-2017? Or is this a bug?

## Sample complexity of mean estimation using empirical estimator and median-of-means estimator?

Given a random variable $$X$$ with unknown mean $$\mu$$ and variance $$\sigma^2$$, we want to produce an estimate $$\hat{\mu}$$ from $$n$$ i.i.d. samples of $$X$$ such that $$\lvert \hat{\mu} - \mu \rvert \leq \epsilon\sigma$$ with probability at least $$1-\delta$$.

Empirical estimator: why are $$O(\epsilon^{-2}\cdot\delta^{-1})$$ samples sufficient? Why are $$\Omega(\epsilon^{-2}\cdot\delta^{-1})$$ samples necessary?

Median-of-means estimator: why are $$O(\epsilon^{-2}\cdot\log\frac{1}{\delta})$$ samples sufficient?
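For reference, the median-of-means estimator itself is short. A minimal sketch (the equal-size grouping below is one common choice, not the only possible one):

```python
import statistics

def median_of_means(samples, k):
    """Median-of-means: split the samples into k equal groups, average each
    group, and return the median of the k group means."""
    m = len(samples) // k          # group size; any leftover samples are dropped
    if m == 0:
        raise ValueError("need at least k samples")
    group_means = [statistics.fmean(samples[i * m:(i + 1) * m]) for i in range(k)]
    return statistics.median(group_means)
```

Each group mean concentrates around $$\mu$$ with constant probability by Chebyshev; taking the median over $$k = O(\log\frac{1}{\delta})$$ groups amplifies that constant success probability, which is where the logarithmic dependence on $$\delta$$ comes from.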


## Big O notation - estimation of run time

I am running very computationally intensive tasks and wish to tune the parameters according to how long they take.

The program I am running is PLINK; for those who don't know, it is used for genotype data.

The program is said to run in $$O(n \cdot m^2)$$ time.

I have the run times for two runs with different values of m and a constant n: 3 hours and 648 hours.

From this I wish to estimate the run time for other values of m in a way that respects the $$O(n \cdot m^2)$$ relationship.

Can anybody provide some insight into methods for estimating run time with n held constant, and into running tests with different parameter values in order to achieve an optimal trade-off between run time and accuracy of results?
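Since n is constant across runs, the model collapses to $$T \approx c \cdot m^2$$, and a single observation determines the constant c. A minimal sketch; the observation values below are made up for illustration:

```python
# Sketch: calibrate T = c * m^2 (n held constant) from one run, then predict.
# The observation values used here are made up for illustration.

def fit_constant(m_obs, t_obs):
    """Solve t_obs = c * m_obs**2 for the constant c."""
    return t_obs / (m_obs ** 2)

def predict_runtime(c, m):
    """Predicted run time for a new m under the O(n * m^2) model, n fixed."""
    return c * m ** 2
```

The two recorded runs also give a sanity check on the model: under $$T \propto m^2$$, the ratio of the two m values should be about $$\sqrt{648/3} = \sqrt{216} \approx 14.7$$; if the actual ratio disagrees badly, the quadratic model does not hold in this regime.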



## Magento 2: How to remove the estimation call on the state-change action on the Cart page?

I need to remove the estimation call triggered by the state-change action on the Cart page. Currently, when a user changes the region on the cart, an estimation call executes; when they then add a zip code, another estimation call fires, so the shipping calculation takes too long.

How can I achieve this?


## Estimation of the number of solutions by Counting

This is a question from a quantum computation textbook.

Consider a classical algorithm for counting the number of solutions to a problem. The algorithm samples uniformly and independently $$k$$ times from the search space of size $$N$$, using an oracle that outputs 1 or 0, and lets $$X_1, X_2, X_3, \ldots, X_k$$ be the results of the oracle calls. So $$X_j=1$$ if the $$j$$th oracle call found a solution and $$X_j=0$$ otherwise. The algorithm estimates the number of solutions $$S$$ as:

$$S=N \cdot \sum_{j}\frac{X_j}{k}$$

Assume the number of solutions is $$M$$, which is not known in advance. The standard deviation of $$S$$ is stated to be:

$$\Delta S=\sqrt{\frac{M(N-M)}{k}}$$

The question is:
Prove that to obtain a probability at least $$\frac{3}{4}$$ of estimating $$M$$ correctly to within an accuracy $$\sqrt{M}$$ for all values of $$M$$, we must have $$k=\Omega(N)$$.

I know how to get the 2nd equation from the 1st: moving $$N$$ and $$k$$ to the left side, $$kS/N$$ follows a binomial distribution $$B(k,\frac{M}{N})$$; the variance of the binomial distribution plus some algebraic manipulation then leads to the 2nd equation. But I am clueless about proving $$k=\Omega(N)$$. The only thing I have tried writing is:

$$P\Big(\sqrt{\frac{M(N-M)}{k}}\leq \sqrt{M}\Big)\geq \frac{3}{4}$$

Can someone help me with this?
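As a numeric sanity check on the standard-deviation formula, the binomial derivation can be spelled out directly. A minimal sketch:

```python
def estimator_sd(N, M, k):
    """Std dev of S = (N / k) * sum_j X_j with X_j ~ Bernoulli(M / N) i.i.d."""
    p = M / N                         # per-sample success probability
    var_count = k * p * (1 - p)       # variance of the Binomial(k, p) count sum_j X_j
    var_S = (N / k) ** 2 * var_count  # scaling by N/k scales the variance by (N/k)^2
    return var_S ** 0.5               # equals sqrt(M * (N - M) / k)
```

The returned value matches $$\Delta S=\sqrt{\frac{M(N-M)}{k}}$$ term for term after simplifying $$\left(\frac{N}{k}\right)^2 \cdot k \cdot \frac{M}{N}\left(1-\frac{M}{N}\right)$$.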
