## Queries on unbounded knapsack

Given $$n$$ types of items with integer cost $$c_{i}$$ (an unlimited number of items of each type is available), such that $$c_{i} \leq c$$ for all $$i = 1, 2, \dots, n$$, answer (a lot of) queries of the form “is there some set of items of total cost $$w$$?” in time $$O(1)$$, after some kind of precalculation in time $$O(n c \log c)$$.

I’ve got a hint: for every $$i = 0, 1, 2, \dots, c - 1$$ find the minimal $$x$$ such that there is a set of items with total cost $$x$$ and $$x \equiv i \pmod{c}$$. How do I calculate all the $$x$$’s, and how do I use them to answer the queries?

This problem is apparently related to graphs and shortest paths, but I don’t understand the connection between the actual knapsack-like problem and graphs (maybe there is some graph with paths of the desired total cost?).

Source: problem 76 on neerc.ifmo.ru wiki.
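One standard way to realize the hint connects it directly to shortest paths. Fix a modulus $$m$$; the hint uses $$m = c$$, but the same idea works with $$m = \min_i c_i$$, which is what the sketch below assumes. Build a graph whose nodes are the residues $$0, \dots, m-1$$, with an edge $$r \to (r + c_j) \bmod m$$ of weight $$c_j$$ for every item type; the shortest-path distance from node $$0$$ to node $$r$$ is then exactly the minimal total cost $$x$$ with $$x \equiv r \pmod{m}$$. A query $$w$$ is answerable iff that distance is at most $$w$$, since any shortfall is a multiple of $$m$$ and can be padded with copies of the cost-$$m$$ item. Dijkstra gives $$O(n m \log m) \subseteq O(n c \log c)$$ preprocessing and $$O(1)$$ per query. A sketch with hypothetical helper names:

```python
import heapq

def preprocess(costs):
    """dist[r] = minimal achievable total cost x with x % m == r,
    computed as shortest paths from node 0 in the residue graph
    (edge r --cost c--> (r + c) % m for each item cost c)."""
    m = min(costs)
    dist = [float("inf")] * m
    dist[0] = 0
    pq = [(0, 0)]
    while pq:
        d, r = heapq.heappop(pq)
        if d > dist[r]:
            continue  # stale queue entry
        for c in costs:
            nd, nr = d + c, (r + c) % m
            if nd < dist[nr]:
                dist[nr] = nd
                heapq.heappush(pq, (nd, nr))
    return m, dist

def query(m, dist, w):
    """w is representable iff the cheapest sum in its residue class
    is at most w (the gap is a multiple of m, padded with copies of
    the cost-m item)."""
    return dist[w % m] <= w
```

For example, with item costs $$\{3, 5\}$$ the minima per residue mod 3 are $$0, 10, 5$$, so $$w = 7$$ is not representable while $$w = 8$$ is.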

## Given a DCEL, how do you identify the unbounded face

I have constructed a DCEL using the procedure described in “How do I construct a doubly connected edge list given a set of line segments?”.

This correctly identifies all faces; however, I’m struggling to come up with a way to identify the unbounded face surrounding my graph.

So far my only idea is that by building a polygonal representation of every face, I could find the face polygon which ‘contains’ all the others, but this seems kind of messy.
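A less messy criterion than polygon containment (a sketch, assuming you can walk each face’s boundary as a cycle of vertex coordinates via the half-edge `next` pointers, and that your DCEL orients bounded faces counterclockwise): the unbounded face is the unique face whose boundary cycle is traversed clockwise, i.e. whose signed (shoelace) area is negative.

```python
def signed_area(cycle):
    """Shoelace formula; cycle is a list of (x, y) vertices in
    half-edge order around a face."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(cycle, cycle[1:] + cycle[:1]):
        s += x1 * y2 - x2 * y1
    return s / 2.0

def unbounded_face(face_cycles):
    """Index of the face traversed clockwise, i.e. with negative
    signed area -- the unbounded face under the convention that
    bounded face boundaries run counterclockwise."""
    return min(range(len(face_cycles)),
               key=lambda i: signed_area(face_cycles[i]))
```

With the opposite orientation convention, flip the sign; either way the outer face is the one whose signed area disagrees with all the others.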

## How to maximise the number of edges selected in a graph in the form of cycles of unbounded length?

Recently I started looking at the perfect cycle cover algorithm related to the kidney exchange problem, where the problem is considered NP-complete for cycles restricted to length > 2. However, if the cycles are not restricted (the length constraint is removed), it is considered solvable in polynomial time. But if a perfect cycle cover does not exist and we just need to maximise the number of edges selected in the form of cycles (so as to maximise the number of kidney exchanges), again with no restriction on cycle length, how do we do it?

I am thinking of finding all cycles in the graph, then removing one cycle’s edges at a time, finding the cycles in the residual graph, and so on until no more cycles exist, then summing up all the cycle lengths; repeating this for every starting cycle and taking the solution with the maximum number of edges selected. But since this is not a good solution, I want to know how to solve this maximisation problem.
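For intuition only, here is a brute-force sketch of the objective (a hypothetical helper, $$O(n!)$$, usable on tiny instances only). One standard polynomial-time route casts the same objective as a maximum-weight assignment: match each vertex $$v$$ to a successor $$p(v)$$, where $$p(v) = v$$ (weight 0) means $$v$$ is left out of every cycle and $$p(v) = u$$ (weight 1) requires the arc $$v \to u$$; the non-fixed points of any such permutation decompose into vertex-disjoint cycles.

```python
from itertools import permutations

def max_cycle_edges(n, arcs):
    """Maximum number of arcs usable in vertex-disjoint cycles.

    A permutation p encodes a cycle packing: p[v] == v means v is
    unused; otherwise (v, p[v]) must be an arc of the graph.  The
    non-fixed points of a permutation split into disjoint cycles of
    length >= 2, so counting valid arcs counts exchange edges.
    Brute force over all n! permutations -- for tiny n only.
    """
    arcset = set(arcs)
    best = 0
    for p in permutations(range(n)):
        used = 0
        for v in range(n):
            if p[v] == v:
                continue
            if (v, p[v]) in arcset:
                used += 1
            else:
                break
        else:
            best = max(best, used)
    return best
```

For example, with arcs $$\{(0,1), (1,2), (2,0), (3,0)\}$$ the best packing is the 3-cycle $$0 \to 1 \to 2 \to 0$$, using 3 arcs, with vertex 3 left out.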

## Unbounded derivatives of real differentiable functions

I’m interested in the way values at specific points affect the overall structure of a real differentiable function. So suppose that for the real function $$f$$

$$(1)\space f$$ is infinitely differentiable for $$x \ge 0$$
$$(2)\space f(0) = 0$$ and $$f^{(n)}(0) = 0$$ for all $$n=1,2,3,\dots$$
$$(3)$$ for some $$a>0, f(a) \neq 0$$

Then, by the Taylor remainder theorem, there exists some $$b$$ with $$0 < b < a$$ such that the sequence of derivatives $$\lvert f^{(n)}(b) \rvert$$ of $$f$$ at $$b$$ is unbounded as $$n=1,2,3,\dots$$ (otherwise $$f(a) = 0$$).
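Spelling that step out (writing $$f^{(n)}$$ for the $$n$$-th derivative): since every Taylor coefficient of $$f$$ at $$0$$ vanishes, the Lagrange form of the remainder gives, for each $$n$$, some $$\xi_n \in (0,a)$$ with

$$f(a) = \frac{f^{(n)}(\xi_n)}{n!}\,a^n, \qquad\text{hence}\qquad \bigl\lvert f^{(n)}(\xi_n)\bigr\rvert = \frac{n!\,\lvert f(a)\rvert}{a^n} \to \infty,$$

so the derivatives cannot stay uniformly bounded on $$(0,a)$$.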

My question is: must the sequence of derivatives $$f^{(n)}(x)$$, $$n=1,2,3,\dots$$, be unbounded for all $$0 < x \le b$$? Is there a relevant theorem or a counterexample?

## Unbounded Knapsack problem optimization

There is the standard recurrence

time complexity: O(N^3)

f[i][j] = max{f[i-1][j-k*w[i]]+k*v[i]} where 0 <= k*w[i] <= j 

and it could be optimized to

time complexity: O(N^2)

f[i][j] = max{f[i-1][j], f[i][j-w[i]]+v[i]} 

by replacing

f[i][j-w[i]] = max{f[i-1][j-w[i]-k*w[i]]+k*v[i]} where 0 <= k*w[i] <= j-w[i]  

The condition in the second equation can be rewritten as 0 <= (k+1)*w[i] <= j, which means that f[i][j-w[i]] + v[i] covers exactly the cases k >= 1 of the first equation, while f[i-1][j] covers the case k = 0.
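Concretely, the optimized recurrence collapses to a 1D array once you note that f[i][j] only depends on f[i-1][j] and f[i][j-w[i]]. A minimal sketch (0-based arrays; iterating j upward is what lets item i be reused):

```python
def unbounded_knapsack(weights, values, W):
    """f[j] = best value achievable with capacity j.

    Iterating j upward means f[j - weights[i]] may already include
    item i -- exactly the f[i][j - w[i]] + v[i] term of the
    optimized recurrence.
    """
    f = [0] * (W + 1)
    for w, v in zip(weights, values):
        for j in range(w, W + 1):
            f[j] = max(f[j], f[j - w] + v)
    return f[W]
```

For example, with weights [2, 3], values [3, 4], and capacity 7, the optimum is two copies of the first item plus one of the second, value 10.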

Is the above reasoning correct? Or is there something I missed?

Any help will be appreciated 😉

## Why does the unbounded $\mu$ operator preserve effective computability?

Let $$f$$ be a partial function from $$\mathbb{N}^{p+1}$$ to $$\mathbb{N}$$. The partial function $$(x_1,\dots,x_p)\mapsto \mu y[f(x_1,\dots,x_p,y)=0]$$ is defined in the following way: if there exists at least one integer $$z$$ such that $$f(x_1,\dots, x_p,z)=0$$, and if for every $$z'<z$$, $$f(x_1,\dots, x_p,z')$$ is defined, then $$\mu y[f(x_1,\dots, x_p,y)=0]$$ is equal to the least such $$z$$. In the opposite case, $$\mu y[f(x_1,\dots, x_p,y)=0]$$ is not defined.

I don’t understand why this unbounded $$\mu$$ operator preserves effective computability, in my textbook and in a note that I found online, this is mentioned as if it is a trivial fact.
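Informally, the fact is considered trivial because $$\mu$$ is implementable as an unbounded search loop: try $$y = 0, 1, 2, \dots$$ in order. By the hypothesis of the definition, every call made before the answer is reached halts, so the loop halts exactly when the $$\mu$$-value is defined. A sketch, with an ordinary Python function standing in for an effectively computable partial one:

```python
def mu(f, *xs):
    """Unbounded minimization: least y with f(*xs, y) == 0.

    The loop runs forever precisely when the mu-value is undefined,
    which is fine: the result is itself a *partial* function.
    """
    y = 0
    while True:
        if f(*xs, y) == 0:
            return y
        y += 1
```

For example, `mu(lambda x, y: 0 if y * y >= x else 1, 10)` returns 4, the least $$y$$ with $$y^2 \ge 10$$.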

I appreciate any help!

## unbounded metric spaces [on hold]

Let $$(X,d)$$ be an unbounded metric space. Is it right to say that there exist $$c\in X$$ and a sequence $$\{x_n\}_{n \in \mathbb{N}}\subset X$$ such that $$\lim_{n \to +\infty}d(x_n,c)=+\infty$$?

## Definition of unbounded approximation ratio

Suppose that there is a specific instance of a graph for which the approximation ratio of an algorithm polynomially increases with the number of nodes of the graph, say the approximation ratio is $$n^2$$. Further, suppose that the number of nodes of that bad instance can be easily increased. For example, assume that the approximation ratio $$n^2$$ is obtained when nodes are distributed over the circumference of a circle and the number of nodes can be arbitrarily large.

Then, is it correct to say that the approximation ratio is unbounded? When can an approximation ratio be called unbounded?

## Unbounded Component of the Fredholm Domain

Let $$X$$ be a Banach space and $$T \in \mathcal L(X)$$.

The authors Engel and Nagel introduce in their book “One-Parameter Semigroups for Linear Evolution Equations” on p. 248 the concept of the Fredholm domain of $$T$$ defined by $$\rho_F(T) := \{\lambda \in \mathbb C: \lambda - T \text{ is a Fredholm operator} \}.$$ On the next page the following is stated:

“Here, we only recall that the poles of $$R(\cdot, T)$$ with finite algebraic multiplicity belong to $$\rho_F(T)$$. Conversely, an element of the unbounded connected component of $$\rho_F(T)$$ either belongs to $$\rho(T)$$ or is a pole of finite algebraic multiplicity.”

I can prove the first statement in a very elementary way, just by using properties of spectral projections and some very basic functional calculus. But the second statement seems to be quite difficult to prove. In the cited literature I found a proof of the statement (cf. the proof of Corollary XI.8.5 in “Classes of Linear Operators Vol. I” by Gohberg, Goldberg and Kaashoek), but it seems to rely on quite a few theorems about Fredholm-operator-valued functions.

So my question is whether there is a more elementary way to see that the statement holds, maybe just by using some basic facts about spectral projections. I thought about it for quite some time but couldn’t prove it. Is there perhaps a reference for the statement which uses more elementary arguments? Or does someone know another way to prove it? I am looking forward to your answers.

## Is $\bigcap I_\gamma$ unbounded if $I_0 \supset \cdots \supset I_{\gamma} \supset \cdots$ are unbounded?

Let $$\kappa$$ be a regular cardinal and let $$I_0 \supset \cdots \supset I_{\gamma} \supset \cdots$$ be unbounded subsets of $$\kappa$$ for $$\gamma < \lambda < \kappa$$, where $$\lambda$$ is a limit ordinal. I want to show that $$\bigcap_{\gamma<\lambda} I_\gamma$$ is unbounded. Is it true? It is easy to show that it is nonempty by the regularity of $$\kappa$$.