$\Phi_1=1$ or $\Phi_1=2$ for the dynamic $\text{Table-Insert}$, where $\Phi_i$ is the potential function after the $i$th operation, as per CLRS

The following is from the Dynamic Tables section of Introduction to Algorithms by Cormen et al.

In the following pseudocode, we assume that $ T$ is an object representing the table. The field $ table[T]$ contains a pointer to the block of storage representing the table. The field $ num[T]$ contains the number of items in the table, and the field $ size[T]$ is the total number of slots in the table. Initially, the table is empty: $ num[T] = size[T] = 0$ .

$ \text{Table-Insert(T,x)}$

$ 1\quad \text{if $ size[T]=0$ }$

$ 2\quad\quad \text{then allocate $ table[T]$ with $ 1$ slot}$

$ 3\quad\quad size[T] \leftarrow 1$

$ 4\quad\text{if } num[T] =size[T]$

$ 5\quad\quad \text{then allocate $ new-table$ with $ 2 \cdot size[T]$ slots}$

$ 6\quad\quad\quad\text{insert all items in $ table[T]$ into $ new-table$ }$

$ 7\quad\quad\quad\text{free $ table[T]$ }$

$ 8\quad\quad\quad table[T] \leftarrow new-table$

$ 9\quad\quad\quad size[T] \leftarrow 2 \cdot size[T]$

$ 10\quad \text{insert $ x$ into $ table[T]$ }$

$ 11\quad num[T] \leftarrow num[T] + 1$

For the amortized analysis of a sequence of $ n$ $ \text{Table-Insert}$ operations, the potential function they choose is as follows:

$$\Phi(T) = 2\cdot num[T]-size[T]$$

To analyze the amortized cost of the $ i$ th $ \text{Table-Insert}$ operation, we let $ num_i$ denote the number of items stored in the table after the $ i$ th operation, $ size_i$ denote the total size of the table after the $ i$ th operation, and $ \Phi_i$ denote the potential after the $ i$ th operation.

Initially, we have $ num_0 = 0, size_0 = 0$ , and $ \Phi_0 = 0$ .

If the $ i$th $\text{Table-Insert}$ operation does not trigger an expansion, then we have $ size_i = size_{i-1}$ and $ num_i=num_{i-1}+1$, and the amortized cost of the operation is (here $ \widehat{c_i}$ denotes the amortized cost and $ c_i$ the actual cost):

$$\widehat{c_i}=c_i+\Phi_i- \Phi_{i-1} = 3 \text{ (details not shown)}$$
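
Filling in the omitted details myself (using $c_i = 1$, since an insertion without expansion costs $1$):

$$\widehat{c_i} = 1 + \bigl(2\cdot num_i - size_i\bigr) - \bigl(2\cdot num_{i-1} - size_{i-1}\bigr) = 1 + 2\bigl(num_{i-1}+1\bigr) - size_{i-1} - 2\cdot num_{i-1} + size_{i-1} = 3.$$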

If the $ i$th operation does trigger an expansion, then we have $ size_i = 2 \cdot size_{i-1}$ and $ size_{i-1} = num_{i-1} = num_i - 1$, so again,

$$\widehat{c_i}=c_i+\Phi_i- \Phi_{i-1} = 3 \text{ (details not shown)}$$
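
Again filling in the omitted details myself: here $c_i = num_i$ (the $num_{i-1}$ items moved into the new table plus the new insertion), and $size_i = 2\cdot size_{i-1}$ with $size_{i-1} = num_i - 1$, so

$$\widehat{c_i} = num_i + \bigl(2\cdot num_i - 2\cdot size_{i-1}\bigr) - \bigl(2\cdot num_{i-1} - size_{i-1}\bigr) = num_i + 2 - size_{i-1} = num_i + 2 - (num_i - 1) = 3.$$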


Now the problem is that they do not make the calculation for $ \widehat{c_1}$, the first insertion of an element into the table (only lines 1, 2, 3, 10, and 11 of the code get executed).

In that situation, the cost $ c_1=1$, $ \Phi_0=0$, and $ num_1=size_1=1 \implies \Phi_1 = 2\cdot 1-1 =1$.

We see that $$\Phi_1=1 \tag 1$$

So, $$\widehat{c_1}=c_1+\Phi_1-\Phi_0=2$$

But the text says that the amortized cost is $ 3$. (I feel they should have said the amortized cost is at most $ 3$, from what I can understand.)
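
As a sanity check, here is a small simulation I wrote (a sketch of my own, following the pseudocode above), and it reproduces exactly this: $\widehat{c_1}=2$, and $\widehat{c_i}=3$ for every $i\geq 2$.

# Simulate Table-Insert and compute amortized costs with Phi = 2*num - size.
num, size, phi = 0, 0, 0
for i in range(1, 9):
    if size == 0:
        size = 1          # lines 1-3: allocate one slot; the insertion costs 1
        cost = 1
    elif num == size:
        cost = num + 1    # expansion: move num items, then insert the new one
        size *= 2
    else:
        cost = 1          # elementary insertion
    num += 1
    new_phi = 2 * num - size
    print(i, cost, new_phi, cost + new_phi - phi)   # i, c_i, Phi_i, amortized cost
    phi = new_phi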

Moreover in the plot below,

[Plot from the text, in which $\Phi_1 = 2$]

The text represents graphically that $ \Phi_1=2$, which sort of contradicts $ (1)$; but as per the graph, if we assume $ \Phi_1=2$, then $ \widehat{c_i}=3\ \forall i$.

I do not quite see where I am making a mistake.

Get modulus and plot complex function

I have the following function:

freq[a_, b_, t0_, tr_, s_] :=
 -((b E^(-s (b + t0)) (b E^(s (b + t0)) (-1 + b s) UnitStep[-b] -
       b E^(s t0) UnitStep[b] +
       E^(s (b - tr)) (E^(s (t0 + tr)) (-1 + b s) UnitStep[-t0] +
          E^(s tr) (-1 + b s - s t0) UnitStep[t0] -
          E^(s (t0 + tr)) (-1 + b s) UnitStep[-t0 - tr] +
          (1 + s (-b + t0 + tr)) UnitStep[t0 + tr])))/(s^2 tr))

Now I want to plot the function as follows:

Plot[ComplexExpand@Abs@ExpToTrig@freq[0, 1, 0, 10^-6, I w], {w, 0, 10^9}]

However, that doesn’t work. I couldn’t extract the absolute value of the complex function in order to plot it.
(w is a real positive number)

Does anyone know how to plot that?

Complexity of approximating a function value using queries

I am looking for information on problems of the following kind.

There is a function $ f: [0,1] \to \mathbb{R}$ that is continuous and monotonically-increasing, with $ f(0)<0$ and $ f(1)>0$ . You have to find the unique $ x\in[0,1]$ such that $ f(x)=0$ . You can access $ f$ only through queries of the type "what is $ f(x)$ ?". How many such queries do you need in order to approximate $ x$ up to some constant $ \epsilon$ ?

Here, the solution is simple: using binary search, the interval in which $ x$ can lie shrinks by a factor of $2$ after each query, so $ \log_2(1/\epsilon)$ queries are sufficient. This is also a lower bound, since an adversary can always answer in such a way that the possible interval for $ x$ shrinks by a factor of at most $2$ after each query.
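
For concreteness, a minimal sketch of the binary-search strategy (the test function here is a stand-in of my own choosing):

import math

def find_zero(f, eps):
    """Locate the x with f(x) = 0 up to eps, querying f only at chosen points."""
    lo, hi, queries = 0.0, 1.0, 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1                 # one query: "what is f(mid)?"
        if f(mid) < 0:
            lo = mid                 # the zero lies in (mid, hi]
        else:
            hi = mid                 # the zero lies in [lo, mid]
    return (lo + hi) / 2, queries

x, q = find_zero(lambda x: x - math.sqrt(0.5), 1e-3)
print(x, q)  # q == ceil(log2(1/eps)) == 10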

However, one can think of more complicated problems of this kind, with several different functions and possibly different kinds of queries.

What is a term, and what are some references, for this kind of computational problem?

What properties of a discrete function make it a theoretically useful objective function?

A few things to get out of the way first: I’m not asking what properties the function must have such that a global optimum exists; we assume that the objective function has a (possibly non-unique) global optimum which could, in theory, be found by an exhaustive search of the candidate space. I’m also using "theoretically useful" in a slightly misleading way, because I really couldn’t work out how else to phrase this question. A "theoretically useful cost function", the way I’m defining it, is:

A function to which some theoretical optimisation algorithm can be applied such that the algorithm has a non-negligible chance of finding the global optimum in less time than exhaustive search

A few simplified, 1-dimensional examples of where this thought process came from:

[graph of a bimodal function exhibiting both a global and a local maximum]

Here’s a function which, while not being convex or differentiable (as it’s discrete), is easily optimisable (in terms of finding the global maximum) with an algorithm such as Simulated Annealing.

[graph of a boolean function with 100 zero values and a single 1 value]

Here is a function which clearly cannot be a useful cost function, as this would imply that the arbitrary search problem can be classically solved faster than exhaustive search.

[graph of a function which takes random discrete values]

Here is a function which I do not believe can be a useful cost function, as moving between points gives no meaningful information about the direction in which one must move to find the global maximum.

The crux of my thinking so far is along the lines of "applying the cost function to points in the neighbourhood of a point must yield some information about the location of the global optimum". I attempted to formalise this (in a perhaps convoluted manner) as:

Consider the set $ D$ representing the search space of the problem (and thus the domain of the function), and an undirected graph $ G$ in which each element of $ D$ is assigned a node, and each node has edges connecting it to its neighbours in $ D$. We then remove elements from $ D$ until the objective function has no non-global local optima over this domain and no plateaus exist (i.e. the value of the cost function at each point in the domain differs from its value at each of that point's neighbours). Every time we remove an element $ e$ from $ D$, we remove the corresponding node from the graph $ G$ and add edges directly connecting each pair of neighbours of $ e$, so they become each other's new neighbours. The number of elements which remain in the domain after this process is applied is designated $ N$. If $ N$ is a non-negligible proportion of $ \#(D)$ (i.e. significantly greater than the proportion of $ \#\{\text{possible global optima}\}$ to $ \#(D)$), then the function is a useful objective function.
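
To make this concrete, below is a rough 1-dimensional instantiation of the contraction process (entirely my own sketch, treating only maxima in line with the maximisation framing above; in a path graph, deleting an element automatically makes its two former neighbours adjacent):

import random

def contract(values):
    # values: the cost function on a 1-D domain; neighbours = adjacent positions.
    gmax = max(values)
    kept = list(range(len(values)))      # indices of surviving domain elements

    def removable(k):
        v = values[kept[k]]
        nbrs = [values[kept[j]] for j in (k - 1, k + 1) if 0 <= j < len(kept)]
        if any(v == w for w in nbrs):    # part of a plateau
            return True
        # a non-global local maximum: strictly above all surviving neighbours
        return v != gmax and all(v > w for w in nbrs)

    removed = True
    while removed:                       # keep deleting until nothing qualifies
        removed = False
        for k in range(len(kept)):
            if removable(k):
                kept.pop(k)              # popping reconnects the two neighbours
                removed = True
                break
    return kept

random.seed(0)
rand_vals = [random.random() for _ in range(1000)]
print(len(contract(rand_vals)), "of", len(rand_vals), "elements survive")  # this is N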

Whilst this works well for the function which definitely is useful and for the definitely-not-useful boolean function, this process seems to give the wrong verdict for the random function, as the number of elements that would remain (leaving a function with no non-global local optima) IS a non-negligible proportion of the total domain.

Is my definition on the right track? Is this a well-known question that I just can’t figure out how to find the answer to? Does there exist some optimisation algorithm that would theoretically be able to find the optimum of a completely random function faster than exhaustive search, or is my assertion correct that it would not be able to?

In conclusion, what is different about the first function that makes it a good candidate for optimisation, compared to the other functions which are not?

Analyzing space complexity of passing data to function by reference

I have some difficulty understanding the space complexity of the following algorithm. I’ve solved the problem Subsets on LeetCode. I understand why those solutions’ space complexity would be O(N * 2^N), where N is the length of the initial vector: in all those cases the subsets (vectors) are passed by value, so every subset is kept on the recursion stack. But I passed everything by reference. This is my code:

class Solution {
public:
    vector<vector<int>> result;

    void rec(vector<int>& nums, int &position, vector<int> &currentSubset) {
        if (position == nums.size()) {
            result.push_back(currentSubset);
            return;
        }

        currentSubset.push_back(nums[position]);
        position++;
        rec(nums, position, currentSubset);
        currentSubset.pop_back();
        rec(nums, position, currentSubset);
        position--;
    }

    vector<vector<int>> subsets(vector<int>& nums) {
        vector<int> currentSubset;
        int position = 0;
        rec(nums, position, currentSubset);
        return result;
    }
};

Would the space complexity be O(N)? As far as I know, passing by reference doesn’t allocate new memory, so every possible subset would be built in the same vector, which was created before the recursive calls.
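
To test my intuition, I rewrote the same backtracking in Python (a sketch, not my actual submission) with one shared buffer, instrumenting the recursion depth; apart from the returned result, the working memory stays at O(N):

# Same backtracking with a single shared buffer, instrumented to show that the
# auxiliary space (shared buffer + recursion depth) is O(N), while the output
# itself still occupies O(N * 2^N).
def subsets(nums):
    result, current = [], []
    max_depth = 0

    def rec(position, depth):
        nonlocal max_depth
        max_depth = max(max_depth, depth)
        if position == len(nums):
            result.append(current[:])    # the copy stored here is pure output
            return
        current.append(nums[position])
        rec(position + 1, depth + 1)     # include nums[position]
        current.pop()
        rec(position + 1, depth + 1)     # exclude nums[position]

    rec(0, 1)
    return result, max_depth

res, depth = subsets([1, 2, 3])
print(len(res), depth)   # 8 subsets, recursion depth N + 1 = 4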

I would also appreciate it if you could tell me how to estimate space complexity when working with references in general. These are the only cases where I hesitate about the correctness of my reasoning.

Thank you.

Recurrence relation for the number of “references” to two mutually recursive functions

I was going through the Dynamic Programming section of Introduction to Algorithms (2nd edition) by Cormen et al., where I came across the following recurrence relations in the assembly-line scheduling portion.


$ (1),(2),(3)$ are three relations as shown.

$$f_{1}[j] = \begin{cases} e_1+a_{1,1} &\quad\text{if } j=1\\ \min(f_1[j-1]+a_{1,j},\,f_2[j-1]+t_{2,j-1}+a_{1,j}) &\quad\text{if } j\geq 2 \end{cases}\tag 1$$

Symmetrically,

$$f_{2}[j] = \begin{cases} e_2+a_{2,1} &\quad\text{if } j=1\\ \min(f_2[j-1]+a_{2,j},\,f_1[j-1]+t_{1,j-1}+a_{2,j}) &\quad\text{if } j\geq 2 \end{cases}\tag 2$$

(where $ e_i$, $ a_{i,j}$, and $ t_{i,j-1}$ are constants for $ i=1,2$ and $ j=1,2,3,\ldots,n$)

$$f^\star=\min(f_1[n]+x_1,\,f_2[n]+x_2)\tag 3$$


The text tries to find a recurrence relation for the number of times $ f_i[j]$ (for $ i=1,2$ and $ j=1,2,3,\ldots,n$) is referenced if we write mutually recursive code for $ f_1[j]$ and $ f_2[j]$. Let $ r_i(j)$ denote the number of times $ f_i[j]$ is referenced.

They say that,

From $ (3)$ ,

$$r_1(n)=r_2(n)=1.\tag 4$$

From $ (1)$ and $ (2)$ ,

$$r_1(j)=r_2(j)=r_1(j+1)+r_2(j+1)\tag 5$$


I could not quite understand how relations $ (4)$ and $ (5)$ are obtained from the three corresponding relations.

Though I could make out intuitively that, as there is only one place where $ f_1[n]$ and $ f_2[n]$ are called, namely in $ f^\star$, we probably get the required relation $ (4)$.

But as I had not encountered such a concept before, I do not quite know how to proceed. I would be grateful if someone could guide me through the mathematical proof of the derivation as well as the intuition. However, I would prefer an alternative to mathematical induction, as it is a mechanical cookbook method that does not give much insight into the problem (but if there is no other way out, then I shall appreciate mathematical induction as well, provided the intuition is explained to me properly).
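
For what it's worth, here is a quick counting experiment I tried (a sketch of my own, not from the text): evaluate the recurrences naively as mutually recursive calls and count how many times each $f_i[j]$ is referenced. The counts match $(4)$ and $(5)$, and suggest the closed form $r_i(j)=2^{n-j}$.

# Count references to f_i[j] under a naive mutually recursive evaluation.
n = 5
refs = {(i, j): 0 for i in (1, 2) for j in range(1, n + 1)}

def f(i, j):
    refs[(i, j)] += 1        # f_i[j] is referenced once more
    if j == 1:
        return               # base case e_i + a_{i,1}: no further references
    f(i, j - 1)              # the f_i[j-1] term of the min
    f(3 - i, j - 1)          # the f_{3-i}[j-1] term (the other line)

f(1, n)                      # f* references f_1[n] once ...
f(2, n)                      # ... and f_2[n] once, giving relation (4)

for j in range(1, n + 1):
    print(j, refs[(1, j)], refs[(2, j)], 2 ** (n - j))   # r_1(j), r_2(j), 2^(n-j)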

PostgreSQL – Update in function not working

I am working with Postgres 10.7. In my application, I want to update a table on a daily basis as an automated schedule.

I have created a function for this (not the schedule, but the code I want to execute). The function compiles and runs successfully, but it does not update the columns when executed. This is my function:

CREATE or replace function updateProdDaily() returns void AS $func$
BEGIN
    UPDATE products_table SET trxn_cnt =: 0;
EXCEPTION
    WHEN OTHERS THEN
    raise notice '% %', SQLERRM, SQLSTATE;
END;
$func$ LANGUAGE plpgsql;

This is how I execute it.

SELECT updateProdDaily(); 

Can someone point out if there is a problem with the function please?

Also how would I schedule this function to run on a daily basis?

DO $$
BEGIN
    PERFORM updateProdDaily();
    commit;
EXCEPTION
    WHEN OTHERS THEN
    raise notice '% %', SQLERRM, SQLSTATE;
END $$;

Why, if a function accepts arguments, does it fail on AJAX calls?

I am trying to write a WP function that works with both AJAX and direct calls, something like this:

PHP

function some_function($var){
    $var = !empty($var) ? $var : $_POST['var'];
    echo $var;

    //or even
    $var = null;
    echo 'something';

    if(!empty($_POST['action'])) wp_die();
}

AJAX CALL

let ajaxurl = '███';
let data = {'action': 'somefunction', 'var': 'foo'};
$.post(ajaxurl, data, function(response) {console.log(response);});

WP use

add_action( 'wp_ajax_somefunction', 'some_function', 10, 1 );
add_action( 'wp_ajax_nopriv_somefunction', 'some_function', 10, 1 );

Another WP use

some_function('bar'); 

However, any time I place $var as an accepted function argument, some_function($var), my AJAX calls start returning a 500 error. So, something like this

function some_function(){
    $var = !empty($var) ? $var : $_POST['var'];
    echo $var;
}

works for AJAX.

I tried looking up WP AJAX & arguments, but the search results are always about the variables we pass through AJAX, not about the callback function’s arguments. The only thing I learned is that we have to add the number of accepted arguments into add_action().

What am I doing wrong?

Thank you.

…P.S. I found a funny workaround:

function some_function_ajax(){
    $var = $_POST['var'];
    some_function($var);
}
function some_function($var){
    echo $var;
}
// =)

but still, what is the right way?

FindRoot blocked by Jacobian in multi-valued function

Clear[t, r, z]
c = 1; gam = 1.; tm = 16.;
r[t_] = 2 c ArcTanh[Tan[t/(2 Sqrt[1 - Cot[gam]^2])]];
z[t_] = r[t] Cot[gam];
plr = PolarPlot[r[t], {t, 0, tm}, GridLines -> Automatic,
   PlotStyle -> {Blue, Thick}]
rad = Plot[r[t], {t, 0, tm}, GridLines -> Automatic,
   PlotStyle -> {Red, Thick}]
pp3 = ParametricPlot3D[{r[t] Cos[t], z[t], r[t] Sin[t]}, {t, -tm, tm},
   PlotStyle -> {Magenta, Thick}]
FindRoot[r[t] == 1.25, {t, 1.2345}]

The last line, FindRoot, does not work due to a Jacobian singularity. Can there be some workaround? Thanks for the help.