Anonymous function uses variable declared outside the function?

Please see the code below, which I have taken and adapted from here: https://www.rabbitmq.com/tutorials/tutorial-one-dotnet.html and here: https://github.com/rabbitmq/rabbitmq-tutorials/tree/master/dotnet (RPCServer and RPCClient):

public RpcClient()
{
    var factory = new ConnectionFactory() { HostName = "localhost" };

    connection = factory.CreateConnection();
    channel = connection.CreateModel();
    replyQueueName = channel.QueueDeclare().QueueName;
    consumer = new EventingBasicConsumer(channel);
    List<string> responses = new List<string>();
    consumer.Received += (model, ea) =>
    {
        var body = ea.Body;
        var response = Encoding.UTF8.GetString(body);

        responses.Add(response);
        if (responses.Count == 2)
        {
            if (!callbackMapper.TryRemove(ea.BasicProperties.CorrelationId, out TaskCompletionSource<List<string>> tcs))
                return;
            tcs.TrySetResult(responses);
        }
        else
            return;
    };
}

This is a Scatter Gather (https://www.enterpriseintegrationpatterns.com/patterns/messaging/BroadcastAggregate.html). Notice that the list of responses is defined outside of consumer.Received. Are there any pitfalls doing this that I have not considered?
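As a minimal sketch of the mechanic being asked about (a Python analogue, not the RabbitMQ API; make_consumer and on_received are hypothetical names): a variable captured by an anonymous or nested function outlives any single invocation, so every callback call mutates the same object.

```python
# Hypothetical sketch: the captured list is created once and shared by
# every invocation of the callback, just like `responses` above.
def make_consumer():
    responses = []                 # created once, outside the callback

    def on_received(body):
        responses.append(body)     # the closure mutates the shared list
        return list(responses)

    return on_received

consumer = make_consumer()
consumer("reply-1")
print(consumer("reply-2"))  # ['reply-1', 'reply-2']: both calls saw one list
```

In the C# code this means the one `responses` list created in the constructor is reused for every request the client ever makes, and is touched from the consumer's thread; it generally needs to be per-request (for example, stored alongside the TaskCompletionSource keyed by correlation ID) and protected against concurrent access.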

What makes a function computable?

It seems to me, at the moment, that I don’t understand where the line is drawn between “computability” and incomputability. The main question is: is a problem incomputable when it can be “solved” only by trying out every possible combination?

I know there are probably “even worse” things where it is not even possible to define “every possible combination”, but those are obviously over the line.

I have a few fringe examples I would like to discuss.

1 – Suppose there was a program called HaltsInUnder100Operations that decides whether any program x, given an input i, halts in under n operations (say 100, as it might need to be in a certain goldilocks range). It seems like it could be possible to analyse the program and find the answer from there. But if you put it through the same test as the general halting problem that Turing proved was impossible, it could run into the same logical difficulty.

The problem would not directly be that it is incomputable, but rather that there are probably some programs for which you cannot tell whether they will halt in under n operations without running them. Is that itself what makes it incomputable?

If you tried to test that program, it would possibly take more than n operations to give the result.

If you design a set of programs where it is possible to tell how many steps they will take, then the program HaltsInUnder100Operations(x, i) will not itself be in that set of programs, so that can’t work.
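For what it’s worth, halting within a fixed bound is computable: a decider can simply simulate x on i for at most n steps and report what happened, and Turing’s diagonal argument only bites when no step bound is given. A toy sketch in Python, where “programs” are modelled as generators so that each yield counts as one operation (all names here are hypothetical):

```python
# Bounded halting IS decidable: simulate for at most n steps.
def halts_in_under(program, arg, n):
    steps = 0
    for _ in program(arg):   # advance the program one "operation" at a time
        steps += 1
        if steps >= n:
            return False     # still running after n operations
    return True              # the generator finished: the program halted

def countdown(k):            # a toy "program" that takes k steps
    while k > 0:
        yield
        k -= 1

print(halts_in_under(countdown, 5, 100))    # True
print(halts_in_under(countdown, 500, 100))  # False
```

The decider itself may take about n operations to answer, which is exactly why it cannot belong to the class of programs it decides within that same bound.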

2 – In the Numberphile video where they are solving a^3 + b^3 + c^3 = 33, they say equations like these officially “could be undecidable” (incomputable?), and yet they are finding solutions left and right by trying every possible combination. Maybe they mean that there might be no “formula” for finding the solutions.
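The distinction shows up clearly in code: verifying any particular (a, b, c) is trivial, and a bounded search is mechanical; what is unknown is how large the bound must be (for 33, the smallest known solution has terms of roughly sixteen digits). A brute-force sketch, using k = 29 so a small solution exists (function name and bound are illustrative):

```python
from itertools import product

# Exhaustive search: does a^3 + b^3 + c^3 == k for some |a|,|b|,|c| <= limit?
def three_cubes(k, limit):
    rng = range(-limit, limit + 1)
    for a, b, c in product(rng, rng, rng):
        if a**3 + b**3 + c**3 == k:
            return (a, b, c)
    return None   # no solution within this bound (says nothing beyond it)

a, b, c = three_cubes(29, 4)
print(a, b, c)   # some triple of cubes summing to 29
```

Note the asymmetry: a "yes" answer found by search is conclusive, but a "no" within the bound proves nothing about larger values, which is where the undecidability worry lives.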

3 – Similarly, the travelling salesman problem is a famous example of intractability (it is NP-hard, though still decidable) and can be solved for small enough sets by trying every possible combination.
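A sketch of that brute-force approach (the distances are made-up illustrative values): checking all (n−1)! tours is always possible in principle, so it is the running time, not decidability, that explodes.

```python
from itertools import permutations

# Hypothetical symmetric distances between 4 cities labelled 0..3.
dist = {(0, 1): 2, (0, 2): 9, (0, 3): 10, (1, 2): 6, (1, 3): 4, (2, 3): 3}

def d(a, b):
    return dist[(min(a, b), max(a, b))]

def tour_cost(tour):
    return sum(d(a, b) for a, b in zip(tour, tour[1:]))

def brute_force_tsp(n):
    # Fix city 0 as the start and try every ordering of the remaining cities.
    best = min(((0,) + p + (0,) for p in permutations(range(1, n))),
               key=tour_cost)
    return best, tour_cost(best)

print(brute_force_tsp(4))   # optimal cost is 18 for these distances
```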

And the busy beaver problem (a prime example of “incomputable”) has been solved for up to 4 states in binary, again by trying every possible program and finding which ones halt or loop – but I heard there is a point (1919 states?) where it can’t be solved in this way. Maybe there are machines that run forever without repeating, and there is no way of telling that they won’t halt at some extremely large finite step. So, even given a never-ending amount of time, the answer would never be found.

4 – The prime numbers may seem “computable” at first glance, but there is no known formula that can “efficiently compute” them (i.e. you plug in n and you get out the nth prime number). So which side of the line is that on?
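In the formal sense the primes are squarely on the computable side: an exhaustive search that provably terminates is still an algorithm. What is missing is an efficient closed-form formula, which is a complexity question rather than a computability one. A sketch:

```python
# Computable-by-search: the nth prime via trial division (slow but always
# terminates, because there are infinitely many primes).
def nth_prime(n):
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d != 0 for d in range(2, int(candidate**0.5) + 1)):
            count += 1
    return candidate

print([nth_prime(i) for i in range(1, 7)])   # [2, 3, 5, 7, 11, 13]
```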

5 – Pi (or its digits) is considered to be “computable”, even though it would take infinitely long to produce all the digits.
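“Computable” for a real number only requires a single algorithm that, given d, outputs the first d digits; nobody demands all infinitely many at once. A sketch using Machin’s formula pi = 16·arctan(1/5) − 4·arctan(1/239) with scaled integer arithmetic (the guard-digit count here is an illustrative choice):

```python
# Return the integer "3" followed by the first d decimal digits of pi.
def pi_digits(d):
    scale = 10 ** (d + 10)              # 10 extra guard digits

    def arctan_inv(x):                  # arctan(1/x), scaled by `scale`
        total, term, n = 0, scale // x, 1
        while term:
            total += term // n if n % 4 == 1 else -(term // n)
            term //= x * x
            n += 2
        return total

    pi = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return pi // 10 ** 10               # drop the guard digits

print(pi_digits(5))   # 314159
```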

6 – If you were to write down some numbers “at random” (without using a formula), or generate them based on random inputs, would those be “incomputable”? It could be argued that being affected by stochastic processes makes them incomputable, though finite (because you are going to stop at some point). And yet if you had a computer generate all decimals of length n (where n is the length of your randomly generated decimal), then one of those would be the same decimal. So the line is blurred again.

With these possibly incomputable finite problems (if they are indeed incomputable), I wonder whether the basic foundation of computability is itself “incomputable” by some definition – for example, the formulae themselves, which required some exhaustive testing to establish, even if doing so was very easy. Computability would then be something based on established formulae, and the ability to get further answers using those without needing an exhaustive search. Maybe it just means anything that can be reached with “shortcuts” of some type?

LedgerJS – How to P2SH sign multiple transactions from multiple wallets in 1 function call?

First some explanation

I have made 2 raw transactions via Electrum using my Nano S Ledger as seed and exported them to .txn files.

let rawTx1 = "02000000000101bf00f7aca2e0d393ad0a762224ad4cd5a10d9950804fbc5a22fe970918301179000000001716001400ce8131595e014b45ec6ca49495d547ab8bd872fdffffff02a08601000000000017a91457fd0d41e459a4227b8932327786cf512d99399987bc72ca...";
let rawTx2 = "02000000000101bf00f7aca2e0d393ad0a762224ad4cd5a10d9950804fbc5a22fe970918301179000000001716001400ce8131595e014b45ec6ca49495d547ab8bd872fdffffff02a08601000000000017a914a66dff1bf27dd1a5944b5bc9ff2b0f410efb64cd87bc72ca...";

Then I turned them into UTXO objects using the splitTransaction() function:

const UTXO1 = await btc.splitTransaction(rawTx1, true);
const UTXO2 = await btc.splitTransaction(rawTx2, true);

Next, I get the Wallet public keys I used for these transactions:

const wallet_1 = await btc.getWalletPublicKey("m/49'/1'/1'", false, true);
const wallet_2 = await btc.getWalletPublicKey("m/49'/1'/2'", false, true);

So transaction 1 was made with wallet 1 and transaction 2 with wallet 2. According to LedgerJS, I have to respect that order when calling the signP2SHTransaction() function:

btc.signP2SHTransaction(
    [[UTXO1, 1, wallet_1.publicKey], [UTXO2, 1, wallet_2.publicKey]],
    ["m/49'/1'/1'", "m/49'/1'/2'"],
    btc.serializeTransactionOutputs(???).toString('hex')
);

Here,

[UTXO1, 1, wallet_1.publicKey] 

are the Transaction object, output index, and redeem script in that order. And

["m/49'/1'/1'", "m/49'/1'/2'"] 

are the derivation paths of both of my wallets.

My question is about the third line:

btc.serializeTransactionOutputs(???).toString('hex') 

I know for a single transaction from a single wallet, I just toss in the UTXO1 in there:

btc.serializeTransactionOutputs(UTXO1).toString('hex') 

But now that I have multiple UTXOs, I don’t know what to fill in there anymore. Any ideas?

Interpretation of composition operator when applying a function to the output of another function

Refreshing my calculus skills a bit, I reviewed the chain rule:

[image: the chain rule, $(g \circ f)'(x) = g'(f(x)) \cdot f'(x)$]

I wondered if the composition operation $\circ$ in $(g \circ f)(x)$ could actually also be written as $g(f(x))$, as this would resemble how one might think about such an operation from a (functional) programming perspective.
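Indeed, $(g \circ f)(x)$ and $g(f(x))$ denote the same value by definition of $\circ$: the operator builds a new function whose application is “apply f, then g”. That is exactly how a programmer would implement it (a hypothetical sketch):

```python
# Function composition: (g o f)(x) == g(f(x)).
def compose(g, f):
    return lambda x: g(f(x))

f = lambda x: x + 1        # f(x) = x + 1
g = lambda x: 2 * x        # g(x) = 2x
h = compose(g, f)          # h = g o f

print(h(3))  # g(f(3)) = g(4) = 8
```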

Disclaimer

My background is in Software Engineering combined with Applied Statistical Analysis, within the context of a degree in Business Administration – so unfortunately I never had heavy formal training in theoretical math. Apologies if I sometimes don’t use the correct technical terms and/or express things a bit “unmathy”.

Passing uint64_t from C++ to R: Hilbert Mapping – xy2d function

I have been working with Rcpp to perform a forward and backward Hilbert Mapping. Below is an implementation based on this code.

My application is in genomics and I may be dealing with enormous datasets, which necessitates the use of very large integers for indices, so I found this code for passing large integers to R using Rcpp and the bit64 R package and incorporated it after the for loop.

The xy2d() function works properly. My interest is in your thoughts regarding the code AFTER the for loop, which prepares the result for passage back to R. Please let me know what you think 🙂

#include <Rcpp.h>
#include <bitset>
#include <cstdint>
#include <cstring>   // for std::memcpy
#include <ctime>
#include <iomanip>
#include <iostream>
using namespace Rcpp;
using namespace std;

// i4_power() and rot() are defined in the referenced implementation.
//****************************************************************************80
// [[Rcpp::export]]
Rcpp::NumericVector xy2d ( int m, uint64_t x, uint64_t y )
//****************************************************************************80
{
  uint64_t d = 0;
  uint64_t n;
  int rx;
  int ry;
  uint64_t s;

  n = i4_power ( 2, m );

  if ( x > n - 1 || y > n - 1 ) {
    throw std::range_error("Neither x nor y may be larger than (2^m - 1)\n");
  }

  for ( s = n / 2; s > 0; s = s / 2 )
  {
    rx = ( x & s ) > 0;
    ry = ( y & s ) > 0;
    d = d + s * s * ( ( 3 * rx ) ^ ry );
    rot ( s, x, y, rx, ry );
  }

  std::vector<uint64_t> v;
  v.push_back(d);   // v[0] = d

  size_t len = v.size();
  Rcpp::NumericVector nn(len);   // storage vehicle we return them in

  // transfers values 'keeping bits' but changing type;
  // using reinterpret_cast would get us a warning
  std::memcpy(&(nn[0]), &(v[0]), len * sizeof(uint64_t));

  nn.attr("class") = "integer64";
  return nn;
}
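The memcpy line is doing a bit-for-bit reinterpretation: the 64-bit unsigned payload is stored inside a double's bytes, and the "integer64" class attribute tells the bit64 package on the R side to read those bits back as an integer. The same trick, sketched in Python with the standard struct module, shows the round trip is lossless even for values a float64 could not represent exactly:

```python
import struct

# Pack a uint64's bits, then reinterpret the same 8 bytes as a float64:
# an analogue of the memcpy into Rcpp::NumericVector storage above.
def uint64_as_double(u):
    return struct.unpack('<d', struct.pack('<Q', u))[0]

def double_as_uint64(x):
    return struct.unpack('<Q', struct.pack('<d', x))[0]

big = 2**63 + 12345    # far beyond the integers float64 can hold exactly
assert double_as_uint64(uint64_as_double(big)) == big   # bits survive intact
print("round trip ok")
```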

This post will be followed up shortly with another post regarding the rot() function, as well as the reverse d2xy() function.

What to do if a potential function does not work? Amortized analysis

Here is an example taken from CLRS.

Q) Consider an ordinary binary min-heap data structure with n elements supporting the instructions INSERT and EXTRACT-MIN in O(lg n) worst-case time. Give a potential function Φ such that the amortized cost of INSERT is O(lg n) and the amortized cost of EXTRACT-MIN is O(1), and show that it works.

$\Phi(H) = 2 \cdot (\text{size of heap}) = 2n$

insert:

the amortized cost has the formula

$a_n = c_n + \Phi_{n+1} - \Phi_n$

$= \log(n) + 2(n+1) - 2n = \log(n) + 2 = O(\log n)$

Does this hold?

delete is a bit different because after the operation n goes down by 1, hence

$a_n = c_n + \Phi_{n-1} - \Phi_n$

$= \log(n) + 2(n-1) - 2n = \log(n) - 2 = O(\log n)$

so delete is obviously wrong, since it is not O(1), but insert gave the correct bound. How do I properly show that a potential function does not work? Is it enough to just show this? Note: I’m not looking to solve the above question, just asking how to disprove potential functions.
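One standard way to disprove a candidate potential is exactly the computation above, stated cleanly: exhibit an operation whose amortized cost under Φ provably exceeds the required bound for infinitely many n. A small numeric sketch (Φ = 2n as above, with the actual EXTRACT-MIN cost taken as log₂ n):

```python
import math

# Amortized cost = actual cost + (Phi_after - Phi_before).
# With Phi(H) = 2n, EXTRACT-MIN on n elements gives
# Delta-Phi = 2(n-1) - 2n = -2, so only a constant is "paid back".
def amortized_extract_min(n):
    actual = math.log2(n)
    return actual + (2 * (n - 1) - 2 * n)

for n in (2**10, 2**20, 2**30):
    print(n, amortized_extract_min(n))   # 8.0, 18.0, 28.0: grows like log n
```

Because this value is log₂(n) − 2, which is unbounded in n, no constant c can satisfy the O(1) requirement; that single unbounded family of counterexamples is a complete disproof of this Φ.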

Can you tell me if this function is correct?

Can you tell me if this function is correct?
If not, can you tell me why/how it is not?

var recLimit;

function recLimit() {
    stopvideo.trigger("click");
}

Do you need more info to help me? I was told “with stopvideo, you only need to use trigger and specify the ‘click’ event”, but I need more help. I look forward to your assistance.

How to get dplyr::mutate() and factor() to work when placed inside a function?

I am exploring data from the Pokemon API (not actually using the API, just pulling the .csv files from the github). In a file that contains the types of every Pokemon in narrow format (a Pokemon can have up to two types) called pokemon_types.csv, the types are encoded as integers (essentially factors). I want to label these levels by using a lookup table (types.csv), also from the API, that contains the levels as an id (1, 2, 3, etc.) and a corresponding identifier (normal, fighting, flying, etc.) which I want to use as the label.

> head(read_csv(path("pokemon_types.csv")), 10)
# A tibble: 10 x 3
   pokemon_id type_id  slot
        <dbl>   <dbl> <dbl>
 1          1      12     1
 2          1       4     2
 3          2      12     1
 4          2       4     2
 5          3      12     1
 6          3       4     2
 7          4      10     1
 8          5      10     1
 9          6      10     1
10          6       3     2
> head(read_csv(path("types.csv")))
# A tibble: 6 x 4
     id identifier generation_id damage_class_id
  <dbl> <chr>              <dbl>           <dbl>
1     1 normal                 1               2
2     2 fighting               1               2
3     3 flying                 1               2
4     4 poison                 1               2
5     5 ground                 1               2
6     6 rock                   1               2

My code works when I pipe all of the steps individually, but since I am going to perform this labeling step at least a dozen times or so I tried to put it into a function. The problem is that when I call the function instead (which has exactly the same steps as far as I can tell) it throws an object not found error.

The Setup:

library(readr)
library(magrittr)
library(dplyr)
library(tidyr)

options(readr.num_columns = 0)

# Append web directory to filename
path <- function(x) {
  paste0("https://raw.githubusercontent.com/",
         "PokeAPI/pokeapi/master/data/v2/csv/", x)
}

The offending function:

# Use lookup table to label factor variables
label <- function(data, variable, lookup) {
  mutate(data, variable = factor(variable,
                                 levels = read_csv(path(lookup))$id,
                                 labels = read_csv(path(lookup))$identifier))
}

This version, which doesn’t use the function, works:

df.types <-
  read_csv(path("pokemon_types.csv")) %>%
  mutate(type_id = factor(type_id,
                          levels = read_csv(path("types.csv"))$id,
                          labels = read_csv(path("types.csv"))$identifier)) %>%
  spread(slot, type_id)

head(df.types)

it returns:

# A tibble: 6 x 3
  pokemon_id `1`   `2`
       <dbl> <fct> <fct>
1          1 grass poison
2          2 grass poison
3          3 grass poison
4          4 fire  NA
5          5 fire  NA
6          6 fire  flying

This version, which uses the function, does not:

df.types <-
  read_csv(path("pokemon_types.csv")) %>%
  label(type_id, "types.csv") %>%
  spread(slot, type_id)

it returns:

Error in factor(variable,
                levels = read_csv(path(lookup))$id,
                labels = read_csv(path(lookup))$identifier) :
  object 'type_id' not found

I know that there are several things that may be sub-optimal here (downloading the lookup table twice on each call, for instance), but I am more interested in why moving seemingly identical code into a function makes it stop working. I am sure I am just making a silly mistake.
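The underlying issue is non-standard evaluation: inside the wrapper, the bare name type_id is evaluated as an ordinary variable instead of being captured as an expression, something an ordinary function in most languages cannot do at all. A Python analogy (the label function here is hypothetical) makes the failure mode visible:

```python
# In Python a bare name is always evaluated eagerly, so there is no way to
# pass an "unquoted column name"; you must pass the name as a string.
def label(data, variable):
    return data[variable]

table = {"type_id": [12, 4, 12]}

try:
    label(table, type_id)          # bare name, like the R call label(df, type_id)
except NameError:
    print("object 'type_id' not found")   # Python's version of the same error

print(label(table, "type_id"))     # passing the name as a string works
```

dplyr verbs capture bare names via tidy evaluation, but that capture does not pass through a plain wrapper automatically; in current rlang/dplyr you forward it with the embrace operator, as in mutate(data, {{ variable }} := factor({{ variable }}, ...)). (Note also that variable = factor(...) in the wrapper would create a column literally named "variable"; the {{ variable }} := form fixes that too.)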