Transforming an immutable binary tree without recursion

I’m struggling with this one. I have a Binary Decision Diagram, which is pretty much tree-like. Each node has a hi and a lo node. I need to recurse into the tree and, if certain conditions hold, replace a node with a new node. Nodes are immutable, so when I encounter something I have to change, I have to return the new version of the node all the way up; ultimately the root changes. Specifically, I’m trying to implement the RESTRICT algorithm for ROBDDs.

And I have! This works fine (C#, sorry?)

    Node Restrict(Node node, Func<Variable, bool?> npoint)
    {
        if (node == context.Term0 || node == context.Term1)
            return node;

        // value has been restricted, replace with true or false path
        if (npoint(node.Variable) is bool value)
            return Restrict(value ? node.Hi : node.Lo, npoint);

        var lo = Restrict(node.Lo, npoint);
        var hi = Restrict(node.Hi, npoint);
        return new Node(node.Variable, lo, hi);
    }

However, my diagram is rather huge. And I’m running out of stack space.

So, I’m trying to come up with a way to do this without recursion. I’ve tried a few things but haven’t come up with anything that works. As I start to expand it out, it gets pretty complicated and I start losing track of stuff.

Can somebody point me to some resource on this?
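(For anyone landing here with the same problem: the usual trick is an explicit stack plus a table of already-rebuilt nodes, so each node is rebuilt only after both of its children are. Here is a minimal sketch of that idea in Python rather than C#; `Node`, `restrict_iterative`, and the booleans standing in for Term0/Term1 are illustrative, not the actual types from the question.)

```python
from collections import namedtuple

# Illustrative immutable node; the booleans True/False stand in for the
# terminal nodes (Term0/Term1).
Node = namedtuple("Node", ["var", "lo", "hi"])

def restrict_iterative(root, npoint):
    """Restrict without recursion: an explicit stack plus a memo table.

    A node stays on the stack until both of its children have been
    rebuilt; only then is its own replacement constructed, so the tree
    is rebuilt bottom-up just like the recursive version."""
    if isinstance(root, bool):
        return root
    rebuilt = {}  # id(old node) -> replacement node

    def done(n):
        # Rebuilt version of n, or None if n has not been rebuilt yet.
        return n if isinstance(n, bool) else rebuilt.get(id(n))

    stack = [root]
    while stack:
        node = stack[-1]
        if isinstance(node, bool) or id(node) in rebuilt:
            stack.pop()
            continue
        lo, hi = done(node.lo), done(node.hi)
        if lo is None or hi is None:
            # Children not processed yet: queue them, revisit node later.
            if hi is None:
                stack.append(node.hi)
            if lo is None:
                stack.append(node.lo)
            continue
        value = npoint(node.var)
        if value is None:
            rebuilt[id(node)] = Node(node.var, lo, hi)
        else:
            # Variable restricted: replace the node by its restricted child.
            rebuilt[id(node)] = hi if value else lo
        stack.pop()
    return rebuilt[id(root)]
```

The same shape carries over to C# with a `Stack<Node>` and a `Dictionary<Node, Node>`; because results are memoised by node identity, it also behaves well when the diagram shares subgraphs, as ROBDDs do.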

Python program to find a word ladder transforming “four” to “five”

I saw this Puzzling problem and thought I would try to write a Python program to solve it. The task is to transform “four” into “five”, forming a new four-letter word at each step by replacing a single letter, in as few steps as possible.

But it turns out I don’t know how to optimize recursion, so I’m posting here for help. I’m mostly confused about why the code that copies the past list needs to be at the top of the function, but I would also like advice on how to speed this up in general. Right now the runtime grows by about 10x for each increment of max_depth on my computer.

There won’t be any matches until you change max_depth – I didn’t want anyone copy-pasting and lagging out. There should be a solution at depth 5, according to Puzzling. However, my words file doesn’t have the word “foud” or the word “fous”, which that answer uses. Bumping max_depth up to six would take my computer ~10 minutes, which I don’t want to try yet.

    def hamming(string1, string2):
        assert len(string1) == len(string2)
        return sum(char1 != char2 for char1, char2 in zip(string1, string2))

    max_depth = 3
    start_word = "five"
    end_word = "four"
    all_words = open("/usr/share/dict/words", "r").read().lower().splitlines()
    all_words = list(filter(lambda word: word.isalpha(), all_words))
    all_words = list(filter(lambda word: len(word) == len(start_word), all_words))

    sequences = []

    def search(current_word, past = []):
        # Needs to be first to be fast for some reason
        past = past[:]
        past.append(current_word)

        if len(past) > max_depth:
            sequences.append(past)
            return

        for word in all_words:
            if hamming(word, current_word) == 1 and word not in past:
                search(word, past)

    search(start_word)

    sequences = [sequence[:sequence.index(end_word) + 1] for sequence in sequences if end_word in sequence]

    if len(sequences) == 0:
        print("No matches")
    else:
        print(min(sequences, key=len))
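(Aside from micro-optimisations: since the goal is the fewest steps, a breadth-first search finds a shortest ladder directly, without enumerating every path up to a depth limit. A sketch of that idea; the tiny in-code word list is illustrative, and you would load /usr/share/dict/words as the question already does.)

```python
from collections import deque

def hamming(a, b):
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def word_ladder(start, end, words):
    """Shortest ladder from start to end, changing one letter per step."""
    words = {w for w in words if len(w) == len(start)} | {end}
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for word in words:
            if word not in visited and hamming(word, path[-1]) == 1:
                visited.add(word)  # mark on enqueue so no word repeats
                queue.append(path + [word])
    return None  # no ladder exists

print(word_ladder("hit", "cog", ["hot", "dot", "dog", "lot", "log", "cog"]))
```

BFS visits each word at most once, so the cost is bounded by the dictionary size rather than growing ~10x per extra step the way a depth-limited enumeration does.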

How do I write an asterisk at the beginning of a line in wiki syntax without transforming into a list item?

When using wiki syntax, if I put an asterisk (*) at the beginning of a line, it gets transformed into an unordered list. How can I have the asterisk remain as-is at the beginning of a line?

Example :

*Hello world, this sentence is not in an unordered list. 
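(Assuming MediaWiki – other wiki engines have their own escape mechanisms – wrapping the asterisk in `<nowiki>` tags stops the parser from treating it as list markup:)

```
<nowiki>*</nowiki>Hello world, this sentence is not in an unordered list.
```

Writing the asterisk as the HTML entity `&#42;` also works, since the entity is not recognised as list syntax.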

Transforming string with keys:values in column name:rows using pandas

I have a string that contains data like this:

{"key1":"1","key2":2,"key 3":"text3","key4":{"subkey4.1":"text4.1","subkey4.2":"date"},"key5":[{"subkey5.1":"text5.1","subkey5.2":"text5.2"}],"key6":"6"}  

I would like to transform it into a pandas DataFrame, using the keys as column names, the subkeys as secondary columns, and the key values as rows under the respective columns.

I’m guessing the pivot function needs to be used, but I’m not sure how.
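(For what it’s worth, pivot isn’t needed here: if the string is valid JSON, as the sample appears to be, `json.loads` plus `pandas.json_normalize` already does the flattening, turning nested dicts into dotted column names:)

```python
import json
import pandas as pd

raw = ('{"key1":"1","key2":2,"key 3":"text3",'
       '"key4":{"subkey4.1":"text4.1","subkey4.2":"date"},'
       '"key5":[{"subkey5.1":"text5.1","subkey5.2":"text5.2"}],"key6":"6"}')

data = json.loads(raw)        # string -> Python dict
df = pd.json_normalize(data)  # one row; nested dicts become "key4.subkey4.1" etc.
print(df.columns.tolist())
```

Note that `key5` holds a list of dicts, so `json_normalize` leaves it as a single column by default; `pd.json_normalize(data, record_path="key5", meta=[...])` would expand that list into one row per element instead.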

PHP – Transforming text content according to rules then persisting to database

I have a web application where my users can upload text documents and emails into what I call Streams. A stream is like a “stack” that holds all emails / documents.

My users can then add fields and field_rules to the stream. This means that every time a new document or email is added, the text content of the document/email will be parsed according to the rules, and the final parsing result is then stored in the database.

My current code works, to some degree, however, it feels a bit “hackish” as well as not very “Laravel like”.

My progress so far

Whenever a new document (or email) is added, it will be handled by a queue:

    public function handle(DocumentHandlingFinished $event)
    {
        $stream = Stream::find($event->document->stream_id);
        $ParsingRules = new ApplyParsingRules($stream, $event->document);

        $event->document->storeResults($ParsingRules->parse());

        return true;
    }

OK, so in the above I start off by getting the “Stream” that the document was uploaded to.

Then I instantiate the ApplyParsingRules class, which will perform the various rules on the content of the document.

Finally, I save the parsed results to the database.

Below you can see my ApplyParsingRule class:

    public function __construct(Stream $stream, Document $document)
    {
        $this->data = $document;
        $this->content = $document->content;
        $this->stream = $stream;
        $this->fields = $this->stream->documentfields()->with('rules')->get();
    }

    //Iterate through each rule and parse through the content.
    public function parse() : object
    {
        $content = $this->content;
        $results = collect([]);
        foreach ($this->fields as $field) {
            foreach ($field->rules as $fieldrule) {

                $content = doSomething($content); //Minified for simplicity.
                $results[] = [
                    'field_rule_id' => $fieldrule->id,
                    'content' => $content
                ];

            }
        }

        return $results;
    }

Now, as you can see in my handle method, I call the parse() function and then save the results:

    $event->document->storeResults($ParsingRules->parse());

In my Document model, I have the storeResults() function:


    //Persist the parsed content to the database.
    public function storeResults(object $results) : object
    {
        //If the document was already parsed before, delete old records.
        if ($this->results->count() > 0) {
            $this->results()->delete($this->results);
        }

        //Use ->last() because we only want to insert the last parsed text result.
        return $this->results()->create($results->last());
    }

So the above flow works as long as the $results collection contains information.

I was wondering whether the above code can be refactored / improved even further?

Another concern I have: if the $results collection from the ApplyParsingRules class is empty, then the create() method will fail.

Algorithm for transforming an array of lines and arcs in a closed contour

I’m reading from a DXF with dxf_lib. In my DXFs there are different closed contours; what I do with dxf_lib is extract information about every line and arc. I want to transform a big array of lines and arcs, belonging to different contours, into arrays of ordered lines and arcs – one array for every closed contour. Do you have any suggestions on how to proceed?
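(One common approach, sketched below under the assumption that each entity can be reduced to its two endpoints – an arc contributes its start and end point, with the arc data riding along however you store it: pick any unused segment, then repeatedly append whichever remaining segment shares an endpoint with the chain’s tail, flipping it if needed, until the chain closes; repeat until no segments remain. The function names and the plain endpoint-pair representation are illustrative, not dxf_lib’s API.)

```python
def chain_contours(segments, tol=1e-6):
    """Group an unordered list of segments into closed contours.

    Each segment is an endpoint pair ((x1, y1), (x2, y2)); an arc is
    represented by its start and end points here. Greedy chaining:
    start a contour with any segment, then repeatedly append the unused
    segment that shares an endpoint with the chain's tail (flipping its
    direction when needed) until the contour closes back on its start.
    """
    def same_point(p, q):
        return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

    remaining = list(segments)
    contours = []
    while remaining:
        contour = [remaining.pop(0)]
        while not same_point(contour[-1][1], contour[0][0]):
            tail = contour[-1][1]
            for i, (a, b) in enumerate(remaining):
                if same_point(a, tail):
                    contour.append(remaining.pop(i))
                    break
                if same_point(b, tail):
                    remaining.pop(i)
                    contour.append((b, a))  # flip to match chain direction
                    break
            else:
                break  # chain is open: no remaining segment continues it
        contours.append(contour)
    return contours
```

For real DXF data you would pick the tolerance to match the file’s precision and keep the original entity (line vs arc, bulge, etc.) attached to each endpoint pair so the ordered contours still carry the geometry.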

Transforming a long-running operation into a step by step operation?

I am working on a video game in Unity and at some point I’m facing a tough problem to solve:

The game freezes while loading level data.

Let me lay down what’s happening in that process that takes a few seconds:

  • load track mesh data consisting of
    • sections
    • faces
    • vertices
  • load track textures
    • load each texture
    • build a texture atlas for the mesh
  • build track mesh using all the data that’s been loaded

This loading process takes a few seconds, and I expect it to get even longer, since I will also have to load scenery, which is bigger by an order of magnitude (not yet done).


Some of the steps in this loading process cannot be run in a background thread as they instantiate objects from Unity, which can only be done from its main thread, e.g. create a texture, mesh etc.

Attempts and ideas at solving the problem:

  • start a coroutine – this simply doesn’t work, since they are expected to run within a single frame, and the loading process takes hundreds of frames (a frame being 1/60th of a second)

  • split the loading into chunks, process each at every frame, likely to be the solution but tricky

    • when required, generate engine objects using a dispatcher, i.e. an async call executing in the appropriate thread, e.g. create the final texture out of raw pixels

I will focus on the second approach as it seems the right one, but I am open to another approach.


What approach or pattern could I use to split a long-running operation into smaller ones?

(Hope that’s clear enough to you, let me know otherwise.)
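(Not Unity C#, but the shape of approach 2 can be sketched in Python: rewrite the loading as a generator that yields after each small unit of work, then drive it with a fixed time budget per frame. The step counts and the 8 ms budget below are made-up placeholders.)

```python
import time

def load_level():
    """Loading rewritten as a generator: yield after each small unit of
    work. The step counts are made-up stand-ins for the real steps."""
    for _section in range(100):
        # parse one track mesh section (faces, vertices) ...
        yield
    for _texture in range(20):
        # load one texture / add it to the atlas ...
        yield
    # build the final track mesh from everything loaded ...
    yield

def tick(job, budget_seconds=0.008):
    """Run one frame's worth of the job: resume the generator until this
    frame's time budget is spent. Returns False once the job is done."""
    deadline = time.perf_counter() + budget_seconds
    try:
        while time.perf_counter() < deadline:
            next(job)
    except StopIteration:
        return False
    return True

# Driver: call tick() once per frame (e.g. from the game loop) until done.
job = load_level()
while tick(job):
    pass  # render a loading screen, update a progress bar, etc.
```

In Unity the driver role would be played once per frame (from `Update`, or a coroutine that yields between calls to the budget loop), and any step that must touch Unity objects simply runs as its own unit of work on the main thread, which is exactly what the dispatcher idea in the list above provides.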

Transforming an ODE into Legendre’s Equation

I am trying to transform the ODE $$\frac{1}{\sin(\theta)}\frac{d}{d\theta}\left(\sin(\theta)\frac{dS}{d\theta}\right)+\lambda S=0,$$ into Legendre’s equation $$(1-\mu^2)\frac{d^2S}{d\mu^2}-2\mu\frac{dS}{d\mu}+n(n+1)S=0$$ when $\lambda=n(n+1)$ for $n=0,1,2,\ldots$ and $\mu=\cos(\theta)$.

I calculated that \begin{align} \frac{dS}{d\theta}&=\frac{dS}{d\mu}\frac{d\mu}{d\theta}=-\sin(\theta)\frac{dS}{d\mu}, \\ \frac{d^2S}{d\theta^2}&=\frac{d^2S}{d\mu^2}\frac{d^2\mu}{d\theta^2}=-\cos(\theta)\frac{d^2S}{d\mu^2}. \end{align} Then, \begin{align} \frac{1}{\sin(\theta)}\left(\sin(\theta)\frac{d^2S}{d\theta^2}+\cos(\theta)\frac{dS}{d\theta}\right)+\lambda S&=0, \\ -\mu\frac{d^2S}{d\mu^2}-\mu\frac{dS}{d\mu}+n(n+1)S&=0. \end{align} But at this point, I don’t see how Legendre’s equation is possible. A hint would be kindly appreciated.
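(A hint, for what it’s worth: the step $\frac{d^2S}{d\theta^2}=\frac{d^2S}{d\mu^2}\frac{d^2\mu}{d\theta^2}$ is where the chain rule needs more care, since differentiating $-\sin(\theta)\frac{dS}{d\mu}$ requires the product rule. One route that avoids the second derivative in $\theta$ entirely is to substitute inside the operator first:)

```latex
% With \mu = \cos\theta we have d\mu/d\theta = -\sin\theta, so the chain
% rule gives d/d\theta = -\sin\theta \, d/d\mu. First rewrite the inner factor:
\sin\theta\,\frac{dS}{d\theta}
  = -\sin^2\theta\,\frac{dS}{d\mu}
  = -(1-\mu^2)\,\frac{dS}{d\mu}.
% Now apply the outer derivative the same way:
\frac{1}{\sin\theta}\,\frac{d}{d\theta}\!\left(-(1-\mu^2)\frac{dS}{d\mu}\right)
  = \frac{-\sin\theta}{\sin\theta}\,\frac{d}{d\mu}\!\left(-(1-\mu^2)\frac{dS}{d\mu}\right)
  = \frac{d}{d\mu}\!\left((1-\mu^2)\frac{dS}{d\mu}\right).
% Expanding and adding \lambda S = n(n+1)S gives Legendre's equation:
(1-\mu^2)\frac{d^2S}{d\mu^2} - 2\mu\frac{dS}{d\mu} + n(n+1)S = 0.
```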