## Is data serialization efficient? For example, how do I understand the runtime of pickle in Python?

I have two pandas dataframes:

```python
data1 = pd.read_csv("1.csv")
data2 = pd.read_csv("2.csv")
```

And then I concatenate them:

```python
data = pd.concat([data1, data2])
```

I run these same lines of code multiple times. How can I tell whether it would be more efficient to serialize the result instead of recomputing it each time?
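One way to reason about this empirically is to time the two alternatives: rebuilding the object versus loading a pickled copy. Below is a minimal sketch using only the standard library, with a plain list of dicts as a stand-in for the dataframe (for pandas specifically, `df.to_pickle(...)` and `pd.read_pickle(...)` serve the same purpose):

```python
import pickle
import timeit

# A sample dataset standing in for the concatenated dataframe.
data = [{"id": i, "value": i * 0.5} for i in range(100_000)]

# Serialize once; in practice this would be written to a file.
blob = pickle.dumps(data)

# Compare the cost of rebuilding from scratch vs. deserializing.
t_rebuild = timeit.timeit(
    lambda: [{"id": i, "value": i * 0.5} for i in range(100_000)], number=3)
t_load = timeit.timeit(lambda: pickle.loads(blob), number=3)

print(f"rebuild: {t_rebuild:.3f}s  unpickle: {t_load:.3f}s")

# The round-trip is lossless, so correctness is not a concern here.
assert pickle.loads(blob) == data
```

If unpickling is consistently faster than re-running `read_csv` plus `concat`, caching the pickle pays off. Pickle's runtime is roughly linear in the size of the serialized data, so the trade-off usually comes down to how expensive the recomputation is relative to disk I/O.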

## Defining the standard model of PA so that a space alien could understand

First, some context. In one of the comments to an answer to the recent question "Why not adopt the constructibility axiom V=L?" I was directed to some papers of Nik Weaver at this link, on conceptualism. Many of the ideas in those papers appeal to me, especially the idea (put in my own words, but hopefully accurate) that the power set of the natural numbers is a work in progress and not a completed infinity like $$\mathbb{N}$$.

In some of those papers the idea of a supertask is used to argue for the existence of the completed natural numbers. One could think of performing a supertask as building a machine that does an infinite computation in a finite amount of time and space, say by doing the $$n$$th step, and then building a machine of half the size that will work twice as fast to do the $$(n+1)$$th step and also recurse. (We will suppose that the concept of a supertask machine is not unreasonable, although I think this point can definitely be argued.)

The way I’m picturing such a machine is that it would be a $$\Sigma_1$$ oracle, able to answer certain questions about the natural numbers. I suppose we would also have machines that do “super-supertasks”, and so forth, yielding higher order oracles.

To help motivate my question, suppose that beings from outer space came to earth and taught us how to build such machines. I suppose that some of us would start checking the validity of our work as it appears in the literature. Others would turn to the big questions: P vs. NP, RH, Goldbach, twin primes. With sufficient iterations of “super” we could even use the machines to start writing our proofs for us. Some would stop bothering.

Others would want to do quality control to check that the machines were working as intended. Suppose that the machine came back with: “Con(PA) is false.” We would go to our space alien friends and say, “Something is wrong. The machines say that PA is not consistent.” The aliens respond, “They are only saying that Con(PA) is false.”

We start experimenting and discover that the machines also tell us that the shortest proof that “Con(PA) is false” is larger than BB(1000). It is larger than BB(BB(BB(1000))), and so forth. Thus, there would be no hope that we could ever verify by hand (or even realize in our own universe with atoms) a proof that $$0=1$$.

One possibility would be that the machines were not working as intended. Another possibility, which we could never entirely rule out (but could perhaps verify to our satisfaction if we had access to many more atoms), is that these machines were giving evidence that PA is inconsistent. But a third, important possibility would be that they were doing supertasks on a nonstandard model of PA. We would then have the option of defining natural numbers as those things "counted" by these supertask machines. And indeed, suppose our alien friends did just that: their natural numbers were those expressed by the supertask machines. From our point of view, with the standard model in mind, we might say that there were these "extra" natural numbers that the machines had to pass through in order to finish their computations, something vaguely similar to those extra compact dimensions that many versions of string theory posit. But from the aliens' perspective, these extra numbers were not extra; they were just as real as the (very) small numbers we encounter in everyday life.

So, here (finally!) come my questions.

Question 1: How would we communicate to these aliens what we mean, precisely, by “the standard model”?

The one way I know to define the standard model is via second order quantification over subsets. But we know that the axiom of the power set leads to all sorts of different models for set theory. Does this fact affect the claim that the standard model is “unique”? More to the point:

Question 2: To assert the existence of a “standard model” we have to go well beyond assuming PA (and Con(PA)). Is that extra part really expressible?

## Got this error while trying to launch Docker Quick Start Terminal on Mac. I can’t seem to understand the error

```
Traceback (most recent call last):
  File "/anaconda3/lib/python3.7/site-packages/conda/cli/main.py", line 138, in main
    return activator_main()
  File "/anaconda3/lib/python3.7/site-packages/conda/activate.py", line 955, in main
    print(activator.execute(), end='')
  File "/anaconda3/lib/python3.7/site-packages/conda/activate.py", line 180, in execute
    return getattr(self, self.command)()
  File "/anaconda3/lib/python3.7/site-packages/conda/activate.py", line 154, in activate
    builder_result = self.build_activate(self.env_name_or_prefix)
  File "/anaconda3/lib/python3.7/site-packages/conda/activate.py", line 285, in build_activate
    return self._build_activate_stack(env_name_or_prefix, False)
  File "/anaconda3/lib/python3.7/site-packages/conda/activate.py", line 307, in _build_activate_stack
    return self.build_reactivate()
  File "/anaconda3/lib/python3.7/site-packages/conda/activate.py", line 452, in build_reactivate
    new_path = self.pathsep_join(self._replace_prefix_in_path(conda_prefix, conda_prefix))
  File "/anaconda3/lib/python3.7/site-packages/conda/activate.py", line 569, in _replace_prefix_in_path
    if path_list[last_idx + 1] == library_bin_dir:
IndexError: list index out of range
```

```
$ /anaconda3/bin/conda shell.posix activate base
```

```
environment variables:
  CIO_TEST=
  CONDA_BACKUP_HOST=x86_64-apple-darwin13.4.0
  CONDA_BUILD_SYSROOT=/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk
  CONDA_DEFAULT_ENV=base
  CONDA_EXE=/anaconda3/bin/conda
  CONDA_PREFIX=/anaconda3
  CONDA_PROMPT_MODIFIER=(base)
  CONDA_PYTHON_EXE=/anaconda3/bin/python
  CONDA_ROOT=/anaconda3
  CONDA_SHLVL=1
  DOCKER_CERT_PATH=/Users/Ronnit/.docker/machine/machines/default
  NO_PROXY=
  PATH=/anaconda3/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/anaconda3/bin
  REQUESTS_CA_BUNDLE=
  SSL_CERT_FILE=
```

```
     active environment : base
    active env location : /anaconda3
            shell level : 1
       user config file : /Users/Ronnit/.condarc
 populated config files : /Users/Ronnit/.condarc
          conda version : 4.6.14
    conda-build version : 3.17.8
         python version : 3.7.3.final.0
       base environment : /anaconda3  (writable)
           channel URLs : https://repo.anaconda.com/pkgs/main/osx-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/free/osx-64
                          https://repo.anaconda.com/pkgs/free/noarch
                          https://repo.anaconda.com/pkgs/r/osx-64
                          https://repo.anaconda.com/pkgs/r/noarch
          package cache : /anaconda3/pkgs
                          /Users/Ronnit/.conda/pkgs
       envs directories : /anaconda3/envs
                          /Users/Ronnit/.conda/envs
               platform : osx-64
             user-agent : conda/4.6.14 requests/2.21.0 CPython/3.7.3 Darwin/18.5.0 OSX/10.14.4
                UID:GID : 501:20
             netrc file : None
           offline mode : False
```

An unexpected error has occurred. Conda has prepared the above report.

If submitted, this report will be used by core maintainers to improve future releases of conda. Would you like conda to send this report to the core maintainers?

Timeout reached. No report sent.

## Predicate logic – double negation, help me understand

Sorry if this is a silly question, but I need to understand how ¬(¬∀x ¬A(x)) equals ∀x ¬A(x).

In my mind, the negation before the parentheses would be applied to both ¬∀x and ¬A(x). So it would look like this:

¬(¬∀x ¬A(x)) = ¬¬∀x ¬¬A(x)

The double negations would then cancel, making ¬(¬∀x ¬A(x)) equal to ∀x A(x).

So, why would ¬(¬∀x ¬A(x)) equal ∀x ¬A(x)? Why wouldn't ¬(¬∀x ¬A(x)) equal ∀x A(x)?
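One way to check the two candidate equivalences concretely is a brute-force evaluation over a small finite domain, treating A as an arbitrary predicate. This is only a sketch (the domain {0, 1, 2} is an arbitrary choice), but it shows which equivalence actually holds:

```python
from itertools import product

domain = [0, 1, 2]

def forall(pred):
    """Evaluate ∀x pred(x) over the finite domain."""
    return all(pred(x) for x in domain)

results = []
# Try every possible predicate A on the domain.
for truth_values in product([False, True], repeat=len(domain)):
    table = dict(zip(domain, truth_values))
    A = table.get
    lhs = not (not forall(lambda x: not A(x)))   # ¬(¬∀x ¬A(x))
    claim1 = forall(lambda x: not A(x))          # ∀x ¬A(x)
    claim2 = forall(lambda x: A(x))              # ∀x A(x)
    results.append((lhs == claim1, lhs == claim2))

print(all(r[0] for r in results))  # True:  lhs always equals ∀x ¬A(x)
print(all(r[1] for r in results))  # False: lhs is not always ∀x A(x)
```

The key point the code reflects is that the outer ¬ applies to the parenthesized formula as a single unit, so the two negations cancel around the whole of ∀x ¬A(x); negation does not distribute onto each symbol inside the parentheses.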

## Trying to understand extended keys

I’m slowly getting educated about how the blockchain and Bitcoin addresses work. I now understand why it is more secure to use a different Bitcoin address for every transaction you make but I’m now trying to understand how I could achieve that with something as basic as paper wallets for educational purposes.

So, I noticed that most big exchanges nowadays generate a new public address every time you want to deposit cryptocurrency. From what I've read, this is made possible by the use of an extended key, which contains a public and a private part, just like regular keys. Where I'm still a little bit confused is how exactly the exchanges can access all the funds from all the public addresses you generated at the same time (since they show you a total balance, and you can spend that balance with what seems to be only one transaction). Does the private extended key give you access to spend all the funds in all the public addresses generated from that one private extended key?

Also, when we say addresses should never be used more than once, I assume an address still must be used twice at some point, since you add funds and then withdraw them, meaning two transactions total? Or have I got it completely wrong?
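For intuition, the idea behind hierarchical deterministic wallets can be sketched in a few lines. This is a toy model, not real BIP32 (which additionally involves chain codes and elliptic-curve arithmetic): each child secret is derived deterministically from a parent secret plus an index, so whoever holds the parent can re-derive every child key, and with it every derived address.

```python
import hashlib
import hmac

def derive_child(parent_secret: bytes, index: int) -> bytes:
    """Toy derivation: HMAC-SHA512 over the parent secret and an index.

    Real BIP32 derivation has this shape but also uses chain codes
    and EC point arithmetic; this only illustrates the determinism.
    """
    msg = index.to_bytes(4, "big")
    return hmac.new(parent_secret, msg, hashlib.sha512).digest()[:32]

# Hypothetical master secret (never derive real keys from a string!).
master = hashlib.sha256(b"example seed - do not use").digest()

# The wallet hands out a fresh "address" (here: a hash of the child
# secret) per deposit, yet can re-derive every child from the master.
children = [derive_child(master, i) for i in range(3)]
addresses = [hashlib.sha256(c).hexdigest()[:16] for c in children]

assert children == [derive_child(master, i) for i in range(3)]  # deterministic
assert len(set(addresses)) == 3  # each index yields a distinct address
```

This is why an exchange can show one total balance across many deposit addresses: a single extended private key deterministically regenerates all the child private keys, so all the derived addresses are spendable by the same owner.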

## Understand technology behind website [on hold]

For some time now I've been trying to work out what technology is used behind a website I found (I especially like the calendar; that is how my research started): https://lockme.pl/en/warszawa/escape-project/cursed-island/

Based on builtwith.com, it uses PHP as a backend and plain JS. I don't see any React, any Angular, anything. And yet I have a strong feeling that this wasn't all written by hand from scratch and that some library must have been used. I can see Ajax, but that's it. I also tried browser plugins for detecting Angular and React, but no luck.

Is there any way to find it out?

## What do I not understand about Alpha-Beta-Pruning in Chess?

I'm trying to write a chess engine; minimax works. Now I wanted to implement alpha-beta pruning, but despite reading about it and watching videos, I must have a severe misunderstanding somewhere.

• For Min-Maxing, I build the complete search tree, and evaluate only the nodes at the deepest depth. Then go upwards, and “carry” the scores, without evaluating other nodes; I only compare the carried scores.

• For ABP, I also build the complete search tree (?!), and evaluate only the nodes at the deepest depth. Then go upwards, and “carry” the scores; but this time I can often prune nodes, meaning that I don’t need to min-max that often, and also don’t have to evaluate all nodes at the deepest depth.

Thing is: I'm not limited by the number of times I have to evaluate nodes; I'm limited by memory (in size and access speed), because I still need to build a tree of millions of nodes and prune it afterwards.

I assumed ABP should somehow occur earlier, so that I don’t even have to create branches, but that doesn’t work, since you have to always evaluate the deepest nodes for comparison (So you have to build the tree all the way).

I'm feeling like an idiot here; what am I missing?
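For comparison, alpha-beta is usually written as a depth-first recursion that generates children on the fly and abandons a branch as soon as a cutoff fires, so the full tree never exists in memory at once. Here is a minimal sketch on an explicit toy tree (nested lists with integer leaves standing in for move generation and evaluation):

```python
def alphabeta(node, alpha, beta, maximizing):
    # Leaves are plain numbers; internal nodes are lists of children.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: remaining siblings are never visited
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# Classic textbook tree whose minimax value is 6.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 6
```

The point is that only the current path from root to leaf (plus alpha/beta bounds) is held in memory, which is O(depth), not O(size of tree); in a real engine, `for child in node` would call a move generator instead of iterating a prebuilt list.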

## How to understand CUDA out of memory message?

Below is the output of `nvidia-smi`.

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.104      Driver Version: 410.104      CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  On   | 00000000:01:00.0 Off |                  N/A |
| 57%   68C    P8    19W / 280W |   7278MiB / 11176MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  On   | 00000000:02:00.0 Off |                  N/A |
| 67%   73C    P2   259W / 280W |  10873MiB / 11178MiB |     96%      Default |
+-------------------------------+----------------------+----------------------+
```

Now I try to run some code and it gives me this message:

```
RuntimeError: CUDA out of memory. Tried to allocate 8.62 MiB (GPU 0; 10.91 GiB total capacity; 2.80 GiB already allocated; 16.88 MiB free; 0 bytes cached)
```

I understand that I do not have enough memory but where do I see how much memory is required by my code?

Then I try to run other code that requires about x10000 more memory, and it gives me this error:

```
RuntimeError: CUDA out of memory. Tried to allocate 858.38 MiB (GPU 0; 10.91 GiB total capacity; 2.51 GiB already allocated; 756.88 MiB free; 0 bytes cached)
```

How can I make it tell me something like `CUDA out of memory. Need 12GB Memory total.`
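The error can't easily report a total requirement, because allocations happen incrementally: the message only describes the single allocation that failed, not everything the program would eventually ask for. One rough way to estimate the footprint up front is to sum the sizes of the tensors you plan to create (element count × bytes per element); at runtime, PyTorch also exposes `torch.cuda.memory_allocated()` and `torch.cuda.memory_reserved()`. A small sketch of the arithmetic, with hypothetical tensor shapes:

```python
# Bytes per element for a few common dtypes.
DTYPE_BYTES = {"float32": 4, "float16": 2, "int64": 8}

def tensor_bytes(shape, dtype="float32"):
    """Estimate the memory footprint of a tensor: product of dims × item size."""
    n = 1
    for dim in shape:
        n *= dim
    return n * DTYPE_BYTES[dtype]

# Hypothetical workload: a batch of images plus one large weight matrix.
batch = tensor_bytes((64, 3, 224, 224))   # input activations
weights = tensor_bytes((4096, 4096))      # one fully connected layer
total = batch + weights

print(f"{total / 2**20:.1f} MiB")  # → 100.8 MiB
```

Note that this is a lower bound: frameworks also cache memory, keep intermediate activations for backprop, and allocate workspace for kernels, so the real peak usage is typically noticeably higher than such a static estimate.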

## Character is too dumb to understand god or religion

I am playing a character with 5 Intelligence in a homebrew campaign. In this campaign, two specific gods are fighting for supremacy and influence people to gain an advantage. The other gods, planes, etc. still exist, but don't take direct action.

My questions are: Is my character able to understand the concept of gods/religions? (She should be too dumb for it.)

Where in the D&D planes would she end up after she dies? (She has no religion or god and therefore could be called an atheist.)

I couldn’t find an answer to this anywhere else. Hope someone can help me.

Best regards

## Debugging code to understand how it works?

I know debugging is usually used to find errors/bugs in code. But is it good practice to debug code in order to understand how it works: how the values in variables change over time, and which values will be used in function calls? This is with respect to Python, where variables have no declared types.

Is this a commonly used practice, or do you READ the code to understand how it works?
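As an illustration of what "watching variables change" can look like, Python's `sys.settrace` hook (the machinery that debuggers like `pdb` are built on) can record a function's local variables line by line. A minimal sketch, tracing a hypothetical `accumulate` function:

```python
import sys

history = []  # (line number, snapshot of locals) for each traced line

def tracer(frame, event, arg):
    # Record locals each time a new line of `accumulate` is about to run.
    if event == "line" and frame.f_code.co_name == "accumulate":
        history.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def accumulate(values):
    total = 0
    for v in values:
        total += v
    return total

sys.settrace(tracer)
result = accumulate([1, 2, 3])
sys.settrace(None)

print(result)
# Each snapshot shows the value of `total` as the loop progressed.
print([snap.get("total") for _, snap in history])
```

In everyday practice, most people use a mix: reading the code first for structure, then stepping through the interesting parts with a debugger (or `print`/logging) when the dynamic behavior, such as the concrete types and values flowing through a function, isn't obvious from the source alone.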