Character development question for a god of Knowledge and Machines: How to make a god an interesting character?


System

We are playing a custom system in which we are all gods who create a universe, with mortals living on a primary planet. The mortals are in the early Iron Age.

There is a rotating GM, but one main GM has the final say. During the session, players are encouraged to exercise creative power to add to the world, so decisions about what is going on are partly in the players’ hands, but again the final say goes to the current GM.

Gods

There are many gods, some NPCs, some PCs.

Each god has 2+ attributes, e.g: Fire and Honor, or Transformation and Storytelling. There are five PCs and one rotating GM. As gods level up, they may take on new attributes.

Each god has a realm, and the realms can be thought of as part of the character. The gods can also create things and control them; for instance, my god has an elevator which “takes you where you want to go, unless it thinks there’s somewhere else you should be”. So, if the PCs use my elevator, I can choose to send them somewhere besides where they want to go.

Threats

The threats that come up in session tend to concern other NPC gods, and maybe, to a much milder degree, PC gods. An example threat might be a trickster god stealing the aurora and requiring a mortal to purchase it back from him, or the god of the mortal world being in pain from the mortals digging into it, so that our party has to find a way to solve the issue.

My Character

My god is the god of Knowledge and Machines. He’s similar to Will Rogers in manner, possesses a godly amount of knowledge, and uses machines to benefit mortals. He has Buddhist-master levels of patience. Nothing much threatens him, really.

My character’s backstory

My character came about when the king of the gods wanted to know the answer to a riddle, and created my character.

My character helped create the planet the mortals live on. He has a realm that appears to be a serene countryside, but is really made up of machines.

Eventually, the king of the gods basically disintegrated, and my god was exposed to the timeline of a previous incarnation of the universe, before my god existed. This caused his mind to fragment into two realities and started “corrupting” him. This is where I have a few ideas, but I’m not sure how to execute them.

Question(s)

So, I guess I don’t know where to go with my character. There’s no real change to be had as a normal, healthy god, so I introduced the corrupted aspect. I’m trying to figure out the following:

  • How should I interact with the other god PCs in interesting ways without antagonizing them as a corrupted god?
  • How can I redeem my god and bring him back to a non-corrupt state, but still leave room for growth? His original “healthy” form was too perfect and didn’t have room for growth. His corrupted form is more interesting, but I feel like it can be too antagonistic.
  • How can I get my god to take action? In the past he tended to be fairly passive, because he values allowing others to find their own meaning and not interfering with their quest to find it, even if that meant allowing them to destroy or steal something he owns. Now that he’s corrupted, I’m having him get upset about things being “not perfect” and try to “fix” them, or even just outright destroy them.

Simple question about epsilon and estimation Turing machines

I am getting really confused by this. For an optimization problem, I reached a point where I had to calculate the limit as $n \rightarrow \infty$, and it came down to a fairly simple one: $\lim_{n \rightarrow \infty} \left( 3-\frac{7}{n} \right)$.

Now I used $3 - \epsilon$, and I am trying to show that there can’t be any $\epsilon>0$ such that the estimation of the algorithm is $3-\epsilon$, because there exists a “bigger estimation”. This is the part I am not sure about: what is the correct direction of the inequality, $3-\frac{7}{n} > 3 - \epsilon$ or the opposite? I am trying to show that the estimation ratio is close to 3.

I think that what I wrote is the correct way, but I’m not sure. I would appreciate knowing what is correct in this case. Thanks.
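For reference, here is a worked version of the argument under the usual $\epsilon$-definition of the limit; the direction as written above is the consistent one:

```latex
\[
\lim_{n \rightarrow \infty}\left(3-\frac{7}{n}\right) = 3,
\qquad\text{and for any } \epsilon > 0:\quad
n > \frac{7}{\epsilon}
\;\Longrightarrow\;
\frac{7}{n} < \epsilon
\;\Longrightarrow\;
3-\frac{7}{n} > 3-\epsilon .
\]
```

So for any fixed $\epsilon>0$, every sufficiently large $n$ gives a value strictly greater than $3-\epsilon$, which is exactly the sense in which the ratio approaches 3 from below and no bound of the form $3-\epsilon$ can hold.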

How did Vice detect voting machines connected to the internet? [closed]

https://www.vice.com/en_us/article/mb4ezy/top-voting-machine-vendor-admits-it-installed-remote-access-software-on-systems-sold-to-states

What method would you use to determine whether a machine is a voting machine? Do they give off a unique signature? Did someone give them a list of all IP addresses associated with voting machines, which they pinged to see if they were online?

The article is very scarce on details, yet it references a group of security experts.

Why do memory dump sizes on some machines not correlate with the amount of RAM?

I’m using winpmem to do some memory dumps. I did a test run on my workstation and the file was about 32 GB, exactly what I expected since I have 32 GB of RAM. However, on other machines the output file (AFF4 format) is much larger. It does not seem to correlate with RAM + pagefile either. What determines this file size? Some of these are VMs, if it matters.

Why aren’t distributed computing and/or GPU considered non-deterministic Turing machines if they can run multiple jobs at once?

So we know a nondeterministic Turing machine (NTM) is just a theoretical model of computation. NTMs are used in thought experiments to examine the abilities and limitations of computers, commonly to discuss P vs. NP and how NP-complete problems are not known to be solvable in polynomial time on a deterministic machine, though they would be on the hypothetical NTM. We also know an NTM uses a set of rules to prescribe more than one action for any given situation; in other words, it attempts many different options simultaneously.

Isn’t this what distributed computing does across commodity hardware: run many different possible calculations in parallel? A GPU does this within a single machine. Why isn’t this considered an NTM?
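One way to make the gap concrete (a back-of-the-envelope sketch, not a proof, with a made-up core count): an NTM’s tree of computation paths grows exponentially with the number of steps, while any fixed pool of processors only divides the work by a constant factor.

```python
# Sketch: why a fixed amount of parallelism is not nondeterminism.
# An NTM that makes a binary choice at every step explores 2**t
# computation paths after t steps; P processors merely divide the
# total work by P.

def ntm_paths(steps, branching=2):
    """Number of paths an NTM explores after `steps` branching steps."""
    return branching ** steps

def parallel_steps(total_work, processors):
    """Best-case wall-clock steps for a deterministic parallel machine."""
    return total_work // processors

steps = 100
work = ntm_paths(steps)                       # 2**100 paths to examine
cores = 10_000                                # hypothetical, very generous GPU
print(parallel_steps(work, cores) > 2 ** 80)  # True: still astronomically slow
```

Even with an absurdly generous processor count, the wall-clock time stays exponential; nondeterminism corresponds to an unbounded number of branches explored “for free”, which no fixed amount of parallel hardware provides.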

Are there any educational virtual machines?

There are plenty of machine/assembly languages, such as LC-3, DLX, etc., designed for educational purposes. I am looking for an educational VM. By VM I mean a stack virtual machine whose instructions are higher-level than assembly language, something similar to the JVM but much simpler, so that implementing a compiler for this VM is a doable task for a single person in restricted time, yet powerful enough to be a target for a high-level OOP language. I failed to google one; are there any?
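For a sense of scale, here is a minimal sketch of the kind of stack VM in question; the instruction names and tuple encoding are made up for illustration, and a real educational VM would add calls, locals, a heap, and an object model on top:

```python
# Toy stack VM: operands live on a stack, instructions sit a notch
# above assembly. The opcodes here are hypothetical.

def run(program):
    stack, pc = [], 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "push":                  # push a constant
            stack.append(args[0])
        elif op == "add":                 # pop two, push sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":                 # pop two, push product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "jz":                  # pop; jump to args[0] if zero
            if stack.pop() == 0:
                pc = args[0]
        elif op == "halt":
            break
    return stack

# (2 + 3) * 4
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",), ("halt",)]))  # [20]
```

An interpreter at this scale, plus a simple bytecode format, is roughly what keeps a one-person compiler project feasible.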

Undecidability of two Turing machines acting the same way on an input

So I need to find a reduction to the (undecidable) problem of deciding if two Turing machines $ M_1$ and $ M_2$ behave the same way on an input $ x$ . “Behaving the same way” is defined like this:

$M_1$ and $M_2$ behave the same way on an input $x$ when they both don’t halt, when they both accept $x$, or when they both halt and reject $x$.

I found a reduction from the halting problem which uses the fact that if the Turing machines behave in the same way, then they must have the same language. But this all breaks down in the case where $M_1$ rejects $x$ and $M_2$ doesn’t halt: obviously they could have the same language, but they don’t act in the same way.

I do think the best way to approach this is by reducing from the halting problem, but I just can’t find a valid reduction. Any help would be appreciated.
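One construction worth considering (sketched in Python with “machines” played by ordinary functions rather than real TM encodings, so treat it as pseudocode): map a halting instance $(M, x)$ to a pair where one machine always diverges and the other halts exactly when $M$ halts on $x$.

```python
# Hypothetical reduction sketch: given (M, x), build (M1, M2) such
# that M1 and M2 behave the same way on every input exactly when
# M does NOT halt on x.

def make_pair(M, x):
    def M1(_input):
        while True:          # M1 diverges on every input
            pass

    def M2(_input):
        M(x)                 # simulate M on x ...
        return True          # ... and accept if the simulation halts

    return M1, M2

# If M halts on x: M2 halts and accepts while M1 diverges, so they
# do NOT behave the same way. If M never halts on x: both diverge,
# so they DO behave the same way. A decider for "behave the same
# way" would therefore decide the halting problem.
M1, M2 = make_pair(lambda x: None, "some input")  # toy M that halts at once
print(M2("w"))  # True (M2 halts and accepts; M1 would loop forever)
```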

Turing machines equivalence from reduction

Given the halting problem, I’m trying to reduce it in order to show that $\left\{ \left( \langle M_1\rangle,\langle M_2\rangle \right) \mid L(M_1)=L(M_2) \right\}$, where $M_1, M_2$ are Turing machines, is undecidable. I’m having some trouble making sense of the answer given here. Why do we assume that $M$ will finish reading $x$ after exactly $|x|$ steps? And if $M$ doesn’t halt, why does $M’$ automatically accept the empty language, i.e. compute the constant zero function? Can it not accept other strings, not just $x$?

I would appreciate it if someone could break down the general idea of that answer or maybe give me a few hints on how I can approach this problem differently.
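For intuition about the referenced answer, here is a toy sketch of the usual bounded-simulation construction. The model is deliberately fake: a “machine” is represented by the number of steps it needs to halt (or None if it never halts), and `sim` stands in for a step-bounded simulator, which is computable:

```python
# Sketch: M' accepts its input w iff M halts on x within |w| steps.
# Note that M' looks only at the *length* of w, never its content.

def make_M_prime(sim, M, x):
    def M_prime(w):
        return sim(M, x, len(w))
    return M_prime

# Toy model: a "machine" is the number of steps it needs to halt
# (None = never halts), and sim is the bounded simulator.
def sim(M, x, t):
    return M is not None and M <= t

halts_in_3 = 3
M_prime = make_M_prime(sim, halts_in_3, "x")
print(M_prime("aaaa"))   # True:  |w| = 4 >= 3
print(M_prime("aa"))     # False: |w| = 2 < 3
```

The point relevant to the questions above: since $M'$ uses $|w|$ only as a step budget, $L(M')$ is empty when $M$ never halts on $x$, and nonempty (all sufficiently long strings, not just $x$) when it does; comparing $\langle M'\rangle$ against a machine with empty language then decides halting.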

Why don’t prefix-free Turing machines suffer from complexity dips? [closed]

It’s claimed in several texts on algorithmic complexity that prefix-free Turing machines are better for understanding randomness, at least in infinite sequences. In Nies’ Computability and Randomness, the reason is given by theorems 2.2.1 and 2.2.2 on p. 83. I’ll focus on the former, which states that for some plain (not prefix-free) machine there is a constant $ c$ such that for each natural number $ d$ and string $ w$ of length $ \geq 2^{d+1}+d$ , there is a prefix $ x$ of $ w$ such that the plain complexity $ C(x) \leq |x|-d+c$ . (Here $ |x|$ is the length of $ x$ .)

That is, if complexity is defined in terms of Turing machines that accept arbitrary strings as inputs, any sufficiently long string, no matter how complex, can have initial substrings with more or less arbitrary “dips” in complexity. The proof of the theorem uses a machine that exploits the length of an input string to construct its output. This is apparently an important point (see below).
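To illustrate that length trick with a toy decoder (not the actual machine from the proof): a plain machine is free to read information out of the length of its input, so an $n$-bit program can describe an output of roughly $n+\log n$ bits:

```python
# Toy "plain machine": the output encodes the length of the input
# in addition to the input's bits, so the length itself acts as
# about log2(n) bits of free information.

def plain_decoder(program_bits):
    n = len(program_bits)
    return format(n, "b") + program_bits   # binary(|p|) followed by p

p = "1011001"            # a 7-bit "program"
x = plain_decoder(p)     # "111" + "1011001"
print(len(x) - len(p))   # 3: those bits were smuggled in via the length
```

Since $x$ is recovered from a program about $\log|x|$ bits shorter than $x$ itself, strings of this form exhibit plain-complexity dips on the order of $\log n$; the larger dips of size $d$ in the theorem reportedly come from a more elaborate use of the same idea.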

My question: Why can’t a prefix-free machine do this, too? Why doesn’t prefix-free complexity also allow dips in complexity of initial substrings? I have not yet found an explanation of this point that I understand in the algorithmic complexity textbooks by Nies, Downey and Hirschfeldt (see below), Li and Vitanyi (although it may be there somewhere), or Calude (Computability and Randomness). I think that Nies and D&H just think it’s obvious, but I don’t see why.

Downey and Hirschfeldt’s Algorithmic Randomness and Complexity, p. 122, refers to a similar theorem proved earlier in the book, and remarks that a prefix-free machine can be thought of as one that reads an input until it’s finished, without moving in the opposite direction on the tape, and without any test for a termination-of-input character or pattern. The text says that “this highlights how using prefix-free machines circumvents the use of the length of a string to gain more information than is present in the bits of a string.” I guess that if there is a single tape that’s read-only and moves in only one direction, then there is no way to keep a counter to measure arbitrary lengths; one would need to store a counter elsewhere on the tape, or at least overwrite something on the tape as one read it. But why must a prefix-free machine work like this?

Li and Vitanyi’s An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed., sect. 3.1, example 3.1.1, p. 201, describes a prefix-free machine that has three tapes: a read-only unidirectional input tape as in Downey and Hirschfeldt, a unidirectional write-only output tape, and a bidirectional read-write work tape. Can’t the work tape be used to calculate the length of the input? In that case, why would there be a difference between prefix-free machines and plain Turing machines? Yet at the beginning of chapter 3 (pp. 197ff), Li and Vitanyi also treat prefix-free machines as a way of avoiding consequences related to those implied by Nies’ theorem 2.2.1.