Launch of a game for computers and cell phones

For the past few months I have been creating a game using GameMaker Studio 2 Desktop.


It turns out I would also like to release it for cell phones, so I bought GameMaker Studio 2 Mobile.


That's where my question arises.

GameMaker Studio 2 Mobile gives you the option to open projects made in GameMaker Studio 2 Desktop, and its interface is the same as in GameMaker Studio 2 Desktop.

There are 2 shortcuts on my computer:


Now I don't know whether GameMaker Studio 2 Mobile is a separate program, or works only as a license that allows me to release the game on another platform (like a plugin or extension).

I remember that in GameMaker: Studio 1.4 there was a single program, and each version of the game (desktop or mobile) had its own code.

I basically had to copy the project and edit the copy with the equivalent features for the other platform.

I remember that one of those adaptations concerned clicking on the screen, because on a cell phone you don't use mouse_x or mouse_y.

What I would like to know is whether I will need to adapt the code for the mobile version, or whether the code will be adapted automatically when I export the game and choose the mobile platform.
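For reference, a minimal GML sketch of the kind of input adaptation in question, assuming GameMaker Studio 2's built-in `device_mouse_*` functions for touch (the event placement and the branching are illustrative, not a recommendation):

```gml
// Step event sketch: read "click" position on desktop and mobile.
if (os_type == os_android || os_type == os_ios)
{
    // Multitouch: device 0 is the first finger on the screen.
    if (device_mouse_check_button(0, mb_left))
    {
        var tx = device_mouse_x(0);
        var ty = device_mouse_y(0);
        // handle tap at (tx, ty)
    }
}
else
{
    if (mouse_check_button(mb_left))
    {
        // handle click at (mouse_x, mouse_y)
    }
}
```

Note that GMS2 also maps the first touch to mouse_x/mouse_y, so how much of this branching is actually needed is part of the question.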

Before coming here I looked for solutions in these links:

Cross platform from gamemaker

Cross-Platform Online Multiplayer (with or without GMnet) from gamemaker

Cross platform? from gamemaker

At one point I realized I might have been using the wrong search term, because the results I was finding were about something different from what I was looking for: roughly, letting computer and mobile players play together, which is not useful to me since the game I am producing is not online.

In theory, should neuromorphic computers be more efficient than traditional computers when performing logic?

There is a general sentiment that neuromorphic computers are simply "more efficient" than von Neumann machines.

I've heard a lot of talk about neuromorphic computers in the context of machine learning.
But is there any research into performing logic and maths in general on such computers? How would one translate arithmetic, logic, and algorithms into "instructions" for a neuromorphic computer, if there are no logic structures in the hardware itself?

It is common to draw parallels with the brain in this context, so here's one: brains are great at recognising faces and the like, but I don't think I can do maths faster than an Arduino (and that thing doesn't need much energy).
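For what it's worth, Boolean logic can be emulated on neuron-like units: the classic McCulloch–Pitts construction uses a weighted threshold unit ("fires or not") rather than dedicated logic gates. A minimal Python sketch, where the particular weights and thresholds are just one choice that happens to work:

```python
def neuron(inputs, weights, threshold):
    """Fires (returns 1) when the weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    # Both inputs must contribute: threshold 2 with unit weights.
    return neuron([a, b], [1, 1], 2)

def OR(a, b):
    # Either input suffices: threshold 1 with unit weights.
    return neuron([a, b], [1, 1], 1)
```

This only shows that logic *can* be expressed on such a substrate; it says nothing about whether doing so is efficient, which is precisely the question.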

Do schools save your search history on the computers they issue? [duplicate]

So I received a laptop from the school, and I was wondering about the following situation. I'm connected to my own home network, but signed into the computer itself (not into something like a Google account, but the username and password you enter when the computer first turns on; the login is universal, meaning it's the same for all students). Could the school still retrieve the search history on the computer, even though I'm on a home network and they don't physically have the computer in their possession? And if so, is it stored somewhere so they can go back and look at it? If you know the answer please let me know; covid has gotten me going a little bit nuts.

Do the newest computers still have ROM?

Now that many computers use UEFI instead of a BIOS to boot, and UEFI instructions are usually stored in a hidden hard-disk partition, does this mean the newest computers no longer need ROM, which was used to store the BIOS program in the old days? Or was ROM unnecessary long before UEFI existed, since the BIOS program and settings were stored in flash memory (so that changes to settings made by the user could be saved)?

Is all data stored in computers stored as machine code?

I know that the most basic (and least abstract) code for programming is machine code (typically binary, written in 0s and 1s).

I also know that computers can save data even if they are turned off, by different types of computer memory (storage device memory, RAM and other computer system devices which can "remember" some data).

Is all data stored in computers ("all data remembered in a given computer memory") stored as machine code?
In other words: if I could read and understand the data saved in a computer's "memory" directly, without an operating system interpreting it for me (the data that becomes effective when electrical current is correctly distributed through the computer system), would it appear to me as binary machine code, or as something else?
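One way to see the distinction the question is circling: the same bytes in memory can be read as text, as an integer, or as processor instructions, and nothing in the bits themselves says which interpretation is the "right" one. A small Python sketch:

```python
raw = b"Hi"  # two bytes, as they might sit anywhere in memory

print(list(raw))                   # the raw byte values: [72, 105]
print(raw.decode("ascii"))         # the same bits read as text: "Hi"
print(int.from_bytes(raw, "big"))  # the same bits read as a 16-bit integer: 18537
```

Machine code is then just one more interpretation of bytes, namely the one the CPU applies when it fetches them as instructions.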

Are Thunderbolt-enabled computers without Thunderbolt ports vulnerable to Thunderspy?

Could these two attack scenarios exploit the recently publicized vulnerability?

  • Using an adapter, such as a USB-to-Thunderbolt adapter, on a computer without any Thunderbolt port
  • Temporarily replacing hardware (mainboard) with hardware that has Intel’s Thunderbolt port

And if one or both would work: what would be a reliable way to protect against this on such computers (Thunderbolt-enabled, or with Thunderbolt not disabled and hardware that can be replaced)?

Is there a computer which can simulate all computers?

I have looked for a proof that a machine cannot simulate itself faster than real time (which would allow unbounded computing speed), and I came to the conclusion that it is impossible for any computer to simulate all other computers. However, I lack a proof of this intuition.

So the idea is that the state of any (modern, non-quantum) computer can be represented as a finite bitstring (simply the state of its registers, memory cells, hard drive, and so on). However, in order to predict what this computer would do in any given state, you would need a longer, or at least equally long (in some trivial cases equality might suffice), bitstring. But since there is always a bigger computer (one with a longer bitstring representation), there is no computer that can simulate all computers.

However, I am wondering about the following:

Let $f$ be a function which maps a computable function $g$ and a valid input $i_g$ of that function $g$ to the result of $g$ given $i_g$:

$$f(g, i_g) = g(i_g)$$

Can you prove that:

$$\exists i_f: f(f, i_f) \mbox{ does not halt, while } f(i_f) \mbox{ does.}$$

Note that $i_f$ is just a bitstring, and $g$ can be represented by a program and therefore can be a bitstring too…
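This question is closely related to the classic diagonalization behind the undecidability of the halting problem; as one possible starting point (a sketch of the standard argument, not specific to the $f$ above):

```latex
% Assume a total decider h for halting:
% h(g, i) = 1 if program g halts on input i, and 0 otherwise.
% Diagonalize by defining
d(g) =
\begin{cases}
  \uparrow \text{ (loop forever)} & \text{if } h(g, g) = 1,\\
  0 & \text{if } h(g, g) = 0.
\end{cases}
% Then d(d) halts \iff h(d, d) = 0 \iff d(d) does not halt,
% a contradiction, so no such total h exists.
```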

Data structures for quantum computers

In classical computers we have data structures such as lists, queues, and trees, and classical computers use 1s and 0s in those data structures. What happens when it comes to quantum computers? Do scientists need to create new data structures, or can they use existing data structures with some sort of optimization? Thanks in advance.

Why can’t we combine thousands of computers and dynamic programming to solve chess once and for all?

It reportedly takes 3000 years for a single computer to solve chess. But if we keep adding computers to the problem, wouldn't that multiply the time by 1/N for each computer we add? For example, if we launched tens of thousands of smaller computers, perhaps on a cloud platform like AWS, and had each computer work on a small sub-problem of chess, storing the results in a cloud database so the other computers know which subproblems have already been solved or are being worked on, couldn't we solve chess in less than a week?

Why wouldn’t this work? And if it could work why has no one attempted this yet?
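Taking the question's own figures at face value, the ideal linear-speedup arithmetic can be checked in a few lines of Python (assuming 30,000 machines as a stand-in for "tens of thousands", and ignoring all coordination and storage overhead):

```python
# Idealized back-of-the-envelope: perfectly linear speedup, no overhead.
single_computer_years = 3000   # the question's figure for one computer
n_computers = 30_000           # "tens of thousands"

ideal_years = single_computer_years / n_computers
ideal_days = ideal_years * 365

print(f"Ideal wall-clock time: {ideal_days:.1f} days")
```

Even under this best-case assumption, the result is on the order of a month rather than under a week, before counting the cost of storing and coordinating the sub-problem results.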