Time-efficient way to implement Multi-Armed Bandits?

I’m doing research on the Multi-Armed Bandit (MAB) problem with approximately 1 million arms. The number of iterations is, of course, much larger: about 10-20 million.

Most MAB algorithms require an argmax over the action space, executed in each iteration, in order to select the current arm (the one that maximizes a given selection criterion). Regardless of the programming language chosen for the implementation, this argmax over the entire action space (1 million arms) is very time-consuming.
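For concreteness, here is a minimal sketch of what one iteration looks like, assuming a UCB1-style selection criterion (the counts and means are placeholders):

```swift
import Foundation

// Per-iteration arm selection, assuming UCB1; K matches my problem size.
let K = 1_000_000
var counts = [Double](repeating: 1.0, count: K)  // pulls per arm (1 avoids division by zero)
var means  = [Double](repeating: 0.0, count: K)  // empirical mean reward per arm
var t = Double(K)                                // total number of pulls so far

func selectArm() -> Int {
    var best = 0
    var bestScore = -Double.infinity
    // The O(K) scan that every iteration pays for:
    for a in 0..<K {
        let score = means[a] + sqrt(2.0 * log(t) / counts[a])  // UCB1 index
        if score > bestScore {
            bestScore = score
            best = a
        }
    }
    return best
}
```

With 1 million arms and 10-20 million iterations, this inner loop alone amounts to on the order of 10^13 score evaluations.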

Does anyone have ideas on how to implement MAB algorithms in a time-efficient way?

Can I use Google Analytics to implement offline conversion tracking?

In a Google Ads account I’m working on, all conversions are imported from Google Analytics. How can I define a Google Analytics goal for which the Google Click ID is configurable, i.e. such that reaching the goal is associated with a previously seen Google Click ID? In other words, can I achieve something to the effect of Offline Conversion Tracking, except using Google Analytics (and maybe even Google Tag Manager)?

Background:

I’m working on a site whose analytics are managed via Google Tag Manager; some events configured in GTM trigger goals in Google Analytics, which in turn are imported as conversions into Google Ads. For example, “visitor requested a trial account” is a user interaction tracked this way.

I’d now like to track whether people who requested a trial account actually logged in, and if so, record this as a conversion, too. When a visitor logs into their account, I can check a database to find the Google Click ID (if any) that was assigned to them when they requested the account. If a GCLID is found, I’d like a GTM trigger to fire a tag that records a Google Analytics goal (which in turn is imported as a conversion into Google Ads).

Configuring Google Tag Manager accordingly seems straightforward. However, it’s not clear to me what kind of Google Analytics goal to create that explicitly specifies a click ID.

Algorithm to implement symmetric encryption in VB.Net for Windows and in Swift for iOS

I need to implement symmetric encryption to enable secure communication between a program running on a Windows machine (to be written in VB.Net) and an app running on an iOS device (to be written in Swift). I’d like to use a reasonably modern algorithm that is supported in both languages “out of the box”, without having to import more code than necessary.

The use case: information (mostly text files) will be encrypted by one program (say, the one running on Windows) and uploaded to a server, where it will be stored, and later downloaded and decrypted by the other program (running on iOS). The server doesn’t need access to the contents of the files; having the information “encrypted at rest” on the server is the main goal, although having it encrypted in transit to/from the server is also beneficial. The Windows and iOS devices themselves aren’t considered targets in this scenario.
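For concreteness, here is a minimal sketch of what I’d hope the Swift side could look like, assuming AES-GCM turned out to be a workable common choice (CryptoKit ships it out of the box on iOS 13+):

```swift
import CryptoKit
import Foundation

// Hypothetical sketch, assuming AES-GCM. Both sides must agree on the key and
// on the wire format; CryptoKit's "combined" representation is
// nonce (12 bytes) || ciphertext || tag (16 bytes).

let key = SymmetricKey(size: .bits256)  // in practice, shared or derived out of band

func encrypt(_ plaintext: Data) throws -> Data {
    let sealedBox = try AES.GCM.seal(plaintext, using: key)
    return sealedBox.combined!  // non-nil for the default 12-byte nonce
}

func decrypt(_ blob: Data) throws -> Data {
    let sealedBox = try AES.GCM.SealedBox(combined: blob)
    return try AES.GCM.open(sealedBox, using: key)
}
```

My understanding is that .NET exposes the same primitive as System.Security.Cryptography.AesGcm, but with the nonce and tag as separate parameters, so the VB.Net side would have to split and reassemble this combined layout itself.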

Which algorithm(s) are good choices, being modern, secure, and available in both Swift and .NET, so that what’s encrypted by one can be decrypted by the other?

Another feasible way to implement a multi-level page table?

The advantage of a multi-level page table is that we can swap the inner-level page tables out to secondary storage. However, if we want quick access to the whole address space, we have to keep all the page tables in memory, and then there are no savings.

Now imagine that, instead of the innermost page table pointing to the final frame, each level of the page table contributed some of the bits of the final address. In other words, we would divide each virtual address into sections and map each section separately.

For example, say virtual address 1011 maps to 1110 using a 2-level page table. The outer-level page table maps the high bits 10 -> 11, and the 2nd-level page table with index 3 (from binary 11) maps the low bits 11 -> 10. Concatenated, these give the physical address 1110.
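In code, the scheme I’m imagining would look something like this toy model (4-bit addresses, two 2-bit levels; the table contents are made up so as to reproduce the example above):

```swift
// Outer table: entry 2 (binary 10) maps the high bits 10 -> 11.
let outer: [UInt8] = [0b00, 0b01, 0b11, 0b00]

// One second-level table per outer value; in table 3, entry 3 maps
// the low bits 11 -> 10.
let inner: [[UInt8]] = [
    [0b00, 0b01, 0b10, 0b11],
    [0b00, 0b01, 0b10, 0b11],
    [0b00, 0b01, 0b10, 0b11],
    [0b00, 0b01, 0b11, 0b10],
]

func translate(_ virt: UInt8) -> UInt8 {
    let hi = (virt >> 2) & 0b11               // high 2 bits index the outer table
    let lo = virt & 0b11                      // low 2 bits index the second level
    let physHi = outer[Int(hi)]               // outer level contributes the high bits
    let physLo = inner[Int(physHi)][Int(lo)]  // inner level contributes the low bits
    return (physHi << 2) | physLo
}

// translate(0b1011) == 0b1110
```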

When I was learning about multi-level page tables, I found them quite confusing, and this is how I initially imagined they worked. Obviously, this restricts how the virtual address space can be mapped onto the physical address space: pages with the same prefix will end up physically close to each other. Still, I don’t see the problem with this approach.

Why is this approach not used if it can save memory? Or do I have some error in my thinking?

Why not implement a Union-Find structure using the root as the direct parent?

I just learned about Union-Find with union by rank and path compression. A path can be compressed by attaching a node directly to its root after Find is called on that node. If the goal is to flatten the tree, why not implement the tree so that each node is attached directly to its root (instead of to its true parent) in the first place? That way, maximum compression would be achieved from the start. What is the downside of this, as long as union by rank is used along with it?
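For reference, here is a minimal sketch of the standard variant as I understand it:

```swift
// Union-Find with union by rank and path compression on Find.
struct UnionFind {
    private var parent: [Int]
    private var rank: [Int]

    init(count n: Int) {
        parent = Array(0..<n)  // every node starts as its own root
        rank = Array(repeating: 0, count: n)
    }

    mutating func find(_ x: Int) -> Int {
        if parent[x] != x {
            parent[x] = find(parent[x])  // path compression: re-attach x to the root
        }
        return parent[x]
    }

    mutating func union(_ a: Int, _ b: Int) {
        let ra = find(a), rb = find(b)
        if ra == rb { return }
        if rank[ra] < rank[rb] {   // union by rank: lower-rank root
            parent[ra] = rb        // goes under the higher-rank root
        } else if rank[ra] > rank[rb] {
            parent[rb] = ra
        } else {
            parent[rb] = ra
            rank[ra] += 1
        }
    }
}
```

My proposal would change only `union`: instead of re-parenting just the root of one tree, attach every node of it directly to the new root.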

Why did browsers choose to implement HSTS with Preload over checking custom DNS information?

Browsers and standards bodies favor HSTS with Preload because it avoids ever sending an HTTP request to a website that supports HTTPS. This is good, because cleartext HTTP requests can be intercepted to mount man-in-the-middle (MITM) attacks.

But a number of websites point out that a centralized Preload list doesn’t scale well to the mostly-HTTPS web proposed by the W3C, the EFF, and others. Managing one centralized list creates a bottleneck for looking up, adding, and deleting entries.

Yet this technology was implemented rather than, say, using DNS, which is already nicely distributed and is already used by browsers to look up domain names.

Of course, DNS is not yet secure, and proposals to make it secure are controversial. But why would DNS have to be secure to hold one more bit of information (whether the domain supports HTTPS, and ONLY HTTPS, or not)?
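Concretely, I’m imagining something as small as one extra record per zone, for example (the record name and syntax below are made up purely for illustration; nothing like this is standardized):

```
; hypothetical zone-file entry, for illustration only
example.com.  3600  IN  TXT  "https-only=1"
```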

In the worst case, a malicious MITM attacker could make it seem that a website is insecure when it is actually secure. But in that case, the insecure connection would simply fail, and this failure would deny the attacker any advantage.

So naturally I’m wondering why a centralized HSTS Preload list is preferred over adding a new flag to DNS zones indicating that a domain supports HTTPS connections.