Best supporting class for a 3.5 low level campaign?

I’m adapting the Frozen Sick campaign from the Explorer’s Guide to Wildemount to version 3.5. It’s going to be the first campaign I run as a DM. The party consists of only 2 players, a warlock and a dragon shaman; the warlock’s player is pretty experienced, while the other one is totally new to the game (and to roleplaying in general).

While I don’t want to steal the players’ spotlight, I’m worried that fights could get rough without any other character, so I’m considering adding an NPC who could support them in battle, but I’m having trouble choosing the class.

My addition would primarily be a supporter and/or a utility character (buffer, debuffer, healer…).

As far as sourcebooks are concerned, I’m using the PH, PH2, and the Complete series (all four).

Which class choice could best fit my situation, considering that the characters’ levels will range from 1 to 3-4 over the course of the campaign?

Are hardware security keys (e.g. ones supporting FIDO2) “able to protect authentication” even in case of compromised devices?

Correct me if I am wrong, please.

I understand that 2FA (MFA) increases account security in case an attacker obtains a password, which might happen in various ways, e.g. phishing, a database breach, brute force, etc.

However, if the 2FA device is compromised (full system control), and this can be the very same device being logged in from, then 2FA is broken. That is less likely than when using only a password, but conceptually it is true.

Do hardware security keys protect against compromised devices? I read that the private key cannot be extracted from those devices. I am thinking about protecting my SSH logins with a FIDO2 key. Taking SSH as an example, I would imagine that on a compromised device the SSH handshake and key exchange can be intercepted and the FIDO2 key can be used for malicious purposes.

Additionally: FIDO2 protects against phishing by storing the website it is set up to authenticate with. Do FIDO2 and OpenSSH also implement host key verification, or does that not matter because FIDO2 with OpenSSH already uses asymmetric cryptography and is thus not vulnerable to MitM attacks?

Bloom filter alternative supporting deletion without false negatives

There are several alternatives to Bloom filters that support deletion from the set, for example Cuckoo filters or Counting Bloom filters.

However, deleting an element from these data structures assumes that the element really is in the set. Deleting an element that is actually not in the set might introduce false negatives.
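To make that failure mode concrete, here is a minimal counting Bloom filter sketch (toy hash functions, illustrative only, all names mine): deletion decrements the k counters for an element, so removing an element that was never inserted can zero out a counter that a genuine member relies on, turning a later membership query into a false negative.

```java
// Minimal counting Bloom filter sketch (not production code).
class CountingBloom {
    private final int[] counts;
    private final int k; // number of hash functions

    CountingBloom(int m, int k) { counts = new int[m]; this.k = k; }

    // Toy double hashing: h1 + i*h2 mod m (h2 forced odd).
    private int index(int e, int i) {
        return Math.floorMod(Integer.hashCode(e) + i * (Integer.reverse(e) | 1), counts.length);
    }

    void add(int e)    { for (int i = 0; i < k; i++) counts[index(e, i)]++; }

    // Decrements blindly: if e was never added, this can corrupt counters
    // shared with real members, which is exactly the false-negative risk.
    void remove(int e) { for (int i = 0; i < k; i++) counts[index(e, i)]--; }

    boolean mightContain(int e) {
        for (int i = 0; i < k; i++)
            if (counts[index(e, i)] <= 0) return false;
        return true;
    }
}
```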

Is there a data structure that avoids this problem, trading it off against something else?

Bloom filter like data structure supporting iteration

I am wondering whether there exists a data structure similar to a Bloom filter in the sense that it is an approximate finite set representation (allows for false positives, but no false negatives) and supports:

  • constant time complexity union of two sets
  • constant time complexity insertion of an element

but that also allows for efficient iteration over the approximate elements of the set, i.e. iterating over all elements that are either actually in the set or false positives, with time complexity linear in the number of elements in the set.
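For comparison, the first two operations are already natural for a plain Bloom filter: two filters built with the same size and hash functions merge by OR-ing their bit arrays, and insertion sets k bits. A minimal sketch (toy hash functions, names mine) of exactly the part that works; efficient iteration is what this structure lacks, since the bits do not record which elements set them:

```java
import java.util.BitSet;

// Plain Bloom filter: cheap insertion and union, but no iteration.
class Bloom {
    final BitSet bits;
    final int m, k;

    Bloom(int m, int k) { this.m = m; this.k = k; bits = new BitSet(m); }

    private int index(int e, int i) {
        return Math.floorMod(Integer.hashCode(e) + i * (Integer.reverse(e) | 1), m);
    }

    void insert(int e) {
        for (int i = 0; i < k; i++) bits.set(index(e, i));
    }

    boolean mightContain(int e) {
        for (int i = 0; i < k; i++)
            if (!bits.get(index(e, i))) return false;
        return true;
    }

    // Union of two filters with identical parameters: bitwise OR of the
    // bit arrays, independent of how many elements were inserted.
    Bloom union(Bloom other) {
        Bloom u = new Bloom(m, k);
        u.bits.or(bits);
        u.bits.or(other.bits);
        return u;
    }
}
```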

Debugging Priority queue implementation supporting priority update [closed]

Please let me know if you require any additional information.

For an efficient implementation of algorithms like Dijkstra’s and Prim’s, we require a priority queue that supports a decrease-priority operation. I’ve come up with the following code (a heap-based priority queue), which tracks the index of each entry in the priority queue using a HashMap.

import java.util.HashMap;
import java.util.NoSuchElementException;

public class Heap<Key extends Comparable<Key>> {
    private Key[] heap;
    private int maxN, n;
    private HashMap<Key, Integer> map;

    @SuppressWarnings("unchecked")
    public Heap(int maxN) {
        if (maxN < 0) throw new IllegalArgumentException();
        this.maxN = maxN;
        n = 0;
        heap = (Key[]) new Comparable[maxN];
        map = new HashMap<>();
    }

    boolean isEmpty() {
        return n == 0;
    }

    boolean insert(Key e) {
        if (n + 1 > maxN) throw new IllegalArgumentException("maximum capacity reached " + maxN);
        heap[n] = e;
        map.put(e, n);
        int i = n;
        while ((i + 1) / 2 - 1 >= 0) {
            if (e.compareTo(heap[(i + 1) / 2 - 1]) < 0) {
                swap(i, (i + 1) / 2 - 1);
                i = (i + 1) / 2 - 1;
            } else
                break;
        }
        n++;
        return true;
    }

    Key extractMin() {
        if (n == 0) throw new NoSuchElementException("Priority queue underflow ");
        Key min = heap[0];
        swap(0, n - 1);
        map.remove(min);
        n--;
        int j = 0, s;
        while (j <= n / 2 - 1) {
            if (j == n / 2 - 1 && n == (j + 1) * 2)
                s = (j + 1) * 2 - 1;
            else
                s = heap[(j + 1) * 2 - 1].compareTo(heap[(j + 1) * 2]) < 0 ? (j + 1) * 2 - 1 : (j + 1) * 2;
            if (heap[j].compareTo(heap[s]) > 0) {
                swap(j, s);
                j = s;
            } else
                break;
        }
        return min;
    }

    Key delete(Key e) {
        if (!map.containsKey(e)) throw new NoSuchElementException(e + "does not exist ");
        int j = map.get(e), s;
        Key del = heap[j];
        swap(j, n - 1);
        map.remove(e);
        n--;
        while (j <= n / 2 - 1) {
            if (j == n / 2 - 1 && n == (j + 1) * 2)
                s = (j + 1) * 2 - 1;
            else
                s = heap[(j + 1) * 2 - 1].compareTo(heap[(j + 1) * 2]) < 0 ? (j + 1) * 2 - 1 : (j + 1) * 2;
            if (heap[j].compareTo(heap[s]) > 0) {
                swap(j, s);
                j = s;
            } else
                break;
        }
        return del;
    }

    boolean decreasePriority(Key e) {
        if (n == 0)
            return insert(e);
        if (map.containsKey(e))
            delete(e);
        return insert(e);
    }

    private void swap(int i, int j) {
        Key t = heap[i];
        heap[i] = heap[j];
        heap[j] = t;
        map.replace(heap[i], i);
        map.replace(heap[j], j);
    }

    @Override
    public String toString() {
        String res = "[";
        int i;
        for (i = 0; i < n - 1; i++)
            res += heap[i] + ", ";
        res += heap[i] + "]";
        return res;
    }
}

For the problem I’m working on, the program is supposed to output the total weight of the minimum spanning tree found by applying Prim’s algorithm. The input is a 500-node graph, and the answer obtained by this implementation is incorrect. I’m certain that the issue lies with the Heap and not with my implementation of Prim’s algorithm, because using the built-in priority queue in Java gives the correct answer. Here is my program running Prim’s algorithm.

import java.io.FileNotFoundException;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

class Edge<Key> {
    Key v;
    int w;

    public Edge(Key v, int w) {
        this.v = v;
        this.w = w;
    }

    @Override
    public String toString() {
        return "(" + v + "," + w + ")";
    }
}

class vertex implements Comparable<vertex> {
    int position, dis;

    public vertex(int position) {
        this.position = position;
        dis = Integer.MAX_VALUE;
    }

    @Override
    public int compareTo(vertex v) {
        if (dis > v.dis)
            return 1;
        if (dis < v.dis)
            return -1;
        return 0;
    }

    @Override
    public String toString() {
        return Integer.toString(position + 1);
    }
}

public class Prims {
    public static void main(String[] args) throws FileNotFoundException {
        Scanner in = new Scanner(new FileReader("prims.txt"));
        int n = in.nextInt(), m = in.nextInt();
        List<List<Edge<vertex>>> graph = new ArrayList<List<Edge<vertex>>>();
        List<vertex> nodes = new ArrayList<vertex>();
        for (int i = 0; i < n; i++) {
            graph.add(new ArrayList<>());
            nodes.add(new vertex(i));
        }
        while (m-- > 0) {
            int u = in.nextInt() - 1, v = in.nextInt() - 1, w = in.nextInt();
            graph.get(u).add(new Edge<vertex>(nodes.get(v), w));
            graph.get(v).add(new Edge<vertex>(nodes.get(u), w));
        }
        in.close();
        long st = System.currentTimeMillis();
        System.out.println(prims(graph, nodes));
        long end = System.currentTimeMillis();
        System.out.println("Runtime = " + (end - st) + "ms");
    }

    static int prims(List<List<Edge<vertex>>> graph, List<vertex> nodes) {
        int n = graph.size(), weight = 0;
        boolean[] inSpanTree = new boolean[n];
        Heap<vertex> pq = new Heap<>(n);
        inSpanTree[0] = true;
        for (Edge<vertex> e : graph.get(0)) {
            e.v.dis = e.w;
            pq.insert(e.v);
        }
        while (!pq.isEmpty()) {
            vertex u = pq.extractMin();
            inSpanTree[u.position] = true;
            weight += u.dis;
            for (Edge<vertex> e : graph.get(u.position)) {
                if (!inSpanTree[e.v.position]) {
                    if (e.v.dis > e.w) {
                        e.v.dis = e.w;
                        pq.decreasePriority(nodes.get(e.v.position));
                    }
                }
            }
        }
        return weight;
    }
}

If for this same implementation I use the built-in priority queue, I get the correct output (-3612829):

static int prims(List<List<Edge<vertex>>> graph, List<vertex> nodes) {
    int n = graph.size(), weight = 0;
    boolean[] inSpanTree = new boolean[n];
    PriorityQueue<vertex> pq = new PriorityQueue<>();
    inSpanTree[0] = true;
    for (Edge<vertex> e : graph.get(0)) {
        e.v.dis = e.w;
        pq.add(e.v);
    }
    System.out.println(pq);
    while (!pq.isEmpty()) {
        vertex u = pq.poll();
        if (u.position == 307)
            System.out.println(pq);
        inSpanTree[u.position] = true;
        weight += u.dis;
        for (Edge<vertex> e : graph.get(u.position)) {
            if (!inSpanTree[e.v.position]) {
                if (e.v.dis > e.w) {
                    e.v.dis = e.w;
                    pq.remove(nodes.get(e.v.position));
                    pq.add(nodes.get(e.v.position));
                }
            }
        }
    }
    return weight;
}

My Heap implementation works for almost all test cases, even up to 300 nodes. I don’t understand how to debug this and find out where the problem lies. Here is the link to the problem input: 

Can someone please help me debug my Heap implementation? I’ve been stuck on this for a long time now. Thanks in advance.
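One property worth checking while debugging (an observation to verify, not a confirmed diagnosis of the code above): when an arbitrary element is deleted from a binary heap by swapping it with the last element and shrinking, the element moved into the vacated slot can be smaller than its new parent, so it may need to sift up as well as down. This self-contained sketch with a plain int array (values chosen by me) shows the min-heap invariant breaking under a swap-and-sift-down-only delete:

```java
public class DeleteSiftDemo {
    // Check the min-heap invariant on the first n entries of a
    // (parent of index i is (i - 1) / 2).
    static boolean isMinHeap(int[] a, int n) {
        for (int i = 1; i < n; i++)
            if (a[(i - 1) / 2] > a[i]) return false;
        return true;
    }

    public static void main(String[] args) {
        // A valid min-heap.
        int[] heap = {0, 10, 1, 11, 12, 2, 3};
        int n = heap.length;

        // Delete the element at index 3 (value 11) by swapping with the
        // last element and shrinking, then sifting down only.
        int j = 3;
        int t = heap[j]; heap[j] = heap[n - 1]; heap[n - 1] = t;
        n--;
        // Index 3 is now a leaf, so sifting down does nothing, yet
        // heap[3] == 3 is smaller than its parent heap[1] == 10:
        System.out.println(isMinHeap(heap, n)); // prints "false"
    }
}
```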

Maintaining an array supporting sum requests

We have to maintain an array $ a_1,\ldots,a_n$ supporting the following operations:

  1. Assign $ a_i = x$
  2. Given $ \ell,r$ , return $ \sum_{i=\ell}^r a_i$
  3. Given $ \ell,r$ , return $ \sum_{i=\ell}^r \sum_{j=\ell}^r a_i a_j$
  4. Given $ \ell,r$ , return $ \sum_{i=\ell}^r \sum_{j=\ell}^r \sum_{k=\ell}^r a_i a_j a_k$

After $ O(n)$ initialization, these operations should run in amortized $ O(\log n)$ time.
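For reference, operations 1 and 2 are the classic Fenwick (binary indexed) tree workload, and operations 3 and 4 reduce to operation 2, since $ \sum_{i=\ell}^r \sum_{j=\ell}^r a_i a_j = \left(\sum_{i=\ell}^r a_i\right)^2$ and the triple sum is likewise the cube. A minimal sketch under those observations (1-indexed, all names mine):

```java
// Fenwick tree: point assignment and range sum in O(log n),
// O(n) initialization. Operations 3 and 4 are powers of the range sum.
class Fenwick {
    private final long[] tree; // 1-indexed internal array
    private final long[] a;    // current values, to turn assignment into a delta

    Fenwick(int n) { tree = new long[n + 1]; a = new long[n + 1]; }

    // Operation 1: a[i] = x (i is 1-indexed).
    void assign(int i, long x) {
        long delta = x - a[i];
        a[i] = x;
        for (int j = i; j < tree.length; j += j & -j) tree[j] += delta;
    }

    private long prefix(int i) { // sum of a[1..i]
        long s = 0;
        for (int j = i; j > 0; j -= j & -j) s += tree[j];
        return s;
    }

    // Operation 2: sum of a[l..r].
    long rangeSum(int l, int r) { return prefix(r) - prefix(l - 1); }

    // Operations 3 and 4 via the factorization noted above.
    long rangeSumSquared(int l, int r) { long s = rangeSum(l, r); return s * s; }
    long rangeSumCubed(int l, int r)   { long s = rangeSum(l, r); return s * s * s; }
}
```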

Constructing a data structure supporting prioritized key lookup

This is more or less a shot in the dark, as I am feeling stuck. Maybe some of you have an idea that helps.

Here is the problem description (pseudo formal):

I want to have a structure $ T = \{ \hat{x_1}, \hat{x_2}, … \}$ with $ \hat{x_i} = (p_i, k_i, v_{k_i})$ .

$ p_i \in \mathbb{N}$ can be interpreted as an associated priority. They can be considered unique.

$ k_i \in \mathcal{K}$ a key index, $ d := |\mathcal{K}|$ is not required to be negligibly small, though generally $ d \ll |T|$

$ v_{k_i} \in \mathcal{V}^{(k_i)}$ a partial key over some universe associated with the given key index. As an additional complication, this key may not be hashable; the only requirement is that it is totally ordered.

Considering an array $ y = [v_i], i \in \mathcal{K}$ , the structure should be capable of supporting the following operations efficiently:

$ lookup(T, y) \rightarrow \underset{x_i \in T : y[k_i] = v_{k_i}}{arg max}( p_i )$ , i.e. the node $ \hat{x_j}$ with a partial key matching and the highest associated priority.

$ succ(T, y) \rightarrow lookup(T \backslash \{\hat{x_j}\}, y)$ , i.e. the successor (in terms of priority) of a node $ \hat{x_j}$ matching $ y$

Ultimately I would also like efficient insertion and deletion (by priority).

(For insertion: The selection of $ k_i$ for each node is another point of research and can be chosen at the time of insertion. It is possible that this key index is subject to change if it helps the overall structure. But it is also not required that all key indices in $ \mathcal{K}$ are supported for each node, i.e. one might need to insert a node which is only queryable by a single $ k_i$ .)

The key indices are somewhat causing a headache for me. My research so far includes standard priority search trees and dynamic variations thereof. I also found this very interesting, though very theoretical, paper which takes care of the increased dimensionality (kind of) caused by the d-dimensional keys.

I know that I can construct d-dimensional binary search trees with a query complexity of $ O(d + n \log n)$ . These should definitely yield a gain, though I have not been able to map the priority problem onto them (Ref).

But I guess the complexity can be reduced even further if we consider the fact that each node only stores partial keys anyway.

My approach so far is somewhat naive, as it simply creates a hash map for each key index in $ \mathcal{K}$ and queries each hash map upon lookup. It then aggregates all the results and sorts them by priority. This works fine for hashable keys, but a fallback structure has to be used whenever they are not hashable (I am using binary search trees).
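For concreteness, here is a minimal sketch of that naive approach in its fallback form (all names are mine, and a single ordered key type stands in for the per-index universes $ \mathcal{V}^{(k_i)}$ ): one ordered map per key index, with lookup scanning every key index in the query and keeping the highest-priority match. TreeMap is used instead of a hash map because the partial keys are only guaranteed to be totally ordered, not hashable.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class PrioritizedLookup<V extends Comparable<V>> {
    static final class Entry<V> {
        final int priority, keyIndex;
        final V partialKey;
        Entry(int priority, int keyIndex, V partialKey) {
            this.priority = priority;
            this.keyIndex = keyIndex;
            this.partialKey = partialKey;
        }
    }

    // keyIndex -> (partial key value -> entries carrying that value).
    private final Map<Integer, TreeMap<V, List<Entry<V>>>> byIndex = new HashMap<>();

    void insert(int priority, int keyIndex, V partialKey) {
        byIndex.computeIfAbsent(keyIndex, k -> new TreeMap<>())
               .computeIfAbsent(partialKey, k -> new ArrayList<>())
               .add(new Entry<>(priority, keyIndex, partialKey));
    }

    // y maps each key index to the query array's value for that index.
    // Returns the matching entry of highest priority, or null if none match.
    Entry<V> lookup(Map<Integer, V> y) {
        Entry<V> best = null;
        for (Map.Entry<Integer, V> q : y.entrySet()) {
            TreeMap<V, List<Entry<V>>> m = byIndex.get(q.getKey());
            if (m == null) continue;
            for (Entry<V> e : m.getOrDefault(q.getValue(), Collections.emptyList()))
                if (best == null || e.priority > best.priority) best = e;
        }
        return best;
    }
}
```

The successor query from the problem statement would follow the same scan while skipping the previously returned entry; the cost of this approach is that every lookup touches all $ d$ key indices.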

Triple handshake attack – what are the implications of not supporting RFC 7627: “Session Hash and Extended Master Secret Extension”?

The referenced RFC details a mitigation for an attack known as the ‘triple handshake attack’, which appears to allow compromising a TLS connection.

How serious is this vulnerability? How could this vulnerability be exploited and what would the impact be?

The related RFC for this can be found here: