How to calculate the best possible path with weights

Given a set of nodes with connections in certain directions (see image), what is the maximum number of coins you can collect between the first and last given node? Not all rooms have coins, and we want to output the path taken as well as the total coins collected. You may only travel in the direction of the arrows, and we are allowed to take the same path twice. The solution for the situation in the figure below is the path 1 – 4 – 3 – 2 – 1 – 5 – 7 – 9 – 10 – 8 – 12, with a total of 7 coins.

I think this should be possible in linear time. My idea so far is to start at the last node and work backwards, saving the “best attainable score” for each node. However, this implementation runs into issues when there are cycles in the graph. Is there a better way of doing this?

Edit: assuming n nodes, there will never be more than 10n connections in total.
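
Not a definitive answer, but here is a minimal sketch of one standard way around the cycle issue (the function name and signature are my own). Since revisiting is allowed, entering a strongly connected component lets you collect every coin inside it, so you can contract the SCCs (Tarjan's algorithm) and run a longest-path DP over the resulting DAG; both steps are linear in nodes plus edges, matching the hoped-for linear time. Path recovery via parent pointers is omitted for brevity.

    import sys
    from collections import defaultdict

    def max_coins(n, edges, coins, start, goal):
        # Hypothetical signature: nodes are 0..n-1, `edges` is a list of
        # directed (u, v) pairs, coins[v] is the number of coins in room v.
        # Assumes each room's coins can be collected at most once, so a
        # cycle lets you gather every coin in its strongly connected
        # component (SCC) exactly once.
        sys.setrecursionlimit(max(10_000, 2 * n))
        graph = defaultdict(list)
        for u, v in edges:
            graph[u].append(v)

        # Tarjan's SCC: comp[v] numbers components in reverse topological
        # order of the condensation.
        index, low, comp = {}, {}, {}
        stack, on_stack = [], set()
        counter, n_comp = 0, 0

        def dfs(v):
            nonlocal counter, n_comp
            index[v] = low[v] = counter
            counter += 1
            stack.append(v)
            on_stack.add(v)
            for w in graph[v]:
                if w not in index:
                    dfs(w)
                    low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    comp[w] = n_comp
                    if w == v:
                        break
                n_comp += 1

        for v in range(n):
            if v not in index:
                dfs(v)

        # Total coins inside each component.
        comp_coins = [0] * n_comp
        for v in range(n):
            comp_coins[comp[v]] += coins[v]

        # Longest-path DP over the condensation DAG. Component ids are in
        # reverse topological order, so iterating from high to low visits
        # every component before its successors.
        best = [float("-inf")] * n_comp
        best[comp[start]] = comp_coins[comp[start]]
        dag = defaultdict(set)
        for u, v in edges:
            if comp[u] != comp[v]:
                dag[comp[u]].add(comp[v])
        for c in range(n_comp - 1, -1, -1):
            if best[c] == float("-inf"):
                continue
            for d in dag[c]:
                best[d] = max(best[d], best[c] + comp_coins[d])
        return best[comp[goal]]  # -inf if the goal is unreachable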


A pathfinding algorithm for graphs in which arc weights can change over time

So I’m not really sure what I should even be googling for solutions to this. Hence this question; hopefully someone can point me in the right direction.

Here’s the situation: I have a weighted undirected graph of nodes and arcs, and an implementation that uses A* for pathfinding on this graph. However, I now have a situation where the weight (cost) of each arc can change over time. That is, at each step of the A* pathfinding algorithm, the weights of the entire graph can change.

So I’m trying to see if there is an existing algorithm, or an alteration of an A*-like algorithm, that handles changing weights well. If anyone has any keywords I should be looking into, I’d appreciate any pointers you can provide.

Graph theory question involving weights of edges

I’m trying to solve the following problem but I can’t understand it. Could you guys kindly break it down for me? I’m not asking for anyone to solve it. I just want to be able to grasp the problem.

Given a graph of siblings who have different interests, you’d like to know which groups of siblings have the most interests in common. You will then use a little math to determine a value to return.

You are given integers siblings_nodes and siblings_edges, representing the number of nodes and edges in the graph respectively. You are also given three integer arrays, siblings_from, siblings_to and siblings_weight, which describe the edges between siblings.

The graph consists of nodes numbered consecutively from 1 to siblings_nodes. Any members or groups of members who share the same interest are said to be connected by that interest (note that two group members can be connected by some interest even if they are not directly connected by the corresponding edge).

Once you’ve determined the node pairs with the maximum number of shared interests, return the product of the node pairs’ labels. If there are multiple pairs with the maximum number of shared interests, return the maximum product among them.

For example, you are given a graph with siblings_nodes = 4 and siblings_edges = 5:

   FROM   TO   WEIGHT
    1      2      2
    1      2      3
    2      3      1
    2      3      3
    2      4      4

If we look at each interest, we have the following connections:

   INTEREST   CONNECTIONS
       1          2,3
       2          1,2
       3          1,2,3
       4          2,4

Example input:

   siblings_nodes: 4
   siblings_edges: 5
   siblings_from: [1, 1, 2, 2, 2]
   siblings_to: [2, 2, 3, 3, 4]
   siblings_weight: [1, 2, 1, 3, 3]

Output:

   6 
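
For what it's worth, a sketch of one common way to compute this (the helper names are mine, not part of the original problem): treat each weight as an interest, run a union-find over only that interest's edges, then count the node pairs inside each resulting component.

    from collections import defaultdict
    from itertools import combinations

    def max_shared_interest(siblings_nodes, siblings_from, siblings_to, siblings_weight):
        # siblings_nodes is unused here but kept to mirror the problem's inputs.
        pair_counts = defaultdict(int)  # (u, v) with u < v -> shared-interest count

        edges_by_interest = defaultdict(list)
        for u, v, w in zip(siblings_from, siblings_to, siblings_weight):
            edges_by_interest[w].append((u, v))

        for edges in edges_by_interest.values():
            parent = {}

            def find(x):
                parent.setdefault(x, x)
                while parent[x] != x:            # path halving
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            for u, v in edges:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv

            members = defaultdict(list)
            for node in parent:
                members[find(node)].append(node)
            for group in members.values():
                for u, v in combinations(sorted(group), 2):
                    pair_counts[(u, v)] += 1

        best = max(pair_counts.values())
        return max(u * v for (u, v), c in pair_counts.items() if c == best)

    # Reproduces the example above:
    print(max_shared_interest(4, [1, 1, 2, 2, 2], [2, 2, 3, 3, 4], [1, 2, 1, 3, 3]))  # 6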

Is it possible to keep weights of left and right subtree at each node of BST that has duplicate values?



I must be able to delete a node completely (irrespective of how many times its value is present).

Currently, in my code, I keep a count variable in each node that records the number of times its value is present in the tree.

During insertion, I can increase the left or right subtree weight at each node along the path, depending on whether the new value is smaller or larger. But how do I adjust the weights when I delete a node (since the node I delete may have count > 1)?
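
Here is a minimal sketch of the idea (all names are mine, and the standard BST unlinking cases are elided): because the weights count keys including duplicates, deleting a node must subtract its full count, not 1, from every subtree weight on the path down.

    class Node:
        def __init__(self, value):
            self.value = value
            self.count = 1          # how many copies of `value` are stored here
            self.left = None
            self.right = None
            self.left_weight = 0    # total keys (with duplicates) in the left subtree
            self.right_weight = 0   # total keys (with duplicates) in the right subtree

    def delete_all(root, value):
        # First pass: find the node and read how many copies will disappear.
        node = root
        while node and node.value != value:
            node = node.left if value < node.value else node.right
        if node is None:
            return root  # value not present, nothing to adjust
        removed = node.count

        # Second pass: subtract that count from the weights along the path.
        cur = root
        while cur.value != value:
            if value < cur.value:
                cur.left_weight -= removed
                cur = cur.left
            else:
                cur.right_weight -= removed
                cur = cur.right

        # ...then unlink `cur` using the usual BST deletion cases (leaf,
        # one child, two children). If a successor is moved up, its count
        # moves with it, so the successor's old path needs the same
        # weight adjustment for `successor.count`.
        return root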

Weights vs. training in ML

The weights of machine learning models are learned during the training process. Why is it said that higher weight values lead to overfitting of the model? Why is it necessary to have low weight values for a good machine learning model?
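
Not an answer, but a toy numeric illustration of the premise (numpy only; the data and the penalty strength are arbitrary choices of mine): an unregularized high-degree polynomial fit chases the noise with very large weights, while an L2 (ridge) penalty keeps the same model's weights small.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)

    X = np.vander(x, 10)                       # degree-9 polynomial features
    w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

    lam = 1e-3                                 # L2 (ridge) penalty strength
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

    print("||w|| without penalty:", np.linalg.norm(w_ols))    # very large
    print("||w|| with L2 penalty:", np.linalg.norm(w_ridge))  # small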

Coloring an interval graph with weights

I have an interval graph $G=(V,E)$ and a set of colors $C=\{c_1,c_2,\dots,c_m\}$; when a color $c_i$ is assigned to a vertex $v_j$, we have a score $u_{ij}\geq 0$. The objective is to find a coloring of $G$ with at most $m$ colors that maximizes the total score (the sum of the scores of the vertices).
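
For concreteness, the problem can be stated as an integer program (this formulation is mine, with binary variables $x_{ij}$ indicating that vertex $v_j$ receives color $c_i$):

$$\max \sum_{i=1}^{m}\sum_{j=1}^{|V|} u_{ij}\,x_{ij} \quad \text{s.t.} \quad \sum_{i=1}^{m} x_{ij} = 1 \;\;\forall v_j\in V, \qquad x_{ij}+x_{ik}\le 1 \;\;\forall \{v_j,v_k\}\in E,\;\forall i.$$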

Do you know about any results in the literature that may help me to find the complexity of this problem?

Knapsack with a fixed number of weights

Consider a special case of the knapsack problem in which all weights are integers and the number of different weights is fixed. For example, the weight of every item is either 1 kg, 2 kg, or 4 kg. There is one unit of each item.

The problem can be solved using dynamic programming. Suppose the knapsack capacity is $C$, and the most valuable item of weight $w$ has a value of $v_w$. Then the maximum value of KNAPSACK($C$) is the maximum of the following three values:

$v_1 + \text{KNAPSACK}(C-1)$, $\quad v_2 + \text{KNAPSACK}(C-2)$, $\quad v_4 + \text{KNAPSACK}(C-4)$,

where the item taken is removed from the pool in the recursive call.
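
A memoized sketch of that DP (helper names are mine). One subtlety the recurrence glosses over: taking the most valuable weight-$w$ item changes which weight-$w$ item is the most valuable next time, so the state below tracks how many items of each weight class have been used; with a fixed number of classes this stays polynomial.

    from functools import lru_cache

    def knapsack_fixed_weights(capacity, items):
        # items: list of (weight, value) pairs; weights come from a small fixed set.
        classes = {}
        for w, v in items:
            classes.setdefault(w, []).append(v)
        weights = sorted(classes)
        for w in weights:
            classes[w].sort(reverse=True)  # within a class, take best items first

        @lru_cache(maxsize=None)
        def best(cap, taken):
            # taken[i] = how many items of weight weights[i] are already used
            result = 0
            for i, w in enumerate(weights):
                if w <= cap and taken[i] < len(classes[w]):
                    v = classes[w][taken[i]]
                    next_taken = taken[:i] + (taken[i] + 1,) + taken[i + 1:]
                    result = max(result, v + best(cap - w, next_taken))
            return result

        return best(capacity, (0,) * len(weights))

    # The items from the greedy counterexamples later in this question:
    print(knapsack_fixed_weights(2, [(2, 100), (1, 99), (1, 51)]))  # 150 = 99 + 51
    print(knapsack_fixed_weights(3, [(2, 100), (1, 99), (1, 51)]))  # 199 = 100 + 99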

Is there a more efficient algorithm? Particularly, is there a greedy algorithm for this problem?

I tried two greedy algorithms, but they already fail for weights 1 and 2. For example, suppose there are 3 items, with values 100, 99, 51 and weights 2, 1, 1:

  • If the capacity is 2, then the greedy algorithm that selects items by value fails (it selects the item worth 100, while the maximum is 99 + 51).
  • If the capacity is 3, then the greedy algorithm that selects items by value/weight ratio fails (it selects 99 + 51, while the maximum is 100 + 99).

However, this does not rule out the possibility that another greedy algorithm (sorting by some other criterion) can work. Is there a greedy algorithm for this problem? Alternatively, is there a proof that such an algorithm does not exist?

Interpretability of feature weights from Gaussian process classifier

Suppose I trained a Gaussian process classifier with a linear kernel (using the GPML toolbox) and obtained a weight for each input feature.

My question is then:

Does it make sense, and if so when, to interpret the weights as indicating the real-life importance of each feature, or to interpret, at the group level, the average over the weights of a group of features?

How do I modify weights based on a cost function

I currently have the structure for the simplest possible neural network. It takes a boolean input and, after being trained to tell them apart, outputs whether that input is true or false. I have implemented the structure and have two weights that link to the output nodes.

After randomising the initial weights, inputting true into the network currently returns a confidence of 0.7 that it is true and 0.3 that it is false, giving a cost of 0.18 ((1 - 0.7)**2 + (0 - 0.3)**2). How do I then go from the cost function to modifying the weights to give the expected outputs? I am aware of multivariable calculus, but I am not sure how it works in this example or how I would program it into the system to modify the dependent weights.
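
Here is a minimal gradient-descent sketch for the two-weight setup described (the sigmoid activation and all names are my assumptions, not from the question): differentiate the cost with respect to each weight via the chain rule and step downhill.

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    w = [0.5, -0.5]              # in practice these start randomised
    x = 1.0                      # the "true" input
    target = [1.0, 0.0]          # desired outputs for the two nodes
    lr = 0.5                     # learning rate

    for step in range(1000):
        out = [sigmoid(w[0] * x), sigmoid(w[1] * x)]
        # cost = sum((t_i - o_i)^2); the chain rule gives
        # dC/dw_i = -2 (t_i - o_i) * o_i (1 - o_i) * x
        for i in range(2):
            grad = -2 * (target[i] - out[i]) * out[i] * (1 - out[i]) * x
            w[i] -= lr * grad

    print([sigmoid(wi * x) for wi in w])  # approaches [1, 0]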

Packing with time-variant weights

This appears to be a knapsack / bin-packing problem, but I seem to have got stuck and would appreciate contributions. (If this is the wrong SE site for this question, rather than downvoting it, could you comment with the correct one, and I can post it there.)

Scenario 1: Tough (for me!) There is a one-day conference with a set of (4 or more) sessions. The conference will be attended by a number of companies, each one being represented by one or more representatives.

Across sessions, the number of representatives for each company will vary (and may be zero), with each individual having the same chance of being present as any other individual (so a company is more likely to be represented when it has more representatives).

There is a single row of seats at the conference, and there are more than enough seats for the most popular session (e.g., if the most popular session has 100 delegates, there could be 120 seats for the whole conference).

Constraints & Priorities (from highest to lowest)

  • Constraint: Company representatives must be seated
  • Constraint: Representatives of the same company sit next to each other
  • Constraint: The number of seats will not exceed 125% of the most popular session's attendance
  • Priority: A company representative should not need to change seats across consecutive sessions
  • Priority: Companies should be in approximately alphabetical order

Goal: To satisfy the constraints and priorities optimally.

Example: 15 chairs, 4 companies (A-D), 4 sessions (S1-S4).

   // Session attendees by company
   S1: A2 B6 C3 D3
   S2: A4 B5 C1 D2
   S3: A3 B3 C4 D1
   S4: A5 B2 C5 D0

   // Possible solution (I did this manually!)
   S1: [AA.BBBBBBCCCDDD]
   S2: [AAAABBBBBC...DD]
   S3: [AAA...BBBCCCC.D]
   S4: [AAAAA..BBCCCCC.]

Question: How can I solve this algorithmically? The algorithm doesn't have to be particularly fast, but it does need to yield working results.

Scenario 2: Tougher?! The same as above, but the row of seats has enforced break points (pillars, walkways, etc.). I think this latter issue is merely a ‘knapsack’-type modification to the above problem.

Thoughts towards a solution: It seems that it should be possible to have consecutive seating available in most cases, which implies that a solution could be found by identifying company-invariant seats (filling the minimum company representation across all sessions) and leaving gaps between companies that are somehow calculated from the variation of each company and its neighbour. A crude baseline along these lines is sketched below.
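
As a rough baseline for comparison (simpler than the gap-sizing idea above, and all naming is my own): give each company a fixed block sized to its maximum attendance across all sessions, laid out alphabetically. Every representative then keeps the same seat in every session, which satisfies both priorities, but the block widths may blow the 125% seat constraint.

    def fixed_blocks(attendance):
        # attendance: dict company -> list of per-session head counts
        blocks, offset = {}, 0
        for company in sorted(attendance):      # approximately alphabetical
            width = max(attendance[company])    # worst-case block size
            blocks[company] = (offset, offset + width)
            offset += width
        return blocks, offset                   # seat ranges and total seats

    # The example above: A-D over sessions S1-S4.
    attendance = {"A": [2, 4, 3, 5], "B": [6, 5, 3, 2],
                  "C": [3, 1, 4, 5], "D": [3, 2, 1, 0]}
    blocks, total = fixed_blocks(attendance)
    print(total, blocks)  # 19 seats: violates the 125% cap (most popular session is 14)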

It is trivial to find cases where there is only a partial solution, e.g. the following, but that's okay; I still need to get the best solution I can.

   S1: [AAAABC]
   S2: [ABCCCC]