Schedule Optimization With Priority and Weighted Costs

I need an algorithm to determine the best itinerary for a series of events.

Each event has a time, a location, and a reward. Arriving at an event on time yields the reward; arriving too late means no reward. Each event is at a physical location, so it takes time to travel from one event to the next. It is not necessary to attend every event.

What itinerary will yield the largest total reward?

Does anyone know of an existing algorithm for this, or one that could be easily adapted? Given the similarity to the traveling salesman problem, I am tempted to start with a weighted TSP solution and work from there.
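
To make the problem concrete: if each event happens at one fixed time, then attending event i followed by event j is feasible exactly when time(i) + travel(i, j) <= time(j). Sorting events by time therefore yields a DAG, and the best itinerary is a maximum-reward path in it, which avoids TSP machinery entirely for small inputs. Below is a minimal sketch of that observation; the Event record and the travel function are placeholders of mine, and it assumes you can start the day at any event's location in time for it.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Itinerary {

    // Placeholder event model: one fixed time, a 2D location, and a reward.
    record Event(double time, double x, double y, double reward) {}

    // Placeholder travel-time model: proportional to Euclidean distance.
    static double travel(Event a, Event b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y());
    }

    // dp[j] = best total reward of an itinerary ending at event j.
    // Event i can precede event j iff time(i) + travel(i, j) <= time(j).
    static double bestReward(List<Event> events) {
        List<Event> es = new ArrayList<>(events);
        es.sort(Comparator.comparingDouble(Event::time));
        double best = 0;
        double[] dp = new double[es.size()];
        for (int j = 0; j < es.size(); j++) {
            dp[j] = es.get(j).reward(); // any event may be the first attended
            for (int i = 0; i < j; i++) {
                if (es.get(i).time() + travel(es.get(i), es.get(j)) <= es.get(j).time()) {
                    dp[j] = Math.max(dp[j], dp[i] + es.get(j).reward());
                }
            }
            best = Math.max(best, dp[j]);
        }
        return best; // O(n^2) overall; attending nothing scores 0
    }
}

This only breaks down if events can have multiple possible times or durations, at which point TSP-style formulations become more relevant.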

Why can’t a compiler just “think more” about optimization?

This happens to me from time to time: I compile my code with the highest optimization level (-Ofast) of the allegedly fastest compiler (GCC) of one of the fastest languages (C/C++). It takes 3 seconds. I run the compiled program, measuring performance. Then I make some trivial change (say, marking a function inline), compile it again, and it runs 20% faster.

Why? Often I’d rather wait a few minutes or even hours, but be sure that my code is at least hard to optimize further. Why does the compiler give up so quickly?

As far as I know, modern architectures are extremely complicated and hard to optimize for a priori. Couldn't a compiler test many possibilities and see which one is the fastest? I effectively do this by making random changes in the source code, but that doesn't sound optimal.

Proof of the undecidability of compiler code optimization

While reading Compilers by Alfred Aho, I came across this statement:

The problem of generating the optimal target code from a source program is undecidable in general.

The Wikipedia entry on optimizing compilers reiterates the same without a proof.

Here’s my question: Is there a proof (formal or informal) of why this statement is true? If so, please provide it.
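
For context, here is the toy example I have in mind (my own sketch of the usual halting-problem flavour of argument):

// If mystery() never returns, then "return 1" is dead code and the truly
// optimal target code for f() can drop it (indeed, f() itself reduces to
// whatever mystery() compiles to). So emitting optimal code requires
// deciding whether arbitrary code halts, i.e. the undecidable halting problem.
static void mystery() { /* arbitrary code the compiler must analyze */ }

static int f() {
    mystery();
    return 1;
}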

SEARCH ENGINE OPTIMIZATION

To design a website that is compatible with SEO, we first have to find out what exactly SEO entails. Search engine optimization, or SEO for short, describes practices such as finding the terms and phrases that can generate qualified traffic to your website, and making your website friendly to search engines. In layman’s terms, this set of rules allows your site to be placed correctly when users search on Google or any other search engine. SEO is becoming more difficult all the time and is constantly changing. As one example, Google owns more than 7,000 websites that are managed by hundreds of product and marketing teams around the world, and every day more than 200 changes are made to those websites, any of which can affect a site’s SEO. One suggestion I would make is to start with a little data; it may seem simple, but it can help you focus on small incremental changes in the overall SEO strategy of a website, which can generate big profits over time.

Do you agree or disagree? Why or why not?

Force-directed graph optimization with step-wise costs and constraints


Introduction

I have an optimization problem. There are up to 25 nodes. The connectivity between the nodes is far less important than their Cartesian placement. Since all nodes can potentially affect each other in the optimization problem, it is safe to model this as a complete, undirected graph.

In most modes of this optimization problem there are two or three regions extending out infinitely from the origin, separated by straight lines, e.g.

 A | B
--------
   C

Each region encompasses exactly one or more Cartesian quadrants. Each region imposes a fixed cost or benefit on every node within it, but this cost does not change the farther into the region a node gets.

Costs

This is the exhaustive list of costs and constraints on the nodes; all factors are cost multipliers (higher is worse). Distances are shown in metres but are really just discrete integers.

  • The distance between any two nodes must be at least 4m
  • For each node pair within 25m, there is a factor of 1.04
  • For each node, if there are three or fewer other nodes within 120m, there is a factor of 0.90
  • Depending on what region a node is in, the node has a factor between 0.90 and 1.10
  • For every node, there is an individual edge factor to every other node within 25m of between 0.90 and 1.10
  • The product of all of the above factors, for each node, will have a set minimum of 0.67 and a set maximum of 1.50

So none of the factors are continuous, and none are differentiable in space since they are all step-wise.
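
To pin down what is being evaluated, this is how I currently picture the per-node cost computation, in Java for concreteness (regionFactor and edgeFactor stand in for the lookup tables implied by the list above, and the 4 m rule is treated as a hard constraint):

// Sketch only. pos[i] = {x, y} in metres; regionFactor[i] is the 0.90-1.10
// factor of whatever region node i currently sits in; edgeFactor[i][j] is the
// individual 0.90-1.10 edge factor. All of these are placeholder names.
static double nodeCost(int i, int[][] pos, double[] regionFactor, double[][] edgeFactor) {
    double f = regionFactor[i];
    int within120 = 0;
    for (int j = 0; j < pos.length; j++) {
        if (j == i) continue;
        double d = Math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]);
        if (d < 4) return Double.POSITIVE_INFINITY; // hard constraint: nodes >= 4 m apart
        if (d <= 25) {
            f *= 1.04;             // pair-proximity penalty within 25 m
            f *= edgeFactor[i][j]; // individual edge factor within 25 m
        }
        if (d <= 120) within120++;
    }
    if (within120 <= 3) f *= 0.90; // three or fewer other nodes within 120 m
    return Math.min(1.50, Math.max(0.67, f)); // clamp the product to [0.67, 1.50]
}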

Search space

The 2D coordinates of each node are discrete and unbounded. Since there are 25 nodes, there are 50 integer variables (an x and a y for each node) to optimize. The hope is that, even though there are no bounds, there will be enough sub-1.0 factors for the optimization to converge rather than force the nodes to fly apart.

If I get this working well enough for a given region configuration, I might expand this to selection of a region configuration, for which there are currently 46 possibilities.

Optimization

Since none of the cost factors are space-differentiable, something like Gradient Descent would not be possible.

I have read about force-directed graph drawing; in particular this is interesting:

using the Kamada–Kawai algorithm to quickly generate a reasonable initial layout and then the Fruchterman–Reingold algorithm to improve the placement of neighbouring nodes.

Unfortunately, it seems that these methods have no notion of cost tied to absolute location, only distance of nodes relative to each other.

Implementation

I will probably end up implementing this in Python.

Any hints on how to approach this would be appreciated.

How to approach a combinatorial optimization problem with multiple objectives?

I am considering a combinatorial optimization problem with two objectives. The objectives trade off against each other: minimizing the first objective alone yields the worst solution for the second, and vice versa. How should I start tackling such problems? And if anyone can recommend a famous combinatorial problem of the same nature, I would appreciate it.
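
For concreteness, the only general device I know of for exposing the trade-off is weighted-sum scalarization:

$\min_{x \in X} \; \lambda f_1(x) + (1 - \lambda) f_2(x), \quad \lambda \in [0, 1]$

Sweeping $\lambda$ from 0 to 1 traces out compromise solutions (points on the convex part of the Pareto front), but I do not know if that is the right place to start.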

Is this the correct “standard form” of a nonlinear programming (optimization) problem, and if it is, why is it in this form?

Rather a simple question, I guess, though it makes me wonder. The standard form I’ve found in the book (and on Wikipedia) is something like this:

$\min f(x)$

$\text{s.t.}$

$h_i(x) = 0$

$g_i(x) \le 0$

Is this considered a “standard form” for nonlinear optimization problems? And if it is, why is it defined like this? Why does it have to be exactly the min of the function, and why do the constraints have to be either equal to 0 or less than or equal to 0? I couldn’t actually find any answer as to why it is the way it is. Is there some important reason why it couldn’t be max, for example?
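
For reference, these are the rewrites I assume make this form lose no generality (standard identities):

$\max f(x) = -\min\,(-f(x))$

$g(x) \ge 0 \iff -g(x) \le 0$

$h(x) = b \iff h(x) - b = 0$

So a maximization or a $\ge$ constraint can always be converted into the standard form, which suggests the form is a convention rather than a restriction; my question is whether there is a deeper reason for that convention.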

Method for combining derivative-free optimization results of different data inputs

I am working on an algorithm that has multiple fixed parameters. The algorithm analyzes time series data and spits out a number. The fixed parameters need to be chosen such that this number is as small as possible.

What I found is that when I optimize the parameters for a specific time period, those parameters don’t necessarily work well when used on another time period.

The way I see it, is that there are two possible solutions to this problem:

  1. use a longer time period when optimizing the parameters
  2. find a method of combining the optimal parameters for different time periods, such that these “averaged” parameters work well on all time periods

Option 1 would be incredibly expensive in terms of computation time. And although it makes intuitive sense that it should fix the problem, I am not sure that it actually would.

Option 2 reminds me of training neural networks, where one feeds in a large number of “data points” and somehow takes a (weighted) average of the results to find a set of parameters that works well for all data points. Unfortunately, I know little to nothing about the algorithms used for this kind of optimization/learning.
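
To make option 2 concrete, the crudest combination rule I can imagine is a weighted average of the per-period optima, weighting each optimum by how well it generalizes to the other periods. Everything below (optima, crossScore, the inverse weighting) is a placeholder of mine, not a method I know to be sound:

// Hypothetical sketch of option 2. optima[k] is the best parameter vector
// found for period k; crossScore[k] is assumed to measure how badly that
// vector performs on the *other* periods (positive, lower = better), so we
// weight each optimum by the inverse of its cross-period score.
static double[] combine(double[][] optima, double[] crossScore) {
    int dim = optima[0].length;
    double[] weights = new double[optima.length];
    double weightSum = 0;
    for (int k = 0; k < optima.length; k++) {
        weights[k] = 1.0 / crossScore[k];
        weightSum += weights[k];
    }
    double[] combined = new double[dim];
    for (int k = 0; k < optima.length; k++) {
        for (int j = 0; j < dim; j++) {
            combined[j] += weights[k] * optima[k][j] / weightSum;
        }
    }
    return combined;
}

Of course, averaging optima only makes sense if the objective is reasonably well behaved between them, which is exactly the part I am unsure about.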

Any help or suggestions are greatly appreciated. Please let me know if there is anything you’d like me to expand upon.

Thanks!

How to calculate weird numbers in an efficient way – algorithm optimization needed

I am trying to print the first n weird numbers, where n is a really big number (e.g. 15000).

I found this site to check my algorithm against the first 600 weird numbers, in case I have errors: http://www.numbersaplenty.com/set/weird_number/more.php

However, my algorithm is really slow for bigger numbers:

import java.util.ArrayList;
import java.util.List;

public class Test {

    public static void main(String[] args) {
        int n = 2;

        for (int count = 1; count <= 15000; n += 2) {
            if (n % 6 == 0) {
                continue;
            }

            List<Integer> properDivisors = getProperDivisors(n);
            int divisorSum = properDivisors.stream().mapToInt(i -> i.intValue()).sum();

            if (isDeficient(divisorSum, n)) {
                continue;
            }

            if (isWeird(n, properDivisors, divisorSum)) {
                System.out.printf("w(%d) = %d%n", count, n);
                count++;
            }
        }
    }

    private static boolean isWeird(int n, List<Integer> divisors, int divisorSum) {
        return isAbundant(divisorSum, n) && !isSemiPerfect(divisors, n);
    }

    private static boolean isDeficient(int divisorSum, int n) {
        return divisorSum < n;
    }

    private static boolean isAbundant(int divisorSum, int n) {
        return divisorSum > n;
    }

    private static boolean isSemiPerfect(List<Integer> divisors, int sum) {
        int size = divisors.size();

        // subset[i][j] is true if there is a subset of divisors[0..j-1] with sum equal to i
        boolean[][] subset = new boolean[sum + 1][size + 1];

        // If sum is 0, then the answer is true
        for (int i = 0; i <= size; i++) {
            subset[0][i] = true;
        }

        // If sum is not 0 and the set is empty, then the answer is false
        for (int i = 1; i <= sum; i++) {
            subset[i][0] = false;
        }

        // Fill the subset table in bottom-up manner
        for (int i = 1; i <= sum; i++) {
            for (int j = 1; j <= size; j++) {
                subset[i][j] = subset[i][j - 1];
                int test = divisors.get(j - 1);
                if (i >= test) {
                    subset[i][j] = subset[i][j] || subset[i - test][j - 1];
                }
            }
        }

        return subset[sum][size];
    }

    private static final List<Integer> getProperDivisors(int number) {
        List<Integer> divisors = new ArrayList<Integer>();
        long sqrt = (long) Math.sqrt(number);
        for (int i = 1; i <= sqrt; i++) {
            if (number % i == 0) {
                divisors.add(i);
                int div = number / i;
                if (div != i && div != number) {
                    divisors.add(div);
                }
            }
        }
        return divisors;
    }
}

I have three easy breakouts:

  1. If a number is divisible by 6 it is semiperfect, which means it cannot be weird

  2. If a number is deficient it cannot be weird

The above points are based on https://mathworld.wolfram.com/DeficientNumber.html

  3. If a number is odd it cannot be weird, at least up to 10^21 (which is good for the numbers I am trying to obtain)

The other optimization I used is for finding all the divisors of a number: instead of looping up to n, we loop up to sqrt(n).

However, I still need to optimize:

  1. isSemiPerfect, because it is really slow
  2. getProperDivisors, which would be good to optimize further too

Any suggestions are welcome, since I cannot find any more optimizations that would let me find 10000 weird numbers in a reasonable time.

PS: Code in Java, C#, PHP or JavaScript is OK for me.

EDIT: I found this topic and modified isSemiPerfect to look like the code below. However, it seems that this does not speed up the calculations but slows them down:

private static boolean isSemiPerfect(List<Integer> divisors, int n) {
    BigInteger combinations = BigInteger.valueOf(2).pow(divisors.size());
    for (BigInteger i = BigInteger.ZERO; i.compareTo(combinations) < 0; i = i.add(BigInteger.ONE)) {
        int sum = 0;
        for (int j = 0; j < i.bitLength(); j++) {
            sum += i.testBit(j) ? divisors.get(j) : 0;
        }

        if (sum == n) {
            return true;
        }
    }

    return false;
}

The script has now been running for 11 hours and I am only at the 4800th number.

WordPress Speed Optimization Service – Make Your WordPress Super Fast

We are a team of WordPress experts who focus on WordPress speed optimization.

What will we do for you?

  • Google PageSpeed / Pingdom / GTmetrix Speed Optimization
  • Mobile Loading Speed Optimization
  • Extra Loading Speed Optimization
  • Optimize All Images
  • Reduce the number of plugins
  • Use CDN ( if you want )
  • Full Before & After Report

We charge $150 per website.

Have any questions? Feel free to contact me!

My contacts:
Email: dmitry.wplegends@gmail.com
Skype: never2stop
Telegram: WPLegends
