Response time optimization – Getting record count based on Input Parameters

I’m trying to optimize the process of calculating the count of records based on variable input parameters. The whole process spans several queries, functions and stored procedures.

1/ Basically, the front-end sends a request to the DB (it calls a stored procedure) with an input parameter (a DataTable). This DataTable (the input parameter collection) contains 1 to X records. Each record corresponds to one specific rule.

2/ The SP receives the collection of rules (as a custom table type) and iterates through them one by one. Each rule, apart from other metadata, contains the name of the specific function that should be used to evaluate it.

For every rule, the SP prepares a dynamic query in which it calls the mentioned function with three input parameters:

a/ a table-valued parameter of a custom memory-optimized table type (with a hash index)
b/ a collection of lookup values (usually INTs) that the SELECT query uses to filter data, i.e. "get me all records that have fkKey in (x1, x2, x3)"
c/ a BIT flag determining whether this is the first rule in the whole process

Each function has an IF statement that determines, based on the c/ parameter, whether it should return all records that fulfil the input criteria (b/), or whether it should apply the criteria on top of a join against the result of the previous rule, which is passed in via the custom table (a/).

3/ Once the function is run, its result is INSERTed into a table variable called @tmpResult. @result is then compared with @tmpResult, and records that are not in @tmpResult are DELETEd from @result.

  • @result is a table variable (of a custom memory-optimized table type) that holds the intermediate result during the whole SP execution. It is fully populated by the first rule; every subsequent rule only removes records from it.

4/ The cycle repeats for every rule until all of the rules have been processed. At the end, COUNT is taken over the records in @result and returned as the result of the SP.
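To make the flow concrete, here is a rough Python sketch of the logic in steps 1–4 (the names and the run_rule callable are made up for illustration; the real implementation is T-SQL with dynamic SQL and memory-optimized table types):

    # Conceptual sketch only: each rule maps to a filter function, the first rule
    # seeds the result set, and every subsequent rule can only shrink it.

    def evaluate_rules(rules, run_rule):
        """
        rules:    list of (function_name, lookup_values) tuples, one per input rule
        run_rule: stand-in for the dynamic SQL call; given the current result
                  (parameter a/), the function name, the lookup values (b/) and
                  the first-rule flag (c/), it returns the matching record keys
        """
        result = set()                                   # plays the role of @result
        for i, (function_name, lookup_values) in enumerate(rules):
            is_first_rule = (i == 0)                     # the c/ BIT parameter
            tmp = set(run_rule(result, function_name, lookup_values, is_first_rule))
            if is_first_rule:
                result = tmp                             # fully populated by rule 1
            else:
                result &= tmp                            # later rules only delete rows
        return len(result)                               # the final COUNT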

A few things to take into account:

  • There are dozens of different types of rules, and the list of rules only grows over time. That’s why a dynamic query is used.
  • The most effective way we have found so far to temporarily store records between individual rule executions is a custom memory-optimized table type. We tried a lot of things, but this one seems to be the fastest.
  • The number of records usually returned for a single rule is roughly between 100 000 and 3 000 000. That’s why a bucket count of 5 000 000 is used for the hash-indexed temporary tables. We also tried a nonclustered index, but it was slower than the hash index.
  • The input collection of rules can vary widely: anything from 1 rule up to dozens of rules can be used at once.
  • Almost every rule can be defined with as few as 2 lookup values, and at most with dozens or, in a few cases, even hundreds of values. For a better understanding of rules, here are some examples:

    Rule1Color,    {1, 5, 7, 12}
    Rule2Size,     {100, 200, 300}
    Rule3Material, {22, 23, 24}

Basically, every rule is specified by its designation, which corresponds to a specific function, and by its collection of lookup values. The possible lookup values differ based on the designation.

What we have done to optimize the process so far:

  • Where a big number of records needs to be stored temporarily, we use memory-optimized table variables (we also tried temp tables, but performance was basically the same as with the memory-optimized variants).
  • We significantly reduced and optimized the source tables the SELECT statements are run against.

Currently, the overall load is split roughly 50/50 between the I/O cost of the SELECT statements and the cost of moving records between temporary tables. Which is frankly not so good: ideally the only bottleneck should be the I/O operations, but so far we have not been able to come up with a better solution, since the whole process has a lot of variability.

I will be grateful for any idea you can throw my way. Of course, feel free to ask questions if I failed to explain some part of the process adequately.

Thank you

Can Mathematica solve matrix-based parametric (convex or semidefinite) constrained optimization problems?

I have gone through Mathematica’s documentation and guides on ConvexOptimization, ParametricConvexOptimization and SemidefiniteOptimization. I am also running the latest version of Mathematica.

The kind of matrix-based, parametric, constrained optimization problem I want to solve is this:

\begin{equation}
\begin{aligned}
&\min_{X_j,\, L_{jk}} \text{Tr}(A L) \\
&\text{such that } X_j \text{ and } L_{jk} \text{ are } 4\times4 \text{ Hermitian matrices,} \\
&G_k \cdot X_j = \delta_{jk}, \\
&L := \begin{bmatrix} L_{11} & L_{12} & L_{13} \\ L_{12} & L_{22} & L_{23} \\ L_{13} & L_{23} & L_{33} \end{bmatrix}
\succeq
\begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix}
\begin{bmatrix} X_1 & X_2 & X_3 \end{bmatrix}
\end{aligned}
\end{equation}
where the variables to be optimized over are $X_j$ and $L_{jk}$ ($j$ and $k$ run from 1 to 3), which are themselves matrices! The matrices $G_k$ and $A$ depend on some parameter $\alpha$ (and satisfy additional properties).
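(As an aside, and not part of the original statement of the problem: the quadratic constraint can be rewritten as a linear matrix inequality via the standard Schur-complement lemma,
\begin{equation}
L \succeq X X^{\dagger}
\quad\Longleftrightarrow\quad
\begin{bmatrix} L & X \\ X^{\dagger} & I_4 \end{bmatrix} \succeq 0,
\qquad
X := \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix},
\end{equation}
where $X^{\dagger} = \begin{bmatrix} X_1 & X_2 & X_3 \end{bmatrix}$ since each $X_j$ is Hermitian. I mention it only because it makes the semidefinite structure of the constraint explicit.)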

I have been able to run this kind of optimization in MATLAB, and also a much simpler version of this in Mathematica, where $j, k = 1$ and the parameter value is fixed, using:

    ConvexOptimization[
      Tr[\[Rho]0 . v11],
      {VectorGreaterEqual[{v11, x1}, "SemidefiniteCone"] &&
        Tr[\[Rho]0 . x1] == 0 && Tr[\[Rho]1 . x1] == 1 &&
        Element[x1, Matrices[{4, 4}, Complexes, Hermitian]] &&
        Element[v11, Matrices[{4, 4}, Complexes, Hermitian]]},
      {x1, v11}]

However, I simply cannot get the full problem to run in Mathematica, using either ConvexOptimization[ ] (at fixed parameter values), ParametricConvexOptimization[ ], SemidefiniteOptimization[ ], or Minimize[ ].

ConvexOptimization[ ] at fixed parameter values for $j, k = 1, 2$ shows the warning ConvexOptimization::ctuc: The curvature (convexity or concavity) of the term X1.X2 in the constraint {{L11,L12},{L12,L22}}Underscript[\[VectorGreaterEqual], Subsuperscript[\[ScriptCapitalS], +, \[FilledSquare]]]{{X1.X1,X1.X2},{X1.X2,X2.X2}} could not be determined.

Minimize[ ] shows the error Minimize::vecin: Unable to resolve vector inequalities ...

And ParametricConvexOptimization[ ] and SemidefiniteOptimization[ ] simply return the input as output.

Does anyone have experience with running such matrix-based optimizations in Mathematica? Thanks for your help.

EDIT 1: For the two-dimensional case ($j, k = 1, 2$) I tried (with $A$ the identity matrix, and at a fixed parameter value):

    ConvexOptimization[
      Tr[Tr[ArrayFlatten[{{L11, L12}, {L12, L22}}]]],
      {VectorGreaterEqual[{ArrayFlatten[{{L11, L12}, {L12, L22}}],
          ArrayFlatten[{{X1 . X1, X1 . X2}, {X1 . X2, X2 . X2}}]}, "SemidefiniteCone"] &&
        Tr[\[Rho]0 . X1] == 0 && Tr[\[Rho]0 . X2] == 0 &&
        Tr[\[Rho]1 . X1] == 1 && Tr[\[Rho]1 . X2] == 0 &&
        Tr[\[Rho]2 . X1] == 0 && Tr[\[Rho]2 . X2] == 1 &&
        Element[X1, Matrices[{4, 4}, Complexes, Hermitian]] &&
        Element[X2, Matrices[{4, 4}, Complexes, Hermitian]] &&
        Element[L11, Matrices[{4, 4}, Complexes, Hermitian]] &&
        Element[L12, Matrices[{4, 4}, Complexes, Hermitian]] &&
        Element[L22, Matrices[{4, 4}, Complexes, Hermitian]]},
      {X1, X2, L11, L12, L22}]

and for the three-dimensional case ($j, k = 1, 2, 3$) with a variable parameter value and $A$ the identity matrix, I tried

    ParametricConvexOptimization[
      Tr[Tr[ArrayFlatten[{{L11, L12, L13}, {L12, L22, L23}, {L13, L23, L33}}]]],
      {VectorGreaterEqual[{ArrayFlatten[{{L11, L12, L13}, {L12, L22, L23}, {L13, L23, L33}}],
          ArrayFlatten[{{X1}, {X2}, {X3}}] . Transpose[ArrayFlatten[{{X1}, {X2}, {X3}}]]},
         "SemidefiniteCone"],
       Tr[\[Rho]0 . X1] == 0, Tr[\[Rho]0 . X2] == 0, Tr[\[Rho]0 . X3] == 0,
       Tr[\[Rho]1 . X1] == 1, Tr[\[Rho]1 . X2] == 0, Tr[\[Rho]1 . X3] == 0,
       Tr[\[Rho]2 . X1] == 0, Tr[\[Rho]2 . X2] == 1, Tr[\[Rho]2 . X3] == 0,
       Tr[\[Rho]3 . X1] == 0, Tr[\[Rho]3 . X2] == 0, Tr[\[Rho]3 . X3] == 1},
      {Element[X1, Matrices[{4, 4}, Complexes, Hermitian]],
       Element[X2, Matrices[{4, 4}, Complexes, Hermitian]],
       Element[X3, Matrices[{4, 4}, Complexes, Hermitian]],
       Element[L11, Matrices[{4, 4}, Complexes, Hermitian]],
       Element[L12, Matrices[{4, 4}, Complexes, Hermitian]],
       Element[L13, Matrices[{4, 4}, Complexes, Hermitian]],
       Element[L22, Matrices[{4, 4}, Complexes, Hermitian]],
       Element[L23, Matrices[{4, 4}, Complexes, Hermitian]],
       Element[L33, Matrices[{4, 4}, Complexes, Hermitian]]},
      {\[Alpha]}]

Here, the $\rho_k$ matrices are the $G_k$ matrices.

When answering optimization questions in E6 (D&D3.5 variant), can you assume bonus feats whenever convenient, or only at level 6?

I’m very familiar with the underlying 3.5 system, but not very familiar with the conventions of E6. It seems like character-build questions handwave feat requirements away freely, but it’s not clear to me how that works. Can you select arbitrarily many bonus feats at any arbitrary level? Or is there some known way to achieve the same effect?

Generally, I would expect this sort of question to be “read the book.” However, E6 does not have a “book” per se (or does it?), and to the extent that it does, that book appears to differ from common practice.

MySQL optimization for Magento 2

I’m running my Magento 2 project on a 3-core, 8 GB RAM VM. I did some benchmarks and I see that my DB queries take quite some time to execute. Traffic on this site is almost zero. MySQL is running on Percona Server 5.7.31-34. Database size is 1.7 GB. Can anyone spot some obvious mistakes and give me some suggestions?

    2947 queries in 1,057.20ms (average time: 0.36ms) - 2788 queries/second
    2912 SELECT - 0 INSERT - 0 UPDATE - 0 DELETE - 0 TRANSACTION

    2618 queries in 1,132.66ms (average time: 0.43ms) - 2311 queries/second
    2593 SELECT - 0 INSERT - 0 UPDATE - 0 DELETE - 0 TRANSACTION

    2915 queries in 1,213.47ms (average time: 0.42ms) - 2402 queries/second
    2878 SELECT - 0 INSERT - 0 UPDATE - 0 DELETE - 0 TRANSACTION

    620 queries in 235.52ms (average time: 0.38ms) - 2633 queries/second
    610 SELECT - 0 INSERT - 0 UPDATE - 0 DELETE - 0 TRANSACTION

    3214 queries in 1,930.16ms (average time: 0.60ms) - 1665 queries/second
    3179 SELECT - 0 INSERT - 0 UPDATE - 0 DELETE - 0 TRANSACTION

At the moment my.cnf is as follows:

    [mysqld]
    user = mysql
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /var/lib/mysql
    tmpdir = /tmp
    lc-messages-dir = /usr/share/mysql
    explicit_defaults_for_timestamp

    ### MyISAM #
    key_buffer_size = 16M
    myisam-recover-options = FORCE,BACKUP

    ### SAFETY #
    innodb = force
    max_allowed_packet = 250M
    max_connect_errors = 100000
    bind-address = 127.0.0.1
    skip-name-resolve

    ### LANGUAGE #
    #init_connect='SET collation_connection = utf8_unicode_ci'
    #init_connect='SET NAMES utf8'
    #character-set-server=utf8
    #collation-server=utf8_unicode_ci
    #skip-character-set-client-handshake

    ### CACHES AND LIMITS #
    back_log = 20
    interactive_timeout = 7200
    wait_timeout = 7200
    net_read_timeout = 120
    net_write_timeout = 300
    sort_buffer_size = 8M
    read_buffer_size = 8M
    read_rnd_buffer_size = 2M
    join_buffer_size = 16M
    tmp_table_size = 512M
    max_heap_table_size = 512M
    query_cache_type = 1
    query_cache_size = 128M
    max_connections = 60
    thread_cache_size = 32
    thread_pool_size = 16
    open_files_limit = 65535
    table_definition_cache = 8000
    table_open_cache = 10000

    ### INNODB_ #
    innodb_thread_concurrency = 0
    innodb_lock_wait_timeout = 7200
    innodb_flush_method = O_DIRECT
    innodb_flush_log_at_trx_commit = 2
    innodb_log_files_in_group = 2
    innodb_log_file_size = 512M
    innodb_log_buffer_size = 64M
    innodb_file_per_table = 1
    innodb_read_io_threads = 8
    innodb_write_io_threads = 8
    innodb_buffer_pool_instances = 2
    innodb_buffer_pool_size = 2000M

Schedule Optimization With Priority and Weighted Costs

I need an algorithm to determine the best itinerary for a series of events.

Each event has a time, location, and reward. Arriving at an event in time yields the reward; too late means no reward. Each event is at a physical location, so it takes time to travel from event to event. It is not necessary to attend every event.

What itinerary will yield the largest total reward?

Does anyone know if there is an existing algorithm for this, or one that would be easily adapted? Given the similarity to the traveling salesman problem, I am tempted to start with a weighted TSP solution and work from there.
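For concreteness, here is a small Python sketch of how I currently picture the problem (the event data, the fixed travel speed, and the assumption that waiting at a location is free are all mine, just for illustration):

    # A toy formalization: you start at an origin at time 0, you must be at an
    # event's location at its scheduled time to collect its reward, waiting is
    # free, and travel time is Euclidean distance divided by a constant speed.
    # All names and numbers here are made up.

    import math

    SPEED = 1.0  # distance units per time unit (assumed)

    def travel_time(a, b):
        """Euclidean travel time between two (x, y) locations."""
        return math.dist(a, b) / SPEED

    def best_total_reward(events, origin=(0.0, 0.0)):
        """
        events: list of (time, (x, y), reward) tuples.
        Dynamic program over events sorted by time: best[j] is the maximum
        total reward of an itinerary whose last attended event is event j.
        """
        events = sorted(events, key=lambda e: e[0])
        best = [float("-inf")] * len(events)

        for j, (t_j, loc_j, reward_j) in enumerate(events):
            # Option 1: event j is the first event attended.
            if travel_time(origin, loc_j) <= t_j:
                best[j] = reward_j
            # Option 2: attend event j right after some earlier event i.
            for i in range(j):
                t_i, loc_i, _ = events[i]
                if best[i] > float("-inf") and t_i + travel_time(loc_i, loc_j) <= t_j:
                    best[j] = max(best[j], best[i] + reward_j)

        feasible = [b for b in best if b > float("-inf")]
        return max(feasible, default=0.0)  # attending nothing yields 0

    # Example with three made-up events: (time, location, reward)
    events = [(5.0, (3.0, 4.0), 10.0),
              (9.0, (6.0, 8.0), 7.0),
              (20.0, (0.0, 0.0), 4.0)]
    print(best_total_reward(events))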

Why can’t a compiler just “think more” about optimization?

This happens to me from time to time: I compile my code with the highest optimization level (-Ofast) of the allegedly fastest compiler (GCC) of one of the fastest languages (C/C++). It takes 3 seconds. I run the compiled program, measuring performance. Then I make some trivial change (say, marking a function inline), compile it again, and it runs 20% faster.

Why? Often I’d rather wait a few minutes or even hours, but be sure that my code is at least hard to optimize further. Why does the compiler give up so quickly?

As far as I know, modern architectures are super complicated and hard to optimize for a priori. Couldn’t a compiler test many possibilities and see which one is the fastest? I effectively do this by making random changes to the source code, but that doesn’t sound optimal.
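To illustrate what I mean by “test many possibilities”: I could imagine something along the lines of the following crude Python script, which rebuilds the same program under a handful of flag combinations, times each binary, and reports the fastest (the file name and the flag sets are placeholders, and this only explores flags, not the source-level changes I mentioned):

    # Crude illustration of "let the compiler try many things": compile one
    # source file under several flag combinations, time each resulting binary,
    # and report the fastest. File name and flag sets are placeholders.

    import subprocess, time

    SOURCE = "program.cpp"                    # hypothetical source file
    BASE = ["g++", SOURCE, "-o", "prog"]
    EXTRA_FLAGS = [["-O2"], ["-O3"], ["-Ofast"],
                   ["-O3", "-funroll-loops"],
                   ["-O3", "-march=native"]]

    def run_time(cmd, runs=3):
        """Best-of-N wall-clock time of a command, in seconds."""
        best = float("inf")
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
            best = min(best, time.perf_counter() - start)
        return best

    results = []
    for flags in EXTRA_FLAGS:
        subprocess.run(BASE + flags, check=True)   # rebuild with this flag set
        results.append((run_time(["./prog"]), flags))

    results.sort()
    for elapsed, flags in results:
        print(f"{elapsed:8.3f}s  {' '.join(flags)}")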

Proof of the undecidability of compiler code optimization

While reading Compilers by Alfred Aho, I came across this statement:

The problem of generating the optimal target code from a source program is undecidable in general.

The Wikipedia entry on optimizing compilers reiterates the same without a proof.

Here’s my question: Is there a proof (formal or informal) of why this statement is true? If so, please provide it.

SEARCH ENGINE OPTIMIZATION

To design a website that is compatible with SEO, we first have to find out what exactly SEO entails. Search engine optimization, or SEO for short, describes a set of practices, from finding the terms and phrases that can generate qualified traffic to the website, to making the website friendly to search engines. In layman’s terms, this set of rules will allow you to be placed correctly when users search Google or any other search engine. Search engine optimization keeps getting more difficult and is constantly changing. As one example, Google owns more than 7,000 websites that are managed by hundreds of product and marketing teams around the world, and every day more than 200 changes are made to websites, which can affect the SEO of a site. One suggestion I would make is to start with little data; it may seem simple, but it can help you focus on small incremental changes in the overall SEO strategy of a website, which can generate big profits over time.

Do you agree or disagree? Why or why not?

Force-directed graph optimization with step-wise costs and constraints


Introduction

I have an optimization problem. There are up to 25 nodes. The connectivity between the nodes is far less important than the Cartesian placement of the nodes. Since all nodes can potentially affect each other in the optimization problem it is safe to model this as a complete, undirected graph.

In most modes of this optimization problem there are 2–3 regions extending out infinitely from the origin, separated by straight lines, for example:

     A | B
    --------
       C

Each region encompasses exactly one or more whole Cartesian quadrants. Each region imposes a fixed cost or benefit on every node in it, but the cost does not change the farther into the region a node gets.

Costs

This is the exhaustive list of costs and constraints on the nodes; all factors are cost multipliers (higher is worse). Distances are shown in metres but are really just discrete integers.

  • The distance between any two nodes must be at least 4m
  • For each node pair within 25m, there is a factor of 1.04
  • For each node, if there are three or fewer other nodes within 120m, there is a factor of 0.90
  • Depending on what region a node is in, the node has a factor between 0.90 and 1.10
  • For every node, there is an individual edge factor, between 0.90 and 1.10, to every other node within 25m
  • The product of all of the above factors for each node is clamped to a set minimum of 0.67 and a set maximum of 1.50

So none of the factors are continuous, and none are differentiable in space since they are all step-wise.
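To make the step-wise structure concrete, here is a rough Python encoding of the cost model above (assumptions that are mine, not settled facts: the 1.04 factor applies to both nodes of a close pair, the per-pair edge factors come from some lookup, and the overall objective is the sum of the clamped per-node products):

    # Direct encoding of the cost factors listed above, to make the step-wise
    # (non-differentiable) nature explicit.

    import math

    MIN_SEPARATION = 4      # metres (hard constraint)
    PAIR_RADIUS = 25        # metres
    SPARSE_RADIUS = 120     # metres

    def node_cost_factors(positions, region_factor, edge_factor):
        """
        positions:     list of (x, y) integer coordinates, one per node
        region_factor: callable (x, y) -> factor in [0.90, 1.10] for that region
        edge_factor:   callable (i, j) -> factor in [0.90, 1.10] for that node pair
        Returns the clamped per-node factor products, or None if the 4 m
        minimum-separation constraint is violated.
        """
        n = len(positions)
        factors = [1.0] * n
        for i in range(n):
            neighbours_120 = 0
            for j in range(n):
                if i == j:
                    continue
                d = math.dist(positions[i], positions[j])
                if d < MIN_SEPARATION:
                    return None                      # infeasible layout
                if d <= SPARSE_RADIUS:
                    neighbours_120 += 1
                if d <= PAIR_RADIUS:
                    factors[i] *= 1.04               # penalty per close pair
                    factors[i] *= edge_factor(i, j)  # individual edge factor
            if neighbours_120 <= 3:
                factors[i] *= 0.90                   # bonus for sparse neighbourhood
            factors[i] *= region_factor(*positions[i])
            factors[i] = min(max(factors[i], 0.67), 1.50)   # clamp
        return factors

    def total_cost(positions, region_factor, edge_factor):
        factors = node_cost_factors(positions, region_factor, edge_factor)
        return math.inf if factors is None else sum(factors)   # assumed objective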

Search space

The 2D coordinates of each node are discrete and unbounded. Since there are 25 nodes, there are 50 integer variables (an x and a y for each node) to optimize. The hope is that even though there are no bounds, there will be enough sub-1.0 factors for the optimization to converge rather than force the nodes to fly apart.

If I get this working well enough for a given region configuration, I might expand this to selection of a region configuration, for which there are currently 46 possibilities.

Optimization

Since none of the cost factors are space-differentiable, something like Gradient Descent would not be possible.

I have read about force-directed graph drawing; in particular this is interesting:

using the Kamada–Kawai algorithm to quickly generate a reasonable initial layout and then the Fruchterman–Reingold algorithm to improve the placement of neighbouring nodes.

Unfortunately, it seems that these methods have no notion of cost tied to absolute location, only distance of nodes relative to each other.

Implementation

I will probably end up implementing this in Python.

Any hints on how to approach this would be appreciated.