Queries on unbounded knapsack

Given $n$ types of items with integer weight $c_i$ (there is an unlimited supply of items of each type), such that $c_i \leq c$ for all $i = 1, 2, \dots, n$, answer (a lot of) queries of the form “is there some set of items of total weight $w$?” in $O(1)$ time, with some kind of precalculation in $O(nc \log c)$ time.

I’ve got a hint: for every $i = 0, 1, 2, \dots, c - 1$ find the minimal $x$ such that there is a set of items with total weight $x$ and $x \equiv i \pmod{c}$. How do I calculate all the $x$’s, and how do I use them to answer the queries?

This problem is somehow related to graphs and shortest paths, but I don’t understand the connection between the actual knapsack-like problem and graphs (maybe there is some graph with paths of the desired weight?).
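For concreteness, here is my current reading of the hint as code (a Python sketch, not necessarily the intended solution): treat the residues modulo one fixed item weight m as graph nodes (I use the smallest item; the hint’s modulus c works the same way whenever some item has weight exactly c), with an edge from i to (i + c_j) mod m of length c_j for every item type j. Dijkstra from residue 0 then yields the minimal x for every class, and each query becomes a single lookup:

    import heapq

    def precalc(costs):
        # Work modulo the weight m of one fixed item: if a weight x with
        # x % m == i is achievable, so is x + m (add one more copy of that
        # item), hence exactly the w >= dist[i] with w % m == i are achievable.
        m = min(costs)
        INF = float("inf")
        dist = [INF] * m      # dist[i] = minimal achievable weight == i (mod m)
        dist[0] = 0           # the empty set has weight 0
        pq = [(0, 0)]
        while pq:
            d, i = heapq.heappop(pq)
            if d > dist[i]:
                continue      # stale heap entry
            for cj in costs:  # "edge" i -> (i + cj) % m of length cj
                nd, j = d + cj, (i + cj) % m
                if nd < dist[j]:
                    dist[j] = nd
                    heapq.heappush(pq, (nd, j))
        return m, dist

    def query(m, dist, w):
        # O(1) per query: w is representable iff the smallest representable
        # weight in its residue class is at most w.
        return dist[w % m] <= w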

Source: problem 76 on neerc.ifmo.ru wiki.

How to reduce the number of queries generated by the Django ORM [closed]

I have the following models:

    class Order(models.Model):
        ...

    class Component(models.Model):
        line = models.ForeignKey(
            Line,
            on_delete=models.CASCADE,
            blank=True,
            null=True,
            related_name="components",
        )
        ...

    class Detail(models.Model):
        line = models.ForeignKey(
            "Line",
            on_delete=models.CASCADE,
            blank=True,
            null=True,
            related_name="details",
        )
        order = models.ForeignKey(Order, on_delete=models.CASCADE, related_name="details")
        ...

    class Line(models.Model):
        ...

**Serializer**

    class ComponentSerializer(serializers.ModelSerializer):
        qty = serializers.SerializerMethodField(read_only=True)

        def get_qty(self, component):
            return component.qty - sum(
                map(
                    some_calculation,
                    Detail.objects.filter(line__components=component, order__active=True),
                )
            )

I have a list view using model viewsets

    def list(self, request):
        queryset = Order.objects.filter(order__user=request.user.id, active=True)
        serializer = OrderSerializer(queryset, many=True)

The component serializer is used inside the order serializer. My problem is that the query inside ComponentSerializer hits the DB for every order record. If my understanding is correct, is there any way to reduce this?
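What I imagine (hypothetically) is pushing the per-component sum into a single annotated queryset. A sketch, assuming some_calculation can be rewritten as a SQL aggregate (I pretend here that it reduces to summing a numeric Detail field, called qty purely for illustration):

    from django.db.models import Prefetch, Q, Sum

    # Hypothetical: assumes some_calculation reduces to summing a numeric
    # field (named "qty" here for illustration) over the matching details.
    components = Component.objects.annotate(
        used_qty=Sum(
            "line__details__qty",
            filter=Q(line__details__order__active=True),
        )
    )

    queryset = Order.objects.filter(active=True).prefetch_related(
        Prefetch("details__line__components", queryset=components)
    )

    # The serializer could then read the precomputed value instead of
    # issuing one query per component:
    #     def get_qty(self, component):
    #         return component.qty - (component.used_qty or 0)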

Optimal algorithm for making queries to a database

There is a database of, let’s say, 500k English two-word combinations (e.g. “clover arc”, “minister horse”). I can search for an arbitrary string and get a list of the alphabetically first 1000 entries containing that string; the time each query takes is proportional to the number of results it returns, plus some constant overhead. I want to retrieve a certain dynamic number of unique results (e.g. 400k, 490k, or 499k) while spending as little time as possible on queries. What algorithm should I use to craft my queries?

One possible naive approach would be as follows (sketched in code after the list):

  1. Search for every single letter.
  2. Check which queries have maxed out the 1000 result limit.
  3. For each of those, make 26 new queries, appending every letter of the alphabet to them.
  4. Go to 2, until all queries give fewer than 1000 results.
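
Here is that naive approach as a Python sketch (search(q) is a stand-in for the actual query API, returning the alphabetically first LIMIT entries containing q; the alphabet includes a space, since entries are two-word combinations):

    from collections import deque
    import string

    LIMIT = 1000                              # server-side cap per query
    ALPHABET = string.ascii_lowercase + " "   # entries contain a space

    def harvest(search, target_count):
        # Breadth-first refinement: start from single characters and append
        # a character whenever a query maxes out the result limit, meaning
        # its result list was (possibly) truncated.
        results = set()
        frontier = deque(ALPHABET)
        while frontier and len(results) < target_count:
            q = frontier.popleft()
            hits = search(q)
            results.update(hits)
            if len(hits) >= LIMIT:            # truncated: refine this query
                frontier.extend(q + ch for ch in ALPHABET)
        return results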

However, this is obviously quite suboptimal: every time we expand the tree, the previous results become essentially obsolete; almost all of them (except those where the letter combination was at the end of the word) will reappear in the queries generated from it. There is also wasted overhead on impossible combinations (e.g. if we had a maxed-out query “qu”, on the next level we’ll be requesting “quq” and “qux”, which will certainly return no results).

How would you approach this?

(I apologize in advance if this is the wrong SE to ask this kind of question, but I couldn’t find a better match.)

Queries on large database kill connection to the server, works with LIMIT

I’m trying to run queries on a large-ish database without killing the connection to the server.

I’m using Postgres 12.1 on a Mac with 16 GB of memory and about 40 GB of free disk. The database is 78 GB according to pg_database_size, with the largest table being 20 GB according to pg_total_relation_size.

The error I get (from the log), regardless of which non-working query I run, is:

server process (PID xxx) was terminated by signal 9: Killed: 9 

In VS Code the error is "lost connection to server".

Two examples that don’t work are:

UPDATE table SET column = NULL WHERE column = 0; 
select columnA from table1 where columnA NOT IN ( select columnB from table2 ); 

I can run some of the queries (the above one, for example) by adding a LIMIT of, say, 1,000,000.
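(As an aside, I assume the LIMIT workaround could be looped into batches. A Python sketch with psycopg2, using a ctid subquery since plain UPDATE ... LIMIT isn’t valid in Postgres; table and column are the placeholder names from above, and the connection string is made up:)

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
    conn.autocommit = True                  # commit after every batch
    with conn.cursor() as cur:
        while True:
            cur.execute(
                """
                UPDATE table SET column = NULL
                WHERE ctid IN (
                    SELECT ctid FROM table WHERE column = 0 LIMIT 100000
                )
                """
            )
            if cur.rowcount == 0:           # no rows left to update
                break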

I suspected that I was running out of disk due to temp files, but in the log (with log_temp_files = 0), I can’t see any temp files being written.

I tried increasing and decreasing work_mem, maintenance_work_mem, shared_buffers, and temp_buffers. None worked; the performance was about the same.

I tried dropping all indexes, which brought down the “cost” on some of the queries, but they still killed the connection to the server.

What could be my problem and how can I troubleshoot this further?

Additionally, I read that temp files from timed-out queries are stored in pgsql_tmp. I checked the folder, and it does not have files of significant size. Could the temp files be stored somewhere else?


The log file for running a failed query looks like:

    2020-02-17 09:31:08.626 CET [94908] LOG:  server process (PID xxx) was terminated by signal 9: Killed: 9
    2020-02-17 09:31:08.626 CET [94908] DETAIL:  Failed process was running: update table
            set columnname = NULL
            where columnname = 0;
    2020-02-17 09:31:08.626 CET [94908] LOG:  terminating any other active server processes
    2020-02-17 09:31:08.626 CET [94919] WARNING:  terminating connection because of crash of another server process
    2020-02-17 09:31:08.626 CET [94919] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exi$
    2020-02-17 09:31:08.626 CET [94919] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
    2020-02-17 09:31:08.626 CET [94914] WARNING:  terminating connection because of crash of another server process
    2020-02-17 09:31:08.626 CET [94914] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exi$
    2020-02-17 09:31:08.626 CET [94914] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
    2020-02-17 09:31:08.629 CET [94908] LOG:  all server processes terminated; reinitializing
    2020-02-17 09:31:08.698 CET [94927] LOG:  database system was interrupted; last known up at 2020-02-17 09:30:57 CET
    2020-02-17 09:31:08.901 CET [94927] LOG:  database system was not properly shut down; automatic recovery in progress
    2020-02-17 09:31:08.906 CET [94927] LOG:  invalid record length at 17/894C438: wanted 24, got 0
    2020-02-17 09:31:08.906 CET [94927] LOG:  redo is not required

SQL Injection pen testing from the queries only

Is there an established method or tool available to perform pen testing on an application by testing only the queries it sends to the database?

For example, if I have a bunch of SQL Servers hosting various websites and a query came through that wasn’t parameterised, is there a way I can detect these?

Example query that probably isn’t secure:

    SELECT x,y,z FROM logins WHERE username = 'xx' and password = 'yyy'

Instead, I would expect a secure application to probably be using sp_executesql.
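The closest thing I can picture (a hypothetical heuristic in Python, not an established tool) is scanning captured query text for string literals inlined into comparisons, which usually betray concatenation rather than parameterisation:

    import re

    # Heuristic only: flags quoted literals in comparisons; it will miss
    # numeric injection and misfire on legitimate literal constants.
    LITERAL_COMPARISON = re.compile(r"=\s*'[^']*'")

    def looks_unparameterised(query_text):
        return bool(LITERAL_COMPARISON.search(query_text))

    assert looks_unparameterised("SELECT x,y,z FROM logins WHERE username = 'xx'")
    assert not looks_unparameterised("SELECT x,y,z FROM logins WHERE username = @username")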

Is there a data structure that can perform range modulo additions and range minimum queries?

It is well known that a segment tree performs range additions and range minimum queries in O(log N) each.
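
(For reference, a minimal Python sketch of that well-known baseline, a lazy segment tree with range add and range min, follows; it is not the range-modulo operation asked about below.)

    import math

    class MinSegTree:
        # mn[v] is the subtree minimum including lazy[v] (the pending add
        # for v's whole subtree) but excluding the lazies of v's ancestors.
        def __init__(self, values):
            self.n = len(values)
            self.mn = [0] * (4 * self.n)
            self.lazy = [0] * (4 * self.n)
            self._build(1, 0, self.n - 1, values)

        def _build(self, v, lo, hi, values):
            if lo == hi:
                self.mn[v] = values[lo]
                return
            mid = (lo + hi) // 2
            self._build(2 * v, lo, mid, values)
            self._build(2 * v + 1, mid + 1, hi, values)
            self.mn[v] = min(self.mn[2 * v], self.mn[2 * v + 1])

        def add(self, l, r, x, v=1, lo=0, hi=None):
            # Add x to every element in [l, r]: O(log N).
            if hi is None:
                hi = self.n - 1
            if r < lo or hi < l:
                return
            if l <= lo and hi <= r:      # fully covered: defer via lazy
                self.mn[v] += x
                self.lazy[v] += x
                return
            mid = (lo + hi) // 2
            self.add(l, r, x, 2 * v, lo, mid)
            self.add(l, r, x, 2 * v + 1, mid + 1, hi)
            self.mn[v] = self.lazy[v] + min(self.mn[2 * v], self.mn[2 * v + 1])

        def range_min(self, l, r, v=1, lo=0, hi=None):
            # Minimum over [l, r]: O(log N).
            if hi is None:
                hi = self.n - 1
            if r < lo or hi < l:
                return math.inf
            if l <= lo and hi <= r:
                return self.mn[v]
            mid = (lo + hi) // 2
            return self.lazy[v] + min(
                self.range_min(l, r, 2 * v, lo, mid),
                self.range_min(l, r, 2 * v + 1, mid + 1, hi),
            )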

Let each element i of the array have a value V[i] and a modulus M[i]. Define a “range modulo add” as follows: add X to V[i] for each element in the range L <= i <= R, and then replace V[i] with V[i] mod M[i] for each L <= i <= R. Can both this operation and range minimum queries be performed in (worst-case or average-case) o(N)? If not on arbitrary ranges [L, R], is it possible to handle range minimum queries and range modulo adds over the entire array quickly?

Use of CSS media queries with Customiser and preprocessor?

I’m building a WordPress theme, which is, of course, responsive. In my CSS I’ve used media queries to differentiate styles between desktop and mobile. Currently I’m making the theme compatible with the customiser.

I read that styles added via the customiser appear in the head of a page. Now I’m wondering how to use media queries with those styles. Some divs are used on both mobile and desktop, but with different styling. Let’s say I have a div X and in the customiser I need the following:

1) A setting with its control to change a few styles of X, but only on mobile; let’s say @media (max-width: 1199px).
2) A setting with a control to change the style of div X on desktop, at a min-width of 1200px.

How can I accomplish this?

I’m also open to using a CSS preprocessor, like LESS or SCSS; the value set in the customiser would then become a variable. What is your advice on this, and could you please give me an example (e.g. a Color Picker)? Thanks a lot!