Range queries in Trie

This question is about range queries in a trie data structure. I have a binary trie that represents a set of numbers in binary form.

The numbers are initially given as an array: {2, 1, 5, 6, 4, 7, 3, 8, 9, 10}

The trie has been constructed from the above array. Now, a range (L-R) and a number N are given. We have to find the maximum XOR value between N and the elements in the range (L-R) of the array.


Input: L=3, R=6, N=5

Output: 6 (index = 4)

I have constructed the trie, and I also know the bit manipulation behind finding the maximum XOR, but I am unable to limit my search to the given range. Kindly help.
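One standard way to limit the search to a range is a persistent trie: version i of the trie contains the first i array elements, and a query on [l, r] walks versions r+1 and l in lockstep, descending only into branches whose per-node counts differ (i.e. branches containing at least one in-range element). The sketch below uses my own hypothetical names (`Node`, `insert`, `max_xor_in_range`), and `BITS = 4` is just enough for the sample values:

```python
BITS = 4  # enough bits for the sample values (max 10)

class Node:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children = [None, None]
        self.count = 0  # how many inserted numbers pass through this node

def insert(prev, value):
    """Return a new trie root, sharing unchanged subtrees with version prev."""
    root = Node()
    root.count = (prev.count if prev else 0) + 1
    node, old = root, prev
    for bit in range(BITS - 1, -1, -1):
        b = (value >> bit) & 1
        old_child = old.children[b] if old else None
        child = Node()
        child.count = (old_child.count if old_child else 0) + 1
        node.children[b] = child
        node.children[1 - b] = old.children[1 - b] if old else None
        node, old = child, old_child
    return root

def count(node):
    return node.count if node else 0

def max_xor_in_range(versions, l, r, n):
    """Max of n ^ arr[i] for l <= i <= r (0-based, inclusive)."""
    hi, lo = versions[r + 1], versions[l]  # trie "difference" = elements in [l, r]
    best = 0
    for bit in range(BITS - 1, -1, -1):
        want = 1 - ((n >> bit) & 1)  # prefer the opposite bit to maximize XOR
        hi_child = hi.children[want] if hi else None
        lo_child = lo.children[want] if lo else None
        if count(hi_child) - count(lo_child) > 0:  # an in-range element exists here
            best |= 1 << bit
            b = want
        else:
            b = 1 - want
        hi = hi.children[b] if hi else None
        lo = lo.children[b] if lo else None
    return best

arr = [2, 1, 5, 6, 4, 7, 3, 8, 9, 10]
versions = [None]
for v in arr:
    versions.append(insert(versions[-1], v))
```

The key point is that `count(hi_child) - count(lo_child)` is exactly the number of in-range elements below a node, which is what lets the usual greedy max-XOR walk be restricted to [l, r].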

Is there a way to use the IMPORTXML function of Google Sheets to import two queries at once?

In Google Sheets, I’m working on a tool to associate information about certain US Congressional Districts to their respective Members of Congress. To facilitate updating information about which Representatives serve which districts, I have opted to use an IMPORTXML function to retrieve up-to-date lists of Members and districts.

Fortunately, the US House Clerk publishes an up-to-date XML file containing all the information I need. While I’m still trying to master XPath queries in Sheets, I think I’ve got a pretty basic handle on how to apply them for this project. I’ve found I can use the following function to retrieve state and congressional district information:


And this is the data the function returns:

here’s a link because I’m not allowed to embed images yet

Obviously, the results continue for all 435 districts (actually 441, because it includes non-voting delegates too), and I can work with this. The issue I’m running into is when I try to import names of Members of the House with the following function:


And this is what that function returns:

again, link because I can’t embed yet

And again, the results continue and include every Member. BUT, there are not 435 Members (441 including delegates) in the House right now due to some vacancies. And the IMPORTXML function that retrieves the names of the Members is only returning the 438 names it can find.

This means I cannot easily associate a Member to a district by simply using two IMPORTXML calls in two adjacent columns (one with the state/district, the other with names), as the lists don’t line up, which can be seen at the bottom of the columns:

here you can see the bottom of the columns

I did a bit of digging, and learned that I can use two XPath queries in one IMPORTXML call by adding | between the queries. Doing so with the XPath queries from the previous functions, the IMPORTXML call looks like this:

=IMPORTXML("http://clerk.house.gov/xml/lists/MemberData.xml","//member/statedistrict | //member/member-info/namelist") 

And it returns a single column with the state/district interlaced with the names like this:

here’s a link to the image of the double query

Interestingly though, when I do this, the names are appropriately paired with their districts; when there is a vacancy, the function imports the district, skips the non-existent name, imports the next district, and then the next name. So when it comes to a vacant district, this is what the output looks like (with the vacant districts highlighted):

in this image, you can see the skipping of names

However, for this to be useful, I really need to have this data in two columns, one with the state/district data, and the other with that district’s respective Member’s name. I’m trying to learn as much as I can about the problem, but this is just way beyond the scope of anything I’ve attempted in the past, and well-outside my comfort zone. That’s where I stand so far, and any help at this point would be sincerely appreciated.
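If post-processing outside Sheets is ever an option, the per-member iteration can be sketched with Python’s standard library. The element names (`member`, `statedistrict`, `member-info/namelist`) are the ones described above; the inline sample data itself is invented, with an empty `namelist` standing in for a vacant seat. Iterating per `<member>` node keeps each district aligned with its name, so a vacancy simply produces an empty cell instead of shifting the column:

```python
import xml.etree.ElementTree as ET

# Inline sample imitating the Clerk's MemberData.xml layout described above;
# the districts and names here are made up.
SAMPLE = """
<MemberData>
  <members>
    <member>
      <statedistrict>AK00</statedistrict>
      <member-info><namelist>Doe, Jane</namelist></member-info>
    </member>
    <member>
      <statedistrict>AL01</statedistrict>
      <member-info><namelist></namelist></member-info>  <!-- vacant seat -->
    </member>
    <member>
      <statedistrict>AL02</statedistrict>
      <member-info><namelist>Roe, Richard</namelist></member-info>
    </member>
  </members>
</MemberData>
"""

def district_name_pairs(xml_text):
    """Walk each <member> node so district and name stay on the same row."""
    root = ET.fromstring(xml_text)
    rows = []
    for member in root.iter("member"):
        district = member.findtext("statedistrict", default="")
        # findtext returns "" for an empty element, so a vacancy
        # becomes an empty name rather than a skipped row.
        name = member.findtext("member-info/namelist", default="") or ""
        rows.append((district, name))
    return rows

pairs = district_name_pairs(SAMPLE)
```

This is the same reason the combined `|` query pairs correctly: selection happens per `<member>` node rather than as two independent node lists.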

How to create queries to generate a schema in JPA

I am trying to create a query that creates a SQL schema automatically; as soon as the program starts, it executes: createQuery("CREATE NEW SCHEMA IF NOT EXISTS BancoDeDados");

My project looks like this:

- EntityManagerSource class

public class EntityManagerSource {

    private static final EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("PersistenciaDAO");

    @Produces
    @RequestScoped
    public static EntityManager getEntityManager() {
        System.out.println("Banco de Dados: Conectado");
        return emf.createEntityManager();
    }
}


<persistence-unit name="PersistenciaDAO" transaction-type="RESOURCE_LOCAL">
  <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
  <class>br.edu.ifma.ticketif.model.entity.database.Aluno</class>
  <class>br.edu.ifma.ticketif.model.entity.Usuario</class>
  <properties>
    <!-- server properties -->
    <property name="javax.persistence.jdbc.url"
              value="jdbc:mysql://localhost:3306/BancoDeDados?useTimezone=true&amp;serverTimezone=UTC&amp;useSSL=false"/>
    <property name="javax.persistence.jdbc.user" value="root"/>
    <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
    <property name="javax.persistence.jdbc.password" value="admin"/>
    <property name="javax.persistence.schema-generation.database.action" value="create"/>
    <property name="hbm2ddl.auto" value="create"/>

    <!-- Hibernate properties -->
    <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5InnoDBDialect"/>
    <property name="hibernate.show_sql" value="true"/>
    <property name="hibernate.format_sql" value="true"/>
    <property name="hibernate.use_sql_comments" value="false"/>
    <property name="hibernate.jdbc.wrap_result_sets" value="false"/>
    <property name="hibernate.hibernate.cache.use_query_cache" value="true"/>

    <!-- database updates -->
    <property name="hibernate.hbm2ddl.auto" value="update"/>
  </properties>
</persistence-unit>

What could stop bitcoin-seeder from “hearing” DNS queries?

I’m running several bitcoin-seeders. Under Ubuntu 14.04 and 16.04 they run fine and answer queries. On Ubuntu 18.04, however, dnsseed does not detect the queries sent to it. I know the machine is receiving the queries because DNS requests are monitored with dnstop: every DNS query sent with 'dig' is picked up and reported by dnstop, yet dnsseed shows “0 DNS requests”.
There is no firewall running, and AppArmor has been disabled. What tests could be run, or what troubleshooting strategy should be followed, to find the problem?

Under Ubuntu 16.04:

Loading dnsseed.dat...done
Starting 4 DNS threads for ra.zmark.org on (port 5353).......done
Starting seeder...done
[18-10-24 19:27:41] 274/37963 available (1258 tried in 1000s, 38980 new, 1536 active), 0 banned; 3 DNS requests, 3 db queries

Under Ubuntu 18.04:

Supporting whitelisted filters: 0x1,0x5,0x9,0xd
Loading dnsseed.dat...done
Starting 4 DNS threads for shido.bitmark.one on (port 5353).......done
Starting seeder...done
Starting 96 crawler threads...done
[18-10-24 19:25:23] 3593/87930 available (64497 tried in 3805s, 21897 new, 1536 active), 1 banned; 0 DNS requests, 0 db queries


Queries: 0 new, 1363 total                         Wed Oct 24 19:39:03 2018
Replies: 0 new, 191 total

Query Name                     Count      %   cum%
-------------------------- --------- ------ ------
shido.bitmark.one               1169   85.8   85.8
bitseed.xf2.org                  117    8.6   94.4
org.members.linode.com            24    1.8   96.1
seed.bitcoin.sipa.be              20    1.5   97.6
dnsseed.bitcoin.dashjr.org        14    1.0   98.6
dnsseed.bluematt.me               12    0.9   99.5
motd.ubuntu.com                    5    0.4   99.9
github.com                         2    0.1  100.0

Reduce the nested logic from my Angular firebase subscriptions and avoid unnecessary additional queries

I could use some help refactoring my component and service. I have a lot of nested logic and I’m not sure how to improve it. I’m new to Angular 7 and the RxJS pipe syntax, which I suspect is what I should be using here. I also have an issue where the drawPoll() function is called before the forEach iteration is done, so I manually added a 3-second setTimeout() for the time being. I think that refactoring these subscriptions so they are not so deeply nested, and so they run in a predictable order, might fix this issue.


import { Component, OnInit } from '@angular/core';
import * as Chart from 'chart.js';
import { Observable } from 'rxjs';
import { FirebaseService } from '../services/firebase.service';
import { Input, Output, EventEmitter } from '@angular/core';
import { CardModule } from 'primeng/card';

@Component({
  selector: 'app-poll',
  templateUrl: './poll.component.html',
  styleUrls: ['./poll.component.scss']
})
export class PollComponent implements OnInit {
  chart: any;
  poll: any;
  votes: [] = [];
  labels: string[] = [];
  title: string = "";
  isDrawn: boolean = false;
  inputChoices: any = [];

  @Input()
  pollKey: string;

  @Output()
  editEvent = new EventEmitter<string>();

  @Output()
  deleteEvent = new EventEmitter<string>();

  constructor(private firebaseService: FirebaseService) { }

  ngOnInit() {
    this.firebaseService.getPoll(this.pollKey).subscribe(pollDoc => {
      // ToDo: draw poll choices on create without breaking vote listener
      console.log("details?", pollDoc);
      // Return if subscription was triggered due to poll deletion
      if (!pollDoc.payload.exists) {
        return;
      }
      const pollData: any = pollDoc.payload.data();
      this.poll = {
        id: pollDoc.payload.id,
        helperText: pollData.helperText,
        pollType: pollData.pollType,
        scoringType: pollData.scoringType,
        user: pollData.user
      };
      this.title = this.poll.pollType == 1 ? "Title 1" : "Title 2"

      this.firebaseService.getChoices(this.pollKey).subscribe(choices => {
        this.poll.choices = [];
        choices.forEach(choice => {
          const choiceData: any = choice.payload.doc.data();
          const choiceKey: any = choice.payload.doc.id;
          this.firebaseService.getVotes(choiceKey).subscribe((votes: any) => {
            console.log("does this get hit on a vote removal?", votes.length);
            this.poll.choices.push({
              id: choiceKey,
              text: choiceData.text,
              votes: votes.length
            });
          })
        });
        setTimeout(() => {
          this.drawPoll();
        }, 3000);
      });
    });
  }

  drawPoll() {
    if (this.isDrawn) {
      this.chart.data.datasets[0].data = this.poll.choices.map(choice => choice.votes);
      this.chart.data.datasets[0].label = this.poll.choices.map(choice => choice.text);
      this.chart.update()
    }
    if (!this.isDrawn) {
      this.inputChoices = this.poll.choices;
      var canvas = <HTMLCanvasElement> document.getElementById(this.pollKey);
      var ctx = canvas.getContext("2d");
      this.chart = new Chart(ctx, {
        type: 'horizontalBar',
        data: {
          labels: this.poll.choices.map(choice => choice.text),
          datasets: [{
            label: this.title,
            data: this.poll.choices.map(choice => choice.votes),
            fill: false,
            backgroundColor: [
              "rgba(255, 99, 132, 0.2)",
              "rgba(255, 159, 64, 0.2)",
              "rgba(255, 205, 86, 0.2)",
              "rgba(75, 192, 192, 0.2)",
              "rgba(54, 162, 235, 0.2)",
              "rgba(153, 102, 255, 0.2)",
              "rgba(201, 203, 207, 0.2)"
            ],
            borderColor: [
              "rgb(255, 99, 132)",
              "rgb(255, 159, 64)",
              "rgb(255, 205, 86)",
              "rgb(75, 192, 192)",
              "rgb(54, 162, 235)",
              "rgb(153, 102, 255)",
              "rgb(201, 203, 207)"
            ],
            borderWidth: 1
          }]
        },
        options: {
          events: ["touchend", "click", "mouseout"],
          onClick: function(e) {
            console.log("clicked!", e);
          },
          tooltips: {
            enabled: true
          },
          title: {
            display: true,
            text: this.title,
            fontSize: 14,
            fontColor: '#666'
          },
          legend: {
            display: false
          },
          maintainAspectRatio: true,
          responsive: true,
          scales: {
            xAxes: [{
              ticks: {
                beginAtZero: true,
                precision: 0
              }
            }]
          }
        }
      });
      this.isDrawn = true;
    }
  }

  vote(choiceId) {
    if (choiceId) {
      const choiceInput: any = document.getElementById(choiceId);
      const checked = choiceInput.checked;
      if (checked) this.firebaseService.incrementChoice(choiceId);
      if (!checked) this.firebaseService.decrementChoice(choiceId);
      this.poll.choices.forEach(choice => {
        const choiceEl: any = document.getElementById(choice.id);
        if (choiceId !== choiceEl.id && checked) choiceEl.disabled = true;
        if (!checked) choiceEl.disabled = false;
      });
    }
  }

}


import { Injectable } from '@angular/core';
import { AngularFirestore } from '@angular/fire/firestore';
import { map, switchMap, first } from 'rxjs/operators';
import { Observable, from } from 'rxjs';
import * as firebase from 'firebase';
import { AngularFireAuth } from '@angular/fire/auth';

@Injectable({
  providedIn: 'root'
})
export class FirebaseService {
  // Source: https://github.com/AngularTemplates/angular-firebase-crud/blob/master/src/app/services/firebase.service.ts
  constructor(public db: AngularFirestore, private afAuth: AngularFireAuth) { }

  getPoll(pollKey) {
    return this.db.collection('polls').doc(pollKey).snapshotChanges();
  }

  getChoices(pollKey) {
    return this.db.collection('choices', ref => ref.where('poll', '==', pollKey)).snapshotChanges();
  }

  incrementChoice(choiceKey) {
    const userId = this.afAuth.auth.currentUser.uid;
    const choiceDoc: any = this.db.collection('choices').doc(choiceKey);
    // Check if user voted already
    choiceDoc.ref.get().then(choice => {
      let pollKey = choice.data().poll
      this.db.collection('votes').snapshotChanges().pipe(first()).subscribe((votes: any) => {
        let filteredVote = votes.filter((vote) => {
          const searchedPollKey = vote.payload.doc._document.proto.fields.poll.stringValue;
          const searchedChoiceKey = vote.payload.doc._document.proto.fields.choice.stringValue;
          const searchedUserKey = vote.payload.doc._document.proto.fields.user.stringValue;
          return (searchedPollKey == pollKey && searchedChoiceKey == choiceKey && searchedUserKey == userId);
        });
        if (filteredVote.length) {
          // This person already voted
          return false;
        } else {
          let votes = choice.data().votes
          choiceDoc.update({
            votes: ++votes
          });
          const userDoc: any = this.db.collection('users').doc(userId);
          userDoc.ref.get().then(user => {
            let points = user.data().points
            userDoc.update({
              points: ++points
            });
          });
          this.createVote({
            choiceKey: choiceKey,
            pollKey: pollKey,
            userKey: userId
          });
        }
      });
    });
  }

  decrementChoice(choiceKey) {
    const choiceDoc: any = this.db.collection('choices').doc(choiceKey);
    const userId = this.afAuth.auth.currentUser.uid;
    choiceDoc.ref.get().then(choice => {
      let pollKey = choice.data().poll
      let votes = choice.data().votes
      choiceDoc.update({
        votes: --votes
      });
      const userDoc: any = this.db.collection('users').doc(userId);
      userDoc.ref.get().then(user => {
        let points = user.data().points
        userDoc.update({
          points: --points
        });
      });
      // Find & delete vote
      this.db.collection('votes').snapshotChanges().pipe(first()).subscribe((votes: any) => {
        let filteredVote = votes.filter((vote) => {
          const searchedPollKey = vote.payload.doc._document.proto.fields.poll.stringValue;
          const searchedChoiceKey = vote.payload.doc._document.proto.fields.choice.stringValue;
          const searchedUserKey = vote.payload.doc._document.proto.fields.user.stringValue;
          return (searchedPollKey == pollKey && searchedChoiceKey == choiceKey && searchedUserKey == userId);
        });
        this.deleteVote(filteredVote[0].payload.doc.id);
      });
    });
  }

  createVote(value) {
    this.db.collection('votes').add({
      choice: value.choiceKey,
      poll: value.pollKey,
      user: value.userKey
    }).then(vote => {
      console.log("Vote created successfully", vote);
    }).catch(err => {
      console.log("Error creating vote", err);
    });
  }

  deleteVote(voteKey) {
    this.db.collection('votes').doc(voteKey).delete().then((vote) => {
      console.log("Vote deleted successfully");
    }).catch(err => {
      console.log("Error deleting vote", err);
    });
  }

  getVotes(choiceKey) {
    return this.db.collection('votes', ref => ref.where('choice', '==', choiceKey)).snapshotChanges().pipe(first());
  }

}

PHP Laravel – Improving and refactoring code to Reduce Queries

Improve Request to Reduce Queries

I have a web application where users can upload Documents or Emails to what I call a Stream. The users can then define document fields and email fields on the stream, which each document/email will inherit. The users can furthermore apply parsing rules to these fields, against which each document/email will be parsed.

Now let’s take the example that a user uploads a new document. (I have hardcoded the IDs for simplicity.)

$stream = Stream::find(1);
$document = Document::find(2);

$parsing = new ApplyParsingRules;
$document->storeContent($parsing->parse($stream, $document));

Below is the function that parses the document according to the parsing rules:

public function parse(Stream $stream, DataTypeInterface $data) : array
{
    // Get the rules.
    $rules = $data->rules();

    $result = [];
    foreach ($rules as $rule) {
        $result[] = [
            'field_rule_id' => $rule->id,
            'content' => 'something something',
            'typeable_id' => $data->id,
        ];
    }

    return $result;
}

So above basically just returns an array of the parsed text.

Now, as you can probably see, I use an interface, DataTypeInterface. This is because the parse function can accept both Documents and Emails.

To get the rules, I use this code:

// Get the rules.
$rules = $data->rules();

The method looks like this:

class Document extends Model implements DataTypeInterface
{
    public function stream()
    {
        return $this->belongsTo(Stream::class);
    }

    public function rules() : object
    {
        return FieldRule::where([
            ['stream_id', '=', $this->stream->id],
            ['fieldable_type', '=', 'App\DocumentField'],
        ])->get();
    }
}

This queries the database for all the rules that are associated with Document Fields, on the fields associated with the specific Stream.

Lastly, in my first snippet, I had this:

$document->storeContent($parsing->parse($stream, $document));

The storeContent method looks like this:

class Document extends Model implements DataTypeInterface
{
    // A document will have many field rule results.
    public function results()
    {
        return $this->morphMany(FieldRuleResult::class, 'typeable');
    }

    // Persist the parsed content to the database.
    public function storeContent(array $parsed) : object
    {
        foreach ($parsed as $parse) {
            $this->results()->updateOrCreate(
                [
                    'field_rule_id' => $parse['field_rule_id'],
                    'typeable_id' => $parse['typeable_id'],
                ],
                $parse
            );
        }

        return $this;
    }
}

As you can probably imagine, every time a document is uploaded it gets parsed by some specific set of rules. These rules each generate a result, and I save every result in the database using the storeContent method.

However, this will also generate a query for each result.

One thing to note: I am using the updateOrCreate method to store the field results because I only want to insert new results into the database; for results whose content merely changed, I want to update the existing row.

For reference, the request above generates the 8 queries below:

select * from `streams` where `streams`.`id` = ? limit 1
select * from `documents` where `documents`.`id` = ? limit 1
select * from `streams` where `streams`.`id` = ? limit 1
select * from `field_rules` where (`stream_id` = ? and `fieldable_type` = ?)
select * from `field_rule_results` where `field_rule_results`.`typeable_id` = ? and...
select * from `field_rule_results` where `field_rule_results`.`typeable_id` = ? and...
insert into `field_rule_results` (`field_rule_id`, `typeable_id`, `typeable_type`, `content`, `updated_at`, `created_at`) values (..)
insert into `field_rule_results` (`field_rule_id`, `typeable_id`, `typeable_type`, `content`, `updated_at`, `created_at`) values (..)

The above works fine, but it seems a bit heavy, and I can imagine that once my users start to generate a lot of rules/results, this will become a problem.

Is there any way that I can optimize/refactor the above setup?

Block certain queries from being logged in slow query log

I have mysqld_node running on my MySQL server to gather statistics used by Grafana + Prometheus to build a dashboard for this MySQL server. However, I noticed that 90% of the queries in my slow query log are created by mysqld_node, which makes it hard to pick out queries from the application servers. Is there a way to block queries from being logged in the slow query log based on the port or some other criterion? The port for mysqld_node is 9104.
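MySQL cannot filter the slow query log by client port, but one knob that may help here (an assumption: a monitoring exporter's status queries typically examine very few rows) is `min_examined_row_limit`, which excludes from the slow log any query that examined fewer rows than the threshold. A my.cnf sketch, with an arbitrary example threshold:

```
[mysqld]
slow_query_log          = ON
# Skip logging queries that examined fewer than 100 rows
# (100 is an example value; tune it to your workload).
min_examined_row_limit  = 100
```

The trade-off is that genuinely slow application queries touching few rows would also be excluded, so the threshold needs to be chosen with care.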

Queries in Relational Algebra

Given a schema:

Student (Std_id, s_name, class, year, dept, fees, phone)
Dept (deptid, dname, fees, HOD)
emp (empid, ename, dob, doj, Sal, phone)

List all the HODs who have a teaching experience of 20 years. My approach: select from Dept natural join emp under the condition that (count doj) = 20, and project the HOD. However, I am not sure how to actually derive 20 years of experience from doj.

Find all the teachers who have an annual salary of over 2000 dollars. I did: Project_ename (select (emp) condition: Sal > 2000).

Find the students whose phone number is not NULL. I don’t know how to do this one. I am sorry, I don’t know how to type RA symbols. Any sort of help will be appreciated.
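For reference, the second and third queries can be written with the usual symbols, σ for selection and π for projection (this is standard textbook notation, not from the question; note that classical relational algebra has no NULLs, so the NULL test is usually written informally as a selection predicate):

```latex
% Teachers with annual salary over 2000 (matches the attempt above):
\pi_{ename}\left(\sigma_{Sal > 2000}(emp)\right)

% Students whose phone is not NULL:
\pi_{s\_name}\left(\sigma_{phone\ \text{IS NOT NULL}}(Student)\right)
```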

How to use complex filtering queries in Gmail?

I am playing with different / complex filtering queries / strings in Gmail.

I found this answer (to my own question):

after:1552896000 before:1552924800 

And I was able to use it without any problems, i.e. I managed to filter e-mails with given dates.

Then I found this answer:

If email is from:semi-valuable-email-service.com AND contains:"Monday OR Wednesday OR Friday" THEN send it to trash 

and got a bit lost.

Is this a real string to be pasted somewhere into Gmail (where?) or a pseudo-code to explain filter settings that needs to be applied?

Where should I put queries as complex as the above? When I try to create a rule to filter my emails, all I see is a filter configuration box with some simple fields and no place to put a query directly.

Actually, I don’t need queries as complex as the above, but I’d like to merge two or more simple queries (as in the first example, if possible) to filter out e-mails sent in given periods of time on two or more days:

after:1502294400 before:1502352000 AND after:1552896000 before:1552924800 

But I am getting no results, neither from the first nor from the second day. Is this possible at all in Gmail?
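The reason the combined query matches nothing is that the two date ranges are disjoint, so their conjunction (which is what a bare space or AND means) can never hold for a single message; the two ranges have to be OR-ed instead. A form that should work, based on Gmail's documented OR operator and parentheses grouping (the query below is my sketch, pasted into the search box or the "Has the words" field of a filter, and untested here):

```
(after:1502294400 before:1502352000) OR (after:1552896000 before:1552924800)
```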

Automating Site Usage Reporting: Top queries by month, abandoned queries etc

My question is almost identical to one raised 5 years ago:

Programmatically download/export search usage reports

Again, the goal is being able to access, download, and schedule the distribution of the reports available via the dynamic links located on a site page:


The key requirement, as @Petter stated, is to allow business users to access the reports without needing to bother SharePoint or site collection admins.