Can a computer/MacBook be compromised simply by requesting/attempting to join a network?

I was finishing a clean installation of macOS Catalina on my MacBook and was asked to select a wifi network. I misclicked on a wifi network with a name very similar to mine and was prompted for its password; obviously I cancelled and selected my own network instead.

My question is: can the act of requesting to join that network put my MacBook at risk? I am especially concerned since I was still in the middle of finalizing the macOS Catalina setup. Specifically: A. Does clicking on the wifi network and being asked to type in a password give away any information about my system that puts it at risk? B. Can the request itself infect my MacBook with any virus that may be plaguing the devices on that network? Or, if the person who owns the network is malicious, can they compromise my computer that way?

I am considering reinstalling macOS again just to be safe…

Join table values separated by comma (,)

I want the second table's second column to be filled in depending on the values in its first column, like this:

Table 1 works as a reference table for table 2 (column 2).

table 1

| name1 | active1 |
|-------|---------|
| name1 | john    |
| name2 | mary    |
| name3 | george  |
| name4 | peter   |

table 2 — when the 1st column has one or more values separated by ",", the 2nd column should hold the corresponding results:

| name2             | active2          |
|-------------------|------------------|
| name1,name2       | john,mary        |
| name2,name4       | mary,peter       |
| name3             | george           |
| name4,name1,name3 | mary,john,george |
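As an illustration of the lookup described above, here is a hedged sketch in Python with SQLite. The table and column names mirror the question, but the split-and-translate step is done in application code; inside a real database this would more likely be a string-split function or a recursive CTE.

```python
import sqlite3

# Illustrative sketch: table1 maps a key (name1) to a value (active1);
# table2's first column holds comma-separated keys that must be
# translated one by one. Names mirror the question's tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (name1 TEXT PRIMARY KEY, active1 TEXT);
    INSERT INTO table1 VALUES
        ('name1', 'john'), ('name2', 'mary'),
        ('name3', 'george'), ('name4', 'peter');
""")

# Load the reference table once, then translate each comma-separated list.
lookup = dict(conn.execute("SELECT name1, active1 FROM table1"))

def translate(csv_keys: str) -> str:
    """Turn 'name1,name2' into 'john,mary' using the reference table."""
    return ",".join(lookup[k.strip()] for k in csv_keys.split(","))

print(translate("name1,name2"))  # john,mary
print(translate("name2,name4"))  # mary,peter
```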

Oracle Sql: Join Multiple Tables with 1 Association Table

(image: diagram of the associations between the tables)

Above is the association between the different tables. Dotted lines are not on primary keys, but are inferred based on data.

This is the query I have so far:

SELECT DISTINCT t3.assoc_id, t4.assoc_id
FROM table_1 t1
LEFT OUTER JOIN assoc_table at ON at.assoc_id = t1.assoc_id
LEFT OUTER JOIN table_2 t2 ON t2.detail_id = t1.detail_id
LEFT OUTER JOIN table_3 t3 ON t3.assoc_id = at.assoc_id
    AND at.table_id = 'table_3'
    AND t3.value_1 = t1.value_1
LEFT OUTER JOIN table_4 t4 ON t4.assoc_id = at.assoc_id
    AND t2.value_2 = t4.value_2
    AND t3.value_1 = t4.value_1
WHERE t2.unique_id = 'X';

Note: I forgot to add Unique_ID in Table_1


Postgres UPDATE with data from another table – index only scan used for correlated subquery but not join


I’m tuning a bulk UPDATE which selects from another (large) table. My intention is to provide a covering index to support an index only scan of the source table. I realise the source table must be vacuumed to update its visibility map.

My investigations so far suggest the optimiser elects to index only scan the source table when the UPDATE uses a correlated subquery, but appears to use a standard index scan when a join is used (UPDATE...FROM). I’m asking this question to understand why.

I provide a simplified example here to illustrate the differences.

I’m using Postgres 9.6.8, but get very similar plans for 10.11 and 11.6. I have reproduced the plans on a vanilla 9.6 Postgres installation in Docker using the official image, and also on db<>fiddle here.


CREATE TABLE lookup (
    surrogate_key   BIGINT PRIMARY KEY,
    natural_key     TEXT NOT NULL UNIQUE,
    data            TEXT NOT NULL);

INSERT INTO lookup
SELECT id, 'nk'||id, random()::text
FROM generate_series(1,400000) id;

CREATE UNIQUE INDEX lookup_ix ON lookup(natural_key, surrogate_key);

VACUUM ANALYSE lookup;

CREATE TABLE target (
    target_id               BIGINT PRIMARY KEY,
    lookup_natural_key      TEXT NOT NULL,
    lookup_surrogate_key    BIGINT,
    data                    TEXT NOT NULL );

INSERT INTO target (target_id, lookup_natural_key, data)
SELECT id+1000, 'nk'||id, random()::text
FROM generate_series(1,1000) id;

ANALYSE target;

UPDATE using join

EXPLAIN (ANALYSE, VERBOSE, BUFFERS)
UPDATE target
SET lookup_surrogate_key = surrogate_key
FROM lookup
WHERE lookup_natural_key = natural_key;

Standard index scan on lookup_ix – so heap blocks are read from the lookup table:

Update on  (cost=0.42..7109.00 rows=1000 width=54) (actual time=76.688..76.688 rows=0 loops=1)
  Buffers: shared hit=8514 read=550 dirtied=16
  ->  Nested Loop  (cost=0.42..7109.00 rows=1000 width=54) (actual time=0.050..62.493 rows=1000 loops=1)
        Output: target.target_id, target.lookup_natural_key, lookup.surrogate_key, target.ctid, lookup.ctid
        Buffers: shared hit=3479 read=535
        ->  Seq Scan on  (cost=0.00..19.00 rows=1000 width=40) (actual time=0.013..7.691 rows=1000 loops=1)
              Output: target.target_id, target.lookup_natural_key, target.ctid
              Buffers: shared hit=9
        ->  Index Scan using lookup_ix on public.lookup  (cost=0.42..7.08 rows=1 width=22) (actual time=0.020..0.027 rows=1 loops=1000)
              Output: lookup.surrogate_key, lookup.ctid, lookup.natural_key
              Index Cond: (lookup.natural_key = target.lookup_natural_key)
              Buffers: shared hit=3470 read=535
Planning time: 0.431 ms
Execution time: 76.826 ms

UPDATE using correlated subquery

EXPLAIN (ANALYSE, VERBOSE, BUFFERS)
UPDATE target
SET lookup_surrogate_key = (
    SELECT surrogate_key
    FROM lookup
    WHERE lookup_natural_key = natural_key);

Index only scan on lookup_ix as intended:

Update on  (cost=0.00..4459.00 rows=1000 width=47) (actual time=52.947..52.947 rows=0 loops=1)
  Buffers: shared hit=8050 read=15 dirtied=16
  ->  Seq Scan on  (cost=0.00..4459.00 rows=1000 width=47) (actual time=0.052..40.306 rows=1000 loops=1)
        Output: target.target_id, target.lookup_natural_key, (SubPlan 1), target.ctid
        Buffers: shared hit=3015
        SubPlan 1
          ->  Index Only Scan using lookup_ix on public.lookup  (cost=0.42..4.44 rows=1 width=8) (actual time=0.013..0.019 rows=1 loops=1000)
                Output: lookup.surrogate_key
                Index Cond: (lookup.natural_key = target.lookup_natural_key)
                Heap Fetches: 0
                Buffers: shared hit=3006
Planning time: 0.130 ms
Execution time: 52.987 ms

db<>fiddle here

I understand that the queries are not logically identical (they behave differently when there are no rows, or multiple rows, in lookup for a given natural_key), but I'm surprised by the different usage of lookup_ix.

Can anyone explain why the join version could not use an index only scan please?

How do I join, getting one row from the left table no matter how many matches I get from the right table?

I have two tables – one is a data table and the other is a mapping table. I want to join them together, but only preserve the data from the data table. However, it is possible that the mapping table contains multiple records that match a single record in the data table. I cannot use a DISTINCT because there may be identical rows in the data table, and I want to preserve the same number of data-table rows in the result set.

Here is a sample of the data I am working with:

DataTable
+-----+-----+-----+-----+
| ID1 | ID2 | ID3 | ID4 |
+-----+-----+-----+-----+
|  1  |  1  |  1  |  1  |
|  1  |  1  |  1  |  1  |
|  2  |  1  |  1  |  1  |
|  3  |  1  |  1  |  3  |
|  4  |  1  |  1  |  4  |
|  2  |  2  |  1  |  1  |
|  3  |  2  |  1  |  3  |
|  3  |  3  |  1  |  3  |
|  2  |  1  |  0  |  1  |
|  2  |  1  |  0  |  1  |
|  4  |  3  |  2  |  3  |
+-----+-----+-----+-----+

MappingTable
+------+------+------+------+
| ID1  | ID2  | ID3  | ID4  |
+------+------+------+------+
|  1   | NULL | NULL | NULL |
| NULL | NULL | NULL |  1   |
|  3   |  3   | NULL | NULL |
+------+------+------+------+

Below is the join I am using. I wrote a custom function to handle the NULL-matching behavior, which I am including here as well.

SELECT *
FROM DataTable P
JOIN MappingTable M
    ON dbo.fNullMatchCheckIntS(P.ID1, M.ID1, 0, 1) = 1
    AND dbo.fNullMatchCheckIntS(P.ID2, M.ID2, 0, 1) = 1
    AND dbo.fNullMatchCheckIntS(P.ID3, M.ID3, 0, 1) = 1
    AND dbo.fNullMatchCheckIntS(P.ID4, M.ID4, 0, 1) = 1

CREATE FUNCTION dbo.fNullMatchCheckIntS (
    @Value1 INT
    ,@Value2 INT
    ,@AutoMatchIfValue1IsNull BIT
    ,@AutoMatchIfValue2IsNull BIT
)
    RETURNS BIT
AS
BEGIN
    DECLARE @Result BIT = 0

    SELECT
        @AutoMatchIfValue1IsNull = ISNULL(@AutoMatchIfValue1IsNull, 0)
        ,@AutoMatchIfValue2IsNull = ISNULL(@AutoMatchIfValue2IsNull, 0)

    IF
        (@AutoMatchIfValue1IsNull = 1 AND @Value1 IS NULL)
        OR (@AutoMatchIfValue2IsNull = 1 AND @Value2 IS NULL)
        OR @Value1 = @Value2
        OR (@Value1 IS NULL AND @Value2 IS NULL)
    BEGIN
        SET @Result = 1
    END

    RETURN @Result
END

The problem with the way the join works is that the first two rows in the DataTable match on the first two rows in the MappingTable, giving me four identical records in the result, but I only want 2. I know that I could add an identity column to the DataTable and then use DISTINCT or PARTITION to get the result I am looking for, but I would like to avoid that route if possible.

EDIT: I figured out a way to do this using EXISTS, but it looks a little ugly in my opinion. Still interested in other answers if anyone has an idea. Thanks!

SELECT *
FROM DataTable D
WHERE EXISTS (
    SELECT D.ID1, D.ID2, D.ID3, D.ID4
    FROM MappingTable M
    WHERE dbo.fNullMatchCheckIntS(D.ID1, M.ID1, 0, 1) = 1
        AND dbo.fNullMatchCheckIntS(D.ID2, M.ID2, 0, 1) = 1
        AND dbo.fNullMatchCheckIntS(D.ID3, M.ID3, 0, 1) = 1
        AND dbo.fNullMatchCheckIntS(D.ID4, M.ID4, 0, 1) = 1
)
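For anyone who wants to check the matching semantics outside SQL Server, here is a plain-Python restatement of the fNullMatchCheckIntS logic above. It is an illustrative sketch, not a drop-in replacement for the T-SQL function.

```python
# Plain-Python restatement of dbo.fNullMatchCheckIntS: two values "match"
# when they are equal, both NULL (None), or one side is NULL and its
# auto-match flag is set.
def null_match_check(value1, value2,
                     auto_match_if_value1_is_null=False,
                     auto_match_if_value2_is_null=False) -> bool:
    if auto_match_if_value1_is_null and value1 is None:
        return True
    if auto_match_if_value2_is_null and value2 is None:
        return True
    if value1 is None and value2 is None:
        return True
    return value1 == value2

# The join above calls the function with flags (0, 1): a NULL on the
# mapping-table side matches anything, a NULL on the data side does not.
print(null_match_check(1, None, False, True))  # True
print(null_match_check(None, 1, False, True))  # False
```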

Splunk Join search with time issue

Search Case:

Join search between two sources (IPS & DHCP log)

IPS log : Threat, IP, Hostname

DHCP log : IP, Hostname

Objective: find which host's IP triggered the alert in IPS, considering that DHCP may hand the same IP to multiple hosts over time.

index=ips
| join IP type=inner [search index=dhcp | fields _time, IP, HOSTNAME]
| stats count by Threat, IP, Hostname

Problem: I am getting only the last value from my DHCP index. Suppose IP x.x.x.x was used by three hosts during the day: Host A, Host B, and Host C. Host B is the host that triggered the IPS alert at 12 PM, but Host C is the last host that used the IP, at 4 PM.

Now when I check my search at 5 PM, it shows the Threat in IPS was triggered at 12 PM with the Hostname as Host C, which is wrong. It needs to show Host B.

Is there any way I can fix this so that the correct host is shown for the IPS Threat?
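Splunk aside, the time-aware match the search needs can be sketched in plain Python: for each IPS event, pick the DHCP lease for the same IP with the latest timestamp that is not after the event time. The hours and host names below are made-up sample data following the example in the question.

```python
from bisect import bisect_right

# Hypothetical DHCP lease history: ip -> list of (hour, hostname),
# sorted by hour. Times and hosts are made-up sample data.
dhcp_leases = {
    "x.x.x.x": [(8, "HostA"), (11, "HostB"), (16, "HostC")],
}

def host_at(ip: str, event_hour: int) -> str:
    """Return the host holding `ip` at `event_hour`: the lease with the
    latest start time that is not after the event."""
    leases = dhcp_leases[ip]
    times = [t for t, _ in leases]
    return leases[bisect_right(times, event_hour) - 1][1]

print(host_at("x.x.x.x", 12))  # HostB - the lease active at 12 PM
print(host_at("x.x.x.x", 17))  # HostC
```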


I am trying to use a JOIN ON statement to SELECT two columns, one with the Team_Name and the other with how many times they have won. I want to include all teams, even those who haven’t won at all.

This is the JOIN statement I've been using, but it only returns the Team_Name Poki with a WinningTimes of 0, which is incorrect, as they've won once:

SELECT Team_Name, SUM(Winner) AS WinningTimes
FROM Robot_Winner r
JOIN Robot_Combat c ON r.BattleNo = c.BattleNo;

Team_Name    WinningTimes
Poki         0
Create Table Robot_Combat (
    BattleNo VarChar(3) Not null,
    Team_Number Int Not null,
    Team_Name VarChar(8) Not null,
    Bot_Name VarChar(8) Not null,
    Primary Key (BattleNo, Bot_Name)
);

Create Table Robot_Winner (
    BattleNo VarChar(3) Not null,
    Winner VarChar(10) Not null,
    Primary Key (BattleNo),
    Foreign Key (BattleNo) References Robot_Combat(BattleNo)
);

This is the data inside each table

Robot_Combat
BattleNo [PK]   Team_Number   Team_Name   Bot_Name
B07             1             A1          S1
B07             2             Poki        Pika
B08             1             Phenix      Kka
B08             2             StarWar     R2
B11             1             APT         4869
B11             2             Phenix      RedWin
B12             1             T1          E1
B12             2             S3          Sam5

Insert Into Robot_Combat (BattleNo, Team_Number, Team_Name, Bot_Name) Values ('B07', 1, 'A1', 'S1');
Insert Into Robot_Combat (BattleNo, Team_Number, Team_Name, Bot_Name) Values ('B07', 2, 'Poki', 'Pika');
Insert Into Robot_Combat (BattleNo, Team_Number, Team_Name, Bot_Name) Values ('B08', 1, 'Phenix', 'Kka');
Insert Into Robot_Combat (BattleNo, Team_Number, Team_Name, Bot_Name) Values ('B08', 2, 'StarWar', 'R2');
Insert Into Robot_Combat (BattleNo, Team_Number, Team_Name, Bot_Name) Values ('B11', 1, 'APT', '4869');
Insert Into Robot_Combat (BattleNo, Team_Number, Team_Name, Bot_Name) Values ('B11', 2, 'Phenix', 'RedWin');
Insert Into Robot_Combat (BattleNo, Team_Number, Team_Name, Bot_Name) Values ('B12', 1, 'T1', 'E1');
Insert Into Robot_Combat (BattleNo, Team_Number, Team_Name, Bot_Name) Values ('B12', 2, 'S3', 'Sam5');
Robot_Winner
BattleNo [FK][PK]   Winner
B07                 Poki
B08                 Phenix
B11                 Phenix
B12                 S3

Insert Into Robot_Winner (BattleNo, Winner) Values ('B07', 'Poki');
Insert Into Robot_Winner (BattleNo, Winner) Values ('B08', 'Phenix');
Insert Into Robot_Winner (BattleNo, Winner) Values ('B11', 'Phenix');
Insert Into Robot_Winner (BattleNo, Winner) Values ('B12', 'S3');
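As a hedged illustration (not necessarily the intended design): SUM(Winner) adds up the text Winner column, so it cannot yield per-team win counts. One common approach is a LEFT JOIN that only matches when the team actually won that battle, so teams with no wins still appear with 0. Sketched here in SQLite with the sample data from the question:

```python
import sqlite3

# Illustrative sketch: count wins per team with a LEFT JOIN whose ON
# clause matches only winning rows (assumes the intended win test is
# Team_Name = Winner; the original query summed the Winner text instead).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Robot_Combat (
        BattleNo TEXT, Team_Number INT, Team_Name TEXT, Bot_Name TEXT,
        PRIMARY KEY (BattleNo, Bot_Name));
    CREATE TABLE Robot_Winner (
        BattleNo TEXT PRIMARY KEY, Winner TEXT,
        FOREIGN KEY (BattleNo) REFERENCES Robot_Combat(BattleNo));
    INSERT INTO Robot_Combat VALUES
        ('B07', 1, 'A1', 'S1'),     ('B07', 2, 'Poki', 'Pika'),
        ('B08', 1, 'Phenix', 'Kka'),('B08', 2, 'StarWar', 'R2'),
        ('B11', 1, 'APT', '4869'),  ('B11', 2, 'Phenix', 'RedWin'),
        ('B12', 1, 'T1', 'E1'),     ('B12', 2, 'S3', 'Sam5');
    INSERT INTO Robot_Winner VALUES
        ('B07', 'Poki'), ('B08', 'Phenix'), ('B11', 'Phenix'), ('B12', 'S3');
""")

rows = conn.execute("""
    SELECT c.Team_Name, COUNT(w.BattleNo) AS WinningTimes
    FROM Robot_Combat c
    LEFT JOIN Robot_Winner w
           ON w.BattleNo = c.BattleNo AND w.Winner = c.Team_Name
    GROUP BY c.Team_Name
    ORDER BY c.Team_Name
""").fetchall()
print(rows)  # Phenix: 2, Poki: 1, S3: 1, all other teams: 0
```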

MySQL JOIN query takes too much time to return the result

I have a MySQL query, shown below, which JOINs 8 tables. When I use 3 tables to get the data, the result comes back within 10 seconds. But whenever I add one more table, the fetch time goes up to a minute, and with more tables added it seems to take forever. Any idea how to resolve this problem?

Following is my query:

SELECT c.`user_name`, e.`event_name`, e.`event_code`, e.`event_id`,
    COUNT(DISTINCT ep.`participant_id`) AS participants,
    COUNT(DISTINCT pm.`program_material_id`) AS material_count,
    COUNT(DISTINCT ev.`event_news_id`) AS news_count,
    COUNT(DISTINCT es.`event_speaker_id`) AS speaker_count,
    COUNT(DISTINCT epr.`event_program_id`) AS program_count,
    COUNT(DISTINCT sw.`social_id`) AS social_wall_count
FROM `event` e
LEFT JOIN `event_participant` ep ON ep.`event_id` = e.`event_id`
LEFT JOIN `program_material` pm ON pm.`event_id` = e.`event_id`
LEFT JOIN `event_news` ev ON ev.`event_id` = e.`event_id`
LEFT JOIN `socialwall` sw ON sw.`event_id` = e.`event_id`
LEFT JOIN `event_speaker` es ON es.`event_id` = e.`event_id`
LEFT JOIN `event_program` epr ON epr.`event_id` = e.`event_id`
LEFT JOIN `event_customer` ec ON e.`event_id` = ec.`event_id`
LEFT JOIN `customer` c ON ec.`customer_id` = c.`user_id`
GROUP BY e.`event_id`
ORDER BY participants DESC
LIMIT 0, 10

I have indexed every table's primary key and the columns used for the JOINs in the subsequent tables. Here event is the master table, and all the other tables have an event_id column.
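The symptoms are consistent with join fan-out: each additional one-to-many LEFT JOIN multiplies the intermediate rows per event before GROUP BY collapses them. This is only a hypothesis about the query above, but the multiplication itself is easy to demonstrate with tiny made-up tables (names below are invented, not the real schema):

```python
import sqlite3

# Hypothetical mini-tables showing why chaining several one-to-many
# LEFT JOINs explodes the intermediate result: per-event row counts
# multiply, which is what COUNT(DISTINCT ...) then has to wade through.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE event (event_id INT);
    CREATE TABLE participant (event_id INT, participant_id INT);
    CREATE TABLE material (event_id INT, material_id INT);
    CREATE TABLE news (event_id INT, news_id INT);
    INSERT INTO event VALUES (1);
""")
conn.executemany("INSERT INTO participant VALUES (1, ?)", [(i,) for i in range(3)])
conn.executemany("INSERT INTO material VALUES (1, ?)", [(i,) for i in range(4)])
conn.executemany("INSERT INTO news VALUES (1, ?)", [(i,) for i in range(2)])

total, = conn.execute("""
    SELECT COUNT(*)
    FROM event e
    LEFT JOIN participant p ON p.event_id = e.event_id
    LEFT JOIN material m    ON m.event_id = e.event_id
    LEFT JOIN news n        ON n.event_id = e.event_id
""").fetchone()
print(total)  # 3 * 4 * 2 = 24 intermediate rows for a single event
```

A common fix for this pattern is to pre-aggregate each child table in its own derived table (one count per subquery, joined on event_id) so the row counts stay linear instead of multiplying.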

How can I update a column with a JOIN on the same table?

I need to update the nombre column of some rows in my table using the value from related rows in that same table.

To make this clearer, here is an image of the results returned by this query:

SELECT id_celebracion, id_tipo, dia, semana, id_tiempo, nombre
FROM liturgia
WHERE semana = 11 AND id_tiempo = 7
ORDER BY id_tipo, dia;

(image: rows returned by the query above)

In my UPDATE I would like to assign the corresponding nombre value according to the day and the week. That is:

  • Assign the nombre value of id 588 to id 212 as well
  • Assign the nombre value of id 589 to id 213 as well
  • Etc…

The criterion is that they match on dia, semana, and id_tiempo, and differ only in that one has id_tipo 49 and the other 50.

Normally, the query I would write would be this:

UPDATE liturgia AS n
INNER JOIN liturgia AS o ON
  (n.dia = o.dia AND n.semana = o.semana AND n.id_tiempo = o.id_tiempo)
SET n.nombre = o.nombre
WHERE o.semana = 11 AND o.id_tiempo = 7

But when isolating a SELECT to test it first, so as not to cause a catastrophe in the DB:

SELECT *
FROM liturgia AS n
INNER JOIN liturgia AS o ON
  (n.dia = o.dia AND n.semana = o.semana AND n.id_tiempo = o.id_tiempo)
WHERE o.semana = 11 AND o.id_tiempo = 7;

I get a result of 33 rows, so something is not right in my query.

What query would then let me achieve the required update?
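The pairing described above can be sketched outside MySQL. Below is a hedged illustration using SQLite and a tiny made-up liturgia table (the ids and nombre values are invented). The key assumption is that restricting the source side to id_tipo = 50 and the target side to id_tipo = 49 removes the self-matches that inflate the bare SELECT; SQLite has no UPDATE ... JOIN, so the same pairing is written as a correlated subquery, whereas in MySQL the JOIN form should work once both id_tipo filters are added.

```python
import sqlite3

# Hedged sketch: without id_tipo filters, every row also matches itself
# and any sibling sharing dia/semana/id_tiempo - the likely source of
# the unexpected 33-row SELECT. Sample rows below are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE liturgia (
        id_celebracion INT, id_tipo INT, dia INT,
        semana INT, id_tiempo INT, nombre TEXT);
    INSERT INTO liturgia VALUES
        (212, 49, 1, 11, 7, NULL),
        (588, 50, 1, 11, 7, 'nombre-A'),
        (213, 49, 2, 11, 7, NULL),
        (589, 50, 2, 11, 7, 'nombre-B');
""")

# Correlated-subquery form of the pairing: copy nombre from the matching
# id_tipo = 50 row into each id_tipo = 49 row.
conn.execute("""
    UPDATE liturgia SET nombre = (
        SELECT o.nombre FROM liturgia o
        WHERE o.dia = liturgia.dia
          AND o.semana = liturgia.semana
          AND o.id_tiempo = liturgia.id_tiempo
          AND o.id_tipo = 50)
    WHERE id_tipo = 49 AND semana = 11 AND id_tiempo = 7
""")

result = conn.execute("""
    SELECT id_celebracion, nombre FROM liturgia
    WHERE id_tipo = 49 ORDER BY id_celebracion
""").fetchall()
print(result)  # [(212, 'nombre-A'), (213, 'nombre-B')]
```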