Does turning off my modem before leaving home significantly improve security?

Like many people, I spend most of my time away from home: at work, the gym, out socializing, and so on. I'd estimate I use my internet connection a few hours a night.

I never turn my modem off unless I'm travelling for more than a few days, and I have not disabled SSID broadcasting, as I've read that doing so has little to no security benefit.

But if my modem (and Wi-Fi) is off while I'm away, that would seem to eliminate any security risk, however minimal, that doesn't involve someone physically accessing my devices.

Are there any significant benefits to turning off your modem before leaving home, or would doing so be practically useless to protect against threats?

Logging in to a VPS over SSH, then SSHing from it to a second VPS: does this improve the security of the second VPS?

SSH can be configured so that only a whitelisted IP can log in. I consider this important, since I'm going to provide public-facing web services, which will expose my server's IP. Moreover, I can't use a CDN, for reasons I'd prefer not to share.

However, I don't have a static public IP and do not control the network environment, so it is infeasible for me to whitelist my current public IP on the target server.

Thus, I decided to rent another VPS and use it as a midpoint. On this VPS, I follow SSH best practices but do not set up IP whitelisting. On the target server, I whitelist only the midpoint, and I then SSH to the target server via the midpoint. Both VPSes are from the same provider, and the midpoint will not be used for anything else.

Does it improve the security of the target server, or does it pose additional risks to it?

Is this problem better solved using other methods?
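For reference, the hop I have in mind can be expressed directly in ~/.ssh/config with OpenSSH's ProxyJump, so a single command reaches the target through the midpoint (hostnames, users, IPs, and key paths below are placeholders, not my real setup):

```
# ~/.ssh/config — all names and addresses are placeholders
Host midpoint
    HostName 203.0.113.10          # the midpoint VPS (no IP whitelist)
    User deploy
    IdentityFile ~/.ssh/midpoint_ed25519

Host target
    HostName 198.51.100.20         # the target VPS (whitelists only the midpoint's IP)
    User deploy
    IdentityFile ~/.ssh/target_ed25519
    ProxyJump midpoint             # tunnel the connection through the midpoint
```

With this in place, `ssh target` opens the connection via the midpoint automatically, end-to-end encrypted, without an interactive login on the midpoint itself.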

Looking for feedback to improve user testing and usability platform

Hi everyone!
I am working on the development of an online user testing and usability platform to help web and UX designers, UX researchers, and front-end testers. We are looking for feedback from web designers, which is why I am posting here. If you tried it out and gave us your opinions, that would be very helpful. Please register HERE.
We will also activate 30 days of free full access if you email us the address you used to register, so you can improve your…

How can I improve this homebrew magic item so that it remains balanced across a wide range of levels?

I am considering the following homebrew magic item for my campaign:

Ring, rare (requires attunement)

While attuned to this ring, when you cast a cantrip that targets a single creature (but does not have a range of Self), you may choose to expend a spell slot to have that cantrip target an additional number of creatures equal to the level of the spell slot. Any additional creatures targeted must be valid targets of the spell, and no creature may be targeted more than once.

If the cantrip requires an attack roll, make a separate attack and damage roll for each creature. If the cantrip requires a saving throw, each target makes a separate saving throw but takes damage based on a single damage roll.

I really like how this feels for low-level parties. For example, a 3rd-level wizard using a 2nd-level spell slot to attack 3 creatures with Fire Bolt for 1d10 damage each seems perfectly reasonable. I am NOT happy, however, with the idea of a 17th-level wizard using a 9th-level spell slot to attack 10 creatures for 4d10 damage each.

How could this magic item be improved so that it remains interesting for low-level characters without being so powerful at higher levels? Or am I overestimating its usefulness at high levels?

How can I improve the performance of this query?

I've tried adding an index on records.status, but the cardinality is so low (fewer than 5 unique values) that a seq scan was still used.
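For reference, the index I tried was along these lines:

```sql
-- Plain b-tree index on the low-cardinality status column;
-- the planner still preferred a seq scan over records.
CREATE INDEX records_status_index ON records (status);
```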

The query:

select distinct "events".*
from "events"
inner join (
    select "records".*
    from "records"
    where "status" = 'Mined'
) as "record"
    on "record"."guid" = CAST("events"."return_values" #>> '{guid}' AS text)
where "events"."status" = 'Waiting'
  and "event" in ('RecordUpdated', 'RecordDiscovery', 'RecordRetrieval')
order by "block_number" asc, "transaction_index" asc, "transaction_hash" asc, "log_index" asc
limit 100;

The execution plan looks like:

 Limit  (cost=122673.00..122677.25 rows=100 width=1639) (actual time=575.631..575.632 rows=0 loops=1)
   ->  Unique  (cost=122673.00..124045.11 rows=32285 width=1639) (actual time=548.071..548.071 rows=0 loops=1)
         ->  Sort  (cost=122673.00..122753.71 rows=32285 width=1639) (actual time=548.068..548.069 rows=0 loops=1)
               Sort Key: events.block_number, events.transaction_index, events.transaction_hash, events.log_index, events.event_id, events.block_hash, events.address, events.return_values, events.event, events.signature, events.raw, events.processing_error, events.confirmations, events.created_at, events.updated_at
               Sort Method: quicksort  Memory: 25kB
               ->  Gather  (cost=47192.22..97302.09 rows=32285 width=1639) (actual time=548.048..571.664 rows=0 loops=1)
                     Workers Planned: 2
                     Workers Launched: 2
                     ->  Parallel Hash Join  (cost=46192.22..93073.59 rows=13452 width=1639) (actual time=526.239..526.241 rows=0 loops=3)
                           Hash Cond: ((events.return_values #>> '{guid}'::text[]) = (records.guid)::text)
                           ->  Parallel Index Scan using events_status_index on events  (cost=0.43..46843.84 rows=13804 width=1639) (actual time=0.106..36.976 rows=16870 loops=3)
                                 Index Cond: (status = 'Waiting'::event_status)
                                 Filter: ((event)::text = ANY ('{RecordUpdated,RecordDiscovery,RecordRetrieval}'::text[]))
                                 Rows Removed by Filter: 1
                           ->  Parallel Hash  (cost=42957.65..42957.65 rows=258731 width=44) (actual time=462.763..462.764 rows=205834 loops=3)
                                 Buckets: 1048576  Batches: 1  Memory Usage: 56672kB
                                 ->  Parallel Seq Scan on records  (cost=0.00..42957.65 rows=258731 width=44) (actual time=16.322..244.807 rows=205834 loops=3)
                                       Filter: (status = 'Mined'::record_status)
                                       Rows Removed by Filter: 6625
 Planning Time: 0.432 ms
 JIT:
   Functions: 50
   Options: Inlining false, Optimization false, Expressions true, Deforming true
   Timing: Generation 8.788 ms, Inlining 0.000 ms, Optimization 3.711 ms, Emission 71.684 ms, Total 84.183 ms
 Execution Time: 602.684 ms
(25 rows)

events table:

                                         Table ""
      Column       |           Type           | Collation | Nullable |              Default
-------------------+--------------------------+-----------+----------+------------------------------------
 event_id          | character varying(255)   |           | not null |
 log_index         | integer                  |           | not null |
 transaction_index | integer                  |           | not null |
 transaction_hash  | character varying(255)   |           | not null |
 block_hash        | character varying(255)   |           | not null |
 block_number      | integer                  |           | not null |
 address           | character varying(255)   |           | not null |
 return_values     | jsonb                    |           | not null |
 event             | character varying(255)   |           | not null |
 signature         | character varying(255)   |           | not null |
 raw               | jsonb                    |           | not null |
 status            | event_status             |           | not null |
 processing_error  | character varying(255)   |           |          |
 confirmations     | integer                  |           | not null |
 created_at        | timestamp with time zone |           |          |
 updated_at        | timestamp with time zone |           |          |
 id                | integer                  |           | not null | nextval('events_id_seq'::regclass)
Indexes:
    "events_pkey" PRIMARY KEY, btree (id)
    "events_block_log" btree (block_number, log_index)
    "events_status_index" btree (status)

records table:

                           Table "public.records"
      Column      |           Type           | Collation | Nullable | Default
------------------+--------------------------+-----------+----------+---------
 guid             | character varying(255)   |           | not null |
 data_hash        | character varying(255)   |           | not null |
 transaction_hash | character varying(255)   |           | not null |
 url              | character varying(255)   |           | not null |
 status           | record_status            |           | not null |
 created_at       | timestamp with time zone |           |          |
 updated_at       | timestamp with time zone |           |          |
 client_id        | character varying(64)    |           | not null |
 event_id         | character varying(255)   |           |          |
 user_id          | character varying(255)   |           |          |
Indexes:
    "records_pkey" PRIMARY KEY, btree (guid)
    "records_client_id_index" btree (client_id)
    "records_event_id_index" btree (event_id)
    "records_user_id_index" btree (user_id)
Foreign-key constraints:
    "records_client_id_foreign" FOREIGN KEY (client_id) REFERENCES clients(id) ON DELETE CASCADE
Referenced by:
    TABLE "discoveries" CONSTRAINT "discoveries_record_guid_foreign" FOREIGN KEY (record_guid) REFERENCES records(guid) ON DELETE CASCADE
    TABLE "record_dependencies" CONSTRAINT "record_dependencies_record_foreign" FOREIGN KEY (record) REFERENCES records(guid) ON DELETE CASCADE
    TABLE "retrievals" CONSTRAINT "retrievals_record_guid_foreign" FOREIGN KEY (record_guid) REFERENCES records(guid) ON DELETE CASCADE

Any tips would be greatly appreciated.
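For what it's worth, one direction I've been wondering about (untested, and the index names are my own invention) is a pair of indexes that match the join key and the filters more closely: an expression index on the extracted guid for waiting events, and a partial index on the mined records:

```sql
-- Partial expression index so the join key can be read from an index
-- for the rows that actually match the events filter
CREATE INDEX events_waiting_guid_idx
    ON events ((return_values #>> '{guid}'))
    WHERE status = 'Waiting';

-- Partial index covering only the mined records the join looks up
CREATE INDEX records_mined_guid_idx
    ON records (guid)
    WHERE status = 'Mined';
```

I have no idea yet whether the planner would prefer these over the parallel hash join, so this is just a sketch of what I had in mind.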

Improve MySQL query efficiency for first row in a group

I've written the query below in MySQL to get the top 10 landing pages across all browser sessions.

Reading other similar posts about how to access the first row in a group, it seemed like the solution was the following:

SELECT MIN(`created_at`) AS `created_at`, `session_token`, `url`
FROM `session`
GROUP BY `session_token`;

This produced incorrect results: MIN() applies only to the specified column, and the other, non-aggregated columns can be picked from any row within the group.

I amended the query to the one below which produces the correct result:

SELECT `b`.`created_at`, `b`.`session_token`, `b`.`url`
FROM (
    SELECT MIN(`created_at`) AS `created_at`, `session_token`, `url`
    FROM `session`
    GROUP BY `session_token`
) `a`
INNER JOIN `session` `b` USING (`session_token`, `created_at`);

Building on that, the full solution below produces the correct results; however, it now uses two nested subqueries.

SELECT `c`.`url`, COUNT(*) AS `hits`
FROM (
    SELECT `b`.`created_at`, `b`.`session_token`, `b`.`url`
    FROM (
        SELECT MIN(`created_at`) AS `created_at`, `session_token`, `url`
        FROM `session`
        GROUP BY `session_token`
    ) `a`
    INNER JOIN `session` `b` USING (`session_token`, `created_at`)
) AS `c`
GROUP BY `c`.`url`
ORDER BY `hits` DESC
LIMIT 10;

I've only tested it on a small dataset, and it doesn't seem particularly fast. Could it be rewritten to be more efficient?
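One alternative I've been considering, if MySQL 8.0's window functions are available (I haven't benchmarked it), is ranking rows per session instead of joining back on the MIN():

```sql
-- Number each session's rows by created_at, keep only the first (the
-- landing page), then count hits per URL. Requires MySQL 8.0+.
SELECT `url`, COUNT(*) AS `hits`
FROM (
    SELECT `url`,
           ROW_NUMBER() OVER (PARTITION BY `session_token`
                              ORDER BY `created_at`) AS `rn`
    FROM `session`
) AS `first_hits`
WHERE `rn` = 1
GROUP BY `url`
ORDER BY `hits` DESC
LIMIT 10;
```

This avoids the self-join and the double GROUP BY, but I don't know whether it would actually be faster on my data.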

How can I improve this slow query in my wordpress site?

The first slow query (the profiler attributes it to _pad_term_counts() in the theme; 259514 rows, 2.0440 s):

SELECT object_id, term_taxonomy_id
FROM wp_term_relationships
INNER JOIN wp_posts ON object_id = ID
WHERE term_taxonomy_id IN (525627,516360,525519,535782,517555,525186,517572,549564,1,517754,541497,541472,525476,549563,517633,524859,702393,541604,543483,524646,525001,550518,541516,525244,549565,517376,535783,524642,25,533395,533537,525475,2,705306,524684,525065,939122,541603,525523,533491,541590,702713,550724,525243,533634,525122,541498,549586,546982,21,524643,541478,525435,535784,541471,516611,535781,541638,516142,533416,546984,524999,533453,524682,704994,516579,516189,524644,517378,525185,541508,517634,705305,524858,517632,541637,517699,525064,517573,772367,516609,517375,525474,507436,524918,517635,541929,22,54,53,705119,524685,524683,516577,536343,191228,524915,524917,516298,541573,546983,515904,541601,56,517377,524645,517707,515905,516297,515903,517708,533635,516296,516578,517750,517554,516016,525123,533538,541625,525187,705307,55,191226,19,24,516299,541466,524916,772366,555654,516612,541503,191227,550302,991853,920642,191229,535829,525582,525524,524919,524720,525841,517636,541504,525184,525520,541562,525433,541563,516610)
  AND post_type IN ('post')
  AND post_status = 'publish'

The second slow query:

SELECT wp_posts.ID
FROM wp_posts
LEFT JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id)
WHERE 1=1
  AND wp_posts.ID NOT IN (391534)
  AND ( wp_term_relationships.term_taxonomy_id IN (2,516296,517375,517376,517377,517378,517554,517555,517572,517573,517632,517633,517634,517635,517636,517699,517707,517708,517750,517754,524858,524859,524915,524916,524917,524918,524919,524999,525001,525064,525065,525185,525186,525187,525519,525520,525523,525524,525582,525841,533395,533416,533453,535782,535783,535784,535829,536343,549563,549564,549565,549586,550302,550518,550724,555654,702393,702713,704994,705119,705305,705306,705307,772366,772367,920642,939122,991853) )
  AND wp_posts.post_type = 'post'
  AND ((wp_posts.post_status = 'publish'))
GROUP BY wp_posts.ID
ORDER BY wp_posts.post_date DESC
LIMIT 0, 6

How can I improve these queries? I have many posts, and the queries are taking about 2 seconds each.


I also found this extra info that I think would help…

Why do you think modifying WordPress core tables is a good idea? – Krzysiek Dróżdż♦ Jun 12 '15 at 4:21

I really don't think it's a good idea, but a necessary one when running WordPress with this amount of posts, combined with MySQL's limitation of not having descending indexes. The filesorts caused by the ORDER BY operations are a deal breaker for us in regards to site performance. – Ranknoodle Jun 15 '15 at 2:34

But these operations are slow since you're doing it wrong. In some projects we had a similar issue but came to a very different solution that didn't modify core tables. We created our own table and used it as an indexing/search table, so every slow query searched only this one table (no joins needed). (And we had much more data, AFAIR.) – Krzysiek Dróżdż♦ Jun 15 '15 at 5:06

Hi Krzysiek, can you explain a little more about the indexing/search table you created? For example, for the slow query outlined in the original question, would I create a table to store the post ID, reverse_post_id, post_type, etc. and only query against that? – Ranknoodle Jun 15 '15 at 16:31

Send me an e-mail, I'll try to elaborate on that method. – Krzysiek Dróżdż♦ Jun 15 '15 at 16:34

But I have no idea what method he actually used.
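My rough guess at the method (purely my own sketch, not what was actually done; all names are invented) is a small denormalized lookup table, kept up to date on post save, so the hot query reads a single indexed table with no joins:

```sql
-- Hypothetical indexing/search table; maintained by a save_post hook
CREATE TABLE wp_post_index (
    post_id     BIGINT UNSIGNED NOT NULL,
    taxonomy_id BIGINT UNSIGNED NOT NULL,
    post_date   DATETIME        NOT NULL,
    PRIMARY KEY (taxonomy_id, post_date, post_id)
) ENGINE=InnoDB;

-- The slow query would then become a lookup on one table.
-- A post in several terms appears once per term, hence the GROUP BY.
SELECT post_id
FROM wp_post_index
WHERE taxonomy_id IN (2, 516296, 517375)   -- abbreviated term list
  AND post_id <> 391534
GROUP BY post_id
ORDER BY MAX(post_date) DESC
LIMIT 0, 6;
```

Again, this is only my speculation about what an "indexing table" approach might look like.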

How do ability score improvements improve modifiers?

First time player in a group figuring out things as we all go.

I just hit level four as a Ranger, and I get to improve one ability score of my choice by 2, or I can increase two ability scores of my choice by 1.

My first thought was Strength, as I've been rocking a score of 11 with a mod of +0 so far. But I read that Rangers use (Wisdom mod + 10) for their spellcasting modifier. Currently my Wisdom is 13, with a mod of +1. If I increase either STR or WIS, will the modifier increase as well? How and why?

A Monk in our party went from 9 CON to 11 during their ability score improvement, and by doing so their modifier went from −1 to 0.

We're all pretty new, so the eight of us are currently sharing one book while trying to track down our own copies for later campaigns. That's making it… inefficient for us to learn these things quickly.

Is there any way to improve ability score without items, only with spells or class features?

I found a few options, but they are all bad, like True Polymorph to improve ability scores (at the cost of losing my class features). Does anyone know a way to improve ability scores without losing class features and without using magic items, only spells and class shenanigans? Temporary and permanent increases are both welcome; my objective is to improve my warlock's damage from Lifedrinker.

Improve SEO by forcing web crawlers to read a .csv file when searching for keywords

I am trying to improve the SEO of my website, and I recently ran my first custom-coded website through an online SEO tester.

I am trying to improve the number of unique keywords and the amount of textual content crawled, and I'm hoping to use the .csv file I created for the plotly.js sunburst chart. I followed this example.

Right now I think the best way to allow access to the .csv would be through the robots.txt file, but I have not been able to confirm that this approach will help. I'm new to the web development world, so I apologize if the question is primitive. Any help is appreciated.
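In case it clarifies what I mean, the robots.txt I had in mind is along these lines (the path is just an example, not my site's real layout):

```
# robots.txt — example path only
User-agent: *
Allow: /data/sunburst.csv
```

Though I'm not sure whether allowing the file to be crawled would actually get its contents indexed as keyword text, which is really the heart of my question.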