What should the GM do when players constantly argue?

I’ve got a group that, no matter what I do, always ends up arguing about every single situation I put in front of them. This ranges from the simpler questions, like “Where do you want to go first in town?”, to the larger ones, like “What is the general goal of the group?”.

While I don’t think a bit of discussion on a topic is bad — in fact, it can be very good, because it means they’re taking it seriously — some sessions I sit there listening to them bicker for longer than we actually play the game. This is not an exaggeration: it isn’t unusual for them to go on for about half an hour.

Of course, this also allows players with more force of personality to dominate the leadership roles in the group. Once again, this isn’t necessarily bad. Every group needs a leader, if only to keep the more passive players moving along.

My question to all of the more experienced GMs: Is there a happy medium for discussion, or is the idea just a pipe dream?

Edit: By request, here is some more information. The arguments seem to be about anything and everything, but are usually about what the group should do as a whole: where they should go, what they should accomplish, which missions they should take. This has led to the more passive gamers capitulating quickly and leaving the more forceful players to do as they please, even when I know those passive gamers wanted to stay in the local in-game area to explore. I know that at least one of my stronger players isn’t aiming for that; he’s simply playing his character to the hilt.

How do you get a secure bastion host if your IP address is constantly changing?

I am setting things up on AWS and wondering how to set up a secure bastion host. Every guide says to allow access only from your own IP address, but how can I do that when my IP address changes every few hours or days (on my home Wi-Fi, at coffee shops, etc.)? What is best practice here for SSHing into a bastion host while limiting access to specific IP addresses? If that isn’t possible, what is the next best alternative?
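One common pattern is a small helper script that re-points the bastion’s security-group rule at whatever your current public IP is each time it changes: revoke the old /32 rule, then authorize a new one. Below is a rough sketch of the parameter structure that boto3’s EC2 `authorize_security_group_ingress` call accepts. The group ID and IP address are made-up placeholders, and fetching your current public IP (e.g. from a “what is my IP” service) is left out:

```python
def ssh_ingress_params(group_id: str, my_ip: str) -> dict:
    """Build the keyword arguments for EC2 AuthorizeSecurityGroupIngress
    (and the matching RevokeSecurityGroupIngress) that restrict SSH
    (TCP port 22) to a single /32 address.

    group_id and my_ip are placeholder values for illustration; in a
    real script my_ip would come from a "what is my IP" lookup.
    """
    return {
        "GroupId": group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{
                "CidrIp": f"{my_ip}/32",
                "Description": "current workstation",
            }],
        }],
    }

# Hypothetical usage with a boto3 EC2 client:
#   ec2.revoke_security_group_ingress(**ssh_ingress_params(sg, old_ip))
#   ec2.authorize_security_group_ingress(**ssh_ingress_params(sg, new_ip))
params = ssh_ingress_params("sg-0123456789abcdef0", "203.0.113.7")
```

Run from a cron job or a shell alias whenever your network changes, this keeps the security group allowing SSH from exactly one address at a time.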

GoogleBot constantly crawls non-existent PDFs with ‘viagra’ or ‘cialis’ in the name?

My access log is full of requests from Googlebot for non-existent PDFs with ‘viagra’, ‘cialis’, or similar drug names (user-agent: Googlebot/2.1 (+http://www.google.com/bot.html); IP range: 66.249.64.*):


    /WdjUZ/LXWKZ/cialis-hearing-loss.pdf
    /WKYnZ/viagra-questions.pdf
    /ZWohZ/LhfkZ/canadian-viagra-and-healthcare.pdf
    /YSXaZ/XnoZZ/buy-propecia-no-prescription.pdf
    /MRWQZ/MeWXZ/TZlWZ/UlaMZ/drug-manufacturers-buy-softtabs-viagra.pdf
    /PnddZ/NKdZZ/generic-viagra-no-prescription-australia.pdf
    /QQWVZ/RoRbZ/URObZ/LdNgZ/levitra-10-mg-order.pdf

Why would Google think these are URLs that need to be crawled? If someone else is hosting a page with links like this, what purpose would it serve?

Is there any harm in creating a robots.txt rule telling Google to stop, like so:

    User-agent: *
    Disallow: /*viagra*.pdf$
    Disallow: /*cialis*.pdf$
    Disallow: /*propecia*.pdf$
    Disallow: /*levitra*.pdf$
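Google documents that in robots.txt patterns `*` matches any sequence of characters and a trailing `$` anchors the end of the URL. A quick way to sanity-check that rules like these would actually cover the logged URLs is to translate them into regexes. This is a rough sketch of that documented wildcard behaviour, not Google’s actual matcher:

```python
import re

def google_rule_to_regex(rule: str) -> "re.Pattern":
    """Rough translation of a robots.txt Disallow pattern into a regex.

    Per Google's documented handling: '*' matches any character
    sequence, and a trailing '$' anchors the match at the end of the
    URL path. This is an approximation for testing, not Google's
    real implementation.
    """
    anchored = rule.endswith("$")
    body = rule[:-1] if anchored else rule
    # Escape literal parts and join them with '.*' where '*' appeared.
    regex = "^" + ".*".join(re.escape(part) for part in body.split("*"))
    if anchored:
        regex += "$"
    return re.compile(regex)

rule = google_rule_to_regex("/*viagra*.pdf$")
print(bool(rule.match("/WKYnZ/viagra-questions.pdf")))           # True
print(bool(rule.match("/WdjUZ/LXWKZ/cialis-hearing-loss.pdf")))  # False
```

Checking each proposed rule this way against a sample of the access log confirms the four patterns together cover the spam URLs.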

PostgreSQL 9.6.12 autovacuum constantly running on system tables

I come from a SQL Server, Oracle, and Sybase DBA background, but I am now looking into an AWS Aurora cluster running PostgreSQL 9.6.12 and have noticed something I think is odd. Maybe it isn’t, which is why I am here to ask. I have looked everywhere but cannot find an answer. The default autovacuum and autoanalyze values are still set. Autovacuum does eventually get around to the application tables, but it seems to spend most of its time repeatedly vacuuming and analyzing a small set of system tables. They are:

  1. pg_type
  2. pg_shdepend
  3. pg_attribute
  4. pg_class
  5. pg_depend

I am seeing this both through AWS Performance Insights data and also through direct queries to the database instance using this code:

    WITH rel_set AS (
        SELECT
            oid,
            CASE split_part(split_part(array_to_string(reloptions, ','), 'autovacuum_analyze_threshold=', 2), ',', 1)
                WHEN '' THEN NULL
                ELSE split_part(split_part(array_to_string(reloptions, ','), 'autovacuum_analyze_threshold=', 2), ',', 1)::BIGINT
            END AS rel_av_anal_threshold,
            CASE split_part(split_part(array_to_string(reloptions, ','), 'autovacuum_vacuum_threshold=', 2), ',', 1)
                WHEN '' THEN NULL
                ELSE split_part(split_part(array_to_string(reloptions, ','), 'autovacuum_vacuum_threshold=', 2), ',', 1)::BIGINT
            END AS rel_av_vac_threshold,
            CASE split_part(split_part(array_to_string(reloptions, ','), 'autovacuum_analyze_scale_factor=', 2), ',', 1)
                WHEN '' THEN NULL
                ELSE split_part(split_part(array_to_string(reloptions, ','), 'autovacuum_analyze_scale_factor=', 2), ',', 1)::NUMERIC
            END AS rel_av_anal_scale_factor,
            CASE split_part(split_part(array_to_string(reloptions, ','), 'autovacuum_vacuum_scale_factor=', 2), ',', 1)
                WHEN '' THEN NULL
                ELSE split_part(split_part(array_to_string(reloptions, ','), 'autovacuum_vacuum_scale_factor=', 2), ',', 1)::NUMERIC
            END AS rel_av_vac_scale_factor
        FROM pg_class
    )
    SELECT
        PSUT.relname,
        -- to_char(PSUT.last_analyze, 'YYYY-MM-DD HH24:MI')  AS last_analyze,
        to_char(PSUT.last_autoanalyze, 'YYYY-MM-DD HH24:MI') AS last_autoanalyze,
        -- to_char(PSUT.last_vacuum, 'YYYY-MM-DD HH24:MI')   AS last_vacuum,
        to_char(PSUT.last_autovacuum, 'YYYY-MM-DD HH24:MI')  AS last_autovacuum,
        to_char(C.reltuples, '9G999G999G999')                AS n_tup,
        to_char(PSUT.n_dead_tup, '9G999G999G999')            AS dead_tup,
        to_char(coalesce(RS.rel_av_anal_threshold, current_setting('autovacuum_analyze_threshold')::BIGINT)
                + coalesce(RS.rel_av_anal_scale_factor, current_setting('autovacuum_analyze_scale_factor')::NUMERIC) * C.reltuples,
                '9G999G999G999')                             AS av_analyze_threshold,
        to_char(coalesce(RS.rel_av_vac_threshold, current_setting('autovacuum_vacuum_threshold')::BIGINT)
                + coalesce(RS.rel_av_vac_scale_factor, current_setting('autovacuum_vacuum_scale_factor')::NUMERIC) * C.reltuples,
                '9G999G999G999')                             AS av_vacuum_threshold,
        CASE
            WHEN (coalesce(RS.rel_av_anal_threshold, current_setting('autovacuum_analyze_threshold')::BIGINT)
                  + coalesce(RS.rel_av_anal_scale_factor, current_setting('autovacuum_analyze_scale_factor')::NUMERIC) * C.reltuples) < PSUT.n_dead_tup
            THEN '*'
            ELSE ''
        END AS expect_av_analyze,
        CASE
            WHEN (coalesce(RS.rel_av_vac_threshold, current_setting('autovacuum_vacuum_threshold')::BIGINT)
                  + coalesce(RS.rel_av_vac_scale_factor, current_setting('autovacuum_vacuum_scale_factor')::NUMERIC) * C.reltuples) < PSUT.n_dead_tup
            THEN '*'
            ELSE ''
        END AS expect_av_vacuum,
        PSUT.autoanalyze_count,
        PSUT.autovacuum_count
    FROM pg_stat_all_tables PSUT
    JOIN pg_class C  ON PSUT.relid = C.oid
    JOIN rel_set RS  ON PSUT.relid = RS.oid
    ORDER BY PSUT.autoanalyze_count DESC; --C.reltuples

At first I thought it might be due to a lot of temporary tables being created and destroyed, or something similar, since I would periodically see the number of tuples in several of the tables above go from roughly 8,000 to 8,000,000 and then back again. But I haven’t been able to find any evidence of temp table creation, and the offshore developers say they don’t use them.

Is this sort of behaviour normal in stock PostgreSQL or in Aurora (PostgreSQL)? If it isn’t, is there anything anyone could suggest looking at to ascertain what may be happening? The database is about a terabyte in size, on an instance with 122 GB of RAM (75% allocated to shared_buffers, the default for Aurora).

I am looking to change the autovacuum settings from the defaults to better handle this database’s much larger tables, but I wanted to make sure that wouldn’t be a waste of time if the tables in question simply monopolise autovacuum/autoanalyse’s time.

Current settings (from pg_settings):

  • autovacuum: on
  • autovacuum_analyze_scale_factor: 0.05
  • autovacuum_analyze_threshold: 50
  • autovacuum_freeze_max_age: 200000000
  • autovacuum_max_workers: 3
  • autovacuum_multixact_freeze_max_age: 400000000
  • autovacuum_naptime: 5
  • autovacuum_vacuum_cost_delay: 5
  • autovacuum_vacuum_cost_limit: -1
  • autovacuum_vacuum_scale_factor: 0.1
  • autovacuum_vacuum_threshold: 50
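For context, the trigger point the query above computes follows the standard PostgreSQL formula: autovacuum picks a table up once its dead tuples exceed threshold + scale_factor * reltuples. A quick sketch in plain Python with the vacuum values from these settings shows why small catalog tables are revisited so often while large application tables wait:

```python
def autovacuum_trigger(reltuples: int,
                       vacuum_threshold: int = 50,
                       vacuum_scale_factor: float = 0.1) -> float:
    """Dead tuples needed before autovacuum processes a table, per the
    standard PostgreSQL formula: threshold + scale_factor * reltuples.
    Defaults mirror the settings listed above."""
    return vacuum_threshold + vacuum_scale_factor * reltuples

# A catalog table at ~8,000 tuples crosses its trigger point quickly:
print(autovacuum_trigger(8_000))      # 850.0
# ...while at the 8,000,000-tuple peak it needs far more dead tuples:
print(autovacuum_trigger(8_000_000))  # 800050.0
```

So if something (temp tables, frequent DDL) is churning pg_class, pg_attribute, and friends, their low trigger points mean autovacuum will keep coming back to them.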

In summary, what I am asking is: Is it normal for a small set of system tables to be constantly and consistently autovacuumed? Any insights would be greatly appreciated.

Players constantly capture enemies and question them, how to deal with it?

I’m running a campaign for five sixth-level players, and the bard has hypnotic pattern. Whenever the party runs into a fight with enemies who can speak, they always find some way of incapacitating an enemy or dealing non-lethal damage so they can question them later.

This always forces me to invent things for an otherwise silent NPC to say, and it feels bad to have them say nothing and “force” the party to just kill them.


Is there any smooth roleplaying way to handle this situation?

Am I the only one who can’t help but constantly think back on all the horrible security nightmares (and potential ones) from my past? [closed]

I’m not going to go into any specifics here, because I’ve learned that “certain” open source projects are extremely sensitive to criticism. They will defend any kind of madness by blaming the user for not fully reading and comprehending the often extremely cryptic and ambiguous manual, while putting no blame on the software for allowing the insecure configuration in the first place.

You may think I’m joking, but even 15-20 years later I still have actual nightmares about things that happened, could have happened, or may have happened without me ever finding out, due to bad security on my part and others’ (affecting me).

A huge issue with security is not realizing one’s own limitations and having a big ego. You know how it is: you are 15 or 16 and you are totally a “1337” cracker, aren’t you? Everyone else is stupid and you know everything. You couldn’t possibly misconfigure or misunderstand anything, right? The manual only seems to confirm what you already knew… yes, that seems correct… yes, now let’s put it live! It’s rock-solid!

… and then it turns out that somebody has been remotely connecting from across the world to your ultra-sensitive database for the last eight months and saved all your private information to blackmail you perpetually. Because it allowed any password as long as they guessed the default admin account. Which was supposed to only be accessible from

The most frustrating part is perhaps that, now that I know about many of these things, I find it utterly impossible to educate others about it. They are just as deaf to my advice as I would’ve been to advice from others back when I thought it was a good idea to enable that stupid mode which bypasses all passwords, because even though I read the manual, I read it horribly wrong!

I actually remember saying to myself: “Well, they can’t mean that ANY password goes, because that would defeat the entire purpose of having passwords in the first place, so I can safely conclude that they didn’t mean this.”

They meant it.

Not one day goes by without me thinking back on all these stupid decisions, and while I know very well that it does me no good, I can’t help but be bombarded with these memories and thoughts. It’s easy to laugh at it now and shake one’s head, but when I saved the config that fateful day and reloaded the service in question, I was 100% convinced that my database was fully locked down and that I was the only person in the world who would ever be able to access it, because I had read the manual as I was always instructed to do.

The above vague story is just one out of many such cases which I’ve experienced or heard of. Somehow, these experiences make me fully understand how there can be almost daily news of major critical databases exposed to the Internet with no password. They were simply set up by people who just didn’t understand what they were doing, and I consider it unfair to put all the blame on them.

I think a lot of software is made with a strange attitude to security, where you are harshly punished for not knowing everything about the software that the developers (but nobody else) know.

How does one track/incorporate errata in relation to printed rulebooks without having to memorize, or constantly check against, the errata?

I bought myself a shiny new rulebook for a relatively new system and, of course, there’s already a full document of errata!

In order to make this easier for myself, I sat down with a pen and sticky notes to mark the places in the book that have been errata’d. I found that the pen tended to smudge, and I don’t want to wait for each individual sticky note to dry as I make my way through the document.

Is there a good way to point to errata from within a rulebook, without risk of damage? I’m essentially looking for a method that:

  • Incorporates short errata directly into the book
  • Summarizes longer errata to incorporate directly into the book
  • Marks places where the errata document needs to be referenced
  • Does so in a legible, or easily intelligible, manner
  • Does all of this with minimal damage to the book itself, without diminishing the book’s longevity (smudging, page wear, stickers that begin to peel, &c.)

Relevant Meta: I want to ask about methods of incorporating errata into a rulebook, but I'm not sure of its subjectivity

Slow queries constantly getting stuck on WordPress database of ~100,000 posts

I am constantly getting stuck SELECT queries on my WordPress databases, like the following:

[screenshot: process list showing the stuck SELECT queries]

Many of the SELECT queries that get stuck are fairly ordinary, such as pulling an author’s last 10 posts or pulling 10 posts in a category; these are normal WordPress core queries, not from any plugin. The afflicted databases have around 100,000 rows in the wp_posts table and are around 1 GB in size. This is an example of the largest tables from one of the databases:

[screenshot: largest tables in one of the databases]

My dedicated server has 4 CPU cores @ 3.4 GHz and 8 GB of DDR4 RAM. Should these kinds of issues be happening with this server and these databases? What can I do so that normal WordPress queries always run without getting stuck? I have tried changing from MyISAM to InnoDB with no effect, as well as changing various settings in my.cnf; here it is currently:

    [mysqld]
    pid-file = /var/run/mysqld/mysqld.pid
    log-error=/var/lib/mysql/errorlog.err
    performance-schema=0
    default-storage-engine=MyISAM
    max_allowed_packet=268435456
    open_files_limit=10000
    slow_query_log=ON
    log_slow_verbosity=1
    innodb_buffer_pool_size=1G
    aria_pagecache_buffer_size=512M
    query_cache_size=0
    query_cache_type=0
    query_cache_limit=0
    join_buffer_size=512K
    tmp_table_size=32M
    max_heap_table_size=32M
    table_definition_cache=1200

And the result of running https://github.com/major/MySQLTuner-perl:

    [root@hostname ~]# perl mysqltuner.pl --host
     >>  MySQLTuner 1.7.19 - Major Hayden <major@mhtx.net>
     >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
     >>  Run with '--help' for additional options and output filtering
    [--] Skipped version check for MySQLTuner script
    [--] Performing tests on
    [OK] Currently running supported MySQL version 10.3.20-MariaDB-log
    [OK] Operating on 64-bit architecture

    -------- Log file Recommendations ------------------------------------------------------------------
    [OK] Log file /var/lib/mysql/errorlog.err exists
    [--] Log file: /var/lib/mysql/errorlog.err(764K)
    [OK] Log file /var/lib/mysql/errorlog.err is readable.
    [OK] Log file /var/lib/mysql/errorlog.err is not empty
    [OK] Log file /var/lib/mysql/errorlog.err is smaller than 32 Mb
    [!!] /var/lib/mysql/errorlog.err contains 1571 warning(s).
    [!!] /var/lib/mysql/errorlog.err contains 1424 error(s).
    [--] 124 start(s) detected in /var/lib/mysql/errorlog.err
    [--] 1) 2019-11-25  6:31:15 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 2) 2019-11-25  6:08:45 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 3) 2019-11-25  5:47:35 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 4) 2019-11-25  5:34:11 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 5) 2019-11-25  5:22:58 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 6) 2019-11-25  5:02:11 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 7) 2019-11-25  4:33:46 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 8) 2019-11-25  4:27:54 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 9) 2019-11-25  4:21:59 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 10) 2019-11-25  4:21:52 0 [Note] /usr/sbin/mysqld: ready for connections.
    [--] 123 shutdown(s) detected in /var/lib/mysql/errorlog.err
    [--] 1) 2019-11-25  6:31:08 0 [Note] /usr/sbin/mysqld: Shutdown complete
    [--] 2) 2019-11-25  6:08:26 0 [Note] /usr/sbin/mysqld: Shutdown complete
    [--] 3) 2019-11-25  5:47:27 0 [Note] /usr/sbin/mysqld: Shutdown complete
    [--] 4) 2019-11-25  5:33:59 0 [Note] /usr/sbin/mysqld: Shutdown complete
    [--] 5) 2019-11-25  5:22:55 0 [Note] /usr/sbin/mysqld: Shutdown complete
    [--] 6) 2019-11-25  5:02:05 0 [Note] /usr/sbin/mysqld: Shutdown complete
    [--] 7) 2019-11-25  4:33:42 0 [Note] /usr/sbin/mysqld: Shutdown complete
    [--] 8) 2019-11-25  4:27:47 0 [Note] /usr/sbin/mysqld: Shutdown complete
    [--] 9) 2019-11-25  4:21:58 0 [Note] /usr/sbin/mysqld: Shutdown complete
    [--] 10) 2019-11-25  4:21:48 0 [Note] /usr/sbin/mysqld: Shutdown complete

    -------- Storage Engine Statistics -----------------------------------------------------------------
    [--] Status: +Aria +CSV +InnoDB +MEMORY +MRG_MyISAM +MyISAM +PERFORMANCE_SCHEMA +SEQUENCE
    [--] Data in MyISAM tables: 2.3G (Tables: 1379)
    [--] Data in InnoDB tables: 2.7G (Tables: 284)
    [OK] Total fragmented tables: 0

    -------- Analysis Performance Metrics --------------------------------------------------------------
    [--] innodb_stats_on_metadata: OFF
    [OK] No stat updates during querying INFORMATION_SCHEMA.

    -------- Security Recommendations ------------------------------------------------------------------
    [OK] There are no anonymous accounts for any database users
    [OK] All database users have passwords assigned
    [--] There are 620 basic passwords in the list.

    -------- CVE Security Recommendations --------------------------------------------------------------
    [OK] NO SECURITY CVE FOUND FOR YOUR VERSION

    -------- Performance Metrics -----------------------------------------------------------------------
    [--] Up for: 3m 3s (36K q [199.732 qps], 1K conn, TX: 222M, RX: 33M)
    [--] Reads / Writes: 96% / 4%
    [--] Binary logging is disabled
    [--] Physical Memory     : 7.6G
    [--] Max MySQL memory    : 2.1G
    [--] Other process memory: 0B
    [--] Total buffers: 1.7G global + 3.2M per thread (151 max threads)
    [--] P_S Max memory usage: 0B
    [--] Galera GCache Max memory usage: 0B
    [OK] Maximum reached memory usage: 1.7G (22.90% of installed RAM)
    [OK] Maximum possible memory usage: 2.1G (28.14% of installed RAM)
    [OK] Overall possible memory usage with other process is compatible with memory available
    [OK] Slow queries: 0% (11/36K)
    [OK] Highest usage of available connections: 14% (22/151)
    [OK] Aborted connections: 0.00%  (0/1174)
    [!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
    [OK] Query cache is disabled by default due to mutex contention on multiprocessor machines.
    [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 6K sorts)
    [!!] Joins performed without indexes: 50
    [!!] Temporary tables created on disk: 70% (1K on disk / 2K total)
    [OK] Thread cache hit rate: 98% (22 created / 1K connections)
    [OK] Table cache hit rate: 98% (444 open / 450 opened)
    [!!] table_definition_cache(1200) is lower than number of tables(1928)
    [OK] Open file limit used: 5% (580/10K)
    [OK] Table locks acquired immediately: 99% (25K immediate / 25K locks)

    -------- Performance schema ------------------------------------------------------------------------
    [--] Performance schema is disabled.
    [--] Memory used by P_S: 0B
    [--] Sys schema is installed.

    -------- ThreadPool Metrics ------------------------------------------------------------------------
    [--] ThreadPool stat is enabled.
    [--] Thread Pool Size: 8 thread(s).
    [--] Using default value is good enough for your version (10.3.20-MariaDB-log)

    -------- MyISAM Metrics ----------------------------------------------------------------------------
    [!!] Key buffer used: 35.6% (47M used / 134M cache)
    [OK] Key buffer size / total MyISAM indexes: 128.0M/170.4M
    [OK] Read Key buffer hit rate: 99.9% (22M cached / 22K reads)
    [OK] Write Key buffer hit rate: 99.3% (824 cached / 818 writes)

    -------- InnoDB Metrics ----------------------------------------------------------------------------
    [--] InnoDB is enabled.
    [--] InnoDB Thread Concurrency: 0
    [OK] InnoDB File per table is activated
    [!!] InnoDB buffer pool / data size: 1.0G/2.7G
    [!!] Ratio InnoDB log file size / InnoDB Buffer pool size (9.375 %): 48.0M * 2/1.0G should be equal to 25%
    [!!] InnoDB buffer pool <= 1G and Innodb_buffer_pool_instances(!=1).
    [--] Number of InnoDB Buffer Pool Chunk : 8 for 8 Buffer Pool Instance(s)
    [OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
    [OK] InnoDB Read buffer efficiency: 99.71% (10795186 hits/ 10826096 total)
    [OK] InnoDB Write log efficiency: 98.51% (39722 hits/ 40324 total)
    [OK] InnoDB log waits: 0.00% (0 waits / 602 writes)

    -------- AriaDB Metrics ----------------------------------------------------------------------------
    [--] AriaDB is enabled.
    [OK] Aria pagecache size / total Aria indexes: 512.0M/1B
    [OK] Aria pagecache hit rate: 98.7% (130K cached / 1K reads)

    -------- TokuDB Metrics ----------------------------------------------------------------------------
    [--] TokuDB is disabled.

    -------- XtraDB Metrics ----------------------------------------------------------------------------
    [--] XtraDB is disabled.

    -------- Galera Metrics ----------------------------------------------------------------------------
    [--] Galera is disabled.

    -------- Replication Metrics -----------------------------------------------------------------------
    [--] Galera Synchronous replication: NO
    [--] No replication slave(s) for this server.
    [--] Binlog format: MIXED
    [--] XA support enabled: ON
    [--] Semi synchronous replication Master: OFF
    [--] Semi synchronous replication Slave: OFF
    [--] This is a standalone server

    -------- Recommendations ---------------------------------------------------------------------------
    General recommendations:
        Control warning line(s) into /var/lib/mysql/errorlog.err file
        Control error line(s) into /var/lib/mysql/errorlog.err file
        MySQL was started within the last 24 hours - recommendations may be inaccurate
        Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
        We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
                 See https://dev.mysql.com/doc/internals/en/join-buffer-size.html
                 (specially the conclusions at the bottom of the page).
        When making adjustments, make tmp_table_size/max_heap_table_size equal
        Reduce your SELECT DISTINCT queries which have no LIMIT clause
        Performance schema should be activated for better diagnostics
    Variables to adjust:
        join_buffer_size (> 512.0K, or always use indexes with JOINs)
        tmp_table_size (> 32M)
        max_heap_table_size (> 32M)
        table_definition_cache(1200) > 1928 or -1 (autosizing if supported)
        performance_schema = ON enable PFS
        innodb_buffer_pool_size (>= 2.7G) if possible.
        innodb_log_file_size should be (=128M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
        innodb_buffer_pool_instances (=1)
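For reference, the tuner’s “Variables to adjust” section translates into my.cnf form roughly as follows. The exact sizes are judgment calls derived from its output above (for example, the log-file size is picked so that two log files come to about 25% of a 3G buffer pool), so treat this as a starting point rather than a definitive tuning:

    [mysqld]
    skip-name-resolve = 1            # avoid reverse DNS on every connection
    performance_schema = ON          # re-enable PFS for better diagnostics
    join_buffer_size = 1M            # or, better, add indexes to the unindexed JOINs
    tmp_table_size = 64M             # keep equal to max_heap_table_size
    max_heap_table_size = 64M
    table_definition_cache = 2000    # > 1928 tables (or -1 to autosize)
    innodb_buffer_pool_size = 3G     # >= the 2.7G of InnoDB data, if RAM allows
    innodb_log_file_size = 384M      # 2 x 384M = 25% of a 3G buffer pool
    innodb_buffer_pool_instances = 1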

I’ve tried these suggestions, along with seemingly every other possible thing (caching, optimizing tables, etc.), and queries still get stuck over and over, leaving Apache connections stuck in a ‘sending reply’ status and causing websites not to load. What else can I try? Should I upgrade to a more powerful server?

Ubuntu 18.04 Qualcomm Atheros QCA6174 WiFi constantly stops

I have been using Ubuntu on this machine for 4 years and have never had problems with the WiFi.

I installed 18.04 when it came out and everything was ok until last week.

Now the WiFi constantly stops, and the behaviour is worse when I try to do something involving downloading.

Kernel version is:

    uname -r
    5.3.5-050305-generic

I have been updating my kernel to the newest one since the beginning (4 years ago).

I tried the solution proposed in this launchpad bug report. I upgraded to the latest firmware for hw3.0 – the 00140 version with a Z at the end (is it significant that some of the versions have a Z and some don’t?) – and also upgraded board-2.bin, but my problems were not resolved.

At this stage using the Internet is almost impossible as the WiFi drops after every few page loads.

How to disable MySQL server from constantly running in the background?

This website explains MySQL server installation on Ubuntu 18.04: basically, you issue the command sudo apt install mysql-server. The same website then explains:

Once the installation is completed, the MySQL service will start automatically. To check whether the MySQL server is running, type: sudo systemctl status mysql

Does this imply that once I install mysql-server, it will always be running in the background unless I explicitly kill the process after each reboot?

I want to play around with mysql-server occasionally, but don’t want it constantly running in the background.
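For what it’s worth, systemd covers exactly this: stopping the running server and preventing it from starting at boot. On Ubuntu 18.04 the unit is named mysql, so the usual sequence would be (standard systemctl usage, not specific to MySQL):

    # stop the server that is currently running
    sudo systemctl stop mysql

    # keep it from starting automatically on boot
    sudo systemctl disable mysql

    # start it manually whenever you want to experiment
    sudo systemctl start mysql

Disabling the unit survives reboots, so there is no need to kill the process each time.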