Order of operations for Aurora property Lasers

Laser weapons cannot do damage to invisible creatures. The Aurora property negates invisibility for 1 minute on a hit. At the moment I am assuming that a) you can still make attacks against an invisible target with a laser and b) it will still apply non-damage effects (correct me if these are incorrect). This would mean that a laser with aurora (such as via a Mechanic’s prototype weapon) would still negate invisibility for future shots.

If I fire a laser weapon with the Aurora property, does the (lack of) damage occur before their invisibility is removed, or after?

Relevant rules text:

Laser weapons emit highly focused beams of light that deal fire damage. These beams can pass through glass and other transparent physical barriers, dealing damage to such barriers as they pass through. Barriers of energy or magical force block lasers. Invisible creatures don’t take damage from lasers, as the beams pass through them harmlessly. Fog, smoke, and other clouds provide both cover and concealment from laser attacks. Lasers can penetrate darkness, but they don’t provide any illumination.

When an aurora weapon strikes a target, the creature glows with a soft luminescence for 1 minute. This negates invisibility effects and makes it impossible for the target to gain concealment from or hide in areas of shadow or darkness.

Aurora PostgreSQL database using a slower query plan than a normal PostgreSQL database for an identical query?

Following the migration of an application and its database from a classical PostgreSQL database to an Amazon Aurora RDS PostgreSQL database (both on version 9.6), we have found that a specific query runs much slower — around 10 times slower — on Aurora than on PostgreSQL.

Both databases have the same configuration, both for the hardware and for the pg_conf.

The query itself is fairly simple. It is generated by our backend, which is written in Java and uses jOOQ to build the queries:

with "all_acp_ids"("acp_id") as (     select acp_id from temp_table_de3398bacb6c4e8ca8b37be227eac089 )  select distinct "public"."f1_folio_milestones"."acp_id",      coalesce("public"."sa_milestone_overrides"."team",      "public"."f1_folio_milestones"."team_responsible")  from "public"."f1_folio_milestones"  left outer join      "public"."sa_milestone_overrides" on (         "public"."f1_folio_milestones"."milestone" = "public"."sa_milestone_overrides"."milestone"          and "public"."f1_folio_milestones"."view" = "public"."sa_milestone_overrides"."view"          and "public"."f1_folio_milestones"."acp_id" = "public"."sa_milestone_overrides"."acp_id" ) where "public"."f1_folio_milestones"."acp_id" in (     select "all_acp_ids"."acp_id" from "all_acp_ids" ) 

Here, temp_table_de3398bacb6c4e8ca8b37be227eac089 is a single-column table, while f1_folio_milestones (17 million rows) and sa_milestone_overrides (around 1 million rows) are similarly designed tables with indexes on all of the columns used in the LEFT OUTER JOIN.

When we run it on the normal PostgreSQL database, it generates the following query plan:

Unique  (cost=4802622.20..4868822.51 rows=8826708 width=43) (actual time=483.928..483.930 rows=1 loops=1)
  CTE all_acp_ids
    ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.004..0.005 rows=1 loops=1)
  ->  Sort  (cost=4802598.60..4824665.37 rows=8826708 width=43) (actual time=483.927..483.927 rows=4 loops=1)
        Sort Key: f1_folio_milestones.acp_id, (COALESCE(sa_milestone_overrides.team, f1_folio_milestones.team_responsible))
        Sort Method: quicksort  Memory: 25kB
        ->  Hash Left Join  (cost=46051.06..3590338.34 rows=8826708 width=43) (actual time=483.905..483.917 rows=4 loops=1)
              Hash Cond: ((f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.view = (sa_milestone_overrides.view)::text) AND (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text))
              ->  Nested Loop  (cost=31.16..2572.60 rows=8826708 width=37) (actual time=0.029..0.038 rows=4 loops=1)
                    ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.009..0.010 rows=1 loops=1)
                          Group Key: all_acp_ids.acp_id
                          ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.006..0.007 rows=1 loops=1)
                    ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..12.65 rows=5 width=37) (actual time=0.018..0.025 rows=4 loops=1)
                          Index Cond: (acp_id = all_acp_ids.acp_id)
              ->  Hash  (cost=28726.78..28726.78 rows=988178 width=34) (actual time=480.423..480.423 rows=987355 loops=1)
                    Buckets: 1048576  Batches: 1  Memory Usage: 72580kB
                    ->  Seq Scan on sa_milestone_overrides  (cost=0.00..28726.78 rows=988178 width=34) (actual time=0.004..189.641 rows=987355 loops=1)
Planning time: 3.561 ms
Execution time: 489.223 ms

And it goes pretty smoothly as one can see — less than a second for the query. But on the Aurora instance, this happens:

Unique  (cost=2632927.29..2699194.83 rows=8835672 width=43) (actual time=4577.348..4577.350 rows=1 loops=1)
  CTE all_acp_ids
    ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.001..0.001 rows=1 loops=1)
  ->  Sort  (cost=2632903.69..2654992.87 rows=8835672 width=43) (actual time=4577.348..4577.348 rows=4 loops=1)
        Sort Key: f1_folio_milestones.acp_id, (COALESCE(sa_milestone_overrides.team, f1_folio_milestones.team_responsible))
        Sort Method: quicksort  Memory: 25kB
        ->  Merge Left Join  (cost=1321097.58..1419347.08 rows=8835672 width=43) (actual time=4488.369..4577.330 rows=4 loops=1)
              Merge Cond: ((f1_folio_milestones.view = (sa_milestone_overrides.view)::text) AND (f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text))
              ->  Sort  (cost=1194151.06..1216240.24 rows=8835672 width=37) (actual time=0.039..0.040 rows=4 loops=1)
                    Sort Key: f1_folio_milestones.view, f1_folio_milestones.milestone, f1_folio_milestones.acp_id
                    Sort Method: quicksort  Memory: 25kB
                    ->  Nested Loop  (cost=31.16..2166.95 rows=8835672 width=37) (actual time=0.022..0.028 rows=4 loops=1)
                          ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.006..0.006 rows=1 loops=1)
                                Group Key: all_acp_ids.acp_id
                                ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.003..0.004 rows=1 loops=1)
                          ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..10.63 rows=4 width=37) (actual time=0.011..0.015 rows=4 loops=1)
                                Index Cond: (acp_id = all_acp_ids.acp_id)
              ->  Sort  (cost=126946.52..129413.75 rows=986892 width=34) (actual time=4462.727..4526.822 rows=448136 loops=1)
                    Sort Key: sa_milestone_overrides.view, sa_milestone_overrides.milestone, sa_milestone_overrides.acp_id
                    Sort Method: quicksort  Memory: 106092kB
                    ->  Seq Scan on sa_milestone_overrides  (cost=0.00..28688.92 rows=986892 width=34) (actual time=0.003..164.348 rows=986867 loops=1)
Planning time: 1.394 ms
Execution time: 4583.295 ms

The Aurora plan does have a lower total cost, but it takes almost 10 times as long as before!

Disabling merge joins makes Aurora revert to a hash join, which gives the expected execution time — but permanently disabling it is not an option. Curiously though, disabling nested loops gives an even better result while still using a merge join…

Unique  (cost=3610230.74..3676431.05 rows=8826708 width=43) (actual time=2.465..2.466 rows=1 loops=1)
  CTE all_acp_ids
    ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.004..0.004 rows=1 loops=1)
  ->  Sort  (cost=3610207.14..3632273.91 rows=8826708 width=43) (actual time=2.464..2.464 rows=4 loops=1)
        Sort Key: f1_folio_milestones.acp_id, (COALESCE(sa_milestone_overrides.team, f1_folio_milestones.team_responsible))
        Sort Method: quicksort  Memory: 25kB
        ->  Merge Left Join  (cost=59.48..2397946.87 rows=8826708 width=43) (actual time=2.450..2.455 rows=4 loops=1)
              Merge Cond: (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text)
              Join Filter: ((f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.view = (sa_milestone_overrides.view)::text))
              ->  Merge Join  (cost=40.81..2267461.88 rows=8826708 width=37) (actual time=2.312..2.317 rows=4 loops=1)
                    Merge Cond: (f1_folio_milestones.acp_id = all_acp_ids.acp_id)
                    ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..2223273.29 rows=17653416 width=37) (actual time=0.020..2.020 rows=1952 loops=1)
                    ->  Sort  (cost=40.24..40.74 rows=200 width=32) (actual time=0.011..0.012 rows=1 loops=1)
                          Sort Key: all_acp_ids.acp_id
                          Sort Method: quicksort  Memory: 25kB
                          ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.008..0.008 rows=1 loops=1)
                                Group Key: all_acp_ids.acp_id
                                ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.005..0.005 rows=1 loops=1)
              ->  Materialize  (cost=0.42..62167.38 rows=987968 width=34) (actual time=0.021..0.101 rows=199 loops=1)
                    ->  Index Scan using sa_milestone_overrides_acp_id_index on sa_milestone_overrides  (cost=0.42..59697.46 rows=987968 width=34) (actual time=0.019..0.078 rows=199 loops=1)
Planning time: 5.500 ms
Execution time: 2.516 ms

We have asked the AWS support team and they are still looking into the issue, but we are wondering what could cause it. What could explain such a difference in behaviour?

While looking at some of the documentation for the database, I read that Aurora favors cost over time — and hence it uses the query plan that has the lowest cost.

But as we can see, it’s far from being optimal given its response time… Is there a threshold or a setting that could make the database use a more expensive — but faster — query plan?
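In the meantime, the only workaround we have is to scope the planner override to the one problematic statement instead of disabling merge joins cluster-wide. A rough sketch of what we mean (shown with psycopg2 purely for illustration; our real backend goes through jOOQ/JDBC, and the DSN and query below are placeholders):

import psycopg2

QUERY = "..."  # the jOOQ-generated query shown above

conn = psycopg2.connect("host=aurora-endpoint dbname=mydb user=myuser")  # placeholder DSN
with conn:
    with conn.cursor() as cur:
        # SET LOCAL only lasts for the current transaction, so merge joins
        # stay enabled for every other query on the cluster.
        cur.execute("SET LOCAL enable_mergejoin = off")
        cur.execute(QUERY)
        rows = cur.fetchall()

That works, but it feels like a band-aid rather than an answer to why the planner behaves differently on Aurora.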

Aurora Sensor Project

We are working on a project in which we implement a sensor library for aurora events. Here is part of our code, written in Python 3. We would like a review of the two classes below.

import logging
from datetime import timedelta  # needed for MIN_TIME_BETWEEN_UPDATES

from aiohttp.hdrs import USER_AGENT
import requests
import voluptuous as vol

from homeassistant.components.binary_sensor import (
    PLATFORM_SCHEMA, BinarySensorDevice)
from homeassistant.const import CONF_NAME, ATTR_ATTRIBUTION
import homeassistant.helpers.config_validation as cv
from homeassistant.util import Throttle

Logger = logging.getLogger(__name__)

ATTRIBUTION = "National Oceanic and Atmospheric Administration"

DEFAULT_DEVICE_CLASS = 'visible'

HA_USER_AGENT = "Home Assistant Aurora"

MIN_TIME_BETWEEN_UPDATES = timedelta(minutes=10)

URL = "http://services.swpc.noaa.gov/text/aurora-nowcast-map.txt"


class AuroraSensor(BinarySensorDevice):
    # Implementation of an aurora sensor.

    def __init__(self, DATA, name):
        # Initialize the sensor.
        self.aurora_data = DATA
        self._name = name

    @property
    def name(self):
        # Return the name of the sensor.
        return '{}'.format(self._name)

    @property
    def is_on(self):
        # Return true if the aurora is visible.
        return self.aurora_data._is_visible if self.aurora_data else False

    @property
    def DeviceClass(self):
        # Return the class of this device.
        return DEFAULT_DEVICE_CLASS

    @property
    def device_state_attributes(self):
        # Return the state attributes.
        attrs = {}

        if self.aurora_data:
            attrs['visibility_level'] = self.aurora_data.visibility_level
            attrs['message'] = self.aurora_data._is_visible_text
            attrs[ATTR_ATTRIBUTION] = ATTRIBUTION
            return attrs

        else:  # if no data exists
            return attrs

    def update(self):
        # Get the latest data from the Aurora API and update the state.
        self.aurora_data.update()


class AuroraData:
    # Get the aurora forecast.

    def __init__(self, latitude, longitude, threshold):
        # Initialize the data object.
        self.latitude = latitude
        self.longitude = longitude
        self.number_of_latitude_intervals = 513
        self.number_of_longitude_intervals = 1024
        self.headers = {USER_AGENT: HA_USER_AGENT}
        self.threshold = int(threshold)
        self._is_visible = None
        self._is_visible_text = None
        self.visibility_level = None

    @Throttle(MIN_TIME_BETWEEN_UPDATES)
    def update(self):
        # Get the latest data from the aurora forecast service.
        try:
            self.visibility_level = self.get_aurora_forecast()
            if int(self.visibility_level) > self.threshold:
                self._is_visible = True
                self._is_visible_text = "visible!"
            else:
                self._is_visible = False
                self._is_visible_text = "nothing's out"

        except requests.exceptions.HTTPError as error:
            Logger.error(
                "Connection to aurora forecast service failed: %s", error)
            return False

    def get_aurora_forecast(self):
        # Get forecast data and parse it for the given longitude/latitude.
        raw_data = requests.get(URL, headers=self.headers, timeout=5).text
        forecast_table = [
            row.strip(" ").split("   ")
            for row in raw_data.split("\n")
            if not row.startswith("#")
        ]

        # Convert latitude and longitude to indices into the forecast table
        converted_latitude = round((self.latitude / 180)
                                   * self.number_of_latitude_intervals)
        converted_longitude = round((self.longitude / 360)
                                    * self.number_of_longitude_intervals)

        return forecast_table[converted_latitude][converted_longitude]
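For context, this is roughly how we wire the two classes together when testing them by hand (the coordinates and threshold below are made-up values; in Home Assistant proper the wiring would normally happen in a setup_platform function):

# Rough usage sketch with made-up coordinates and threshold.
aurora_data = AuroraData(latitude=64.13, longitude=-21.82, threshold=50)
sensor = AuroraSensor(aurora_data, "Aurora Visibility")

sensor.update()                          # delegates to AuroraData.update(), throttled to 10 minutes
print(sensor.is_on)                      # True if the forecast value exceeds the threshold
print(sensor.device_state_attributes)    # visibility_level, message and attribution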

How to schedule export and import of Aurora snapshots across AWS accounts

I am trying to build the following process, to be run on a schedule:

  1. Export the content of an Aurora database from AWS account A
  2. Transfer that dump to another AWS account B
  3. Import the dump into another Aurora database in account B

Here are my constraints:

  • I need to be able to put the system in place for multiple databases with different target DBs
  • The import DB might not have the same name as the export DB
  • Both databases are in the same region

I looked online quite a bit and I was able to find the following article: Automating Cross-Region and Cross-Account Snapshot Copies with the Snapshot Tool for Amazon Aurora

However, it doesn’t cover automatically restoring the backup, and my attempts at finding a way to do so (using a Lambda, for example) have not yielded great results.

Is there a way to do so?
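For the restore step specifically, what I had in mind was a Lambda in account B along these lines. This is only a rough sketch: the identifiers, engine and instance class are placeholders, and it assumes the snapshot has already been shared with and copied into account B.

import boto3

rds = boto3.client("rds")

def handler(event, context):
    # Placeholders: the snapshot and target cluster identifiers would come from the event.
    snapshot_id = event["snapshot_id"]
    target_cluster = event.get("target_cluster", "restored-aurora-cluster")

    # Restore a new cluster from the already-copied snapshot.
    # The Engine value must match the source cluster (placeholder here).
    rds.restore_db_cluster_from_snapshot(
        DBClusterIdentifier=target_cluster,
        SnapshotIdentifier=snapshot_id,
        Engine="aurora-postgresql",
    )

    # A restored cluster has no instances, so at least one must be created.
    rds.create_db_instance(
        DBInstanceIdentifier=target_cluster + "-instance-1",
        DBClusterIdentifier=target_cluster,
        DBInstanceClass="db.r5.large",
        Engine="aurora-postgresql",
    )

I am not sure this is the right approach, especially for handling the different database names between the export and import sides.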

Thanks

RDS Aurora Serverless “Parameter Groups”

I have an RDS Aurora Serverless MySQL cluster, and I am trying to change a MySQL setting (connect_timeout). Normally, you would use a Parameter Group to set the value on the DB instance. But, since this is serverless, the instances are all managed by AWS, so it seems I can only configure the cluster.

Is there a way to set the Parameter Group that is used by the instances that AWS creates?
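In case it helps frame the question, this is how I would normally try to apply the setting through a cluster parameter group with boto3; whether the serverless-managed instances actually pick up an instance-level parameter like connect_timeout this way is exactly what I am unsure about. All names and the parameter group family below are placeholders.

import boto3

rds = boto3.client("rds")

# Create a cluster-level parameter group (family must match the cluster engine).
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-serverless-params",   # placeholder name
    DBParameterGroupFamily="aurora5.6",                    # placeholder family
    Description="Custom parameters for the serverless cluster",
)

# Set connect_timeout at the cluster level.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-serverless-params",
    Parameters=[{
        "ParameterName": "connect_timeout",
        "ParameterValue": "20",
        "ApplyMethod": "immediate",
    }],
)

# Attach the parameter group to the cluster.
rds.modify_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",           # placeholder identifier
    DBClusterParameterGroupName="my-serverless-params",
)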

PDOException: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded WITH AWS Aurora

We are using Drupal 7.61 and Aurora 5.6.10. Recently, we migrated our database to AWS Aurora with 16 CPUs and 122 GB of RAM. Our website doesn’t have that much traffic. Before, we were using MySQL with 4 CPUs and 30 GB of RAM, and it worked perfectly. However, we have been getting lots of locks since we migrated our database.

Here is some of our configuration:
connect_timeout 60
wait_timeout 300
innodb_lock_wait_timeout 60
max_allowed_packet 1073741824
CPU usage around 8%

We keep getting this error when a customer tries to check out:

PDOException: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction: SELECT revision.order_number AS order_number, revision.revision_id AS revision_id, revision.revision_uid AS revision_uid, revision.mail AS mail, revision.status AS status, revision.log AS log, revision.revision_timestamp AS revision_timestamp, revision.revision_hostname AS revision_hostname, revision.data AS data, base.order_id AS order_id, base.type AS type, base.uid AS uid, base.created AS created, base.changed AS changed, base.placed AS placed, base.hostname AS hostname FROM {commerce_order} base INNER JOIN {commerce_order_revision} revision ON revision.revision_id = base.revision_id WHERE (base.order_id IN (:db_condition_placeholder_0)) FOR UPDATE; Array ( [:db_condition_placeholder_0] => 123234) in DrupalDefaultEntityController->load() (line 198 of /var/www/html/includes/entity.inc).
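To try to see what is blocking the checkout transaction, I have been querying the InnoDB lock tables when the error appears. This is only a rough diagnostic sketch (connection details are placeholders, and it assumes the information_schema InnoDB tables available in the 5.6-compatible engine):

import pymysql

# Placeholder connection details.
conn = pymysql.connect(host="aurora-endpoint", user="admin",
                       password="secret", database="drupal")

DIAGNOSTIC_SQL = """
    SELECT r.trx_id              AS waiting_trx,
           r.trx_mysql_thread_id AS waiting_thread,
           r.trx_query           AS waiting_query,
           b.trx_id              AS blocking_trx,
           b.trx_mysql_thread_id AS blocking_thread,
           b.trx_query           AS blocking_query
    FROM information_schema.innodb_lock_waits w
    JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
    JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
"""

with conn.cursor() as cur:
    # Show which transaction holds the lock the checkout query is waiting on.
    cur.execute(DIAGNOSTIC_SQL)
    for row in cur.fetchall():
        print(row)
conn.close()

So far this has not pointed us to an obvious culprit, which is why we suspect something about the Aurora migration itself.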

Strongly consistent backup of Mongo with Aurora at the exact same time

At my employer, we use an AWS Aurora database and MongoDB Atlas-hosted databases together in one application.

So we have nail-biting scenarios where records are created in both, and typically MongoDB points to records in Aurora that it needs. If there is a sync issue, for example the data is in Mongo but not in Aurora or vice versa, this translates to a terrible UX with permission issues.

So my question is about backups. These systems must be snapshotted at exactly the same point in time. I am thinking ideally without any client usage, at an early hour like 2 AM, i.e. the application is brought down for a few minutes while the snapshots are being made.

Actually, I am not sure how to limit client access while still being able to run mongodump or take an RDS snapshot. Am I missing a better approach to backing up these two data stores together?

Perhaps I can take a strongly consistent backup at a certain point in time?

You might be thinking: what happens if you take a snapshot at 2 AM and lose everything at 10 AM? That is an acceptable RPO for us.
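For concreteness, the kind of coordinated 2 AM job I am picturing looks roughly like this. It is only a sketch: it assumes client traffic has already been blocked somehow, the cluster identifier and Mongo URI are placeholders, and the two backups are started back to back rather than at literally the same instant, which still leaves a small window.

import subprocess
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")
timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M")

# 1. Kick off the Aurora snapshot (runs asynchronously on the RDS side).
rds.create_db_cluster_snapshot(
    DBClusterIdentifier="my-aurora-cluster",        # placeholder identifier
    DBClusterSnapshotIdentifier="nightly-" + timestamp,
)

# 2. Dump MongoDB at (almost) the same moment via the Atlas connection string.
subprocess.run(
    ["mongodump",
     "--uri", "mongodb+srv://user:pass@cluster.example.net/appdb",  # placeholder URI
     "--out", "/backups/mongo-" + timestamp],
    check=True,
)

Is there a more robust way to get the two stores to agree on a single point in time than this?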