How to animate two or more unrelated properties on a sprite GameObject using animation layers?

I have a sprite in Unity 2019.3. It is animated like so (with provided mockups):

  • Its base animation is just one frame; i.e. it has the same Sprite whether idle or moving.
  • When the player is invincible, the ship flashes in and out. This is implemented by setting the alpha to 0 and 1 and back.
  • The player can acquire up to three levels of shields. This is implemented as a child GameObject with its own Sprite (but not its own Animator).
  • When a player acquires a power-up, they briefly flash colors. This can happen while they’re shielded, invincible, or both. This is implemented with a custom palette-swap shader, not by replacing the Sprite.

These animations are all independent of one another, and can all occur at any time. For example, when a shielded player takes a hit their shield downgrades but they also gain brief invincibility. It would look like this:

A single state machine to account for these possibilities (and others, depending on whether or not I add other power-ups or mechanics) would be prohibitively complex. So I’m going with animation layers.

I was hoping that it would be as simple as adding separate animation layers with their own state machines, then marking each layer as additive. Nope. I have no idea what I’m doing.

How can I use Mecanim to animate my sprite as I have previously described?

How can I use ThreadingLayer[] to perform operations with layers of different dimensions?

Is there a way that I can achieve subtraction between a scalar and a vector within NetGraph[], similar to what happens when I perform {1, 2, 3} - 1 to get {0, 1, 2}? For my purposes, this needs to be done within NetGraph[] for timing and functional reasons. Part of the trouble is that the vector will also be changing in size. Is there already a NetGraph[] layer that has this functionality?

For those who want a little more background, I am trying to create a filter within my network as part of the loss function, where the data being fed into the network spans just two days and each day (of the week) is represented as a value between 0 and 1. So Monday is represented as 0.0, Tuesday as 0.2, and so forth.

I am using SequenceLastLayer[] to get the time value of the last bar of data, and am attempting to combine it, inside ThreadingLayer[], with a slightly modified Tanh[] function and with the other bars' time values, which will look similar to {.0, .0, .0, ..., .2, .2, .2}. So the scalar, in this example, would be 0.2, the vector would be {.0, .0, .0, ..., .2, .2, .2}, and the operation would be:

`{.0, .0, .0, ..., .2, .2, .2} - 0.2` 

This seems like it should be relatively simple, but I have not yet been able to figure it out. Does anyone know what I can do differently?
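For clarity, here is the computation I'm after, sketched outside the net in plain Python (the real thing has to happen inside NetGraph[], via SequenceLastLayer[] and ThreadingLayer[]):

```python
def broadcast_subtract(seq):
    # Take the last time value (what SequenceLastLayer[] extracts)
    # and subtract it from every element of the sequence -- the
    # scalar-minus-vector broadcast I want ThreadingLayer[] to perform.
    last = seq[-1]
    return [t - last for t in seq]

print(broadcast_subtract([0.0, 0.0, 0.0, 0.2, 0.2, 0.2]))
# [-0.2, -0.2, -0.2, 0.0, 0.0, 0.0]
```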

Here is the code so far:

net = NetGraph[
  <|
    "thread" -> ThreadingLayer[#1*#2 &],
    "getLastLayer" -> SequenceLastLayer[]
  |>,
  {
    NetPort["timeInput"] -> {"getLastLayer", "thread"},
    "getLastLayer" -> "thread"
  }
]


And the code to run it would be similar to:

net[<|"timeInput" -> {0.0, 0.0, 0.0, ..., 0.2, 0.2, 0.2}|>] 

Must the layers of the Prismatic Wall spell be destroyed in order?

You’d think this is answered as written on page 269 of the 5e PHB:

The wall can be destroyed, also one layer at a time, in order from red to violet, by means specific to each layer.

However, I know I’ve seen multiple people around the internet discuss strategies of intentionally destroying certain layers of their own wall (the red, orange, or indigo layers, say), then holing up inside it and shooting or casting spells through the wall without those layers blocking them.

It also raises a question like “If a fireball hits the wall as part of a battle and deals 25 damage, does it destroy the blue layer individually?”

I just want some validation that you really do have to hack each layer down from red to violet as opposed to losing middle layers intentionally or unintentionally as part of the overall battle.

Does Antimagic Field suppress all layers of True Polymorph simultaneously?

Imagine that you have cast True Polymorph to turn a medium object into a Helmed Horror with Spell Immunity to Antimagic Field. If you were to then True Polymorph that Helmed Horror into something else, it would lose its Spell Immunity, as its entire statblock is changed. However, what exactly would happen if the newly True Polymorphed creature walked into an Antimagic Field?

I see two plausible outcomes:

  1. Both layers of True Polymorph are suppressed simultaneously and the creature is immediately turned back into a medium object.
  2. The most recent layer of True Polymorph is suppressed, at which point the Helmed Horror’s Spell Immunity kicks in and prevents the next layer from being suppressed.

I think each of these interpretations has a decent argument:

  1. This is how Dispel Magic works. If the Helmed Horror also had Spell Immunity to Dispel Magic, the underlying layer of True Polymorph could still be dispelled, because Dispel Magic reads (emphasis mine):

For each spell of 4th level or higher on the target, make an ability check using your spellcasting ability. The DC equals 10 + the spell’s level. On a successful check, the spell ends.

Both layers of True Polymorph are on the target, and so both can be dispelled, and there is no reason to suspect they aren’t dispelled simultaneously (before the Spell Immunity could ever kick in).

  2. Antimagic Field reads (emphasis mine):

Any active spell or other magical effect on a creature or an object in the sphere is suppressed while the creature or object is in it.

And while there are two layers of True Polymorph on the creature, it seems reasonable to assume that only the most recent is “active”, due to the rules for Combining Magical Effects:

The effects of the same spell cast multiple times don’t combine, however. Instead, the most potent effect–such as the highest bonus–from those castings applies while their durations overlap, or the most recent effect applies if the castings are equally potent and their durations overlap.

This (plausibly, I think) could be interpreted to mean that the most recent layer of True Polymorph is suppressed first, at which point Antimagic Field would suppress the next layer if not for the Spell Immunity.

Is one of these two interpretations unambiguously supported by Rules as Written?

Training of two 3×3 convolution layers vs training one 5×5 convolution layer

I’m not 100% sure this is the right Stack Exchange site; please feel free to redirect me to another one.

I know that two stacked 3×3 convolution layers can be equivalent to one 5×5 convolution layer (they have the same 5×5 receptive field).

I also know that in many cases (maybe all of them), the training of both options is equivalent (in terms of result, not speed or optimisation).

Suppose I have a dataset on which I KNOW that no information whatsoever can be derived from any 3×3 window, but it can be from a 5×5 window.

If I wanted to, I could train a network with a 5×5 convolution layer, then manually transform it into two 3×3 convolution layers.
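As a sanity check on that transformation, here is a small pure-Python sketch (no framework assumed) showing that two stacked 3×3 convolutions with no nonlinearity between them collapse into a single 5×5 convolution whose kernel is the full convolution of the two 3×3 kernels; with a nonlinearity in between, the equivalence no longer holds exactly:

```python
def corr2d(x, k):
    # valid-mode 2-D cross-correlation (what DL frameworks call "convolution")
    n, m = len(x), len(x[0])
    kn, km = len(k), len(k[0])
    return [[sum(x[i + a][j + b] * k[a][b]
                 for a in range(kn) for b in range(km))
             for j in range(m - km + 1)]
            for i in range(n - kn + 1)]

def compose(k1, k2):
    # full 2-D convolution of the two kernels: the single kernel
    # equivalent to applying k1 then k2 (indices add, so sizes add)
    kn = len(k1) + len(k2) - 1
    out = [[0.0] * kn for _ in range(kn)]
    for a in range(len(k1)):
        for b in range(len(k1[0])):
            for c in range(len(k2)):
                for d in range(len(k2[0])):
                    out[a + c][b + d] += k1[a][b] * k2[c][d]
    return out

# two arbitrary 3x3 kernels and a 6x6 input
k1 = [[1, 0, -1], [2, 0, -2], [1, 0, -1]]
k2 = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
x = [[(i * 7 + j * 3) % 5 for j in range(6)] for i in range(6)]

two_step = corr2d(corr2d(x, k1), k2)   # 3x3 then 3x3 -> 2x2 output
one_step = corr2d(x, compose(k1, k2))  # single 5x5 -> same 2x2 output
print(two_step == one_step)  # True (only because there is no nonlinearity)
```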

My question is: could I train a network directly with two 3×3 convolution layers?

My reasoning: what could the first convolution learn apart from overfitting the training dataset? Supposedly nothing. So the only layer that could learn some generalization of my data is the second one. But then, my problem sums up to comparing ONE 3×3 convolution versus ONE 5×5 convolution.

In the general case, the 5×5 convolution would perform better. But with my restrictions on the dataset, maybe it becomes equivalent to the 3×3 convolution? In which case I could train the 3×3 convolution.

Thank you!

Invisible layers in Sketch when using Material UI as the basis for UI component library

I’m trying to use the Material UI Theme Editor which spits out a Sketch file as the basis for our component library. I’m then exporting the components to Zeplin for developers to get specs on.

However, I’ve been unable to get the ripples to show up in Zeplin, and they don’t even show up when a UI component is created from the base symbol in another part of the Material UI file (here’s ours: https://www.dropbox.com/s/gf9s25nvp3axko9/Baseline.sketch).

This makes no sense to me…how could the “RIPPLE” that I can see so clearly on the base component disappear from this instantiation of it? I think the fact that I can’t see it in Sketch is the root of the trouble, and why I can’t see it in Zeplin.

Base Component

Instantiation of Component in Design

Maintaining consistency with loosely coupled business and data layers

Take the following sequence of events:

  1. Business layer requests data x and y from data layer.
  2. Data layer returns version 1 of x and y.
  3. Business layer starts performing logic based on data x and y.
  4. Another (concurrent) operation updates data x to version 2.
  5. Business layer instructs data layer to save new data z based on logic at step 3.

The data z saved in step 5 has now been saved based on inconsistent, or “stale” data. The data x became stale during the business transaction. Say for example, data x holds a flag indicating whether the creation of z data is permitted.

In past I’ve seen this issue dealt with by:

  1. Tightly coupling business logic with data operations (for example, business logic in RDBMS stored procedures, or application code co-mingling logic with persistence-aware data operations); or
  2. Ignoring the issue because likelihood × impact is too low to be concerned with.

My question: is there a viable 3rd option? Something that ensures consistency while maintaining a healthy separation of concerns between the business and data layers.

Is this something that ORMs address? I know they’ll deal with optimistic concurrency for data writes, but I’m not wanting to update data x, just ensure that nothing else has updated it during the course of the business transaction. Or more specifically, that the “is z creation permitted?” flag hasn’t been updated.

For any platform-specific answers or comments, I’m working with C# and Postgres.
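For what it's worth, one common shape for a third option is an optimistic validity check at commit time: the data layer hands back a version token when it reads x, and the write of z is rejected, inside one transaction, if x's version has moved since. A toy in-memory sketch in Python (all names hypothetical; with Postgres this could be a check on a version column inside the same transaction as the insert of z):

```python
class StaleDataError(Exception):
    pass

class VersionedStore:
    """Toy in-memory data layer; each key carries a version counter."""
    def __init__(self):
        self._data = {}  # key -> (value, version)

    def put(self, key, value):
        _, version = self._data.get(key, (None, 0))
        self._data[key] = (value, version + 1)

    def get(self, key):
        return self._data[key]  # returns (value, version)

    def save_if_unchanged(self, key, value, watched):
        # 'watched' maps keys -> the versions the business layer read.
        # Reject the write if any watched key has moved on since then.
        for k, v in watched.items():
            if self._data[k][1] != v:
                raise StaleDataError(f"{k} changed during the transaction")
        self.put(key, value)

store = VersionedStore()
store.put("x", {"z_creation_permitted": True})

x_value, x_version = store.get("x")              # steps 1-2: read x (and its version)
# ... business logic runs (step 3) ...
store.put("x", {"z_creation_permitted": False})  # step 4: concurrent update to x

try:
    store.save_if_unchanged("z", "derived from x", {"x": x_version})
except StaleDataError as e:
    print("save rejected:", e)                   # step 5 fails instead of saving stale z
```

The business layer stays persistence-ignorant (it only carries opaque version tokens), and the data layer stays logic-ignorant (it only compares versions), so the separation of concerns survives.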

Plotting relief data from tif file with other layers using ggplot

I’m (very) new to R. I’m trying to produce a map of the world with the following features:

  1. ISEA3H grid with a spacing of 6 miles between hexagons. (I don’t have enough memory to generate one with 6-mile hexagons; the closest I’ve gotten is 24-mile hexagons.)
  2. No political borders (i.e., only “natural” borders, like coastlines, lakes, rivers, etc.). I’ve got this down as well.
  3. Elevation/relief data. This is where I’m having trouble.
  4. The cell number printed inside the cells. I haven’t gotten this far yet.

Specifically, the problem I’m having is I can’t seem to “mesh” the data from my tif file with my map of the world.

I’m using the dggridR package, which produces an ISEA3H grid of the world with variable spacing. I’ve gotten pretty far.

Here’s my code thus far:

library(rgdal)
library(sp)
library(dggridR)
library(mapdata)
library(rnaturalearth)
library(rnaturalearthdata)
library(raster)
library(ggplot2)

# I've increased the spacing to 500 miles so the map doesn't take so long to render between iterations
dggs <- dgconstruct(projection = "ISEA", aperture = 3, topology = "HEXAGON",
                    precision = 7, spacing = 500, metric = FALSE)

# Save a fairly large image at the end of it
jpeg("mymap.jpg", width = 9000, height = 6000, quality = 95)

# I'm using the physical data from https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/10m_physical.zip
# and the raster data from https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/raster/NE2_HR_LC_SR_W.zip
world  <- readOGR("/home/mario/Downloads/data/10m_physical", "ne_10m_land")
rivers <- readOGR("/home/mario/Downloads/data/10m_physical", "ne_10m_rivers_lake_centerlines")
ocean  <- readOGR("/home/mario/Downloads/data/10m_physical", "ne_10m_ocean")
lakes  <- readOGR("/home/mario/Downloads/data/10m_physical", "ne_10m_lakes")
coast  <- readOGR("/home/mario/Downloads/data/10m_physical", "ne_10m_coastline")

rel      <- raster("/home/mario/Downloads/data/NE2_HR_LC_SR_W/NE2_HR_LC_SR_W.tif")
rel_spdf <- as(rel, "SpatialPixelsDataFrame")
rel      <- as.data.frame(rel_spdf)

# use grid of the entire earth
sa_grid <- dgearthgrid(dggs, frame = TRUE, wrapcells = TRUE)

# actually plot the data
p <- ggplot() +
  # what should be the relief layer
  geom_tile(data = rel, aes(x = "x", y = "y")) +
  # the world
  geom_polygon(data = world, aes(x = long, y = lat, group = group), fill = NA, color = "black") +
  # lakes
  geom_polygon(data = lakes, aes(x = long, y = lat, group = group), fill = '#ADD8E6') +
  # more lakes
  geom_path(data = lakes, aes(x = long, y = lat, group = group), color = 'blue') +
  # rivers
  geom_path(data = rivers, aes(x = long, y = lat, group = group), alpha = 0.4, color = "blue") +
  # coastline
  geom_path(data = coast, aes(x = long, y = lat, group = group), color = 'blue') +
  # hexagonal grid
  geom_polygon(data = sa_grid, aes(x = long, y = lat, group = group), fill = "white", alpha = 0.4) +
  # hexagonal grid
  geom_path(data = sa_grid, aes(x = long, y = lat, group = group), alpha = 0.4, color = "grey") +
  # some necessary r code that I don't understand
  coord_equal()

# change from flat map to globe
p + coord_map("ortho", orientation = c(41.0082, 28.9784, 0)) +
  xlab('') + ylab('') +
  theme(axis.ticks.x = element_blank()) +
  theme(axis.ticks.y = element_blank()) +
  theme(axis.text.x = element_blank()) +
  theme(axis.text.y = element_blank()) +
  ggtitle('World map')

# finish writing to jpeg
dev.off()

This is as far as I’ve gotten: https://imgur.com/y9LPqVS

The code above is currently chugging away, and has been for the past 4 hours. It’s remained within the bounds of the memory of my machine, so that’s a good sign.

What is the idiomatic way of projecting a geodesic grid over a relief map of the Earth? How could I include the relief data from the naturalearth tif file with my code thus far?