Any drawbacks to AWS certificate manager wildcard certificates?

Let’s say I’m using AWS Certificate Manager to get a certificate for use with AWS CloudFront. I can specify an alternate domain and point it to another CloudFront distribution in my DNS.

But AWS Certificate Manager also lets me specify a wildcard * as an alternate domain, which would allow me to later point my DNS at yet another CloudFront distribution if I decided I needed that.
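For concreteness, requesting such a certificate with a wildcard alternate name through the AWS CLI might look like the sketch below (the domain names are placeholders, and DNS validation is assumed):

```shell
# Request an ACM certificate with a wildcard subject alternative name.
# "example.com" is a placeholder domain; substitute your own.
aws acm request-certificate \
    --domain-name example.com \
    --subject-alternative-names "*.example.com" \
    --validation-method DNS \
    --region us-east-1   # certificates used with CloudFront must be in us-east-1
```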

Is there any downside to adding a wildcard domain such as * in AWS Certificate Manager? Does it cost more? Does it make my configuration inflexible in some way? Why wouldn’t I always specify a wildcard * as an alternate domain, since it gives me the flexibility to add a subdomain whenever I want in the future?

Do Solutions Exist for Mitigating Common Drawbacks of Resistance-as-a-Depletable-Resource?

I’ve always been bothered by the way resistances to non-damaging ‘bad effects’ are designed in RPGs. I’m talking about effects such as temporary stunning, paralysing, blinding, charm, itching, or loss of attributes, primarily meant to be used in a conflict (most commonly a physical one).

I’m aware of two main approaches to designing them: save-or-succumb (which isn’t the subject of this question, as I find it too easily subject to ‘rocket tag’ effects) and deplete-a-resource-to-resist (such as use of Willpower to resist being persuaded in Exalted 2e, or the way Control Points work in GURPS Technical Grappling and derivative mechanical frameworks).

The second design approach seems to solve at least part of the ‘rocket tag’ issue of save-or-succumb, but, at least in simpler implementations, it has several common issues of its own:

  • If the resource doesn’t regenerate meaningfully in a timeframe of a conflict, then full depletion of the resource can be pretty much equivalent to fully running out of HP – i.e. a complete loss in the conflict. At a minimum, this seems likely to lead to a full ‘stunlock’ of some sort.
  • If all afflictions deplete the same amount of the resistance resource, this encourages having a cheap affliction whose sole purpose is to deplete the resource (rather than to inflict any negative effect of its own), which seems to be a perverse incentive.
    • If afflictions cost the same but the one with the milder effect depletes more of the resource, that milder affliction is incentivised to serve the same role.
    • If afflictions with different degrees of effect nastiness cost the same and deplete the same, the ‘weaker’ one becomes pointless.
    • Note: for the purposes of this discussion, ‘cost’ may mean either literal cost in character creation points, or the cost in mana to cast, or even the more abstract ‘cost’ of being accessible on different levels in a levelled framework. It’s not relevant which for the purposes of the analysis.
  • Conditional recovery of the depleted resistance resource pool (such as found in some non-tabletop RPGs) seems like it can easily lead to designing an overengineered system that’s not very playable.
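To make the first bullet concrete, here is a minimal Python sketch of a non-regenerating resistance pool (the numbers and class names are made up for illustration, not taken from any particular system); once the pool empties mid-conflict, every further affliction lands automatically:

```python
# Toy model of resistance-as-a-depletable-resource.
# Illustrates the 'stunlock' failure mode: with no in-conflict
# regeneration, afflictions land automatically once the pool is empty.

class Defender:
    def __init__(self, resistance_pool):
        self.pool = resistance_pool

    def resist(self, cost):
        """Spend `cost` from the pool to shrug off one affliction.

        Returns True if the affliction is resisted, False if it lands.
        """
        if self.pool >= cost:
            self.pool -= cost
            return True
        return False

defender = Defender(resistance_pool=10)
outcomes = [defender.resist(cost=4) for _ in range(4)]
print(outcomes)  # [True, True, False, False]
```

Nothing here ever refills `pool`, which is exactly the property the first bullet objects to: after two resisted afflictions the defender is helpless for the rest of the conflict.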

Are there system design principles or approaches that solve those common issues?

Directions that I think are more likely to help with the question:

  • The design principle or principles themselves that can then be applied when writing a replacement for various systems’ resistance framework.
  • A worked example from any existing system that tackles those issues of resistance-as-resource (and doesn’t reintroduce the ‘rocket tag’ issue of save-or-succumb).
  • Possible frame challenge: a radically different approach / framework of resistance, which is neither save-or-succumb nor resistance-as-a-resource, and solves the issues of the latter without reintroducing the issues of the former.

What are the drawbacks to SQL in JSP, PHP, or other server-side scripting?

I’m working on a code base that dates back at least 15 and possibly as much as 20 years, where the older code is still functional but includes thousands of JSPs full of Connection, PreparedStatement, and ResultSet objects, as well as hard-coded SQL, to manage data access.

Yes, it’s about as painful as it sounds … but my question, specifically, is: since it still works, what are the drawbacks beyond simple functionality?

I want to write up a proposal to submit to the owner of the company so he’ll pony up the cash to get this all updated to modern standards. I can think of many of the more obvious issues, but from a business perspective, what looming crises are on the horizon that can only be avoided by turning the ship now?

[Edit] To clarify, I’m not looking for advice or suggestions on what technologies to use. As a question of software development I’m looking for a short list of issues relevant in 2016 that make good business sense, to use as part of the proposal. For example, what are the “hidden costs” to this architecture, what are the vulnerabilities and potential damage from failure, etc.

What are the drawbacks of using imperative programming in Scheme?

I’m trying to create a game in Scheme, but it looks like a fairly bad fit for functional programming. I stick to a functional style wherever possible, but it’s becoming apparent that an imperative approach would yield faster and easier code. I’d still like to use Scheme for those imperative parts, because:

  1. The code is all in the same language.

  2. I’m quite keen on the macros Scheme provides, and they seem to be a good fit for defining update-logic syntax (thinking ECS).

However, there is also the alternative of writing the core bits in a traditionally imperative language and calling Scheme as a scripting layer.

So, to condense: when using Scheme imperatively, with mutation, what are its drawbacks, particularly compared to mainstream languages like C and Java?

P.S. I know Common Lisp is more pragmatic in such a scenario, but I’d like to get an answer without it.

Drawbacks of adding type equality to 1ML

In the 1ML – Core and Modules United (F-ing First-Class Modules) paper, the author gives the following example for why module types do not form a lattice under subtyping:

    f1 : {type t a; x : t int} → int
    f2 : {type t a; x : int} → int
    g  = if condition then f1 else f2

There are at least two possible types for g:

    g : {type t a = int; x : int} → int
    g : {type t a = a; x : int} → int

Neither is more specific than the other, so no least upper bound exists. Consequently, annotations are necessary to regain principal types for constructs like conditionals, in order to restore any hope for compositional type checking, let alone inference.

At first glance, it seems like the problem can be remedied by adding in type equalities:

    g : (t int ~ int) ⇒ {type t a; x : int} → int

However, this “solution” isn’t discussed in the paper. So it feels like I’m overlooking some obvious drawback of this idea, apart from the one that it might not be possible to encode type equality in System Fω (which is the raison d’être for the paper). Does adding type equalities make type checking/inference even more problematic in other situations?

Are there any drawbacks to “travel money cards”?

The cards I’m asking about are sometimes called “travel cards” or “prepaid cards” or “forex cards”. I don’t know if there is one universally accepted name.

I have never used one because I believe they are not suited to my style of travel, but maybe I’m wrong about that.

I believe you pre-load them with amounts of money in one or several foreign currencies.

I believe that many banks in many countries provide them, but that places other than banks sometimes also provide them.

I cannot find a Wikipedia page for them, or even decide what the most usual term is. But here is the link for the card of this type that my bank here in Australia provides, as one example for those who need more info about what I’m talking about.

My actual question about them is:

When are these cards not suitable for travellers? Do they have any downsides?