## Are Half-Elves supposed to have a slender build like Elves, or are they supposed to have a build that’s intermediate between Humans and Elves? [5e]

The Player’s Handbook’s weight descriptions for Half-Elves are inconsistent. The Half-Elf section in Chapter 2 (pg. 38) says:

They range from under 5 feet to about 6 feet tall, and from 100 to 180 pounds, with men only slightly taller and heavier than women.

However, the “Height and Weight Range” table in Chapter 4 (pg. 121) gives a weight formula for Half-Elves of 110 + (2d8 × 2d4), which works out to 114 to 238 pounds.

These two ranges are very different.

Do we have any reason to label one description Correct and the other A Mistake? The 2018 PHB Errata are silent on this point.

Looking at the rest of the table’s formulas, Humans are 114 to 270 lbs, whereas Wood Elves are 102 to 180 lbs and High Elves are 92 to 170 lbs. (I’ve excluded the Drow because they’re significantly shorter.)

So are Half-Elves supposed to have a slender build like Elves (100-180 lbs), or are they supposed to have a build that’s intermediate between Humans and Elves (114-238 lbs)?

[As a side note, changing the Half-Elves’ weight modifier in the table from its current 2d4 (the same as a Human’s) to 1d4 (the same as the Wood and High Elves’) would yield a calculated range of 112 to 174 pounds.]
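The ranges above are easy to sanity-check with a short script; the base weights and dice below are taken from the question and the PHB table it cites.

```python
# Weight = base weight + (height modifier roll × weight modifier roll),
# following the PHB Chapter 4 "Height and Weight Range" table.
def weight_range(base, height_dice, weight_dice):
    """Each dice spec is (count, sides); returns the (min, max) weight."""
    h_min, h_max = height_dice[0], height_dice[0] * height_dice[1]
    w_min, w_max = weight_dice[0], weight_dice[0] * weight_dice[1]
    return base + h_min * w_min, base + h_max * w_max

print(weight_range(110, (2, 8), (2, 4)))   # Half-Elf as printed: (114, 238)
print(weight_range(110, (2, 8), (1, 4)))   # Half-Elf with a 1d4 modifier: (112, 174)
print(weight_range(110, (2, 10), (2, 4)))  # Human: (114, 270)
print(weight_range(100, (2, 10), (1, 4)))  # Wood Elf: (102, 180)
```

The same function reproduces the Human and Wood Elf figures quoted in the question.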

## Difference between NP-intermediate and NP-complete

Assuming P ≠ NP

How do you determine whether a problem is NP-intermediate or NP-complete?

Why is integer factorization believed to be NP-intermediate, while the knapsack problem is NP-complete?
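For what it’s worth, both problems sit inside NP for the same reason: a proposed solution (a certificate) can be verified in polynomial time, even if finding one may be hard. Here is a toy verifier for the decision form of knapsack (subset sum); the function and variable names are mine, purely for illustration:

```python
def verify_subset_sum(weights, target, certificate):
    """Check a claimed solution (a list of indices into weights) in
    polynomial time. NP membership only demands that certificates are
    easy to verify; whether they are easy to *find* is the P vs NP question."""
    return (len(set(certificate)) == len(certificate)            # no index reused
            and all(0 <= i < len(weights) for i in certificate)  # indices valid
            and sum(weights[i] for i in certificate) == target)  # sums to target

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 2, 5]))  # True: 3 + 4 + 2 = 9
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 1]))     # False: 3 + 34 = 37
```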

## How to create a satellite node as an intermediate node in NS2 while using MPTCP?

I want to create an MPTCP connection between clients, where the sender side creates two flows, routed through a GEO satellite and a LEO satellite, to the receiver side.

Node n0 will create two TCP flows (n0_0, n0_1), connected to the GEO satellite (r1) and the LEO satellite (r2) respectively; r1 and r2 will then be connected to node n1’s flows (n1_0, n1_1).

Could anybody suggest any links or a .tcl file describing anything related to this, especially with satellite nodes?

## Certificate check works with root but fails with Intermediate CA [duplicate]

I have three certificates: server.crt is issued by an intermediate CA, which in turn is issued by the root. The server.crt is configured on apache2 (version 2.4.29-1) on Ubuntu 18.04.

However, when I try to connect to the server URL through openssl (version 1.1.1) using the intermediate CA as the -CAfile parameter, I get the error Verify return code: 2 (unable to get issuer certificate). Command used: openssl s_client -connect server_URL:443 -CAfile Intermediate.pem

But when I connect to the same URL using the root certificate, it works with return code 0. Command used: openssl s_client -connect server_URL:443 -CAfile Root.pem

How can I make the certificate check work using the intermediate CA rather than the root certificate?

Note that the certificate chain verifications below are all successful:

openssl verify -CAfile root.pem Internal_CA.pem — OK

openssl verify -CAfile root_and_intemediate_combined.pem server.pem — OK

Also, I got the same results using both .pem and .crt formats for all the certificates.

## Do CAs issue an intermediate certificate for each new certificate request?

Do CAs issue an intermediate certificate for each new certificate?

I am new to certificates and am asking this to understand whether CAs have a ready set of intermediate certificates that they use to issue leaf certificates, or whether the intermediate is created based on the certificate request information provided by the requester.

## Validity of in-line help content over time as users graduate from novice to Intermediate stages

This is a question regarding an enterprise product.

• Option 1 – help text (2 lines max)
• Option 2 – help text (2 lines max)
• Option 3 – help text (2 lines max)

Notes –

1) This help text was added below the options because feedback from new users showed that the option label by itself was not sufficient to communicate the intent of the option.

2) Advanced users have come back saying that they do not need to see the help text every time, as they are well aware of the options. This is very understandable.

Questions –

Our product has users fairly evenly distributed across both ends of the expertise spectrum, and users graduate over time. A tooltip cannot be used: we have seen very little usage of tooltips, and they create extra friction for new users compared to immediately visible help. Considering that standard interaction design principles recommend designing for the ‘intermediate user’ (Alan Cooper, Don Norman) – is a tooltip the only way out, or are there other thoughts?

## SQL Server Agent – report failure but continue when an intermediate step fails

I have a SQL Server Agent job that has three steps with the following control flow:

• Step 1 – on success, go to next step; on fail, fail the job
• Step 2 – on success, go to next step; on fail, go to next step
• Step 3 – on success, report success; on fail, report failure

However, what I want is this: if step 2 fails, still run step 3, but report that the job has failed (regardless of whether step 3 succeeds).

The only way I can think of to do this is shown in the screenshot below: duplicate the final step, with the duplicate reporting failure even when it succeeds.

Is there a better way of doing this?

## SELinux blocks rotatelogs when called from an intermediate script

I am waving my self-esteem bye-bye. SELinux is giving me depression.

What happened:

I need to intercept log messages from Apache (used as a reverse proxy and authentication gateway). To do this reliably, I want to use a CustomLog configuration that pipes into a Python script. This is not too complicated: I understood that, e.g., a file context like httpd_unconfined_script_exec_t will get me what I want.

So the construct that works with SELinux in permissive mode is:

CustomLog pipes into a Python script. The Python script does some filtering and, in certain situations, manages a memcached instance used for authentication elsewhere. To keep this human-auditable, it writes two logs via rotatelogs subprocesses. So far the theory: in permissive mode this works; with SELinux enforcing, it fails.

The trouble seems to be that my script runs (multiple) rotatelogs subprocesses. rotatelogs has a defined transition in the SELinux policy module, roughly (httpd_t, httpd_rotatelogs_exec_t) -> httpd_rotatelogs_t. Without running in this context, rotatelogs seems to have trouble accessing the log directory.

So calling rotatelogs directly from httpd works. Calling it from my script leaves it in httpd_unconfined_script_t, which denies certain actions, as far as I understood.

I tried to read an SELinux book, but apparently it is too large a labyrinth for a quick understanding. I tried to read the related CentOS source RPMs (the reference policy source, for a start) and ended up with vertigo.

I do have a few (time-consuming) ideas for how to get out of this. I tried simply putting httpd_t on the script: it ran, with no audit logs, but it no longer worked. This is frustrating.

So here is the question set:

• Is there reference documentation on what the targeted policies actually do, including their transitions/relations? Something like a dictionary?
• Is there an advisable context type for such a script that does not render rotatelogs unusable when it is called as a subprocess?

My targeted last resorts for a way out are:

1. Keep the script as httpd_t and use audit2allow to capture the required changes. The final system will be Ansible-controlled, so this would be pretty straightforward to deploy.
2. Switch back to a single rotatelogs and try piping CustomLog into it directly; does that still work? My first tests failed, but this might have been due to wrong file contexts.
3. Forget about rotatelogs and do my own log rotation within the script.

Happy for any clue or even just some sympathy.

## Best practices: Save or return intermediate results?

Assuming IO is not an issue, is saving intermediate results considered a best practice? What are the pros and cons, and situations that warrant doing so or not?

Say I have two components along a long pipeline,
others--> Component_1 --> Component_2 --> others.

I can either save the output from Component_1 to disk, pass the path to Component_2, and have Component_2 read and process it from there; or I can return the output from Component_1 and pass it to Component_2 directly.
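The two wirings can be sketched as below; the component bodies are throwaway placeholders standing in for whatever the real pipeline stages do:

```python
import json
import tempfile
from pathlib import Path

def component_1(data):
    # placeholder transform standing in for the real Component_1
    return [x * 2 for x in data]

def component_2(data):
    # placeholder transform standing in for the real Component_2
    return sum(data)

# Variant A: return the intermediate result and pass it along in memory.
def run_in_memory(data):
    return component_2(component_1(data))

# Variant B: save the intermediate result and pass the path instead.
# The saved file doubles as a debugging artifact for this run.
def run_via_disk(data, workdir):
    path = Path(workdir) / "component_1_output.json"
    path.write_text(json.dumps(component_1(data)))
    return component_2(json.loads(path.read_text()))

with tempfile.TemporaryDirectory() as d:
    print(run_in_memory([1, 2, 3]))    # 12
    print(run_via_disk([1, 2, 3], d))  # 12, plus a file left behind to inspect
```

In this sketch, the save variant buys the debugging convenience from the pros list at the cost of the cleanup chores from the cons list.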

Context:
This is for data processing tasks: the server process itself runs continuously, but each piece of user input data triggers a single run through the pipeline, which completes before the next item is retrieved from the input queue.

Pros of saving:
1. Makes testing and debugging a bit easier: I don’t have to save the output from Component_1 in my test/debug code before doing things with it, if I don’t want to rerun Component_1. A debugger that saves all intermediate data could do that as well, of course, but saving everything means runs might take a while.
2. Makes debugging failed real runs easier, for the same reason.

Cons:
1. A performance hit, which we assume is negligible here.
2. All intermediate files have to be moved to trash somewhere during or at the end of runs.
3. Each run needs a separate debug folder with a unique ID tag, but that is usually necessary anyway, if only to store the output that the UI retrieves and presents.

## Book on discrete mathematics for beginners and intermediates

I need a book on discrete mathematics that will make it easier to learn algorithms afterwards.