Difficulty understanding Algorithm to print Partitions of a Number

Here's some code I came across, which aims to print the partitions of a given number $n$.

I am unable to understand how the code accomplishes what it does; that is, I have difficulty understanding the algorithm. How does the code work?

#include <stdio.h>
#include <string.h>
#include <math.h>

// Take a number and print it textually.
// For non-positive numbers it produces the empty string.
void itoa(int n, char *str){
    if(n <= 0){
        str[0] = '\0';
        return;
    }
    int i, len = floor(log10(n) + 1); // number of digits in n
    for(i = len - 1; i >= 0; i--){
        str[i] = '0' + (n % 10);
        n /= 10;
    }
    str[len] = '\0';
}

// n tells us which number is to be partitioned
// next tells us the location in the string where to print next
// min tells us the minimum number we must choose next
void partition(char *str, int n, int next, int min){
    if(n == 0){
        str[next] = '\0';
        printf("%s\n", str);
        return;
    }
    int i;
    // If this is not the first number in the partition,
    // we need a plus sign before we print the next number.
    if(next)
        str[next++] = '+';
    // Start from min so that numbers in a partition are always
    // in non-decreasing order. This ensures that we will never
    // repeat a partition twice.
    for(i = min; i <= n; i++){
        itoa(i, str + next);
        // We have already absorbed i, so partitions of n-i are now needed.
        // All future numbers in this partition must be at least i.
        partition(str, n - i, next + strlen(str + next), i);
    }
}

int main(){
    char str[1000];
    // This code will output partitions in lexicographically increasing
    // order. However, lexicographic order means something a bit different
    // for this code. Nevertheless, no partition will be repeated twice.
    int n;
    scanf("%d", &n);
    // This code can handle any positive number, with any number of digits,
    // so long as writing the partition does not take more than 999 chars.
    partition(str, n, 0, 1);
    return 0;
}
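For concreteness, here is the output I believe it produces for n = 4 (worked out by hand, so please correct me if I mis-traced it): every partition appears exactly once, with parts in non-decreasing order.

1+1+1+1
1+1+2
1+3
2+2
4

What I cannot see is why starting each recursive call at min guarantees both that every partition is produced and that none is repeated.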

Any and all help will be appreciated. Thank you!

I need some help with understanding +3 Piercing


I'm super new to D&D and I was hoping to get some clarification on a few things. My character has a longbow and a short sword. The longbow has an ATK bonus of +5 and damage/type of 1d8 + 3 piercing. My question is: how do I calculate that? Do I roll the d8 and then add +5 and +3? Or what should I do?
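To make my confusion concrete, here is my best guess at the calculation, which may well be wrong:

attack roll = 1d20 + 5 (compared against the target's AC)
damage on a hit = 1d8 + 3 piercing

So if I rolled a 12 on the d20 and a 6 on the d8, would that be 17 to hit and 9 piercing damage?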

Having trouble understanding the use of a label in Assembly

I am currently having trouble understanding what this label means in Assembly, as it has no size directive attached to it. In the following program, which declares several variables as stack offsets, the label is named SCMP_VARSIZE. I have seen many other labels with a VARSIZE postfix attached to them and can't understand why it is used in programs.

; Stack Usage:
        OFFSET 0
SCMP_RETVAL  DS.B 1 ; Return value
SCMP_VARSIZE
SCMP_PRY     DS.W 1 ; Preserve Register Y
SCMP_PRX     DS.W 1 ; Preserve Register X
SCMP_RA      DS.W 1 ; return address
SCMP_STR1    DS.W 1 ; address of first string
SCMP_STR2    DS.W 1 ; address of second string

strcmp: pshx ; preserve registers
        pshy
        leas -SCMP_VARSIZE,sp
        clr SCMP_RETVAL,sp
        ...
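My working guess, which may well be wrong, is that after OFFSET 0 each DS directive only advances a location counter instead of allocating storage, so every label just names a byte offset into the stack frame, and a bare label like SCMP_VARSIZE names the running total at that point. Here is a small C sketch of how I imagine the assembler counts (the names simply mirror the listing above):

#include <stdio.h>

// Hypothetical mirror of the listing: DS.B 1 advances the counter by one
// byte, DS.W 1 by two. A label with no directive just captures the current
// counter value, so SCMP_VARSIZE would be the size of the locals above it.
int main(void) {
    int off = 0;
    int SCMP_RETVAL  = off; off += 1; // DS.B 1
    int SCMP_VARSIZE = off;           // bare label: locals end here
    int SCMP_PRY     = off; off += 2; // DS.W 1
    int SCMP_PRX     = off; off += 2; // DS.W 1
    int SCMP_RA      = off; off += 2; // DS.W 1
    int SCMP_STR1    = off; off += 2; // DS.W 1
    int SCMP_STR2    = off; off += 2; // DS.W 1
    printf("RETVAL=%d VARSIZE=%d PRY=%d PRX=%d RA=%d STR1=%d STR2=%d\n",
           SCMP_RETVAL, SCMP_VARSIZE, SCMP_PRY, SCMP_PRX, SCMP_RA,
           SCMP_STR1, SCMP_STR2);
    return 0;
}

If that is right, leas -SCMP_VARSIZE,sp would be allocating exactly the locals (here a single byte for SCMP_RETVAL), but I would like confirmation.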

The program compares two strings but that is not important here. I just don’t understand what the VARSIZE label is used for in assembly programs.

Understanding memory mapping conceptually

I've already read several blogs and questions on Stack Exchange, but I'm unable to grasp what the real drawbacks of memory-mapped files are. The following are frequently listed:

  1. You can’t memory map large files (>4GB) with a 32-bit address space.

QUESTION #1: Why? Isn't that the whole point of virtual memory? If a file is larger than 4GB, it may cause thrashing by swapping out some memory-mapped pages, but why is there a hard limit?
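For context, the workaround I have seen suggested for 32-bit systems is to map such a file one window at a time, reusing the same address range; a sketch under that assumption (POSIX, error handling omitted; WINDOW and checksum_file are my own names, and on 32-bit glibc you would also need -D_FILE_OFFSET_BITS=64 for a 64-bit off_t):

#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define WINDOW (64UL * 1024 * 1024) // 64 MiB, page-aligned window size

// Touch every byte of a file that may be larger than the address space
// by mapping fixed-size windows one at a time instead of the whole file.
long checksum_file(const char *path, off_t file_size) {
    long sum = 0;
    int fd = open(path, O_RDONLY);
    for (off_t off = 0; off < file_size; off += WINDOW) {
        size_t len = (file_size - off) < (off_t)WINDOW
                   ? (size_t)(file_size - off) : WINDOW;
        char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
        for (size_t i = 0; i < len; i++)
            sum += p[i];
        munmap(p, len); // release the address range before the next window
    }
    close(fd);
    return sum;
}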

  2. If the application is trying to read from a part of the file that is not loaded in the page cache, it (the application) will incur a penalty in the form of a page fault, which in turn means increased I/O latency for the operation.

QUESTION #2: Isn't this the case for a standard file I/O operation as well? If an application tries to read from a part of a file that is not yet cached, it results in a syscall that causes the kernel to load the relevant page/block from the device. And on top of that, the page then needs to be copied into the user-space buffer.

Is the concern here that page faults are somehow more expensive than syscalls in general (my interpretation of what Linus Torvalds says here)? Is it because page faults are blocking => the thread is not scheduled off the CPU => we are wasting precious time? Or is there something I'm missing?
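To make sure I am comparing the same things, here is my mental model of the two read paths as a sketch (my own code, not taken from any of the linked discussions):

#include <unistd.h>
#include <sys/mman.h>

// Path 1: standard I/O. Each read() is a syscall that may block while the
// kernel fills the page cache, and it always copies the data into buf.
long sum_via_read(int fd) {
    char buf[4096];
    long sum = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            sum += buf[i];
    return sum;
}

// Path 2: mmap. No syscall per access; instead, the first touch of each
// page takes a page fault that the kernel services from the page cache
// (or the device), with no copy into a separate user buffer.
long sum_via_mmap(int fd, size_t size) {
    char *p = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    long sum = 0;
    for (size_t i = 0; i < size; i++)
        sum += p[i];
    munmap(p, size);
    return sum;
}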

  3. Overhead of kernel mappings and data structures – according to Linus Torvalds. I won't even attempt to question this premise, because I don't know much about the internals of the Linux kernel. :)

  4. No support for async I/O for memory-mapped files.

QUESTION #3: Is there an architectural limitation to supporting async I/O for memory-mapped files, or is it just that no one has gotten around to doing it?

  5. One drawback that I thought of myself: if too many files are memory-mapped, available system memory can drop => pages may be evicted => potentially more page faults. So some prudence is required in deciding which files to memory-map and in considering their access patterns.

QUESTION #4: Vaguely related, but my interpretation of this article is that the kernel can read ahead for standard I/O (even without fadvise()) but does not read ahead for memory-mapped files (unless issued an advisory with madvise()). Is this accurate? If this statement is in fact true, is that why syscalls for standard I/O may be faster, as opposed to a memory-mapped file, which will almost always cause a page fault?
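For reference, these are the two hints I mean; a minimal sketch assuming a Linux/POSIX system, where fd, addr, and len are placeholders:

#include <fcntl.h>
#include <sys/mman.h>

// Hint sequential access so the kernel can read ahead aggressively.
void hint_sequential(int fd, void *addr, size_t len) {
    // For standard I/O on a file descriptor (offset 0, length 0 = whole file):
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    // For an existing memory mapping:
    madvise(addr, len, MADV_SEQUENTIAL);
}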

Trouble Understanding Machine Allocation when deploying Charmed Kubernetes

I was working through the quick-start guide for deploying Kubernetes to an Ubuntu EC2 instance in AWS.

I got through the guide without any trouble. One thing I don't understand is how and why adding a k8s model adds 10 "EC2 machines". Are these machines virtual machines that exist on the EC2 instance that I am running the controller on? I would assume so, because I don't see 10 additional EC2 instances show up in my AWS account.

Understanding ticksPerSecond and duration with skeletal animations

This is my first question here, so I hope I am doing everything correctly. For several weeks I have been reading about skeletal animation, aiming to add a simple animation controller to my small game engine. After following a video tutorial by ThinMatrix, I successfully added skeletal animation to my engine (I used Assimp to load a .dae file from Blender with one animation in it).

Once all the base work was finished, I started thinking about how to change an animation's speed by a factor (x2, x3, etc.), and here I ran into some problems. As far as I understand, every animation has a duration field (measured in ticks), which I suppose should be at least something like 25 fps (to obtain a reasonably smooth animation) times the animation's length in seconds. Every animation also has another field called ticksPerSecond, which (as the name says) is the number of ticks in every second.

In my engine I have a data structure called Animator that contains an array of Animation objects, and for each one it keeps an entry in its ticks_per_second and duration arrays. The following code shows how I take the data from Assimp:

animator->ticks_per_second[animation_index] =
    scene->mAnimations[animation_index]->mTicksPerSecond != 0 ?
        scene->mAnimations[animation_index]->mTicksPerSecond : 25.f;
animator->duration[animation_index] = scene->mAnimations[animation_index]->mDuration;

If I print the two variables I get this result:

DEBUG: ticks per sec: 1.000000
DEBUG: total ticks: 0.833333

So here is my question: why do these variables take on these values? Trying to explain it to myself, the best idea I have come up with is that if ticksPerSecond equals 1, the animation would advance at the same rate as the main loop's frame rate; but in the end I am not sure about that, and I would appreciate your help.
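For context, this is how I plan to apply a speed factor once I understand the units; a sketch where speed_factor and the function name are my own, and only ticks_per_second and duration come from Assimp:

#include <math.h>

// Convert elapsed wall-clock seconds into an animation time in ticks,
// scaled by speed_factor (2.0 = twice as fast), looping the animation.
float animation_time_ticks(float elapsed_seconds, float ticks_per_second,
                           float duration, float speed_factor) {
    float t = elapsed_seconds * ticks_per_second * speed_factor;
    return fmodf(t, duration);
}

If ticksPerSecond really is 1 here, this would make my duration of 0.833333 ticks mean a 0.83-second animation, which is part of what confuses me.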

Understanding bash redirection using the > character

I am learning bash and am not able to understand what is going wrong with the output redirection in the following example:

I have a file called myfile.txt with the following content.

Practice makes Perfect

I am going to use the tr command to replace P with p:

cat myfile.txt | tr P p 

This does what I want. Now I am going to put the result back into the original file:

cat myfile.txt | tr P p > myfile.txt 

But after executing the above command, myfile.txt is empty… why is this happening?

If I send the output to a different file, then it works as expected:

cat myfile.txt | tr P p > anotherfile.txt 
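My current guess, and please correct me if this is wrong, is that the shell sets up the redirection before any command in the pipeline runs, so the target is truncated before cat ever reads it. A C sketch of what I imagine the shell does for > myfile.txt:

#include <fcntl.h>
#include <unistd.h>

// What I think the shell does when it sees "> myfile.txt": open the target
// with O_TRUNC (emptying it immediately), then point the command's stdout
// at it. Only after that does the pipeline actually run.
void redirect_stdout_to(const char *path) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    dup2(fd, STDOUT_FILENO);
    close(fd);
}

Is that accurate, and is that why redirecting to anotherfile.txt works while myfile.txt ends up empty?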

PCI DSS – Trouble understanding SAQ A

I am considering using a payment gateway for an e-commerce store to decrease the number of requirements for being PCI DSS compliant.

As stated in SAQ A, the merchant has to confirm that: "Any cardholder data your company retains is on paper (for example, printed reports or receipts), and these documents are not received electronically."

The PCI DSS glossary of terms defines cardholder data as follows: "At a minimum, cardholder data consists of the full PAN. Cardholder data may also appear in the form of the full PAN plus any of the following: cardholder name, expiration date and/or service code."

For accounting purposes I need a receipt for each purchase made in the e-commerce store. This receipt has to contain the cardholder name.

As far as I understand, the receipt for each purchase made in the store is available from the payment gateway. And I doubt that the payment gateway will send reports by post in the 21st century.

How is it then possible to be compliant with both PCI DSS SAQ A and the accounting laws at the same time?

Understanding JWT and SSO

I'm having trouble understanding how to set up SSO between my app and another app we have deployed. I'm new to setting this kind of thing up, so I was hoping someone could explain whether I'm on the right path, both from the standpoint of making it work and from the standpoint of making it secure.

We have our app (let’s say it’s https://app.mydomain.com).

  • Our users log in there with their user/pass.
  • This app uses angular and webapi.
  • The login endpoint makes use of Microsoft.AspNet.Identity.Owin.SignInManager.

All of this was set up by our previous developer, so my knowledge about how it works isn’t as much as I’d like.

Now we have a desire to allow our users to use Jupyter Notebook. Let’s say we have this set up at https://jupyter.mydomain.com. And, since they are already logged into our app, we don’t want them to have to log in again to Jupyter.

One of our AWS guys set the Jupyter side up using jwtauthenticator. His initial request was that I code our app to do the following:

  1. Generate a JWT token in our app.
  2. Add a link in the app which would open jupyter.mydomain.com in a new tab.
  3. Pass an Authorization: bearer <jwttoken> header when this link is clicked.

He tested sending this header in Postman successfully. However, on the app side, I don't believe it's possible to tell the browser to send custom headers when opening a link in a new tab, so that solution won't work.

Next, our AWS guy suggested setting it up using query parameters. Basically something like:

https://jupyter.mydomain.com?jwt=<jwt token>

This also worked for him in Postman, and I'm pretty sure it would work in the app. But isn't this a security risk, even with HTTPS? I believe that, at the very least, the token ends up in the server logs, which I don't think is a great idea.

The query-param idea made me think about using POST instead, as I believe the token would then be carried in the encrypted request body. But, even if I'm correct, it appears jwtauthenticator won't handle POST requests. I suppose it could be modified to handle them, but I'm not sure that's the right solution.

After reviewing the jwtauthenticator code, it appears that it will also look for a cookie named XSRF-TOKEN. This is what I think is the appropriate solution, but I'm not sure. Plus, I can't seem to make it work.

Here’s what I’m doing:

  1. User clicks link which hits an endpoint (let’s say api/auth/jupyter)
  2. Endpoint generates JWT token
  3. Endpoint sets cookie.
    • Name is XSRF-TOKEN.
    • Value is the JWT token string.
    • Secure is true
    • HttpOnly is false
    • Domain is mydomain.com
    • Path is /
  4. OK response with the cookie is returned to the client (DevTools for app.mydomain.com shows the cookie is set). The raw response header, as I understand it, is sketched after this list.
  5. Call window.open("https://jupyter.mydomain.com") (DevTools for jupyter.mydomain.com also shows the cookie is set).
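Putting steps 3 and 4 together, I believe the response includes a header roughly like this (my reconstruction from the settings above, with the token value elided):

HTTP/1.1 200 OK
Set-Cookie: XSRF-TOKEN=<jwt token>; Domain=mydomain.com; Path=/; Secure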

Unfortunately, I'm getting a 401 error. I've also tried setting this cookie in Postman and testing that way, which also responds with a 401. Judging from the jwtauthenticator code, I think the only thing that could be happening is that the cookie is not being read.

So, I guess I have a few questions:

  1. Is there some security reason that would keep the cookie from being read? I assumed that, since the apps are on the same domain, that wouldn't be the problem.

  2. Assuming I can get the problem with reading the cookie figured out, is this secure? I’m concerned there’s still a CSRF problem.

  3. In general, is this an appropriate way to provide SSO, or are we going down the wrong path? I know security is not always easy, but both our AWS guy and I have put a lot of time into trying to come up with a solution. It seems like it should be easy: all we have to do is get the token to the Jupyter server. But it is proving very difficult, which makes me think we're doing the wrong thing.