Using Expand, Guess, Verify to solve the following recurrence relation

Hello, and thanks to those who bothered reading! I am trying to solve the recurrence relation $S(n) = S(n-1) + (2n-1)$ with the base case $S(1) = 1$.

I already used the Solution Formula and got the closed-form solution $1^n + n^2 - 1$, but for the expansion part I am having trouble with the $g(n)$ term; perhaps even my Solution Formula answer is wrong. Any help is appreciated, and I am more than willing to explain the problem further! The Solution Formula is $S(n) = c^{n-1} S(1) + \sum_{i=2}^n (c^{n-i} g(i))$, where $g(n) = 2n-1$ and $c$ is the constant in front of the $S(n-1)$ term, which in this case is $1$.
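For what it's worth, plugging $c = 1$ and $g(i) = 2i-1$ into that formula, and using the fact that the first $n$ odd numbers sum to $n^2$, gives

$S(n) = 1^{n-1} S(1) + \sum_{i=2}^{n} 1^{n-i} (2i-1) = 1 + \sum_{i=2}^{n} (2i-1) = 1 + (n^2 - 1) = n^2,$

which agrees with the closed form above, since $1^n + n^2 - 1 = 1 + n^2 - 1 = n^2$. So the guess to verify in the last step is simply $S(n) = n^2$.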

Can someone verify the correctness of my solution to this Candidate-Elimination exercise?

[Training examples given in an image.]

My attempt:

Step 0: GH: {< ?,?,?,? >}, SH: {< ∅,∅,∅,∅ >}

Step 1: GH: {< f,?,?,? >, < ?,t,?,? >, < ?,?,t,? >, < ?,?,?,f >}, SH: {< ∅,∅,∅,∅ >}

Step 2: GH: {< ?,?,t,? >, < ?,?,?,f >}, SH: {< t,f,t,f >}

Step 3: GH: {< ?,?,t,? >, < t,?,?,f >}, SH: {< t,f,t,f >}

Step 4: GH: {< ?,?,t,? >}, SH: {< t,f,t,? >}

Step 5: GH: {< ?,?,t,? >}, SH: {< t,?,t,? >}

The version space is exactly the one bounded by the GH and SH of the last step.
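In case it helps to check the trace mechanically, below is a minimal Python sketch of the candidate-elimination boundary updates for this four-boolean-attribute hypothesis representation. The two training examples at the bottom are hypothetical placeholders (the real ones are in the image), chosen only so the script reproduces something like steps 1 and 2 above.

    # Minimal candidate-elimination sketch for hypotheses over four boolean
    # attributes, written as tuples of 't', 'f', or '?' (wildcard); None
    # stands in for the maximally specific hypothesis <0,0,0,0>.
    WILD = '?'
    VALUES = ('t', 'f')

    def covers(h, x):
        """True if hypothesis h classifies instance x as positive."""
        if h is None:                       # <0,0,0,0> covers nothing
            return False
        return all(a == WILD or a == b for a, b in zip(h, x))

    def more_general_eq(h1, h2):
        """True if h1 is at least as general as h2."""
        if h2 is None:                      # everything generalizes <0,0,0,0>
            return True
        if h1 is None:
            return False
        return all(a == WILD or a == b for a, b in zip(h1, h2))

    def min_generalizations(s, x):
        """Minimally generalize s so that it covers the positive example x."""
        if s is None:
            return [tuple(x)]
        return [tuple(a if a == b else WILD for a, b in zip(s, x))]

    def min_specializations(g, x):
        """Minimally specialize g so that it rejects the negative example x."""
        out = []
        for i, a in enumerate(g):
            if a == WILD:
                for v in VALUES:
                    if v != x[i]:           # fix one attribute to the value x lacks
                        out.append(g[:i] + (v,) + g[i + 1:])
        return out

    def candidate_elimination(examples, n=4):
        G = [tuple(WILD for _ in range(n))]
        S = [None]
        for x, positive in examples:
            if positive:
                G = [g for g in G if covers(g, x)]
                S = [s2 for s in S
                     for s2 in (min_generalizations(s, x) if not covers(s, x) else [s])
                     if any(more_general_eq(g, s2) for g in G)]
            else:
                S = [s for s in S if not covers(s, x)]
                G = [g2 for g in G
                     for g2 in (min_specializations(g, x) if covers(g, x) else [g])
                     if any(more_general_eq(g2, s) for s in S)]
                # drop members of G subsumed by a strictly more general member
                G = [g for g in G
                     if not any(h != g and more_general_eq(h, g) for h in G)]
        return G, S

    # HYPOTHETICAL training data -- substitute the examples from the image:
    examples = [(('t', 'f', 'f', 't'), False),
                (('t', 'f', 't', 'f'), True)]
    print(candidate_elimination(examples))

With these placeholder examples the script prints G = [(?,?,t,?), (?,?,?,f)] and S = [(t,f,t,f)], i.e. the GH and SH of step 2.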

Can a backend verify a mobile client using OpenID Connect?


Objective

My goal is to implement a generic mobile-client-and-backend authentication flow, just for practice. Imagine that I am building a note app that stores user notes on the backend. Instead of implementing my own user management in the backend, I want to rely on some popular OIDC provider to authenticate users from my backend.

The important thing is that I am not interested in accessing any user data the OIDC provider offers. My goal is to verify the user and the client whenever a request hits my backend.


My understanding of the OIDC authentication flow is as follows:

  • IdProvider: the OIDC provider
  • MyClient: the mobile application; has the client_id
  • MyBackend: has the client_secret

Steps:

  1. MyClient generates a PKCE code challenge (see the sketch after this list).
  2. IdProvider authenticates the user, and MyClient receives a temporary authorization_code.
  3. (not sure about this) MyClient sends MyBackend both the temporary authorization_code and the PKCE code verifier for the token exchange.
  4. MyBackend does the token exchange with the IdProvider.
  5. (also not sure about this) MyBackend sends the id_token and refresh_token back to MyClient.
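For concreteness, step 1 would look something like this; a minimal sketch of generating the verifier/challenge pair with the S256 method, following RFC 7636 (nothing here is provider-specific):

    # Hedged sketch of step 1: MyClient generates a PKCE verifier/challenge
    # pair using the S256 method (RFC 7636).
    import base64
    import hashlib
    import secrets

    def make_pkce_pair():
        """Return (code_verifier, code_challenge)."""
        verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
        digest = hashlib.sha256(verifier.encode("ascii")).digest()
        challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
        return verifier, challenge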

My justification for steps 3 and 5 is this:

  • Only MyBackend can access the client_secret. Therefore the token exchange can only be done by MyBackend, and MyClient is responsible for sending the temporary authorization_code and the PKCE code verifier.
  • MyClient needs the id_token to hit normal MyBackend endpoints. MyClient also needs the refresh_token to initiate the token refresh flow when the id_token expires.
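And here is a minimal sketch of the token exchange in step 4 as I picture it. The endpoint URL, client values, and redirect URI are placeholders; the parameter names follow RFC 6749 (OAuth 2.0) and RFC 7636 (PKCE):

    # Hedged sketch of step 4: MyBackend exchanges the authorization_code
    # (plus the PKCE code_verifier relayed by MyClient) for tokens.
    import requests

    def exchange_code(auth_code, code_verifier):
        resp = requests.post(
            "https://idprovider.example.com/oauth2/token",  # placeholder endpoint
            data={
                "grant_type": "authorization_code",
                "code": auth_code,
                "code_verifier": code_verifier,       # PKCE proof from MyClient
                "client_id": "my-client-id",          # placeholder
                "client_secret": "my-client-secret",  # only MyBackend knows this
                "redirect_uri": "myapp://callback",   # must match the auth request
            },
            timeout=10,
        )
        resp.raise_for_status()
        # Typically contains id_token, access_token and refresh_token.
        return resp.json()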

Problem

Now, in the above flow, it looks like there is no way I can prevent an attacker from stealing the client_id and impersonating MyClient. I have tried to search for sample implementations on the internet, but many of them simply rely on client-side authentication only. For example, this one: https://github.com/awslabs/aws-sdk-android-samples/tree/master/AmazonCognitoAuthDemo asks you to store the client_secret on the client side. I am not sure why this is acceptable, and why AWS even built a sample for it.

Any help would be appreciated.

How to verify that Google’s apt signing key change is not malicious?

I have an Ansible script that sets up the Google Chrome apt repo. I keep Google’s signing key together with the scripts (rather than downloading it every time) because I think it minimizes the chance of getting a malicious key (TOFU security model).

Now the key no longer works:

    W: GPG error: http://dl.google.com/linux/chrome/deb stable Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 78BD65473CB3BD13
    E: The repository 'http://dl.google.com/linux/chrome/deb stable Release' is not signed.

The URL from which I originally downloaded it now points to a different key (as in: the files differ). Moreover, I tried getting the key by fingerprint from a different source:

    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 78BD65473CB3BD13
    apt-key export 78BD65473CB3BD13

And I got yet another, different file. Which one should I use? How do I make sure that I can trust it? Is there a way to check that the old key just expired and the new one is a valid successor?
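For comparing the candidates side by side, this is roughly how I have been inspecting them; a small sketch that prints each key file’s fingerprint, creation, and expiry dates via gpg (the file names are placeholders, and it assumes gpg is on the PATH):

    # Hedged sketch: print fingerprint and validity dates of each candidate
    # key file so they can be compared side by side.
    import subprocess

    def key_info(path):
        return subprocess.run(
            ["gpg", "--show-keys", "--with-fingerprint", path],
            capture_output=True, text=True, check=True,
        ).stdout

    for keyfile in ["old-google-key.asc", "new-google-key.asc"]:  # placeholders
        print(keyfile)
        print(key_info(keyfile))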

What is a trustworthy way to verify the integrity of source code?

Some organizations take a hash of the source code build to maintain the integrity of the source code review process. The idea, according to the reviewer, is that if you change the source code, the hash of that source code changes. But when we zip the source code files, the hash of the build also changes, even though the code itself is unchanged.

So, my questions are:

1) Does this method fulfill the requirement of an integrity check?

2) Are there any alternatives for this kind of integrity check?
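One option that sidesteps the zip problem is to hash the source tree itself (relative paths plus file contents) rather than an archive of it, so repackaging does not change the digest. A minimal sketch (the path at the bottom is a placeholder):

    # Hedged sketch: hash the source *tree* (relative paths + file contents)
    # instead of an archive, so zipping or tarring does not affect the digest.
    import hashlib
    from pathlib import Path

    def tree_digest(root):
        h = hashlib.sha256()
        base = Path(root)
        files = sorted((p for p in base.rglob("*") if p.is_file()),
                       key=lambda p: p.as_posix())
        for path in files:
            h.update(path.relative_to(base).as_posix().encode("utf-8"))  # layout
            h.update(b"\0")
            h.update(path.read_bytes())                                  # contents
            h.update(b"\0")
        return h.hexdigest()

    print(tree_digest("path/to/source"))  # placeholder path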

Verify Quad9 is working

Disclaimer: I am a complete neophyte.

I set 9.9.9.9, Quad9, as my DNS in my router’s configuration.

  1. How do I verify Quad9 is working and that I am benefiting from its features, especially encrypted DNS and DNSSEC?
  2. Do I need to use a client on each computer/device, or does configuring my router make it work for the entire network?
  3. If using encrypted DNS and HTTPS (only), am I protected from snooping, including by my ISP?
  4. Is it bad to use encrypted DNS with Tor as mentioned here: https://www.privacytools.io/providers/vpn/#info (“However you shouldn’t use encrypted DNS with Tor.”)?
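For question 1, the only check I have found so far is to resolve a deliberately mis-signed DNSSEC test domain: a validating resolver such as Quad9 should fail to resolve dnssec-failed.org while still resolving normal domains. A small sketch of that check, runnable from any machine on the network:

    # Hedged sketch: a validating resolver should return SERVFAIL for the
    # deliberately mis-signed zone dnssec-failed.org, which surfaces here
    # as a resolution failure, while normal domains still resolve.
    import socket

    def resolves(name):
        try:
            socket.getaddrinfo(name, 443)
            return True
        except socket.gaierror:
            return False

    print("example.com resolves:", resolves("example.com"))              # expect True
    print("dnssec-failed.org resolves:", resolves("dnssec-failed.org"))  # expect False if validating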

I am signing (HMAC) outgoing webhooks to allow users to verify their source; should I also sign outgoing responses?

To allow API users to verify the authenticity of outgoing webhooks, I am using a model similar to Slack’s:

  • Concatenate the timestamp and body, HMAC them with a pre-shared key, and add the timestamp and HMAC digest to the headers.

  • The recipient does the same and compares the result to the digest in the header.
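For concreteness, this is the scheme; a minimal sketch with HMAC-SHA256 (the header names are illustrative, not prescribed by anything):

    # Hedged sketch of the signing scheme above: HMAC-SHA256 over
    # "timestamp:body" with a pre-shared key, sent alongside the timestamp.
    import hashlib
    import hmac
    import time

    def sign(secret, body):
        ts = str(int(time.time()))
        digest = hmac.new(secret, ts.encode("ascii") + b":" + body,
                          hashlib.sha256).hexdigest()
        return {"X-Signature-Timestamp": ts, "X-Signature": digest}

    def verify(secret, body, ts, digest, max_skew=300):
        if abs(time.time() - int(ts)) > max_skew:   # reject stale/replayed timestamps
            return False
        expected = hmac.new(secret, ts.encode("ascii") + b":" + body,
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, digest)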

I can either implement this exclusively on outgoing webhooks, or implement it as middleware that performs this process on both outgoing webhooks and responses to incoming requests.

Is doing the latter good practice? A good idea?

Impacts of a change to the Page Verify option

There’s no doubt about the importance of doing regular integrity checks on production databases.
I’m testing the impact of changing the maintenance plans and how databases are backed up.
First of all, it is necessary to activate the CHECKSUM page verify mode. I think this change impacts performance only gradually, because a page only gets its checksum when it is read into memory, changed, and written back to disk.

I have started to test this change with backups. I prepared a backup T-SQL script that saves the duration of each backup command (a sketch of such a harness follows the list below), and I ran each of the following commands 100 times:

  • BACKUP DATABASE [MyDB] TO DISK = N'nul' (ran 100 times)
  • BACKUP DATABASE [MyDB] TO DISK = N'nul' WITH CHECKSUM (ran 100 times)
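This is roughly the shape of the timing harness; a hedged sketch using Python and pyodbc (the connection string is a placeholder, and autocommit is required because BACKUP cannot run inside a transaction):

    # Hedged sketch of the timing harness: run each backup command 100 times
    # and record the durations. The connection string is a placeholder.
    import time
    import pyodbc

    CONN = ("DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
            "Trusted_Connection=yes;TrustServerCertificate=yes")
    COMMANDS = {
        "plain":    "BACKUP DATABASE [MyDB] TO DISK = N'nul'",
        "checksum": "BACKUP DATABASE [MyDB] TO DISK = N'nul' WITH CHECKSUM",
    }

    with pyodbc.connect(CONN, autocommit=True) as cn:
        for label, sql in COMMANDS.items():
            durations = []
            for _ in range(100):
                start = time.monotonic()
                cursor = cn.cursor()
                cursor.execute(sql)
                while cursor.nextset():   # drain the backup's informational messages
                    pass
                durations.append(time.monotonic() - start)
            print(label, "mean seconds:", sum(durations) / len(durations))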

MyDB is a 50 GB database with PAGE_VERIFY set to CHECKSUM. The result:

  • Backup with checksum is 20% slower

I ran some tests with a larger database (400 GB) and noticed that the first backups (with an empty cache) are generally slower. After the first backups have completed, the duration tends to stabilize.

My questions are:

  • Does the backup process buffer data in the cache? If yes, could this cause a variation in the PLE trend or memory pressure?
  • If backups with checksum are 20% slower, what happens with application queries? I don’t think my tests are the absolute truth, but...
  • Is there any kind of wait time linked to the checksum process?
  • Do you know if there are people online who have tested the impacts of this change? Any additional material would be useful.