Identified Folder does not reduce in size over time

Hi @Sven

I am trying to figure out how to work through my Identified folder so that everything in it ends up Verified (i.e., how to get more verified links, faster).

Monitoring over the last 2 days, with a project that only uses Identified links, these are my folder sizes:

Day 0
  • Identified: 494 MB
  • Submitted: 549 MB
  • Verified: 829 MB

Day 2
  • Identified: 511 MB
  • Submitted: 572 MB
  • Verified: 857 MB

My expectation was that the Identified folder would shrink over time, since links are deleted from it when they are submitted/verified; instead, the Identified folder is increasing in size.

Am I missing something?

What does a kernel of size n, n^2, … mean?

So according to Wikipedia,

In the notation of [Flum and Grohe (2006)], a "parameterized problem" consists of a decision problem $L \subseteq \Sigma^*$ and a function $\kappa : \Sigma^* \to \mathbb{N}$, the parameterization. The "parameter" of an instance $x$ is the number $\kappa(x)$. A "kernelization" for a parameterized problem $L$ is an algorithm that takes an instance $x$ with parameter $k$ and maps it in polynomial time to an instance $y$ such that

  • $x$ is in $L$ if and only if $y$ is in $L$, and
  • the size of $y$ is bounded by a computable function $f$ in $k$. Note that in this notation, the bound on the size of $y$ implies that the parameter of $y$ is also bounded by a function in $k$.

The function $f$ is often referred to as the size of the kernel. If $f = k^{O(1)}$, it is said that $L$ admits a polynomial kernel. Similarly, for $f = O(k)$, the problem admits a linear kernel.
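To make the size function concrete (a standard example, not part of the quoted text): for Vertex Cover parameterized by the solution size $k$, the Buss kernelization deletes isolated vertices and puts every vertex of degree greater than $k$ straight into the cover; any surviving yes-instance then has at most $k^2$ edges and at most $k^2 + k$ vertices, so $f(k) = O(k^2)$ and Vertex Cover admits a quadratic (hence polynomial) kernel.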

Stupid question, but since the parameter can be anything, can't you just define the parameter to be really large, so that you always have a linear kernel?

Can a warlock with Repelling Blast use Eldritch Blast to push a creature of any size 10 feet?

Going by RAW, unless I am missing official errata or clarification documents from WotC, I do not see any size limitation on the use of the Repelling Blast invocation with Eldritch Blast.

So does that mean you could push a creature of any size back up to 10 feet, regardless of context, weight, mass, or your own size?

How could a key be inserted into a heap without increasing the size of the array?

MAX-HEAP-INSERT(A, key)
    A.heap-size = A.heap-size + 1
    A[A.heap-size] = -infinity
    HEAP-INCREASE-KEY(A, A.heap-size, key)

How could a key be inserted into a heap without increasing the size of the array? With this code from Introduction to Algorithms, you can't just increase the heap size at will. Did I miss something? None of the online lectures I have seen talk about this issue, and the book doesn't touch on it either. Or is it that the lowest key in the array would be dropped automatically?
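For what it's worth, CLRS assumes the heap occupies a prefix of an array whose capacity A.length may exceed A.heap-size, so incrementing A.heap-size just claims an already-allocated slot. In a language with growable arrays the question disappears, because appending resizes the storage as needed. A minimal 0-indexed Python sketch of the same procedure (function names are mine, not the book's):

    import math

    def max_heap_insert(A, key):
        # Appending grows the underlying array if necessary: this is the
        # step that "A.heap-size = A.heap-size + 1" glosses over in CLRS.
        A.append(-math.inf)
        heap_increase_key(A, len(A) - 1, key)

    def heap_increase_key(A, i, key):
        if key < A[i]:
            raise ValueError("new key is smaller than current key")
        A[i] = key
        # Float the new key up until the max-heap property is restored.
        while i > 0 and A[(i - 1) // 2] < A[i]:
            A[i], A[(i - 1) // 2] = A[(i - 1) // 2], A[i]
            i = (i - 1) // 2

For example, starting from A = [9, 5, 7], max_heap_insert(A, 8) leaves A = [9, 8, 7, 5]. Nothing is ever dropped; the array simply grows.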

Is it possible for the runtime and input size in an algorithm to be inversely related?

I'm wondering whether there can be algorithms whose runtime decreases monotonically with the input size – just as a fun mental exercise. If not, is it possible to disprove the claim? I haven't been able to come up with an example or a counterexample so far, and this sounds like an interesting problem.

P.S. Something like $ O(\frac{1}{n})$ , I guess (if it exists)
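One observation (my addition, not the original poster's): if runtime is measured as a count of executed steps, then $T(n) \ge 1$ on every input where the algorithm halts, while $T(n) = O(1/n)$ would force $T(n) < 1$ for all sufficiently large $n$. So a bound like $O(1/n)$ is impossible for step counts, and the interesting version of the question is whether $T(n)$ can be monotonically decreasing while staying $\Omega(1)$.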

Multi-level paging where the inner level page tables are split into pages with entries occupying half the page size

A processor uses $36$-bit physical addresses and $32$-bit virtual addresses, with a page frame size of $4$ Kbytes. Each page table entry is of size $4$ bytes. A three-level page table is used for virtual-to-physical address translation, where the virtual address is used as follows:

  • Bits $30-31$ are used to index into the first-level page table.
  • Bits $21-29$ are used to index into the second-level page table.
  • Bits $12-20$ are used to index into the third-level page table.
  • Bits $0-11$ are used as the offset within the page.

The number of bits required for addressing the next-level page table (or page frame) in the page table entries of the first, second, and third level page tables is, respectively:

(a) $ \text{20,20,20}$

(b) $ \text{24,24,24}$

(c) $ \text{24,24,20}$

(d) $ \text{25,25,24}$

I got the answer as (b), since each page table entry must, after all, point to a frame number in main memory for the base address.
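Spelling that computation out: with a $36$-bit physical address and $2^{12}$-byte frames, there are $2^{36}/2^{12} = 2^{24}$ frames, so naming a frame takes $36 - 12 = 24$ bits; if every level of the page table is itself stored in full frames, every entry needs the same $24$ bits, which points to (b).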

But this site here says that the answer is (d), and the logic they use (working in chunks of $2^{11}$ B) seems to me to clash with the entire concept of paging. Why should the system suddenly start storing data in main memory in chunks at a granularity other than the one defined by the page/frame size? I don't get it.

Do multiple sources of counting as one size larger for carrying capacity stack?

I am currently mocking up a Goliath Barbarian, and was wondering if there is a limit to the number of sources of "You count as if you were one size larger for the purpose of determining your carrying capacity" that stack.

From prior browsing, I’ve found that Goliath’s Powerful Build & Totem Barbarian’s Bear Aspect feature stack in that regard, but could you stack, for example:

  • Powerful Build (innate Goliath feature)
  • 6th level Totem Barbarian Bear Aspect
  • Brawny feat

Essentially, could you double your carrying capacity and your lift/pull capacity three times over?
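For concreteness (my numbers, not the original question's): carrying capacity in 5e is $15 \times$ your Strength score, and each effective size category above your own doubles it. With Strength 20 that is $300$ lb; if all three sources stacked, you would count as three sizes larger and could carry $300 \times 2^3 = 2400$ lb.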

openssl: How to configure private key size for secp256k1

I am trying to understand whether it's possible to configure the private key size for a given curve.

openssl ecparam -name secp256k1 -genkey
-----BEGIN EC PARAMETERS-----
BgUrgQQACg==
-----END EC PARAMETERS-----
-----BEGIN EC PRIVATE KEY-----
MHQCAQEEIMda3jdFuTnGd2Y9s9lZiQJXKSpxBp6WQWcurn4FnYogoAcGBSuBBAAK
oUQDQgAEI272v3lIoVkLZEbsJ/1l6Wfqbk8ZeybzzhtUN60EOhCRsR8rOLAIbbDl
ncOT1vtzEj5NZxQEYdopFMb10CfccQ

I was trying to read the openssl documentation to see how to configure the length of the private key, but failed to find anything about it. Could you advise?
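For context, and as my own sketch rather than anything from the openssl docs: for a named curve such as secp256k1, the private key is an integer in $[1, n-1]$, where $n$ is the curve's fixed group order, so its length is determined by the curve itself (at most 256 bits here) and is not a configurable parameter; choosing a different key size means choosing a different curve. A small illustration using the third-party Python cryptography package:

    from cryptography.hazmat.primitives.asymmetric import ec

    # The private key for a named curve is a scalar modulo the curve's
    # fixed group order, so its bit length is capped by the curve itself.
    key = ec.generate_private_key(ec.SECP256K1())
    d = key.private_numbers().private_value
    print(d.bit_length())  # at most 256 for secp256k1; no knob to change it

    # To get a longer key, pick a bigger curve instead:
    key384 = ec.generate_private_key(ec.SECP384R1())
    print(key384.private_numbers().private_value.bit_length())  # at most 384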