I’m working on a problem: generating the power set of a given set. The algorithm itself is relatively straightforward:

```python
def power_sets(A):
    """Returns the power set of A as a list of lists"""
    if len(A) == 0:
        return [[]]
    else:
        ps_n1 = power_sets(A[:-1])  # Power set of the set without its last element
        # Add the nth element to each subset of the power set of n-1 elements
        ps_n = [sub + [A[-1]] for sub in ps_n1]
        # Combine the two sets and return
        return ps_n1 + ps_n
```
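To make the recursion concrete, here it is run on a small input (the function is repeated so the snippet is self-contained):

```python
def power_sets(A):
    """Returns the power set of A as a list of lists"""
    if len(A) == 0:
        return [[]]
    else:
        ps_n1 = power_sets(A[:-1])
        ps_n = [sub + [A[-1]] for sub in ps_n1]
        return ps_n1 + ps_n

print(power_sets([1, 2, 3]))
# → [[], [1], [2], [1, 2], [3], [1, 3], [2, 3], [1, 2, 3]]
```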

It’s clear that the space complexity is $O(n 2^n)$, since there are $n$ items in the set, and each element appears in half of the $2^n$ subsets.

However, the book that I’m working from says the time complexity is also $O(n 2^n)$, which makes perfect intuitive sense, but I’m having a hard time justifying it mathematically.

Other than arguing “there are that many items to produce, so the runtime is at least that large”, can anyone offer an explanation of the runtime analysis based on the cost of the individual statements in my algorithm?

This answer pretty much only says that the runtime follows from the space complexity (not very satisfying; as an aside, can the runtime ever be better than the space complexity?).

I saw this answer, but to be honest it is a bit difficult to understand. Its last line seems to suggest that the runtime is $O(n^2 \cdot 2^{n!})$ (since, I think, $|P_i| = 2^i$), and that doesn’t seem right either.

I tried drawing out the call tree, but that didn’t help either: there are only $n-1$ recursive calls made, and from what I can tell each spends $O(n^2)$ time.
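For what it’s worth, when I instrument the recursion to count the element copies performed by `sub + [A[-1]]` (a sketch of my own; the `+ 1` charges one unit for appending the new element), the total comes out to exactly $n \cdot 2^{n-1}$, i.e. $\Theta(n 2^n)$, which matches the book but not my per-call estimate:

```python
def power_sets_counted(A):
    """Same recursion as power_sets, but also counts element copies."""
    if len(A) == 0:
        return [[]], 0
    ps_n1, ops = power_sets_counted(A[:-1])
    ps_n = []
    for sub in ps_n1:
        ops += len(sub) + 1       # cost of building the new list sub + [A[-1]]
        ps_n.append(sub + [A[-1]])
    return ps_n1 + ps_n, ops

for n in range(1, 8):
    _, ops = power_sets_counted(list(range(n)))
    assert ops == n * 2 ** (n - 1)  # grows like n * 2^n, not n^2 per call
print("ok")
```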

Anyway, can someone offer a compelling explanation for this runtime?