Consider the following function that generates a sparse diagonal matrix:

```
W[n_] := W[n] = N@Exp[(2 Pi I)/n];
Clear[X];
X[n_, L_, power_ : 1][i_, chain_] :=
 X[n, L, power][i, chain] =
  DiagonalMatrix@SparseArray@Table[
    W[n]^(-IntegerDigits[a, n, 2 L][[chain*L - i + 1]] power),
    {a, 0, n^(2 L) - 1}]
```

Because of the memoization idiom `f[x_] := f[x] = slowComputation`, I expect this code to be much faster on a second run. If I evaluate the following on my laptop several times
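For reference, here is a minimal sketch of that idiom in isolation (using a hypothetical `fib` function, not part of the code above): the first call stores the result as a downvalue, so later calls are plain lookups.

```mathematica
(* Memoization idiom: the assignment inside the definition caches
   each result, so repeated calls are constant-time lookups. *)
fib[0] = 0; fib[1] = 1;
fib[n_Integer?Positive] := fib[n] = fib[n - 1] + fib[n - 2];

fib[30] // AbsoluteTiming  (* first call computes and caches *)
fib[30] // AbsoluteTiming  (* second call should be near-instant *)
```

This is exactly the behavior I expected from `X`, which is why the timings below surprised me.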

`X[3, 7][1, 1] // Timing `

I get $12$ seconds on the first run, then around $2.8$ seconds on every evaluation after that. The memoization trick clearly made it faster, but it is still much slower than it should be. For example, if I run

`a = X[3, 7][1, 1] `

then

`a //Timing `

I get $10^{-6}$ seconds. Running `X[3, 7][1, 1]` a second time should be just as fast, since the matrix has already been computed and saved. Instead, it seems to still be doing some computation. Why does this happen?