Let’s say I have a JavaScript loop iterating over an input array of size N. Suppose all elements of the input are unique, so the `includes` method traverses the entire output array on each loop iteration:

```javascript
let out = []
for (const x of input) {
  if (!out.includes(x)) {
    out.push(x)
  }
}
```

The worst-case cost of the code inside the loop doesn't seem to be a flat O(N) per iteration: on iteration i, `out` holds only i elements, so the total work is the sum 1 + 2 + … + N = N(N+1)/2, which feels substantially smaller than N·N.
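To make the counting argument concrete, here is a small sketch (the `countComparisons` helper and the `input` array are hypothetical, not from the original code) that tallies how many elements `includes` inspects across the whole loop when every input element is unique:

```javascript
// Count the total number of elements includes() scans over the
// entire loop, assuming the input contains N unique values.
function countComparisons(input) {
  let out = []
  let comparisons = 0
  for (const x of input) {
    // With all-unique input, includes() never finds a match, so it
    // scans all of out: iteration i costs out.length === i comparisons.
    comparisons += out.length
    if (!out.includes(x)) {
      out.push(x)
    }
  }
  return comparisons
}

const N = 10
const total = countComparisons([...Array(N).keys()]) // [0, 1, ..., 9]
console.log(total)            // 0 + 1 + ... + (N-1) = N*(N-1)/2 = 45
console.log(N * (N - 1) / 2)  // 45
```

The tally grows as the arithmetic series, which is Θ(N²) even though each individual iteration costs less than N.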

Is this properly expressed as O(N^2) overall, or is there a standard way to convey the seemingly faster asymptotic behavior, given that the output array only reaches size N at the end of the loop?