# Asymptotics of a divide-and-conquer recurrence with non-constant weight distribution between subproblems and lower-order fudge terms

While trying to analyse the runtime of an algorithm, I arrived at a recurrence of the following type:

$$\begin{cases} T(n) = \Theta(1), & \text{for small enough } n; \\ T(n) \leq T(a_n n + h(n)) + T((1-a_n)n+h(n)) + f(n), & \text{for larger } n, \end{cases}$$ where:

• $$a_n$$ is unknown and can depend on $$n$$, but is bounded between two constants: $$0 < c_1 \leq a_n \leq c_2 < 1$$.
• $$h(n)$$ is some “fudge term” which is negligible compared to $$n$$ (say $$O(n^\epsilon)$$ for some $$0\leq \epsilon < 1$$).

If $$a_n$$ were a constant, then I could use the Akra–Bazzi method to get a result. On the other hand, if the fudge term were absent, then some type of recursion-tree analysis would be straightforward.
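For reference, the version of Akra–Bazzi I have in mind is Leighton's formulation, which already tolerates small perturbations of the subproblem sizes. For a recurrence

$$T(x) = g(x) + \sum_{i=1}^{k} a_i\, T(b_i x + h_i(x)),$$

with constants $$a_i > 0$$ and $$0 < b_i < 1$$, and perturbations $$|h_i(x)| = O(x/\log^2 x)$$, it gives

$$T(x) = \Theta\left( x^p \left( 1 + \int_1^x \frac{g(u)}{u^{p+1}}\, du \right) \right),$$

where $$p$$ solves $$\sum_i a_i b_i^p = 1$$. Note that $$\sqrt n = O(n/\log^2 n)$$, so a fudge term of that size is already within this tolerance; the real obstruction for me is the non-constant $$a_n$$.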

To make things a bit more concrete, here is the recurrence I want to get the asymptotic growth of:

$$\begin{cases} T(n) = 1, & \text{for } n = 1; \\ T(n) \leq T(a_n n + \sqrt n) + T((1-a_n)n+\sqrt n) + n, & \text{for } n \geq 2, \end{cases}$$ where $$\frac{1}{4} \leq a_n \leq \frac{3}{4}$$ for all $$n\geq 1$$.
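As a numerical sanity check (not a proof), one can simulate the concrete recurrence and watch the ratio $$T(n)/(n\log_2 n)$$, which should stay bounded if the $$O(n\log n)$$ guess is right. The fixed split $$a_n = 3/4$$ and the base-case cutoff below are my own assumptions for illustration; the true worst case may vary $$a_n$$ from level to level:

```python
import math

def T(n, a=0.75, base=32):
    """Simulate T(n) <= T(a*n + sqrt(n)) + T((1-a)*n + sqrt(n)) + n.

    The fixed split a = 3/4 and the cutoff `base` are assumptions made for
    illustration; the real recurrence lets a_n vary with n in [1/4, 3/4].
    For n > base we have a*n + sqrt(n) < n, so the recursion terminates.
    """
    if n <= base:  # base case: constant work
        return 1.0
    return (T(a * n + math.sqrt(n), a, base)
            + T((1 - a) * n + math.sqrt(n), a, base)
            + n)

for n in (10**3, 10**4, 10**5):
    # ratio should stay bounded as n grows if T(n) = O(n log n)
    print(n, T(n) / (n * math.log2(n)))
```

The recursion depth is only $$O(\log n)$$ (the slow branch shrinks by a factor of at least roughly $$3/4$$ per level once $$n$$ is large), so Python's default recursion limit is not an issue.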

I tried various guesses on upper bounds and transformations. Everything tells me this should work out to $$O(n\log n)$$, and I can argue informally that it does (although I might be wrong). But I can only prove $$O(n^{1+\epsilon})$$ (for any $$\epsilon>0$$), and this feels like something that some generalization of the Master theorem à la Akra–Bazzi should be able to take care of.
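For completeness, here is a sketch of the substitution argument behind the $$O(n^{1+\epsilon})$$ bound, in case it helps pinpoint where the log factor gets lost. The key point is that, for $$\epsilon > 0$$, the map $$a \mapsto a^{1+\epsilon} + (1-a)^{1+\epsilon}$$ is convex on $$[1/4, 3/4]$$ and hence maximized at the endpoints, so

$$a_n^{1+\epsilon} + (1-a_n)^{1+\epsilon} \leq \left(\tfrac14\right)^{1+\epsilon} + \left(\tfrac34\right)^{1+\epsilon} =: 1 - \delta_\epsilon < 1.$$

Inducting with the guess $$T(n) \leq C n^{1+\epsilon}$$, the cross terms coming from the $$\sqrt n$$ fudge are of order $$n^{\frac12+\epsilon}$$, so

$$T(n) \leq C(1-\delta_\epsilon)\, n^{1+\epsilon} + O\!\left(C\, n^{\frac12+\epsilon}\right) + n \leq C n^{1+\epsilon}$$

for $$n$$ large enough, since the slack $$C\delta_\epsilon n^{1+\epsilon}$$ eventually dominates both the cross terms and the additive $$n$$. This also shows why the argument degrades as $$\epsilon \to 0$$: the slack $$\delta_\epsilon$$ vanishes, so it cannot be pushed to $$O(n\log n)$$ directly.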

Any suggestions on how to tackle this type of recurrence?