Why is $2R\sigma\sqrt{T+\log T+1}=\tilde{\mathcal{O}}(\sigma\sqrt{T})$?

On page 17 of the paper *Online Learning with Predictable Sequences*, the regret of an algorithm is given as

$$\text{Reg}_T=\frac{R^2}{\eta}+\frac{\eta}{2}\sigma^2(T+\log T+1)$$

where $T$ is the time horizon, $\eta$ is the learning rate, $R$ is a constant, and $\sigma^2$ is the variance of the data. By the arithmetic mean-geometric mean (AM-GM) inequality, the above can be lower bounded by

$$2R\sigma\sqrt{T+\log T+1}\leq \frac{R^2}{\eta}+\frac{\eta}{2}\sigma^2(T+\log T+1).$$
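Spelling out the AM-GM step as I understand it (my own computation, so the constant in front may differ from the paper's $2$): with $a+b\geq 2\sqrt{ab}$ for $a,b\geq 0$, taking $a=\frac{R^2}{\eta}$ and $b=\frac{\eta}{2}\sigma^2(T+\log T+1)$ gives

$$\frac{R^2}{\eta}+\frac{\eta}{2}\sigma^2(T+\log T+1)\;\geq\;2\sqrt{\frac{R^2}{\eta}\cdot\frac{\eta}{2}\sigma^2(T+\log T+1)}\;=\;\sqrt{2}\,R\sigma\sqrt{T+\log T+1},$$

where the $\eta$'s cancel under the square root, so the lower bound is independent of the learning rate; the exact constant should not matter for the $\tilde{\mathcal{O}}$ question.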

Why is the left-hand side $\tilde{\mathcal{O}}(\sigma\sqrt{T})$?
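As a quick numeric sanity check (a minimal sketch; the values $R=1$, $\sigma=1$ and the grid of $T$ are arbitrary choices of mine, not from the paper), the ratio of the left-hand side to $\sigma\sqrt{T}$ seems to settle at a constant:

```python
import math

R, sigma = 1.0, 1.0  # arbitrary illustrative constants, not from the paper

for T in [10, 10**2, 10**4, 10**6, 10**8]:
    lhs = 2 * R * sigma * math.sqrt(T + math.log(T) + 1)  # left-hand side of the bound
    ratio = lhs / (sigma * math.sqrt(T))                  # compare against sigma * sqrt(T)
    print(f"T = {T:>9}: ratio = {ratio:.6f}")
```

The ratio appears to converge to $2R$, which would suggest the left-hand side is even plain $\mathcal{O}(\sigma\sqrt{T})$ once $R$ is treated as a constant, but I would like to understand where the $\tilde{\mathcal{O}}$ (with its hidden logarithmic factors) formally comes from.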