A bit confused about whether this theorem's polynomial time complexity applies to my situation

In this paper, the polynomiality of the n-fold integer programming problem is discussed.

(My question is very simple and is stated at the end. There is no need to deeply understand the excerpts; they are provided only for completeness.)

Excerpt 1:
The n-fold integer programming problem. Fix a $ p \times q$ integer matrix $ A$ . Given positive integer $ n$ and integer vectors $ b = (b^0, b^1, \ldots , b^n)$ and $ c = (c^1, \ldots , c^n)$ , where $ b^0 \in \mathbb{Z}^q$ , and $ b^k \in \mathbb{Z}^p$ and $ c^k \in \mathbb{Z}^q$ for $ k = 1, \ldots, n$ , find a nonnegative integer vector $ x = (x^1, \ldots , x^n)$ , where $ x^k \in \mathbb{N}^q$ for $ k = 1, \ldots, n$ , which minimizes $ cx= \sum_{k=1}^{n}c^k x^k$ subject to the equations $ \sum_{k=1}^{n} x^k = b^0$ and $ Ax^k = b^k$ for $ k = 1, \ldots, n$ .
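To make the block structure in this definition concrete, here is a small sketch of the full constraint matrix of an n-fold program. The parameter values `p`, `q`, `n` and the random entries of `A` are arbitrary illustrations, not taken from the paper:

```python
import numpy as np

# Hypothetical small instance for illustration only.
p, q, n = 2, 3, 4
rng = np.random.default_rng(0)
A = rng.integers(-2, 3, size=(p, q))  # the fixed p x q block

# The full constraint matrix stacks two bands:
#   - a top band of n copies of the q x q identity (encodes sum_k x^k = b^0),
#   - a block-diagonal band repeating A n times   (encodes A x^k = b^k).
top = np.hstack([np.eye(q, dtype=int)] * n)   # q x (n*q)
diag = np.kron(np.eye(n, dtype=int), A)       # (n*p) x (n*q)
full = np.vstack([top, diag])                 # (q + n*p) x (n*q)

print(full.shape)  # (11, 12), i.e. (q + n*p, n*q)
```

This matches the input-size accounting in Excerpt 2: the right-hand side $b$ lives in $\mathbb{Z}^{q+np}$ (one entry per row) and the variables/objective in $\mathbb{Z}^{nq}$ (one entry per column).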

Excerpt 2:
In this article, we establish the following theorem. Naturally, the input size is $ n$ plus the bit size of the integer objective vector $ c \in \mathbb{Z}^{nq}$ and the integer right-hand side vector $ b \in \mathbb{Z}^{q+np}$ .
Theorem 1.1. Fix any integer matrix $ A$ . Then there is a polynomial-time algorithm that, given any $ n$ and any integer vectors $ b$ and $ c$ , solves the corresponding n-fold integer programming problem.

My Question:
I have an integer program at hand. For a given integer $ k$ , my $ A$ (as above) is $ 3 \times 2k$ , my $ n$ (as above) is $ k$ , and let us assume that the bit size of my integers is $ O(\log k)$ . Is the aforementioned theorem still applicable to my situation, especially given that $ q$ in my situation depends on $ k$ ?
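For concreteness, the dimensions of my system as a function of $k$ can be sketched as follows; this only restates the parameters above ($p = 3$, $q = 2k$, $n = k$) and does not by itself answer whether the theorem applies:

```python
# Dimensions of the n-fold system for the instance described above,
# with p = 3, q = 2*k, n = k (k is the free parameter of my problem).
def nfold_dimensions(k: int) -> tuple[int, int]:
    p, q, n = 3, 2 * k, k
    rows = q + n * p  # equations: the sum-constraint block plus n copies of A
    cols = n * q      # variables: n bricks x^k, each of dimension q
    return rows, cols

for k in (2, 5, 10):
    print(k, nfold_dimensions(k))  # rows = 5k, cols = 2k^2
```

In particular, the fixed block $A$ here is $3 \times 2k$, so its number of columns $q$ grows with the input parameter $k$ rather than being a constant.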