Math behind multi-class linear discriminant analysis (LDA)


I have a question about Linear Discriminant Analysis (LDA) for the purpose of dimensionality reduction.

So I understand that, for the algorithm to produce $k$ projection vectors, you need to determine the eigenvectors corresponding to the top $k$ eigenvalues. But can anyone explain what you do with those eigenvectors, once you have calculated them, to get the final output?
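
To check my understanding of that first part, here is a minimal numpy sketch of how I compute those eigenvectors. I am assuming the relevant eigenproblem is that of $S_W^{-1} S_B$ (inverse within-class scatter times between-class scatter); the toy data and variable names are my own.

```python
import numpy as np

# Toy data: 150 samples, 4 features, 3 classes (made-up for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))
labels = np.repeat([0, 1, 2], 50)

d = X.shape[1]
overall_mean = X.mean(axis=0)

# Within-class scatter S_W and between-class scatter S_B
S_W = np.zeros((d, d))
S_B = np.zeros((d, d))
for c in np.unique(labels):
    X_c = X[labels == c]
    mean_c = X_c.mean(axis=0)
    S_W += (X_c - mean_c).T @ (X_c - mean_c)
    diff = (mean_c - overall_mean).reshape(-1, 1)
    S_B += len(X_c) * (diff @ diff.T)

# Eigenvalues/eigenvectors of S_W^{-1} S_B (eig may return a complex
# dtype, so I take the real part below)
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)

# Keep the eigenvectors belonging to the top k eigenvalues
k = 2
top = np.argsort(eigvals.real)[::-1][:k]
W = eigvecs[:, top].real   # d x k: one projection vector per column
```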

My guess is that you multiply all of the eigenvectors (projection vectors) together and then multiply that with each point $x$ in the original dataset to produce a new corresponding point $y$. Does this sound right?
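
To make that guess concrete, here is how I would write the last step, continuing the sketch above and assuming the eigenvectors get combined column-wise into a single $d \times k$ matrix $W$ rather than literally multiplied together (that combination is exactly the part I am unsure about):

```python
# Project each original point x onto the k LDA directions to get y
Y = X @ W        # shape (150, k); row i is the projected point y_i

# Equivalently, for a single point x:
x = X[0]
y = W.T @ x      # k-dimensional projection of x
```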