What is the standard term/theory for predicting binary variables based on their continuous relaxation values?

I am working with a (mixed-integer) linear programming problem that has around 3,500 binary variables. IBM's CPLEX usually takes around 72 hours to reach an objective with a gap of 15-20% to the best bound. In the solution, around 85-90 binaries take the value 1 and the rest are zero. The objective value is around 20 to 30 million.

I have created an algorithm that predicts (fixes the values of) 35 binaries at 1 and lets CPLEX solve for the remaining ones. This has reduced the time to reach the same objective to around 24 hours (the best bound is slightly compromised). I have tested this approach on other problems of the same type, and it worked for them as well. I call this approach "probabilistic prediction", but I don't know what the standard term for it is in mathematics.

Below is the algorithm:

y = ContinuousObjective(AllBinaries);   // solve the continuous (LP) relaxation
WriteValuesOfTheContinuousSolution();
count = 0;
processedBinaries = EmptySet;
while (count < 35)
{
    // unfixed binary with the largest relaxation value (usually below 0.6)
    maxBinary = AllBinaries.ExceptWith(processedBinaries).Max();
    processedBinaries.Add(maxBinary);
    maxBinary.LowerBound = 1;           // fix it to 1
    maxBinary.UpperBound = 1;
    z = y;
    y = ContinuousObjective(AllBinaries);
    if (z > y + 50000)                  // objective degraded too much
    {
        maxBinary.LowerBound = 0;       // undo the fix
        maxBinary.UpperBound = 1;
        y = z;
    }
    else
    {
        WriteValuesOfTheContinuousSolution();
        count = count + 1;
    }
}
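For illustration, here is a minimal runnable sketch of the same fix-and-resolve loop in Python, using `scipy.optimize.linprog` on a toy knapsack-style LP instead of CPLEX. The problem data, `TARGET`, and `TOLERANCE` are made-up stand-ins for my real model, the 35 fixed binaries, and the 50,000 threshold:

```python
import numpy as np
from scipy.optimize import linprog

# Toy maximization LP relaxation: max v.x  s.t.  w.x <= W,  0 <= x <= 1.
# (Stand-in data; the real problem would be solved through CPLEX.)
v = np.array([10.0, 7.0, 6.5, 3.0, 2.2])   # objective coefficients
w = np.array([5.0, 4.0, 3.0, 2.0, 1.0])    # constraint coefficients
W = 10.0                                    # capacity

def solve_relaxation(bounds):
    # linprog minimizes, so negate v to maximize
    res = linprog(-v, A_ub=[w], b_ub=[W], bounds=bounds)
    return res.x, -res.fun

n = len(v)
bounds = [(0.0, 1.0)] * n           # all binaries relaxed to [0, 1]
x, y = solve_relaxation(bounds)
processed = set()
TARGET = 3                          # analogue of the 35 fixed binaries
TOLERANCE = 1.0                     # analogue of the 50000 threshold

count = 0
while count < TARGET and len(processed) < n:
    # unfixed variable with the largest relaxation value
    i = max((j for j in range(n) if j not in processed), key=lambda j: x[j])
    processed.add(i)
    bounds[i] = (1.0, 1.0)          # fix it to 1
    z = y
    x_new, y_new = solve_relaxation(bounds)
    if z > y_new + TOLERANCE:       # objective dropped too much: undo the fix
        bounds[i] = (0.0, 1.0)
    else:
        x, y = x_new, y_new
        count += 1

fixed_to_one = sorted(j for j in range(n) if bounds[j] == (1.0, 1.0))
print(fixed_to_one, round(y, 2))
```

The structure is the same as above: fix the unfixed variable with the largest relaxation value, re-solve the relaxation, and revert the fix if the relaxation objective drops by more than the tolerance. (The sketch assumes the relaxation stays feasible after each fix, as it does in my problem.)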

My guess is that this works because the constraint matrix is very sparse and the problem has many good (near-optimal) solutions.