How do I train a neural network using the absolute partial derivative of the error function as the stopping criterion?

I have to reproduce this setup: two multi-layer perceptrons (NNs) were calibrated in the context of regression analyses, one to estimate the diameter (d) and another to estimate the accumulated volume (Vac) from the base up to a given height (h). Both contained two hidden layers, with 25 neurons in the first and 10 in the second, all using the logistic activation function. The NN training minimized the sum of squared errors via the resilient backpropagation (Rprop) algorithm with weight backtracking. For each iteration of the cross-validation the NN was initialized 50 times, and training ended when the absolute partial derivative of the error function, with respect to the weights, was smaller than 0.01.
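As far as I can tell, scikit-learn's MLPRegressor has no Rprop solver and no explicit option to stop on the partial derivatives. The closest thing I have found is solver='lbfgs', whose tol parameter is (in the versions I checked) forwarded to the underlying SciPy L-BFGS optimizer as a gradient tolerance, so tol=0.01 should roughly match the stopping rule above. Here is a sketch of what I think the whole procedure would look like under that assumption; the data is made up just so the snippet runs:

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.neural_network import MLPRegressor

    # Made-up stand-in data so the snippet runs; in my case X would be the
    # heights (h) and y the accumulated volume (Vac).
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 30.0, size=(500, 1))
    y = np.sin(X[:, 0] / 10.0) + rng.normal(scale=0.05, size=500)

    kf = KFold(n_splits=10, shuffle=True, random_state=1)
    for train_idx, test_idx in kf.split(X):
        best_model, best_score = None, -np.inf
        for init in range(50):  # 50 random initializations per CV iteration
            mlp = MLPRegressor(
                hidden_layer_sizes=(25, 10),  # 25 then 10 hidden neurons
                activation='logistic',
                solver='lbfgs',               # quasi-Newton, *not* Rprop
                tol=0.01,                     # assumption: passed to L-BFGS
                                              # as the gradient tolerance
                max_iter=5000,
                random_state=init,            # different weights each restart
            )
            # note: sklearn minimizes the *mean* squared error, not the sum,
            # so the gradient scale is not identical to the paper's
            mlp.fit(X[train_idx], y[train_idx])
            score = mlp.score(X[train_idx], y[train_idx])
            if score > best_score:
                best_model, best_score = mlp, score
        print(best_score, best_model.score(X[test_idx], y[test_idx]))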

I'm trying the following, but it's apparently wrong; I suspect that checking the training R² against a threshold is not the same as stopping when the gradient is small:

    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    while True:
        # new random split on every pass
        Xvac_train, Xvac_test, yvac_train, yvac_test = train_test_split(xvac, yvac)
        mlr = MLPRegressor(solver='lbfgs', activation='logistic', alpha=1e-5,
                           hidden_layer_sizes=(25, 10),  # 25 and 10, as described
                           random_state=1)
        mlr.fit(Xvac_train, yvac_train)
        print(mlr.score(Xvac_train, yvac_train))
        # keep retraining on new splits until the training R^2 exceeds 0.99
        if mlr.score(Xvac_train, yvac_train) > 0.99:
            break

Please help.