# How can a classifier using the Laplacian kernel achieve no error on the input samples?

If we have a sample dataset $$S = \{(x_1, y_1), \dots, (x_n, y_n)\}$$ where $$y_i \in \{0,1\}$$, how can we tune $$\sigma$$ so that a classifier using the Laplacian kernel makes no errors on $$S$$?

The Laplacian kernel is

$$K(x,x') = \exp\left(-\dfrac{\| x - x'\|}{\sigma}\right)$$
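To make the question concrete, here is a minimal numerical sketch of the intuition I have in mind (the data and variable names are my own, not from any reference): when $$\sigma$$ is much smaller than the smallest pairwise distance in $$S$$, the kernel matrix is close to the identity, so even a simple kernel smoother reproduces every training label after rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))       # toy inputs (hypothetical data)
y = np.tile([0, 1], 10)            # labels in {0, 1} as in the question

# Pairwise Euclidean distances between all training points
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

# Choose sigma far below the smallest nonzero pairwise distance,
# so every off-diagonal kernel value is at most exp(-10) ~ 4.5e-5.
d_min = np.min(D[D > 0])
sigma = d_min / 10.0

K = np.exp(-D / sigma)             # Laplacian kernel matrix; K is nearly I

# Nadaraya-Watson style prediction: each point's own kernel weight
# (K_ii = 1) dominates the row, so rounding recovers the training label.
y_hat = (K @ y) / K.sum(axis=1)
train_errors = int(np.sum(np.rint(y_hat) != y))
print(train_errors)                # expect 0 training errors
```

This only shows the mechanism numerically; the general argument is that for distinct points and small enough $$\sigma$$ the kernel matrix is diagonally dominant, hence invertible.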

If this is true, does it also mean that running hard-margin SVM on $$S$$ with the Laplacian kernel and the $$\sigma$$ chosen as above will find a separating classifier with no training error?
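For the hard-SVM half of the question, one way I tried to experiment is to pass a precomputed kernel matrix to scikit-learn's `SVC` with a very large `C` (approximating the hard margin). This is only a sketch of the setup, not a proof; note that sklearn's built-in `laplacian_kernel` uses the L1 distance, so I compute the Euclidean version from the question by hand.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))       # toy inputs (hypothetical data)
y = np.tile([0, 1], 10)            # labels in {0, 1} as in the question

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
sigma = np.min(D[D > 0]) / 10.0    # small sigma: K is close to the identity
K = np.exp(-D / sigma)             # Euclidean Laplacian kernel, as in the question

# With K near the identity, the implicit feature vectors are nearly
# orthonormal, hence linearly separable, so a (numerically) hard-margin
# SVM should fit the training set exactly.
clf = SVC(C=1e6, kernel="precomputed").fit(K, y)
train_acc = clf.score(K, y)
print(train_acc)
```

The `C=1e6` is a stand-in for the hard-margin constraint; `SVC` has no true hard-margin mode, so this is an approximation.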