Why aren't all kinds of non-linear functions used as neural network activation functions?


Pardon my ignorance, but having just learned about the sigmoid and tanh activation functions (and a few others), I am wondering why the chosen functions always go up and to the right, i.e. are monotonically increasing. Why not use all kinds of crazy functions: ones that fluctuate up and down, ones that point down instead of up, and so on? What would happen if you used functions like those in your neurons? What is the problem, and why isn't it done? Why stick to such primitive, simple functions?
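
To make it concrete, here is a rough sketch (using PyTorch purely for illustration; the `wavy` function is just something I made up) of the kind of non-monotonic activation I have in mind, dropped into a tiny network alongside the usual ones:

```python
import torch
import torch.nn as nn

# The usual activations: smooth, monotonically increasing ("up and to the right").
standard_activations = {
    "sigmoid": torch.sigmoid,  # squashes inputs into (0, 1)
    "tanh": torch.tanh,        # squashes inputs into (-1, 1)
}

# A made-up "crazy" activation of the kind I'm asking about:
# it fluctuates up and down instead of always increasing.
def wavy(x):
    return torch.sin(3 * x) * torch.exp(-x ** 2)

class TinyNet(nn.Module):
    def __init__(self, activation):
        super().__init__()
        self.fc1 = nn.Linear(2, 8)
        self.fc2 = nn.Linear(8, 1)
        self.activation = activation

    def forward(self, x):
        return self.fc2(self.activation(self.fc1(x)))

# Nothing mechanically stops me from plugging the wavy function in:
net = TinyNet(wavy)
x = torch.randn(4, 2)
print(net(x))
```

The framework happily accepts it and autograd can differentiate it, so my question is whether there is a theoretical or practical reason activations like this aren't used.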
