Why doesn’t ReLU cause backpropagation to get stuck?

Say you have a neural net being trained with backpropagation, using the ReLU activation. The input to a node is a weighted sum of the previous layer's outputs plus a bias term, and suppose that for a particular data point this weighted sum plus bias is negative. Then ReLU returns 0. Notice that the change in the loss with respect to a change in any of these weights, or in the bias, is 0. Therefore backpropagation won't update those weights or the bias for that data point. Why is this not a problem?
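To make the setup concrete, here is a minimal sketch (using PyTorch, which the question itself does not mention; the specific numbers are arbitrary) showing that when the pre-activation of a ReLU unit is negative for a given input, the gradients of the loss with respect to that unit's weights and bias are zero for that data point:

```python
import torch

# A single ReLU unit: pre-activation z = w . x + b, output a = ReLU(z).
x = torch.tensor([1.0, 2.0])                        # one data point
w = torch.tensor([-1.0, -0.5], requires_grad=True)  # weights into the unit
b = torch.tensor([0.5], requires_grad=True)         # bias term

z = w @ x + b                  # weighted sum plus bias: -1.0 - 1.0 + 0.5 = -1.5 (negative)
a = torch.relu(z)              # ReLU returns 0 for a negative pre-activation
loss = (a - 1.0).pow(2).sum()  # a toy loss, just to have something to backpropagate

loss.backward()
print(z.item())                # -1.5
print(w.grad, b.grad)          # both all zeros: this data point contributes no update
```

The printed gradients are exactly zero, which is the situation the question is asking about.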

How is ReLU used on a convolutional layer?

I know that when dealing with artificial neural networks, ReLU is applied to the weighted sum of the inputs plus a bias term. However, this logic does not seem to apply to convolutional neural networks.
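For reference, this is the fully connected case described above, sketched in PyTorch (the layer sizes are arbitrary): ReLU is applied to the pre-activation z = Wx + b of each unit.

```python
import torch
import torch.nn.functional as F

x = torch.randn(8)     # input vector from the previous layer
W = torch.randn(4, 8)  # weights of a fully connected layer with 4 units
b = torch.randn(4)     # one bias per unit

z = W @ x + b          # weighted sums plus biases (pre-activations)
a = F.relu(z)          # ReLU applied element-wise to each pre-activation
print(z)
print(a)               # negative entries of z become 0
```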

Looking at the ResNet architecture, the output of the convolutional layers (what I believe to be feature maps) is added to the input x, and then ReLU is applied to the result. What exactly does the ReLU function do in this case? Do the convolutional layers output feature maps, or something else?
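For context, here is a simplified sketch of the kind of residual block the question refers to (real ResNet blocks also include batch normalization, which is omitted here): the convolutions output feature maps, the skip connection adds the block's input x to them element-wise, and ReLU is then applied element-wise to every entry of the resulting feature maps.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleResidualBlock(nn.Module):
    """Simplified ResNet-style block (batch norm omitted for brevity)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))  # feature maps, ReLU applied element-wise
        out = self.conv2(out)        # more feature maps, same spatial size
        out = out + x                # skip connection: add the block's input
        return F.relu(out)           # ReLU on every entry of the summed maps

block = SimpleResidualBlock(channels=16)
x = torch.randn(1, 16, 32, 32)       # (batch, channels, height, width)
y = block(x)
print(y.shape)                       # torch.Size([1, 16, 32, 32])
```

In other words, ReLU here does the same thing it does in a fully connected layer, just applied to every element of the feature maps rather than to a single scalar per unit.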