Signal translation with Seq2Seq model

I’m currently doing research on signal processing, and I have a dataset that contains pairs of signals: the signal itself and its "translation".

[Figure: a signal and its translation]

So I want to use a many-to-many RNN to translate the first signal into the second.

After spending a week reading about the different options I have, I ended up learning about RNNs and Seq2Seq models. I believe this is the right approach for the problem (correct me if I’m wrong).

Now, since the input and the output have the same length, I don’t need to add padding, so I tried a simple LSTM layer followed by a TimeDistributed Dense layer (Keras):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

model = Sequential([
    LSTM(256, return_sequences=True, input_shape=SHAPE, dropout=0.2),
    TimeDistributed(Dense(units=1, activation="softmax")),
])

model.compile(optimizer="adam", loss="categorical_crossentropy")
```

But the model seems to learn nothing from the sequence, and when I plot the "prediction", it is nothing but values between 0 and 1.
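While debugging, I also checked what a softmax over a single unit actually computes (a quick NumPy sketch, not using my real data): since exp(x)/exp(x) = 1, the output of my final layer may be constant no matter what the LSTM learns, which could be related to the flat prediction:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# 5 timesteps with 1 unit each, like TimeDistributed(Dense(1, activation="softmax"))
logits = np.random.randn(5, 1)
print(softmax(logits))  # every entry is exactly 1.0
```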

As you can see, I’m a beginner and the code I wrote might not make sense to you, but I need guidance on a few questions:

  • Does the model make sense for the problem I’m trying to solve?
  • Am I using the right loss/activation functions?
  • And finally, please correct/teach me.