Performance and concurrency of large transactions?

If I have a multi-million-row table and I run a transaction that updates 50k rows, what are the performance implications?

Assuming the table is indexed correctly, the update itself shouldn't take long, but which rows are locked, and how is concurrent use of the table affected?

  1. Can rows that the transaction is updating be read by other sessions after the transaction starts and before it finishes?
  2. Can rows that the transaction is *not* updating be read by other sessions during that same window?
  3. If a second transaction tries to change rows that are being changed by a still-unfinished transaction, does it fail immediately, or only when it tries to commit (assuming there is a conflict)?
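For concreteness, here is the kind of interleaving I mean, using a hypothetical `accounts` table (session A and session B are separate connections; the table and column names are just for illustration):

```sql
-- Session A: the long-running bulk update
BEGIN;
UPDATE accounts SET balance = balance * 1.01
WHERE region = 'EU';   -- suppose this touches ~50k rows

-- Session B, while A is still open:
SELECT balance FROM accounts WHERE id = 42;     -- questions 1 and 2: does this block?
UPDATE accounts SET balance = 0 WHERE id = 42;  -- question 3: fail now, or at commit?

-- Session A:
COMMIT;
```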

My question is about Postgres 9.3; I assume the behavior varies between versions and isolation levels.