Derandomization of vertex cover algorithm

I have the following randomized algorithm for the vertex cover problem. Let $B_0$ denote the output set:

Fix some order $e_1, e_2, \dots, e_m$ over all edges in the edge set $E$ of $G$, and set $B_0 = \emptyset$.

Add to $B_0$ all isolated vertices, i.e., those without any incident edges.

For every edge $e$ in the order $e_1, e_2, \dots, e_m$: if neither endpoint of $e$ is already contained in $B_0$, flip a fair coin to decide which of the two endpoints to choose, and add that endpoint to $B_0$.
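For reference, here is a minimal sketch of the randomized algorithm as described above; the graph representation (a vertex list plus an edge list of pairs) and the function name are my own assumptions, not part of the original statement.

```python
import random

def randomized_vertex_cover(vertices, edges):
    """Sketch of the randomized algorithm: returns the set B0."""
    b0 = set()
    # Step 1: add all isolated vertices (those with no incident edge).
    incident = {v for e in edges for v in e}
    b0 |= {v for v in vertices if v not in incident}
    # Step 2: scan edges in the fixed order; if neither endpoint is
    # already in B0, flip a fair coin and add the chosen endpoint.
    for u, v in edges:
        if u not in b0 and v not in b0:
            b0.add(random.choice((u, v)))
    return b0
```

By construction, after the loop every edge has at least one endpoint in $B_0$, so the output is always a vertex cover; only its size is random.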

I have already proved that this algorithm satisfies $E[|B_0|] \le 2|OPT|$.

Now I don't know how to apply the method of conditional expectations (defined here) to derandomize the algorithm, i.e., to obtain an efficient deterministic version whose output matches the expectation bound found above. Can you show me how to do this?