An odd number of voting disks is required for a proper clusterware configuration. A node must be able to access strictly more than half of the voting disks at any time. So, in order to tolerate a failure of n voting disks, at least 2n+1 must be configured.
When you have one voting disk and it goes bad, the cluster stops functioning. When you have two and one goes bad, the same thing happens, because each node realizes it can only write to half of the original disks, violating the rule that it must be able to write to more than half. When you have three and one goes bad, the cluster keeps running, because each node can still access more than half of the original voting disks (2/3 > half). That is why Oracle recommends three voting disks for a two-node cluster.
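The majority arithmetic above can be sketched as a couple of small helpers (an illustrative sketch only; the function names are mine, not part of any Oracle tool):

```python
def tolerable_failures(total_disks: int) -> int:
    """Number of voting-disk failures the cluster can survive.

    A node must access strictly more than half of the configured
    voting disks, so with 2n+1 disks, n failures are tolerable.
    """
    return (total_disks - 1) // 2


def cluster_survives(total_disks: int, failed_disks: int) -> bool:
    """True if the remaining disks still form a strict majority."""
    accessible = total_disks - failed_disks
    return accessible > total_disks / 2


# 1 disk: any failure halts the cluster.
print(cluster_survives(1, 1))   # False
# 2 disks: one failure leaves exactly half -- not a strict majority.
print(cluster_survives(2, 1))   # False
# 3 disks: one failure leaves 2/3, still a strict majority.
print(cluster_survives(3, 1))   # True
print(tolerable_failures(3))    # 1
```

Note how two disks tolerate no more failures than one: only odd counts add real fault tolerance, which is why the recommendation is 3 (or 5), never 2 or 4.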