The classic tradeoff is security vs. operations - one group tries to make things hard to break into, the other just tries to keep things running. This has been summed up as "Keep the bad guys out, let the good guys in, and don't make the wheels fall off."
The problem is that really good security people are typically good at looking at the trees, not at the forest. When you're designing encryption algorithms, that's exactly what you want, but security often fails because a good idea can't scale to hundreds of thousands or millions of systems. Operationally, nobody bothered to figure out how to make it work.
The Heartbleed security bug illustrates this. Since the attack can compromise your encryption keys, everyone is rushing out to get new X.509 certificates issued (these are essentially the "ignition keys" for the crypto systems used on the Internet). This is bogging down the companies that issue certificates, but everyone will work through that.
The real problem is that when you get a certificate reissued, the old one gets revoked (i.e. "Nobody should trust that old one, mkay?"). The people who designed this system never seem to have stopped to consider what happens if everybody's certificate gets revoked at once.
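To see the mechanics of reissue-and-revoke concretely, here's a sketch using nothing but the stock OpenSSL command-line tools. Everything here is invented for illustration - the "Demo CA", the `example.test` server name, the two-line CA config - and real certificate authorities obviously run something far more elaborate than this:

```shell
set -e

# 1. A throwaway CA (in real life this is Verisign, GoDaddy, etc.)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Demo CA" \
    -keyout ca.key -out ca.crt -days 1

# 2. A server key and a certificate signed by that CA
openssl req -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout srv.key -out srv.csr
openssl x509 -req -in srv.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out srv.crt -days 1

# 3. A minimal CA database so `openssl ca` can track revocations
cat > ca.cnf <<'EOF'
[ ca ]
default_ca       = demo
[ demo ]
database         = index.txt
crlnumber        = crlnumber
default_md       = sha256
default_crl_days = 1
EOF
touch index.txt
echo 1000 > crlnumber

# 4. Revoke the server cert and publish a fresh CRL
openssl ca -config ca.cnf -cert ca.crt -keyfile ca.key -revoke srv.crt
openssl ca -config ca.cnf -cert ca.crt -keyfile ca.key -gencrl -out ca.crl

# 5. What your browser effectively does: check the cert against the CRL
openssl verify -crl_check -CRLfile ca.crl -CAfile ca.crt srv.crt || true
```

The last command reports "certificate revoked" - the cert is cryptographically fine, but the CRL says not to trust it. Every revoked Heartbleed cert becomes one more entry in a file like `ca.crl` that every client has to fetch.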
Something like half a million web servers are affected by Heartbleed, and all of them will get new certificates. Half a million old certificates will get revoked. The Certificate Revocation List (CRL) will grow to unimagined size - some people are talking about maybe over a gigabyte (!).
What this means to you, gentle reader, is that when you point your browser at Amazon, you'll experience increasing lag as it downloads the latest monster-sized CRL. There are protocols that let you query a certificate's validity on the fly (OCSP, the Online Certificate Status Protocol), but we have no idea how those will scale when a billion people are hitting the servers. Probably poorly.
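Some back-of-envelope arithmetic shows why people are nervous. Both inputs below are assumptions, not measurements - roughly 40 bytes is a plausible DER-encoded size for one CRL entry (serial number, revocation date, encoding overhead), and the client count is the article's "billion people":

```shell
# Back-of-envelope only; the per-entry size and client count are assumptions.
ENTRIES=500000        # certs revoked in the Heartbleed rush
BYTES_PER_ENTRY=40    # rough DER size of one CRL entry (assumption)

CRL_BYTES=$((ENTRIES * BYTES_PER_ENTRY))
echo "added to one CA's CRL: $((CRL_BYTES / 1000000)) MB"

CLIENTS=1000000000    # "a billion people" hitting the infrastructure
TOTAL=$((CRL_BYTES * CLIENTS))
echo "if each client fetched that once: $((TOTAL / 1000000000000000)) PB"
```

That's on the order of 20 MB of new CRL data per affected CA, and tens of petabytes of transfer if every client re-downloads it even once - before counting the CAs' own reissuance load or repeat fetches as the list keeps growing.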
Oh well, mustn't grumble. Job security for the security teams, what?