Photo by .reid.
The impending exhaustion of IPv4 addresses just got a step closer: APNIC (the Asia/Pacific registry in charge of IP address allocation) has issued its last addresses. Technically, it still holds some addresses in reserve, but those are only for organizations that need IPv4 space within their IPv6 infrastructure, and they will only be doled out in very small chunks (/22 blocks of 1,024 addresses).
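For a sense of scale, a /22 leaves 10 host bits, so it works out to 2^(32−22) = 1,024 addresses. A quick sketch using Python's standard ipaddress module shows the arithmetic (the 103.0.0.0/22 prefix is purely illustrative, not a real allocation):

```python
import ipaddress

# A /22 leaves 32 - 22 = 10 host bits, i.e. 2**10 = 1024 addresses.
# The prefix below is only an example for the arithmetic, not a real allocation.
block = ipaddress.ip_network("103.0.0.0/22")
print(block.num_addresses)  # 1024

# Compare that with the whole /8 that a registry used to receive at a time:
print(block.supernet(new_prefix=8).num_addresses)  # 16777216
```

A /22 is roughly 0.006% of a /8, which is why "very small chunks" is no exaggeration.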
I mentioned this back in February, when the top-level manager of all things IP (IANA) allocated its last blocks to the regional registries, APNIC among them. It was expected that APNIC's pool would last until at least the summer, but it was used up much more quickly than anticipated. It doesn't look good for the other regional registries either; they will probably all be exhausted by the end of 2011.
So, what's a sys admin to do? Well, you can read through the excellent tutorial from Michael Pietroforte at 4sysops. It's pretty short and to the point and should put you at ease about the complexity of IPv6 (as it did for me).
The other thing you can do is wait. Not necessarily the most prudent course in the world, but not the worst either. The programmer's procrastination mantra is "Why do today what you may not have to do tomorrow?" The truth of the matter, and the reason that IPv6 uptake has been slow, is very simple: it's an economic reality that people aren't going to move until the cost of not switching is higher than the cost of switching. An IPv6 migration costs time and money, and that comes at the expense of other things that need to be done within a network. Future costs are still a bit nebulous at this point, and so are hard to factor into one's decision-making process.
It's not like Y2K, where there was a set-in-stone drop-dead date and the non-tech world was fully aware of it and applying pressure. Also, as has been shown in the past, there are technical solutions that will probably keep IPv4 alive (on life support) for a considerable time. As the costs of those solutions continue to climb, they will eventually meet the slowly dropping cost of IPv6. When they meet, there may be a tipping point, and some will be left scrambling. Even then, it may still be cheaper to wait and scramble. That's just one of the risks of living in a dynamic tech world.
So, from the trenches, where do you stand? What have you already done and what are you planning to do about IPv6?
In a bit of a divergence from system administration I wanted to post about this excellent chart created by the brilliant author of XKCD. For those of you that aren't familiar, XKCD is a web comic that touches on science and computing topics with a wonderful wit. Some of my favourite graphs include the Map of the Internet and Optimal Tic-Tac-Toe moves.
This latest chart is a map of the relative sizes of different radiation doses, which is relevant in light of the tragedy of Japan's recent tsunami and its ageing reactors. It helps to clarify some of the inaccuracies (and plain-old scaremongering) in the media right now.
My heart goes out to the people of Japan right now, and it's an added tragedy that so much focus is given to the reactors when there is so much else that needs to be focused on.
Australian Red Cross Japan and Pacific Disaster Appeal 2011
Photo by Keith Williamson
I was reading with interest this overview of the "Cyberwar Panel" at RSA 2011 and ran across this sentence about regulating IT security:
"Regulate results, not technology," Schneier said. "If you regulate technology, you stifle innovation. If you regulate results, you incent innovation."
I got to thinking about what that could mean. It's pretty obvious how to regulate results when you have a specific, measurable goal such as smog reduction or crash safety standards. But how do you regulate results when the ideal result is "nothing happening"? It seems to me that there isn't a meaningful way to regulate without regulating technology (at least to some level), and the rate of change in the world of computing and networking is just too high for that to work.
Then there was talk of modelling security regulations on Sarbanes-Oxley:
Chertoff called for a regulatory framework where company executives and board members sign on the dotted line, certifying what steps they have taken to secure their network, what backup systems they have in place and what level of resiliency is built into their IT system. “People take that seriously. Is it dramatic? No, but it moves the ball down the field,” Chertoff said.
Schneier concurred, noting that holding individuals at a company accountable for certain protections has worked with environmental regulations and Sarbanes-Oxley, the post-Enron law that requires directors and executives to certify their financial results.
Sarbanes-Oxley has certainly worked if the goal was to prevent companies from going public in the US. I don't really think it's any kind of model to base regulations on.
I personally think that the main reason we haven't had a "Cyber Pearl Harbor" is due in large part to the absence of regulations. The rate of unfettered innovation has meant that the security environment is so diverse that it's not possible to launch such an attack at a single point. The Stuxnet worm is a good example: not only is it probably the most advanced and complicated attack yet devised (possibly with the resources of a nation state behind it), but it also got nowhere near the destructive power that's got regulators worried.
I'm not foolish enough to think that a "Cyber Pearl Harbor" is impossible, but I also think that if such a thing is possible, it's going to happen in a way that no forward thinking (short of divine prophecy) is going to prevent. As soon as regulations come into play, technology will begin to homogenize and a single point of failure will become more obvious.
But I'm not a security expert, just some idiot with a keyboard, so I'd love to hear your thoughts on the topic.