Dan Kaminsky is a well-known and respected security researcher, particularly
famous for his work with DNS -- the fundamental naming technology which
powers the Internet. When he announced a protocol-level flaw in DNS
affecting almost every available implementation, vendors and other security
researchers paid attention.
Now it's time for administrators and users to pay attention. On August 7, the
Internet might get much more dangerous. Patches are available to ameliorate
the problem. In this video, recorded at Foo Camp 2008, Dan explains how he
discovered the flaw and what you need to do to keep your users and their data safe.
Geek Alert: Dan Kaminsky on the DNS bug of 2008
Hey guys, it's good to be at another Foo Camp. So, I've been playing with something -- well, I wish I could say it was new but I've been kind of looking at it for a very long time. It's called DNS. It's the Domain Name System. It is the system that converts the names that you might put into a web browser into the addresses that can actually be routed around the internet.
Now I've been looking at DNS for a long time. Been doing strange things. Playing audio and eventually video over DNS -- What? It's a great way to get free wireless at coffee shops.
But, recently I sort of found something that's a little more serious than Darth Vader dancing. And it involves an attack called cache poisoning. Now, every name server, when it figures out a mapping between a name and a number -- you know, www.yahoo.com and some IP address -- doesn't want to have to keep looking it up, traversing all of these various hosts, so it remembers things for a little while.
Ideally it remembers the correct things. But what if it didn't? What if when a request came in some other address came back? Well, I don't know what web you'd be browsing but it wouldn't be the right one, it'd be someone else's. And I don't know where your email would go, but it's not going to go to where you think.
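The caching-and-poisoning idea Dan describes can be sketched as a toy (purely illustrative -- this is just a dictionary with TTLs, nothing like the real attack, whose protocol details Dan is deliberately not describing; the addresses below are made up):

```python
import time

class ToyDnsCache:
    """A toy resolver cache: remembers name -> address until a TTL expires."""
    def __init__(self):
        self._cache = {}  # name -> (address, expiry_time)

    def put(self, name, address, ttl):
        self._cache[name] = (address, time.time() + ttl)

    def get(self, name):
        entry = self._cache.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if time.time() > expiry:   # stale entry: forget it, look it up again
            del self._cache[name]
            return None
        return address

cache = ToyDnsCache()
cache.put("www.yahoo.com", "203.0.113.10", ttl=300)  # the legitimate answer

# A poisoned cache: an attacker has overwritten the mapping, so every
# client asking this resolver is now sent to the attacker's server.
cache.put("www.yahoo.com", "203.0.113.66", ttl=300)  # attacker's address
print(cache.get("www.yahoo.com"))  # -> 203.0.113.66, not where you think
```

The point of the toy is only this: the resolver trusts whatever it remembered, so if the wrong answer gets remembered, every lookup for the lifetime of that entry goes to the wrong place.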
This was not what I was looking for. I wish I could give you the exact details, but I wasn't even looking at anything security related, I was looking for a new toy.
And then you know, do the whole thing -- oh, this one thing can't work because if it worked the internet would be in so much trouble. And then it worked.
Hrm. So, normally in computer security when you find something like this the normal operating procedure would be to go ahead, tell the vendors, get a patch out and tell everyone what the bug is. And do, you know...
Everything comes out on the same day in the best of situations. And honestly, not everyone's going to patch. This, this was a unique situation.
First off, the bug is by design, meaning it's actually in the protocol itself. It's not the Microsoft implementation of DNS or the Cisco implementation of DNS. Or any particular vendors. It's everyone.
It's really simple. It's really good. Well, by good in our industry that means really, really bad. And the effects are pretty astonishing because everything maps names to numbers using DNS.
So, upon recognizing the scope of this, I realized we have to do something a little unusual. First thing I did, was I contacted Paul Vixie. Paul Vixie was the original maintainer of BIND, the most popular nameserver in the world. Even though this wasn't limited to BIND. He's been doing this... I was in grade school when he started. I'm like, Paul, uh, we got a problem.
Paul ended up pulling together a lot of people. I ended up pulling together a lot of people and a lot of companies. I have spent my entire career in corporate America, that's rather good for having contacts.
Within a few weeks we realized we needed to stop talking over email. And actually if we were going to do something about this we needed to meet. So what we did, we all showed up. Showed up in... actually Microsoft was very kind. They provided hosting for a summit to be held in Redmond, Washington on March 31, 2008. We showed up and there were three questions.
One, what the heck are we looking at here? Because this is on the scale of something we hadn't seen in a very long time.
Two, what are we going to do about it? There are many, many possible fixes to this issue. What fix would most protect people? We ended up choosing a very interesting fix, which I'll tell you about in a second.
But the big thing we did was decide this issue was on a scale that it couldn't be any one company, any one vendor who came out with a fix on their own schedule. Everyone. If we were going to do this everyone needed to do it on the same day because everyone needs to patch this. And this has never happened before. Nothing like this has even come close. And I'll be honest, we didn't even think, eh you know we'll just go ahead and get everyone to do it, and then eventually we're like, wait -- there's a reason this doesn't happen. It's really hard. You need to synchronize all of these corporate things.
But, you know, after maybe 8 or 9 years of computer security research that's really, really been done by independent security auditors, I have to give the vendors a lot of credit. There were a lot of vendors. People responded very, very well.
Microsoft was extremely cooperative, Cisco was very cooperative, ISC, employees of Paul Vixie so of course they were cooperative. But all of these companies really just didn't bat an eye once they were told the depth of the issue.
They stepped up to the plate. CERT, of course, the Department of Homeland Security's Computer Emergency Response Team was of course instrumental in getting that kind of cooperation, but you know, really this wasn't a surprise to them.
We've known for a long time that DNS was in trouble, we just didn't know exactly how. And now we did, and now we had a way to fix it.
So the fix we chose was interesting and unusual in that it doesn't actually describe the bug. Now the way I explain this is, look -- you have a code base. And the code base has all of these things that could possibly go wrong, and all these paths you could possibly take. And the bug that was found was deep in a really, really particular, obscure part of the code base. But we didn't touch anything there, because if we touched anything there it would put these big huge arrows saying 'problem here'.
I'm curious. Was there anywhere else we could put a fix that would not say 'problem here'? And, yeah, all the way back when a packet is received, it turns out there's a random number from 0 to about 65,000. And we said, hey, instead of 0 to 65,000, what if we could make this a range from 0 to the hundreds of millions or even the billions? Is there anywhere else we could get more random bits?
And it turns out, because all packets on the internet are layered, even though there's only 16 bits -- 65,000 options -- at the DNS layer, there was another 16 bits one layer up, over in UDP. If we just started using it the system would adapt to it automatically. Why don't we do this? Oh wait, this was suggested 5 years ago by Dan J. Bernstein, the infamous djb. An ornery guy, but man, the guy can code. And you know what, djb was right -- we should have done what he said years ago.
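The arithmetic Dan is quoting can be checked directly: the DNS transaction ID is a 16-bit field in the message header (about 65,000 possibilities), and randomizing the 16-bit UDP source port as well multiplies the attacker's guessing space up to roughly four billion. A sketch of the header layout, assuming nothing beyond the standard DNS wire format:

```python
import random
import struct

# A DNS message starts with a 12-byte header; its first 16-bit field is
# the transaction ID the resolver must see echoed back in any answer.
txid = random.randrange(0, 2**16)
header = struct.pack(">HHHHHH",
                     txid,    # transaction ID (16 bits)
                     0x0100,  # flags: standard query, recursion desired
                     1,       # QDCOUNT: one question follows
                     0, 0, 0) # no answer/authority/additional records

assert len(header) == 12

# Guessing space before the fix: just the transaction ID.
print(2**16)          # 65536
# After the fix: transaction ID times a randomized UDP source port.
print(2**16 * 2**16)  # 4294967296 -- into the billions, as Dan says
```

This is why the fix could live "one layer up" without touching the vulnerable code path: the extra entropy lives in the UDP header, and any server that echoes packets back to the port they came from cooperates automatically.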
And so we did the sledgehammer fix. We did the fix we were told to do years and years and years ago and just didn't.
And that actually became public on July 8th. July 8th, Microsoft patched, Cisco patched, Sun patched, ISC patched, Red Hat patched, Debian patched, Ubuntu patched, we got so many patches out on one day.
Now, how long will the information stay under the covers? Not forever.
I've basically done everything in my power to ask the open security community: yes, you know there is an issue. Talk to me, email me, communicate with me, give me a couple of weeks. I know it's a big ask. Asking computer security researchers not to publicly research something is the biggest, most ridiculous request that could ever be made. But it's something -- you know, I've been in the industry my entire career -- it's something I had to at least try. And so far, so far things are ok. I'd say if there's anything I've done wrong in this process, it's not --
I actually went out when I announced this with all the vendors on board. I had Microsoft, I had everyone else. I had all the DNS people on board. I had Paul Vixie and Florian Weimer and David Dagon and everyone else.
But I went out with no other hackers. And there was a lot of skepticism and there should have been. Because I've got to tell you -- something of this scale -- to go ahead and go out and say that there's all these issues and spin up all this press and all of this hype and ask everyone to patch with no good technical details. This is the mark of bull, this is -- it's so easy to make stuff up. If I'm doing it here, wouldn't anyone be able to do that?
Now I eventually remediated this to some degree by bringing in some of my loudest detractors, pulling them aside, getting on a conference call and saying, alright, guys, here's the deal. And, to their credit -- Tom Ptacek's credit, Dino Dai Zovi's credit -- they went ahead and went online and said, 'Oh my God, we're in trouble.' I think the exact quote is, "Dan's got the goods."
Well, this is true. I kind of wish I didn't, 'cause it's a lot of responsibility, but yeah, I've got the goods. And on August 6, 2008 the goods are getting out. The bug is not going to last much longer. I don't even know if it's going to last until August 6th, frankly, based on the emails that I'm getting.
So this is my request to all of you in the room and this is my request to everyone watching this video. The DNS bug is real. I am not messing around here. I am doing absolutely everything above and beyond what I ever thought was possible and a lot of people are cooperating well.
We need to fix this. If you run a recursive nameserver that does lookups for names on the internet, you must, must patch it, or you must, must decommission it.
It's not too hard to spin up another nameserver, it's not too hard to forward your queries to OpenDNS. It's not too hard to have your hosts themselves use OpenDNS. OpenDNS is a very nice free and open resolver. You don't pay anything and it is probably higher performance than what you have today.
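Forwarding, as Dan suggests, is conceptually simple: a local nameserver that can't answer from its own cache relays the question to a trusted upstream resolver (OpenDNS, in his suggestion) and remembers what comes back. A minimal sketch of that control flow -- the upstream here is a stub function with a made-up answer table, not a real network lookup:

```python
class ForwardingResolver:
    """Toy resolver that forwards cache misses to an upstream resolver."""
    def __init__(self, upstream):
        self._upstream = upstream  # callable: name -> address (or None)
        self._cache = {}

    def resolve(self, name):
        if name not in self._cache:        # cache miss: ask upstream
            self._cache[name] = self._upstream(name)
        return self._cache[name]           # cache hit: answer locally

# Stand-in for a well-maintained upstream such as OpenDNS; the
# name-to-address table here is purely illustrative.
def upstream(name):
    return {"www.example.com": "203.0.113.80"}.get(name)

resolver = ForwardingResolver(upstream)
print(resolver.resolve("www.example.com"))  # -> 203.0.113.80
```

The operational point is that the hard security work -- patched, port-randomizing resolution -- happens at the upstream, so hosts behind the forwarder benefit without each one being patched individually.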
It is true, and I have to point it out, that it's not just nameservers that are vulnerable. Every node that is internet connected is a DNS client. And there are scenarios where DNS clients themselves are vulnerable. I can't tell you all the scenarios, because it would describe the bug a little too well. It's a horrible thing to say, but what am I gonna do? I'm trying to buy people at least a couple of weeks to patch. But what I will say is, if you've got a box on the internet -- if you've got a box, more accurately, on your DMZ, on your de-militarized zone, that is hosting content to the internet -- that box might receive DNS traffic from untrusted sources. These boxes: either patch them -- Microsoft has a client-side patch that's very nice -- or have them talk to another nameserver on the LAN, on the DMZ. Don't send them out to some nameserver out in untrusted territory. This works very, very well.
So, this is what I'm asking for, and it is a big ask. But this is probably the most time that anybody is going to get to deal with a bug of this nature. I have spent the majority of this year on it -- it was three days to find this bug. It has been over six months of work by a lot of people to give everyone at least 30 days to try.
You know, the code -- at least it doesn't install itself. We got the bug, we got the fix, we got the fix out on the same day. And now I'm asking: before August 6, 2008, get this fix on your networks. That's what I'm trying for.
Thank you very much.