Spectre
Contributor
blacknet: The real nasty/ugly/horrible thing about dns is many servers are setup to do the caching thing and it can take up to a year for the new updates to fully propagate. Dns is one of those things you avoid touching at all costs.
Absolute BS. The amount of time it will take to propagate your zone is quite simple to calculate. It's the longest TTL on the records [or the TTL in the SOA, for older versions that use that value as a default rather than a minimum]. That is the amount of time a resolving server can cache the resource records from that zone.
You then have to take the zone's refresh and expire times into account. The refresh time is generally how long it will take for a slave server to pick up an update from the primary server [even less nowadays, with NOTIFY and incremental zone transfers]. You -should- be safe with largest TTL + refresh time * depth of slave servers, where depth of slave servers covers the case where a slave server is using another slave as its master for the zone.
However, it -can- take up to largest TTL + expire time * depth of servers to propagate all changes, since a slave server will keep giving out authoritative answers until that expire time runs out, even if it hasn't been able to get a zone transfer off the primary.
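To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The values are made up purely for illustration; plug in the longest TTL, refresh, and expire from your own zone:

# Rough propagation estimates for a zone change, per the arithmetic above.
# All values are hypothetical and in seconds.
max_ttl = 86400    # longest TTL on any record in the zone (1 day)
refresh = 3600     # SOA refresh interval (1 hour)
expire = 604800    # SOA expire time (1 week)
depth = 2          # slaves chained off slaves (master -> slave -> slave)

# Normal case: each slave picks the change up within its refresh interval,
# then resolvers age out their cached copies over the longest TTL.
normal = max_ttl + refresh * depth
print("expected propagation: %d seconds (~%.1f hours)" % (normal, normal / 3600.0))

# Worst case: a slave that can't reach its master keeps answering
# authoritatively until the expire timer runs out.
worst = max_ttl + expire * depth
print("worst case: %d seconds (~%.1f days)" % (worst, worst / 86400.0))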
Genesis mentioned a good point about the serial numbers [almost makes me wonder if he's seen my lecture on DNS]. The -reason- the expire time and depth of servers are important is that you need to understand the sequence space that serial number arithmetic uses [RFC 1982, IIRC] so that you don't exceed the largest meaningful increment when making serial number changes within an expire time. I once had to write a white paper putting RFC 1982 in more reasonable terms because I had a customer who insisted I was wrong when I told him that yes, regardless of what he thinks, 3000120340 was greater than 1. Unfortunately I can't seem to find that whitepaper on my corporate website anywhere, so I can't link you to it.
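For anyone who wants to see what that sequence-space comparison actually does, here's a minimal sketch of the RFC 1982 rule in Python. The example serials are made up just to show the 32-bit wraparound:

# RFC 1982 serial number arithmetic: "is s1 less than s2" in a 32-bit
# sequence space. Comparisons where the two serials are exactly 2**31
# apart are undefined by the RFC.
SERIAL_BITS = 32
HALF = 2 ** (SERIAL_BITS - 1)
MOD = 2 ** SERIAL_BITS

def serial_lt(s1, s2):
    s1, s2 = s1 % MOD, s2 % MOD
    return (s1 < s2 and s2 - s1 < HALF) or (s1 > s2 and s1 - s2 > HALF)

# Wraparound example: adding 11 to 4294967290 wraps the 32-bit serial
# around to 5, so in sequence space 5 is the more recent serial even
# though it is numerically smaller.
print(serial_lt(4294967290, 5))   # True  -> 5 is the newer serial
print(serial_lt(5, 4294967290))   # False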