The value and the cost of pushing normalization all the way depend mostly on what you will be doing with the data.
There are (at least) two radically different ways of using the data. One is Online Transaction Processing (OLTP). The other is Online Analytical Processing (OLAP).
In OLTP, the cost of not normalizing can be quite high. Transactions become more complex and slower, and bottlenecks degrade performance. In OLAP, the benefits of normalizing are limited, and there are other design disciplines that can yield more benefit for the same effort. One of those alternatives is star schema design, which you can look up.
But it isn't so much a matter of NOT normalizing, or of DEnormalizing, but of following a different design discipline, even if it doesn't result in a normalized database.
Getting back to the specific case you outlined, there are lots of systems with a heavy transaction load on customer activity where the customer table is used for read-only purposes in those transactions.
Failure to conform to 3NF only hurts you when you enter a new customer and have to key in the zip code all over again, even though other customers already share the same city, street, and zip code. And if the post office ever changes the zip code assigned to a given street, you'll have to update many address rows instead of just one row in a normalized table.
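To make that update anomaly concrete, here is a minimal sketch of the 3NF version using Python's built-in `sqlite3`. The table and column names (`street_zip`, `customer`, the city/street values) are illustrative assumptions, not taken from your question: the point is that when zip depends on (city, street), the zip lives in one row of its own table, so a post-office reassignment is a single-row update.

```python
import sqlite3

# Illustrative 3NF sketch: zip is functionally dependent on
# (city, street), so it is stored once in street_zip.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE street_zip (
        city   TEXT,
        street TEXT,
        zip    TEXT,
        PRIMARY KEY (city, street)
    );
    CREATE TABLE customer (
        id     INTEGER PRIMARY KEY,
        name   TEXT,
        city   TEXT,
        street TEXT
    );
""")
cur.execute("INSERT INTO street_zip VALUES ('Springfield', 'Elm St', '62701')")
cur.executemany(
    "INSERT INTO customer (name, city, street) VALUES (?, ?, ?)",
    [("Alice", "Springfield", "Elm St"), ("Bob", "Springfield", "Elm St")],
)

# The post office reassigns the street's zip: one row changes,
# and every customer on that street picks it up automatically.
cur.execute(
    "UPDATE street_zip SET zip = '62702' "
    "WHERE city = 'Springfield' AND street = 'Elm St'"
)

zips = cur.execute("""
    SELECT DISTINCT z.zip
    FROM customer c
    JOIN street_zip z ON c.city = z.city AND c.street = z.street
""").fetchall()
print(zips)
```

With the denormalized table, the same reassignment would be an `UPDATE` touching every customer row on that street.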
That's not a very high cost, and not a very likely event.
On the other hand, how likely is it that the Post Office will take a single street, and split that street between two zip codes, depending on which block in the street the address is on? If this latter event happens, you're actually better off with the structure that violates 3NF. You are free to enter different zip codes for each address, using the information the Post Office gave about the split.
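And here is the flip side, sketched under the same illustrative assumptions (hypothetical table, names, and a `block` column standing in for "which block of the street"): with the zip stored on each address row, a street split across two zip codes is just different values in different rows, no schema change needed.

```python
import sqlite3

# Illustrative non-3NF sketch: zip is stored per address row,
# so different blocks of one street can carry different zips.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE customer (
        id     INTEGER PRIMARY KEY,
        name   TEXT,
        city   TEXT,
        street TEXT,
        block  INTEGER,
        zip    TEXT
    )
""")
cur.executemany(
    "INSERT INTO customer (name, city, street, block, zip) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        ("Alice", "Springfield", "Elm St", 100, "62701"),
        ("Bob",   "Springfield", "Elm St", 900, "62701"),
    ],
)

# Post office splits Elm St: blocks 500 and up move to a new zip.
cur.execute(
    "UPDATE customer SET zip = '62705' "
    "WHERE street = 'Elm St' AND block >= 500"
)

result = cur.execute(
    "SELECT name, zip FROM customer ORDER BY block"
).fetchall()
print(result)
```

In the normalized design of the earlier scenario, this split would force a schema change, because a (city, street) pair can no longer determine a single zip.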
So, how likely is this second scenario? I think it's more likely than the first. But you need to go with your guess, and not mine.