Failures due to stale elements, elements not clickable at a given point, timing issues, etc. must be handled in your automation framework - the methods you're creating and using to construct the cases.
They should not propagate and lead to case failures - they are technical issues, not product problems or test case ones. As such they must be accounted for (try/catch blocks, for example) and dealt with (retry mechanisms, re-getting a web element) promptly.
In essence - treat these kinds of failures the same way you treat syntax errors: there simply should not be any.
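To illustrate the idea (a minimal sketch, assuming Selenium WebDriver with the Python bindings; the helper name, retry count and wait values are my own illustrative choices, not part of any particular framework):

```python
import time

from selenium.common.exceptions import (
    ElementClickInterceptedException,
    StaleElementReferenceException,
    TimeoutException,
)
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def click_with_retry(driver, locator, attempts=3, delay=0.5, timeout=10):
    """Click the element found by `locator`, re-locating it on every attempt
    so stale references and transient overlays don't bubble up as case failures."""
    last_error = None
    for _ in range(attempts):
        try:
            # Re-locate the element each time - a previously held reference
            # may have gone stale after a DOM refresh.
            element = WebDriverWait(driver, timeout).until(
                EC.element_to_be_clickable(locator)
            )
            element.click()
            return
        except (StaleElementReferenceException,
                ElementClickInterceptedException,
                TimeoutException) as exc:
            last_error = exc
            time.sleep(delay)  # give the page a moment to settle, then retry
    # Only after exhausting the retries is this a failure worth reporting.
    raise last_error
```

A case would then call something like click_with_retry(driver, (By.ID, "save-button")) instead of clicking a raw element, so a stale reference or a briefly overlapping element gets retried inside the framework rather than surfacing as a case failure.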
At the same time - and speaking purely from my own experience - cases dealing with live/dynamic data may sometimes randomly fail.
For instance, a SUT I'm working on displays metrics and aggregations based on data and actions outside of my control (live traffic from upstream systems). There are cases checking that a particular generated artifact behaves according to the set expectations (imagine a monthly graph that is simply missing a number of data points - there just was no activity on those days). Such cases will fail, not because they were constructed incorrectly, and certainly not because there is a product bug, but because of the combination of the time of execution and the dataset.
Over time I've come to the conclusion that having those failures is OK - getting them "fixed" (reselecting data sets, working around such outside fluctuations, etc.) is an activity with diminishing value and questionable ROI. Out of the current ~10,000 cases for this system, around 1.5% fail because of this (disclaimer: the SUT works exclusively with live/production data).
This is hardly a rule of thumb - it's just a number I've settled on as acceptable given the context.
An important note - if I had full control over this data, I would have gotten rid of those "random" failures as well. I've chosen to use the real data deliberately - that way my cases also verify its integrity - with this negative side effect.