To analyse an algorithm that relies on convergence, it seems that we have to prove something about its rate of convergence.
A convergence loop usually has a termination condition that checks whether our error metric has dropped below some threshold:
do {
    // some operation with time complexity O(N)
} while (errorMetric > 0.01); // once this is false, we've reached convergence
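To make that concrete, here is a minimal runnable sketch in C. The starting error and the halving behaviour are assumptions for illustration; a real algorithm would compute errorMetric from its own state:

#include <stdio.h>

#define N 1000

int main(void) {
    double data[N];
    for (int i = 0; i < N; i++) data[i] = 1.0; // dummy input

    double errorMetric = 1.0; // hypothetical starting error
    int iterations = 0;
    do {
        // some O(N) operation over the input
        // (the result is unused; it just stands in for the O(N) work)
        double sum = 0.0;
        for (int i = 0; i < N; i++) sum += data[i];

        errorMetric /= 2.0; // assumption: the error halves each pass
        iterations++;
    } while (errorMetric > 0.01); // converged once error <= threshold

    printf("converged after %d iterations\n", iterations);
    return 0;
}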
Generally, we seek to characterise the algorithm's manner of convergence, usually by expressing its error as a function of some measurable quantity.
For instance, we might be able to show that an algorithm's error is a function of the number of iterations, say error = 1 / 2^i, where i is the number of iterations.
This can be rewritten in terms of the number of iterations: since 1 / 2^i <= E once i >= log2(1 / E), we need iterations = log2(1 / E), where E is the desired error value.
Therefore, if an algorithm performs some linear O(N) operation on each iteration of the convergence loop (as in the example above), we can conclude that its time complexity is O(N * log(1 / E)). The function's rate of growth depends on the amount of error we're willing to tolerate, in addition to the input size.
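To see that growth concretely, the sketch below reuses the same assumption (the error halves on each pass, so error = 1 / 2^i) and counts iterations for a few tolerance values E; the counts track log2(1 / E):

#include <stdio.h>
#include <math.h>

int main(void) {
    double tolerances[] = { 0.1, 0.01, 0.001, 0.0001 };
    for (int t = 0; t < 4; t++) {
        double E = tolerances[t];
        double errorMetric = 1.0; // hypothetical starting error
        int iterations = 0;
        do {
            // the O(N) body is elided; only the iteration count matters here
            errorMetric /= 2.0; // assumption: the error halves each pass
            iterations++;
        } while (errorMetric > E);
        printf("E = %g: %d iterations (log2(1/E) = %.1f)\n",
               E, iterations, log2(1.0 / E));
    }
    return 0;
}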
So, if we can determine some property of the convergence behaviour, such as whether the iteration count is a function of the error tolerance or of the input size, then we can perform asymptotic analysis.
Take PageRank, for example: its computation uses an algorithm called power iteration, which approximates the dominant eigenvector of a matrix. It seems possible that power iteration's rate of convergence can be shown to be a function of the ratio of the matrix's two largest eigenvalues (shown in the link).
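For illustration, here is a minimal power-iteration sketch in C. The 3x3 matrix is a made-up example, not the actual PageRank link matrix; its eigenvalues are 4, 2, and 1, so each iteration shrinks the error by roughly |lambda2 / lambda1| = 1/2, which is exactly the 1 / 2^i behaviour discussed above:

#include <stdio.h>
#include <math.h>

#define N 3

int main(void) {
    // a made-up symmetric matrix with eigenvalues 4, 2, 1
    double A[N][N] = {
        { 2.0, 1.0, 0.0 },
        { 1.0, 3.0, 1.0 },
        { 0.0, 1.0, 2.0 }
    };
    double v[N] = { 1.0, 1.0, 1.0 }; // arbitrary starting vector
    double errorMetric;

    do {
        double w[N] = { 0.0, 0.0, 0.0 };

        // w = A * v: the O(N^2) work done on each iteration (dense matrix)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                w[i] += A[i][j] * v[j];

        // normalise w, then measure how far the vector moved
        double norm = 0.0;
        for (int i = 0; i < N; i++) norm += w[i] * w[i];
        norm = sqrt(norm);

        errorMetric = 0.0;
        for (int i = 0; i < N; i++) {
            w[i] /= norm;
            errorMetric += fabs(w[i] - v[i]);
            v[i] = w[i];
        }
    } while (errorMetric > 1e-9); // converged: v barely changes

    printf("dominant eigenvector approx: %f %f %f\n", v[0], v[1], v[2]);
    return 0;
}

(Compile with -lm.) Note that the do/while has the same shape as the generic convergence loop above: a fixed amount of work per iteration, terminated once the error metric drops below the threshold.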