This is a great question. I haven't run across any techniques for estimating client speed from a browser before. I do have an idea, though; I haven't put more than a couple minutes of thought into this, but hopefully it'll give you some ideas. Also, please forgive my verbosity:
First, there are two things to consider when dealing with client-server performance: throughput and latency. Generally, a mobile client is going to have low bandwidth (and therefore low throughput) compared to a desktop client. Additionally, the mobile client's connection may be more error-prone and therefore have higher latency. However, in my limited experience, high latency does not imply low throughput, and low latency does not imply high throughput.
Thus, you may need to distinguish between latency and throughput. Suppose the client sends a timestamp (let's call it "A") with each HTTP request and the server simply echoes it back. The client can then subtract this returned timestamp from its current time to estimate how long the request took to make the round trip. This round-trip time includes almost everything: network latency, the time it took the server to fully receive your request, and so on.
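To make that concrete, here's a minimal sketch of the echo idea in the browser. The `X-Echo-Timestamp` header name is an assumption (any echoed field would do), and it assumes the server copies that request header into the response headers unchanged:

```typescript
// Sketch only: assumes the server echoes the X-Echo-Timestamp request header
// back in the response headers, and uses performance.now() as the client clock.
async function measureRoundTrip(url: string): Promise<number> {
  const a = performance.now();                          // timestamp "A"
  const response = await fetch(url, {
    headers: { "X-Echo-Timestamp": String(a) },         // send "A" with the request
  });
  await response.text();                                // wait for the full body
  const echoed = Number(response.headers.get("X-Echo-Timestamp")); // "A", echoed back
  return performance.now() - echoed;                    // full round-trip time in ms
}
```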
Now, suppose the server sends back the timestamp "A" in the response headers, before sending the response body. Also assume you can incrementally read the server's response (e.g. with non-blocking IO; there are a variety of ways to do this). This means you can see your echoed timestamp before reading the rest of the response. At that point, the client time "B" minus the request timestamp "A" is an approximation of your latency. Save this latency, along with the client time "B".
Once you've finished reading the response, the amount of data in the response body divided by the difference between the new client time "C" and the previous client time "B" is an approximation of your throughput. For example, suppose C - B = 100 ms and you've read 100 kB of data; then your throughput is 100 kB / 0.1 s = 1,000 kB/s.
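Putting the two measurements together, here's a hedged sketch using fetch and a streaming body reader. With fetch, the promise resolves once the headers have arrived, so that moment serves as "B", and the end of the stream serves as "C"; the header name is the same assumption as above:

```typescript
interface SpeedSample {
  latencyMs: number;       // approx. time from "A" to "B"
  throughputKBps: number;  // body kilobytes per second between "B" and "C"
}

// Sketch only: assumes the server echoes X-Echo-Timestamp and that the
// response body is large enough to give a meaningful throughput figure.
async function measureSpeed(url: string): Promise<SpeedSample> {
  const a = performance.now();                               // "A": request sent
  const response = await fetch(url, {
    headers: { "X-Echo-Timestamp": String(a) },
  });
  const b = performance.now();                               // "B": headers received
  const echoed = Number(response.headers.get("X-Echo-Timestamp"));
  const latencyMs = b - echoed;                              // B - A

  let bytes = 0;
  const reader = response.body!.getReader();                 // incremental read
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    bytes += value.byteLength;
  }
  const c = performance.now();                               // "C": body finished
  const seconds = Math.max(c - b, 1) / 1000;                 // avoid divide-by-zero
  return { latencyMs, throughputKBps: bytes / 1024 / seconds };
}
```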
Once again, mobile client connections are error-prone and tend to change in strength over time, so you probably don't want to test the throughput just once. In fact, you might as well measure the throughput of every response and keep a moving average of the client's throughput. This reduces the likelihood that an unusually bad throughput on one request causes the client's quality to be downgraded, or vice versa.
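One simple way to keep that moving average is an exponentially weighted one, sketched below; the smoothing factor of 0.2 is an arbitrary choice, not a recommendation:

```typescript
// Exponentially weighted moving average of throughput samples.
// alpha closer to 1 reacts faster; closer to 0 smooths more (0.2 is arbitrary).
class ThroughputTracker {
  private average: number | null = null;

  constructor(private readonly alpha: number = 0.2) {}

  addSample(throughputKBps: number): number {
    this.average = this.average === null
      ? throughputKBps
      : this.alpha * throughputKBps + (1 - this.alpha) * this.average;
    return this.average;
  }

  get current(): number | null {
    return this.average;
  }
}
```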
Provided this method works, all you need to do is decide on a policy for which content the client gets. For example, you could start in "low quality" mode, and if the client sustains good enough throughput for some period of time, upgrade them to high-quality content. Then, if their throughput drops back down, downgrade them to low quality again.
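As one possible shape for that policy (the thresholds and the hold time below are made-up placeholders, purely to illustrate upgrading only after sustained good throughput):

```typescript
type Quality = "low" | "high";

// Illustrative policy: start low, upgrade after the smoothed throughput stays
// above an upper threshold for a while, downgrade once it drops below a lower
// one. All numbers are placeholders, not recommendations.
class QualityPolicy {
  private quality: Quality = "low";
  private goodSince: number | null = null;

  constructor(
    private readonly upgradeKBps = 500,     // must exceed this to upgrade
    private readonly downgradeKBps = 200,   // falling below this downgrades
    private readonly holdMs = 10_000,       // how long throughput must stay good
  ) {}

  update(avgThroughputKBps: number, now = performance.now()): Quality {
    if (this.quality === "low") {
      if (avgThroughputKBps >= this.upgradeKBps) {
        this.goodSince ??= now;
        if (now - this.goodSince >= this.holdMs) this.quality = "high";
      } else {
        this.goodSince = null;
      }
    } else if (avgThroughputKBps < this.downgradeKBps) {
      this.quality = "low";
      this.goodSince = null;
    }
    return this.quality;
  }
}
```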
EDIT: clarified some things and added throughput example.