AFAIK, by design, it's not possible. If you look at the architecture of Dataflow, what do you have?
A main server that takes your code, compiles it, packages it, and deploys it to the workers (that's why, at the beginning, you have only one instance, the main server, and then automatic scaling kicks in).
Then the data are pulled and transformed on the workers. The code on the workers is immutable: the main server will never update it (except if you perform an update/roll-out of the pipeline in streaming mode).
Of course, you could imagine that, when a worker reads a special control value, it updates its own log level locally. But you can't assume that all the workers will receive that value, because the data are sharded and each worker sees only a subset of them.
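To make that concrete, here is a minimal sketch of that workaround with the Beam Python SDK (the control-element convention and the `set_log_level` field are purely hypothetical): only a worker that happens to process the control element would change its level.

```python
import logging

import apache_beam as beam


class DynamicLogLevelDoFn(beam.DoFn):
    """Hypothetical DoFn that adjusts its *local* log level on a control value."""

    def process(self, element):
        # Assumed convention: a dict like {"set_log_level": "WARNING"} is a
        # control element; everything else is regular data.
        if isinstance(element, dict) and "set_log_level" in element:
            # Only the worker process that receives this element changes its
            # level; the other workers never see it and keep theirs.
            logging.getLogger().setLevel(element["set_log_level"])
            return
        logging.debug("Processing element: %s", element)
        yield element
```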
But, at the end of the day, what's your concern? Do you have too many logs? If so, you can use the Cloud Logging Log Router to exclude some of them: excluded logs won't be ingested, and therefore won't be charged.
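For example, you can add an exclusion filter on the `_Default` sink of the Log Router (the exclusion name and the severity threshold below are just placeholders) so that low-severity Dataflow worker logs are dropped before ingestion:

```bash
gcloud logging sinks update _Default \
  --add-exclusion=name=dataflow-low-severity,filter='resource.type="dataflow_step" AND severity<WARNING'
```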
If the logs slow down your workload, then you have to rethink/redesign your logging strategy and levels before launching your pipeline.
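With the Beam Python SDK, one simple way to do that is to pin the levels once per worker, for instance in a DoFn's setup() method, before any element is processed (the levels below are only examples):

```python
import logging

import apache_beam as beam


class TunedLoggingDoFn(beam.DoFn):
    """Pins the worker-side log levels once, at DoFn initialization."""

    def setup(self):
        # Runs once per worker when the DoFn is initialized; the level is
        # decided in the code itself, before the pipeline is launched.
        logging.getLogger().setLevel(logging.INFO)
        # Example: silence a noisy third-party library.
        logging.getLogger("urllib3").setLevel(logging.WARNING)

    def process(self, element):
        logging.debug("Not emitted at INFO level: %s", element)
        yield element
```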