This is one of the classic problems of signal processing, and there are several approaches depending on how you define "brightness". The same ideas apply whether you're measuring the "brightness" of an image, the "loudness" of a sound signal, etc.
Some ideas for what you can use as a generic "brightness" measure (all three are sketched in code after this list):
- Average value of all the pixels: sum the brightness of every pixel, then divide by the total number of pixels (width * height).
- Build a histogram of the brightness distribution, then find the point x such that 98% (or 95%, 90%, 70% - the exact number can vary) of the pixels in your image are less bright than x. This x is called a percentile.
- Calculate "root mean square" (RMS) for all the pixels (sum up squares of all the pixels, divide by total amount of pixels, extract square root from this one).
Multiple image libraries can compute these for you. The easiest to use from a shell script is probably ImageMagick/GraphicsMagick: you can get simple averages directly, and do more complex histogramming to extract percentiles if you want to.
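For example, ImageMagick's `%[fx:mean]` format escape prints the mean normalized intensity (0.0 = black, 1.0 = white). A sketch of driving it from Python, assuming ImageMagick's classic `convert` binary is on the PATH (newer installs name it `magick`) and using the placeholder filename `photo.jpg`:

```python
import subprocess

# -colorspace Gray collapses the channels to a single brightness value;
# %[fx:mean] then reports the average of that grayscale image.
result = subprocess.run(
    ["convert", "photo.jpg", "-colorspace", "Gray",
     "-format", "%[fx:mean]", "info:"],
    capture_output=True, text=True, check=True,
)
print(float(result.stdout))  # e.g. 0.42 for a mid-brightness image
```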