I would like to get the temperature of a hard disk using Python (under Linux). I'm currently calling hddtemp with subprocess.Popen, but I call it often enough that it has become a performance bottleneck in my script. I think it should be possible to do something similar to question 4193514.
-
How often are you checking the temperature? You should be able to cache the value for a minute or so. It doesn't change very quickly. – John La Rooy May 02 '12 at 21:07
-
@gnibbler: But how would he be able to measure the circumstance of throwing the machine into a fireplace? – jdi May 02 '12 at 21:10
-
It's for a web page showing realtime server status. When the page is open, it refreshes about every 5-10 seconds. – mrtasktat May 02 '12 at 21:11
-
Add a slight random jitter that makes it appear like the value is changing and only get the actual value every minute :)? – Glider May 02 '12 at 21:16
3 Answers
You can run hddtemp as a daemon (the -d option), then use sockets to query it; it listens on port 7634 by default.
Edit: see some code that does this.
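For illustration, a minimal sketch of querying the daemon over a socket; the helper name read_hddtemp and the parsing of the '|'-separated reply are assumptions for illustration, not taken from hddtemp's documentation:

import socket

def read_hddtemp(host='127.0.0.1', port=7634):
    """Query a running 'hddtemp -d' daemon and return {device: temperature}."""
    sock = socket.create_connection((host, port), timeout=5)
    try:
        data = b''
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
    finally:
        sock.close()
    # hddtemp typically replies with '|'-separated fields such as
    # |/dev/sda|SOME MODEL|36|C| -- the slicing below assumes that layout.
    temps = {}
    for entry in data.decode('ascii', 'replace').split('||'):
        fields = entry.strip('|').split('|')
        if len(fields) >= 3:
            temps[fields[0]] = fields[2]
    return temps

Called from the status page handler, something like this avoids spawning a new hddtemp process on every refresh.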

-
That is exactly what I was looking for, Thanks! Bonus points for the code example. – mrtasktat May 02 '12 at 22:17
Expanding on what @gnibbler suggested in his main comment, what about a cached approach? This is a stupidly simple example just to show the concept:
import time
from collections import defaultdict

class CachedValue(object):
    def __init__(self):
        self.timestamp = -1
        self._value = None

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, val):
        self._value = val
        self.timestamp = time.time()

    def isOld(self, seconds):
        return (time.time() - self.timestamp) >= seconds
>>> _cached = defaultdict(CachedValue)
>>> _cached['hddtemp'].isOld(10)
True
>>> _cached['hddtemp'].value = 'Foo'
>>> _cached['hddtemp'].isOld(10)
False
# (wait 10 seconds)
>>> _cached['hddtemp'].isOld(10)
True
And in your specific case:
def reportHDD(self):
    if self._cached['hddtemp'].isOld(10):
        self._cached['hddtemp'].value = self.getNewHDDValue()
    return self._cached['hddtemp'].value
This approach is really more of a general solution to caching an expensive operation. In larger applications, the CachedValue could easily be replaced with a simple memcached/redis lookup which maintains its own TTL value for you. But on a small scale, this is just a fancy way of organizing the local cached values.
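For instance, a rough sketch of the redis variant using the redis-py client (the 'hddtemp' key name, the 10-second expiry, and the report_hdd_cached helper are assumptions for illustration):

import redis

r = redis.Redis()

def report_hdd_cached(fetch_temp):
    temp = r.get('hddtemp')
    if temp is None:
        temp = fetch_temp()                # the expensive call, e.g. running hddtemp
        r.set('hddtemp', temp, ex=10)      # redis drops the key after 10 seconds
    return temp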

-
Great example. I think I will use this for the other functions, but for getting the hard drive temperature, I've resorted to spawning a thread (to try and mitigate the overhead of the slow system call) to update a dict entry. – mrtasktat May 02 '12 at 21:54
-
@lyineyes: But why incur the overhead at all, be it in a thread or elsewhere, if the temp can only fluctuate so much within a reasonable timeframe? But Hugh Bothwell does mention in his answer that you can just monitor the daemonized version of this tool specifically. So that's probably your very specific fix. But I'm glad you liked this approach for a general caching solution! – jdi May 02 '12 at 21:59
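For completeness, a rough sketch of the background-thread approach mentioned in the comment above (the device path, the 10-second interval, and all names are assumptions; hddtemp typically needs root):

import subprocess
import threading
import time

temps = {}   # shared dict read by the status page

def poll_hddtemp(interval=10):
    while True:
        out = subprocess.Popen(['hddtemp', '/dev/sda'],
                               stdout=subprocess.PIPE).communicate()[0]
        temps['/dev/sda'] = out.decode('ascii', 'replace').strip()
        time.sleep(interval)

t = threading.Thread(target=poll_hddtemp)
t.daemon = True
t.start()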
I was googling around for a while and this hit kept coming up close to the top no matter how I formatted my search. I have smartmontools and at least Python 2.7.6 installed on all of my hosts, and I didn't want to install a new package just to pipe HDD temperature data to graphite/statsd, so I made the following.
I am not a developer and I don't know Python (as is obvious), so this is my 1-2 day attempt at figuring it out. I am too embarrassed to post all of the code here, but here is the main part:
#!/usr/bin/env python
import os
import subprocess
import multiprocessing

def grab_hdd_temp(hdd, queue):
    # Ask smartctl for the full SMART report and pull out the temperature
    # attribute (the raw value is the 10th field of the attribute line).
    output = subprocess.Popen(['smartctl', '-a', '/dev/' + hdd],
                              stdout=subprocess.PIPE).communicate()[0]
    for line in output.decode('ascii', 'replace').split('\n'):
        fields = line.split()
        if 'Temperature_Celsius' in fields or 'Temperature_Internal' in fields:
            queue.put([hdd, fields[9]])

def hddtmp_dict(hdds_list):
    # Query every drive in parallel, one process per drive.
    procs = []
    queue = multiprocessing.Queue()
    hddict = {}
    for hdd in hdds_list:
        p = multiprocessing.Process(target=grab_hdd_temp, args=(hdd, queue))
        procs.append(p)
        p.start()
    # Collect one result per drive, then wait for all workers to finish.
    for _ in procs:
        val = queue.get()
        hddict[val[0]] = val[1]
    for p in procs:
        p.join()
    return hddict

if __name__ == '__main__':
    hdds_list = [x for x in os.listdir('/sys/block/') if x.startswith('sd')]
    hddict = hddtmp_dict(hdds_list)
    for k in hddict:
        print(k, hddict[k])
On my storage server this returned the complete list of 38 drives in 2 seconds, versus 50 seconds to iterate through all of the disks serially. That said, the load jumps from around 1.08 to 3.50 on a 40-core box, so take it as you will. I am trying to find a way to use /proc or possibly fcntl, based on another Stack Overflow answer I found, to pull the data instead of using subprocess.Popen.
It's 2:22am here and I need to start heading home. Once there I'll try to outline the snippet above and explain what I think everything is doing, but I hope it's somewhat self-explanatory.
Sorry for the kludgy code. I think the cached approach is the better way to go at this point, but this was a cool exercise. If you don't want to install hddtemp, I think a cache plus the above may be the best option. I'm still trying to figure that out as well, since I don't understand how to do classes yet.
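As a rough sketch of what "a cache plus the above" might look like (the function name cached_hddtmp_dict and the 10-second max age are my own assumptions):

import time

_cache = {'timestamp': 0, 'temps': {}}

def cached_hddtmp_dict(hdds_list, max_age=10):
    # Re-run the parallel smartctl query only when the cached result is stale.
    if time.time() - _cache['timestamp'] >= max_age:
        _cache['temps'] = hddtmp_dict(hdds_list)   # defined above
        _cache['timestamp'] = time.time()
    return _cache['temps']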
I hope this helps someone.
-
Sadly, I am currently suspended from ServerFault for a year, because I advised newbies to use the shift key if they start a sentence :-) – peterh Sep 25 '17 at 18:16