I am trying to make use of the recently introduced `twisted.application.internet.ClientService` class in a Twisted application that does simple Modbus-TCP polling using pymodbus. I feel my issue has nothing to do with the Modbus `Protocol` I am using, as I have created quite a few other working prototypes with the lower-level Twisted APIs; but this new `ClientService` looks like it fits my needs exactly, so it should reduce my code footprint and keep things neat if I can get it to work.

My tests show that `ClientService` handles reconnections just as expected, and I have easy access to the first connection's `Protocol`. The problem I am having is getting hold of the subsequent `Protocol` objects for the reconnections. Here is a simplified version of the code I am having the issue with:
```python
from twisted.application import internet, service
from twisted.internet.protocol import ClientFactory
from twisted.internet import reactor, endpoints
from twisted.logger import Logger
from pymodbus.client.async import ModbusClientProtocol


class ModbusPollingService(internet.ClientService):
    _log = Logger()

    def __init__(self, addrstr, numregs=5):
        self.numregs = numregs
        internet.ClientService.__init__(self,
            endpoints.clientFromString(reactor, addrstr),
            ClientFactory.forProtocol(ModbusClientProtocol))

    def startService(self):
        internet.ClientService.startService(self)
        self._pollWhenConnected()

    def _pollWhenConnected(self):
        d = self.whenConnected()
        d.addCallback(self._connected)
        d.addErrback(self._connfail)

    def _connected(self, p):
        self._log.debug("connected: {p}", p=p)
        self._mbp = p
        self._poll()
        return True

    def _connfail(self, failstat):
        self._log.failure('connection failure', failure=failstat)
        self._mbp = None
        self._pollWhenConnected()

    def _poll(self):
        self._log.debug("poll: {n}", n=self.numregs)
        d = self._mbp.read_holding_registers(0, self.numregs)
        d.addCallback(self._regs)
        d.addErrback(self._connfail)

    def _regs(self, res):
        self._log.debug("regs: {r}", r=res.registers)
        # Do the real work of storing the registers here
        reactor.callLater(1, self._poll)
        return res


application = service.Application("ModBus Polling Test")
mbpollsvc = ModbusPollingService('tcp:127.0.0.1:502')
mbpollsvc.setServiceParent(application)
```
When the connection fails (for whatever reason), the errback of the deferred returned from `read_holding_registers()` gets called, with the intention that my service can abandon that `Protocol` and go back to waiting for a new connection's `Protocol` to be delivered by the `whenConnected()` callback... however, what seems to be happening is that the `ClientService` does not yet realise the connection is dead and hands me back the same disconnected `Protocol`, giving me a log full of:
```
2016-05-05 17:28:25-0400 [-] connected: <pymodbus.client.async.ModbusClientProtocol object at 0x000000000227b558>
2016-05-05 17:28:25-0400 [-] poll: 5
2016-05-05 17:28:25-0400 [-] connection failure
        Traceback (most recent call last):
        Failure: pymodbus.exceptions.ConnectionException: Modbus Error: [Connection] Client is not connected
2016-05-05 17:28:25-0400 [-] connected: <pymodbus.client.async.ModbusClientProtocol object at 0x000000000227b558>
2016-05-05 17:28:25-0400 [-] poll: 5
2016-05-05 17:28:25-0400 [-] connection failure
        Traceback (most recent call last):
        Failure: pymodbus.exceptions.ConnectionException: Modbus Error: [Connection] Client is not connected
```
or very similar; note the repeated `ModbusClientProtocol` object address.
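One workaround I have been experimenting with is to remember the protocol instance that just failed and treat it as stale if `whenConnected()` hands the same object back. This is a minimal sketch of that guard; the `StaleGuard` name and its methods are mine, not part of Twisted or pymodbus:

```python
class StaleGuard:
    """Remember the last protocol that failed, so the service can detect
    when whenConnected() fires with the same dead instance (illustrative)."""

    def __init__(self):
        self._last_failed = None

    def mark_failed(self, proto):
        # Called from the errback, before asking for a new connection.
        self._last_failed = proto

    def is_stale(self, proto):
        # whenConnected() may deliver the protocol that just failed,
        # because ClientService has not yet noticed the transport dropped.
        return proto is self._last_failed
```

In `_connfail()` I would call `mark_failed(self._mbp)`, and in `_connected()` skip polling and re-arm the wait (e.g. via `reactor.callLater`) whenever `is_stale(p)` is true. It works, but it feels like fighting the API rather than using it.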
I'm pretty sure I've just made a poor choice of pattern for this API. I've iterated through a few different possibilities, such as creating my own `Protocol` and `Factory` based on `ModbusClientProtocol` and handling the polling mechanism entirely within that class; but it felt messy passing the persistent config and the mechanism for storing the polled data down that way. Handling this at or above the `ClientService` level seems like a cleaner approach, but I can't work out the best way of keeping track of the currently connected `Protocol`. I guess what I'm really looking for is a best-practice recommendation for using the `ClientService` class in extended polling situations.
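For reference, this is roughly the shape of the custom-`Protocol` alternative I mentioned above: the protocol reports itself to an owning service on `connectionMade` and clears that reference on `connectionLost`, so the service never polls a dead instance. A stand-in base class replaces `ModbusClientProtocol` so the sketch stands alone, and all the names here (`PollingProtocol`, `Owner`) are mine:

```python
class ProtocolBase:
    """Stand-in for pymodbus's ModbusClientProtocol, for illustration only."""
    def connectionMade(self):
        pass

    def connectionLost(self, reason=None):
        pass


class PollingProtocol(ProtocolBase):
    """Protocol that hands itself to its owning service when the
    connection comes up, and withdraws itself when it goes down."""

    def __init__(self, owner):
        self.owner = owner

    def connectionMade(self):
        # The service always sees the *live* protocol, never a stale one.
        self.owner.protocol = self

    def connectionLost(self, reason=None):
        # Only clear the reference if we are still the current protocol.
        if self.owner.protocol is self:
            self.owner.protocol = None


class Owner:
    """Stand-in for the polling service; it polls only while
    self.protocol is not None."""
    protocol = None
```

This keeps the "which protocol is current" bookkeeping out of the service, but as I said, pushing the persistent config and the data-storage mechanism down into the factory felt messy.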