
I am trying to make use of the recently introduced twisted.application.internet.ClientService class in a Twisted application that does simple Modbus TCP polling using pymodbus. I feel my issues have nothing to do with the Modbus Protocol I am using, as I have created quite a few other working prototypes using the lower-level Twisted APIs; but this new ClientService looks like it fits my needs exactly, so it should reduce my code footprint and keep it neat if I can get it to work.

My tests show that ClientService handles reconnections just as expected, and I have easy access to the first connection's Protocol. The problem that I am having is getting hold of subsequent Protocol objects for the reconnections. Here is a simplified version of the code I am having the issue with:

from twisted.application import internet, service
from twisted.internet.protocol import ClientFactory
from twisted.internet import reactor, endpoints
from pymodbus.client.async import ModbusClientProtocol

class ModbusPollingService(internet.ClientService):
    def __init__(self, addrstr, numregs=5):
        self.numregs = numregs
        internet.ClientService.__init__(self,
            endpoints.clientFromString(reactor, addrstr),
            ClientFactory.forProtocol(ModbusClientProtocol))

    def startService(self):
        internet.ClientService.startService(self)
        self._pollWhenConnected()

    def _pollWhenConnected(self):
        d = self.whenConnected()
        d.addCallback(self._connected)
        d.addErrback(self._connfail)

    def _connected(self, p):
        self._log.debug("connected: {p}", p=p)
        self._mbp = p
        self._poll()
        return True

    def _connfail(self, failstat):
        self._log.failure('connection failure', failure=failstat)
        self._mbp = None
        self._pollWhenConnected()

    def _poll(self):
        self._log.debug("poll: {n}", n=self.numregs)
        d = self._mbp.read_holding_registers(0, self.numregs)
        d.addCallback(self._regs)
        d.addErrback(self._connfail)

    def _regs(self, res):
        self._log.debug("regs: {r}", r=res.registers)
        # Do real work of dealing storing registers here
        reactor.callLater(1, self._poll)
        return res

application = service.Application("ModBus Polling Test")
mbpollsvc = ModbusPollingService('tcp:127.0.0.1:502')
mbpollsvc.setServiceParent(application)

When the connection fails (for whatever reason), the errback of the deferred returned from read_holding_registers() gets called, with the intention that my service can abandon that Protocol and go back to waiting for a new connection's Protocol from the deferred returned by whenConnected(). However, what seems to be happening is that the ClientService does not yet realise the connection is dead, and returns me the same disconnected Protocol, giving me a log full of:

2016-05-05 17:28:25-0400 [-] connected: <pymodbus.client.async.ModbusClientProtocol object at 0x000000000227b558>
2016-05-05 17:28:25-0400 [-] poll: 5
2016-05-05 17:28:25-0400 [-] connection failure
    Traceback (most recent call last):
    Failure: pymodbus.exceptions.ConnectionException: Modbus Error: [Connection] Client is not connected

2016-05-05 17:28:25-0400 [-] connected: <pymodbus.client.async.ModbusClientProtocol object at 0x000000000227b558>
2016-05-05 17:28:25-0400 [-] poll: 5
2016-05-05 17:28:25-0400 [-] connection failure
    Traceback (most recent call last):
    Failure: pymodbus.exceptions.ConnectionException: Modbus Error: [Connection] Client is not connected

or very similar; note the repeated ModbusClientProtocol object address.

I'm pretty sure that I've probably just made a poor choice of pattern for this API. I've iterated through a few different possibilities, such as creating my own Protocol and Factory based on ModbusClientProtocol and handling the polling mechanism entirely within that class, but it felt a bit messy passing the persistent config and the mechanism for storing the polled data that way. Handling this at or above the ClientService level seems like a cleaner approach, but I can't work out the best way of keeping track of the currently connected Protocol. I guess what I'm really looking for is a best-practice recommendation for using the ClientService class in extended polling situations.

DanSut

2 Answers


This is an old question, but hopefully this will help somebody else.

The problem that I am having is getting hold of subsequent Protocol objects for the reconnections.

Supply a prepareConnection callable to the ClientService constructor. It will be called with the current protocol each time a connection is made.

In the example below, MyService attaches itself to MyFactory. The main reason for this is so that MyFactory can let MyService know when the connection is lost. This is possible because ClientService calls Factory.stopFactory on disconnect.

The next time ClientService reconnects, it will call prepareConnection again, supplying the new protocol instance.

(Reconnecting) ClientService:

# clientservice.py
# twistd -y clientservice.py

from twisted.application import service, internet
from twisted.internet.protocol import Factory
from twisted.internet import endpoints, reactor
from twisted.protocols import basic
from twisted.logger import Logger


class MyProtocol(basic.Int16StringReceiver):
    _log = Logger()

    def stringReceived(self, data):
        self._log.info('Received data from {peer}, data={data}',
                       peer=self.transport.getPeer(),
                       data=data)


class MyFactory(Factory):
    _log = Logger()
    protocol = MyProtocol

    def stopFactory(self):
        # Let service know that its current connection is stale
        self.service.on_connection_lost()


class MyService(internet.ClientService):
    def __init__(self, endpoint, factory):
        internet.ClientService.__init__(self,
            endpoint,
            factory,
            prepareConnection=self.on_prepare_connection)

        factory.service = self  # Attach this service to the factory
        self.connection = None  # Will hold the current protocol instance

    def on_prepare_connection(self, connection):
        self.connection = connection # Attach protocol to service
        self._log.info('Connected to {peer}',
                       peer=self.connection.transport.getPeer())
        self.send_message('Hello from prepare connection!')

    def on_connection_lost(self):
        if self.connection is None:
            return

        self._log.info('Disconnected from {peer}',
                       peer=self.connection.transport.getPeer())
        self.connection = None

    def send_message(self, message):
        if self.connection is None:
            raise Exception('Service is not available')

        self.connection.sendString(bytes(message, 'utf-8'))


application = service.Application('MyApplication')
my_endpoint = endpoints.clientFromString(reactor, 'tcp:localhost:22222')
my_factory = MyFactory()
my_service = MyService(my_endpoint, my_factory)
my_service.setServiceParent(application)

Slightly modified echo server from twisted examples:

#!/usr/bin/env python
# echoserv.py
# python echoserv.py

# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.

from twisted.internet.protocol import Protocol, Factory
from twisted.internet import reactor
from twisted.protocols import basic

### Protocol Implementation

# This is just about the simplest possible protocol
class Echo(basic.Int16StringReceiver):
    def stringReceived(self, data):
        """
        As soon as any data is received, write it back.
        """
        print("Received:", data.decode('utf-8'))
        self.sendString(data)


def main():
    f = Factory()
    f.protocol = Echo
    reactor.listenTCP(22222, f)
    reactor.run()

if __name__ == '__main__':
    main()
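
Applied back to the question's Modbus poller, a minimal sketch of the same idea might look like the following (untested; it assumes pymodbus's ModbusClientProtocol as in the question, and a Twisted version that supports prepareConnection):

# Hypothetical rework of the question's ModbusPollingService using
# prepareConnection instead of repeated whenConnected() calls.
from twisted.application import internet
from twisted.internet import endpoints, reactor
from twisted.internet.protocol import ClientFactory
from pymodbus.client.async import ModbusClientProtocol


class ModbusPollingService(internet.ClientService):
    def __init__(self, addrstr, numregs=5):
        self.numregs = numregs
        internet.ClientService.__init__(self,
            endpoints.clientFromString(reactor, addrstr),
            ClientFactory.forProtocol(ModbusClientProtocol),
            prepareConnection=self._connected)

    def _connected(self, p):
        # Called by ClientService with the fresh protocol on every
        # (re)connection, so a stale protocol is never cached here.
        self._mbp = p
        self._poll()

    def _poll(self):
        d = self._mbp.read_holding_registers(0, self.numregs)
        d.addCallback(self._regs)
        # On failure, just log and stop polling this protocol;
        # _connected will restart polling on the next connection.
        d.addErrback(lambda f: self._log.failure('poll failed', failure=f))

    def _regs(self, res):
        # Do the real work of storing the registers here,
        # then schedule the next poll.
        reactor.callLater(1, self._poll)
        return res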
K.Novichikhin

You're not calling self.transport.loseConnection() anywhere that I can see in response to your polling failure, so as far as Twisted can tell, you aren't actually disconnected. It may notice later, when you stop doing anything with the old transport, but by then you've lost track of things.
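
Applied to the code in the question, that would mean forcing the dead transport closed in the error path. A minimal sketch, reusing the question's _connfail:

    def _connfail(self, failstat):
        self._log.failure('connection failure', failure=failstat)
        # Explicitly drop the dead transport so ClientService notices
        # the disconnect and starts reconnecting; otherwise
        # whenConnected() keeps handing back the same stale protocol.
        if self._mbp is not None:
            self._mbp.transport.loseConnection()
        self._mbp = None
        self._pollWhenConnected()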

cwillu