
I have this script I was able to hack together and make work.

However, I know it's horribly inefficient, and I'd like to use this as an opportunity to learn from others how to handle it efficiently.

Here is the code (brace yourself):

#!/usr/bin/env python
from __future__ import print_function
from functools import wraps
from pprint import pprint
import sys
import requests
import datetime
import acos_client as acos
import json
from influxdb import client as influxdb

# Define InfluxDB Client Information

db = influxdb.InfluxDBClient(host='127.0.0.1', port=8086, username='root', password='root', database='metrics')


# A10 ACOS Client single connection to LB01
# Look into a DICT/LIST of LB's that we could iterate through?

# Define details of LB01 Connection
c = acos.Client('10.10.10.1', acos.AXAPI_21, 'username', 'password')

# Define details of LB02 Connection
d = acos.Client('10.10.10.2', acos.AXAPI_21, 'username', 'password')


# Define a DICT/LIST of ServiceGroup names that we will pull stats for each LoadBalancer?

name = 'SG_ACCOUNT.BUSINESS.COM_443'
name2 = 'SG_ACCOUNT.BUSINESS.COM_80'
name3 = 'SG_ACCOUNT_MESSENGER_80'
name4 = 'SG_ACCOUNT_MESSENGER_81'


# These will poll LB01 with different ServiceGroup Names:
# Has to be a way to maybe iterate through a list of names?

data = c.slb.service_group.stats(name)
data2 = c.slb.service_group.stats(name2)

# These will poll LB02 with different ServiceGroup Names:
# Has to be a way to maybe iterate through a list of names?

data3 = d.slb.service_group.stats(name3)
data4 = d.slb.service_group.stats(name4)

# Take the data for LB01 and ServiceGroup tied to (name) and 'package' it up and send to InfluxDB

for server in data['service_group_stat']['member_stat_list']:
    metricslist = []
    metricsentry = {}
    metricsentry['measurement'] = "LB01"
    metricsentry['tags'] = {}
    metricsentry['fields'] = {}
    metricsentry['tags']['SGNAME'] = name
    metricsentry['tags']['SRVNAME'] = server['server']
    metricsentry['fields']['CURCONNS'] = server['cur_conns']
    metricsentry['fields']['TOTCONNS'] = server['tot_conns']
    metricsentry['fields']['REQBYTES'] = server['req_bytes']
    metricsentry['fields']['REQPKTS'] = server['req_pkts']
    metricsentry['fields']['RESPBYTES'] = server['resp_bytes']
    metricsentry['fields']['RESPPKTS'] = server['resp_pkts']
    metricslist.append(metricsentry)
    # Write the list to InfluxDB
    db.write_points(metricslist)

# Take the data for LB01 and ServiceGroup tied to (name2) and 'package' it up and send to InfluxDB

for server in data2['service_group_stat']['member_stat_list']:
    metricslist2 = []
    metricsentry = {}
    metricsentry['measurement'] = "LB01"
    metricsentry['tags'] = {}
    metricsentry['fields'] = {}
    metricsentry['tags']['SGNAME'] = name2
    metricsentry['tags']['SRVNAME'] = server['server']
    metricsentry['fields']['CURCONNS'] = server['cur_conns']
    metricsentry['fields']['TOTCONNS'] = server['tot_conns']
    metricsentry['fields']['REQBYTES'] = server['req_bytes']
    metricsentry['fields']['REQPKTS'] = server['req_pkts']
    metricsentry['fields']['RESPBYTES'] = server['resp_bytes']
    metricsentry['fields']['RESPPKTS'] = server['resp_pkts']
    metricslist2.append(metricsentry)
    # Write the list to InfluxDB
    db.write_points(metricslist2)

# Take the data for LB02 and ServiceGroup tied to (name3) and 'package' it up and send to InfluxDB

for server in data3['service_group_stat']['member_stat_list']:
    metricslist3 = []
    metricsentry = {}
    metricsentry['measurement'] = "LB02"
    metricsentry['tags'] = {}
    metricsentry['fields'] = {}
    metricsentry['tags']['SGNAME'] = name3
    metricsentry['tags']['SRVNAME'] = server['server']
    metricsentry['fields']['CURCONNS'] = server['cur_conns']
    metricsentry['fields']['TOTCONNS'] = server['tot_conns']
    metricsentry['fields']['REQBYTES'] = server['req_bytes']
    metricsentry['fields']['REQPKTS'] = server['req_pkts']
    metricsentry['fields']['RESPBYTES'] = server['resp_bytes']
    metricsentry['fields']['RESPPKTS'] = server['resp_pkts']
    metricslist3.append(metricsentry)
    # Write the list to InfluxDB
    db.write_points(metricslist3)

# Take the data for LB02 and ServiceGroup tied to (name4) and 'package' it up and send to InfluxDB

for server in data4['service_group_stat']['member_stat_list']:
    metricslist4 = []
    metricsentry = {}
    metricsentry['measurement'] = "LB02"
    metricsentry['tags'] = {}
    metricsentry['fields'] = {}
    metricsentry['tags']['SGNAME'] = name4
    metricsentry['tags']['SRVNAME'] = server['server']
    metricsentry['fields']['CURCONNS'] = server['cur_conns']
    metricsentry['fields']['TOTCONNS'] = server['tot_conns']
    metricsentry['fields']['REQBYTES'] = server['req_bytes']
    metricsentry['fields']['REQPKTS'] = server['req_pkts']
    metricsentry['fields']['RESPBYTES'] = server['resp_bytes']
    metricsentry['fields']['RESPPKTS'] = server['resp_pkts']
    metricslist4.append(metricsentry)
    # Write the list to InfluxDB
    db.write_points(metricslist4)

Ideally, I want to be able to iterate through a list of "LoadBalancer connections", which are the `c` and `d` (acos.Client) lines.

Then I guess I would have multiple lists of "ServiceGroup" names that have to be associated to the LoadBalancer they exist on.

I think you would have something like this:

LB01
    SG1
    SG2

LB02
    SG3
    SG4

Connect to LB01, pull down the data for SG1, format it, and send it to InfluxDB. Connect to LB01, pull down the data for SG2, format it, and send it to InfluxDB. Continue cycling through any SGs associated with LB01.

Then do the same for the next Load Balancer, LB02.

There has to be a way to leverage some lists or dicts, iterate through things, and update InfluxDB without recreating so much of the code each time.

There are lots of Service Groups on each Load Balancer, so this code just doesn't scale to many load balancers with many more service groups.
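Something like this is the shape I'm imagining (just a sketch — the clients here are stand-in strings, not the real acos.Client objects from above):

```python
# Map each load balancer (the InfluxDB measurement name) to its client
# connection and the service groups that live on it.
lbs = {
    'LB01': ('client_for_10.10.10.1', ['SG_ACCOUNT.BUSINESS.COM_443', 'SG_ACCOUNT.BUSINESS.COM_80']),
    'LB02': ('client_for_10.10.10.2', ['SG_ACCOUNT_MESSENGER_80', 'SG_ACCOUNT_MESSENGER_81']),
}

# Flatten into one job per (client, load balancer, service group) so a
# single loop could poll everything.
work = [
    (client, lb_name, sg_name)
    for lb_name, (client, sg_names) in lbs.items()
    for sg_name in sg_names
]
```

Then one loop could call `client.slb.service_group.stats(sg_name)` for each entry instead of four separate blocks.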

Really looking forward to learning from this as it will surely come in handy for future projects.

ddevalco
  • Possible duplicate of [Pythonic iteration over multiple lists in parallel](http://stackoverflow.com/questions/21911483/pythonic-iteration-over-multiple-lists-in-parallel) – DYZ Apr 28 '17 at 21:53
  • Well, if this was asked in the context of InfluxDB, it definitely is a relevant question IMHO. – Grimmy Apr 28 '17 at 21:55
  • I'm not sure what exactly you're asking for, but looking at your code it seems like you could get rid of the repeated code by creating a function that you call with four different sets of arguments. That is, pass a `data` value, a `name`, and maybe a `load_balancer` value (whatever `"LB01"` and `"LB02"` are). – Blckknght Apr 28 '17 at 21:56
  • I do need to investigate how to do the function you're talking about. There seem to be multiple steps to iterate through in each case, and being new to this I struggle with it. I'll keep trying, though. – ddevalco Apr 28 '17 at 22:10

1 Answer

There is a lot of duplication that can be abstracted into a couple of functions. You can also build the lists/dicts inline and save a bit of typing.

def db_write_metrics(db, measurement, name, server):
    # Build a single-point list for this server and write it out.
    metricslist = [
        {
            'measurement': measurement,
            'tags': {
                'SGNAME': name,
                'SRVNAME': server['server'],
            },
            'fields': {
                'CURCONNS': server['cur_conns'],
                'TOTCONNS': server['tot_conns'],
                'REQBYTES': server['req_bytes'],
                'REQPKTS': server['req_pkts'],
                'RESPBYTES': server['resp_bytes'],
                'RESPPKTS': server['resp_pkts'],
            },
        }]
    db.write_points(metricslist)

def db_write_metrics_list(db, data, measurement, name):
    for server in data['service_group_stat']['member_stat_list']:
        db_write_metrics(db, measurement, name, server)

Now those for loops become

db_write_metrics_list(db, data, "LB01", name)
db_write_metrics_list(db, data2, "LB01", name2)
db_write_metrics_list(db, data3, "LB02", name3)
db_write_metrics_list(db, data4, "LB02", name4)
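If the set of load balancers and service groups keeps growing, even those four calls can be generated from a mapping instead of written out by hand. A sketch (the shape of `lb_map` is my own assumption, not part of the original code):

```python
def build_jobs(lb_map):
    # Flatten {measurement: (client, [sg_name, ...])} into one
    # (client, measurement, sg_name) job per service group.
    return [
        (client, measurement, sg_name)
        for measurement, (client, sg_names) in lb_map.items()
        for sg_name in sg_names
    ]

# With the clients and names from the question, the driver loop would be:
# for client, measurement, sg_name in build_jobs({'LB01': (c, [name, name2]),
#                                                 'LB02': (d, [name3, name4])}):
#     db_write_metrics_list(db, client.slb.service_group.stats(sg_name),
#                           measurement, sg_name)
```

Adding another load balancer or service group then only means adding an entry to the dict.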

And, assuming there aren't interdependencies that require this stuff to be done serially, you can put the calls in a thread pool.

import multiprocessing.pool
pool = multiprocessing.pool.ThreadPool(4)
# map passes each tuple as a single argument, so unpack it in the worker.
pool.map(lambda args: db_write_metrics_list(*args),
    [   (db, data, "LB01", name),
        (db, data2, "LB01", name2),
        (db, data3, "LB02", name3),
        (db, data4, "LB02", name4)])
pool.close()
pool.join()
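One gotcha with `pool.map`: it hands each item of the iterable to the worker as a single argument, so a tuple of arguments has to be unpacked inside the call. A minimal standalone demonstration of the pattern (`label` is a hypothetical stand-in for the real write function):

```python
import multiprocessing.pool

def label(measurement, name):
    # Stand-in worker that just combines its two arguments.
    return measurement + '/' + name

pool = multiprocessing.pool.ThreadPool(2)
# Each tuple is one argument to the lambda, which unpacks it for label().
results = pool.map(lambda args: label(*args),
                   [('LB01', 'SG_A'), ('LB02', 'SG_B')])
pool.close()
pool.join()
# results == ['LB01/SG_A', 'LB02/SG_B']
```

`map` preserves input order, so the results line up with the argument tuples.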
tdelaney
  • WOW! OK, that is very informative and I will go work on trying to implement this and testing. I may save the multiprocessing for later. – ddevalco Apr 28 '17 at 22:18
  • @martineau - I might have a couple in there... thanks, I'll fix it. – tdelaney Apr 28 '17 at 22:33
  • This actually worked very well and is a huge improvement. I really want to become more aware of defining functions and the use of them. Just that alone made a big difference. I still believe there is something I can do about how to define the different groups with regard to the data variables and load balancer connections. I will keep thinking about it and trying different ideas. – ddevalco Apr 29 '17 at 17:15