
I am trying to get CPU logs from different computers in a system. I have a Python script which fetches the logs for a single computer at different timestamps. Integrating this into a system of computers would require logs from the different computers at the same timestamp.

Let's say I have Computer A and Computer B. I execute the command on my server at timestamp X; the logs are collected at time X on Computer A and the output file is received on the server. The log from Computer B is then collected at time X + delta X (since delta X has passed during the execution on Computer A), and its output is received on the server.

I want both logs to be collected at the same timestamp. I thought of multithreading, but creating too many threads on a large system would not work. Is there a better alternative?

Thanks

PS: I am using psutil to collect the logs and SSH to log in to the remote server.
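A minimal sketch of one alternative, assuming passwordless SSH is already configured and using hypothetical host names and a hypothetical remote script path: a bounded thread pool launches the collection on every host at (nearly) the same moment, while `max_workers` caps the number of live threads so a large fleet never spawns one thread per machine:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["computerA", "computerB"]         # hypothetical host names
    REMOTE_SCRIPT = "/opt/monitor/cpu_log.py"  # hypothetical remote path

    def collect(host):
        """Run the psutil script over SSH, then copy its output back."""
        subprocess.run(["ssh", host, "python", REMOTE_SCRIPT], check=True)
        subprocess.run(
            ["scp", f"{host}:/tmp/cpu_log.txt", f"{host}_cpu_log.txt"],
            check=True)

    # max_workers bounds concurrency: even with hundreds of hosts,
    # at most 10 SSH sessions run at once, not one thread per host.
    with ThreadPoolExecutor(max_workers=10) as pool:
        pool.map(collect, HOSTS)

Since each thread spends almost all of its time blocked on an SSH subprocess, the GIL is not a bottleneck here; the pool size, not the host count, determines how many threads exist.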

  • Why reinvent the wheel? I assume you are using Linux; set up a logging server and redirect **all** logging to this server. – stovfl May 09 '17 at 14:20
  • @stovfl Yeah, I am using a Linux server to log in to the remote Linux server, running the script on the remote server, and using scp to bring the output logs back to the host computer. – Falcon May 09 '17 at 14:35
  • Maybe this is what you are looking for: http://stackoverflow.com/questions/68335/how-to-copy-a-file-to-a-remote-server-in-python-using-scp-or-ssh – stovfl May 09 '17 at 17:43
  • @stovfl I already did that; I am looking for a way to execute an operation on the servers simultaneously, so that the different files have the same timestamp of program execution. – Falcon May 09 '17 at 18:23
  • I don't think the _**same timestamp of execution**_ is possible. Could you live with setting **one** timestamp after execution? Why is the _**same timestamp**_ important? – stovfl May 09 '17 at 18:40
  • @stovfl I want to collect CPU logs from different computers at the same time, if possible, so I could analyze which CPU is idle and assign it work using job scheduling. I think the only alternative left now is to store the IP addresses in a list and traverse the list iteratively to make an SSH connection and fetch the logs. – Falcon May 09 '17 at 19:09
  • I am totally confused by your term **log**. Isn't this a solution: http://stackoverflow.com/questions/17530524/python-using-remote-managers-and-multiprocessing – stovfl May 09 '17 at 19:49
  • Log here means CPU stats like idle, iowait, interrupt time, etc. Let me explain briefly. The code execution proceeds chronologically. If I make SSH connections to 10 servers, I connect to server 1, execute my program, and use scp to copy the output text file back to my host computer, right? Only after that do I move on to the remaining 9 systems. Now, if you observe the timestamps of the command execution from computer 1 to computer 10, there will be a significant time lag, right? I want the operation to occur on all 10 computers simultaneously, if that is possible. – Falcon May 09 '17 at 20:20
  • I could imagine this: the host receives continuous _log data packages_ from **all** 10 remote servers. On the host, **set event.start**; all data received until **set event.stop** from all 10 remote servers is relevant and should carry the same timestamp (see the sketch below). – stovfl May 09 '17 at 20:55
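A minimal sketch of stovfl's "one timestamp" suggestion, with hypothetical host names and file paths: the host picks a single timestamp for the whole collection round and labels every fetched file with it, so the logs are directly comparable even though the individual transfers finish moments apart:

    import datetime
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["computer1", "computer2"]  # hypothetical host names

    # One timestamp for the entire round, chosen once on the host.
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")

    def fetch(host):
        """Copy the already-collected log back, labelled with the round stamp."""
        subprocess.run(
            ["scp", f"{host}:/tmp/cpu_log.txt", f"{host}_{stamp}.txt"],
            check=True)

    with ThreadPoolExecutor(max_workers=10) as pool:
        pool.map(fetch, HOSTS)

Whether the transfers land microseconds or seconds apart, every file carries the label of the round it belongs to, which is what the idle-CPU analysis actually needs.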

0 Answers