
I'm maintaining one huge bash script, written by many people over time. The whole script is a set of functions that are executed one after another when the script runs (some logic at the start determines what OS it's running on, sets some variables, and so on).

I'm looking for a way to test each function (without going over it manually each time): some framework that would load the whole script and allow me to write tests for each function, passing it different variables, manipulating variables, and so on.

For example, there is a function "check_file". This function goes over files (stored in a variable at the start of the script), makes some changes to each file (adds a word after a specific word, adds a whole line with parameters if it's not there, and so on), and then moves on to another function. This function also uses other "helper" functions that are elsewhere in the script (so it can't be taken out on its own).

I would need to write a test that passes in different files (a set of test files with the possible variations that could occur) and checks whether the files were changed correctly.

The above is just an example, but I need something like that. I really would not like to reinvent the wheel if something like this already exists.
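For what it's worth, bash test frameworks do exist (bats-core and shunit2 are the common ones), but all of them need to be able to `source` the script without triggering its main run sequence. A minimal sketch of that guard pattern; the `check_file` body here is an invented stand-in for illustration, not the real function:

```shell
#!/usr/bin/env bash
# Sketch of a "source guard": the script's main sequence only runs when
# the file is executed directly, so a test harness can source it and
# call individual functions. check_file below is a simplified stand-in.

check_file() {
  local f=$1
  # add the parameter line only if it is missing (idempotent edit)
  grep -q '^param=' "$f" || echo 'param=default' >> "$f"
}

# Guard: true only when executed directly, false when sourced by tests.
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
  # --- tiny self-check standing in for the real main sequence ---
  tmp=$(mktemp)
  printf 'other=1\n' > "$tmp"
  check_file "$tmp"
  grep -q '^param=default' "$tmp" && echo "PASS" || echo "FAIL"
  rm -f "$tmp"
fi
```

With that guard in place, a bats-core or shunit2 test file can `source` the big script, set up fixture files, call one function, and diff the result against an expected copy.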

VladoPortos
  • There are no shortcuts. You will need to familiarize yourself with the different functions regardless. If they are all in a single file, that makes it difficult to compartmentalize for testing. If the functions are in separate files that get sourced into the main script, then you can simply write a unit test for each function. There is no silver bullet or existing code that says *"Push this button to test your script."* The closest thing is `bash -x scriptname` (or `-xe`), then redirecting `stdout` (and `stderr`) to a file to capture the output and stepping through it. – David C. Rankin Feb 17 '19 at 12:52
  • @DavidC.Rankin yep, sadly all functions are in one script (it's massive). I'm familiar with the functions and what they do, but I'm trying to put automatic testing in place so that when a function is updated by me or somebody else, it can easily be tested with multiple variations of the files it can encounter... manual testing is a pain in the ass right now and prone to mistakes... The whole thing would be much better in Ansible, but I can't go and rewrite it in that... – VladoPortos Feb 17 '19 at 13:35
  • Well, one thought is to split the functions out into individual files that would allow unit testing and then `source` them into the main script for operations. Or break the total script into functional blocks, which would at least compartmentalize a subset of the total. Good luck. I'm not sure Ansible offers that much of a benefit. – David C. Rankin Feb 17 '19 at 18:45
  • I suspect a big, if not the biggest, problem is going to be isolating side-effects. It's highly likely that global state is being manipulated. So, a function called with identical arguments is likely to not always cause the same change of state. – jhnc Feb 17 '19 at 19:25
  • @jhnc yes, that's an issue too. I was thinking maybe to leverage Ansible: have a test playbook for each function that would contain a template which would comment out the other functions and change some variables directly in the script, then run the script and use Ansible modules, if possible, to check whether the changes match the expected results... – VladoPortos Feb 18 '19 at 10:23
  • Splitting out single functions is one aspect, but also try to separate computations from interactions: https://stackoverflow.com/questions/54529245/how-to-write-unit-testable-bash-shell-code – Dirk Herrmann Feb 19 '19 at 19:31
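On the side-effect point raised in the comments: one cheap way to keep mutated globals from leaking between test cases is to run each case in a subshell, where any assignment dies when the subshell exits. A sketch with invented variable and function names:

```shell
#!/usr/bin/env bash
# Sketch: isolate global-state side effects by running each test case in
# a subshell. FILES and mutate_globals are illustrative stand-ins, not
# names from the real script.

FILES="/etc/example.conf"     # pretend global that a function mutates

mutate_globals() {
  FILES="changed"
}

run_case() {
  # Parentheses start a subshell; changes to FILES stay inside it.
  ( mutate_globals
    echo "inside: $FILES" )
}

run_case                      # prints: inside: changed
echo "after: $FILES"          # prints: after: /etc/example.conf
```

This doesn't help with on-disk side effects (edited files), which still need per-test fixture copies in a temp directory, but it keeps in-memory state from one case from corrupting the next.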

0 Answers