
In the OOP universe, there is a lot of information on how to design and refactor code to make it unit-test friendly. But I wonder how to apply those principles and practices (making mocking easier, etc.) to shell scripting, which is obviously a very different kind of programming.

I have to tackle a huge code base: many executable and non-executable procedures, large functions, a big global state, many environment variables, and everywhere (unnecessary) interprocess communication, file handling through redirection/pipelines, and (unnecessary) use of external utilities.

How do I refactor shell code (or design it from the start) so that I can write "good" automated unit tests with a framework like bats and a mocking plugin?

KamilCuk
D630
    Why on earth are you using `bash` for something this big? – chepner Feb 05 '19 at 12:17
    @chepner You typically don't get to decide on how things should have been done historically, that's one of the problems with the lack of practical solutions for time travel. :-) – Per Lundberg Sep 08 '20 at 10:32

4 Answers


Unit testing is for finding bugs in isolated code. Typical shell code, however, is dominated by interactions with other executables or the operating system. The problems that lie in those interactions are of the kind: am I calling the right executables in the right order, with the arguments in the right order and with properly formatted argument values, and are the outputs in the form I expect them to be? To test all this, you should apply integration testing rather than unit testing.

However, there is shell code that is suitable for unit testing: for example, code performing computations within the shell, or string manipulations. I would even consider shell code that calls certain fundamental, function-like tools such as basename suitable for unit testing (interpreting such tools as part of the 'standard library', if you like).

How do you make those parts of a shell script that are suitable for unit testing actually testable? One of the most useful approaches in my experience is to separate interactions from computations. That is, put the computational parts into separate shell functions to be tested, or extract the interaction-dominated parts into separate shell functions. That saves you a lot of mocking effort.
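As a sketch of this idea (the function names and the backup scenario are made up for illustration), a script could keep the name computation separate from the copy:

```shell
# Hypothetical sketch: the pure computation is its own function, so it
# can be unit-tested without touching the filesystem.

# Computation only: derive a backup file name from a path and a date.
# No side effects; the output depends only on the arguments.
backup_name() {
    local path=$1 date=$2
    printf '%s.%s.bak\n' "${path%.*}" "$date"
}

# Interaction only: performs the actual copy. This part is covered by
# integration tests, not unit tests.
do_backup() {
    cp -- "$1" "$(backup_name "$1" "$(date +%F)")"
}
```

A unit test now only needs to call backup_name with fixed arguments and compare strings; no file system access and no mocking of cp is required.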

Dirk Herrmann

Good question!

IMHO shell scripts often just call other programs to get stuff done, like cp, mv, tar, rsync, ... Even expressions go through the test command when you use [ and ] (a builtin in bash, historically an external binary), e.g. if [ -f "$file" ]; then ...; fi.

Having that in mind, think about what really happens in the bash script itself: it calls that program with three arguments. So you could write unit tests which check that the bash script calls the desired program with the right arguments, and check the return values / exit codes from the program.
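One way to sketch such a check without any framework (everything here is made up for illustration; a plain rsync call stands in for your real script) is to put a fake program first in PATH and record the arguments it receives:

```shell
#!/bin/sh
# Hypothetical sketch: verify that the code under test calls rsync with
# the expected arguments by shadowing rsync with a stub on PATH.
stub_dir=$(mktemp -d)

# The stub records its arguments instead of copying anything.
cat > "$stub_dir/rsync" <<'EOF'
#!/bin/sh
printf '%s\n' "$@" > "${STUB_LOG:?}"
exit 0
EOF
chmod +x "$stub_dir/rsync"

# Run the code under test with the stub in front of PATH.
# (Replace the sh -c line with a call to your actual script.)
STUB_LOG="$stub_dir/rsync.args" PATH="$stub_dir:$PATH" \
    sh -c 'rsync -a src/ host:/dest/'

# Assert on the recorded call: was -a passed?
grep -qx -- '-a' "$stub_dir/rsync.args" && echo "rsync was called with -a"
```

The same pattern is what bats mocking plugins automate: intercept the external call, record it, and assert on the recorded invocation afterwards.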

You definitely don't want to put things in unit tests for your shell script that are effectively done by another program (e.g. checking that rsync really copied files from machine A to machine B).

Just my two cents

Mirko Steiner

TL;DR

Here is a template repository* with continuous-integration unit tests of shell files using Travis CI: https://github.com/a-t-0/shell_unit_testing_template

Since the repo might some day disappear, here is the idea, for reproducibility. (Note that this is not necessarily the best way to do this; it is just a way I found to be working.)

File Structure

The shell scripts are inside a /src/ folder, and the unit tests are inside the /test/ folder. In /src/ there is a main.sh which can call other shell scripts. The other shell scripts can consist of separately testable functions, for example the file active_function_string_manipulation.sh (included below).

To make it work, I needed to install support for .bats files, which are the unit-test files. This was done with the file install-bats-libs.sh, with content:

mkdir -p test/libs

git submodule add https://github.com/sstephenson/bats test/libs/bats
git submodule add https://github.com/ztombol/bats-support test/libs/bats-support
git submodule add https://github.com/ztombol/bats-assert test/libs/bats-assert

Shell Script

An example of a shell script in /src/ is active_function_string_manipulation.sh:


##################################################################
# Purpose: Converts a string to lower case
# Arguments:
#   $@ -> String to convert to lower case
##################################################################
function to_lower()
{
    local str="$*"
    local output
    output=$(tr '[:upper:]' '[:lower:]' <<< "${str}")
    echo "$output"
}
to_lower "$@"

Unit Test

Unit tests are run by a file called test.sh in the root directory of the repository. It has content:

# Run this file to run all the tests, once
./test/libs/bats/bin/bats test/*.bats

An example is the test of active_function_string_manipulation.sh with /test/test_active_function_string_manipulation.bats:

#!./test/libs/bats/bin/bats

load 'libs/bats-support/load'
load 'libs/bats-assert/load'

@test "running the file in /src/active_function_string_manipulation.sh." {
    input="This Is a TEST"
    run ./src/active_function_string_manipulation.sh "This Is a TEST"
    assert_output "this is a test"
}

Travis CI

Travis CI is configured with a YAML file which creates an environment and runs the tests automatically. The file is named .travis.yml and contains:

language: bash

script:
    - ./test.sh

Disclosure*

I am involved in building this repository; it is the "for dummies/me" implementation of the instructions in this article.

Note

I currently do not have much insight into how well this system scales, and I cannot yet estimate whether it has the potential to be a "production-ready" system or whether it is suitable for such big projects; it is merely an automated unit-testing environment for shell code.

a.t.

I assume it is not more difficult than this: wrap all your code in functions, then create a test_Utils.sh file and a test file. As long as all your functions are wrapped, you can just use source main.sh in your test file. Prepare two utility functions: the first is an AssertEquals; the second is a tiny logger that can print in green or red (the red path can also exit). This gives you something like the following. Main.sh:

build_destination_file() {
    local fichier_source="$1"
    local extension="${fichier_source##*.}"
    echo "${fichier_source%.*}2.$extension"
}

Test.sh (following the Arrange-Act-Assert pattern):

    source "main.sh"

    fichier_source="fichier.txt"
    expected_destination="fichier2.txt"
    destination=$(build_destination_file "$fichier_source")
    AssertEquals "build_destination_file" "$destination" "$expected_destination"
    Err=$?
    print_test_result "build_destination_file" $Err
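The helpers AssertEquals and print_test_result are not spelled out in the answer; a minimal sketch of what test_Utils.sh could contain (the function names come from the answer, the bodies are my assumption):

```shell
# Minimal sketch of test_Utils.sh; the names AssertEquals and
# print_test_result are taken from the answer, the bodies are assumed.

# Return 0 if actual equals expected, non-zero otherwise.
# The first argument (the test name) is kept for symmetry with the caller.
AssertEquals() {
    local name=$1 actual=$2 expected=$3
    [ "$actual" = "$expected" ]
}

# Tiny logger: green PASS or red FAIL; the red path exits.
print_test_result() {
    local name=$1 status=$2
    if [ "$status" -eq 0 ]; then
        printf '\033[32mPASS\033[0m %s\n' "$name"
    else
        printf '\033[31mFAIL\033[0m %s\n' "$name"
        exit 1
    fi
}
```

With these two helpers sourced alongside main.sh, the Test.sh snippet above runs as written.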

A last file could launch the tests as an option before running the main script.

As you can see, it is no more than that, and friendly enough, don't you think?

These examples do not carry a lot of meaning, but keep in mind that every test has to reflect the expression of a need through its cases. If someone asks that your program has to ... just write a simple test:

    check_file_exists "fichier_source.txt"
    Err=$?
    print_test_result "check_file_exists" $Err

And you will have a proof, and everything you need to trace and locate errors.