
I own a medium-sized nodejs application (https://github.com/pgonzaleznetwork/sfdc-happy-soup) that heavily relies on 3rd party APIs.

In a nutshell, the app calls different Salesforce (low-code platform) APIs, both REST and SOAP, based on the input parameters AND based on the responses of some of the previous calls. This means that the execution path and the number of API calls for one request can be completely different from another request. Depending on the input parameters, the app could make more than 300 API calls in one request (via async processing of jobs).

I'm having a hard time figuring out how to create tests (mocha, jest, etc) for this app given the complexity of the API calls and their conditional logic.

Most of the examples online cover simple endpoints with a generic or predictable response, which can be tested with something like

expect(response.status).toBe(200)

But my application has just too many different APIs, with a wide range of responses.

I could mock all these endpoints separately via unit tests, but frankly, that seems like way too much work and it would not give me end-to-end integration tests.

I could also mock the entire response of a complete job, and just create tests to assert that the code that processes that response works correctly; however, I'm more interested in testing the data transformation code that aggregates the results from all these API calls.

And to make matters worse, most of the code that deals with the APIs is encapsulated in closures, which means I can't easily test them on their own. Testing only the outer function would be impossible without having a way to mock the responses of the inner functions.
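To illustrate the pattern (names and URLs here are made up for the example, not taken from the actual project), the code looks roughly like this: the function that actually calls the API is defined inside a closure, so tests can only reach the outer function.

```javascript
// Hypothetical sketch of the closure pattern described above. The
// inner function that performs the API call is not exported and not
// reachable from a test file.
function buildDependencyTree(connection) {
  // trapped in the closure; cannot be tested or mocked on its own
  async function fetchMetadata(objectName) {
    return connection.request(
      `/services/data/v52.0/sobjects/${objectName}/describe`
    );
  }

  return async function run(rootObject) {
    const metadata = await fetchMetadata(rootObject);
    return metadata; // ...plus the aggregation logic
  };
}
```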

Has anyone been successful in creating automated integration tests for apps that heavily rely on and transform 3rd-party data? If so, can you share lessons learned, patterns, etc.?

Pablo Gonzalez
    Making real calls to third-party API in your own tests doesn't sound good. This will make tests slow, unreliable and potentially destructive. For many requests a reasonable approach would be to snapshot server responses and use them for mocks instead of writing fixtures manually. Nock seems to be capable of that but I'm not sure if it's the best tool for the job. This also provides a dataset for API health checks which can be a separate task. – Estus Flask Nov 08 '20 at 11:43
  • Take a look at node-tdd. We wrote that for exactly your use case. Basically you'd record your requests once and subsequent test runs would just use the recordings. We have thousands of tests with hundreds of thousands of recorded / mocked requests. It works really well. Uses nock and currently only works with mocha – vincent May 09 '21 at 04:55

1 Answer


Slowness and unreliability

I understand that you have many API calls. If you actually call the third-party APIs, then you will have to wait for them. Even worse, if the tests have irreversible effects (inserting/updating/removing information), then running them against the real services will make your test suite unreliable and potentially destructive. Skipping automated tests for write operations is not acceptable either, so actually calling the APIs is something you need to avoid.

An alternative

Okay, but how do you avoid it? You will want to mock. Remember, you are not testing the third-party APIs; you are testing your usage of them. So your tests should exercise meaningful scenarios.
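A minimal sketch of what this looks like, assuming the HTTP client is passed in as a parameter (the function name, endpoint, and response shape below are illustrative, not from the actual project):

```javascript
// The function under test receives the HTTP client as a parameter,
// so a test can pass a fake that returns canned Salesforce-style JSON
// instead of hitting the network.
async function getOrgLimits(httpGet) {
  const body = await httpGet('/services/data/v52.0/limits');
  return body.DailyApiRequests.Remaining;
}

// In production you would pass a thin wrapper around fetch/axios;
// in a test you pass a stub:
const fakeGet = async () => ({ DailyApiRequests: { Remaining: 14200 } });
```

The same idea scales to SOAP calls: the seam is the transport function, not each individual endpoint.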

Planning meaningful scenarios

Each API call that you have can play out in multiple ways, depending on what your expectations are. Choose a representative case for each class of possible responses (including the lack of a response, i.e. refusal/timeout) and your full scenario tree will be definable, like:

if the first API call results in X, then the second API call will be Y with such-and-such parameters; and if that call results in Z, ...

Naturally, you will be able to prune a few branches, for example when your first API request times out. In that case you may want to end the scenario there, checking only that the timeout is properly handled.

Refactor your code

You have mentioned that the code you need to test lives inside closures. This is a clear sign that the code is not well-suited for unit tests. You will need to refactor it: instead of relying on the enclosing scope, each function to be tested should be a separate function that receives parameters, including the values it formerly captured from the closure. This separates the functions-to-be-tested from their context, so you can test them on their own.
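Sketching the refactor on a hypothetical example (the names and URL are assumptions for illustration): the inner worker becomes a standalone function that receives its former closure state as parameters, while the outer function merely wires it up.

```javascript
// Standalone and exportable: everything it needs arrives as a
// parameter, so a test can pass a stub for requestFn.
async function fetchMetadata(requestFn, apiVersion, objectName) {
  return requestFn(
    `/services/data/v${apiVersion}/sobjects/${objectName}/describe`
  );
}

// The outer function only binds the real dependencies.
function buildDependencyTree(connection) {
  return (rootObject) =>
    fetchMetadata(connection.request, '52.0', rootObject);
}
```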

Too many scenarios?

It's possible that you have too many scenarios, which would mean a lot of manual labor. If that's the case, then you may want to create an array of rules. A rule would look like this:

let dummyRule = {
    called: 'API1',
    formerResult: 'someresult1',
    toBeCalled: 'API2',
    expectedResult: 'expectedresult2',
    parameters: {/*some object*/}
};

and together the rules would form a graph representation of your test cases. You can do a depth-first or breadth-first traversal of the graph to generate all the concrete scenarios. Based on the scenarios you can generate your test functions at the source-code level, assuming that you are able to separate all the custom logic into reusable functions. After generating your test scenarios, you will only have to integrate the generated file into your project.
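The traversal itself can be sketched in a few lines. This is a minimal depth-first version over the rule shape above (the `API1`/`API2` names are the dummy ones from the rule example; a rule with `toBeCalled: null` is treated as a leaf):

```javascript
// Collect every chain of rules from `current` down to a leaf;
// each complete chain is one scenario.
function generateScenarios(rules, current, path = []) {
  const next = rules.filter((r) => r.called === current.toBeCalled);
  const chain = [...path, current];
  if (next.length === 0) return [chain]; // leaf: one full scenario
  return next.flatMap((r) => generateScenarios(rules, r, chain));
}

const rules = [
  { called: 'API1', formerResult: 'ok',      toBeCalled: 'API2' },
  { called: 'API1', formerResult: 'timeout', toBeCalled: null  },
  { called: 'API2', formerResult: 'ok',      toBeCalled: 'API3' },
];

const scenarios = rules
  .filter((r) => r.called === 'API1')
  .flatMap((r) => generateScenarios(rules, r));
// two scenarios: [API1 ok → API2 ok] and [API1 timeout]
```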

Lajos Arpad
  • "*After generating your test scenarios, you will only have to integrate the generated file into your project.*" - it's generally not recommended to commit generated code. Why not integrate the rules and the test generator themselves into the project? – Bergi Jan 13 '21 at 13:46
  • @Bergi I would integrate the rules and test generator into the project as well. But I would not generate the code upon each run of the unit test. I would win time by generating the test code once per rule/test generator change. – Lajos Arpad Jan 14 '21 at 09:54