You have several options:
## Stub the NetworkingProvider
Create your service with a custom `NetworkingProvider` implementation.
```swift
// App
var myAppNetworkingProvider: NetworkingProviderConvertible =
    URLSessionConfiguration.ephemeral  // Siesta default
...
Service(baseURL: "...", networking: myAppNetworkingProvider)

// Tests
myAppNetworkingProvider = NetworkStub()
```
Your stubbed `NetworkingProvider` can return a single hard-coded `URLResponse`, or match on `URLRequest` if you want to stub multiple responses at once.
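As a rough illustration, a stub might look something like the sketch below. This is an assumption-laden sketch, not Siesta's actual test code: the names `NetworkStub` and `StubbedRequestNetworking` are invented here, and the exact shapes of `NetworkingProvider`, `RequestNetworking`, and `RequestTransferMetrics` should be checked against the Siesta version you're using.

```swift
import Foundation
import Siesta

// Hypothetical stub; verify protocol signatures against your Siesta version.
struct NetworkStub: NetworkingProvider {
    var statusCode = 200
    var responseData = Data()

    func startRequest(
            _ request: URLRequest,
            completion: @escaping RequestNetworkingCompletionCallback)
            -> RequestNetworking {
        // Return the same canned response for every request.
        // To stub multiple responses, match on request.url here instead.
        let response = HTTPURLResponse(
            url: request.url!,
            statusCode: statusCode,
            httpVersion: "HTTP/1.1",
            headerFields: ["Content-Type": "application/json"])
        completion(response, responseData, nil)
        return StubbedRequestNetworking()
    }
}

struct StubbedRequestNetworking: RequestNetworking {
    func cancel() { }  // Nothing in flight, so nothing to cancel

    var transferMetrics: RequestTransferMetrics {
        RequestTransferMetrics(
            requestBytesSent: 0, requestBytesTotal: nil,
            responseBytesReceived: 0, responseBytesTotal: nil)
    }
}
```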
This is the best option for most apps. You can see an example of it in Siesta’s own performance tests. It’s simple, fast, and gives fine-grained control, but still lets you test with realistic Siesta behavior.
## Stub the network
Siesta works with network stubbing libraries like OHHTTPStubs, Mockingjay, and Nocilla. (Siesta itself uses Nocilla for its own internal regression tests, although the library has internal race conditions and is not especially well maintained as of this writing, so I can’t wholeheartedly recommend it.)
Stubbing the network itself has the advantage of testing the full interaction of your app with the underlying networking API. This approach may be best for full-on integration tests, particularly if you want to record and replay responses from a real API.
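For a sense of what this looks like, here is a hedged sketch using OHHTTPStubs in a test setup. The host name and JSON payload are placeholders, and the module names (`OHHTTPStubs` / `OHHTTPStubsSwift`) vary by version and integration method, so treat this as a shape to adapt rather than copy:

```swift
import OHHTTPStubs
import OHHTTPStubsSwift  // module names vary by OHHTTPStubs version

// Intercept every request to the (hypothetical) API host and
// answer it with canned JSON, without touching the real network.
stub(condition: isHost("api.example.com")) { _ in
    HTTPStubsResponse(
        jsonObject: ["message": "hello"],
        statusCode: 200,
        headers: ["Content-Type": "application/json"])
}
```

Because the stub sits below `URLSession`, your Siesta `Service` needs no changes at all in tests, which is exactly what makes this approach suitable for integration-style testing.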
## Custom Resource protocol
Because Swift supports retroactive modeling, `Resource` doesn’t need to be (or implement) a testable protocol. You can create one of your own:
```swift
protocol ResourceProtocol {
    // Resource methods your app uses
}

// No new methods; just adding conformance
extension Resource: ResourceProtocol { }
```
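Your app code would then depend on the protocol instead of the concrete class, so tests can inject a mock. A minimal sketch, with `ProfileViewModel` and `MockResource` as purely illustrative names:

```swift
// Hypothetical consumer: depends only on ResourceProtocol,
// so either a real Resource or a MockResource can be injected.
class ProfileViewModel {
    let profileResource: ResourceProtocol

    init(profileResource: ResourceProtocol) {
        self.profileResource = profileResource
    }
}

// Test double: hard-code or record whatever behavior the
// methods you declared in ResourceProtocol require.
class MockResource: ResourceProtocol {
}
```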
This sounds the most like what you’re looking for in your original question. However, I don’t especially recommend it:
- It’s the most complex to implement — and the most error-prone. You’ll find it surprisingly difficult to accurately mimic all of Siesta’s behavior. Trust me: the Resource API seems innocent enough at first, but you’ll find yourself reimplementing half the library if you try to exercise your whole app this way.
- It’s likely to miss problems and not catch regressions. Many of the dangerous spots using Siesta have to do with the exact sequence of calls: which events happen and in what order, what happens immediately vs. on a subsequent turn of the main run loop, what observer/owner relationships do or don’t create retain cycles, etc. You’ll have to make assumptions about all these things, and you’ll end up testing your code against your assumptions — not against the library’s real behavior.
In short, compared to the other approaches, it’s higher effort for lower value. It’s certainly not an effective way to do regression testing.
That said, if you are adhering to a purist “don’t test past the boundaries” unit test philosophy, then this is the way to do it.