This operation matches the standard semantics of the GET method, and therefore the expectations of various software. For example:
- many HTTP clients know that they can automatically retry GET requests in case of errors
- it’s easier to cache responses to GET
If your book IDs are independent from library IDs, then it may be better to drop the reference to the library, and do just
```
GET /api/books/3,5,10,33/pages
```
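Building such a URL from a list of IDs on the client side is a one-liner. A minimal Python sketch (the host name is hypothetical, matching the example path above):

```python
# Build the comma-separated books URL from a list of IDs.
# "example.com" is a placeholder host, not a real endpoint.
book_ids = [3, 5, 10, 33]
url = "https://example.com/api/books/{}/pages".format(
    ",".join(str(i) for i in book_ids)
)
print(url)  # https://example.com/api/books/3,5,10,33/pages
```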
> Book IDs in the URL will get really messy if there were, say, 100-200 books to retrieve
If every book ID is 6 digits long, this adds up to just 700-1400 bytes. This is well within the range supported by any good HTTP client. To really push the practical limits on URL length, you would need many more books — but do you really need (or want) to support retrieval of so many pages at once?
(Alternatively, though, your book IDs might be much longer — perhaps UUIDs.)
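The length arithmetic above is easy to check: each 6-digit ID costs 6 bytes plus a comma separator, so 100-200 IDs come in just under 700-1400 bytes.

```python
# Verify the rough URL-length arithmetic for 6-digit book IDs.
ids_100 = ",".join(["123456"] * 100)  # 100 IDs, 99 separating commas
ids_200 = ",".join(["123456"] * 200)  # 200 IDs, 199 separating commas
print(len(ids_100))  # 699 bytes
print(len(ids_200))  # 1399 bytes
```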
If you do run into limits on URL length, it’s OK to use POST to a dedicated “endpoint”:
```
POST /api/books/bulk-pages

{"books_id": [3, 5, 10, 33]}
```
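Constructing such a request is straightforward with the standard library alone. A sketch using `urllib.request` that prepares (but does not send) the POST; the host is again a placeholder:

```python
import json
import urllib.request

# Prepare a bulk-pages POST request without sending it.
# "example.com" is a placeholder host; the path matches the example above.
body = json.dumps({"books_id": [3, 5, 10, 33]}).encode("utf-8")
req = urllib.request.Request(
    "https://example.com/api/books/bulk-pages",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
```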
POST is defined in RFC 7231 § 4.3.3 as a sort of “catch-all” method:

> The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics. For example, POST is used for the following functions (among others):
>
> - Providing a block of data, such as the fields entered into an HTML form, to a data-handling process;
As a curiosity, there has been a recent attempt to standardize a SEARCH method that would allow request payloads like POST, but also be safe and idempotent like GET. Unfortunately, that effort has stalled, so you probably shouldn’t try to use SEARCH now.
Technically, the protocol allows you to send a payload even with a GET request, but as RFC 7231 § 4.3.1 notes, this is unusual and may cause trouble:
> A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
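So on the server side it is safest to keep the IDs in the path and parse them there. A minimal sketch for the comma-separated segment from the earlier GET example; `parse_book_ids` is an illustrative helper name, not part of any framework:

```python
# Parse and validate a comma-separated ID segment such as "3,5,10,33".
def parse_book_ids(segment: str) -> list:
    ids = []
    for part in segment.split(","):
        if not part.isdigit():
            # Reject anything that is not a plain non-negative integer.
            raise ValueError("invalid book ID: " + repr(part))
        ids.append(int(part))
    return ids

print(parse_book_ids("3,5,10,33"))  # [3, 5, 10, 33]
```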