Server-driven paging can make client applications "fragile", as follows...
Suppose that a server doesn't support server-driven paging (or applies it only to responses above a size threshold that the client's queries have so far never reached).
Now suppose that a client has been developed without regard for server-driven paging. That client may easily be "broken" (fail to receive all applicable response data) if either of the following occurs (a sketch of such a naive client follows the list):
(1) The server is upgraded to support server-driven paging, or
(2) Response sizes grow past the server's paging threshold as the volume of data at the server increases.
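For illustration, here is a minimal sketch of the kind of client that breaks this way (Python, using the requests library; the property names "value" and "nextLink" are assumptions chosen for illustration, not taken from the text above). It reads only the body of the first response and never checks for a next-link, so the day the server starts paging it silently drops the rest of the data.

    import requests

    def fetch_all_naive(url):
        """Naive client: assumes the entire result arrives in one response."""
        response = requests.get(url)
        response.raise_for_status()
        body = response.json()
        # Only the first page is ever read; any "nextLink" (assumed name)
        # the server might add later is silently ignored.
        return body.get("value", [])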
Now things can get worse: depending on interpretation of the spec, next-links can appear at any level within a result, e.g. nested inside an expanded child collection of an individual entity, not just at the top level of the response.
This means that clients have to mix parsing with network-level access, i.e. a client that wants to be sure it receives all of the requested data must be prepared to walk the initial response "graph" (e.g. the JSON result), issuing additional network requests to follow every next-link that may appear anywhere in that graph. This is such an onerous requirement that few clients are likely to implement it properly. The result: a "fragile" client which may silently fail to retrieve all of the expected response content.
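A client that does try to honor next-links at any depth ends up with something like the following sketch (Python again; "nextLink" and "value" remain assumed property names, and the merge strategy is illustrative only). It has to walk the whole decoded response recursively and issue a follow-up request wherever a next-link turns up.

    import requests

    NEXT_LINK_KEY = "nextLink"   # assumed name of a next-link property
    PAGE_ITEMS_KEY = "value"     # assumed name of a page's item list

    def follow_next_links(node):
        """Recursively walk a decoded JSON result, fetching every next-link.

        Any object carrying a next-link is treated as a paged collection:
        its remaining pages are fetched and appended to its item list. The
        same walk is applied to nested objects and arrays, since a next-link
        may appear at any level of the response graph.
        """
        if isinstance(node, dict):
            while NEXT_LINK_KEY in node:
                next_url = node.pop(NEXT_LINK_KEY)
                page = requests.get(next_url).json()
                node.setdefault(PAGE_ITEMS_KEY, []).extend(
                    page.get(PAGE_ITEMS_KEY, []))
                if NEXT_LINK_KEY in page:
                    node[NEXT_LINK_KEY] = page[NEXT_LINK_KEY]
            for value in node.values():
                follow_next_links(value)
        elif isinstance(node, list):
            for item in node:
                follow_next_links(item)
        return node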
Another case to consider is a client that uses a batch request to bundle several queries (to avoid round-trips). This can easily be defeated by a server that returns next-links in each of the batched responses: the client may be unable to reduce round-trips as much as expected if many next-links are returned (or if the server uses an unreasonably small page size).
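As a rough illustration (the numbers below are invented, purely to show the effect), if every one of the batched queries comes back paged, the round-trip count balloons well past the single request the client hoped for:

    batch_queries = 10        # queries bundled into one batch request
    pages_per_query = 10      # pages the server splits each result into
    # The batch response carries only the first page of each query;
    # every remaining page costs its own follow-up request.
    total_round_trips = 1 + batch_queries * (pages_per_query - 1)
    print(total_round_trips)  # 91 requests instead of 1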
The above difficulties with next-links suggest a redesign of server-driven paging may be warranted.