$batch used to return 202 Accepted to indicate that it had accepted the $batch request, even though it might not have read, let alone processed, all the requests in the batch. This 202 Accepted response conflicted with the use of 202 for asynchronous requests; that conflict has been dealt with in ODATA-233.
So now we can distinguish between batch requests that are handled synchronously and those that are handled asynchronously, but that raises the question: in asynchronous mode, when should results from a batch request be returned?
Some of the requests/change sets in the batch might take almost no time to process, while others might take minutes if not longer. Our proposal for asynchronous requests wraps the result of such an asynchronous request in a message/http response. Would we collect all the results and return them only once the complete batch has been processed? Or would we start returning results as soon as the result of the first request is available, turning the asynchronous request into a synchronous request from that point onwards?
At a minimum we would have to describe the expected behavior of an asynchronous batch request, but we might need to consider adding to what is there already and make it possible to return the results of a batch in chunks (not to be confused with chunked encoding).
A proposal could be to allow returning an application/http part at any time with a 202 Accepted and a Location header, indicating that the remainder of the response to the batch request has been accepted, is being dealt with, and will be returned later by querying the URL provided in the Location header, just as for any other async request. Using the example in section 10.5.3 of the OData Core Part 1 document, the response after handling the first request (and including the new 200 OK response on the $batch) would look like:
HTTP/1.1 200 OK
DataServiceVersion: 4.0
Content-Length: ####
Content-Type: multipart/mixed; boundary=batch_36522ad7-fc75-4b56-8c71-56071383e77b

--batch_36522ad7-fc75-4b56-8c71-56071383e77b
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 200 OK
Content-Type: application/atom+xml;type=entry
Content-Length: ###

<AtomPub representation of the Customer entity with EntityKey ALFKI>
--batch_36522ad7-fc75-4b56-8c71-56071383e77b
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 202 Accepted
Retry-After: ###
Location: https://services.odata.org/monitor/12345
--batch_36522ad7-fc75-4b56-8c71-56071383e77b--
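Purely as an illustration, here is a minimal sketch of the interim polling exchange against the monitor URL, assuming the async pattern from ODATA-233 in which a GET on the Location URL keeps returning 202 Accepted until the batch has progressed (the URL and the ### placeholder are taken from the example above):

GET /monitor/12345 HTTP/1.1
Host: services.odata.org

HTTP/1.1 202 Accepted
Location: https://services.odata.org/monitor/12345
Retry-After: ###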
The async pattern then continues until processing has progressed and the remainder of the response is available, which in this example would presumably look like:
HTTP/1.1 200 OK
DataServiceVersion: 4.0
Content-Length: ####
Content-Type: multipart/mixed; boundary=batch_36522ad7-fc75-4b56-8c71-56071383e77b

--batch_36522ad7-fc75-4b56-8c71-56071383e77b
Content-Type: multipart/mixed; boundary=changeset_77162fcd-b8da-41ac-a9f8-9357efbbd621
Content-Length: ###

--changeset_77162fcd-b8da-41ac-a9f8-9357efbbd621
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 201 Created
Content-Type: application/atom+xml;type=entry
Location: http://host/service.svc/Customer('POIUY')
Content-Length: ###

<AtomPub representation of a new Customer entity>
--changeset_77162fcd-b8da-41ac-a9f8-9357efbbd621
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 204 No Content
Host: host

--changeset_77162fcd-b8da-41ac-a9f8-9357efbbd621--
--batch_36522ad7-fc75-4b56-8c71-56071383e77b
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 404 Not Found
Content-Type: application/xml
Content-Length: ###

<Error message>
--batch_36522ad7-fc75-4b56-8c71-56071383e77b--
Note that I'd propose that the results of a change set are never split up as a result of asynchronous processing of a $batch request (we probably don't even need to allow that, as all requests in such a set have to succeed or fail 'together' anyway).
Field | Original Value | New Value |
---|---|---|
Fix Version/s | WD01 [ 10247 ] | |
Affects Version/s | WD01 [ 10247 ] | |
Description | An earlier revision of the description above, which ended with the question: "Would we have a next-link which we could follow which in turn could return a 202 again if the next chunk of the batch isn't available yet?" | The current description text, as reproduced above |
Proposal | The same async pattern is applied to $batch requests as to normal requests. Once results are ready to be returned, the same 200 OK is returned with a Content-Type header with value application/http. The response body encloses a single multipart/mixed with the response to the batch request. In contrast with a synchronous $batch request, however, the server is allowed to return a partial set of the results for those requests in the batch that have been processed thus far, followed by a 202 Accepted with a Location header specifying the monitor which the client can use to continue monitoring the progress of executing the remaining requests in the batch. | |
Environment | [Proposed] | |
Proposal | The text of the previous Proposal entry | The same text, with this addition: Note that change sets are still atomic, and therefore the responses to all requests in a change set are always contained in the same, potentially partial, response to a $batch request. |
Environment | [Proposed] | [Resolved] |
Status | New [ 10000 ] | Open [ 1 ] |
Resolution | Fixed [ 1 ] | |
Status | Open [ 1 ] | Resolved [ 5 ] |
Environment | [Resolved] | [Applied] |
Resolution | https://www.oasis-open.org/committees/download.php/48963/odata-core-v4.0-wd01-part1-protocol-2013-4-26PR1.docx | |
Environment | [Applied] | |
Status | Resolved [ 5 ] | Applied [ 10002 ] |