
I'm trying to send the following PUT request with a 200MB attachment to the Autodesk API bucket endpoint:

Douglass-MBP-2:Desktop douglasduhaime$ curl -v 'https://developer.api.autodesk.com/oss/v2/buckets/secret-bucket/objects/200mbfile.nwd' -X 'PUT' -H 'Authorization: Bearer myOauthCredentials' -H 'Content-Type: application/octet-stream' -H 'Content-Length: 308331' -T '200mbfile.nwd'

This request yields the following response:

*   Trying 52.7.124.118...
* Connected to developer.api.autodesk.com (52.7.124.118) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: developer.api.autodesk.com
* Server certificate: Symantec Class 3 Extended Validation SHA256 SSL CA
* Server certificate: VeriSign Universal Root Certification Authority
> PUT /oss/v2/buckets/secret-bucket/objects/200mbfile.nwd HTTP/1.1
> Host: developer.api.autodesk.com
> User-Agent: curl/7.43.0
> Accept: */*
> Authorization: Bearer MyOauthCredentials
> Content-Type: application/octet-stream
> Content-Length: 308331
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Headers: Authorization, Accept-Encoding, Range, Content-Type
< Access-Control-Allow-Methods: GET
< Access-Control-Allow-Origin: *
< Content-Type: application/json; charset=utf-8
< Date: Fri, 09 Sep 2016 15:36:51 GMT
< Server: Apigee Router
< Content-Length: 467
< Connection: keep-alive
<
* Excess found in a non pipelined read: excess = 66, size = 467, maxdownload = 467, bytecount = 0

Does anyone know how I can get the server to actually continue and accept the full file? I'd be grateful for any advice others can offer!

duhaime

2 Answers


For large uploads, it is preferable to use the /resumable endpoint. An example of its use is available in that sample and is also pasted below:

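// Note: this snippet is taken from a larger command-line sample; it assumes
// 'program' (commander), 'fs', and 'async' are required at the top of that
// file, and that the sample's own helpers (readBucketKey, checkBucketKey,
// makeKey, oauthExec, ossObjects, errorHandler) are defined elsewhere in it.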
program
.command ('resumable')
.description ('upload a file in multiple pieces (i.e. resumables)')
.arguments ('<file> <pieces>')
.action (function (file, pieces) {
    pieces =pieces || 2 ;
    var bucketKey =readBucketKey () ;
    if ( !checkBucketKey (bucketKey) )
        return ;
    var fileKey =makeKey (file) ;
    fs.stat (file, function (err, stats) {
        if ( err )
            return (console.log (err.message)) ;
        var size =stats.size ;
        var pieceSz =parseInt (size / pieces) ;
        var modSz =size % pieces ;
        if ( modSz )
            pieces++ ;
        console.log ('Uploading file: ' + file + ' in ' + pieces + ' pieces') ;
        var piecesMap =Array.apply (null, { length: pieces }).map (Number.call, Number) ;
        var sessionId =Math.random ().toString (36).replace (/[^a-z]+/g, '').substr (0, 12) ;
        async.eachLimit (piecesMap, 1,
            function (i, callback) {
                var start =i * pieceSz ;
                var end =Math.min (size, (i + 1) * pieceSz) - 1 ;
                var range ="bytes " + start + "-" + end + "/" + size ;
                var length =end - start + 1 ;
                console.log ('Loading ' + range) ;
                // For resumable (large files), make sure to renew the token first
                //access_token (function () {
                oauthExec ()
                    .then (function (accessToken) {
                        var readStream =fs.createReadStream (file, { 'start': start, 'end': end }) ;
                        return (ossObjects.uploadChunk (bucketKey, fileKey, length, range, sessionId, readStream, {})) ;
                    })
                    .then (function (data) {
                        callback () ;
                        if ( data === undefined )
                            return (console.log ('Partial upload accepted')) ;
                        fs.writeFile (__dirname + '/data/' + bucketKey + '.' + fileKey + '.json', JSON.stringify (data, null, 4), function (err) {
                            if ( err )
                                return (console.error ('Failed to create ' + bucketKey + '.' + fileKey + '.json file')) ;
                        }) ;
                        console.log ('Upload successful') ;
                        console.log ('ID: ' + data.objectId) ;
                        console.log ('URN: ' + new Buffer (data.objectId).toString ('base64')) ;
                        console.log ('Location: ' + data.location) ;
                    })
                    .catch (function (error) {
                        errorHandler (error, 'Failed to upload file') ;
                    })
                ;
            }) ;
    }) ;
}) ;
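
For reference, each chunk the sample sends is roughly a PUT against the object's /resumable URL with a Content-Range and a Session-Id header. The bucket/object names, byte ranges, and session id below are placeholders carried over from the question, so treat this as a hedged sketch rather than an exact recipe:

# Split the ~200MB file into two ~100MB pieces (chunk_aa, chunk_ab); sizes are illustrative
split -b 104857600 200mbfile.nwd chunk_

# First chunk: intermediate chunks normally come back as 202 Accepted with no body
curl -v 'https://developer.api.autodesk.com/oss/v2/buckets/secret-bucket/objects/200mbfile.nwd/resumable' -X 'PUT' -H 'Authorization: Bearer myOauthCredentials' -H 'Content-Type: application/octet-stream' -H 'Content-Range: bytes 0-104857599/209715200' -H 'Session-Id: my-resumable-session' --data-binary '@chunk_aa'

# Final chunk: same Session-Id, remaining byte range; the 200 OK response for the
# last piece carries the objectId/location the Node sample logs above
curl -v 'https://developer.api.autodesk.com/oss/v2/buckets/secret-bucket/objects/200mbfile.nwd/resumable' -X 'PUT' -H 'Authorization: Bearer myOauthCredentials' -H 'Content-Type: application/octet-stream' -H 'Content-Range: bytes 104857600-209715199/209715200' -H 'Session-Id: my-resumable-session' --data-binary '@chunk_ab'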
Felipe
  • Thanks @Philippe, this is really helpful. On the topic of large files in the Autodesk API world, are you aware of any resources on optimizing large files for rendering with their viewer? I haven't reached that step yet but am already dreading loading this 200MB monster in a client... – duhaime Sep 09 '16 at 16:44
  • There is not much you can do to optimize large files; if a file is large, it's because of the geometry/metadata it contains, so apart from dropping some of that I don't see a way to make it smaller. Note that after the translation process, the total amount of data streamed to the viewer is generally much smaller than the original file you uploaded. – Felipe Sep 10 '16 at 20:59

That didn't take long to figure out: I just needed to remove the -H 'Content-Length: 308331' header from the PUT request. I had copied that content length from their tutorial, and since it was smaller than the file I was actually sending, the server stopped reading early, hence the truncated transfer and the "Excess found" warning.
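
For reference, the working request is simply the original command without that header; curl computes the Content-Length itself from the file passed via -T:

curl -v 'https://developer.api.autodesk.com/oss/v2/buckets/secret-bucket/objects/200mbfile.nwd' -X 'PUT' -H 'Authorization: Bearer myOauthCredentials' -H 'Content-Type: application/octet-stream' -T '200mbfile.nwd'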

duhaime