
I'm using CFS (CollectionFS) to upload images to Amazon S3, so I have two stores: the original file and a thumbnail. For the original file I save the image's dimensions to its metadata, and the thumbnail is resized to a square 96px image. This works well.

var original = new FS.Store.S3("original", {
    accessKeyId: "XXX",
    secretAccessKey: "XXX",
    bucket: "XXX",
    folder: "original",

    // Store the file unchanged, but read its dimensions
    // and save them on the file's metadata
    transformWrite: function (fileObj, readStream, writeStream) {
        readStream.pipe(writeStream);
        var transformer = gm(readStream, fileObj.name());
        transformer.size({bufferStream: true}, FS.Utility.safeCallback(function (err, size) {
            if (!err) fileObj.update({$set: {'metadata.orgWidth': size.width, 'metadata.orgHeight': size.height}});
        }));
    }
});

var thumbnail = new FS.Store.S3("thumbnail", {
    accessKeyId: "XXX",
    secretAccessKey: "XXX",
    bucket: "XXX",
    folder: "thumbnail",

    // Create a square 96px thumbnail
    transformWrite: function (fileObj, readStream, writeStream) {
        var size = '96';
        gm(readStream, fileObj.name())
            .autoOrient()
            .resize(size, size + '^')   // scale so the image covers 96x96 ('^' = minimum size)
            .gravity('Center')
            .extent(size, size)         // then crop centered to exactly 96x96
            .stream()
            .pipe(writeStream);
    }
});

Images = new FS.Collection("images", {
    stores: [ original, thumbnail ]
});
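
For context, the upload itself is plain CFS: a single insert on the client triggers the transformWrite of every store. A minimal sketch (the template name and file-input selector are my own, not from my actual code):

Template.upload.events({
    'change input[type=file]': function (event) {
        // Insert every selected file; CFS then writes it to each store
        FS.Utility.eachFile(event, function (file) {
            Images.insert(file, function (err, fileObj) {
                if (err) console.warn(err); // upload errors end up here
            });
        });
    }
});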

Now I need two more stores for the same image: a "working" image and a "public" image. The working image is used for image manipulation/editing like pixelate, desaturate and so on (a reset is always possible by copying the file from the original store; see the sketch below), and the public image is derived from the working image, resized to 900px if it is wider than that.

The user will only see the public image; the editor works with the working image. That's what I want to achieve.
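
For the reset, I plan to copy the original back over the working version, roughly like this (a server-side sketch; the method name is made up, and createReadStream/createWriteStream are CFS's per-store stream helpers):

Meteor.methods({
    // Made-up method name: restore the working copy from the untouched original
    resetWorkingImage: function (fileId) {
        var fileObj = Images.findOne(fileId);
        if (!fileObj) throw new Meteor.Error("not-found", "Unknown file " + fileId);
        // Stream the stored original into the working store
        fileObj.createReadStream("original").pipe(fileObj.createWriteStream("working"));
    }
});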

So I added these two stores:

var working = new FS.Store.S3("working", {
    accessKeyId: "XXX",
    secretAccessKey: "XXX",
    bucket: "XXX",
    folder: "working",

    // Same as the original store: pass the file through
    // and record its current dimensions in the metadata
    transformWrite: function (fileObj, readStream, writeStream) {
        readStream.pipe(writeStream);
        var transformer = gm(readStream, fileObj.name());
        transformer.size({bufferStream: true}, FS.Utility.safeCallback(function (err, size) {
            if (!err) fileObj.update({$set: {'metadata.width': size.width, 'metadata.height': size.height}});
        }));
    }
});

var publicStore = new FS.Store.S3("public", {
    accessKeyId: "XXX",
    secretAccessKey: "XXX",
    bucket: "XXX",
    folder: "public",

    // Resize to 900px width if the image is wider than that
    transformWrite: function (fileObj, readStream, writeStream) {
        var transformer = gm(readStream, fileObj.name());
        transformer.size({bufferStream: true}, FS.Utility.safeCallback(function (error, size) {
            if (error) console.warn(error);
            else if (size.width > 900) transformer.resize('900').stream().pipe(writeStream);
            else transformer.stream().pipe(writeStream);
        }));
    }
});

Images = new FS.Collection("images", {
    stores: [ original, working, publicStore, thumbnail ]
});

Basically the working store starts out the same as the original store, since at upload time both receive the same image. The public store should always be a copy of the working image, capped at a maximum width of 900px.
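
After an edit, I would regenerate the public copy from the working copy using the same size check as in the store's transformWrite (a server-side sketch; the method name is made up):

Meteor.methods({
    // Made-up method name: rebuild the public copy from the current working copy
    rebuildPublicImage: function (fileId) {
        var fileObj = Images.findOne(fileId);
        if (!fileObj) throw new Meteor.Error("not-found", "Unknown file " + fileId);

        // Same logic as the public store: cap the width at 900px
        var transformer = gm(fileObj.createReadStream("working"), fileObj.name());
        transformer.size({bufferStream: true}, FS.Utility.safeCallback(function (error, size) {
            if (error) console.warn(error);
            else if (size.width > 900) transformer.resize('900').stream().pipe(fileObj.createWriteStream("public"));
            else transformer.stream().pipe(fileObj.createWriteStream("public"));
        }));
    }
});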

But with these two stores added, the upload sometimes fails and the server crashes. In the logs I found this error:

/docker/xxx/bundle/programs/server/npm/cfs_gridfs/node_modules/mongodb/lib/mongodb/connection/base.js:246
    throw message;      
          ^
Error: 56e715494166660700edb9c3 does not exist

I'm just surprised that this only happens when I add the working and public stores. What is wrong here?

  • Did you find a solution for this? I'm running into a similar issue. It looks like it happens when many files are uploaded at once, and the problem probably lies in the `pipe()` method. It seems to be some sort of bug. – Menda May 13 '16 at 13:24
  • I left CFS (and it is deprecated now), as there is an architectural bug behind this. Most users - as in my case - run into this problem when using an external MongoDB (which you should use in production). That's why everything works locally, but not on the production server. – user3142695 May 13 '16 at 13:28
