87

Please have a look at the following example:

http://jsfiddle.net/MLGr4/47/

var canvas = document.getElementById("canvas");
var ctx = canvas.getContext("2d");

var img = new Image();
img.onload = function(){
    canvas.width = 400;
    canvas.height = 150;
    ctx.drawImage(img, 0, 0, img.width, img.height, 0, 0, 400, 150);
}
img.src = "http://openwalls.com/image/1734/colored_lines_on_blue_background_1920x1200.jpg";

As you can see, the image is not anti-aliased, although drawImage is said to apply anti-aliasing automatically. I tried many different ways, but it doesn't seem to work. Could you please tell me how I can get an anti-aliased image? Thanks.

Neuron
Dundar

7 Answers

190

Cause

Some images are simply very hard to down-sample and interpolate, such as this one with curves, when you want to go from a large size to a small one.

Browsers appear to typically use bi-linear (2x2 sampling) interpolation with the canvas element rather than bi-cubic (4x4 sampling) for (likely) performance reasons.

If the step is too big, there are simply not enough pixels to sample from, which is reflected in the result.

From a signal/DSP perspective, you could see this as a low-pass filter's threshold value set too high, which may result in aliasing if there are many high frequencies (details) in the signal.
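As a toy illustration of this (my own 1D sketch, not what browsers literally run): at a 4:1 ratio, a single 2-tap bilinear pass can miss bright one-pixel details entirely, while halving twice averages every sample.

```javascript
// Halve a 1D signal by averaging adjacent pairs (one "step-down").
function halve(src) {
  const out = [];
  for (let i = 0; i < src.length; i += 2) out.push((src[i] + src[i + 1]) / 2);
  return out;
}

// One-shot bilinear resample to dstLen samples (blends only 2 neighbours).
function bilinear(src, dstLen) {
  const out = [], ratio = src.length / dstLen;
  for (let i = 0; i < dstLen; i++) {
    const pos = (i + 0.5) * ratio - 0.5;
    const i0 = Math.floor(pos), frac = pos - i0;
    const i1 = Math.min(i0 + 1, src.length - 1);
    out.push(src[i0] * (1 - frac) + src[i1] * frac);
  }
  return out;
}

// Bright 1-pixel details on a dark background, downscaled 4:1.
const src = [255, 0, 0, 0, 255, 0, 0, 0];
const oneShot = bilinear(src, 2);   // [0, 0] - the detail vanishes
const stepped = halve(halve(src));  // [63.75, 63.75] - detail preserved
```

The one-shot pass only ever reads the 2 samples nearest each output position, so at 4:1 half the source pixels are never looked at; repeated halving touches them all, which is exactly what step-down exploits.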

Solution

Update 2018:

Here's a neat trick you can use for browsers that support the filter property on the 2D context. This pre-blurs the image, which is in essence the same as resampling, and then scales down. It allows for large scale steps while needing only two steps and two draws.

Pre-blur using the number of steps (original size / destination size / 2) as the radius (you may need to adjust this heuristically based on browser and odd/even steps; shown simplified here):

const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d");

if (typeof ctx.filter === "undefined") {
    alert("Sorry, the browser doesn't support Context2D filters.");
}

const img = new Image;
img.onload = function() {

  // step 1
  const oc = document.createElement('canvas');
  const octx = oc.getContext('2d');
  oc.width = this.width;
  oc.height = this.height;

  // step 2: pre-filter image using steps as radius
  const steps = (oc.width / canvas.width) >> 1;
  octx.filter = `blur(${steps}px)`;
  octx.drawImage(this, 0, 0);

  // step 3, draw scaled
  ctx.drawImage(oc, 0, 0, oc.width, oc.height, 0, 0, canvas.width, canvas.height);

}
img.src = "//i.stack.imgur.com/cYfuM.jpg";
body { background-color: ivory; }
canvas { border: 1px solid red; }
<br/><p>Original was 1600x1200, reduced to 400x300 canvas</p><br/>
<canvas id="canvas" width=400 height=250></canvas>

Support for filter as of Oct 2018:

CanvasRenderingContext2D.filter                                                   
api.CanvasRenderingContext2D.filter                                               
On Standard Track, Experimental                                                   
https://developer.mozilla.org/docs/Web/API/CanvasRenderingContext2D/filter        
                                                                                  
DESKTOP >        |Chrome    |Edge      |Firefox   |IE        |Opera     |Safari   
:----------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------
filter !         |    52    |    ?     |    49    |    -     |    -     |    -    
                                                                                  
MOBILE >         |Chrome/A  |Edge/mob  |Firefox/A |Opera/A   |Safari/iOS|Webview/A
:----------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------
filter !         |    52    |    ?     |    49    |    -     |    -     |    52   
                                                                                  
! = Experimental                                                                  
                                                                                  
Data from MDN - "npm i -g mdncomp" (c) epistemex

Update 2017: There is now a new property defined in the specs for setting resampling quality:

context.imageSmoothingQuality = "low|medium|high"

It's currently only supported in Chrome. The actual method used per level is left to the vendor to decide, but it's reasonable to assume Lanczos for "high", or something of equivalent quality. This means step-down may be skipped altogether, or larger steps can be used with fewer redraws, depending on the image size and

Support for imageSmoothingQuality:

CanvasRenderingContext2D.imageSmoothingQuality
api.CanvasRenderingContext2D.imageSmoothingQuality
On Standard Track, Experimental
https://developer.mozilla.org/docs/Web/API/CanvasRenderingContext2D/imageSmoothingQuality

DESKTOP >              |Chrome    |Edge      |Firefox   |IE        |Opera     |Safari
:----------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:
imageSmoothingQuality !|    54    |    ?     |    -     |    ?     |    41    |    Y

MOBILE >               |Chrome/A  |Edge/mob  |Firefox/A |Opera/A   |Safari/iOS|Webview/A
:----------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:
imageSmoothingQuality !|    54    |    ?     |    -     |    41    |    Y     |    54

! = Experimental

Data from MDN - "npm i -g mdncomp" (c) epistemex

browser. Until then:
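Where imageSmoothingQuality is available, the idea might be sketched like this (hedged: `drawScaled` is my own helper name, and the fallback branch mirrors the step-down technique described next, not an API of its own):

```javascript
// Draw `img` into `ctx` at targetW x targetH, preferring the browser's
// high-quality resampler when imageSmoothingQuality is supported.
function drawScaled(ctx, img, targetW, targetH) {
  if ("imageSmoothingQuality" in ctx) {
    // Native path: let the browser pick a high-quality filter.
    ctx.imageSmoothingEnabled = true;
    ctx.imageSmoothingQuality = "high";
    ctx.drawImage(img, 0, 0, img.width, img.height, 0, 0, targetW, targetH);
    return;
  }
  // Fallback: manual step-down by halves until within 2x of the target.
  const oc = document.createElement("canvas");
  const octx = oc.getContext("2d");
  let w = img.width, h = img.height;
  oc.width = w;
  oc.height = h;
  octx.drawImage(img, 0, 0);
  while (w > targetW * 2 && h > targetH * 2) {
    const prevW = w, prevH = h;
    w = Math.round(w / 2);
    h = Math.round(h / 2);
    octx.drawImage(oc, 0, 0, prevW, prevH, 0, 0, w, h);
  }
  // Final draw onto the destination, clipping to the last step's region.
  ctx.drawImage(oc, 0, 0, w, h, 0, 0, targetW, targetH);
}
```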

The solution is to use step-down to get a proper result. Step-down means you reduce the size in steps, so the limited interpolation range gets to cover enough pixels for sampling.

This gives good results even with bi-linear interpolation (it actually behaves much like bi-cubic when done this way), and the overhead is minimal as there are fewer pixels to sample in each step.

The ideal approach is to halve the resolution in each step until you reach the target size (thanks to Joe Mabel for mentioning this!).

Modified fiddle

Using direct scaling as in original question:

[Normal down-scaled image]

Using step-down as shown below:

[Down-stepped image]

In this case you will need to step down in 3 steps:

In step 1 we reduce the image to half by using an off-screen canvas:

// step 1 - create off-screen canvas
var oc   = document.createElement('canvas'),
    octx = oc.getContext('2d');

oc.width  = img.width  * 0.5;
oc.height = img.height * 0.5;

octx.drawImage(img, 0, 0, oc.width, oc.height);

Step 2 reuses the off-screen canvas and draws the image reduced to half again:

// step 2
octx.drawImage(oc, 0, 0, oc.width * 0.5, oc.height * 0.5);

And we draw once more to main canvas, again reduced to half but to the final size:

// step 3
ctx.drawImage(oc, 0, 0, oc.width * 0.5, oc.height * 0.5,
                  0, 0, canvas.width,   canvas.height);

Tip:

You can calculate total number of steps needed, using this formula (it includes the final step to set target size):

steps = Math.ceil(Math.log(sourceWidth / targetWidth) / Math.log(2))
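The formula can be wrapped in a small helper (a sketch; `computeSteps` is my own name for it):

```javascript
// Number of halving steps needed to go from sourceWidth down to targetWidth,
// including the final draw that lands exactly on the target size.
function computeSteps(sourceWidth, targetWidth) {
  return Math.ceil(Math.log(sourceWidth / targetWidth) / Math.log(2));
}
```

For instance, 1600 down to 400 is a 4:1 ratio and needs 2 steps, while 8000 down to 400 (a 20:1 ratio) needs 5.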
Jeff Tian
    Working with some very big initial images (8000 x 6000 and upward) I find it useful to basically iterate step 2 until I get within a factor of 2 of the desired size. – Joe Mabel Oct 21 '14 at 21:40
  • Is there some quick way to resize picture knowing that the width and the height are the same, for example 500x500, but I want it to be 300x300? Is this process any simpler than with rectangle images? – Core_dumped Mar 16 '15 at 13:23
  • @Core_dumped not for canvas, The shape doesn't matter as the data is stored in a 1D array. What could affect speed is if a bitmap is stored in a size which is compatible with 2^n, ie. 1024 to 512, 512 to 256, 256 to 128 etc. as you could take every other pixel instead of resampling here and there. Just my 2 cents (search mipmap for more info on this). –  Mar 16 '15 at 13:59
  • @KenFyrstenberg what about an algorithm? Is there some simple algorithm for resizing squared images to squared images? – Core_dumped Mar 16 '15 at 17:40
  • @KenFyrstenberg Thanks for this - but it seems to have problems with images that have transparency - for each iteration you ll see the image drawn in the background. Do you have any solution for this? - http://jsfiddle.net/v8app/842/ for example – Frnak Mar 21 '15 at 11:53
  • @FrankProvost draw the temporary steps to an off-screen canvas. At end, only use the final size to draw from (use the clipping parameters of the drawImage method, as shown in the fiddle). –  Mar 21 '15 at 15:52
  • SO AWESOME>>>THANK YOU. I wish I could understand the why though....i don't know enough about down sampling though – carinlynchin Feb 09 '16 at 19:19
  • 1
    I'm confused on the difference between the 2nd and 3rd step...can anyone explain? – carinlynchin Feb 09 '16 at 21:15
  • @Carine the last step is just to compensate for mis-match in size. If you have a width of lets say 400 and downsample twice -> 200 -> 100 but your destination canvas is 80 then it won't fit - the last step sort of nudges it into place but within the 50% range. Downsampling takes the average of 2x2 pixels (bilinear) or 4x4 (bicubic) to produce a new pixel (there exists other methods as well). Makes sense? :) –  Apr 04 '16 at 18:51
  • awesome..thanks so much. i have noticed something though. On some images, the canvas file size ends up being sometimes much larger than the original file. I tried to downsize more but I lose resolution and end up with pixelated images. How can I keep a small file size using canvas? – carinlynchin Apr 07 '16 at 18:59
  • 1
    @Carine it's a bit complicated, but canvas tries to save out a png as fast as it can. The png file supports 5 different filter types internally which can improve upon compression (gzip), but in order to find the best combination all these filters has to be tested per line of the image. That would be time consuming for large images and could block the browser, so most browsers just use filter 0 and push it out hoping to gain some compression. You could do this process manually but it is a bit more work obviously. Or run it through service APIs such as that of tinypng.com. –  Apr 07 '16 at 21:53
  • In the fiddle, you forgot to add `octx.globalCompositeOperation = "copy"` to clear the first step : http://jsfiddle.net/n1ox8gb9/ – Kaiido Sep 13 '17 at 02:05
  • 1
    @Kaiido it's not forgotten and "copy" is very slow. It you need transparency it's faster to use clearRect() and use main or alt. canvas as target. –  Sep 13 '17 at 07:51
  • When the image is slowly moving the top left corner is always jumping from one pixel to the next one. I guess there is no solution for that? – Bitterblue Aug 07 '18 at 09:41
  • @Bitterblue there is a new property, though not implemented in all browsers yet (only Chrome, Opera and Safari according to MDN), that can skip this solution entirely. `imageSmoothingQuality = "high"`. There is also the approach of using [Lanczos algorithm](https://en.wikipedia.org/wiki/Lanczos_algorithm) in JavaScript. Beyond that you will probably suffer from rounding errors (you could try rendering it once to an offscreen canvas and draw back that instead). –  Aug 07 '18 at 11:49
13

I highly recommend pica for such tasks. Its quality is superior to multiple downsizing and is quite speedy at the same time. Here is a demo.

avalanche1
5

As an addition to Ken's answer, here is another solution that performs the downsampling in halves (so the result looks good using the browser's algorithm):

  function resize_image( src, dst, type, quality ) {
     var tmp = new Image(),
         canvas, context, cW, cH;

     type = type || 'image/jpeg';
     quality = quality || 0.92;

     cW = src.naturalWidth;
     cH = src.naturalHeight;

     tmp.onload = function() {

        canvas = document.createElement( 'canvas' );

        cW /= 2;
        cH /= 2;

        // never go below the target (displayed) size
        if ( cW < src.width ) cW = src.width;
        if ( cH < src.height ) cH = src.height;

        canvas.width = cW;
        canvas.height = cH;
        context = canvas.getContext( '2d' );
        context.drawImage( tmp, 0, 0, cW, cH );

        dst.src = canvas.toDataURL( type, quality );

        // stop once the target size has been reached
        if ( cW <= src.width || cH <= src.height )
           return;

        // feed the halved result back in for another pass
        tmp.src = dst.src;
     };

     // set src after assigning onload so cached images still fire the handler
     tmp.src = src.src;
  }
  // The images sent as parameters can be in the DOM or be image objects
  resize_image( $( '#original' )[0], $( '#smaller' )[0] );
Jean-François Fabre
Jesús Carrera
4
    var getBase64Image = function(img, quality) {
    // note: "quality" here is a scale factor (0..1) for the intermediate canvas
    var canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext("2d");

    //----- original draw ---
    ctx.drawImage(img, 0, 0, img.width, img.height);

    //------ reduced draw ---
    var canvas2 = document.createElement("canvas");
    canvas2.width = img.width * quality;
    canvas2.height = img.height * quality;
    var ctx2 = canvas2.getContext("2d");
    ctx2.drawImage(canvas, 0, 0, img.width * quality, img.height * quality);

    // -- back from reduced draw ---
    ctx.drawImage(canvas2, 0, 0, img.width, img.height);

    var dataURL = canvas.toDataURL("image/png");
    return dataURL;
    // return dataURL.replace(/^data:image\/(png|jpg);base64,/, "");
};
laaposto
kamil
3

In case someone else is still looking for an answer, there is another way: you can use a background image instead of drawImage(). You won't lose any image quality this way.

JS:

    var canvas = document.getElementById("canvas");
    var ctx = canvas.getContext("2d");
    var url = "http://openwalls.com/image/17342/colored_lines_on_blue_background_1920x1200.jpg";

    img = new Image();
    img.onload = function() {

        canvas.style.backgroundImage = "url('" + url + "')";

    };
    img.src = url;

working demo

2

I created a reusable Angular service to handle high quality resizing of images for anyone who's interested: https://gist.github.com/fisch0920/37bac5e741eaec60e983

The service includes Ken's step-wise downscaling approach as well as a modified version of the Lanczos convolution approach found here.

I included both solutions because they both have their own pros and cons. The Lanczos convolution approach is higher quality at the cost of being slower, whereas the step-wise downscaling approach produces reasonably antialiased results and is significantly faster.

Example usage:

angular.module('demo').controller('ExampleCtrl', function (imageService) {
  // EXAMPLE USAGE
  // NOTE: it's bad practice to access the DOM inside a controller, 
  // but this is just to show the example usage.

  // resize by lanczos-sinc filter
  imageService.resize($('#myimg')[0], 256, 256)
    .then(function (resizedImage) {
      // do something with resized image
    })

  // resize by stepping down image size in increments of 2x
  imageService.resizeStep($('#myimg')[0], 256, 256)
    .then(function (resizedImage) {
      // do something with resized image
    })
})
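For reference, the Lanczos kernel that the convolution approach is built on can be written down in a few lines (a minimal sketch of the standard windowed-sinc formula; the linked gist does the full 2D convolution):

```javascript
// Lanczos windowed-sinc kernel: sinc(x) * sinc(x / a) for |x| < a, else 0.
// `a` (the lobe count) trades sharpness against ringing; 2 or 3 is typical.
function lanczos(x, a = 3) {
  if (x === 0) return 1;
  if (Math.abs(x) >= a) return 0;
  const px = Math.PI * x;
  return (a * Math.sin(px) * Math.sin(px / a)) / (px * px);
}
```

Each resampled pixel is then the kernel-weighted sum of the source pixels within `a` taps of the sample position, normalised by the sum of the weights, which is why it is both higher quality and slower than bilinear's fixed 2-tap blend.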
fisch2
1

Scaling for high resolution displays

You may find that canvas items appear blurry on higher-resolution displays. While many solutions exist, a simple first step is to scale the canvas size up and down simultaneously, using its attributes, styling, and its context's scale.

// Get the DPR and size of the canvas
const dpr = window.devicePixelRatio;
const rect = canvas.getBoundingClientRect();

// Set the "actual" size of the canvas
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;

// Scale the context to ensure correct drawing operations
ctx.scale(dpr, dpr);

// Set the "drawn" size of the canvas
canvas.style.width = `${rect.width}px`;
canvas.style.height = `${rect.height}px`;
Erhan Namal