
I'm looking to convert a Float32Array into an Int16Array.

Here's what I have (I'm not providing the data).

  var data = ...; /* new Float32Array() */
  var dataAsInt16Array = new Int16Array(data.length);
  for (var i = 0; i < data.length; i++) {
    dataAsInt16Array[i] = parseInt(data[i] * 32767, 10);
  }

I'm not convinced I'm doing this correctly and am looking for some direction.

Nirvana Tikku

6 Answers


You can do it directly from the ArrayBuffer:

var dataAsInt16Array = new Int16Array(data.buffer);

var f32 = new Float32Array(4);
f32[0] = 0.1, f32[1] = 0.2, f32[2] = 0.3, f32[3] = 0.4;
// [0.10000000149011612, 0.20000000298023224, 0.30000001192092896, 0.4000000059604645]

var i16 = new Int16Array(f32.buffer);
// [-13107, 15820, -13107, 15948, -26214, 16025, -13107, 16076]

// and back again
new Float32Array(i16.buffer);
// [0.10000000149011612, 0.20000000298023224, 0.30000001192092896, 0.4000000059604645]
Paul S.
  • Voted up for this answer, but then I realized it's wrong. The buffer of a Float32Array is an array of raw bytes; treating raw 32-bit float bytes as integers is weird. – Sergey P. aka azure Oct 28 '15 at 16:20
  • @SergeyP.akaazure It's what OP was asking for – Paul S. Oct 28 '15 at 16:41
  • Note that f32.buffer might contain more data than the original Int16Array uses. The proper way to do that is to do new Float32Array(i16.buffer, i16.byteOffset, i16.byteLength / Float32Array.BYTES_PER_ELEMENT); (see the sketch below) – heiner Jun 04 '18 at 19:16
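
A minimal sketch of the pitfall heiner points out (the subarray and sample values here are my own illustration): a typed-array view may cover only part of its buffer, so reinterpreting the whole buffer grabs too much.

var full = new Float32Array([0.1, 0.2, 0.3, 0.4]);
var part = full.subarray(1, 3); // shares full.buffer, starting at byteOffset 4
new Int16Array(part.buffer);    // 8 elements: reinterprets the WHOLE 16-byte buffer
new Int16Array(part.buffer, part.byteOffset,
               part.byteLength / Int16Array.BYTES_PER_ELEMENT); // 4 elements: just the view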

If you're after converting the raw underlying data, you can use the approach Paul S. describes in his answer.

But be aware that you will not get the same numbers: with a Float32Array you are dealing with the 32-bit IEEE 754 representation of each number. When a new view such as Int16 is laid over the same buffer, you are looking at that binary representation, not the original number.

If you are after the numeric values, you will have to convert manually; just modify your code to:

var data = ...; /*new Float32Array();*/
var len = data.length, i = 0;
var dataAsInt16Array = new Int16Array(len);

while(i < len)
  dataAsInt16Array[i] = convert(data[i++]);

function convert(n) {
   var v = n < 0 ? n * 32768 : n * 32767;       // scale into the range [-32768, 32767]
   return Math.max(-32768, Math.min(32767, v)); // clamp to avoid overflow
}
  • Could you explain your code a little bit? Is preserving IEEE 754 representation possible without losing data? Converting to unsigned ints this way is legit as well? Is it possible to achieve the same thing using [DataView](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DataView)'s setFloat32 - getInt16? @K3N – starkm Mar 18 '18 at 17:19
  • @starkm Preserving depends on the number: if it's an integer in the 16-bit range, fine; if not, it won't be possible. DV's get(U)Int16() does something similar to the code here, as well as dealing with byte order and memory alignment (it may be a tad slower for these reasons). If you copy out the raw underlying buffer as Int16 and then read back those values from the same buffer using a Float32 view, you should be OK (as long as the byte order is the same). – Mar 18 '18 at 20:59
  • Tried doing this using DataViews and I am stuck :). It'd be really helpful if you check out [my question](https://stackoverflow.com/questions/49348743/converting-float32array-to-uint8array-while-preserving-ieee-754-representation) 'cause trying to explain what I am trying to do is being painful. @K3N – starkm Mar 18 '18 at 21:07
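
As a minimal sketch of the DataView route discussed in these comments (the values here are mine; note that DataView accessors default to big-endian, unlike the platform-dependent typed-array views):

var buf = new ArrayBuffer(4);
var dv = new DataView(buf);
dv.setFloat32(0, 0.1);   // stores the IEEE 754 bit pattern 0x3DCCCCCD
dv.getInt16(0);          // 0x3DCC ->  15820
dv.getInt16(2);          // 0xCCCD -> -13107 (the same numbers as the Int16 view above, byte order aside)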
In a Web Audio context you can scale each sample into the signed 16-bit range, multiplying negative samples by 0x8000 and positive ones by 0x7FFF:

    var floatbuffer = audioProcEvent.inputBuffer.getChannelData(0);
    var int16Buffer = new Int16Array(floatbuffer.length);

    for (var i = 0, len = floatbuffer.length; i < len; i++) {
        if (floatbuffer[i] < 0) {
            int16Buffer[i] = 0x8000 * floatbuffer[i];
        } else {
            int16Buffer[i] = 0x7FFF * floatbuffer[i];
        }
    }
StuS

ECMAScript 2015 and onwards has TypedArray.from, which converts any typed array (and indeed, any iterable) to the specified typed array format.

So converting a Float32Array to an Int16Array is now as easy as:

const floatArray = new Float32Array()
const intArray = Int16Array.from(floatArray)

...albeit with truncation.
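
For example (sample values are mine), the fractional part of each element is simply dropped toward zero:

Int16Array.from(new Float32Array([1.5, -2.7, 3.9]))
// Int16Array [1, -2, 3]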

robjtede

Combining the answers from robjtede and StuS, here is one for conversion and scaling of a Float32Array to an Int16Array. The scaling maps the range -1 to 1 in the Float32Array to -32768 to 32767 in the Int16Array:

const myF32Array = Float32Array.from([1, 0.5, 0.75, -0.5, -1]);
const myI16Array = Int16Array.from(myF32Array.map(x => (x > 0 ? x * 0x7FFF : x * 0x8000)));
const myNewF32Array = Float32Array.from(Float32Array.from(myI16Array).map(x => x / 0x8000));
console.log(myF32Array);
console.log(myI16Array);
console.log(myNewF32Array);

// output:
> Float32Array [1, 0.5, 0.75, -0.5, -1]
> Int16Array [32767, 16383, 24575, -16384, -32768]
> Float32Array [0.999969482421875, 0.499969482421875, 0.749969482421875, -0.5, -1]

(The positive values come back slightly below their originals because they are scaled up by 0x7FFF but divided back down by 0x8000.)
giwyni

It seems that you are trying not only to convert the data format, but also to process the original data and store it in a different format.

The direct way of converting a Float32Array to an Int16Array is as simple as:

var a = new Int16Array(myFloat32Array);

For processing the data you can use the approach you provided in the question, though there is no need to call parseInt: assigning a number into an Int16Array element already truncates it toward zero (see the sketch below).
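
A quick sketch (the sample values are mine) showing both the constructor conversion and why parseInt is redundant:

var f = new Float32Array([1.9, -1.9, 0.5]);
var a = new Int16Array(f);   // element-by-element conversion, truncating toward zero
// a is Int16Array [1, -1, 0]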

Sergey P. aka azure