
How would I convert an object to an array of objects while keeping the key names?

// actual 
obj = {
  key1: null,
  key2: "Nelly",
  key3: [ "suit", "sweat" ]
} 

// expected 
arr = [
  { key2: "Nelly" },
  { key3: [ "suit", "sweat" ] }
]

currently my solution is...

 var arr = Object.keys(obj).map(key => { if (obj[key]) return { key: obj[key] } });

which returns

arr = [
  undefined,
  { key: "Nelly" },
  { key: [ "suit", "sweat" ] }
]
Chris
  • I don't believe the code shown returns what you say it does: it would return an array of three elements where the first element has the value `undefined`. – nnnnnn May 17 '17 at 03:29
  • Implement it with a plain old loop first. When you get a solution that works - you can try "improve" it. – zerkms May 17 '17 at 03:31
  • To clarify what @nnnnnn means, the first node of `arr` is `undefined` – Will Reese May 17 '17 at 03:36
  • thanks, forgot to put that in there. How would I be able to return the expected value? – Chris May 17 '17 at 03:37
  • Start with simple: can you implement it in any way? Asking to be shown a solution using concepts you don't understand yet almost never teaches you how to use them. – zerkms May 17 '17 at 03:38

4 Answers


.map() returns an array of the same length as the original array. Code like yours with a callback that doesn't return a value in some cases will result in elements with the value undefined. One way to deal with that is to first .filter() out the elements you don't want to keep.

Anyway, to get the key names you want you can use an object literal with a computed property name:

{ [key]: obj[key] }

In context:

const obj = {
  key1: null,
  key2: 'Nelly',
  key3: [ 'suit', 'sweat' ]
}

const arr = Object.keys(obj)
  .filter(v => obj[v] != null)
  .map(key => ({ [key]: obj[key] }))

console.log(arr)
nnnnnn

Transducers

There's heaps of answers here to help you reach your answer in a practical way – filter this, map that, and voilà, the result you're looking for. There's other answers using primitive for loops, but those make you sad.

So you're wondering, "is it possible to filter and map without iterating through the array more than once?" Yes, using transducers.


Runnable demo

I might update this paragraph with more code explanation if necessary. ES6 comin' at you …

// Trans monoid
const Trans = f => ({
  runTrans: f,
  concat: ({runTrans: g}) =>
    Trans(k => f(g(k)))
})

Trans.empty = () =>
  Trans(k => k)

const transduce = (t, m, i) =>
  i.reduce(t.runTrans((acc, x) => acc.concat(x)), m.empty())

// complete Array monoid implementation
Array.empty = () => []

// transducers
const mapper = f =>
  Trans(k => (acc, x) => k(acc, f(x)))
  
const filterer = f =>
  Trans(k => (acc, x) => f(x) ? k(acc, x) : acc)
  
const logger = label =>
  Trans(k => (acc, x) => (console.log(label, x), k(acc, x)))

// your function, implemented with transducers  
const foo = o => {
  const t = logger('filtering')
    .concat(filterer(k => o[k] !== null))
    .concat(logger('mapping'))
    .concat(mapper(k => ({ [k]: o[k] })))
    .concat(logger('result'))
  return transduce(t, Array, Object.keys(o))
}

console.log(foo({a: null, b: 2, c: 3}))

Output; notice the steps appear interlaced – filtering, mapping, result, repeat – this means each of the combined transducers runs for each iteration of the input array. Also notice how, because a's value is null, there is no mapping or result step for a; it skips right to filtering b. All of this means we only stepped through the array once.

// filtering a
// filtering b
// mapping b
// result { b: 2 }
// filtering c
// mapping c
// result { c: 3 }
// => [ { b: 2 }, { c: 3 } ]

Finishing up

Of course that foo function has lots of console.log stuff tho. In case it's not obvious, we just want to remove the logger transducers for our actual implementation:

const foo = o => {
  const t = filterer(k => o[k] !== null)
    .concat(mapper(k => ({ [k]: o[k] })))
  return transduce(t, Array, Object.keys(o))
}

console.log(foo({a: null, b: 2, c: 3}))
// => [ {b: 2}, {c: 3} ]

Attribution

My enlightenment on the subject is owed exclusively to Brian Lonsdorf and the accompanying work: Monoidal Contravariant Functors Are Actually Useful

Mulan
  • This subject is really interesting to me but it's late and I have to go to bed. If you need any additional help, just ask ^_^ – Mulan May 17 '17 at 10:26
  • Your answers are always worth upvoting; you always give me something to think about. I find this code confusing (except for the straightforward foo() function), but interesting, so I'll go away and study it. Can you please talk a little about how this performs compared with separately filtering then mapping? (On the one hand it only iterates once rather than twice, but on the other hand there seem to be a lot more function calls on each iteration.) – nnnnnn May 18 '17 at 04:29
  • @nnnnnn such a compliment, thank you ^_^ i'm happy to write some more about topic. i'll get to it soon. – Mulan May 18 '17 at 07:29
  • Yes, this is nice. However, introducing a specific contravariant type class that depends on another type class (monoid) isn't particularly useful in Javascript - at least it is hardly idiomatic. Why not simply rely on a combinator `mapper = f => g => acc => x => g(acc) (f(x))` that takes the monoid properties explicitly? Javascript works nicely with higher order functions. –  May 18 '17 at 08:34
  • I got a chance to figure out for myself how it all works, though I'm not sure I could explain it clearly to someone else. I now wonder why `transduce()` bothers with the `m` argument, because it seems if that argument must be an object with an `.empty()` method that must return an accumulator object with a `.concat()` method then for practical purposes when would the accumulator not be an array? I guess strings have a `.concat()` method too, but still...why the indirection of the `.empty()` instead of accepting an accumulator directly? – nnnnnn May 18 '17 at 08:57
  • It took me a while but now I got it. Most of this convoluted code in your `Trans` constructor is merely there to enable method chaining. More precisely, you've mixed the contravariant functor with the code for method chaining. Compared to this a pure combinator approach (`mapper`, `filterer` and `comp`) is much more succinct and easier to comprehend - in my pov. –  May 18 '17 at 12:38
  • @nnnnnn What makes transducers useful is there ability to fuse several iterations and to abstract from data sources, so that `mapper`/`filterer` and other composable transformations can be applied to `Array`s, linked lists, trees, homogeneous `Map`s etc. –  May 18 '17 at 12:47
  • In an attempt to understand what's happening here and to apply it in my own project I got lost in all the `f`, `k` and `g`s. I posted my own question on this subject [here](https://stackoverflow.com/questions/44198833/how-to-chain-map-and-filter-functions-in-the-correct-order). I'd be very happy with some pointers in the right direction, if anyone has the time :) – user3297291 May 26 '17 at 14:41
  • @user3297291 learning to read/write programs using combinators is certainly tricky at first. The variables are purposefully named using a single letter in most cases because a more specific word/name would diminish the generic (polymorphic) nature of the (combinator) function. This shows how a function's expression represents the *combination* of terms and cares less about what the actual terms are. In an imperative program, this would be a nightmare, but because each function (above) is just a *single* expression, seeing how the variables fit together becomes easier on the reader over time. – Mulan May 27 '17 at 00:09
  • @user3297291 I [replied to your question directly](https://stackoverflow.com/a/44211131/633183), but if you have any other questions, feel free to ask ^_^ – Mulan May 27 '17 at 00:11
  • Interesting. you have one crucial typo BTW: "Also notice how because `a`'s value is `null`, there is" [[*no*]] "*mapping* or *result* step for `a`; ...". :) --- if `k` would be named `cons` it could be easier for us the non-JS literate folks to follow (and `k` is also used for "key"...) --- Just for comparison, in Haskell it could be `mapping g cons x r = cons (g x) r; filtering p cons x r = if (p x) then (cons x r) else r` and `transduce tdr cons z xs = foldr (tdr cons) z xs`; so we can call `transduce (mapping g . filtering p) (:) [] alist` (sans the mutable state). – Will Ness Jul 15 '17 at 22:01

As @zerkms says, I don't think using multiple ES6 functions is going to improve your code. Try a loop!

// actual 
let obj = {
  key1: null,
  key2: "Nelly",
  key3: [ "suit", "sweat" ]
};

let arr = [];
let k = Object.keys(obj);

for(let i = 0, len = k.length; i < len; i++) {
  let key = k[i];
  if (obj[key]) {
    arr.push({key: obj[key]});
  }
}
Will Reese
  • In this case the key names won't be `key2` or `key3`; it will be `key` everywhere. You can check this: https://jsfiddle.net/cvmyf4gz/ – brk May 17 '17 at 03:52
  • I don't think that's what @zerkms said. I think the point was that if stuck about how to do something, going back to a plain loop can make it easier to figure out the logic required. Following that up by adding/chaining several array methods in place of the loop can be considered an improvement if it makes the final code easier to follow. – nnnnnn May 17 '17 at 03:52

If you use map, the resulting array will always have the same length as the number of keys in your input, so map alone is not appropriate in this case. My solution is to use a reduce function like so:

var obj = {
  key1: null,
  key2: 'Nelly',
  key3: [ 'suit', 'sweat' ]
} 

var res = Object.keys(obj).reduce((acc, curr) => {
  // if the current key's value is not null,
  // push an object onto the resulting array acc
  if (obj[curr]) {
    acc.push({ [curr]: obj[curr] });
  }
  // if the key's value is null, skip it
  return acc;
}, []);

console.log(res);
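Since ES2019 (well after this answer was written), `Array.prototype.flatMap` offers another way to collapse the filter and map into a single pass; this is just an alternative sketch, not part of the answer above:

```javascript
var obj = {
  key1: null,
  key2: 'Nelly',
  key3: [ 'suit', 'sweat' ]
}

// flatMap drops a key by returning an empty array for it,
// so filtering and mapping happen in one call
var res = Object.keys(obj).flatMap(
  key => obj[key] != null ? [{ [key]: obj[key] }] : []
)

console.log(res)
// => [ { key2: 'Nelly' }, { key3: [ 'suit', 'sweat' ] } ]
```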
Mμ.