Lodash v4.0.0: Not All Bugs are Created `isEqual()`

25 Jan 2016 by ggreer

On January 13th (two weeks ago), Lodash v4.0 was released. After waiting a week (for others to shake out the bugs), I branched our repos, bumped the lodash dependency in them, and fixed compatibility issues. I tried lodash-migrate, but it wasn’t particularly useful. lodash-migrate works by running all lodash functions twice (once with the old version and once with the new), then logging if the results are different. That means any code with side effects will trigger a false positive or just break. Instead, I read the v4.0 changelog and reviewed every incompatible call in our codebase. (This wasn’t as hard as it sounds. There were only a few dozen.) After reviewing the changes, running tests, and manually poking around on staging, I deployed the new release to prod. Success!
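To see why running everything twice is a problem, here’s a minimal sketch. The names (`nextId`, `compareRuns`) are hypothetical and this is not lodash-migrate’s actual implementation; it just illustrates how a side-effecting function defeats the run-both-and-compare approach:

```javascript
// Shared mutable state: each call to nextId changes it.
let counter = 0;
function nextId(prefix) {
  counter += 1;          // side effect
  return prefix + counter;
}

// A shim in the spirit of lodash-migrate: call the "old" and "new"
// implementations with the same arguments and compare the results.
function compareRuns(fn, ...args) {
  const oldResult = fn(...args); // stands in for the old lodash call
  const newResult = fn(...args); // stands in for the new lodash call
  return oldResult === newResult;
}

console.log(compareRuns(nextId, "buf-")); // false: "buf-1" !== "buf-2"
```

The two calls return different values even though nothing about lodash changed, so the shim reports a spurious mismatch.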

…or maybe not. Soon after deploying, I noticed increased load on our back-end servers. When I ssh’d in to diagnose the issue, I saw that our back-end was replicating data like crazy. This didn’t endanger any information, but it was incredibly wasteful. Digging deeper, I noticed that if a single buffer in a workspace had changed, the entire contents of the workspace were copied in the next replication pass. Clearly, the lodash upgrade was responsible for this behavior, but how? I went back to my editor and looked at the code responsible:

...
let local_buf = local_bufs[rbuf.id];
rbuf.deleted = !!rbuf.deleted;
if (!local_buf || local_buf.md5 !== rbuf.md5) {
  log.debug("to_fetch: %s/%s", workspace_id, rbuf.id);
  to_fetch.push(rbuf);
  buf_cb();
  return;
}
if (_.isEqual(local_buf, rbuf)) {
  log.debug("Local copy of %s/%s matches remote. Not fetching.", workspace_id, rbuf.id);
  buf_cb();
  return;
}
log.log("Local copy of %s/%s differs from remote. Fetching.", workspace_id, rbuf.id);
log.debug("local: %s remote %s", _.keys(local_buf), _.keys(rbuf));
if (settings.log_data) {
  log.debug("local:  %s", JSON.stringify(local_buf));
  log.debug("remote: %s", JSON.stringify(rbuf));
}
to_fetch.push(rbuf);
...

I could reproduce the issue on staging, so I enabled debug logging and data logging. I immediately saw that not only did rbuf and local_buf have the same keys, they also had the same data! What was going on?! The only thing that had changed was lodash, so I opened a REPL and double-checked the behavior of _.isEqual():

> require("lodash").isEqual({"a": "a", "b": "b"}, {"a": "a", "b": "b"})
true
> require("lodash").isEqual({"a": "a", "b": "b"}, {"b": "b", "a": "a"})
false

WTF? I couldn’t believe it. I ran similar object comparisons a dozen times. I triple-checked the JS objects I was comparing. I had to be making a mistake. I thought, “This can’t be a problem with lodash. It’s so basic.” I double-checked the ECMA spec to make sure object keys aren’t ordered. (They’re not.) I installed lodash 3 and re-ran the offending line in a REPL:

> require("lodash").isEqual({"a": "a", "b": "b"}, {"b": "b", "a": "a"})
true

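For contrast, here is a sketch of the key-order-insensitive comparison that a deep-equality check is supposed to perform for plain objects. It only handles plain (possibly nested) objects, not arrays, Dates, or other types, and is illustrative rather than a replacement for _.isEqual:

```javascript
// Compare two values for deep equality, ignoring object key order.
// Handles primitives and nested plain objects only.
function plainEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
    return false;
  }
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  // Key order must not matter: look each key up by name, not position.
  return aKeys.every((k) => k in b && plainEqual(a[k], b[k]));
}

console.log(plainEqual({ a: "a", b: "b" }, { b: "b", a: "a" })); // true
```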
At this point, I was pretty sure this was a bug in lodash. I went to lodash’s GitHub project and prepared a bug report. Out of habit, I searched for existing issues first. The result? An issue that had been closed 10 days earlier. A fix had been committed, but a new release hadn’t been tagged. :(

To fix the issue in production, I applied the patch to lodash in each project’s node_modules, then deployed again. Not ideal, but at least the bug didn’t affect Floobits anymore.
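Patching a dependency in place is crude but mechanical. The general shape (with a toy module and patch file standing in for lodash and the actual fix) looks like this:

```shell
# Set up a stand-in for the broken installed dependency.
mkdir -p node_modules/demo
printf 'var broken = true;\n' > node_modules/demo/index.js

# fix.patch stands in for the upstream fix commit exported as a diff.
cat > fix.patch <<'EOF'
--- a/index.js
+++ b/index.js
@@ -1 +1 @@
-var broken = true;
+var broken = false;
EOF

# Apply the patch directly inside node_modules (-p1 strips the a/ prefix).
patch -d node_modules/demo -p1 < fix.patch
cat node_modules/demo/index.js
```

The obvious downside is that the change evaporates on the next `npm install`, which is why this only works as a stopgap until a fixed release is tagged.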

Despite my comment on the issue, a new release still hasn’t been tagged. In my opinion, this is a critical bug. Object comparison is simply broken in lodash 4.0.0. I’m not sure why there’s been such a delay. While the code has been committed, a bug isn’t truly fixed until it’s deployed.

About the Author

I’m Geoff Greer, CEO & co-founder of Floobits.

About Floobits

Floobits lets you collaborate on code like you're in the same room. Think Etherpad or Google Docs, but in Sublime Text, Vim, Emacs, or IntelliJ.

If you're interested, sign up.