Filtering an Object with VueJS slows my application down


I'm trying to set up a filter for a search bar.

However, when I use the setup below and type something in the search bar, the page hangs for about a second before the filtered entries appear. Is this a rendering issue, or is this simply the wrong approach for what I want? The object is quite big, with about 2,500 entries.

I have a big Object with keys like this:

{
  key1: {
    title: 'title',
    description: 'description',
    footerContent: 'footerContent'
  },
  key2: {
    title: 'title',
    description: 'description',
    footerContent: 'footerContent'
  },
  key3: {
    title: 'title',
    description: 'description',
    footerContent: 'footerContent'
  }
}

And I'm trying to filter it with this function in computed:

filteredItems() {
  if (this.searchQuery) {
    return Object.fromEntries(
      Object.entries(this.items).filter(([key]) => {
        return this.searchQuery
          .toLocaleLowerCase()
          .split(" ")
          .every((word) => key.toLocaleLowerCase().includes(word))
      })
    )
  }
  return this.items
}

Then doing a v-for in my template:

<tbody>
  <tr v-for="(item, key) in filteredItems" :key="key">
    <td>{{ key }}</td>
    <td><input type="text" v-model="item.title" /></td>
    <td><input type="text" v-model="item.description" /></td>
    <td><input type="text" v-model="item.footerContent" /></td>
  </tr>
</tbody>

CodePudding user response:

It's not the filtering that's expensive, but the rendering. In short, the problem you're having is what led to the development of "autocomplete" (a.k.a. "typeahead") in web design. What you're doing (re-rendering a 2.5k-row HTML table on every keystroke) would be just as slow in React, Angular, Svelte or vanilla JS, if not slower.

In this particular case, rendering 2.5k rows means rendering 12.5k DOM nodes. And, because it's an HTML table, every cell influences the size of its respective row and column. Rendering tables is known to be one of the most expensive operations in DOM layout.
To be fair, I'm not convinced a grid would be significantly faster with the same contents (but that's another discussion, for another question). What I will mention, though, is that both Google and Microsoft preferred <div>s (with display: flex) when they implemented their online spreadsheet apps.

Getting back to our case: in theory, the app should only render what fits in the viewport. One of the simplest fixes would be to implement pagination, which limits the number of rows rendered at any one time. This will likely make filtering apply instantly, and it's low effort.

To be clear: pagination here should only be a visual gimmick (keep all items in memory and apply filtering to all of them, but don't display more than a slice of pageSize elements).
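As a rough sketch of what that could look like (the currentPage, pageSize and pagedItems names are placeholders, not from your code; filteredItems stays exactly as it is and only the rendered slice changes):

data() {
  return {
    currentPage: 1,   // placeholder: which page is shown
    pageSize: 50      // placeholder: how many rows are rendered at once
  }
},

computed: {
  // filteredItems stays as-is; this only slices what gets rendered
  pagedItems() {
    const entries = Object.entries(this.filteredItems)
    const start = (this.currentPage - 1) * this.pageSize
    return Object.fromEntries(entries.slice(start, start + this.pageSize))
  },
  pageCount() {
    return Math.ceil(Object.keys(this.filteredItems).length / this.pageSize)
  }
}

The v-for would then iterate over pagedItems instead of filteredItems, and a couple of buttons (or a page number input) would update currentPage.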


If, for whatever reason, you decide pagination doesn't cut it for your scenario and you'd rather present users with a single scrollable table, you could implement what's known as a virtual scroller. The bad news is that this typically implies doing the scrolling math yourself, using lower-level DOM APIs. The good news is that you might find packages suited for the job.

The virtual scroller concept is quite simple: calculate the size of the full table/container and render it completely empty, just to obtain the scrollbars (let's call this element the scroller, or SC). Then create another container (let's call it the display window, or DW), placed on top of SC, in charge of rendering the currently visible slice.

When scrolling SC, update contents of DW accordingly. That's it!

If all rows have equal height (and width, if you also need horizontal scrolling), the necessary calculation to determine DW contents is fairly simple. Here's a demo I wrote a few months back, with (in theory) 10 billion cells (100k × 100k).

I won't go into details, but if you inspect the source while scrolling, you'll see how it works.
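In case it helps, here is a minimal sketch of that equal-height math (this is not the demo's code; rowHeight and viewportHeight are assumed values you would measure or configure yourself):

// SC's inner height is simply totalRows * rowHeight, which produces the scrollbar.
const rowHeight = 32        // px, assumed constant for every row
const viewportHeight = 600  // px, height of the display window (DW)

function getVisibleSlice(scrollTop, totalRows) {
  const firstRow = Math.floor(scrollTop / rowHeight)
  const visibleCount = Math.ceil(viewportHeight / rowHeight) + 1 // +1 covers a partially visible row
  const lastRow = Math.min(firstRow + visibleCount, totalRows)
  // Offset DW by offsetY (e.g. with translateY) so its rows line up with the scroll position.
  return { firstRow, lastRow, offsetY: firstRow * rowHeight }
}

On every scroll event of SC you recompute this slice and render only rows firstRow..lastRow inside DW.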


If your rows don't have equal heights, and/or if your columns don't have pre-determined widths, you need to render the table in its entirety at least once. Why? Because you need the browser to auto-adjust the column widths based on the full contents, and then you set those widths in stone. If you don't, the column widths will vary as you scroll, based on the current contents of DW.
That effect should be avoided; it makes the table feel unnatural. The good part is that you only need to do this once, at startup. When the initial render is complete, you save all column widths and row heights, and from then on you can easily calculate the contents of DW at any given scroll position.
Be wary of displaying loading indicators during this initial render phase: creating DOM elements is render-blocking, so your animation options are quite limited. But that's a separate question, and the solutions vary based on the specifics of the table. Besides, that answer would probably be as long as this one, if not longer.

So here are the steps for making a virtual scroller with custom cell sizes:

  • On mounting the table, create a parsing container. This element exists only to calculate (and store) the row heights and their scrollTops, plus the column widths. The parsing container should have the same width as the table container, but it should not be displayed and should not be part of the flow content (it shouldn't influence the layout at all). We can achieve this either with absolute positioning and a negative z-index, or with a height-less wrapper with hidden overflow.
  • After storing the row heights, their scrollTops and the column widths, we could discard this element. That's what I typically do. As I'm writing this, I realise keeping it rendered until the component unmounts might actually be smarter, since it doesn't affect the layout and re-building it is even more expensive.
  • The rest is the same as in the example above, except we need to update the formulas for determining the current first visible row by querying the stored row scrollTops (see the sketch after this list).
  • The above operation should be contained in a method, as it needs to be repeated on the window.resize event and if/when the data collection changes.
  • For smooth UX, and to avoid confusing scroll jumps when the layout orientation switches, we should also keep track of the current first row in DW. This way we can scroll back to it on window resize and on data changes.
  • Last, but not least, deciding when the saved scrollTops should be invalidated is probably important, too.
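Here is a rough sketch of the measuring step and the lookup mentioned above (the function names and the "tr" selector are placeholders; adapt them to however the parsing container is built):

// Run once the parsing container has rendered every row with its real contents.
function measureRows(parsingContainer) {
  const rowHeights = []
  const rowTops = []   // cumulative scrollTop at which each row starts
  let top = 0
  for (const row of parsingContainer.querySelectorAll("tr")) {
    rowHeights.push(row.offsetHeight)
    rowTops.push(top)
    top += row.offsetHeight
  }
  return { rowHeights, rowTops, totalHeight: top }
}

// Find the first visible row for a given scrollTop: the last index whose
// stored top is <= scrollTop (binary search, since rowTops is sorted).
function firstVisibleRow(rowTops, scrollTop) {
  let lo = 0
  let hi = rowTops.length - 1
  while (lo < hi) {
    const mid = Math.ceil((lo + hi) / 2)
    if (rowTops[mid] <= scrollTop) lo = mid
    else hi = mid - 1
  }
  return lo
}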

You should now have a clearer understanding of the mechanics behind your problem, and you should be able to make an informed decision on how to tackle it, depending on your use case. If you decide you do want to implement a virtual scroller and find it too challenging, I might be able to assist. But first, give it a go yourself, create a prototype (on codesandbox.io or similar) and see if/where you get stuck.

CodePudding user response:

Some optimizations can be done to improve the performance of that search. I don't know if they will solve your issue completely, but they will certainly help.

The first is not to trigger the search on every keystroke: debounce the search query (you can import debounce from Lodash) so the filtering only runs once the user pauses typing.

The second optimization is to avoid calling searchQuery.toLocaleLowerCase().split() for every entry; it returns the same value no matter which object you are iterating over, so compute it once before filtering.

import debounce from "lodash/debounce"

// Debounce the query itself (via a watcher) rather than the computed,
// because a computed must return its value synchronously.
data() {
  return {
    // mirrors searchQuery, but only updates once the user pauses typing
    debouncedSearchQuery: ""
  }
},

watch: {
  searchQuery: debounce(function (value) {
    this.debouncedSearchQuery = value
  }, 300)
},

computed: {
  filteredItems() {
    if (!this.debouncedSearchQuery) {
      return this.items
    }

    // split the query once, outside the filter callback
    const searchQueryAsArray = this.debouncedSearchQuery
      .toLocaleLowerCase()
      .split(" ")

    return Object.fromEntries(
      Object.entries(this.items).filter(([key]) =>
        searchQueryAsArray.every((word) => key.toLocaleLowerCase().includes(word))
      )
    )
  }
}
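The important detail is that the debounce wraps the query update, not filteredItems itself: computed properties need to return a value synchronously, so a debounced computed would hand the template undefined between invocations.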