How to optimize real-time database calls when getting 10 random users?


I read this:

There it says that to optimize performance we have to duplicate (denormalize) the data. So I created a node in the database called users that holds all the user objects:

users
  |
  --- uid
       |
       --- name: "john"
       |
       --- email: "[email protected]"
       |
       --- age: 22

And a node that holds only the uids of the users:

uids
  |
  --- uid: true
  |
  --- uid: true

I have over 1250 users. What I need is to select 10 random users and display their data in the UI. I do that by reading the uids node and generating 10 random uids. Then, for each one, I create a database request to read that user's details. The problem is that each request takes about 1 second, so getting 10 users takes around 10 seconds. That's too much. How can I optimize this process?
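
For context, userIdsList in the code below is built by reading the uids node once and collecting its keys, roughly like this (uidsRef here is just my DatabaseReference pointing to the uids node, and await() comes from the kotlinx-coroutines-play-services library):

// Read the whole uids node once and keep only the keys, i.e. the user ids.
val uidsSnapshot = uidsRef.get().await()
val userIdsList: List<String> = uidsSnapshot.children.mapNotNull { it.key }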


Code:

while (userList.size < 10) {
    // Pick a random index into the list of ids read from the uids node.
    val randomId = Random.nextInt(userIdsList.size)
    // Each call suspends until this single read completes.
    val randomUser = usersRef.child(userIdsList[randomId]).get().await().getValue(User::class.java)
    if (!userList.contains(randomUser)) {
        userList.add(randomUser)
    }
}

It looks to me like .await() fetches the users one after another rather than in parallel. Any ideas?

CodePudding user response:

You are right: the users are fetched one by one, not in parallel. To achieve parallelism you can use the asDeferred() extension from the kotlinx-coroutines-play-services library together with awaitAll():

val tasks: MutableList<Deferred<DataSnapshot>> = mutableListOf()
for (i in 1..10) {
    // Pick a random index across the whole list of ids.
    val randomId = Random.nextInt(userIdsList.size)
    // asDeferred() converts the Task into a Deferred without waiting for it,
    // so all 10 reads are started immediately.
    val deferredTask = usersRef.child(userIdsList[randomId]).get().asDeferred()
    tasks.add(deferredTask)
}

tasks.awaitAll().forEach { dataSnapshot ->
    val randomUser = dataSnapshot.getValue(User::class.java)
    if (!userList.contains(randomUser)) {
        userList.add(randomUser)
    }
}

Using the asDeferred() extension function, we convert each Task into a Deferred.

Then, using the awaitAll() extension function on Collection<Deferred<T>>, we wait until all the users are loaded, with all the reads running in parallel.
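
For completeness, here is a sketch of how the whole thing could be wrapped in a single suspend function. It is only a sketch: User, usersRef and userIdsList are the names from the question, and loadRandomUsers is a made-up name. Picking distinct ids up front also guarantees the same user is never requested twice, whereas the snippet above deduplicates only after the reads and can therefore end up with fewer than 10 users.

import com.google.firebase.database.DataSnapshot
import com.google.firebase.database.DatabaseReference
import kotlinx.coroutines.Deferred
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.tasks.asDeferred

suspend fun loadRandomUsers(
    usersRef: DatabaseReference,
    userIdsList: List<String>,
    count: Int = 10
): List<User> {
    // Take `count` distinct random ids so the same user is never fetched twice.
    val randomIds = userIdsList.shuffled().take(count)

    // Start all the reads at once; asDeferred() converts each Task into a Deferred.
    val tasks: List<Deferred<DataSnapshot>> = randomIds.map { id ->
        usersRef.child(id).get().asDeferred()
    }

    // awaitAll() suspends until every read has completed; the reads run in parallel.
    return tasks.awaitAll().mapNotNull { it.getValue(User::class.java) }
}

Called from a coroutine (for example inside viewModelScope.launch { ... }), this should take roughly the time of the slowest single read rather than the sum of all ten.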
