Firebase Android: push().getKey() but insert element at beginning, not at the end


String key = databaseReference.child("Photo").push().getKey();
// ...
databaseReference.child("Photo").child(key).setValue(post);

As far as I know, Firebase generates a statistically guaranteed unique ID here, and the IDs also have an order. I checked, and they are indeed ordered: new elements are always inserted at the end of the data structure. For example:

-MqWU0En0OezPglE6SfL: true
-MqWUFoFqhIaT6fSk-DT: true
-MrmJaNZ8lbcOFPodTBO: true

becomes:

-MqWU0En0OezPglE6SfL: true
-MqWUFoFqhIaT6fSk-DT: true
-MrmJaNZ8lbcOFPodTBO: true
-Mrq162Eba8KHgc_B0lh: true (New Element was inserted here at the end!)

I need to have it the other way around. I need to have it inserted at the beginning. Is there a way to do that? Maybe something like push().getKeyAtBeginning()? I couldn't find anything.


If not, what do you suggest?

Under the Photo node there are thousands of photos. I know that I could add a child negativetimestamp to each photo with the value -1 * currentMillis and then just use orderByChild("negativetimestamp"). But I don't think this would be a good idea, since Firebase would then have to order thousands or even tens of thousands of photos.
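The negative-timestamp idea can be sketched in plain Java, without the Firebase SDK (the class and method names here are illustrative, and negativetimestamp is the hypothetical child name from the question). Because orderByChild sorts ascending, storing -1 * currentMillis makes the newest photo sort first:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class NegativeTimestampSketch {
    // The value the question proposes storing under each photo's
    // "negativetimestamp" child: -1 * upload time in millis.
    public static long negativeTimestamp(long uploadMillis) {
        return -1 * uploadMillis;
    }

    // Simulates orderByChild("negativetimestamp"): Firebase sorts the child
    // values ascending, so the newest upload (smallest, i.e. most negative,
    // value) comes first.
    public static List<Long> orderedAscending(List<Long> negativeTimestamps) {
        List<Long> sorted = new ArrayList<>(negativeTimestamps);
        Collections.sort(sorted);
        return sorted;
    }
}
```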

CodePudding user response:

Unfortunately, there is no way you can use something like this:

push().getKeyAtBeginning()

You cannot add a node to a specific index. Besides that, there is also no way you can change the order of the nodes in the Firebase Console. All the nodes are by default ordered by key.

If you need to order your elements by specific criteria, then your negative-timestamp idea is the way to go. I have also answered a similar question:

If there are thousands or even tens of thousands of photos, then you should consider getting them in smaller chunks. This approach is called pagination.
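Key-based pagination can be sketched in plain Java (no Firebase SDK; in a real app this would be a query along the lines of orderByKey().startAt(lastKey).limitToFirst(pageSize), and the class and method names here are illustrative). The idea: fetch a small page of keys, remember the last key of the page, then request the next page strictly after it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedSet;

public class KeyPagination {
    // Returns up to pageSize keys that sort strictly after lastKey
    // (pass null for the first page). allKeys stands in for the
    // key-ordered children of the Photo node.
    public static List<String> nextPage(SortedSet<String> allKeys,
                                        String lastKey, int pageSize) {
        List<String> page = new ArrayList<>();
        for (String key : allKeys) {
            // Skip everything up to and including the last key we saw.
            if (lastKey != null && key.compareTo(lastKey) <= 0) {
                continue;
            }
            page.add(key);
            if (page.size() == pageSize) {
                break;
            }
        }
        return page;
    }
}
```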

CodePudding user response:

I figured out a way. I don't know if this is good style, but it works for me.


Firebase uses a timestamp to make the generated keys chronologically ordered. I wrote a method that inverts a key lexicographically (as a bijection), so every order relation between two keys is preserved, but reversed: if key1 > key2 before inversion, then invert(key1) < invert(key2) afterwards.

I noticed that all keys ever generated in my project consist of a-z, A-Z, 0-9, -, _, so I wrote the method only for those characters. It is not coded very efficiently, but every operation is a simple String comparison and my keys are 17 characters long, so there are at most 17 * 64 = 1088 String comparisons, which is cheap. If you want it to be more efficient, use two HashMaps instead of CHAR_INT_REPR_SPEICHERSTRING.

// NOTE: the characters must be listed in ASCII order ('-' before '_'); only
// then does inverting the index also reverse String.compareTo order, which is
// how Firebase orders keys. (Listing '_' before '-' would break the reversal
// for keys containing 'y', 'z', '-' or '_' past the first character.)
public static final String CHAR_INT_REPR_SPEICHERSTRING = "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz";

public static String invertiereKey(String key) {
    String invertiert = "";

    for (int i = 0; i < key.length(); i++) {
        String c = key.charAt(i) + "";
        invertiert += invertiereChar(c);
    }

    return invertiert;
}

public static String invertiereChar(String c) {
    int intRepr = gibCharIntRepr(c);

    // invert the index: 0..63 -> 63..0
    int neueRepr = 64 - intRepr - 1;

    return gibChar_FromRepr_At(neueRepr);
}

public static int gibCharIntRepr(String c) {
    if (c.length() != 1) {
        return -1;
    }

    for (int i = 0; i < CHAR_INT_REPR_SPEICHERSTRING.length(); i++) {
        String cTemp = CHAR_INT_REPR_SPEICHERSTRING.charAt(i) + "";
        if (cTemp.equals(c)) {
            return i;
        }
    }

    return -1;
}

public static String gibChar_FromRepr_At(int at) {
    return CHAR_INT_REPR_SPEICHERSTRING.charAt(at) + "";
}

And I use it this way:

String key = databaseReference.child("Photo").push().getKey();
key = invertiereKey(key);
// ...
databaseReference.child("Photo").child(key).setValue(post);

This way the order is indeed the way I need it to be. And since the mapping is bijective, it should work without bugs, even on very large datasets.
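To sanity-check the approach, here is a compact standalone version of the inversion (class and method names are illustrative; note that the alphabet is written in ASCII order, '-' before '_', so that inverting the index also reverses String.compareTo order), together with the properties it should satisfy: it reverses the order of any two keys, and applying it twice gives back the original key.

```java
public class InvertCheck {
    // Push-key alphabet in ASCII order; with this ordering, the index
    // inversion below also reverses String.compareTo order.
    static final String ALPHABET =
            "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz";

    // Maps each character at alphabet index i to the one at index 63 - i.
    // This is an involution: invert(invert(key)).equals(key).
    public static String invert(String key) {
        StringBuilder sb = new StringBuilder(key.length());
        for (int i = 0; i < key.length(); i++) {
            int idx = ALPHABET.indexOf(key.charAt(i));
            sb.append(ALPHABET.charAt(ALPHABET.length() - 1 - idx));
        }
        return sb.toString();
    }
}
```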
