The Cloud Firestore best practices docs for Realtime updates read:
For the best snapshot listener performance, keep your documents small and control the read rate of your clients. The following recommendations provide guidelines for maximizing performance. Exceeding these recommendations can result in increased notification latency.
...
Keep the rate of documents the database pushes to an individual client under 1 document/second.
Keep the rate of documents the database pushes to all clients under 1,000,000 documents/second.
Keep the maximum document size downloaded by an individual client under 10 KiB/second.
Keep the maximum document size downloaded across all clients under 1 GiB/second.
Do these limits/suggestions apply the first time a query returns a snapshot, i.e., the first time a snapshot is created via the onSnapshot() method, as well? Or do they apply only to subsequent updates to the snapshot?
EDIT: If these apply to the first snapshot as well, what would the adverse effects be, given that limits like 10 KiB/second and especially 1 document/second would almost always be exceeded on the first snapshot due to Firestore's low latency?
The question Understanding Firestore's recommended max client push rate of 1 document/second was probably too specific and hence didn't answer my question.
CodePudding user response:
firebaser here
This section of the documentation is about the data that the Firestore servers send to all connected listeners: it covers the initial reads, the updates to those listeners, and one-time reads.
CodePudding user response:
About 10 KiB/second:
Imagine you have 1,000,000 users online, all listening for changes to a single document. If that document is larger than 10 KiB and you change it, it will take the database more than 10 seconds to push the change to all clients, because the database instance has an effective outbound bandwidth of 1 GiB/second (1,000,000 × 10 KiB ≈ 10 GiB). If those users listen to a 300 KiB document instead, a single write will tie up the entire database for around 5 minutes, because that is roughly how long it takes to send 300 GiB of data (1,000,000 × 300 KiB) to the users. Meanwhile, users who just want to open some other, separate document will still get it, but only after that backlog clears. (In practice, listeners might have a lower priority than simple "get" requests; this is just a simplified example.)
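The arithmetic in the example above can be sketched like this. The 1 GiB/second figure is the documented aggregate push limit; secondsToFanOut is a hypothetical helper for this back-of-the-envelope estimate, not part of any SDK:

```javascript
// Back-of-the-envelope fan-out math for snapshot listeners.
// Assumption: the servers push at most ~1 GiB/second to all
// clients combined (the documented aggregate limit).

const KiB = 1024;
const GiB = 1024 * 1024 * 1024;
const AGGREGATE_RATE = 1 * GiB; // bytes/second pushed to all clients

function secondsToFanOut(docSizeBytes, listenerCount) {
  // Total bytes the server must send for one write to this document.
  const totalBytes = docSizeBytes * listenerCount;
  return totalBytes / AGGREGATE_RATE;
}

// 1,000,000 listeners on a 10 KiB document: ~9.5 seconds.
console.log(secondsToFanOut(10 * KiB, 1_000_000).toFixed(1));

// 1,000,000 listeners on a 300 KiB document: ~286 seconds (~5 minutes).
console.log(secondsToFanOut(300 * KiB, 1_000_000).toFixed(0));
```

The 300 KiB case lands at roughly 286 seconds, which is where the "around 5 minutes" in the example comes from.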
So if you have some data that all users need to listen to, make several copies of it and spread the listeners across those copies. When you want to update that data, update the copies one by one with a time gap between writes, and you won't block the database.
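A minimal sketch of that copy/fan-out idea, assuming you maintain identical documents named "config-0" through "config-9" yourself; pickCopy, updateAllCopies, and writeCopy are hypothetical helpers, not Firestore APIs:

```javascript
const COPY_COUNT = 10;

// Deterministically spread clients across the copies so each copy
// serves roughly 1/COPY_COUNT of the listeners.
function pickCopy(clientId, copyCount = COPY_COUNT) {
  let hash = 0;
  for (const ch of clientId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  }
  return `config-${hash % copyCount}`;
}

// Each client then attaches its listener to its own copy instead of
// one shared document, e.g. with the Firestore Web SDK:
//   onSnapshot(doc(db, "shared", pickCopy(clientId)), snap => { ... });

// To update all copies without saturating the push bandwidth, write
// them one at a time with a pause in between.
async function updateAllCopies(newData, delayMs = 1000) {
  for (let i = 0; i < COPY_COUNT; i++) {
    // await writeCopy(`config-${i}`, newData); // your Firestore write
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

Because pickCopy hashes the client ID, the same client always reads the same copy, so it sees a consistent document across sessions while the load spreads evenly.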