Imagine an API with an endpoint called /valuableData. The server clearly states that the limits for the API are 500 requests/s in total and 20 requests/s per person. The server does not check your IP, so I could make 500 requests/s through 500 working proxies without taking the API offline. Now imagine that person A sends 1000 requests/s and person B sends 20 requests/s, which pushes the total past the 500 limit and the API goes offline. As soon as the API comes back online, how many valid answers per second does person A get back, and how many does person B get? How does the server/API handle this? Does person A get back 480 valid answers per second and person B 20? Or does person A get the full maximum of 500 answers/s?
CodePudding user response:
This depends on the protocol.
HTTP, which is normally used for REST, is itself stateless and client-initiated, and therefore falls short of these requirements.
Your scenario would require a protocol which:
- Knows who the clients are/were
- Can push responses after coming online again without client involvement
- Can prioritize responses to client requests
I imagine such a protocol exists, or one could be built on top of WebSocket with the relevant state stored in a database.
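To make the prioritization question concrete, here is a minimal sketch of one possible drain policy, assuming the server somehow queued the incoming requests while it was down and now answers them within the stated limits once it is back up. The client names, numbers, and the two-pass policy are illustrative assumptions, not part of any real API:

```python
from collections import deque

PER_CLIENT_LIMIT = 20   # requests/s per client, from the question
TOTAL_LIMIT = 500       # requests/s overall, from the question

def drain_one_second(backlog: dict[str, deque]) -> dict[str, int]:
    """Answer up to TOTAL_LIMIT queued requests in one second of catch-up."""
    served = {client: 0 for client in backlog}
    total = 0

    # Pass 1: guarantee every client its per-client share first.
    for client, queue in backlog.items():
        take = min(PER_CLIENT_LIMIT, len(queue), TOTAL_LIMIT - total)
        for _ in range(take):
            queue.popleft()
        served[client] += take
        total += take

    # Pass 2 (a "burst" policy choice): hand any leftover total capacity
    # to clients that still have a backlog. Dropping this pass instead
    # enforces the 20/s cap strictly, and person A would get only 20.
    for client, queue in backlog.items():
        take = min(len(queue), TOTAL_LIMIT - total)
        for _ in range(take):
            queue.popleft()
        served[client] += take
        total += take

    return served

# Person A queued 1000 requests while the API was down, person B queued 20.
backlog = {"A": deque(range(1000)), "B": deque(range(20))}
print(drain_one_second(backlog))   # {'A': 480, 'B': 20} under this policy
```

More typically, a plain HTTP API keeps no such state at all and simply rejects the excess requests (e.g. with 429 Too Many Requests), leaving it to each client to retry, so neither the 480 nor the 500 figure is guaranteed.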