What kind of server is needed to process 200 HTTP requests per second?
Time:10-24
The clients send small requests (under 200 bytes each), and the data the server returns is also small. The server runs Linux + Tomcat and receives about 200 requests per second from different clients. Can a VPS handle this, or do I need a dedicated server? A VPS costs a few hundred dollars a year, while a dedicated server costs a few hundred dollars a month, so I'd like to save money where I can. I'd like to hear about everyone's experience.
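As a rough sanity check using only the numbers from the question, the raw bandwidth involved is tiny; the real question is CPU and I/O per request, not network throughput:

```javascript
// Back-of-envelope bandwidth estimate from the figures in the question.
const requestsPerSecond = 200;
const bytesPerRequest = 200; // stated upper bound per request
const bytesPerSecond = requestsPerSecond * bytesPerRequest;

// 40000 B/s, i.e. roughly 39 KiB/s each way -- negligible for any VPS.
console.log(`${bytesPerSecond} B/s ≈ ${(bytesPerSecond / 1024).toFixed(1)} KiB/s`);
```

So bandwidth alone will not force a dedicated server; whether a VPS suffices depends on what each request actually does.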
CodePudding user response:
This needs an actual load test. If the server accesses files or a database, file or database I/O easily becomes the bottleneck on the server. My guess is that a typical enterprise web application cannot sustain 200 HTTP requests per second on a single server.
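A real pressure test would drive the running Tomcat with a tool such as ab or JMeter. As a minimal sketch of the idea of measuring throughput in isolation, the loop below times a stand-in request handler (`handleRequest` is hypothetical, standing in for the real per-request work) to get a CPU-only requests-per-second figure:

```javascript
// Minimal throughput sketch: time a stand-in request handler.
// handleRequest is hypothetical; real tests must include the file/DB I/O,
// which is usually where the bottleneck appears.
function handleRequest(payload) {
  // Stand-in for real work: parse a small (<200 byte) request body.
  return JSON.parse(payload).id;
}

const payload = JSON.stringify({ id: 42 });
const iterations = 100000;
const start = Date.now();
for (let i = 0; i < iterations; i++) handleRequest(payload);
const elapsedSec = Math.max((Date.now() - start) / 1000, 0.001);
const reqPerSec = iterations / elapsedSec;
console.log(`~${Math.round(reqPerSec)} requests/second (CPU only, no I/O)`);
```

CPU-bound numbers like this are typically far above 200/s; it is the I/O path that decides whether a VPS is enough.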
CodePudding user response:
For a hobby project like yours, even a workstation-class PC should be able to support it.
CodePudding user response:
Since ajax polling is taxing on the server, I decided to use HTML5 WebSocket. But with WebSocket I need to solve the problem of reconnecting after the network drops. Articles online suggest a heartbeat-based reconnection mechanism, but I don't think that plan is ideal: if the heartbeat interval is short, the load on the server is still high and I might as well have used ajax polling, while with a longer interval the real-time responsiveness suffers. I think the server side is generally stable and disconnections mostly happen on the client side, so I came up with this approach: the client periodically checks whether the network is up, but the probe requests are not sent to my server (which is too weak); instead, it requests a fairly small resource from some well-known website. If the request succeeds, the network is fine and the WebSocket is almost certainly still alive; if it fails, the network is down, the WebSocket is certainly dead, and the client initiates a reconnection to my server. In effect, the well-known website bears the 200-probes-per-second load instead of me. What do you think of this approach?
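A minimal sketch of the reconnection idea described above, under stated assumptions: `probe` stands in for fetching a small resource from a well-known site, and `reconnect` for re-opening the WebSocket to your own server. Both are injected callbacks, not an established API, so the decision logic can be tested without a network:

```javascript
// Sketch of the proposed scheme: periodically probe a third-party
// resource; only when the network was down and comes back up do we
// reconnect the WebSocket to our own server.
// `probe` (async, returns true if network is up) and `reconnect`
// are hypothetical injected callbacks.
function makeConnectivityWatcher(probe, reconnect) {
  let networkUp = true;
  return async function checkOnce() {
    const upNow = await probe();           // e.g. fetch a tiny static file
    if (!networkUp && upNow) reconnect();  // network came back: reopen socket
    networkUp = upNow;
  };
}

// In a browser this would run on a timer, e.g.:
// const check = makeConnectivityWatcher(probe, reconnect);
// setInterval(check, 30000);
```

Note that in a browser, the `online`/`offline` window events can signal connectivity changes without probing any third party at all, which avoids shifting load onto someone else's site.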
CodePudding user response:
Personally I think the core of high concurrency is to use caching sensibly and reduce unnecessary I/O. So, depending on the kind of service you provide, set up caches at the right layers. If the 200 requests are all identical, you can cache the result after the first request; the remaining 199 are then served very quickly and the load on the server is small. But if the 200 requests are all different and each must compute its own result, you need to start considering distributed architectures and message-queue techniques. It still depends on the specific business. This blog post is quite good, worth a read: https://blog.csdn.net/DreamWeaver_zhou/article/details/78587580
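The cache-first idea can be sketched as a simple memoizing wrapper. The names here are assumptions for illustration: `computeResult` stands in for whatever expensive file/database work the real service does.

```javascript
// Cache-first request handling: identical requests after the first
// are served from an in-memory Map instead of recomputing.
const cache = new Map();
let computations = 0; // counts how often the expensive path runs

function computeResult(key) { // stand-in for real DB/file work
  computations++;
  return `result-for-${key}`;
}

function handle(key) {
  if (!cache.has(key)) cache.set(key, computeResult(key));
  return cache.get(key);
}

// 200 identical requests: only the first one does real work.
for (let i = 0; i < 200; i++) handle("same-request");
console.log(computations); // 1
```

A production cache would also need an eviction or TTL policy so stale results don't live forever, but the request pattern decides everything: this helps only when many requests share a key.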