Given a query:
SELECT * FROM users;
Entered into two different MySQL clients, it gives radically different elapsed times (or whatever it's called):
1537 rows in set (0.01 sec)
1537 rows in set (0.36 sec)
This mirrors what I see in the (LAMP) application, so I'm trying to debug that, but I don't know specifically what that calculation encompasses.
What is included in the 0.01 sec / 0.36 sec readouts?
Edit: Here is the show profile output for the 0.36 sec query:
mysql> show profile;
--------------
show profile
--------------
+------------------------+----------+
| Status                 | Duration |
+------------------------+----------+
| Starting               | 0.000045 |
| checking permissions   | 0.000005 |
| Opening tables         | 0.000018 |
| After opening tables   | 0.000004 |
| System lock            | 0.000005 |
| table lock             | 0.000007 |
| init                   | 0.000037 |
| Optimizing             | 0.000009 |
| Statistics             | 0.000013 |
| Preparing              | 0.000015 |
| Executing              | 0.000002 |
| Sending data           | 0.010229 |
| End of update loop     | 0.000010 |
| Query end              | 0.000002 |
| Commit                 | 0.000004 |
| closing tables         | 0.000003 |
| Unlocking tables       | 0.000001 |
| closing tables         | 0.000007 |
| Starting cleanup       | 0.000002 |
| Freeing items          | 0.000006 |
| Updating status        | 0.000014 |
| Reset for next command | 0.000003 |
+------------------------+----------+
22 rows in set (0.05 sec)
Those durations don't add up to the 0.36 sec (it was showing 0.2363 for "Sending data" earlier, but that came down; I'm not sure why).
Answer (a summary of the comments above):
The MySQL client time includes the time it takes to transfer query results over the network, but the query profile time does not include network transfer time. Since the client that reports the fast time is on an instance in AWS, and is therefore close to the RDS database, it is expected to show a shorter time.
The other client is on the OP's Mac, so the data must transfer over the WAN. Depending on the size of those 1537 rows in the result set, that transfer could easily account for the time difference.
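One quick way to see how much of this is network rather than server work (the hostname, user, and database below are placeholders) is to time the same query from each machine while throwing the rows away locally; the server does the same work either way, so the gap between the two wall-clock times is almost entirely network latency plus transfer:
$ time mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u myuser -p mydb \
      -e "SELECT * FROM users" > /dev/null
Run that once on the instance in AWS and once on the Mac, and compare the two times.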
Running some tests using ping to check network latency from each client to the RDS server should reveal some of the difference. Here's an example on my laptop, pinging the Google DNS server:
% ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=117 time=19.681 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=19.863 ms
That shows that the latency is under 20 milliseconds. I would expect the latency from your AWS MySQL client to the AWS RDS instance to be very short, probably under 2ms.
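Note that an RDS endpoint may not answer ICMP at all (security groups often don't allow it), so ping against the endpoint itself might just time out. A rough substitute (endpoint and credentials are placeholders) is to time a trivial query, which measures connection setup plus a query round trip and is dominated by network latency:
% time mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u myuser -p -e "SELECT 1" > /dev/null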
Throughput is different from latency, but it also affects perceived network time if the result set is large enough. I suggest using the mysql client on your cloud instance to save your query result to a file, then use a file transfer program on your Mac to download that file, and see how long it takes.
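For example (file names and host names are placeholders), dump the result to a file on the cloud instance, then time the copy of that same data down to the Mac over the WAN:
# on the EC2 instance, close to the RDS database
$ mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u myuser -p mydb \
      -e "SELECT * FROM users" > users.tsv
# on the Mac
% time scp ec2-user@my-ec2-host:users.tsv .
If that copy takes a few tenths of a second, it accounts for most of the gap between the 0.01 sec and 0.36 sec readings.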
Re your comment:
How does anyone use RDS outside AWS if it performs like this?
Unsuccessfully, unless their app is tolerant of high latency. If you need an app to have very low latency between the client and the database, then put the client in the same region as the database.
The same problem would exist, for example, if you had a client in AWS but in a different region from your RDS instance. Like querying an RDS in us-west-2 from a client in us-east-1.
This is not something that can be fixed by getting a more powerful computer or newer network technology. It's constrained by the speed of signal propagation, which is at most the speed of light. And that's a theoretical best case, because networks are not a straight line, and junctions like switches and bridges add some extra latency.
Clients in the same region have much faster access to the database, because the network distance is much shorter. Even if they are in different Availability Zones, as long as they are in the same AWS region, the latency is low enough to satisfy their needs.