Why is DynamoDB sometimes extremely slow?


I am developing an application that uses DynamoDB. The application is not yet open to the public, so only certain employees can access it.

Generally, the application is very fast and there are no performance issues. Sometimes, however, the application is extremely slow.

At first I suspected that the problem came from the React JS application or from the API, but the problem actually comes from DynamoDB.

How did I confirm this?

  1. I stopped Node JS, so the API was offline.
  2. I tested directly in the AWS console, in the "Explore table items" and "PartiQL editor" screens.

DynamoDB was still very slow, and I got this error:

The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API

I cannot understand this, because no application was running.

So why does DynamoDB become slow?
---> Maybe there is a bug in the API. Engineers are working on that.

But why did DynamoDB stay slow when the API was offline?

How can I "restart" and/or "stop" DynamoDB service?

Best regards

CodePudding user response:

The error message is giving you a potential cause for your perceived slowness.

I suspect that what you perceive as slowness is because the throughput of the Global Secondary Index your app is reading from is exhausted, and the app (or the AWS SDK) is performing exponential backoff to retry the API call.
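To see whether retries are what is stretching your response times, you can tune the retry behaviour on the client. A minimal sketch, assuming the AWS SDK for JavaScript v3 (the region value is a placeholder):

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

// With "adaptive" retry mode the SDK rate-limits itself after throttling
// errors; maxAttempts caps how many times a throttled call is retried
// with exponential backoff before the error surfaces to your code.
const client = new DynamoDBClient({
  region: "eu-west-1", // placeholder region
  maxAttempts: 3,      // lower this to fail fast instead of retrying quietly
  retryMode: "adaptive",
});
```

A throttled request that silently goes through several backoff retries can easily look like "DynamoDB is extremely slow" from the application's point of view.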

Aside from the key schema, the one dimension you scale DynamoDB on is throughput. You decide how many requests per second (it's a bit more complicated than that) DynamoDB can handle, and AWS ensures that load can be served. If you go beyond that, AWS throttles your API calls, and you receive errors like the one you quoted.
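If you want to observe the throttling directly rather than infer it from latency, you can catch the specific exception the SDK raises. A sketch, again assuming the v3 SDK; the table, index, and key attribute names are placeholders:

```typescript
import {
  DynamoDBClient,
  QueryCommand,
  ProvisionedThroughputExceededException,
} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

async function queryGsi(): Promise<void> {
  try {
    await client.send(
      new QueryCommand({
        TableName: "MyTable", // placeholder table name
        IndexName: "MyGsi",   // placeholder GSI name
        KeyConditionExpression: "gsiPk = :v", // placeholder key attribute
        ExpressionAttributeValues: { ":v": { S: "some-value" } },
      })
    );
  } catch (err) {
    if (err instanceof ProvisionedThroughputExceededException) {
      // This is throttling on the index, not general slowness.
      console.error("Throttled on the GSI:", err.message);
    } else {
      throw err;
    }
  }
}
```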

GSIs have their own throughput that you manage separately. I suggest you look at the table's CloudWatch metrics to identify where your throughput bottleneck is and adjust the throughput accordingly. If you don't want to deal with throughput at all, switch the table to On-Demand capacity (pay per request) and AWS handles that for you at a small premium.
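Both fixes go through the UpdateTable API that the error message mentions. A hedged sketch of the two options, with placeholder table and index names and example capacity values:

```typescript
import { DynamoDBClient, UpdateTableCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

// Option 1: raise the provisioned throughput of the throttled GSI only.
await client.send(
  new UpdateTableCommand({
    TableName: "MyTable", // placeholder
    GlobalSecondaryIndexUpdates: [
      {
        Update: {
          IndexName: "MyGsi", // placeholder
          ProvisionedThroughput: {
            ReadCapacityUnits: 50, // example values; size them from your metrics
            WriteCapacityUnits: 10,
          },
        },
      },
    ],
  })
);

// Option 2: switch the table (and all of its GSIs) to On-Demand capacity.
await client.send(
  new UpdateTableCommand({
    TableName: "MyTable",
    BillingMode: "PAY_PER_REQUEST",
  })
);
```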

CodePudding user response:

The error message mentions provisioned throughput of a GSI, so it is quite likely that this is your problem:

The DynamoDB GSI documentation (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html#GSI.ThroughputConsiderations) explains:

When you create a global secondary index on a provisioned mode table, you must specify read and write capacity units for the expected workload on that index. The provisioned throughput settings of a global secondary index are separate from those of its base table. A Query operation on a global secondary index consumes read capacity units from the index, not the base table. When you put, update or delete items in a table, the global secondary indexes on that table are also updated. These index updates consume write capacity units from the index, not from the base table.

For example, if you accidentally set a GSI's read provisioning to 1, then you can only do on average one read per second from that GSI. If you do a scan that needs to return 10 items, it may take around 10 seconds to complete, even if no other application is using the table.
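You can also ask DynamoDB to report which index a read is billed against, which makes this easy to verify. A small sketch with placeholder names:

```typescript
import { DynamoDBClient, ScanCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

const out = await client.send(
  new ScanCommand({
    TableName: "MyTable",              // placeholder
    IndexName: "MyGsi",                // placeholder GSI
    ReturnConsumedCapacity: "INDEXES", // break consumed capacity down per index
  })
);

// With ReadCapacityUnits = 1 on the GSI, a scan consuming ~10 read units
// gets throttled and retried until roughly 10 seconds have passed.
console.log(out.ConsumedCapacity);
```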

Please read the aforementioned link for the full story on how to provision secondary indexes in DynamoDB.

If this is not your problem, please update your question with details on the provisioned throughput settings of your base table and its GSI.
