Moving from t2 to t3 (or t4) instance times out


I have an EB environment with t2.xxxxx instances, and I wish to change to t3 (or any other instance type in the future).

Running eb config opens up my config file.

I change the InstanceType from t2.small to t3.small and the InstanceTypes from t2.small, t2.medium to t3.small, t3.medium.
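
After that change, the relevant part of the saved config looks roughly like this (other keys unchanged; same format as the redacted config further down):

settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: t3.small
  aws:ec2:instances:
    InstanceTypes: t3.small, t3.medium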

I have checked that ENA is enabled using the AWS CLI:

aws ec2 describe-instances --instance-ids i-xxxx --query "Reservations[].Instances[].EnaSupport"
aws ec2 describe-images --image-id ami-xxxx --query "Images[].EnaSupport"

which both return [ true ]

Error:

Printing Status:
Environment update is starting.      
Updating environment xxxx's configuration settings.
Created Auto Scaling launch configuration named: xxxx
Auto Scaling group update progress: Rolling update initiated. Terminating 1 obsolete instance(s) in batches of 1, while keeping at least 1 instance(s) in service. Waiting on resource signals with a timeout of PT30M when new instances are added to the autoscaling group.
Auto Scaling group update progress: Temporarily setting autoscaling group MinSize and DesiredCapacity to 2.
Auto Scaling group update progress: New instance(s) added to autoscaling group - Waiting on 1 resource signal(s) with a timeout of PT30M.
Still waiting for the following 1 instances to become healthy: [i-xxxx].
                                                                      
ERROR: TimeoutError - The EB CLI timed out after 10 minute(s). The operation might still be running. To keep viewing events, run 'eb events -f'. To set timeout duration, use '--timeout MINUTES'.
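
(Side note: the error's own suggestion can be followed by re-running the update with a longer CLI-side wait, assuming your EB CLI version accepts --timeout on this command as the message states; the 30 below is an arbitrary value. This only changes how long the EB CLI waits, not the PT30M resource-signal timeout on the Auto Scaling side.)

# wait up to 30 minutes instead of the default 10
eb config --timeout 30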

Edit:

Using eb events -f I can see further logs:

ERROR   Service:AmazonCloudFormation, Message:Stack named 'awseb-x-xxxx-xxxx' aborted operation. Current state: 'UPDATE_ROLLBACK_IN_PROGRESS'  Reason: null
INFO    Auto Scaling group update progress: Failed to receive 1 resource signal(s) for the current batch.  Each resource signal timeout is counted as a FAILURE.
ERROR   Updating Auto Scaling group named: awseb-e-xxxx-xxxx-AWSEBAutoScalingGroup-xxxx failed Reason: Received 0 SUCCESS signal(s) out of 1.  Unable to satisfy 100% MinSuccessfulInstancesPercent requirement
ERROR   Failed to deploy configuration.
INFO    Created Auto Scaling launch configuration named: awseb-e-xxxx-xxxx-AWSEBAutoScalingLaunchConfiguration-xxxx
INFO    Auto Scaling group update progress: Rolling update initiated. Terminating 1 obsolete instance(s) in batches of 1, while keeping at least 1 instance(s) in service. Waiting on resource signals with a timeout of PT30M when new instances are added to the autoscaling group.

Edit:

See a redacted version of my config below:

ApplicationName: xxxx
DateUpdated: 2022-02-24 12:31:13 00:00
EnvironmentName: xxxx-dev
PlatformArn: arn:aws:elasticbeanstalk:xxxx::platform/Python 3.8 running on 64bit
  Amazon Linux 2/3.3.9
settings:
  aws:autoscaling:launchconfiguration:
    BlockDeviceMappings: null
    DisableIMDSv1: xxxx
    EC2KeyName: xxxx
    IamInstanceProfile: xxxx
    ImageId: xxxx
    InstanceType: t2.small
    MonitoringInterval: 5 minute
    SSHSourceRestriction: xxxx
    SecurityGroups: xxxx
  aws:ec2:instances:
    EnableSpot: 'false'
    InstanceTypes: t2.small, t2.medium
    SpotFleetOnDemandAboveBasePercentage: '70'
    SpotFleetOnDemandBase: '0'
    SpotMaxPrice: null
    SupportedArchitectures: x86_64
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Rolling
  aws:elasticbeanstalk:container:python:
    NumProcesses: '1'
    NumThreads: '15'
    WSGIPath: application
  aws:elasticbeanstalk:control:
    DefaultSSHPort: '22'
    LaunchTimeout: '0'
    LaunchType: Migration
    RollbackLaunchOnFailure: 'false'
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced
    LoadBalancerIsShared: 'false'
    LoadBalancerType: application

CodePudding user response:

Look into the backend CloudFormation stack to check why the configuration update failed. By the looks of it, it seems the wait condition was not able to receive the signal in time.
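
One way to do that is to list the stack's events with the AWS CLI and filter for the first failure; the stack name below is the redacted awseb-x-xxxx-xxxx from the eb events output above:

# show which resource failed during the update, and why
aws cloudformation describe-stack-events --stack-name awseb-x-xxxx-xxxx \
    --query "StackEvents[?ResourceStatus=='UPDATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" \
    --output table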

This can occur if the instances do not have connectivity to AWS, or if the application deployment on the new instance takes longer than the time configured for the deployment to complete.

Check whether you are using any lifecycle hooks as well; they might also be causing the issue.
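
For example, assuming the Auto Scaling group name from the error in your events, you can list any hooks attached to the group with:

# an empty LifecycleHooks list means no hooks are involved
aws autoscaling describe-lifecycle-hooks \
    --auto-scaling-group-name awseb-e-xxxx-xxxx-AWSEBAutoScalingGroup-xxxx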

You can also look into the instance logs. If the issue is with the signal, check the cfn-wire.log file.
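
For example (log paths as on Amazon Linux 2 Elastic Beanstalk instances; adjust if your platform differs):

# pull the complete log bundle locally via the EB CLI
eb logs --all
# or SSH to the instance and read the relevant logs directly
eb ssh
less /var/log/cfn-wire.log
less /var/log/eb-engine.log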

CodePudding user response:

So, to resolve the issue, based on the input above:

  1. I deleted the .elasticbeanstalk folder in my current directory.
  2. eb init (set up as desired)
  3. eb create (set up as desired)
  4. eb config (set up as I wished in the question)

This worked the first time.
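
As a rough sketch of those steps in shell form (eb init and eb create are interactive, and the names/region are whatever you choose):

# remove the stale EB CLI state for this directory
rm -rf .elasticbeanstalk
# re-initialize the application (prompts for region, platform, keypair, etc.)
eb init
# create a fresh environment
eb create
# then set InstanceType / InstanceTypes to the t3 values as described above
eb config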
