I recently created an EC2 instance and a 30GB EBS (gp3) volume. The logic to attach and mount the volume is in the EC2 instance's user-data script (see below). The issue I am experiencing is that after several days the mounted EBS volume gets unmounted on its own. The EBS volume is still attached to the EC2 instance, though.
This is my user-data script (simplified):
#!/bin/bash -xe
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
MONGO_DATA_DIR="/data"
EBS_DEVICE_NAME="nvme1n1"
VOLUME="/dev/$EBS_DEVICE_NAME"
sudo yum install -y mongodb-org
# attach EBS volume
EC2_INSTANCE_ID="$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)"
aws ec2 attach-volume --volume-id ${volumeId} --instance-id $EC2_INSTANCE_ID --device /dev/sdh
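# NOTE: ${volumeId} is assumed to be substituted by the provisioning template.
# On Nitro-based instances a volume attached as /dev/sdh shows up in the OS as an
# NVMe device (here /dev/nvme1n1), which is why EBS_DEVICE_NAME is set to nvme1n1.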
# Create a file system on the EBS volume only if it does not already have one
FileSystemType="$(lsblk -f | grep $EBS_DEVICE_NAME | awk '{print $2}')"
if [ -z "$FileSystemType" ]; then
  sudo mkfs -t xfs /dev/$EBS_DEVICE_NAME
fi
# create mount point & mount volume disk if needed
if [ ! -d "$MONGO_DATA_DIR" ]; then
echo "$MONGO_DATA_DIR does not exist."
sudo mkdir -p $MONGO_DATA_DIR
sudo mount /dev/$EBS_DEVICE_NAME $MONGO_DATA_DIR
# change permissions to the data dir so it is owned by the mongod user
sudo chown -R mongod $MONGO_DATA_DIR
fi
sudo systemctl start mongod
sudo systemctl enable mongod
Since the EBS volume mounted at /data stores our MongoDB database, the MongoDB service stops working when it gets unmounted.
What could be causing this automatic unmount behaviour, and what would be the best way to resolve the issue?
Update
After adding the below changes to /etc/fstab, the issue was fixed:
if [ ! -d "$MONGO_DATA_DIR" ]; then
  ...
  # make EBS mount permanent
  EBS_DEVICE_ID="$(sudo blkid -o value -s UUID /dev/$EBS_DEVICE_NAME)"
  sudo su -c "echo 'UUID=$EBS_DEVICE_ID /data xfs defaults,nofail 0 2' >> /etc/fstab"
fi
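A quick way to validate the new entry without waiting for the next reboot (a minimal check, assuming the volume is currently mounted at /data) is to unmount it and let mount -a pick it back up from /etc/fstab:

# stop mongod so /data can be unmounted cleanly
sudo systemctl stop mongod
sudo umount /data
# mount everything listed in /etc/fstab; an error here points to a bad entry
sudo mount -a
sudo systemctl start mongod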
Answer:
It sounds like your server is probably rebooting for some reason, and since you haven't modified /etc/fstab, the volume won't be remounted automatically on boot. By default the user-data script only runs on the first boot of the instance, so the mount command is never re-run after a restart.
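To confirm the reboot theory, a quick check (assuming a standard Amazon Linux image with systemd) is to look at the uptime and reboot history on the instance:

# how long the instance has been up since the last (re)boot
uptime
# recent reboot records from the wtmp log
last reboot | head
# boots recorded by systemd-journald, if persistent journaling is enabled
journalctl --list-boots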