I am currently trying to use VPA for my AWS EKS Fargate application. After setting up all the configuration, the Fargate pod fails to autoscale. Does anyone know what I have done wrong?
By the way, I deployed core-dns, metrics-server, vpa-admission-controller, vpa-recommender, and vpa-updater on a node group (EC2 instances), while vpa-test was deployed to Fargate. Could this be the problem?
Thank you.
My deploy.yaml file:
apiVersion: "autoscaling.k8s.io/v1"
kind: VerticalPodAutoscaler
metadata:
  name: vpa
  namespace: k8s-fargate
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vpa-test
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: jboss-api
        minAllowed:
          cpu: 200m
        maxAllowed:
          cpu: 1
        controlledResources: ["cpu"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: k8s-fargate
  name: vpa-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vpa-test
  template:
    metadata:
      labels:
        app: vpa-test
    spec:
      volumes:
        - name: properties
          configMap:
            name: properties
      containers:
        - name: jboss-api
          image: amazonaws.com/imageName:1.1.3
          command: ["/bin/sh"]
          args:
            - "-c"
            - "while true; do timeout 15s yes >/dev/null; done"
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
          resources:
            requests:
              memory: "1024Mi"
              cpu: "200m"
            limits:
              memory: "2500Mi"
              cpu: "1000m"
          volumeMounts:
            - name: properties
              mountPath: "/path/"
              readOnly: false
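For reference, the outputs below were collected after applying this manifest, roughly like this:

kubectl apply -f deploy.yaml
kubectl describe pod podName -n k8s-fargate
kubectl describe vpa -n k8s-fargate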
Result of kubectl describe pod podName:

Name:                 vpa-test-79c9fc869f-p8tm9
Namespace:            k8s-fargate
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 fargate-ip-10-0-128-246.compute.internal/10.0.128.246
Start Time:           Wed, 10 Aug 2022 16:47:50 +0800
Labels:               app=vpa-test
                      eks.amazonaws.com/fargate-profile=k8s-fargate
                      pod-template-hash=79c9fc869f
Annotations:          CapacityProvisioned: 0.25vCPU 2GB
                      Logging: LoggingDisabled: LOGGING_CONFIGMAP_NOT_FOUND
                      kubernetes.io/psp: eks.privileged
                      vpaObservedContainers: jboss-api
                      vpaUpdates: Pod resources updated by vpa: container 0: cpu request, cpu limit
Status:               Running
IP:                   10.0.128.246
IPs:
  IP:  10.0.128.246
Controlled By:  ReplicaSet/vpa-test-79c9fc869f
Containers:
  jboss-api:
    Image:       dkr.ecr.amazonaws.com/api:1.1.3
    Port:        8443/TCP
    Host Port:   0/TCP
    Command:
      /bin/sh
    Args:
      -c
      while true; do timeout 15s yes >/dev/null; done
    State:          Running
      Started:      Wed, 10 Aug 2022 16:48:25 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1355m
      memory:  2500Mi
    Requests:
      cpu:     271m
      memory:  1Gi
    Environment:  <none>
    Mounts:
      /usr/local (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bts2q (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  ica-co1-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      properties
    Optional:  false
  kube-api-access-bts2q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason           Age    From               Message
  ----     ------           ----   ----               -------
  Warning  LoggingDisabled  2m38s  fargate-scheduler  Disabled logging because aws-logging configmap was not found. configmap "aws-logging" not found
  Normal   Scheduled        105s   fargate-scheduler  Successfully assigned k8s-fargate/vpa-test-79c9fc869f-p8tm9 to fargate-ip-10-0-128-246.compute.internal
  Normal   Pulling          104s   kubelet            Pulling image "dkr.ecr.amazonaws.com/api:1.1.3"
  Normal   Pulled           71s    kubelet            Successfully pulled image "dkr.ecr.amazonaws.com/api:1.1.3" in 33.012615032s
  Normal   Created          70s    kubelet            Created container jboss-api
  Normal   Started          70s    kubelet            Started container jboss-api
Result of kubectl describe vpa:

Name:         vpa
Namespace:    k8s-fargate
Labels:       <none>
Annotations:  <none>
API Version:  autoscaling.k8s.io/v1
Kind:         VerticalPodAutoscaler
Metadata:
  Creation Timestamp:  2022-08-10T08:45:53Z
  Generation:          4
  Managed Fields:
    API Version:  autoscaling.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:resourcePolicy:
          .:
          f:containerPolicies:
        f:targetRef:
          .:
          f:apiVersion:
          f:kind:
          f:name:
        f:updatePolicy:
          .:
          f:updateMode:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-08-10T08:45:53Z
    API Version:  autoscaling.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:recommendation:
          .:
          f:containerRecommendations:
    Manager:         recommender
    Operation:       Update
    Time:            2022-08-10T08:45:59Z
  Resource Version:  112490
Spec:
  Resource Policy:
    Container Policies:
      Container Name:  jboss-api
      Controlled Resources:
        cpu
      Max Allowed:
        Cpu:  1
      Min Allowed:
        Cpu:  200m
  Target Ref:
    API Version:  apps/v1
    Kind:         Deployment
    Name:         vpa-test
  Update Policy:
    Update Mode:  Auto
Status:
  Conditions:
    Last Transition Time:  2022-08-10T08:45:59Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  jboss-api
      Lower Bound:
        Cpu:  267m
      Target:
        Cpu:  271m
      Uncapped Target:
        Cpu:  271m
      Upper Bound:
        Cpu:  1
Events:  <none>
Answer:
Your pod has been assigned 0.5 vCPU (500m) and the recommendation (271m) is well below that, so the VPA has no reason to scale it up. Try stressing your pod with: while :; do yes > /dev/null; done
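A minimal sketch of how to do that without redeploying, assuming the deployment, container name, and namespace from the question (vpa-test, jboss-api, k8s-fargate):

# exec a sustained busy loop into the running container to push CPU usage up
kubectl exec -n k8s-fargate deploy/vpa-test -c jboss-api -- /bin/sh -c 'while :; do yes > /dev/null; done'

# in a second terminal, watch the recommendation and the pod
kubectl describe vpa vpa -n k8s-fargate
kubectl get pods -n k8s-fargate -w

In "Auto" mode the updater has to evict the pod before a new request can be applied, so the signal that scaling happened is the pod being recreated with a higher CPU request (and Fargate provisioning a larger capacity for it).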