Here is the project structure I'm trying to understand:
charts/
  spark-0.0.1-100.tgz
templates/
Chart.yaml
values.yaml
Chart.yaml:

appVersion: 0.1.0
dependencies:
  - name: spark
    version: "0.0.1-100"
    repository: https://helm.<corporation>.com/<project>
    condition: spark.enabled
values.yaml (some values are omitted for simplicity):

spark:
  enabled: true
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/spark-service-account
  image:
    tag: "3.3.0-dev-28"
  extraEnv:
    - name: AWS_STS_REGIONAL_ENDPOINTS
      value: regional
  Master:
    Requests:
      Cpu: ...
      Memory: ...
      Disk: ...
    Limits:
      Cpu: ...
      Memory: ...
      Disk: ...
  Worker:
    Replicas: 3
    Requests:
      Cpu: ...
      Memory: ...
      Disk: ...
    Limits:
      Cpu: ...
      Memory: ...
      Disk: ...
zookeeper:
  host: "project-zookeeper"
  port: 2181
Then I unpacked charts/spark-0.0.1-100.tgz into the folder charts/spark/:
charts/
  spark/
    templates/
    Chart.yaml
    values.yaml
charts/spark/values.yaml:
global:
  aci:
    sdrAppname: spark
image:
  repository: "docker.<corporation>.com/<project>/spark"
  tag: "1.0.1"
spark:
  path: "/opt/spark"
  user: 1000
  group: 1000
  config: |
    SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=21600"
Master:
  Name: master
  Replicas: 1
  Component: "spark-core"
  Requests:
    Cpu: ...
    Memory: ...
    Disk: ...
  Limits:
    Cpu: ...
    Memory: ...
    Disk: ...
  ServicePort: <port>
  ContainerPort: <port>
  RestEnabled: "true"
  RestPort: <port>
  ServiceType: LoadBalancer
WebUi:
  Name: webui
  ServicePort: <port>
  ContainerPort: <port>
The question is: how do the values in values.yaml and in charts/spark/values.yaml correspond to each other? Are the values from the root values.yaml replaced by the values from charts/spark/values.yaml?

Thank you in advance.
CodePudding user response:
The question is: how do the values in values.yaml and in charts/spark/values.yaml correspond to each other?
The short version is that they have almost nothing to do with one another; they are just the defaults used by their respective charts. The medium version is that the outer chart can supersede values in the subordinate chart if it chooses to, but ultimately the user has the final word in that discussion, because values passed via helm --values win out over the defaults (the same goes for --set, but --values is far easier to discuss, since it avoids delving into the --set DSL).
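To make the layering concrete, here is a minimal sketch of the three layers using the image.tag from your own files; the user-values.yaml file and its "3.4.0-test" value are hypothetical, purely for illustration:

# 1. Lowest precedence: the subchart's own defaults (charts/spark/values.yaml)
image:
  tag: "1.0.1"

# 2. Overlaid by the parent chart's values.yaml, under its spark: key
spark:
  image:
    tag: "3.3.0-dev-28"

# 3. Highest precedence: a file supplied by the user, e.g.
#    helm install <release> <chart> --values user-values.yaml
#    (hypothetical file and value)
spark:
  image:
    tag: "3.4.0-test"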
The subcharts are not aware of the parent chart's defaults. The parent chart doesn't have to be aware of the child chart's values, and cannot itself refer to the subchart's defaults.
Are the values from the root values.yaml replaced by the values from charts/spark/values.yaml?
For the most part, no: they're completely separate namespaces. However, as we saw above, for every - name: entry in the dependencies: list, the matching top-level key in the parent chart's values.yaml becomes special, in that Helm overlays its contents on top of the subordinate chart's defaults (you can see it in your example with { spark: { path: "/opt/spark" } }: that top-level spark: key matches up with - name: spark in the dependencies: list).
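Assuming standard Helm behavior, here is a rough sketch of what the spark subchart would end up seeing as its own .Values after that overlay (abbreviated; the other keys from the parent's spark: block, such as enabled and serviceAccount, are merged in the same way):

# the merged view inside the spark subchart (illustrative, abbreviated)
image:
  repository: "docker.<corporation>.com/<project>/spark"   # subchart default, untouched
  tag: "3.3.0-dev-28"                                       # overridden by the parent's spark.image.tag
spark:
  path: "/opt/spark"                                        # subchart default, untouched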
It's kind of like "duck typing", though, because the top-level key is free to contain any arbitrary structure, and the child chart will only use the parts it is aware of. For example:

# values.yaml
spark:
  path: /opt/spark
  value-of-pi: 3.1415

is perfectly legal in the top chart's values.yaml, even though the child spark chart will only notice the { spark: { path: "" } } part, because it does not reference {{ .Values.value-of-pi }} anywhere in its templates.
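To make that concrete, here is a hypothetical template fragment (not taken from the real chart) showing why the extra key is harmless: the subchart's templates only look up the keys they were written against and never mention anything like value-of-pi:

# charts/spark/templates/configmap.yaml -- hypothetical, for illustration only
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-spark-config
data:
  # the chart reads the keys it knows about...
  SPARK_HOME: {{ .Values.spark.path | quote }}
  # ...and since nothing here references .Values.value-of-pi,
  # an extra key overlaid by the parent is silently ignored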
But, again, for clarity: the top-level values.yaml, even if it includes spark: { path: /alpha }, can be superseded by the user with --values <(echo '{spark: {path: /beta } }'), and the resulting spark install will have path: /beta when that chart is templated out.
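If the inline process substitution looks odd, the same override can live in an ordinary file; a minimal sketch, where the name user-values.yaml is arbitrary:

# user-values.yaml -- equivalent to the --values <(echo ...) example above
spark:
  path: /beta

# then, at install/upgrade time (user-supplied values take the highest precedence):
#   helm upgrade --install <release> <chart> --values user-values.yaml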