I have an on-premise Kubernetes cluster with 1 master and 2 worker nodes. There are two address ranges, and the master and worker nodes have IPs in both nets: master (192.168.0.10 and 192.168.1.10), node1 (192.168.0.11 and 192.168.1.11), node2 (192.168.0.12 and 192.168.1.12). I can ping each node from every other node on either IP address, and I can also ping all addresses 192.168.0.x and 192.168.1.x from an external network.
The address range named "intnet" is 192.168.0.200-192.168.0.250 and the address range named "extnet" is 192.168.1.200-192.168.1.250. Services requiring IPs from extnet stall in the pending state.
My MetalLB address pool config is as follows:
address-pools:
- name: intnet
  protocol: layer2
  addresses:
  - 192.168.0.200-192.168.0.250
address-pools:
- name: extnet
  protocol: layer2
  addresses:
  - 192.168.1.200-192.168.1.250
My service.yml looks like the following:
apiVersion: v1
kind: Service
metadata:
  name: sth1
  annotations:
    metallb.universe.tf/address-pool: extnet
spec:
  selector:
    app: local-web
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: LoadBalancer
The service never gets an external IP address; kubectl get svc shows "pending" forever. When I use "intnet" instead, the service quickly gets an IP address from intnet.
Can anyone help? What am I missing here?
CodePudding user response:
Guided by the official documentation on this topic, you have to fix the MetalLB configuration by removing the second address-pools: key, so that both pools are defined under a single list:
address-pools:
- name: intnet
  protocol: layer2
  addresses:
  - 192.168.0.200-192.168.0.250
- name: extnet
  protocol: layer2
  addresses:
  - 192.168.1.200-192.168.1.250
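For context, here is a minimal sketch of how that single pool list could sit inside the MetalLB ConfigMap, assuming a MetalLB version that is still configured through the config ConfigMap in the metallb-system namespace (newer releases use IPAddressPool custom resources instead); the file name metallb-config.yml is just an example:

# metallb-config.yml (sketch, assuming ConfigMap-based MetalLB configuration)
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    # one address-pools list containing both pools
    address-pools:
    - name: intnet
      protocol: layer2
      addresses:
      - 192.168.0.200-192.168.0.250
    - name: extnet
      protocol: layer2
      addresses:
      - 192.168.1.200-192.168.1.250

After applying the change (e.g. kubectl apply -f metallb-config.yml) and, if necessary, restarting the MetalLB controller pod, the sth1 service should pick up an address from the extnet pool, since the metallb.universe.tf/address-pool: extnet annotation now matches an existing pool.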