<p>I am able to create multiple clusters locally using minikube but unable to find a way to run them simultaneously.</p>
<p>I created 2 clusters using the below command.</p>
<pre><code>minikube start --profile <clusterName>
</code></pre>
<p>Then I switched to a particular cluster:</p>
<pre><code>kubectl config use-context <clusterName>
</code></pre>
<p>When I tried to run another cluster by opening a new terminal, the context switched to the new cluster and the previously running cluster stopped.</p>
<p>Is there any possible way to run multiple clusters simultaneously using <strong>minikube</strong>?</p>
| <p>I think it has stopped running because it might be using the same port. Have you checked it with <code>docker ps</code>?</p>
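<p>As a rough sketch (assuming the docker driver), the two profiles can run side by side; switching the kubectl context only changes which cluster kubectl talks to, it does not stop the other profile:</p>
<pre><code>minikube start --profile clusterA
minikube start --profile clusterB
minikube profile list                               # both profiles should show Running
docker ps --format "table {{.Names}}\t{{.Ports}}"   # check for host port clashes between the profiles
kubectl config use-context clusterA                 # talk to clusterA
kubectl config use-context clusterB                 # clusterA keeps running in the background
</code></pre>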
|
<p>I am experiencing a lot of CPU throttling (see nginx graph below, other pods often 25% to 50%) in my Kubernetes cluster (k8s v1.18.12, running 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64 GNU/Linux).</p>
<p>Due to backports, I do not know whether my cluster contains the Linux kernel bug described in <a href="https://lkml.org/lkml/2019/5/17/581" rel="nofollow noreferrer">https://lkml.org/lkml/2019/5/17/581</a>. How can I find out? Is there a simple way to check or measure?</p>
<p>If I have the bug, what is the best approach to get the fix? Or should I mitigate otherwise, e.g. not use CFS quota (<code>--cpu-cfs-quota=false</code> or no CPU limits) or reduce <code>cfs_period_us</code> and <code>cfs_quota_us</code>?</p>
<p>CPU Throttling Percentage for nginx (scaling horizontally around 15:00 and removing CPU limits around 19:30): <a href="https://i.stack.imgur.com/M1oi3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M1oi3.png" alt="enter image description here" /></a></p>
| <p>Recently I have been debugging this CPU throttling issue, and with the following 5 tests I was able to reproduce the bug on an affected kernel (Linux version 4.18.0-041800rc4-generic).</p>
<p>The test case runs for 5000 ms with a 100 ms CFS period, i.e. 50 periods, and is intended to hit close to 100% throttling. A kernel without this bug should report a CPU usage of about 500 ms.</p>
<p>You can try these tests to check whether your kernel is affected by excessive throttling.</p>
<p>[Multi Thread Test 1]</p>
<pre><code>./runfibtest 1; ./runfibtest
From <https://github.com/indeedeng/fibtest>
</code></pre>
<p>[Result]</p>
<pre><code>Throttled
./runfibtest 1
Iterations Completed(M): 465
Throttled for: 52
CPU Usage (msecs) = 508
./runfibtest 8
Iterations Completed(M): 283
Throttled for: 52
CPU Usage (msecs) = 327
</code></pre>
<p>[Multi Thread Test 2]</p>
<pre><code>docker run -it --rm --cpu-quota 10000 --cpu-period 100000 hipeteryang/fibtest:latest /bin/sh -c "runfibtest 8 && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"
</code></pre>
<p>[Result]</p>
<pre><code>Throttled
Iterations Completed(M): 192
Throttled for: 53
CPU Usage (msecs) = 227
nr_periods 58
nr_throttled 56
throttled_time 10136239267
267434463
209397643 2871651 8044402 4226146 5891926 5532789 27939741 4104364
</code></pre>
<p>[Multi Thread Test 3]</p>
<pre><code>docker run -it --rm --cpu-quota 10000 --cpu-period 100000 hipeteryang/stress-ng:cpu-delay /bin/sh -c "stress-ng --taskset 0 --cpu 1 --timeout 5s & stress-ng --taskset 1-7 --cpu 7 --cpu-load-slice -1 --cpu-delay 10 --cpu-method fibonacci --timeout 5s && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"
</code></pre>
<p>Result</p>
<pre><code>Throttled
nr_periods 56
nr_throttled 53
throttled_time 7893876370
379589091
330166879 3073914 6581265 724144 706787 5605273 29455102 3849694
</code></pre>
<p>For the following Kubernetes tests, you can use <code>kubectl logs pod-name</code> to get the result once the job is done.</p>
<p>[Multi Thread Test 4]</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: fibtest
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: fibtest
        image: hipeteryang/fibtest
        command: ["/bin/bash", "-c", "runfibtest 8 && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"]
        resources:
          requests:
            cpu: "50m"
          limits:
            cpu: "100m"
      restartPolicy: Never
</code></pre>
<p>Result</p>
<pre><code>Throttled
Iterations Completed(M): 195
Throttled for: 52
CPU Usage (msecs) = 230
nr_periods 56
nr_throttled 54
throttled_time 9667667360
255738571
213621103 4038989 2814638 15869649 4435168 5459186 4549675 5437010
</code></pre>
<p>[Multi Thread Test 5]</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: stress-ng-test
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: stress-ng-test
        image: hipeteryang/stress-ng:cpu-delay
        command: ["/bin/bash", "-c", "stress-ng --taskset 0 --cpu 1 --timeout 5s & stress-ng --taskset 1-7 --cpu 7 --cpu-load-slice -1 --cpu-delay 10 --cpu-method fibonacci --timeout 5s && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"]
        resources:
          requests:
            cpu: "50m"
          limits:
            cpu: "100m"
      restartPolicy: Never
</code></pre>
<p>Result</p>
<pre><code>Throttled
nr_periods 53
nr_throttled 50
throttled_time 6827221694
417331622
381601924 1267814 8332150 3933171 13654620 184120 6376208 2623172
</code></pre>
<p>Feel free to leave any comment, I’ll reply as soon as possible.</p>
|
<p>As per the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">Kubernetes docs</a>, to reference an env variable in config you use the expression $(ENV_VAR).<br />
But this is not working for me.
In the readiness and liveness probe API, I am getting the token value as <strong>$(KUBERNETES_AUTH_TOKEN)</strong> instead of <strong>abcdefg</strong>.</p>
<pre><code>containers:
- env:
  - name: KUBERNETES_AUTH_TOKEN
    value: "abcdefg"
  lifecycle:
    preStop:
      exec:
        command:
        - "sh"
        - "-c"
        - |
          curl -H "Authorization: $(KUBERNETES_AUTH_TOKEN)" -H "Content-Type: application/json" -X GET http://localhost:9443/pre-stop
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /status
      port: 9853
      scheme: HTTP
      httpHeaders:
      - name: Authorization
        value: $(KUBERNETES_AUTH_TOKEN)
  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /status
      port: 9853
      scheme: HTTP
      httpHeaders:
      - name: Authorization
        value: $(KUBERNETES_AUTH_TOKEN)
</code></pre>
| <p>There is an issue opened about a very similar question <a href="https://github.com/kubernetes/kubernetes/issues/40846" rel="nofollow noreferrer">here</a></p>
<p>Anyway, I want to share some YAML lines.<br />
This is not intended for a production environment; it's obviously just to play around with the commands.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: ngtest
name: ngtest
spec:
volumes:
- name: stackoverflow-volume
hostPath:
path: /k8s/stackoverflow-eliminare-vl
initContainers:
- name: initial-container
volumeMounts:
- mountPath: /status/
name: stackoverflow-volume
image: bash
env:
- name: KUBERNETES_AUTH_TOKEN
value: "abcdefg"
command:
# this only writes a text file only to show the purpose of initContainers (here you could create a bash script)
- "sh"
- "-c"
- "echo $(KUBERNETES_AUTH_TOKEN) > /status/$(KUBERNETES_AUTH_TOKEN).txt" # withi this line, substitution occurs with $() form
containers:
- image: nginx
name: ngtest
ports:
- containerPort: 80
env:
- name: KUBERNETES_AUTH_TOKEN
value: "abcdefg"
volumeMounts:
- mountPath: /status/
name: stackoverflow-volume
lifecycle:
preStop:
exec:
command:
- "sh"
- "-c"
- |
echo $KUBERNETES_AUTH_TOKEN > /status/$KUBERNETES_AUTH_TOKEN.txt &&
echo 'echo $KUBERNETES_AUTH_TOKEN > /status/anotherFile2.txt' > /status/exec-$KUBERNETES_AUTH_TOKEN-2.sh && . /status/exec-$KUBERNETES_AUTH_TOKEN-2.sh &&
echo 'echo $(KUBERNETES_AUTH_TOKEN) > /status/anotherFile3.txt' > /status/exec-$(KUBERNETES_AUTH_TOKEN)-3.sh && . /status/exec-$(KUBERNETES_AUTH_TOKEN)-3.sh &&
echo 'curl -H "Authorization: $KUBERNETES_AUTH_TOKEN" -k https://www.google.com/search?q=$KUBERNETES_AUTH_TOKEN > /status/googleresult.txt && exit 0' > /status/exec-a-query-on-google.sh && . /status/exec-a-query-on-google.sh
# first two lines are working
# the third one with $(KUBERNETES_AUTH_TOKEN), do not
# last one with a bash script creation works, and this could be a solution
resources: {}
# # so, instead of use http liveness
# livenessProbe:
# failureThreshold: 3
# httpGet:
# path: /status
# port: 80
# scheme: HTTP
# httpHeaders:
# - name: Authorization
# value: $(KUBERNETES_AUTH_TOKEN)
# u can use the exec to call the endpoint from the bash script.
# reads the file of the initContainer (optional)
# here i omitted the url call , but you can copy from the above examples
livenessProbe:
exec:
command:
- sh
- -c
- "cat /status/$KUBERNETES_AUTH_TOKEN.txt && echo -$KUBERNETES_AUTH_TOKEN- `date` top! >> /status/resultliveness.txt &&
exit 0"
initialDelaySeconds: 15
periodSeconds: 5
dnsPolicy: ClusterFirst
restartPolicy: Always
</code></pre>
<p>This creates a pod with a hostPath volume (used only so you can see the output files), where files are created based on the commands in the YAML; see the comments in the YAML for more details.
If you go onto your cluster machine, you can view the files that were produced.</p>
<p>Anyway, you should use ConfigMaps, Secrets and/or <a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/values_files/</a>, which let you create your own charts and separate your config values from the YAML templates.</p>
<p>Hopefully, it helps.</p>
<p>Ps. This is my first answer on StackOverflow, please don't be too rude with me!</p>
|
<p>I need to create Kubernetes services whose nodePort is auto-allocated by K8S, while port/targetPort must be equal to the nodePort. (The requirement comes from the spec of a Spark YARN node as the backend of the services.)
Maybe I can first create the service with a fixed dummy port/targetPort and an auto-allocated nodePort, then update the service to set port/targetPort to the same value as the nodePort.
But is there a better way of doing this?</p>
| <p>There are two main ways to expose a resource on k8s.</p>
<p>The first is the <code>kubectl expose</code> command: with it you can choose the pod/deployment to expose, but not the nodePort value. Then, as you already know, you must set the nodePort value in the created YAML.</p>
<p>The other way is the <code>kubectl create service nodeport</code> command: with it you can set the port, targetPort and nodePort.</p>
<p>If you know the label of the pod to expose (for example <code>app: superPod</code>), you can create a file using a placeholder name (for example <code>TOREPLACE</code>) and your chosen port (for example 30456), then replace the placeholder with your selector:</p>
<p>On linux:</p>
<pre class="lang-bash prettyprint-override"><code>portValue=30456 && k create service nodeport TOREPLACE \
--node-port=$portValue --tcp=$portValue --dry-run=client -oyaml > file.yaml \
&& sed -i 's/app: TOREPLACE/app: yourselector/g' file.yaml \
&& sed -i 's/name: TOREPLACE/name: yourselector-name/g' file.yaml
</code></pre>
<p>This creates the file with your preferred values.</p>
<p>After that, you can apply the file using <code>kubectl apply -f file.yaml</code></p>
<p>However, depending on your needs, and if you want a reliable customizations of your resources, you could try to use:</p>
<p><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/</a>
or
<a href="https://helm.sh/docs/" rel="nofollow noreferrer">https://helm.sh/docs/</a></p>
<p>Hope it helps.</p>
|
<p>I'm creating a kind cluster with <code>kind create cluster --name kind</code> and I want to access it from another docker container but when I try to apply a Kubernetes file from a container (<code>kubectl apply -f deployment.yml</code>) I got this error:</p>
<pre><code>The connection to the server 127.0.0.1:6445 was refused - did you specify the right host or port?
</code></pre>
<p>Indeed when I try to curl kind control-plane from a container, it's unreachable.</p>
<pre><code>> docker run --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
curl: (7) Failed to connect to 127.0.0.1 port 6445 after 0 ms: Connection refused
</code></pre>
<p>However, the kind control-plane is publishing the right port, but only on localhost.</p>
<pre><code>> docker ps --format "table {{.Image}}\t{{.Ports}}"
IMAGE PORTS
kindest/node:v1.23.4 127.0.0.1:6445->6443/tcp
</code></pre>
<p>Currently the only solution I found is to set the host network mode.</p>
<pre><code>> docker run --network host --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
Client sent an HTTP request to an HTTPS server.
</code></pre>
<p>This solution doesn't look like the most secure one. Is there another way, like connecting the kind network to my container, or something similar that I missed?</p>
| <p>Don't have enough rep to comment on the other answer, but wanted to comment on what ultimately worked for me.</p>
<h2>Takeaways</h2>
<ul>
<li>Kind cluster running in its own bridge network <code>kind</code></li>
<li>Service with kubernetes client running in another container with a mounted kube config volume</li>
<li>As described above the containers need to be in the same network unless you want your service to run in the <code>host</code> network.</li>
<li>The server address for the kubeconfig is the container name + internal port, e.g. <code>kind-control-plane:6443</code>. The port is the internal container port: in the example below it is <code>6443</code>, <strong>NOT</strong> the exposed port <code>38669</code>
<pre><code>CONTAINER ID IMAGE PORTS
7f2ee0c1bd9a kindest/node:v1.25.3 127.0.0.1:38669->6443/tcp
</code></pre>
</li>
</ul>
<h2>Kube config for the container</h2>
<pre class="lang-yaml prettyprint-override"><code># path/to/some/kube/config
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true # Don't use in Prod equivalent of --insecure on cli
server: https://<kind-control-plane container name>:6443 # NOTE port is internal container port
name: kind-kind # or whatever
contexts:
- context:
cluster: kind-kind
user: <some-service-account>
name: kind-kind # or whatever
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: <some-service-account>
user:
token: <TOKEN>
</code></pre>
<h2>Docker container stuff</h2>
<ul>
<li><p>If using docker-compose you can add the kind network to the container such as:</p>
<pre class="lang-yaml prettyprint-override"><code>#docker-compose.yml
services:
foobar:
build:
context: ./.config
networks:
- kind # add this container to the kind network
volumes:
- path/to/some/kube/config:/somewhere/in/the/container
networks:
kind: # define the kind network
external: true # specifies that the network already exists in docker
</code></pre>
</li>
<li><p>If running a new container:</p>
<pre class="lang-bash prettyprint-override"><code>docker run --network kind -v path/to/some/kube/config:/somewhere/in/the/container <image>
</code></pre>
</li>
<li><p>Container already running?</p>
<pre class="lang-bash prettyprint-override"><code>docker network connect kind <container name>
</code></pre>
</li>
</ul>
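<p>To verify the wiring (paths and names here match the examples above; this assumes <code>curl</code>/<code>kubectl</code> are available inside your client container):</p>
<pre class="lang-bash prettyprint-override"><code>docker network inspect kind            # your container should now appear in the kind network
# from inside the client container: an HTTP(S) response (even 401/403) means connectivity is fine
curl -k https://kind-control-plane:6443/version
kubectl --kubeconfig /somewhere/in/the/container get nodes
</code></pre>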
|
<p>Running <code>kubectl version</code> gives:</p>
<pre><code>/home/lenovo/.local/bin/kubectl: line 1: syntax error near unexpected token `<'
/home/lenovo/.local/bin/kubectl: line 1: `<Code>NoSuchKey</Code>The specified key does not exist.No such object: kubernetes-release/release//bin/linux/amd64/kubectl'
</code></pre>
| <p>I cleared kubectl from <code>/usr/local/bin</code></p>
<p>and also from <code>/home/$USER/.local/bin</code></p>
<p>And run the commands below:</p>
<pre class="lang-bash prettyprint-override"><code>curl -LO "https://dl.k8s.io/release/v1.24.7/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/v1.24.7/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
sudo install -o root -g root -m 0755 kubectl /home/$USER/.local/bin/kubectl
</code></pre>
|
<p>I am trying to deploy a Spring api-gateway app to GKE using Jenkins and a Helm chart.</p>
<p>For the RoleBinding I have this manifest:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-gateway-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: api-gateway-cr
rules:
- apiGroups: [""]
  resources: [ "services", "pods", "configmaps", "endpoints" ]
  verbs: [ "get", "watch", "list" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: api-gateway-rb
  namespace: default
roleRef:
  kind: ClusterRole
  name: api-gateway-cr
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: api-gateway-sa
</code></pre>
<p>and I deploy with this command:</p>
<p><code>helm upgrade --install -f helm-values/values.stag.common.yaml -f helm-values/values.stag.secret.yaml -f helm-values/values.stag.yaml stag-api-gateway helm-api-gateway || build_error=true</code></p>
<p>But I keep getting this error:</p>
<p><code>Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"</code></p>
<p>What I have already done:</p>
<ul>
<li>Updated the kubernetes version on the jenkins-slave node to 1.20</li>
<li>Made sure that all manifests use rbac.authorization.k8s.io/v1</li>
</ul>
<p>What can I do next?</p>
| <ol>
<li><p>Run the command <code>kubectl version</code> to check the version of Kubernetes you're using. Make sure that you are using the latest kubernetes and <a href="https://github.com/DNXLabs/terraform-aws-eks-cloudwatch-logs/issues/7#issuecomment-1143525388" rel="nofollow noreferrer">Helm version</a> to support <code>rbac.authorization.k8s.io/v1</code>.</p>
</li>
<li><p>Examine the <code>apiVersion</code> field in your manifest to confirm it supports RoleBinding and it corresponds to the version supported by your Kubernetes cluster.</p>
</li>
<li><p>The API version in the error differs from the one in the manifest file. Check whether you ran the command below after updating the version in the manifest file.</p>
</li>
</ol>
<blockquote>
<p><code>kubectl apply -f <file name></code></p>
</blockquote>
<p>Refer to the following official docs <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC Authorization</a>, <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#rbac-resources-v122" rel="nofollow noreferrer">Deprecated API Migration Guide</a> for more information</p>
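<p>As a quick sanity check you can also confirm which RBAC API versions your cluster still serves:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl api-versions | grep rbac.authorization.k8s.io   # on 1.22+ only .../v1 should be listed
kubectl explain rolebinding | head -n 5                  # shows the group/version served for RoleBinding
</code></pre>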
|
<p>I want to autoscale a pod based on how many threads a container uses, so:
if container threads > 10 then scale from 1 -> 2.
Is this possible?</p>
<p>CPU utilization as metric is not going to work unfortunately.</p>
<p>Haven't found anything on this yet</p>
| <p>As mentioned by David Maze, Kubernetes does not track this as a statistic on its own; however, if you have another metrics system linked to the HPA, it should be doable.</p>
<ol>
<li><p>Try to gather metrics on the number of threads used by the container using a monitoring tool such as Prometheus.</p>
</li>
<li><p>Create a custom auto scaling script that checks the thread count using the monitoring tool's API and triggers a scaling event if it reaches 10.</p>
</li>
<li><p>Scale pods using K8S HPA based on a defined metric.</p>
</li>
</ol>
<p>Refer to the doc <a href="https://cloud.google.com/monitoring/custom-metrics" rel="nofollow noreferrer">User-defined metrics overview</a> for more information.</p>
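<p>As a hedged sketch of step 3, assuming your adapter (for example prometheus-adapter) already exposes a per-pod custom metric for thread count (the name <code>container_threads</code> below is an assumption based on the cAdvisor series of the same name), the HPA could look roughly like this:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: thread-based-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                 # hypothetical deployment name
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Pods
    pods:
      metric:
        name: container_threads # assumes the adapter exposes this per-pod metric
      target:
        type: AverageValue
        averageValue: "10"       # scale out once the average thread count per pod exceeds 10
</code></pre>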
|
<p>This works fine</p>
<pre><code>kind create cluster --name newl
kubectl cluster-info --context kind-newl
Kubernetes control plane is running at https://127.0.0.1:33933
CoreDNS is running at https://127.0.0.1:33933/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
</code></pre>
<p>But my multi-node YAML config failed.</p>
<pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
</code></pre>
<p>I got this error:</p>
<pre><code>ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged mmulti-node-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: I0727 10:09:26.061729 249 join.go:413] [preflight] found NodeName empty; using OS hostname as NodeName
I0727 10:09:26.061754 249 joinconfiguration.go:76] loading configuration from "/kind/kubeadm.conf"
I0727 10:09:26.062531 249 controlplaneprepare.go:220] [download-certs] Skipping certs download
I0727 10:09:26.062544 249 join.go:530] [preflight] Discovering cluster-info
I0727 10:09:26.062557 249 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "mmulti-node-control-plane:6443"
</code></pre>
<p>At the end</p>
<pre><code>Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
systemctl status kubelet shows
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2022-07-27 12:17:57 CEST; 9s ago
Docs: https://kubernetes.io/docs/home/
Process: 60094 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, sta>
Main PID: 60094 (code=exited, status=1/FAILURE)
</code></pre>
<p>I thought that it might have something to do with 1.24 dropping dockershim, but I am not sure about that.</p>
<p>How come I can create the cluster in the first case, but it fails in the second?</p>
| <p>The following commands have to be executed, link:
<a href="https://github.com/kubernetes-sigs/kind/issues/2744#issuecomment-1127808069" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kind/issues/2744#issuecomment-1127808069</a></p>
<pre class="lang-bash prettyprint-override"><code>echo fs.inotify.max_user_watches=655360 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_instances=1280 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
</code></pre>
|
<p>kubectl cannot connect to a private GKE cluster that has "Enable control plane global access" turned on. I am getting the error: Unable to connect to the server: dial tcp IP_ADDRESS:443: i/o timeout.</p>
<p>I have checked the cluster configuration and the network settings.</p>
| <p>It looks like kubectl is unable to communicate with the cluster control plane.
Could you check whether the <strong>Control plane authorised networks</strong> setting is enabled or disabled?</p>
<p>If it is enabled, you must edit the settings of your cluster and add your IPaddress/32 to the control plane authorised networks. See <a href="https://i.stack.imgur.com/KuCaK.png" rel="nofollow noreferrer">cluster settings</a>.</p>
<p>When Control plane authorised networks is enabled, the control plane only allows connections from the configured source IP ranges.</p>
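<p>For reference, a sketch of doing the same from the command line (cluster name, zone and IP are placeholders):</p>
<pre class="lang-bash prettyprint-override"><code>curl -s ifconfig.me    # find your current public IP
gcloud container clusters update CLUSTER_NAME \
  --zone=ZONE \
  --enable-master-authorized-networks \
  --master-authorized-networks=YOUR_PUBLIC_IP/32
</code></pre>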
|
<p>I have deployed Nginx Ingress Controller in EKS cluster v1.20.15-eks using helm chart from <a href="https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx/" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx/</a> version 4.4.2</p>
<p>The controller is deployed successfully but when creating Ingress Object I am getting below error.</p>
<pre><code>W0206 09:46:11.909381 8 reflector.go:424] k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0206 09:46:11.909410 8 reflector.go:140] k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
</code></pre>
<p>kubectl version is</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.15-eks-fb459a0", GitCommit:"165a4903196b688886b456054e6a5d72ba8cddf4", GitTreeState:"clean", BuildDate:"2022-10-24T20:31:58Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Can anyone help me with this. Thank you in advance.</p>
| <p>chart version 4.4.2 has the application version of 1.5.1.</p>
<p>Version 1.5.1 of nginx only supports kubernetes versions of 1.25, 1.24, 1.23.
For kubernetes v1.20 the latest supported version was v1.3.1.</p>
<p>The chart version for v1.3.1 is v4.2.5.</p>
<p>The error you are facing is due to nginx not finding v1.EndpointSlice, since EndpointSlice went GA in k8s v1.21, as can be seen <a href="https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/" rel="nofollow noreferrer">here</a>. In previous versions it would be served as alpha/beta, not v1.</p>
<p>Please refer to the <a href="https://github.com/kubernetes/ingress-nginx#supported-versions-table" rel="nofollow noreferrer">table here</a>.</p>
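<p>A possible way to move to the compatible chart version (this assumes the repo is added under the alias <code>ingress-nginx</code>; adjust the release name and namespace to whatever you used):</p>
<pre class="lang-bash prettyprint-override"><code>helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --version 4.2.5 \
  --namespace ingress-nginx
</code></pre>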
|
<p>I'm sorry if it is noob question, but I can't find an explanation for this anywhere.</p>
<p>I need to know which version of the helm server are running on the cluster. In my machine I have the helm client in both versions (helm2 and helm3), when I run the command <code>helm2 version</code> or <code>helm3 version</code>, I'm not sure if I am getting the client or the server version.</p>
<p>For example, on kubectl it describes both client and server version:</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-20T10:09:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.4", GitCommit:"bf9a868e8ea3d3a8fa53
</code></pre>
<p>What could be the best way to determine it?</p>
| <p>Unlike helm v2, helm3 does not have a server component (Tiller), so there is no server version to check.</p>
<p>You can run <code>helm version</code> for helm v2.</p>
<p>Here is the <a href="https://helm.sh/docs/faq/changes_since_helm2/#removal-of-tiller" rel="nofollow noreferrer">tiller removal doc</a>.</p>
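<p>A quick way to tell them apart from the CLI (a sketch, assuming your binaries are named <code>helm2</code> and <code>helm3</code> as in the question):</p>
<pre class="lang-bash prettyprint-override"><code>helm3 version --short    # prints only the client version; there is no server component in v3
helm2 version            # prints both Client (helm) and Server (Tiller) versions
helm2 version --server   # Tiller (server) version only
</code></pre>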
|
<p>When a container image is not present on the cluster the pod fails with the error <code>ErrImageNeverPull</code> but the job never fails. Is there a configuration that I can add to make sure the job fails if the pod startup fails.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: image-not-present
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 120
  template:
    spec:
      serviceAccountName: consolehub
      containers:
      - name: image-not-present
        image: aipaintr/image_not_present:latest
        imagePullPolicy: Never
      restartPolicy: OnFailure
</code></pre>
| <p>You can configure <strong>activeDeadlineSeconds</strong> for this case. However, you have to know how long your job takes to reach the <strong>Complete</strong> status, so that this timeout does not kill your pod while it is still processing.</p>
<p>From the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup" rel="nofollow noreferrer">documents</a>:</p>
<blockquote>
<p>The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded.</p>
</blockquote>
<p>For example: I created a job with a wrong image and <strong>activeDeadlineSeconds</strong>: 100. As expected, the pod got stuck in the <strong>Pending</strong> status because of the wrong image: <a href="https://i.stack.imgur.com/e5y4i.png" rel="nofollow noreferrer">kubectl describe pod</a></p>
<p>After 100 seconds, the Job became <strong>Failed</strong> and the pod was killed as well:
<a href="https://i.stack.imgur.com/XJpT1.png" rel="nofollow noreferrer">kubectl describe job</a></p>
|
<p>I want to remove cpu limits for a specific container where the namespace has default cpu limits (via limitrange).
Per <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/#what-if-you-specify-a-container-s-request-but-not-its-limit" rel="nofollow noreferrer">docs</a> the default value will be assigned.
When I explicitly set limits.cpu to 0, I got an error that tells me requests value 20m should be less than or equal to limits value.</p>
<p>So is it possible to force no limits in that case?</p>
| <p>It seems you can remove the limit by setting it to a <code>null</code> value.
At least it worked from helmfile:</p>
<pre><code>resources:
  requests:
    memory: "100Mi"
    cpu: "100m"
  limits:
    memory: "100Mi"
    cpu: null
</code></pre>
|
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Node IP</th>
<th>Role</th>
<th>OS</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>192.x.x.11</code></td>
<td>Master 1</td>
<td>RHEL8</td>
</tr>
<tr>
<td><code>192.x.x.12</code></td>
<td>Master 2</td>
<td>RHEL8</td>
</tr>
<tr>
<td><code>192.x.x.13</code></td>
<td>Master 3</td>
<td>RHEL8</td>
</tr>
<tr>
<td><code>192.x.x.16</code></td>
<td>VIP</td>
<td></td>
</tr>
</tbody>
</table>
</div><h1>Use-Cases</h1>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>No of Masters Ready or Running</th>
<th>Expected</th>
<th>Actual</th>
</tr>
</thead>
<tbody>
<tr>
<td>3 Masters</td>
<td>Ingress Created with VIP IP and ping to VIP should work</td>
<td>VIP is working</td>
</tr>
<tr>
<td>2 Masters</td>
<td>Ingress Created with VIP IP and ping to VIP should work</td>
<td>VIP is working</td>
</tr>
<tr>
<td>1 Master</td>
<td>Ingress Created with VIP IP and ping to VIP should work</td>
<td>VIP is not working, Kubectl is not responding</td>
</tr>
</tbody>
</table>
</div>
<p>I have created an RKE2 HA cluster with <strong>kube-vip</strong>, and the cluster works fine as long as at least 2 masters are running. But I want to test a use case where only 1 master is available: the VIP should still respond to ping and any ingress created with the VIP address should still work.</p>
<p>In my case, when 2 masters are down I'm facing an issue with the kube-vip-ds pod. When I check its logs using the crictl command I get the error below. Can someone suggest how to resolve this issue?</p>
<pre><code>
E0412 12:32:20.733320 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: etcdserver: request timed out
E0412 12:32:20.733715 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: etcdserver: request timed out
E0412 12:32:25.812202 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
E0412 12:32:25.830219 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
E0412 12:33:27.204128 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-cp-lock)
E0412 12:33:27.504957 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-svcs-lock)
E0412 12:34:29.346104 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-cp-lock)
E0412 12:34:29.354454 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-svcs-lock)
</code></pre>
<p>Thanks.</p>
| <p>Kindly check whether you have a stacked etcd datastore as part of your k8s cluster.
For quorum, etcd requires a majority of its members to be running: a 3-member cluster needs 2 healthy members and can therefore tolerate only one failure. So in your case, with 2 of the 3 masters down, etcd loses quorum and the cluster becomes non-operational.</p>
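<p>The arithmetic behind that, for reference:</p>
<pre><code># quorum = floor(n/2) + 1, failures tolerated = n - quorum
#   1 member  -> quorum 1 -> tolerates 0 failures
#   3 members -> quorum 2 -> tolerates 1 failure
#   5 members -> quorum 3 -> tolerates 2 failures
# With only 1 of 3 masters up, etcd has no quorum, so the kube-apiserver (and the
# kube-vip leader election, which goes through the API) cannot make progress.
</code></pre>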
|
<p>UPDATE:</p>
<p>I have the domain my.shops.de, which internally forwards all traffic to the kubernetes service "shop-service".</p>
<p>On this domain I manage different shops for my clients, e.g. my.shops.de/11111 for the client with id 11111 and my.shops.de/22222 for the client with id 22222.</p>
<p>Now the client with id 11111 has their own domain "example.com", which should be mapped to the "my.shops.de/11111" path.</p>
<p>To achieve this, the client has created a CNAME record with "my.shops.de." as its value.</p>
<p>Now, on my side, I somehow have to tell the ingress: if a request comes in for the host "example.com/(*)", forward it to my internal kubernetes service "shop-service" with the path /11111/(*).</p>
<p>I'm trying to rewrite the path to the backend service with nginx ingress on kubernetes.</p>
<p>I have a service in kubernetes called shop-service.</p>
<p>What I need is this:</p>
<ul>
<li><p><a href="http://example.com" rel="nofollow noreferrer">http://example.com</a> => shop-service/11111</p>
</li>
<li><p><a href="http://example.com/path/bar" rel="nofollow noreferrer">http://example.com/path/bar</a> => shop-service/11111/path/bar</p>
</li>
<li><p><a href="http://example.com/any/path?with=query" rel="nofollow noreferrer">http://example.com/any/path?with=query</a> => shop-service/11111/any/path?with=query</p>
</li>
<li><p><a href="http://other-example.com" rel="nofollow noreferrer">http://other-example.com</a> => shop-service/22222</p>
</li>
<li><p><a href="http://other-example.com/path/bar" rel="nofollow noreferrer">http://other-example.com/path/bar</a> => shop-service/22222/path/bar</p>
</li>
<li><p><a href="http://other-example.com/any/path?with=query" rel="nofollow noreferrer">http://other-example.com/any/path?with=query</a> => shop-service/22222/any/path?with=query</p>
</li>
</ul>
<p>Is this possible nginx ingress?</p>
<p>I've tried it with this code, but it doesn't work.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cname-redirect-ingress
  namespace: shop
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /11111$2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: (/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: shop-service
            port:
              number: 80
---
</code></pre>
| <p>Try something like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cname-redirect-ingress
  namespace: shop
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt"
    acme.cert-manager.io/http01-edit-in-place: "true"
    nginx.ingress.kubernetes.io/server-snippet: |
      if ($host = 'example.com') {
        rewrite ^/(/?)$ https://$host/shop-service/11111$1 permanent;
        rewrite ^/path/bar(/?)$ https://$host/shop-service/11111/path/bar$1 permanent;
        rewrite ^/(/?)$ /shop-service/11111$1 break;
        rewrite ^/path/bar(/?)$ /shop-service/11111$1 break;
      }
      if ($host = 'other.example.com') {
        rewrite ^/(/?)$ https://$host/shop-service/22222$1 permanent;
        rewrite ^/path/bar(/?)$ https://$host/shop-service/22222/path/bar$1 permanent;
        rewrite ^/(/?)$ /shop-service/22222$1 break;
        rewrite ^/path/bar(/?)$ /shop-service/22222$1 break;
      }
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-service
            port:
              number: 80
  - host: other.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-service
            port:
              number: 80
</code></pre>
<p>Use <code>permanent</code> for an explicit redirect, <code>break</code> for an implicit rewrite.<br />
I advise against setting path to anything other than <code>/</code> (unless you have a specific route you want to restrict to) and against using <code>rewrite-target</code>: in my experience, at least on AWS, the former blocks you off from anything above the subpath due to how the AWS load balancer works, and the latter leads to redirect loops.<br />
And yes, having multiple hosts pointing to the same service is perfectly fine.</p>
<p>Be sure to include this if you intend to use HTTPS, since without this you may have redirect loop headache:</p>
<pre><code>spec:
  tls:
  - secretName: shop-tls
    hosts:
    - example.com
    - other.example.com
</code></pre>
<p>If Let's Encrypt fits your needs, be sure to check <a href="https://getbetterdevops.io/k8s-ingress-with-letsencrypt/" rel="nofollow noreferrer">here</a> for the prerequisites of the setup above (if you install everything correctly, it does the rest for you apart from declaring the secret in the ingress manifest). Otherwise you need a compatible SSL certificate that you can put into the secret yourself (AWS ACM, for example, doesn't let you simply export certificates).</p>
|
<p><strong>This is my ingress yaml file</strong></p>
<pre><code> apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
namespace: ingress-nginx
spec:
ingressClassName: nginx
rules:
- host: ticketing.dev
http:
paths:
- pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 5000
path: /
</code></pre>
<h1>Whenever I go to ticketing.dev it shows</h1>
<p><a href="https://i.stack.imgur.com/dV5jW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dV5jW.png" alt="enter image description here" /></a></p>
<p><strong>As all of the services are working as expected</strong>
<a href="https://i.stack.imgur.com/37nWp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/37nWp.png" alt="enter image description here" /></a></p>
<p><strong>All of the pods are also working just fine</strong>
<a href="https://i.stack.imgur.com/sS4gb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sS4gb.png" alt="enter image description here" /></a></p>
<p><strong>Following is my Service and Deployment yaml code</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: 9862672975/auth
        env:
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 5000
    targetPort: 5000
</code></pre>
<p>I am trying to build microservices with Node.js and Next.js. When I add both the frontend and the backend to the ingress, it does not respond; I then tried removing the frontend and running just the backend with the code above, and it is still not working.</p>
| <p>You have not specified a path in your ingress-file, only a pathType.</p>
<p>Below <code>paths</code> you want to add <code>path: "/"</code>.</p>
<p>If you look at the <a href="https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec" rel="nofollow noreferrer">Ingress reference</a>, you may see that the <code>path</code> field is not marked as "required", but with this note:</p>
<blockquote>
<p>Paths must begin with a '/' and must be present when using PathType with value "Exact" or "Prefix".</p>
</blockquote>
<p>Since you have specified your pathType as "Prefix", you need to include the path. In general, I would advise explicitly specifying both path and pathType whenever possible, rather than relying on defaults.</p>
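<p>With the path added, the rule from the question would look like this:</p>
<pre><code>spec:
  ingressClassName: nginx
  rules:
  - host: ticketing.dev
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 5000
</code></pre>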
|
<p>I have a single node kubernetes cluster running in a VM in azure. I have a service running SCTP server in port 38412. I need to expose that port externally. I have tried by changing the port type to NodePort. But no success. I am using flannel as a overlay network. using Kubernetes version 1.23.3.</p>
<p>This is my service.yaml file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: fivegcore
    meta.helm.sh/release-namespace: open5gs
  creationTimestamp: "2022-02-11T09:24:09Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    epc-mode: amf
  name: fivegcore-amf
  namespace: open5gs
  resourceVersion: "33072"
  uid: 4392dd8d-2561-49ab-9d57-47426b5d951b
spec:
  clusterIP: 10.111.94.85
  clusterIPs:
  - 10.111.94.85
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: tcp
    nodePort: 30314
    port: 80
    protocol: TCP
    targetPort: 80
  - name: ngap
    nodePort: 30090
    port: 38412
    protocol: SCTP
    targetPort: 38412
  selector:
    epc-mode: amf
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
</code></pre>
<p>As you can see I changed the port type to NodePort.</p>
<pre><code>open5gs fivegcore-amf NodePort 10.111.94.85 <none> 80:30314/TCP,38412:30090/SCTP
</code></pre>
<p>This is my Configmap.yaml. In this configmap, the ngap dev is the server I want to connect to, and it uses the default eth0 interface in the container.</p>
<pre><code>apiVersion: v1
data:
  amf.yaml: |
    logger:
      file: /var/log/open5gs/amf.log
      #level: debug
      #domain: sbi
    amf:
      sbi:
        - addr: 0.0.0.0
          advertise: fivegcore-amf
      ngap:
        dev: eth0
      guami:
        - plmn_id:
            mcc: 208
            mnc: 93
          amf_id:
            region: 2
            set: 1
      tai:
        - plmn_id:
            mcc: 208
            mnc: 93
          tac: 7
      plmn_support:
        - plmn_id:
            mcc: 208
            mnc: 93
          s_nssai:
            - sst: 1
              sd: 1
      security:
        integrity_order : [ NIA2, NIA1, NIA0 ]
        ciphering_order : [ NEA0, NEA1, NEA2 ]
      network_name:
        full: Open5GS
      amf_name: open5gs-amf0
    nrf:
      sbi:
        name: fivegcore-nrf
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: fivegcore
    meta.helm.sh/release-namespace: open5gs
  creationTimestamp: "2022-02-11T09:24:09Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    epc-mode: amf
</code></pre>
<p>I exec'd into the container to check whether the server is running.
This is the netstat output from the container:</p>
<pre><code>Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 10.244.0.31:37742 10.105.167.186:80 ESTABLISHED 1/open5gs-amfd
sctp 10.244.0.31:38412 LISTEN 1/open5gs-amfd
</code></pre>
<p>sctp module is also loaded in the host.</p>
<pre><code>$lsmod | grep sctp
sctp 356352 8
xt_sctp 20480 0
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,ip_vs,sctp
x_tables 49152 18 ip6table_filter,xt_conntrack,xt_statistic,iptable_filter,iptable_security,xt_tcpudp,xt_addrtype,xt_nat,xt_comment,xt_owner,ip6_tables,xt_sctp,ipt_REJECT,ip_tables,ip6table_mangle,xt_MASQUERADE,iptable_mangle,xt_mark
</code></pre>
<p>Is it possible to expose this server externally?</p>
| <p>You cannot expose port 38412 externally because the default node port range in Kubernetes is 30000-32767.</p>
<p>The best solution (which I tried and which works) is to deploy a router/firewall in between the Kubernetes cluster and the external srsRAN. On the firewall, map SCTP port 38412 --> 31412.</p>
<p>Initiate the connection from srsRAN/UERANSIM and the firewall will do the magic and make your RAN connect to the 5g CORE.</p>
<p>PS: I deployed this solution on OpenStack.</p>
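<p>If you control the API server yourself (e.g. a kubeadm-style single-node cluster), another option, shown here only as a sketch to adjust to your setup, is to widen the NodePort range so that 38412 falls inside it:</p>
<pre class="lang-bash prettyprint-override"><code># on the control-plane node, edit the kube-apiserver static pod manifest
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# add or adjust this flag in the kube-apiserver command section:
#   - --service-node-port-range=30000-40000
# the kubelet restarts the apiserver automatically; afterwards nodePort: 38412 is accepted
</code></pre>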
|
<p>I am currently trying to set up an horizontal pod autoscaler for my application running inside Kubernetes. The HPA is relying on external metrics that are fetched from Prometheus by a Prometheus adapter (<a href="https://github.com/kubernetes-sigs/prometheus-adapter" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/prometheus-adapter</a>).</p>
<p>The metrics are fetched by the adapter and made available to the Kubernetes metrics API successfully, but the metricLabels map is empty, making it impossible for the HPA to associate the correct metrics with the correct pod.</p>
<p>Eg. of a query to the metrics API</p>
<pre><code>kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/<namespace>/batchCommandsActive_totalCount/"
{"kind":"ExternalMetricValueList","apiVersion":"external.metrics.k8s.io/v1beta1","metadata":{},"items":[{"metricName":"batchCommandsActive_totalCount",**"metricLabels":{}**,"timestamp":"2023-02-10T11:38:48Z","value":"0"}]}
</code></pre>
<p>Those metrics should have three labels associated to them (hostname, localnode and path) in order for the correct pod to retrieve them.</p>
<p>Here is an extract of the Prometheus adapter configmap that defines the queries made to Prometheus by the Prometheus adapter</p>
<pre><code>- seriesQuery: '{__name__="batchCommandsActive_totalCount",hostname!="",localnode!="",path!=""}'
  metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
  resources:
    namespaced: false
</code></pre>
<p>Thanks for your help!</p>
<p>So far, no answer from StackOverflow or tutorial (e.g. <a href="https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/walkthrough.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/walkthrough.md</a>) has helped with my problem.</p>
| <p>I've found the answer to my own question. Here it is for those of you who might have the same problem.</p>
<p>The metric names in Prometheus and the Kubernetes metrics API were using camel case (e.g. sessionTotal). Even though the same names were used in the HorizontalPodAutoscaler definition, the HPA was lowercasing the metric names behind the scenes (sessionTotal -> sessiontotal), resulting in 404 responses from the metrics API.</p>
<p>Therefore, be aware of this issue if you use camel case for your metric names.</p>
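<p>A minimal illustration (names are made up): expose the metric under an all-lowercase name and reference it the same way in the HPA, so that what the HPA requests matches what the external metrics API serves:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: External
    external:
      metric:
        name: sessiontotal          # all lowercase, matching what the metrics API serves
      target:
        type: AverageValue
        averageValue: "100"
</code></pre>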
|
<p>I'm trying to delete resources of a particular kind in a k8s cluster using client-go.</p>
<p>I'm using this code but it requires a specific namespace to be declared, but i want to delete this resource in all namespaces.</p>
<pre><code>u.SetName("test")
u.SetNamespace(v1.NamespaceAll)
u.SetGroupVersionKind(schema.GroupVersionKind{
    Group:   "group",
    Kind:    "kind",
    Version: "v1",
})
err := k8sClient.Delete(context.TODO(), u)
if err != nil {
    fmt.Println(err.Error())
    return err
}
</code></pre>
<p>I found the example here - <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client" rel="noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client</a> -
but it doesn't mention anything about all namespaces.
Could someone please suggest a way to figure this out?</p>
<p>NOTE:
This is a custom resource, not a default kind such as Pod or Deployment.</p>
| <p>Use the <code>List</code> method to get a list of all resources in <code>all namespaces</code> and then loop through the list and delete each resource using the <code>Delete</code> method.</p>
<pre class="lang-js prettyprint-override"><code>cr := &v1alpha1.CustomResource{}
// Get a list of all instances of your custom resource in all namespaces
listOpts := []client.ListOption{
client.InNamespace(v1.NamespaceAll),
}
err := k8sClient.List(context.Background(), cr, listOpts...)
if err != nil {
return err
}
// Loop through the list and delete each instance of your custom resource
for _, item := range cr.Items {
err = k8sClient.Delete(context.Background(), &item)
if err != nil {
return err
}
}
</code></pre>
|
<p>I have two projects in GCP. Let's call them project <code>A</code> and project <code>B</code>. Project <code>A</code> will contain the cluster for the deployment, while project <code>B</code> will contain the Artifact Registry (please note that this is not the same as the Container Registry).</p>
<p>The deployment is pulling a docker image from the Artifact Registry. Let's say the docker image is a stock standard Ubuntu image that our deployment is trying to deploy, and let's assume that the image is already in the Artifact Registry with no issues.</p>
<p>The deployment of the ubuntu image to the cluster is done via a GitLab CI/CD pipeline. It uses a service account that has the following roles:</p>
<ul>
<li>Artifact Registry Reader</li>
<li>Kubernetes Engine Developer</li>
<li>Viewer</li>
</ul>
<p>Additionally we also note that the cluster features two node pools. One is a custom node pool and the other is the default. They both have the following access scopes:</p>
<ul>
<li><a href="https://www.googleapis.com/auth/cloud-platform" rel="nofollow noreferrer">https://www.googleapis.com/auth/cloud-platform</a></li>
<li><a href="https://www.googleapis.com/auth/devstorage.read_only" rel="nofollow noreferrer">https://www.googleapis.com/auth/devstorage.read_only</a></li>
<li><a href="https://www.googleapis.com/auth/logging.write" rel="nofollow noreferrer">https://www.googleapis.com/auth/logging.write</a></li>
<li><a href="https://www.googleapis.com/auth/monitoring.write" rel="nofollow noreferrer">https://www.googleapis.com/auth/monitoring.write</a></li>
<li><a href="https://www.googleapis.com/auth/service.management.readonly" rel="nofollow noreferrer">https://www.googleapis.com/auth/service.management.readonly</a></li>
<li><a href="https://www.googleapis.com/auth/servicecontrol" rel="nofollow noreferrer">https://www.googleapis.com/auth/servicecontrol</a></li>
</ul>
<p>The <code>cloud-platform</code> scope is enabled to ensure that the node pools can pull from the Artifact Registry which is in a different project.</p>
<p>It is also important to note that both node pools use the default service account which has the roles:</p>
<ul>
<li>Artifact Registry Reader</li>
<li>Editor</li>
</ul>
<p>Upon deployment, the GitLab pipeline completes with no issues. However, the deployment workload fails in the cluster. The following events occur:</p>
<ol>
<li><code>Pulling image "europe-west1-docker.pkg.dev/B/docker-repo/ubuntu:latest"</code></li>
<li><code>Failed to pull image "europe-west1-docker.pkg.dev/B/docker-repo/ubuntu:latest": rpc error: code = Unknown desc = failed to pull and unpack image "europe-west1-docker.pkg.dev/B/docker-repo/ubuntu:latest": failed to resolve reference "europe-west1-docker.pkg.dev/B/docker-repo/ubuntu:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden</code></li>
<li><code>Error: ErrImagePull</code></li>
<li><code>Error: ImagePullBackOff</code></li>
<li><code>Back-off pulling image "europe-west1-docker.pkg.dev/B/docker-repo/ubuntu:latest"</code></li>
</ol>
<p>The deployment YAML file is as follows:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-build-deploy-demo
  labels:
    app: ubuntu
  namespace: customnamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: europe-west1-docker.pkg.dev/B/docker-repo/ubuntu:latest
        command: ["sleep", "123456"]
</code></pre>
<p>Why is the image not pulled from correctly? Why am I getting an auth issue despite the correct service account roles and access scopes? How can I resolve this issue?</p>
<p>I have double checked that the image name, tag, and path are correct many times. I have also double checked that the service accounts have the correct roles (specified earlier). I have also ensured that the node pool access scopes are correct and do indeed have the correct access scopes (specified earlier).</p>
<p>I am at a loss of how to resolve this issue. Any help would be greatly appreciated.</p>
| <p>Putting this in answer form: as @avinashjha said, the default service account from project <code>A</code> needs to have the role <code>Artifact Registry Reader</code> in project <code>B</code>. This allows the default service account to obtain the OAuth token and pull the docker image from the registry.</p>
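<p>For completeness, a sketch of granting that role (the project ID and service-account e-mail are placeholders; use the default compute service account your node pools run as):</p>
<pre class="lang-bash prettyprint-override"><code>gcloud projects add-iam-policy-binding PROJECT_B \
  --member="serviceAccount:PROJECT_A_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"
</code></pre>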
|
<p>I have installed the kube prometheus stack helm chart. The chart appears to install on my 3 node Kubernetes cluster (1.27.3) without issues. All the deployments and pods seems to enter a ready state. I port forwarded the prometheus pod and attempted to connect but was not able to.</p>
<p>When I review the logs from the prometheus pod it seems that there is an error or warning that something is wrong with the rule set it generated:</p>
<pre><code>ts=2023-07-15T22:26:26.110Z caller=manager.go:663 level=warn component="rule manager" file=/etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0/default-prometheus-kube-prometheus-kubernetes-system-kubelet-e76e1f61-e704-4f1c-a9f8-87d91012dd7c.yaml group=kubernetes-system-kubelet name=KubeletPodStartUpLatencyHigh index=5 msg="Evaluating rule failed" rule="alert: KubeletPodStartUpLatencyHigh\nexpr: histogram_quantile(0.99, sum by (cluster, instance, le) (rate(kubelet_pod_worker_duration_seconds_bucket{job=\"kubelet\",metrics_path=\"/metrics\"}[5m])))\n * on (cluster, instance) group_left (node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"}\n > 60\nfor: 15m\nlabels:\n severity: warning\nannotations:\n description: Kubelet Pod startup 99th percentile latency is {{ $value }} seconds\n on node {{ $labels.node }}.\n runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeletpodstartuplatencyhigh\n summary: Kubelet Pod startup latency is too high.\n" err="found duplicate series for the match group {instance=\"192.168.2.10:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"192.168.2.10:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"kubenode03\", service=\"prometheus-prime-kube-prom-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"192.168.2.10:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"kubenode03\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side
</code></pre>
<p>I am a newbie at prometheus so am unsure what to look for or how to troubleshoot this. Can someone help me understand:</p>
<ol>
<li>What is this error complaining about?</li>
<li>How would I fix this error?</li>
</ol>
<p>It appears that it is something in the kube-system namespace, but I only have 3 pods in that namespace, all with unique names:</p>
<p>CoreDNS
Local Path Provisioner
metrics-server</p>
<p>I appreciate any help or advice on how to troubleshoot this.</p>
| <blockquote>
<p>What is this error complaining about?</p>
</blockquote>
<p>Problem lies within your expression:</p>
<pre><code>histogram_quantile(0.99,
sum by (cluster, instance, le) (
rate(
kubelet_pod_worker_duration_seconds_bucket{job="kubelet",metrics_path="/metrics"}
[5m])))
* on (cluster, instance) group_left (node) kubelet_node_name{job="kubelet",metrics_path="/metrics"}
> 60
</code></pre>
<p>It seems like there are multiple metrics on both sides of <code>on</code> with the same values of the labels <code>cluster</code> and <code>instance</code>. That constitutes a many-to-many relation, and Prometheus doesn't allow those.</p>
<blockquote>
<p>How would I fix this error?</p>
</blockquote>
<p>Go to the Prometheus web UI and execute both parts of your <code>on</code> clause. You'll see the label sets produced by them. Correct your expression so that no many-to-many relations occur.</p>
<p>I'm not familiar with these exact metrics, so it's hard to say more specifically what to look for.</p>
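<p>In your case the error message itself already hints at the culprit: the two conflicting <code>kubelet_node_name</code> series differ only in their <code>service</code> label (<code>prometheus-prime-kube-prom-kubelet</code> vs <code>prometheus-kube-prometheus-kubelet</code>), which suggests two kubelet Service objects / scrape configs are feeding the same data. Running the right-hand side on its own should make that visible:</p>
<pre><code># run this in the Prometheus UI and compare the label sets of the returned series
kubelet_node_name{job="kubelet", metrics_path="/metrics", instance="192.168.2.10:10250"}
</code></pre>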
|
<p>I am new to Kubernetes, Istio and so on, so please be gentle :)</p>
<p>I have minikube running, I can deploy services and they run fine.
I have installed istio following this guide:
<a href="https://istio.io/latest/docs/setup/install/istioctl/" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/install/istioctl/</a></p>
<p>If I tag the default namespace with</p>
<pre><code>kubectl label namespace default istio-injection=enabled
</code></pre>
<p>the deployment fails. The service is green on the minikube dashboard, but the pod doesn't start up.</p>
<pre><code>Ready: false
Started: false
Reason: PodInitializing
</code></pre>
<p>Here are a couple of print screens from the dashboard:</p>
<p><a href="https://i.stack.imgur.com/o13fz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o13fz.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/c7nY7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c7nY7.png" alt="enter image description here" /></a></p>
<p>This is clearly related to istio.
If I remove the istio tag from the namespace, the deployment works and the pod starts.</p>
<p>Any help would be greatly appreciated.</p>
<p><strong>EDIT</strong></p>
<p>Running</p>
<pre><code>kubectl logs mypod-bd48d6bcc-6wcq2 -c istio-init
</code></pre>
<p>prints out</p>
<pre><code>2022-08-24T14:07:15.227238Z info Istio iptables environment:
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_OUTBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_EXCLUDE_INTERFACES=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
ISTIO_META_DNS_CAPTURE=
INVALID_DROP=
2022-08-24T14:07:15.229791Z info Istio iptables variables:
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_TUNNEL_PORT=15008
PROXY_UID=1337
PROXY_GID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=15090,15021,15020
OUTBOUND_OWNER_GROUPS_INCLUDE=*
OUTBOUND_OWNER_GROUPS_EXCLUDE=
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_INCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBE_VIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false
DNS_CAPTURE=false
DROP_INVALID=false
CAPTURE_ALL_DNS=false
DNS_SERVERS=[],[]
OUTPUT_PATH=
NETWORK_NAMESPACE=
CNI_MODE=false
EXCLUDE_INTERFACES=
2022-08-24T14:07:15.232249Z info Writing following contents to rules file: /tmp/iptables-rules-1661350035231776045.txt1561657352
* nat
-N ISTIO_INBOUND
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
2022-08-24T14:07:15.232504Z info Running command: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352
2022-08-24T14:07:15.256253Z error Command error output: xtables parameter problem: iptables-restore: unable to initialize table 'nat'
Error occurred at line: 1
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
2022-08-24T14:07:15.256845Z error Failed to execute: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352, exit status 2
</code></pre>
| <p>This might help you:</p>
<p>I was having the same error in the injected istio-init container.</p>
<p>My system is k8s 1.26 and Istio 1.17.1, installed on Rocky Linux 8.5 machines.</p>
<p>This solved my problem:</p>
<p>1.</p>
<pre><code>cat <<EOT >> /etc/modules-load.d/k8s.conf
overlay
br_netfilter
nf_nat
xt_REDIRECT
xt_owner
iptable_nat
iptable_mangle
iptable_filter
EOT
</code></pre>
<ol start="2">
<li></li>
</ol>
<pre><code>modprobe br_netfilter ; modprobe nf_nat ; modprobe xt_REDIRECT ; modprobe xt_owner; modprobe iptable_nat; modprobe iptable_mangle; modprobe iptable_filter
</code></pre>
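<p>To check that the required modules are actually loaded before retrying the deployment, something like this should work:</p>
<pre><code>lsmod | grep -E 'br_netfilter|nf_nat|xt_REDIRECT|xt_owner|iptable_nat|iptable_mangle|iptable_filter'
</code></pre>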
<p>I got this solution from:
<a href="https://github.com/istio/istio/issues/23009" rel="nofollow noreferrer">enter link description here</a></p>
|
<p>I want to change kube-apiserver log level use --v parameter , I checked all the documents, and none of them gave the value range of this parameter.</p>
<p>Check the code, the value is int32 type, I tried to fill in 0, 1, 2, and even 9999, all can pass the verification and start kube-apiserver normally.</p>
<p>So I want to know what is the value range of this parameter.Is the value 9999 really meaningful?</p>
| <p>9999 isn't a meaningful value; the documented verbosity levels only go up to around 10, so anything higher behaves the same as the highest level.</p>
<p><a href="https://i.stack.imgur.com/bcFc4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bcFc4.png" alt="Table" /></a></p>
<p><a href="https://docs.openshift.com/container-platform/4.8/rest_api/editing-kubelet-log-level-verbosity.html#log-verbosity-descriptions_editing-kubelet-log-level-verbosity" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.8/rest_api/editing-kubelet-log-level-verbosity.html#log-verbosity-descriptions_editing-kubelet-log-level-verbosity</a></p>
<p><a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md</a></p>
<p>I suggest not increasing the verbosity of the logs in the production environment (activate higher levels only if necessary, otherwise you may have performance problems).</p>
|
<p>I'm pretty new to Kubernetes, I have docker for mac, not minikube.</p>
<p>I have the deployment file running with one pod (containing an express app) in the cluster, then i created a NodePort service.</p>
<p>Inside the express app i have a get request to '/posts' and I want to access it from the localhost.</p>
<p>i have this service file here:</p>
<p><a href="https://i.stack.imgur.com/nZXWY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nZXWY.png" alt="enter image description here" /></a></p>
<p>However, when i'm trying to access my localhost in the port that is given to the NodePort service,
for example localhost:30134/posts,
i get page isn't working on Chrome.</p>
<p>Anyone has an idea why?</p>
| <p>PREMISE:</p>
<blockquote>
<p>Every node in the cluster configures itself to listen on that assigned port and to forward traffic to one of the ready endpoints associated with that Service. You'll be able to contact the type: NodePort Service, from outside the cluster, by connecting to any node using the appropriate protocol (for example: TCP), and the appropriate port (as assigned to that Service).</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a></p>
<p>First, figure out which node the "posts" pod is running on:</p>
<pre><code>kubectl -n NAMESPACE get pods -owide | grep -i posts
</code></pre>
<p>Once you figure out which node it's running on, retrieve its INTERNAL-IP:</p>
<pre><code>kubectl get nodes -owide
</code></pre>
<p>After that you will be able to reach the node via Browser (NODE-INTERNAL-IP:NODEPORT-PORT).</p>
<p>If you prefer to reach the node by a friendly hostname instead of its IP, add an entry to the /etc/hosts file (the format is IP followed by the name).</p>
<p>For example:</p>
<pre><code>echo "NODE-INTERNAL-IP my-k8s-node" >> /etc/hosts
</code></pre>
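<p>As an alternative on local setups like Docker Desktop, where the node runs inside a VM and its internal IP may not be reachable from the host, you can simply port-forward the Service (the service name and port below are placeholders, adjust them to your manifest):</p>
<pre><code># forward local port 8080 to the Service port, then open http://localhost:8080/posts
kubectl port-forward service/<your-service-name> 8080:<service-port>
</code></pre>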
|
<p>We currently have several microservices running on-premises, and we use shell scripts to monitor their performance, including JVM, heap size, and number of running threads. However, we are planning to move to the cloud using Kubernetes. We want to monitor service level stats like the number of threads open, JVM, heap size, etc., in the cloud. We are planning to use Prometheus, Grafana, and Elasticsearch for this purpose.</p>
<p>Can anyone suggest a way to monitor microservice performance in a Kubernetes cluster using Prometheus, Grafana, and Elasticsearch?</p>
<p>We believe this can be done using Spring Boot Actuator , but don't have clear idea on how it works on kubernetes for multiple services.</p>
| <p>Your question leaves out a few important considerations: for example, which cloud provider are you moving to? Why do you want to build your own monitoring stack (Prometheus/Alertmanager/Grafana) and logging stack (Elasticsearch/fluentd)?
Is there a particular reason why you want to remain "untied" from the provider's products?
Most cloud service providers already offer these tools as a managed service.</p>
<p>Anyway, for the "monitoring" stack you can use the Prometheus operator; this provides all the tools you need in one solution.</p>
<p><a href="https://prometheus-operator.dev/docs/prologue/introduction/" rel="nofollow noreferrer">https://prometheus-operator.dev/docs/prologue/introduction/</a></p>
<p>On the application side, you will have to expose the metrics you want to monitor (with Spring Boot this is typically done through Actuator and the Micrometer Prometheus registry) and add the corresponding scrape jobs to your Prometheus. After that you can build all the Grafana dashboards you need (you will find a lot of docs online).</p>
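<p>As a minimal sketch (the names, labels and port below are assumptions to adapt to your setup), a Spring Boot microservice exposing Actuator/Micrometer metrics can be scraped by the Prometheus operator with a ServiceMonitor like this:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-spring-service            # hypothetical name
  labels:
    release: prometheus              # must match your Prometheus' serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-spring-service         # label on the Kubernetes Service of your microservice
  endpoints:
  - port: http                       # name of the Service port
    path: /actuator/prometheus       # exposed by Spring Boot Actuator + Micrometer
    interval: 30s
</code></pre>
<p>One ServiceMonitor per microservice (or one with a broader label selector) is enough; the JVM, heap and thread stats you mention are exported out of the box by Micrometer's JVM metric binders.</p>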
<p>For the logging stack, you'll need a tool like fluentd to "fetch" and collect logs from your Kubernetes cluster, and a tool that allows you to intelligently view and process this information like Elasticsearch.</p>
<p>The tools in question are not as closely related as the monitoring ones, so it's up to you to decide how to install them. Surely I would create a single namespace for Logging and consider using the Helm Charts provided by the Vendors.</p>
|
<p><strong>I am deploying PostgreSQL cluster using kubernetes runnung 3 Instance How do you add postgresql connection string url by using kubernetes yaml file</strong></p>
<blockquote>
<p>postgresql://bigdata:bigdata@dbhost1:5432,dbhost2:5432/bigdata?target_session_attrs=primary</p>
</blockquote>
| <p>Try something like this, using the DNS name of your PostgreSQL Service as the host:</p>
<pre><code>jdbc:postgresql://<database_host>:<port>/<database_name>
</code></pre>
<p>Credentials will need to be managed via Secrets.</p>
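<p>A minimal sketch of that (names and values below are placeholders; the hosts should be the DNS names of your PostgreSQL Services/Pods) could be a Secret holding the full connection string, injected as an environment variable:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: bigdata-db
type: Opaque
stringData:
  DATABASE_URL: postgresql://bigdata:bigdata@dbhost1:5432,dbhost2:5432/bigdata?target_session_attrs=primary
---
# fragment of the consuming Deployment's container spec
env:
- name: DATABASE_URL
  valueFrom:
    secretKeyRef:
      name: bigdata-db
      key: DATABASE_URL
</code></pre>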
|
<h2>Background</h2>
<p>I have two Kubernetes nodepools - <code>A</code> and <code>B</code>.</p>
<p><code>A</code> nodepool is the right hardware and the preferred one. <code>B</code> be nodepool is less preferred since it using different hardware.</p>
<p>My pods allocation preferring nodepool <code>A</code>. But, <code>A</code> is not always available. Its available 70% of the time.</p>
<p>Once a pod spawned in nodepool <code>B</code>, it will stay there forever. This is inefficient for me.</p>
<h2>Question:</h2>
<p>I want to schedule auto termination of a pod, if its living inside of nodepool <code>B</code>. The auto termination can be done after a fix time (or even better random time between X to Y minutes).</p>
<p>How to implement this?</p>
| <p>The need is very clear.</p>
<p>You can add this piece of code to your Deployment (.spec.affinity.nodeAffinity):</p>
<pre><code>  nodeAffinity:
    # Soft (preferred) affinity towards nodepool A
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: nodepool-a
          operator: In
          values:
          - "true"
</code></pre>
<p>Assuming that the nodes in nodepool-a carry a label like "nodepool-a=true" (it could be any other label), with this configuration the Pods will preferably be scheduled on nodepool-a; if nodepool-a is not available, they are scheduled on other nodepools that don't match the label.</p>
<p>With this configuration, you just need to schedule a rollout restart of your Deployment to move the Pods to nodepool-a when it is available.</p>
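<p>For reference, the restart can be triggered like this (the deployment name and namespace are placeholders):</p>
<pre><code>kubectl rollout restart deployment/<your-deployment> -n <your-namespace>
</code></pre>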
<p>Unfortunately there is no other way to achieve your goal; maybe you could automate the Deployment rollout restart triggered by a monitoring event, which after some time "notifies" that the nodepool-a is available.</p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p>
<p>Taking a look at the <em>nodeaffinity</em> code, I understand that the <em>requiredDuringSchedulingRequiredDuringExecution</em> function will eventually be implemented, which should even evict Pods that are on nodes that do not match the label.</p>
<p>There are no details on when it will be implemented though.</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/v1.26.3/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go#L36" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.26.3/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go#L36</a> (line 258)</p>
|
<p>I am just learning containers and kubernetes and everything around it. There has been a use case to build a reliable setup, where we can store all our python scripts(small, usecase defined scripts that do one only job each). There are some scripts in other languages like perl too.</p>
<p>Not sure if this is the correct place to ask, but I will ask anyway.</p>
<p>The requirement is to build a solution that will have less to no dependency on the underlying operating system so even if we were to switch operating systems/servers in the future, the scripts can remain/run as it is.</p>
<p>Was thinking if I can build a 2 node kubernetes cluster and run each script in a container and trigger them using a cron job. Not sure if this is an optimal and efficient approach. The python virtual environments is not our way to go given the python version is symmlinked back to the python version on the server, causing a server/os dependency.</p>
<p>Appreciate any ideas and advice if someone else has done something similar. I've google enough for such usecases. Didn't find solutions that match specifically to my need. But please feel free to share, ideas, thoughts any good reads too. Thanks!</p>
<p>Note: The server operating system is RHEL 8 and above</p>
| <p>The idea of containerizing your scripts allows you to have a highly customized "environment" that doesn't change wherever you deploy it.</p>
<p>For the management of these containers, decide according to your needs. If they are management scripts, you can think of creating a management Pod that always stays up and running (the link below has a couple of examples of how to do it).</p>
<p><a href="https://stackoverflow.com/questions/31870222/how-can-i-keep-a-container-running-on-kubernetes">How can I keep a container running on Kubernetes?</a></p>
<p>Otherwise, it may be a good idea to prepare a Job and run it as needed.</p>
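<p>Since you mention triggering the scripts with a cron job, a minimal CronJob sketch could look like this (the image, schedule and script path are assumptions):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-python-script              # hypothetical name
spec:
  schedule: "0 * * * *"               # every hour, adjust as needed
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: script
            image: registry.example.com/my-python-script:1.0   # your containerized script
            command: ["python", "/app/my_script.py"]
</code></pre>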
<p>For PROD, remember to have at least 3 nodes (HA), do a pre-assessment of how many resources you can assign to your Pods (assuming the resource consumption of these scripts has already been tested), and think about the roles assigned to worker nodes, so as to avoid Pods being scheduled randomly everywhere (perhaps next to a business-critical workload that risks saturating resources), plus autoscaling, etc.</p>
|
<p>I have a Google Cloud Composer environment set up that has 3 nodes in the worker pool. Each node has memory of 16GB (using n1-standard-4) instance. I have tasks inside a DAG that takes around 7-8GB of memory. The allocable memory for the worker nodes is roughly 12GB and hence, these tasks should run without encountering a OOM error.</p>
<p>My intuition is that each worker node has some (variable) number of pods, and when the Airflow scheduler queues tasks, the tasks run inside a worker pod and not on the node itself. There might also be memory limits set for the pods inside the worker nodes. Note that the worker nodes are in a Kubernetes node pool used exclusively by my Composer instance.</p>
<p>How can I make sure that all my tasks run without encountering OOM given that my nodes have enough memory? Is there a way to set the pod memory limits to be higher?</p>
<p>I looked at the workloads for the worker inside Kubernetes workloads and I can see that memory limit is 3.7GB which I guess is the limit for the pods.</p>
<p>What should I do?</p>
<p>Thanks in advance.</p>
| <p>It's certainly good practice to pre-assess the resources available in your node-pool and know in advance how "big" your Pods will be.</p>
<p>Once you know how many resources you have available, you can do 2 things:</p>
<p>1. Set up resource requests/limits for all your Pods, making sure you never exceed the maximum available in your node pool;</p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
<p>2. Try to understand, in case of a malfunction or the deployment of other NOT estimated workloads in your K8s cluster, which Pods you would be willing to sacrifice and in which order.</p>
<p>This will ensure that critical services are shut down ONLY after all other services have already been shut down.</p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/</a></p>
<p>Obviously there would be points 3 and 4 if you want to scale horizontally and vertically, with autoscaling set up for the node pool and for your Deployments, but they are not strictly related to your question.</p>
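<p>As a generic illustration of points 1 and 2 (the values are placeholders, and note that on Cloud Composer the worker Pod spec is largely managed for you, so check which of these knobs your Composer version actually exposes):</p>
<pre><code># container-level requests/limits (point 1)
resources:
  requests:
    memory: "8Gi"
    cpu: "1"
  limits:
    memory: "10Gi"
    cpu: "2"
# Pod-level priority so less important Pods are evicted first (point 2)
priorityClassName: high-priority
</code></pre>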
|
<p>When releasing a Helm chart, Kubernetes resources have Helm's <code>.Release.Name</code> prepended to their names. In the Chart I'm writing separate namespaces with default resource naming is enough, and the additional prefix is verbose and unwanted. Vault is a dependency of my Chart.</p>
<p>Is there a way I can achieve this using Helm? Ideally, I should only be able to deploy this chart once per namespace, supplying a namespace and release name also feel redundant.</p>
<pre class="lang-bash prettyprint-override"><code># Specify namespace and release name
helm install --create-namespace -n my-namespace my-release hashicorp/vault
# Vault is named "my-release-vault-..."
kubectl get svc -n my-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-release-vault ClusterIP 10.98.169.234 <none> 8200/TCP,8201/TCP 8s
</code></pre>
| <p>I took a quick look at the chart and I don't think it currently offers a way to do what you're asking for.</p>
<p>What you can do is modify the templates yourself, adding the override parameter for names/namespaces and try doing a PR against the Hashicorp repo; maybe they didn't think this feature could be useful.</p>
<p>Otherwise, keep a custom CHANGELOG that tracks these modifications, as a reminder that you have to re-apply them every time you pull a new version of the chart. It's a maintenance burden you can't ignore, but in enterprise and highly customized environments it is quite common to have to modify vendor templates.</p>
<p>This is where the definition of the name happens:
<a href="https://raw.githubusercontent.com/hashicorp/vault-helm/main/templates/_helpers.tpl" rel="nofollow noreferrer">https://raw.githubusercontent.com/hashicorp/vault-helm/main/templates/_helpers.tpl</a>
(First block)</p>
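<p>For illustration, the usual Helm convention for such an override parameter (this is the generic pattern produced by <code>helm create</code>, not necessarily what the Vault chart contains today) looks roughly like this:</p>
<pre><code>{{- define "vault.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
</code></pre>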
|
<p>I have an issue with my GKE cluster. I am using two node pools: secondary - with standard set of highmen-n1 nodes, and primary - with preemptible highmem-n1 nodes. Issue is that I have many pods in Error/Completed status which are not cleared by k8s, all ran on preemptible set. THESE PODS ARE NOT JOBS.</p>
<p>GKE documentation says that:
"Preemptible VMs are Compute Engine VM instances that are priced lower than standard VMs and provide no guarantee of availability. Preemptible VMs offer similar functionality to Spot VMs, but only last up to 24 hours after creation."</p>
<p>"When Compute Engine needs to reclaim the resources used by preemptible VMs, a preemption notice is sent to GKE. Preemptible VMs terminate 30 seconds after receiving a termination notice."
Ref: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms</a></p>
<p>And from the kubernetes documentation:
"For failed Pods, the API objects remain in the cluster's API until a human or controller process explicitly removes them.</p>
<p>The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up terminated Pods (with a phase of Succeeded or Failed), when the number of Pods exceeds the configured threshold (determined by terminated-pod-gc-threshold in the kube-controller-manager). This avoids a resource leak as Pods are created and terminated over time."
Ref: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection</a></p>
<p>So, from my understanding every 24 hours this set of nodes is changing, so it kills all the pods running on them and depending on graceful shutdown pods are ending up in Completed or Error state. Nevertheless, kubernetes is not clearing or removing them, so I have tons of pods in mentioned statuses in my cluster, which is not expected at all.</p>
<p>I am attaching screenshots for reference.
<a href="https://i.stack.imgur.com/MOnNk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MOnNk.png" alt="Pods in Error/Completed State" /></a>
Example <code>kubectl describe pod</code> output:
Status: Failed
Reason: Terminated
Message: Pod was terminated in response to imminent node shutdown.</p>
<p>Apart from that, no events, logs, etc.</p>
<p>GKE version:
1.24.7-gke.900</p>
<p>Both Node pools versions:
1.24.5-gke.600</p>
<p>Did anyone encounter such issue or knows what's going on there? Is there solution to clear it in a different way than creating some script and running it periodically?</p>
<p>I tried digging in into GKE logs, but I couldn't find anything. I also tried to look for the answers in docs, but I've failed.</p>
| <p>The given commands do not work for me.</p>
<p>I have created a few manifests that you can apply in your cluster to automatically delete the Pods matching the criteria with a kubernetes CronJob.</p>
<p><a href="https://github.com/tyriis/i-see-dead-pods" rel="nofollow noreferrer">https://github.com/tyriis/i-see-dead-pods</a></p>
<p>This is working for me:</p>
<pre><code>kubectl get pods \
--all-namespaces \
-o go-template \
--template='{{range .items}}{{printf "%s %s %s\n" .metadata.namespace .metadata.name .status.message}}{{end}}' \
| grep "Pod was terminated in response to imminent node shutdown." \
| awk '{print $1, $2}' \
| xargs -r -n2 kubectl delete pod -n
</code></pre>
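<p>If you prefer to run this on a schedule instead of manually, a rough sketch of a CronJob wrapping the same command could look like the following (the kubectl image is an assumption, and you also need a ServiceAccount with permissions to list and delete Pods; see the linked repository for complete manifests including RBAC):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: delete-shutdown-pods
spec:
  schedule: "*/30 * * * *"              # every 30 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner    # needs list/delete on pods (RBAC not shown)
          restartPolicy: Never
          containers:
          - name: cleaner
            image: bitnami/kubectl:latest     # assumption: any image providing kubectl and a shell
            command: ["/bin/sh", "-c"]
            args:
            - >
              kubectl get pods --all-namespaces -o go-template
              --template='{{range .items}}{{printf "%s %s %s\n" .metadata.namespace .metadata.name .status.message}}{{end}}'
              | grep "Pod was terminated in response to imminent node shutdown."
              | awk '{print $1, $2}'
              | xargs -r -n2 kubectl delete pod -n
</code></pre>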
|
<p>Here is my ingress.yaml file that I'm trying to patch.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: notesapp-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: addon-http-application-routing
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/proxy-body-size: 100m
cert-manager.io/issue-temporary-certificate: "true"
acme.cert-manager.io/http01-edit-in-place: "true"
spec:
tls:
- hosts:
- notesapp.plan.com
secretName: tls-secret
rules:
- host: notesapp.plan.com
http:
paths:
- backend:
service:
name: notesapp-service
port:
number: 4000
path: /
pathType: Prefix
</code></pre>
<p>I want to patch <strong>spec.tls[0].hosts</strong> url to custom url and patched like following.
Here is ingress-patch.yaml file.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: notesapp-ingress
namespace: default
spec:
tls:
- hosts:
- staging.notesapp.plan.com
rules:
- host: staging.notesapp.plan.com
</code></pre>
<p>But after flux reconciled this patch, I see this error.</p>
<p>Here, Look.</p>
<p><strong>kubectl describe ingress notesapp-ingress</strong></p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BadConfig 54m (x2 over 55m) cert-manager-ingress-shim Skipped a TLS block: spec.tls[0].secretName: Required value
Normal Sync 54m (x2 over 55m) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>It looks like spec.tls[0] element was completely replaced with my patch instead of patching only spec.tls[0].hosts.
How can I patch only spec.tls[0].hosts?</p>
<p><strong>The problem:</strong>
Imagine there are many other key-values in spec.tls[0] object(just like when I want to patch spec.rules[0].host). Then it would cause code duplication.
For example, when I patch spec.rules[0].host value, I have to add this code, where it's not necessary for me.</p>
<pre><code> http:
paths:
- backend:
service:
name: notesapp-service
port:
number: 4000
path: /
pathType: Prefix
</code></pre>
| <p>You are currently replacing the list and not only a single item.</p>
<p>You can use the JSON patch replace operation to target a single list item for replacement. Be careful when mixing it with other operations like delete or add, as indexes can change in such scenarios!</p>
<pre class="lang-yaml prettyprint-override"><code>---
# yaml-language-server: $schema=https://json.schemastore.org/kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ingress.yaml
patches:
- patch: |
- op: replace
path: /spec/tls/0/hosts/0
value: staging.notesapp.plan.com
- op: replace
path: /spec/rules/0/host
value: staging.notesapp.plan.com
target:
kind: Ingress
version: v1
name: notesapp-ingress
</code></pre>
<p>As a Flux Kustomization uses kustomize under the hood, you can add this to a Flux Kustomization as well.</p>
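<p>You can check the rendered result locally before Flux reconciles it:</p>
<pre><code>kubectl kustomize .        # or: kustomize build .
</code></pre>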
|
<p>I have a kubernetes cluster on which I have deployed a opensearch cluster and opensearch dashboard using Helm, I am also able to deploy logstash using helm successfully but I am confused on how to integrate those, I want to feed data to my Opensearch using logstash as my OBJECTIVE as I am not able to find much documentation on it as well. Any help is appreciated....Thanks in advance!</p>
<p>Deployed opensearch using Helm and logstash as well but unable to integrate them</p>
<p><strong>Update here!!!</strong></p>
<p>Have made a few changes to simplify the deployment and more control over the function,</p>
<p>I am testing deployment and service files this time, I will add the files below</p>
<p>Opensearch deployment file</p>
<pre><code>
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: logging
name: opensearch
labels:
component: opensearch
spec:
selector:
matchLabels:
component: opensearch
replicas: 1
serviceName: opensearch
template:
metadata:
labels:
component: opensearch
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: opensearch
securityContext:
capabilities:
add:
- IPC_LOCK
image: opensearchproject/opensearch
env:
- name: KUBERNETES_CA_CERTIFICATE_FILE
value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "cluster.name"
value: "opensearch-cluster"
- name: "network.host"
value: "0.0.0.0"
- name: "discovery.seed_hosts"
value: "[]"
- name: discovery.type
value: single-node
- name: OPENSEARCH_JAVA_OPTS
value: -Xmx512M -Xms512M
- name: "plugins.security.disabled"
value: "false"
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: os-mount
mountPath: /data
volumes:
- name: os-mount
persistentVolumeClaim:
claimName: nfs-pvc-os-logging
</code></pre>
<p>Opensearch svc file</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: opensearch
namespace: logging
labels:
service: opensearch
spec:
type: ClusterIP
selector:
component: opensearch
ports:
- port: 9200
targetPort: 9200
</code></pre>
<p>Opensearch dashboard deployment</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: open-dash
namespace: logging
spec:
replicas: 1
selector:
matchLabels:
app: open-dash
template:
metadata:
labels:
app: open-dash
spec:
# securityContext:
# runAsUser: 0
containers:
- name: opensearch-dashboard
image: opensearchproject/opensearch-dashboards:latest
ports:
- containerPort: 80
env:
# - name: ELASTICSEARCH_URL
# value: https://opensearch.logging:9200
# - name: "SERVER_HOST"
# value: "localhost"
# - name: "opensearch.hosts"
# value: https://opensearch.logging:9200
- name: OPENSEARCH_HOSTS
value: '["https://opensearch.logging:9200"]'
</code></pre>
<p>Opensearch Dashboard svc</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: opensearch
namespace: logging
labels:
service: opensearch
spec:
type: ClusterIP
selector:
component: opensearch
ports:
- port: 9200
targetPort: 9200
</code></pre>
<p>with the above configuration I am able to get the Dashboard UI open but in Dashboard pod logs I can see a 400 code logs can anyone please try to reproduce this issue, Also I need to integrate the logstash with this stack.</p>
<blockquote>
<p>{"type":"response","@timestamp":"2023-02-20T05:05:34Z","tags":[],"pid":1,"method":"head","statusCode":400,"req":{"url":"/app/home","method":"head","headers":{"connection":"Keep-Alive","content-type":"application/json","host":"3.108.199.0:30406","user-agent":"Manticore 0.9.1","accept-encoding":"gzip,deflate","securitytenant":"<strong>user</strong>"},"remoteAddress":"10.244.1.1","userAgent":"Manticore 0.9.1"},"res":{"statusCode":400,"responseTime":2,"contentLength":9},"message":"HEAD /app/home 400 2ms - 9.0B</p>
</blockquote>
<p>When deploying a <strong>logstash</strong> pod I get an error that</p>
<blockquote>
<p>[WARN ] 2023-02-20 05:13:52.212 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-opensearch-2.0.1-java/lib/logstash/outputs/opensearch/http_client/pool.rb:217] opensearch - Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"http://logstash:[email protected]:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [http://logstash:[email protected]:9200/][Manticore::ClientProtocolException] opensearch.logging:9200 failed to respond"}</p>
</blockquote>
<p>Can somebody please try to help me with this puzzle</p>
<p>@Benla have made changes as per your recommendation to image now I am getting the following logs in logstash</p>
<blockquote>
<p>[2023-02-20T05:18:43,028][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-02-20T05:18:43,147][INFO ][org.reflections.Reflections] Reflections took 70 ms to scan 1 urls, producing 127 keys and 444 values
[2023-02-20T05:18:43,351][INFO ][logstash.javapipeline ] Pipeline <code>main</code> is configured with <code>pipeline.ecs_compatibility: v8</code> setting. All plugins in this pipeline will default to <code>ecs_compatibility => v8</code> unless explicitly configured otherwise.
[2023-02-20T05:18:43,370][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3bf49916@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:131 run>"}
[2023-02-20T05:18:43,811][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.44}
[2023-02-20T05:18:43,816][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2023-02-20T05:18:43,821][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-02-20T05:18:43,835][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2023-02-20T05:18:43,869][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044</p>
</blockquote>
<p>I started getting these endless loop of logs</p>
<blockquote>
<p>[WARN ] 2023-02-20 05:13:37.191 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-opensearch-2.0.1-java/lib/logstash/outputs/opensearch/http_client/pool.rb:217] opensearch - Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"http://logstash:[email protected]:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [http://logstash:[email protected]:9200/][Manticore::ClientProtocolException] opensearch.logging:9200 failed to respond"}</p>
</blockquote>
| <p>For OpenSearch SSO (Keycloak), please use the steps below.</p>
<p>OpenSearch:</p>
<ol>
<li><p>Make a custom image for OpenSearch; for this, create two files as below.</p>
<p>i. config.yml (for the OpenSearch security plugin)</p>
<pre><code>---
_meta:
  type: "config"
  config_version: 2

config:
dynamic:
http:
anonymous_auth_enabled: false
authc:
internal_auth:
order: 0
description: "HTTP basic authentication using the internal user database"
http_enabled: true
transport_enabled: true
http_authenticator:
type: basic
challenge: false
authentication_backend:
type: internal
openid_auth_domain:
http_enabled: true
transport_enabled: true
order: 1
http_authenticator:
type: openid
challenge: false
config:
subject_key: preferred_username
roles_key: roles
openid_connect_url: "https://keycloak-url/realms/realm-name/.well-known/openid-configuration"
authentication_backend:
type: noop
---
</code></pre>
</li>
</ol>
<p>ii. log4j2.properties (this file enables logging in OpenSearch so we can see logs that are otherwise turned off)</p>
<pre><code> ---
logger.securityjwt.name = com.amazon.dlic.auth.http.jwt
logger.securityjwt.level = trace
---
</code></pre>
<p>iii. Dockerfile</p>
<pre><code>---
FROM opensearchproject/opensearch:2.5.0
RUN mkdir /usr/share/opensearch/plugins/opensearch-security/securityconfig
COPY config.yaml /usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml
COPY config.yaml /usr/share/opensearch/config/opensearch-security/config.yml
COPY log4j2.properties /usr/share/opensearch/config/log4j2.properties
---
</code></pre>
<ol start="2">
<li><p>Deploy OpenSearch with the OpenSearch Helm chart (replace the image with your custom image built from the configs above).
OpenSearch will deploy 3 pods. Now go into each pod and run the command below to initialize the security plugin (do this only once per OpenSearch pod).</p>
<hr />
<p>/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh<br />
-cacert /usr/share/opensearch/config/root-ca.pem <br />
-cert /usr/share/opensearch/config/kirk.pem <br />
-key /usr/share/opensearch/config/kirk-key.pem <br />
-cd /usr/share/opensearch/config/opensearch-security <br />
-h localhost</p>
<hr />
<p>Make sure all 3 pods are up and in the Ready state.
opensearch-dashboards:</p>
</li>
</ol>
<p>3. Now we will configure opensearch-dashboards.
i. In the values.yml of the opensearch-dashboards Helm chart, search for config:</p>
<pre><code>---
config:
opensearch_dashboards.yml: |
opensearch.hosts: [https://localhost:9200]
opensearch.ssl.verificationMode: none
opensearch.username: admin
opensearch.password: admin
opensearch.requestHeadersWhitelist: [authorization, securitytenant]
opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.preferred: [Private, Global]
opensearch_security.readonly_mode.roles: [kibana_read_only]
opensearch_security.cookie.secure: false
server.host: '0.0.0.0'
opensearch_security.auth.type: "openid"
opensearch_security.openid.connect_url: "https://keycloak-url/realms/realm-name/.well-known/openid-configuration"
opensearch_security.openid.client_id: "admin"
opensearch_security.openid.client_secret: "asgduasdjsadk"
opensearch_security.openid.scope: "email openid"
opensearch_security.openid.base_redirect_url: "https://opensearch_dashboards-url.io"
---
</code></pre>
<p>ii. Deploy opensearch-dashboards.</p>
<p>Once opensearch-dashboards is deployed and the pod is up and ready, you can go to https://opensearch_dashboards-url.io (your opensearch-dashboards URL) and you will see the Keycloak login form.</p>
|
<p>I got a .net6 service hosted in an AKS cluster with app insight and profiler enabled. Logs appear in app insight and live metrics are working and I can see every action in app insight.</p>
<p>When I click the "Profile now" button in the performance tab, it says a profiling session is in progress and I fire a few requests. Eventually I get a message in app insight of a timeout and no session is added to the list. Why could it happen?</p>
| <p>Sadly, the Azure Profiler just does not support .NET 6. There might be other profiling solutions on Azure for this .NET version.</p>
|
<p>I have a system consist of 2 applications: <code>Application 1</code> and <code>Application 2</code>. Fundamentally, those applications are perfectly sync in term of reading\writing from storage.</p>
<h2>System arch:</h2>
<p><a href="https://i.stack.imgur.com/1VfMq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1VfMq.png" alt="enter image description here" /></a></p>
<h2>How it working today?</h2>
<p><code>Application 1</code> is downloading a file from the internet and store it on Google Cloud Storage. Than, the URI passed to the shared queue. Once <code>Application 2</code> is ready, it pull the queue message and downloading the URI to the memory and continue the processing.</p>
<h2>What change I want to make?</h2>
<p>I want to replace the Google Cloud Storage with another internal storage system that built-in in Kubernetes. Since my application are perfectly synced, its guaranteed that <code>Application 2</code> will read from the storage only when <code>Application 1</code> already finished the writing.</p>
<p>What kind of Kubernetes storage you recommend to adapt? How to implement it?</p>
| <p>You can use the Filestore service offered within GCP as the storage solution for the Kubernetes app you will be creating.</p>
<p>Filestore supports "<code>ReadWriteMany</code>" access for your apps to use. You can see the official documentation and implementation via this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/filestore-for-gke" rel="nofollow noreferrer">link</a>. For additional info about Kubernetes <code>volumeMounts</code> and <code>volumeClaims</code>, check this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#mounting-the-same-persistentvolume-in-two-places" rel="nofollow noreferrer">link</a>.</p>
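<p>As a minimal sketch, a shared <code>ReadWriteMany</code> claim could look like this (the storage class name <code>standard-rwx</code> is the one commonly created by the GKE Filestore CSI driver; verify it with <code>kubectl get storageclass</code> on your cluster):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: standard-rwx      # assumption: provided by the Filestore CSI driver
  resources:
    requests:
      storage: 1Ti                    # Filestore instances have a large minimum size
</code></pre>
<p>Both Application 1 and Application 2 can then mount this PVC, so Application 2 reads exactly the file that Application 1 wrote.</p>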
|
<p>I'm trying to understand how securityContext work in my cluster (k8s v 1.24 & the node is an Ubuntu 18.04).</p>
<p>I simply want to do a simple cat /dev/tty0 from the container.</p>
<p>Here is my simplified configuration of the node :</p>
<pre><code>bash-4.2# ls -al /dev/tty0
crw--w---- 1 root tty 4, 0 Jul 25 05:16 /dev/tty0
bash-4.2# grep tty /etc/group
tty:x:5
</code></pre>
<p>I mounted /dev/tt0 to access it from the container & run the container with group Id 5 & userId 0 (i tried also without the runAsUser but the behaviour is the same)</p>
<pre><code>spec:
volumes:
- name: tty
hostPath:
path: /dev/tty0
containers:
- name: mycontainer
image: ...
volumeMounts:
- name: tty
mountPath: /dev/tty0
securityContext:
runAsUser: 0
runAsGroup: 5
</code></pre>
<p>When I log in the container:</p>
<pre><code>bash-4.2# id
uid=0(root) gid=5(tty) groups=5(tty)
bash-4.2# ls -al /dev/tty0
crw--w---- 1 root tty 4, 0 Jul 25 05:16 /dev/tty0
</code></pre>
<p>But i cannot access /dev/tty0.</p>
<pre><code>bash-4.2# cat /dev/tty0
cat: /dev/tty0: Operation not permitted
</code></pre>
<p>While from the node I don't have this error message.</p>
<p>This is just for testing purpose, my originale use case is the launching of Xorg but I get the Operation not permitted error message.</p>
<p>I tried adding the "privileged: true" securityContext, and with this it works. However, it is not a good practise to use this maximum capacity to the container and I'm trying to understand what is the minimal security context to give to the pod.</p>
<p>Thanks!</p>
| <p>The securityContext specifies that the container should run as the user with UID 0 and GID 5. Even when a container runs as root, it can still have certain restrictions imposed by the underlying container runtime for security reasons.</p>
<p>In your case, the issue is that accessing <strong>/dev/tty0 typically requires elevated privileges, because it's a critical device file representing the first virtual terminal.</strong></p>
<p>You can grant elevated privileges like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
volumes:
- name: tty
hostPath:
path: /dev/tty0
containers:
- name: mycontainer
image: ...
volumeMounts:
- name: tty
mountPath: /dev/tty0
securityContext:
privileged: true
</code></pre>
<p>But be careful: it's generally good practice to run containers with the least privilege necessary to perform their intended task, because running a container as privileged can be dangerous, as it allows direct access to system resources and can compromise the host system.</p>
|
<p>I'm trying to deploy a container to AKS but the pod is constantly in CrashLoopBackOff and restarting. I tried with different pods but apparently is unrelated with that regard.</p>
<p>The pod has very simple functional behaviour that works fine locally. It's a simple express server listening to the port 9002 and send the message "HELLO" in the path "/".</p>
<p>The deployed container has the following yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: be-simply-depl
spec:
replicas: 1
selector:
matchLabels:
app: be-simply
template:
metadata:
labels:
app: be-simply
spec:
containers:
- name: be-simply
image: dockeraccount/be-simply
---
apiVersion: v1
kind: Service
metadata:
name: be-simply-srv
spec:
selector:
app: be-simply
type: ClusterIP
ports:
- name: be-simply
protocol: TCP
port: 9002
targetPort: 9002
</code></pre>
<p>When running:</p>
<blockquote>
<p>kubectl describe pod be-simply-depl-79f49b58d7-wksms</p>
</blockquote>
<pre><code>Name: be-simply-depl-79f49b58d7-wksms
Namespace: default
Priority: 0
Service Account: default
Node: aks-agentpool-24486413-vmss000000/10.224.0.4
Start Time: Tue, 25 Jul 2023 05:17:17 +0000
Labels:
app=be-simply
pod-template-hash=79f49b58d7
Annotations: <none>
Status:           Running
IP:               10.244.0.42
IPs:
  IP:             10.244.0.42
Controlled By:    ReplicaSet/be-simply-depl-79f49b58d7
Containers:
  be-simply:
Container ID: containerd://eddf0b34eaeaa8a0223631976d175b1e86d24129300d669a79c217cab4ef5531
Image: manelcastro/be-simply
Image ID: docker.io/manelcastro/be-simply@sha256:c933ed74ed2d1281e793beb9d4555b8951a5dba3ed21bc8dd27c65e0896e13ea
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 25 Jul 2023 05:18:06 +0000
      Finished:     Tue, 25 Jul 2023 05:18:06 +0000
    Ready:          False
    Restart Count:  3
Environment: <none>
Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pnhv2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-pnhv2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors: <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
Normal Scheduled 101s default-scheduler Successfully assigned default/be-simply-depl-79f49b58d7-wksms to aks-agentpool-24486413-vmss000000
Normal Pulled 101s kubelet Successfully pulled image "manelcastro/be-simply" in 749.891764ms (749.897064ms including waiting)
Normal Pulled 99s kubelet Successfully pulled image "manelcastro/be-simply" in 735.883614ms (735.889814ms including waiting)
Normal Pulled 82s kubelet Successfully pulled image "manelcastro/be-simply" in 728.026859ms (728.037459ms including waiting)
Normal Created 53s (x4 over 101s) kubelet Created container be-simply
Normal Started 53s (x4 over 101s) kubelet Started container be-simply
Normal Pulled 53s kubelet Successfully pulled image "manelcastro/be-simply" in 733.54339ms (733.55049ms including waiting)
Warning BackOff 15s (x8 over 99s) kubelet Back-off restarting failed container
</code></pre>
<blockquote>
<p>kubectl logs be-simply-depl-79f49b58d7-wksms</p>
</blockquote>
<pre><code>exec /usr/local/bin/docker-entrypoint.sh: exec format error
</code></pre>
<p>Anybody can shed light on this topic? Thanks</p>
| <p>Finally solved the issue: it was caused by a mismatch in CPU architecture between where I was building the Docker image and where I was deploying it. I was building on a Mac and deploying to an x86_64 cloud server.</p>
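<p>For reference, one way to avoid this is to build the image explicitly for the target platform with Docker buildx (the image name below is a placeholder):</p>
<pre><code># build (and push) an amd64 image even when building on an ARM machine
docker buildx build --platform linux/amd64 -t <your-registry>/be-simply:latest --push .
</code></pre>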
|
<p>I’m trying to deploy a Flask application using Apache Spark 3.1.1 on Kubernetes.</p>
<p><strong>app.py</strong></p>
<pre><code>from flask import Flask
from pyspark.sql import SparkSession
app = Flask(__name__)
app.debug = True
@app.route('/')
def main():
print("Start of Code")
spark = SparkSession.builder.appName("Test").getOrCreate()
sc=spark.sparkContext
spark.stop()
print("End of Code")
return 'hi'
if __name__ == '__main__':
app.run()
</code></pre>
<p><strong>requirements.txt</strong></p>
<pre><code>flask
pyspark
</code></pre>
<p><strong>Dockerfile</strong></p>
<ul>
<li><p>NOTE: "spark-py" is the vanilla Spark image, obtainable by running "./bin/docker-image-tool.sh -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile build" in "$SPARK_HOME" directory.</p>
</li>
<li><p>NOTE: I saved the result of this Dockerfile in local registry as "localhost:5000/k8tsspark".</p>
<pre><code> FROM spark-py
USER root
COPY . /app
RUN pip install -r /app/requirements.txt
EXPOSE 5000
</code></pre>
</li>
</ul>
<p><strong>hello-flask.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hello-flask
name: hello-flask
spec:
selector:
matchLabels:
app: hello-flask
replicas: 1
template:
metadata:
labels:
app: hello-flask
spec:
containers:
- name: hello-flask
image: localhost:5000/k8tsspark:latest
command: [
"/bin/sh",
"-c",
"/opt/spark/bin/spark-submit \
--master k8s://https://192.168.49.2:8443 \
--deploy-mode cluster \
--name spark-on-kubernetes \
--conf spark.executor.instances=2 \
--conf spark.executor.memory=1G \
--conf spark.executor.cores=1 \
--conf spark.kubernetes.container.image=localhost:5000/k8tsspark:latest \
--conf spark.kubernetes.container.image.pullPolicy=Never \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.pyspark.pythonVersion=3 \
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
--conf spark.dynamicAllocation.enabled=false \
local:///app/app.py"
]
imagePullPolicy: Never
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: hello-flask
labels:
app: hello-flask
spec:
type: LoadBalancer
ports:
- name: http
port: 5000
protocol: TCP
targetPort: 5000
selector:
app: hello-flask
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: spark
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: spark-role
subjects:
- kind: ServiceAccount
name: spark
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: edit
</code></pre>
<p><strong>terminal - kubectl apply</strong></p>
<pre><code>kubectl apply -f ./hello-flask.yaml
PROBLEM: using the dashboard I can see executor pods being created while booting
(the idea is to keep spark-driver always active and trigger spark-executors via API call)
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-flask-86689bdf84-ckkj4 1/1 Running 0 5m33s
spark-on-kubernetes-811fd878ef3d3c16-driver 1/1 Running 0 5m31s
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-flask LoadBalancer 10.103.254.34 <pending> 5000:32124/TCP 6m1s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6m13s
spark-on-kubernetes-811fd878ef3d3c16-driver-svc ClusterIP None <none> 7078/TCP,7079/TCP,4040/TCP 5m59s
</code></pre>
<p><strong>terminal - kubectl service</strong></p>
<pre><code>minikube service hello-flask
|-----------|-------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|---------------------------|
| default | hello-flask | http/5000 | http://192.168.49.2:32124 |
|-----------|-------------|-------------|---------------------------|
🎉 Opening service default/hello-flask in default browser...
sudo -E kubefwd svc
ERROR while opening hello-flask:5000 via browser:
"The connection was reset"
Consequent ERROR in kubefwd:
"ERRO[14:34:43] Runtime: an error occurred forwarding 5000 -> 5000: error forwarding port 5000 to pod bfa5f111e9f32f04a554975046539962734e4cf3fb05690d71697cedc49715a9, uid : exit status 1: 2021/04/20 12:34:43 socat[80737] E connect(5, AF=2 127.0.0.1:5000, 16): Connection refused"
</code></pre>
<p>I'm new to Kubernetes so I'm not sure this architecture is correct. Thanks!</p>
| <p>I don't think that's a good idea; maybe you should use client mode instead, so that the driver stays in your Flask pod and only the load (the executors) runs on separate pods.</p>
<p><a href="https://github.com/Wh1isper/sparglim#pyspark-app" rel="nofollow noreferrer">https://github.com/Wh1isper/sparglim#pyspark-app</a></p>
<pre><code>NAME READY STATUS RESTARTS AGE
sparglim-825bf989955f3593-exec-1 1/1 Running 0 53m
sparglim-825bf989955f3593-exec-2 1/1 Running 0 53m
sparglim-825bf989955f3593-exec-3 1/1 Running 0 53m
sparglim-app-8495f7b796-2h7sc 1/1 Running 0 53m
</code></pre>
<p>The sparglim-app acts as a client-mode driver, distributing tasks to the executors, so you can use Flask to call the SparkSession for programming and debugging.</p>
|
<p>I'm experimenting with the new Flink Kubernetes operator and I've been able to do pretty much everything that I need besides one thing: getting a JAR file from the S3 file system.</p>
<h2>Context</h2>
<p>I have a Flink application running in a EKS cluster in AWS and have all the information saved in a S3 buckets. Things like savepoints, checkpoints, high availability and JARs files are all stored there.</p>
<p>I've been able to save the savepoints, checkpoints and high availability information in the bucket, but when trying to get the JAR file from the same bucket I get the error:
<code>Could not find a file system implementation for scheme 's3'. The scheme is directly supported by Flink through the following plugins: flink-s3-fs-hadoop, flink-s3-fs-presto.</code></p>
<p>I was able to get to <a href="https://www.mail-archive.com/[email protected]/msg48176.html" rel="nofollow noreferrer">this thread</a>, but I wasn't able to get the resource fetcher to work correctly. Also the solution is not ideal and I was searching for a more direct approach.</p>
<h2>Deployment files</h2>
<p>Here's the files that I'm deploying in the cluster:</p>
<p>deployment.yml</p>
<pre><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
name: flink-deployment
spec:
podTemplate:
apiVersion: v1
kind: Pod
metadata:
name: pod-template
spec:
containers:
- name: flink-main-container
env:
- name: ENABLE_BUILT_IN_PLUGINS
value: flink-s3-fs-presto-1.15.3.jar;flink-s3-fs-hadoop-1.15.3.jar
volumeMounts:
- mountPath: /flink-data
name: flink-volume
volumes:
- name: flink-volume
hostPath:
path: /tmp
type: Directory
image: flink:1.15
flinkVersion: v1_15
flinkConfiguration:
state.checkpoints.dir: s3://kubernetes-operator/checkpoints
state.savepoints.dir: s3://kubernetes-operator/savepoints
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://kubernetes-operator/ha
jobManager:
resource:
memory: "2048m"
cpu: 1
taskManager:
resource:
memory: "2048m"
cpu: 1
serviceAccount: flink
</code></pre>
<p>session-job.yml</p>
<pre><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
name: flink-session-job
spec:
deploymentName: flink-deployment
job:
jarURI: s3://kubernetes-operator/savepoints/flink.jar
parallelism: 3
upgradeMode: savepoint
savepointTriggerNonce: 0
</code></pre>
<p>The Flink Kubernetes operator version that I'm using is 1.3.1</p>
<p>Is there anything that I'm missing or doing wrong?</p>
| <p>For anyone else looking this up in future, here's what I did that helped.</p>
<p>Depending on your version of the k8s operator, you may now have a <a href="https://github.com/apache/flink-kubernetes-operator/pull/609" rel="nofollow noreferrer"><code>postStart</code></a> helm chart value. As of this writing, that version is yet unreleased, but if you read the source code, all it does is inject a place in the operator deployment for you to run arbitrary code.</p>
<p>So I went in to edit the k8s deployment: <code>kubectl edit deployment/flink-kubernetes-operator</code>, and added it there. Note that you might need to set the plugin folder env var. Obviously, if you run <code>helm install</code> / <code>helm upgrade</code> again, this will get erased, but hopefully they release the new Helm chart with the <code>postStart</code> config soon.</p>
<pre class="lang-yaml prettyprint-override"><code># This is the values.yml file for the operator.
operatorPod:
env:
- name: FLINK_PLUGINS_DIR
value: /opt/flink/plugins
</code></pre>
<pre class="lang-yaml prettyprint-override"><code># This is the deployment definition of the operator. i.e. when you do kubectl edit.
# Look for the main container spec
# ... truncated
containers:
- command:
- /docker-entrypoint.sh
- operator
env:
- name: OPERATOR_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
# there might be more things in here...
image: ghcr.io/apache/flink-kubernetes-operator:be07be7
imagePullPolicy: IfNotPresent
# add this following section, making sure you choose the right version for you.
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- 'mkdir $FLINK_PLUGINS_DIR/flink-s3-fs-hadoop && curl -H "Accept: application/zip"
https://repo1.maven.org/maven2/org/apache/flink/flink-s3-fs-hadoop/1.15.4/flink-s3-fs-hadoop-1.15.4.jar
--output $FLINK_PLUGINS_DIR/flink-s3-fs-hadoop/flink-s3-fs-hadoop-1.15.4.jar'
# ... truncated
</code></pre>
<p>Then, in your FlinkDeployment yaml, all you have to do is find the main container and add these 2 env vars. Flink will automagically copy the plugin(s) (semicolon or ' ' separated) into the folder so you dont have to run anything custom.</p>
<pre class="lang-yaml prettyprint-override"><code># truncated
- name: flink-main-container
env:
- name: ENABLE_BUILT_IN_PLUGINS
value: flink-s3-fs-hadoop-1.15.4.jar
- name: "FLINK_PLUGINS_DIR"
value: "/opt/flink/plugins"
# truncated
</code></pre>
<p>Not entirely sure why this works with the Flink image but not on the Operator one. Also not sure why there aren't clearer docs around this.</p>
<p>Unlike the previous comments, you <em>should not</em> add <code>fs.s3a.aws.credentials.provider</code> to your config. The plugin has a default list of credentials providers that it will try based on your configuration environment. I use IAM roles, making the flink serviceaccount a trusted entity in MyCustomS3FlinkWritePolicy so I don't have to put AWS secrets in EKS.</p>
|
<blockquote>
<p>in kubernetes i have postgres as a stateful set and i have it defined as service postgres then i want to expose it as an ingress.
i have changed type from clusterip to NodePort of the service and i have created ingress for postgres like below</p>
</blockquote>
<pre><code>kind: Ingress
metadata:
name: postgres
namespace: postgres
annotations:
alb.ingress.kubernetes.io/group.name: eks-dev-test-postgres-group
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":5432}]'
alb.ingress.kubernetes.io/load-balancer-name: eks-dev-test-alb-postgres
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/tags: Environment=dev-test,Team=devops
spec:
ingressClassName: alb
rules:
- host: postgres.test.XXXXXX.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: postgres
port:
number: 5432
</code></pre>
<blockquote>
<p>i use alb ingress controller to create/manage ingress</p>
</blockquote>
<blockquote>
<p>here in AWS, i have created new load balancer name : eks-dev-test-alb-postgres
region : us-east-1
load balancer arn: arn:aws:elasticloadbalancing:us-east-1:XXXXX:loadbalancer/app/eks-dev-test-alb-postgres/XXXXX
and the security group inbound rules were updated to open 5432 to everyone.
This VPC is secured by an AWS VPN and I'm connected to the VPN.</p>
</blockquote>
<blockquote>
<p>i turned off ssl in pgadmin and attached the snip of error</p>
</blockquote>
<blockquote>
<p>When I'm trying to TEST CONNECTION in the universal database management tool (DBeaver), I'm facing "An error occurred while setting up the SSL connection".</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/fO07e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fO07e.png" alt="enter image description here" /></a></p>
<p>ANSWER:</p>
<blockquote>
<p>Service type should be : LoadBalancer</p>
</blockquote>
<blockquote>
<p>in service annotations : nlb and internal should add like below</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres-postgresql-external
labels:
app: postgresql
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
type: LoadBalancer
ports:
- name: postgresql
port: 5432
targetPort: postgresql
selector:
app: postgresql
release: "postgres"
loadBalancerSourceRanges:
- "172.16.0.0/16"
</code></pre>
<hr />
<p>REF link: <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/guide/service/nlb/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/guide/service/nlb/</a></p>
<blockquote>
<p>A Network Load Balancer (NLB) doesn't have a security group (SG).</p>
</blockquote>
<blockquote>
<p>The EC2 instance security group needs to allow inbound traffic on 5432.</p>
</blockquote>
<p>REF link: <a href="https://repost.aws/questions/QUuueXAi20QuisbkOhinnbzQ/aws-nlb-security-group" rel="nofollow noreferrer">https://repost.aws/questions/QUuueXAi20QuisbkOhinnbzQ/aws-nlb-security-group</a></p>
<blockquote>
<p>Check connectivity with telnet.</p>
</blockquote>
<blockquote>
<p>Then disable SSL in pgAdmin while creating the connection.</p>
</blockquote>
<blockquote>
<p>Test the connection</p>
</blockquote>
| <p>Ingress only works for HTTP traffic. You will not be able to expose TCP/5432 using an Ingress.</p>
<p>Consider creating a service of type LoadBalancer with the appropriate annotations by following the AWS Load Balancer Controller documentation:</p>
<p><a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/guide/service/nlb/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/guide/service/nlb/</a></p>
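<p>Once the Service is created, the NLB's DNS name shows up on the Service status. A quick sanity check from a machine connected to the VPN, before testing pgAdmin/DBeaver, could look like this — <code>nc</code> stands in for telnet here, and the service name is the one from the manifest in the question:</p>
<pre><code># DNS name AWS assigned to the NLB
kubectl get svc postgres-postgresql-external -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Check that TCP/5432 is reachable
nc -vz $NLB_HOSTNAME 5432
</code></pre>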
|
<p>I have an app with a React frontend and Flask backend that I can run successfully both locally and in Docker, and I'm now trying to deploy it to Kubernetes. The Flask part works fine, but when I access the React service and it sends requests back to the Flask service, I get an AxiosError saying "timeout of 300000ms exceeded".</p>
<p>The YAML specs for the React deployment look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: react-deployment
labels:
app: react
spec:
replicas: 3
selector:
matchLabels:
app: react
template:
metadata:
labels:
app: react
spec:
containers:
- name: react-app
image: host.docker.internal:5002/react-app:latest
env:
- name: REACT_APP_BACKEND_HOST
value: http://flask-service.default.svc.cluster.local:5000
ports:
- containerPort: 3000
imagePullSecrets:
- name: hello-world-cred
</code></pre>
<p>Where REACT_APP_BACKEND_HOST is the URL that the React deployment sends requests to.</p>
<p>I've checked the YAML file and I've confirmed that the URL for the requests is the correct one to access the backend from within the K8s cluster. I SSHed into one of the pods for the React deployment and curled the URL to make sure, and it responded normally. I can't figure out why it's not responding when the requests are sent by the React deployment.</p>
| <p>The error is resolved. I set up a reverse proxy server using NGINX to route my requests and deployed it to K8s along with the other layers. The app is working as intended now.</p>
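<p>For anyone hitting the same thing, here is a minimal sketch of this kind of setup, assuming NGINX also serves the React build — the <code>/api/</code> prefix, port, and paths are illustrative, and the upstream URL is the in-cluster Flask service from the question:</p>
<pre><code>server {
    listen 3000;

    # Serve the built React app
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # Proxy API calls to the Flask service inside the cluster
    location /api/ {
        proxy_pass http://flask-service.default.svc.cluster.local:5000/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
</code></pre>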
|
<p>For reasons I need to parse a certain value from the <code>kubectl get pods</code> response.</p>
<p>I need the value of <code>status.conditions..status</code> where <code>status.conditions..type=="ContainersReady"</code>.</p>
<p>But I'm struggling with the required <code>jsonpath</code> query.</p>
<p>I can retrieve the structure as follows:</p>
<p><code>-o jsonpath='{range @.status.conditions}{@}{end}{"\n"}' | jq</code>:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"lastProbeTime": null,
"lastTransitionTime": "2023-10-02T10:45:42Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2023-10-02T10:45:46Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2023-10-02T10:45:46Z",
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2023-10-02T10:45:35Z",
"status": "True",
"type": "PodScheduled"
}
]
</code></pre>
<p>But when I try something like <code>range .items[?(@.status.conditions..type=="ContainersReady")]</code>, I don't receive the expected element.</p>
<p>Question: Is it possible to use the <code>jsonpath</code> option in the intended way, and if so, what should the expression look like?</p>
| <pre><code>kubectl get pods -o=jsonpath='{.items[*].status.conditions[?(@.type=="ContainersReady")].status}'
</code></pre>
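<p>If you also need the pod name next to the status, the same filter works inside a range loop, printing one pod per line (add <code>-n</code> for a specific namespace as needed):</p>
<pre><code>kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="ContainersReady")].status}{"\n"}{end}'
</code></pre>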
|
<p>I created some Go code to list all resources in all namespaces.</p>
<pre><code>package main
import (
"context"
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
)
func main() {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
configOverrides := &clientcmd.ConfigOverrides{}
kubeconfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, configOverrides)
config, err := kubeconfig.ClientConfig()
if err != nil {
panic(err.Error())
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
dynClient, err := dynamic.NewForConfig(config)
if err != nil {
panic(err.Error())
}
discoveryClient := clientset.Discovery()
// Get the list of all API resources available
serverResources, err := discoveryClient.ServerPreferredResources()
if err != nil {
panic(err.Error())
}
for _, group := range serverResources {
for _, resource := range group.APIResources {
// Skip subresources like pod/logs, pod/status
if containsSlash(resource.Name) {
continue
}
// Parse the group's "group/version" string (the core group is just "v1") instead of guessing.
gv, parseErr := schema.ParseGroupVersion(group.GroupVersion)
if parseErr != nil {
continue
}
gvr := gv.WithResource(resource.Name)
// if resource.Name != "machines" {
// continue
// }
var list *unstructured.UnstructuredList
if resource.Namespaced {
list, err = dynClient.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
if err != nil {
fmt.Printf("..Error listing %s: %v. group %q version %q resource %q\n", resource.Name, err,
gvr.Group, gvr.Version, gvr.Resource)
continue
}
printResources(list, resource.Name, gvr)
} else {
list, err = dynClient.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
if err != nil {
fmt.Printf("..Error listing %s: %v\n", resource.Name, err)
continue
}
printResources(list, resource.Name, gvr)
}
}
}
}
func containsSlash(s string) bool {
// Subresources are named like "pods/log", so look for a slash anywhere in the name.
return strings.Contains(s, "/")
}
func printResources(list *unstructured.UnstructuredList, resourceName string, gvr schema.GroupVersionResource) {
fmt.Printf("Found %d resources of type %s. group %q version %q resource %q\n", len(list.Items), resourceName, gvr.Group, gvr.Version, gvr.Resource)
}
</code></pre>
<p>Unfortunately it takes more than 20 seconds in my small development cluster.</p>
<p>I guess I am doing something wrong.</p>
<p>Is there a way to reduce the number of API calls (or other ways to make it faster)?</p>
| <p>You can use goroutines to call the APIs asynchronously and print the results (not in order):</p>
<pre><code>var wg sync.WaitGroup // add "sync" to your imports

for _, group := range serverResources {
    wg.Add(1)
    // Run the inner loop for each API group in its own goroutine.
    // Pass group as an argument so every goroutine gets its own copy
    // (needed before Go 1.22, where the loop variable is reused).
    go func(group *metav1.APIResourceList) {
        defer wg.Done()
        for _, resource := range group.APIResources {
            // ...the body of the inner loop from the question goes here...
        }
    }(group)
}
wg.Wait()
</code></pre>
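<p>One caveat: with many API groups this fans out a lot of simultaneous List calls, and client-go's default client-side rate limit (QPS 5, burst 10 unless you raise them) then becomes the bottleneck. Raising <code>config.QPS</code> and <code>config.Burst</code> on the rest config before building the clients, and capping the concurrency, usually helps. A rough sketch of the same idea with a bounded number of workers — the limit of 5 is arbitrary:</p>
<pre><code>sem := make(chan struct{}, 5) // at most 5 groups processed concurrently (arbitrary limit)
var wg sync.WaitGroup

for _, group := range serverResources {
    wg.Add(1)
    go func(group *metav1.APIResourceList) {
        defer wg.Done()
        sem <- struct{}{}        // acquire a slot
        defer func() { <-sem }() // release it when done
        for _, resource := range group.APIResources {
            // ...inner loop body from the question...
        }
    }(group)
}
wg.Wait()
</code></pre>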
|