Question | Answer
---|---|
<p>I'm trying to deploy an HA Keycloak cluster (2 nodes) on Kubernetes (GKE). So far the cluster nodes (pods) fail to discover each other in every case, as far as I can deduce from the logs: the pods start and the service is up, but they fail to see the other nodes.</p>
<p>Components</p>
<ul>
<li>PostgreSQL DB deployment with a clusterIP service on the default port.</li>
<li>Keycloak Deployment of 2 nodes with the needed container ports (8080, 8443), a ClusterIP service, and a Service of type LoadBalancer to expose it to the internet</li>
</ul>
<p>Log snippet:</p>
<pre><code>INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000078: Starting JGroups channel ejb
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000094: Received new cluster view for channel ejb: [keycloak-567575d6f8-c5s42|0] (1) [keycloak-567575d6f8-c5s42]
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000094: Received new cluster view for channel ejb: [keycloak-567575d6f8-c5s42|0] (1) [keycloak-567575d6f8-c5s42]
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-3) ISPN000094: Received new cluster view for channel ejb: [keycloak-567575d6f8-c5s42|0] (1) [keycloak-567575d6f8-c5s42]
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000079: Channel ejb local address is keycloak-567575d6f8-c5s42, physical addresses are [127.0.0.1:55200]
.
.
.
INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Keycloak 15.0.2 (WildFly Core 15.0.1.Final) started in 67547ms - Started 692 of 978 services (686 services are lazy, passive or on-demand)
INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
</code></pre>
<p><strong>As we can see in the logs above, the node sees only itself in the cluster view (a single container/pod ID).</strong></p>
<h2>Trying KUBE_PING protocol</h2>
<p>I tried using the <strong>kubernetes.KUBE_PING</strong> protocol for discovery, but it didn't work: the call to the Kubernetes API failed with a <strong>403 Authorization error</strong> in the logs (part of it shown below):</p>
<pre><code>Server returned HTTP response code: 403 for URL: https://[SERVER_IP]:443/api/v1/namespaces/default/pods
</code></pre>
<p>At this point I was able to log in to the portal and make changes, but it was not yet an HA cluster, since changes were not replicated and the session was not preserved. In other words, if I deleted the pod I was using, I was redirected to the other pod with a new session (as if it were a separate node).</p>
<h2>Trying DNS_PING protocol</h2>
<p>When I tried DNS_PING things were different: I had no Kubernetes API issues, but I was not able to log in.</p>
<p>In detail, I could reach the login page normally, but when I entered my credentials and tried to log in, the page started loading and then sent me back to the login page, with no related logs in the pods.</p>
<p>Below are some of the references I resorted to over the past couple of days:</p>
<ul>
<li><a href="https://github.com/keycloak/keycloak-containers/blob/main/server/README.md#openshift-example-with-dnsdns_ping" rel="nofollow noreferrer">https://github.com/keycloak/keycloak-containers/blob/main/server/README.md#openshift-example-with-dnsdns_ping</a></li>
<li><a href="https://github.com/keycloak/keycloak-containers/blob/main/server/README.md#clustering" rel="nofollow noreferrer">https://github.com/keycloak/keycloak-containers/blob/main/server/README.md#clustering</a></li>
<li><a href="https://www.youtube.com/watch?v=g8LVIr8KKSA" rel="nofollow noreferrer">https://www.youtube.com/watch?v=g8LVIr8KKSA</a></li>
<li><a href="https://www.keycloak.org/2019/05/keycloak-cluster-setup.html" rel="nofollow noreferrer">https://www.keycloak.org/2019/05/keycloak-cluster-setup.html</a></li>
<li><a href="https://www.keycloak.org/docs/latest/server_installation/#creating-a-keycloak-custom-resource-on-kubernetes" rel="nofollow noreferrer">https://www.keycloak.org/docs/latest/server_installation/#creating-a-keycloak-custom-resource-on-kubernetes</a></li>
</ul>
<h2>My Yaml Manifest files</h2>
<p><strong>Postgresql Deployment</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              value: "postgres"
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
</code></pre>
<p><strong>Keycloak HA cluster Deployment</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: jboss/keycloak
          env:
            - name: KEYCLOAK_USER
              value: admin
            - name: KEYCLOAK_PASSWORD
              value: admin123
            - name: DB_VENDOR
              value: POSTGRES
            - name: DB_ADDR
              value: "postgres"
            - name: DB_PORT
              value: "5432"
            - name: DB_USER
              value: "postgres"
            - name: DB_PASSWORD
              value: "postgres"
            - name: DB_SCHEMA
              value: "public"
            - name: DB_DATABASE
              value: "keycloak"
            # - name: JGROUPS_DISCOVERY_PROTOCOL
            #   value: kubernetes.KUBE_PING
            # - name: JGROUPS_DISCOVERY_PROPERTIES
            #   value: dump_requests=true,port_range=0,namespace=default
            #   value: port_range=0,dump_requests=true
            - name: JGROUPS_DISCOVERY_PROTOCOL
              value: dns.DNS_PING
            - name: JGROUPS_DISCOVERY_PROPERTIES
              value: "dns_query=keycloak"
            - name: CACHE_OWNERS_COUNT
              value: '2'
            - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
              value: '2'
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
  selector:
    app: keycloak
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak-np
  labels:
    app: keycloak
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
  selector:
    app: keycloak
</code></pre>
<h2>IMPORTANT NOTE</h2>
<ul>
<li>I tried both protocols with and without the database setup.</li>
<li>The above YAML includes all the discovery protocol combinations I tried, one at a time (the commented-out ones).</li>
</ul>
| <h3>For Keycloak 17 and newer</h3>
<p>By default, those versions use DNS_PING as the discovery mechanism for JGroups (the underlying cluster mechanism) but you still need to activate it.</p>
<p>You'll need:</p>
<ul>
<li>a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless service</a> pointing to your Keycloak pods (a headless service is just a normal service but with <code>clusterIP: None</code>)</li>
<li>the env vars <code>KC_CACHE_STACK=kubernetes</code> (to activate the Kubernetes JGroups configuration) and <code>JAVA_OPTS_APPEND=-Djgroups.dns.query=<name-of-headless-service></code> (to tell it how to find the other Keycloak pods).</li>
</ul>
<p>That way, when starting up, JGroups will issue a DNS query for the headless service (for example <code>keycloak-headless.my_namespace.svc.cluster.local</code>) and the response will contain the IPs of all pods associated with the headless service.</p>
<p>JGroups will then contact every IP on its communication port and establish the cluster.</p>
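<p>A minimal sketch of what that could look like (untested; the service name, namespace and image tag are placeholders, and the Deployment is abbreviated to the relevant parts):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: keycloak-headless
spec:
  clusterIP: None            # headless: DNS returns the pod IPs directly
  selector:
    app: keycloak
  ports:
    - name: jgroups
      port: 7800             # JGroups transport port used by the kubernetes cache stack
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  # ...
  template:
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:latest   # Quarkus-based image (17+)
          env:
            - name: KC_CACHE_STACK
              value: kubernetes
            - name: JAVA_OPTS_APPEND
              value: "-Djgroups.dns.query=keycloak-headless.my_namespace.svc.cluster.local"
</code></pre>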
<hr />
<p><strong>UPDATE</strong> 2022-08-01: The configuration below is for the legacy version of Keycloak (versions up to 16). From 17 onwards, Keycloak migrated to the Quarkus distribution and the configuration is different, as described above.</p>
<h3>For Keycloak up to 16</h3>
<p>The way KUBE_PING works is similar to running <code>kubectl get pods</code> inside one Keycloak pod to find the other Keycloak pods' IPs and then trying to connect to them one by one. However, Keycloak does this by querying the Kubernetes API directly instead of using <code>kubectl</code>.</p>
<p>To access the Kubernetes API, Keycloak needs credentials in the form of an access token. You can pass your token directly, but this is not very secure or convenient.</p>
<p>Kubernetes has a built-in mechanism for injecting a token into a pod (or the software running inside that pod) to allow it to query the API. This is done by creating a service account, giving it the necessary permissions through a RoleBinding, and setting that account in the pod configuration.</p>
<p>The token is then mounted as a file at a known location, which is hardcoded and expected by all Kubernetes clients. When the client wants to call the API, it looks for the token at that location.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">You can get a deeper look at the Service Account mechanism in the documentation</a>.</p>
<p>In some situations, you may not have the necessary permissions to create RoleBindings. In this case, you can ask an administrator to create the service account and RoleBinding for you or pass your own user's token (if you have the necessary permissions) through the SA_TOKEN_FILE environment variable.</p>
<p>You can create the file using a secret or configmap, mount it to the pod, and set SA_TOKEN_FILE to the file location. Note that this method is specific to the JGroups library (used by Keycloak) and <a href="https://github.com/jgroups-extras/jgroups-kubernetes/blob/master/src/main/java/org/jgroups/protocols/kubernetes/KUBE_PING.java" rel="nofollow noreferrer">the documentation is here</a>.</p>
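<p>A rough sketch of that variant (untested; the secret name, key and mount path are arbitrary):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: keycloak-api-token
stringData:
  token: "<your token here>"
---
# Relevant fragment of the Keycloak pod spec
spec:
  containers:
    - name: keycloak
      env:
        - name: SA_TOKEN_FILE
          value: /etc/keycloak-api/token
      volumeMounts:
        - name: api-token
          mountPath: /etc/keycloak-api
          readOnly: true
  volumes:
    - name: api-token
      secret:
        secretName: keycloak-api-token
</code></pre>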
<hr />
<p>If you do have permissions to create service accounts and RoleBindings in the cluster:</p>
<p>An example (not tested):</p>
<pre class="lang-bash prettyprint-override"><code>export TARGET_NAMESPACE=default

# convenient method to create a service account
kubectl create serviceaccount keycloak-kubeping-service-account -n $TARGET_NAMESPACE

# No convenient method to create Role and RoleBindings.
# Needed to explicitly define them.
cat <<EOF | kubectl apply -f -
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: keycloak-kubeping-pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: keycloak-kubeping-api-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: keycloak-kubeping-pod-reader
subjects:
- kind: ServiceAccount
  name: keycloak-kubeping-service-account
  namespace: $TARGET_NAMESPACE
EOF
</code></pre>
<p>On the deployment, you set the serviceAccount:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  template:
    spec:
      serviceAccount: keycloak-kubeping-service-account
      serviceAccountName: keycloak-kubeping-service-account
      containers:
        - name: keycloak
          image: jboss/keycloak
          env:
            # ...
            - name: JGROUPS_DISCOVERY_PROTOCOL
              value: kubernetes.KUBE_PING
            - name: JGROUPS_DISCOVERY_PROPERTIES
              value: dump_requests=true
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            # ...
</code></pre>
<p><code>dump_requests=true</code> will help you debug Kubernetes requests; better to set it to <code>false</code> in production. You can use <code>namespace=<your-namespace></code> instead of <code>KUBERNETES_NAMESPACE</code>, but the latter is a handy way for the pod to autodetect the namespace it's running in.</p>
<p>Please note that KUBE_PING will find all pods in the namespace, not only keycloak pods, and will try to connect to all of them. Of course, if your other pods don't care about that, it's OK.</p>
|
<p>I'm trying to install Kibana with a plugin via the <code>initContainers</code> functionality, but the plugin doesn't seem to end up in the created pod.</p>
<p>The pod gets created and Kibana works perfectly, but the plugin is not installed using the yaml below.</p>
<p><a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-init-containers-plugin-downloads.html" rel="nofollow noreferrer">initContainers Documentation</a></p>
<pre><code>apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
  podTemplate:
    spec:
      initContainers:
        - name: install-plugins
          command:
            - sh
            - -c
            - |
              bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
</code></pre>
| <p>I got Kibana working with plugins by using a custom container image.</p>
<p>Dockerfile:</p>
<pre><code>FROM docker.elastic.co/kibana/kibana:7.11.2
RUN /usr/share/kibana/bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
RUN /usr/share/kibana/bin/kibana --optimize
</code></pre>
<p>YAML:</p>
<pre><code>apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  image: my-container-path/kibana-with-plugins:7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
</code></pre>
|
<p>I am learning StackExchange.Redis and Kubernetes, so I made a simple .NET Core app that reads a key/value from a Redis master + 2 slaves deployed on Kubernetes (so everything, Redis and my app, runs inside containers).</p>
<p>To connect to Redis I use the syntax suggested in the doc: </p>
<pre><code>ConnectionMultiplexer.Connect("server1:6379,server2:6379,server3:6379");
</code></pre>
<p>However, if I monitor the 3 containers with redis-cli MONITOR, the requests are always processed by the master; the 2 slaves do nothing, so there is no load balancing. </p>
<p>I have also tried connecting to a Kubernetes load balancer service which exposes the 3 Redis container endpoints. The result is that when I start the .NET app the request is processed randomly by one of the 3 Redis nodes, but after that always by the same node. I have to restart the .NET app for it to query another node, and subsequent queries then always go to that node.</p>
<p>What is the proper way to read key/values in a load balanced way using StackExchange.Redis with a master/slave Redis setup?</p>
<p>Thank you</p>
| <p>SE.Redis has a <code>CommandFlags</code> parameter that is optional on every command. There are some useful and relevant options here:</p>
<ul>
<li><code>DemandPrimary</code></li>
<li><code>PreferPrimary</code></li>
<li><code>DemandReplica</code></li>
<li><code>PreferReplica</code></li>
</ul>
<p>The default behaviour is <code>PreferPrimary</code>; write operations bump that to <code>DemandPrimary</code>, and there are a <em>very few</em> commands that actively prefer replicas (keyspace iteration, etc).</p>
<p>So: if you aren't specifying <code>CommandFlags</code>, then right now you're probably using the default: <code>PreferPrimary</code>. Assuming a primary exists and is reachable, then: it will use the primary. And there can only be one primary, so: it'll use one server.</p>
<p>A cheap option for today would be to add <code>PreferReplica</code> as a <code>CommandFlags</code> option on your high-volume read operations. This will push the work to the replicas if they can be resolved - or if no replicas can be found: the primary. Since there can be multiple replicas, it applies a basic rotation-based load-balancing scheme, and you should start to see load on multiple replicas.</p>
<p>If you want to spread load over all nodes including primaries and replicas... then I'll need to add new code for that. So if you want that, please log it as an issue on the github repo.</p>
|
<p><strong>UPDATE:</strong></p>
<p>With @Tanktalus's answer, I realized that it was the leftmost <code>kubectl</code> command whose output was buffered. </p>
<pre class="lang-sh prettyprint-override"><code># will hang forever, because RHS pipe is broken, and LHS pipe need to send
# the output to the pipe to realize the broken pipe, but as the buffer is
# never filled, it's never broken
kubectl logs -f pod -n NAMESPACE | grep -q "Indicator"
# put LHS to the background, because I don't care if it hang, I just need the log.
(kubectl logs -f pod -n NAMESPACE &) | grep -q "Indicator"
</code></pre>
<p>But I have a new problem: the following now hangs forever:<br>
<code>(kubectl logs -f pod -n NAMESPACE &)| tee log >(grep -q "Indicator")</code></p>
<hr>
<p><strong>ORIGINAL QUESTION:</strong><br>
First of all, this is not a duplicate of other similar questions; I have read them all. The subtle difference is that my streamed log goes quiet right after the indicator string I am trying to grep.</p>
<p>I have a continuously streamed log output from a Kubernetes pod. The indicator string "Indicator" appears at the end of the log generator application's output, after which the log generator runs <code>sleep infinity</code>. So the log is still streamed, but gives no new output.</p>
<p>I am trying to use a pipe <code>|</code> to redirect my Kubernetes streamed log, then grep each line of the log until I find the "Indicator", and then exit (immediately). The commands I have tried are like:</p>
<pre><code># none of them worked, they all show the Indicator line, and then hangs forever.
kubectl logs -f pod -n NAMESPACE | tee test.log >(grep -q "Indicator")
stdbuf -o 0 kubectl logs -f pod -n NAMESPACE | tee test.log >(grep -m1 "Indicator")
stdbuf -o 0 kubectl logs -f pod -n NAMESPACE | tee test.log >(grep -q --line-buffered "Indicator")
stdbuf -o 0 kubectl logs -f pod -n NAMESPACE | grep -q --line-buffered "Indicator"
</code></pre>
<p>But after the "Indicator" there will be only one more line of log ("+ Sleep infinity"). I guess the output buffer at the leftmost end of the pipe is not full, and thus the output is not passed to grep?</p>
<p>Is there any way to solve this issue ?</p>
| <p>I suspect it's because <code>kubectl</code> hasn't exited that the shell doesn't continue on. If you look at the <code>ps</code> output, you'll notice that <code>grep -m1 ...</code> does actually exit, and doesn't exist anymore, but the rest of the pipe still exists.</p>
<p>So I suspect you'll need to invert this. In perl, for example, I would use <code>open</code> to open a pipe to kubectl, read the output until I found what I wanted, kill the child, and exit. In C, the same thing with <code>popen</code>. I'm not sure if bash gives quite that level of control.</p>
<p>For example:</p>
<pre><code> perl -E 'my $pid = open my $fh, "-|", qw(perl -E), q($|++; say for 1..10; say "BOOM"; say "Sleep Infinity"; sleep 50) or die "Cannot run: $!"; while(<$fh>) { if (/BOOM/) { say; kill "INT", $pid; exit 0 } }'
</code></pre>
<p>You'll have to replace the stuff in the <code>open</code> after <code>"-|"</code> with your own command, and the <code>if (/BOOM/)</code> with your own regex, but otherwise it should work.</p>
|
<p>I have a Ansible <code>group_vars</code> directory with the following file within it:</p>
<pre><code>$ cat inventory/group_vars/env1
...
...
ldap_config: !vault |
$ANSIBLE_VAULT;1.1;AES256
31636161623166323039356163363432336566356165633232643932623133643764343134613064
6563346430393264643432636434356334313065653537300a353431376264333463333238383833
31633664303532356635303336383361386165613431346565373239643431303235323132633331
3561343765383538340a373436653232326632316133623935333739323165303532353830386532
39616232633436333238396139323631633966333635393431373565643339313031393031313836
61306163333539616264353163353535366537356662333833653634393963663838303230386362
31396431636630393439306663313762313531633130326633383164393938363165333866626438
...
...
</code></pre>
<p>This Ansible encrypted string has a Kubernetes secret encapsulated within it. A base64 blob that looks something like this:</p>
<pre><code>IyMKIyBIb3N0IERhdGFiYXNlCiMKIyBsb2NhbGhvc3QgaXMgdXNlZCB0byBjb25maWd1cmUgdGhlIGxvb3BiYWNrIGludGVyZmFjZQojIHdoZW4gdGhlIHN5c3RlbSBpcyBib290aW5nLiAgRG8gbm90IGNoYW5nZSB0aGlzIGVudHJ5LgojIwoxMjcuMC4wLjEJbG9jYWxob3N0CjI1NS4yNTUuMjU1LjI1NQlicm9hZGNhc3Rob3N0Cjo6MSAgICAgICAgICAgICBsb2NhbGhvc3QKIyBBZGRlZCBieSBEb2NrZXIgRGVza3RvcAojIFRvIGFsbG93IHRoZSBzYW1lIGt1YmUgY29udGV4dCB0byB3b3JrIG9uIHRoZSBob3N0IGFuZCB0aGUgY29udGFpbmVyOgoxMjcuMC4wLjEga3ViZXJuZXRlcy5kb2NrZXIuaW50ZXJuYWwKIyBFbmQgb2Ygc2VjdGlvbgo=
</code></pre>
<p>How can I decrypt this in a single CLI?</p>
| <p>We can use an Ansible adhoc command to retrieve the variable of interest, <code>ldap_config</code>. To start we're going to use this adhoc to retrieve the Ansible encrypted vault string:</p>
<pre><code>$ ansible -i "localhost," all \
-m debug \
-a 'msg="{{ ldap_config }}"' \
--vault-password-file=~/.vault_pass.txt \
-e@inventory/group_vars/env1
localhost | SUCCESS => {
"msg": "ABCD......."
</code></pre>
<p>Make note that we're: </p>
<ul>
<li>using the <code>debug</code> module and having it print the variable, <code>msg={{ ldap_config }}</code></li>
<li>giving <code>ansible</code> the path to the vault password file so it can decrypt encrypted strings</li>
<li>using the notation <code>-e@< ...path to file...></code> to pass the file with the encrypted vault variables</li>
</ul>
<p>Now we can use Jinja2 filters to do the rest of the parsing:</p>
<pre><code>$ ansible -i "localhost," all \
-m debug \
-a 'msg="{{ ldap_config | b64decode | from_yaml }}"' \
--vault-password-file=~/.vault_pass.txt \
-e@inventory/group_vars/env1
localhost | SUCCESS => {
"msg": {
"apiVersion": "v1",
"bindDN": "uid=readonly,cn=users,cn=accounts,dc=mydom,dc=com",
"bindPassword": "my secret password to ldap",
"ca": "",
"insecure": true,
"kind": "LDAPSyncConfig",
"rfc2307": {
"groupMembershipAttributes": [
"member"
],
"groupNameAttributes": [
"cn"
],
"groupUIDAttribute": "dn",
"groupsQuery": {
"baseDN": "cn=groups,cn=accounts,dc=mydom,dc=com",
"derefAliases": "never",
"filter": "(objectclass=groupOfNames)",
"scope": "sub"
},
"tolerateMemberNotFoundErrors": false,
"tolerateMemberOutOfScopeErrors": false,
"userNameAttributes": [
"uid"
],
"userUIDAttribute": "dn",
"usersQuery": {
"baseDN": "cn=users,cn=accounts,dc=mydom,dc=com",
"derefAliases": "never",
"scope": "sub"
}
},
"url": "ldap://192.168.1.10:389"
}
}
</code></pre>
<p><strong>NOTE:</strong> The above section <code>-a 'msg="{{ ldap_config | b64decode | from_yaml }}"</code> is what's doing the heavy lifting in terms of converting from Base64 to YAML.</p>
<h3>References</h3>
<ul>
<li><a href="https://stackoverflow.com/questions/37652464/how-to-run-ansible-without-hosts-file">How to run Ansible without hosts file</a></li>
<li><a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#filters-for-formatting-data" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#filters-for-formatting-data</a></li>
<li><a href="https://stackoverflow.com/questions/39047657/base64-decode-string-in-jinja">Base64 Decode String in jinja</a></li>
<li><a href="https://stackoverflow.com/questions/43467180/how-to-decrypt-string-with-ansible-vault-2-3-0">How to decrypt string with ansible-vault 2.3.0</a></li>
</ul>
|
<p>I'm deploying a simple app in Kubernetes (on AKS) which is sat behind an Ingress using Nginx, deployed using the Nginx helm chart. I have a problem that for some reason Nginx doesn't seem to be passing on the full URL to the backend service. </p>
<p>For example, my Ingress is set up with the URL of <a href="http://app.client.com" rel="nofollow noreferrer">http://app.client.com</a> and a path of /app1, so going to <a href="http://app.client.com/app1" rel="nofollow noreferrer">http://app.client.com/app1</a> works fine. However, if I try to go to <a href="http://app.client.com/app1/service1" rel="nofollow noreferrer">http://app.client.com/app1/service1</a> I just end up at <a href="http://app.client.com/app1" rel="nofollow noreferrer">http://app.client.com/app1</a>; it seems to be stripping everything after the path.</p>
<p>My Ingress looks like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: "2019-04-03T12:44:22Z"
  generation: 1
  labels:
    chart: app-1.1
    component: app
    hostName: app.client.com
    release: app
  name: app-ingress
  namespace: default
  resourceVersion: "1789269"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/app-ingress
  uid: 34bb1a1d-560e-11e9-bd46-9a03420914b9
spec:
  rules:
  - host: app.client.com
    http:
      paths:
      - backend:
          serviceName: app-service
          servicePort: 8080
        path: /app1
  tls:
  - hosts:
    - app.client.com
    secretName: app-prod
status:
  loadBalancer:
    ingress:
    - {}
</code></pre>
<p>If I port forward to the service and hit that directly it works.</p>
| <p>So I found the answer to this. It seems that as of nginx-ingress v0.22.0 you are required to use capture groups to capture any substrings in the request URI. Prior to 0.22.0, using just <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> worked for any substring. Now it does not. I needed to amend my Ingress to use this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  creationTimestamp: "2019-04-03T12:44:22Z"
  generation: 1
  labels:
    chart: app-1.1
    component: app
    hostName: app.client.com
    release: app
  name: app-ingress
  namespace: default
  resourceVersion: "1789269"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/app-ingress
  uid: 34bb1a1d-560e-11e9-bd46-9a03420914b9
spec:
  rules:
  - host: app.client.com
    http:
      paths:
      - backend:
          serviceName: app-service
          servicePort: 8080
        path: /app1/?(.*)
  tls:
  - hosts:
    - app.client.com
    secretName: app-prod
status:
  loadBalancer:
    ingress:
    - {}
</code></pre>
|
<p>I have noticed that when I create and mount a config map that contains some text files, the container will see those files as symlinks to <code>../data/myfile.txt</code> .</p>
<p>For example, if my config map is named tc-configs and contains 2 XML files named stripe1.xml and stripe2.xml, and I mount this config map to /configs in my container, I will have, in my container:</p>
<pre><code>bash-4.4# ls -al /configs/
total 12
drwxrwxrwx 3 root root 4096 Jun 4 14:47 .
drwxr-xr-x 1 root root 4096 Jun 4 14:47 ..
drwxr-xr-x 2 root root 4096 Jun 4 14:47 ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 31 Jun 4 14:47 ..data -> ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe1.xml -> ..data/stripe1.xml
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe2.xml -> ..data/stripe2.xml
</code></pre>
<p>I guess Kubernetes requires those symlinks and the ..data and ..timestamp/ folders, but I know some applications that can fail to start up if they see unexpected files or folders.</p>
<p>Is there a way to tell Kubernetes not to generate all those symlinks and directly mount the files ?</p>
| <p>I think this solution is satisfactory: specifying the exact file path in <code>mountPath</code> gets rid of the symlinks to <code>..data and ..2018_06_04_19_31_41.860238952</code></p>
<p>So if I apply a manifest such as this: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
    - name: php
      image: php:7.0-apache
      volumeMounts:
        - mountPath: /var/www/html/users.xml
          name: site-data
          subPath: users.xml
  volumes:
    - name: site-data
      configMap:
        name: users
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: users
data:
  users.xml: |
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <users>
    </users>
</code></pre>
<p>Because I'm explicitly making use of <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">subPath</a>, which is not part of the "auto update magic" of ConfigMaps, I won't see any more symlinks: </p>
<pre><code>$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxr-xr-x 1 www-data www-data 4096 Jun 4 19:18 .
drwxr-xr-x 1 root root 4096 Jun 4 17:58 ..
-rw-r--r-- 1 root root 73 Jun 4 19:18 users.xml
</code></pre>
<p>Be careful to not forget <code>subPath</code>, otherwise users.xml will be a directory !</p>
<p>Back to my initial manifest :</p>
<pre><code>spec:
  containers:
    - name: php
      image: php:7.0-apache
      volumeMounts:
        - mountPath: /var/www/html
          name: site-data
  volumes:
    - name: site-data
      configMap:
        name: users
</code></pre>
<p>I'll see those symlinks coming back :</p>
<pre><code>$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxrwxrwx 3 root root 4096 Jun 4 19:31 .
drwxr-xr-x 3 root root 4096 Jun 4 17:58 ..
drwxr-xr-x 2 root root 4096 Jun 4 19:31 ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 31 Jun 4 19:31 ..data -> ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 16 Jun 4 19:31 users.xml -> ..data/users.xml
</code></pre>
<p><strong>Many thanks to <a href="https://stackoverflow.com/users/46842/psycotica0">psycotica0</a> on <a href="https://k8scanada.slack.com/" rel="noreferrer">K8s Canada slack</a> for putting me on the right track with <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">subpath</a> (they are quickly mentioned in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="noreferrer">configmap documentation</a>)</strong></p>
|
<p>There are a set of proxy environment variables (http_proxy, HTTP_PROXY, https_proxy, HTTPS_PROXY, no_proxy, NO_PROXY) in my OpenShift pods that I did not explicitly include and I do not want them there.</p>
<p>For example</p>
<pre><code>$ oc run netshoot -it --image docker-registry.default.svc:5000/default/netshoot -- bash
If you don't see a command prompt, try pressing enter.
bash-4.4$ env | grep -i proxy | sort
HTTPS_PROXY=http://xx.xx.xx.xx:8081/
HTTP_PROXY=http://xx.xx.xx.xx:8081/
NO_PROXY=.cluster.local,.mydomain.nz,.localdomain.com,.svc,10.xx.xx.xx,127.0.0.1,172.30.0.1,app01.mydomain.nz,app02.mydomain.nz,inf01.mydomain.nz,inf02.mydomain.nz,mst01.mydomain.nz,localaddress,localhost,.edpay.nz
http_proxy=xx.xx.xx.xx:8081
https_proxy=xx.xx.xx.xx:8081
no_proxy=.cluster.local,.mydomain.nz,.localdomain.com,.svc,10.xx.xx.xx,127.0.0.1,172.30.0.1,app01.mydomain.nz,app02.mydomain.nz,inf01.mydomain.nz,inf02.mydomain.nz,mst01.mydomain.nz,localaddress,localhost,.edpay.nz
</code></pre>
<p>I have yet to track down how those env vars are getting into my pods.</p>
<p>I am not <a href="https://docs.openshift.com/container-platform/3.9/install_config/http_proxies.html#setting-environment-variables-in-pods" rel="nofollow noreferrer">Setting Proxy Environment Variables in Pods</a>.</p>
<pre><code>$ oc get pod netshoot-1-hjp2p -o yaml | grep -A 10 env
[no output]
$ oc get deploymentconfig netshoot -o yaml | grep -A 10 env
[no output]
</code></pre>
<p>I am not <a href="https://docs.openshift.com/container-platform/3.9/dev_guide/pod_preset.html#dev-guide-pod-presets-create" rel="nofollow noreferrer">Creating Pod Presets</a></p>
<pre><code>$ oc get podpresets --all-namespaces
No resources found.
</code></pre>
<p>Docker on my master/app nodes have no proxy env vars.</p>
<pre><code>$ grep -i proxy /etc/sysconfig/docker
[no output]
</code></pre>
<p>Kubelet (openshift-node) on my master/app nodes have no proxy env vars.</p>
<pre><code>$ grep -i proxy /etc/sysconfig/atomic-openshift-node
[no output]
</code></pre>
<p>Master components on my master nodes have no proxy env vars.</p>
<pre><code>$ grep -i proxy /etc/sysconfig/atomic-openshift-master
[no output]
$ grep -i proxy /etc/sysconfig/atomic-openshift-master-api
[no output]
$ grep -i proxy /etc/sysconfig/atomic-openshift-master-controllers
[no output]
</code></pre>
<p>Contents of sysconfig files (not including comments)</p>
<pre><code>$ cat /etc/sysconfig/atomic-openshift-master
OPTIONS="--loglevel=0"
CONFIG_FILE=/etc/origin/master/master-config.yaml
$ cat /etc/sysconfig/atomic-openshift-node
OPTIONS=--loglevel=2
CONFIG_FILE=/etc/origin/node/node-config.yaml
IMAGE_VERSION=v3.9.51
$ cat /etc/sysconfig/docker
OPTIONS=' --selinux-enabled --signature-verification=False --insecure-registry 172.30.0.0/16'
if [ -z "${DOCKER_CERT_PATH}" ]; then
DOCKER_CERT_PATH=/etc/docker
fi
ADD_REGISTRY='--add-registry registry.access.redhat.com'
$ cat /etc/sysconfig/atomic-openshift-master-api
OPTIONS=--loglevel=2 --listen=https://0.0.0.0:8443 --master=https://mst01.mydomain.nz:8443
CONFIG_FILE=/etc/origin/master/master-config.yaml
OPENSHIFT_DEFAULT_REGISTRY=docker-registry.default.svc:5000
$ cat /etc/sysconfig/atomic-openshift-master-controllers
OPTIONS=--loglevel=2 --listen=https://0.0.0.0:8444
CONFIG_FILE=/etc/origin/master/master-config.yaml
OPENSHIFT_DEFAULT_REGISTRY=docker-registry.default.svc:5000
</code></pre>
<p>I'm at a loss as to how those proxy env vars are getting into my pods. </p>
<p>Versions:</p>
<ul>
<li>OpenShift v3.9.51</li>
</ul>
| <p>We finally figured this out. We had <code>openshift_http_proxy</code>, <code>openshift_https_proxy</code>, and <code>openshift_no_proxy</code> set in our installer inventory variables as per <a href="https://docs.openshift.com/container-platform/3.9/install_config/install/advanced_install.html#advanced-install-configuring-global-proxy" rel="nofollow noreferrer">Configuring Global Proxy Options</a>.</p>
<p>We knew that this meant it also implicitly set the <code>openshift_builddefaults_http_proxy</code>, <code>openshift_builddefaults_https_proxy</code>, and <code>openshift_builddefaults_no_proxy</code> installer inventory variables and according to the docs</p>
<blockquote>
<p>This variable defines the HTTP_PROXY environment variable inserted into builds using the BuildDefaults admission controller. If you do not define this parameter but define the openshift_http_proxy parameter, the openshift_http_proxy value is used. Set the openshift_builddefaults_http_proxy value to False to disable default http proxy for builds regardless of the openshift_http_proxy value.</p>
</blockquote>
<p>What we did <em>not</em> know (and I would argue is not at all clear from the description above), is that setting those installer inventory variables sets the <code>HTTP_PROXY</code>, <code>HTTPS_PROXY</code>, and <code>NO_PROXY</code> env vars permanently within your images.</p>
<p>It's painfully apparent now when we look back on the build logs and see lines like this</p>
<pre><code>...
Step 2/19 : ENV "HTTP_PROXY" "xxx.xxx.xxx.xxx" "HTTPS_PROXY" "xxx.xxx.xxx.xxx" "NO_PROXY" "127.0.0.1,localhost,172.30.0.1,.svc,.cluster.local" "http_proxy" "xxx.xxx.xxx.xxx" "https_proxy" "xxx.xxx.xxx.xxx" "no_proxy" "127.0.0.1,localhost,172.30.0.1,.svc,.cluster.local"
...
</code></pre>
<p>We couldn't exclude proxy env vars from the pods because those env vars were set at build time.</p>
|
<p>In some cases, we have Services that get no response when trying to access them. Eg Chrome shows ERR_EMPTY_RESPONSE, and occasionally we get other errors as well, like 408, which I'm fairly sure is returned from the ELB, not our application itself.</p>
<p>After a long involved investigation, including ssh'ing into the nodes themselves, experimenting with load balancers and more, we are still unsure at which layer the problem actually exists: either in Kubernetes itself, or in the backing services from Amazon EKS (ELB or otherwise)</p>
<ul>
<li>It seems that only the instance (data) port of the node has the issue. The problem seems to come and go intermittently, which makes us believe it is not something obvious in our Kubernetes manifests or Docker configurations, but rather something else in the underlying infrastructure. Sometimes the service & pod will be working, but we come back in the morning and it will be broken. This leads us to believe that the issue stems from a redistribution of the pods in Kubernetes, possibly triggered by something in AWS (load balancer changing, auto-scaling group changes, etc.) or something in Kubernetes itself when it redistributes pods for other reasons.</li>
<li>In all cases we have seen, the health check port continues to work without issue, which is why Kubernetes and AWS both think that everything is OK and do not report any failures.</li>
<li>We have seen some pods on a node work, while others do not on that same node.</li>
<li>We have verified kube-proxy is running and that the iptables-save output is the "same" between two pods that are working. (the same meaning that everything that is not unique, like ip addresses and ports are the same, and consistent with what they should be relative to each other). (we used these instructions to help with these instructions: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-the-kube-proxy-working" rel="noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-the-kube-proxy-working</a></li>
<li>From ssh on the node itself, for a pod that is failing, we CAN access the pod (ie the application itself) via all possible ip/ports that are expected.
<ul>
<li>the 10. address of the node itself, on the instance data port.</li>
<li>the 10. address of the pod (docker container) on the application port.</li>
<li>the 172. address of the ??? on the application port (we are not sure what that ip is, or how the ip route gets to it, as it is a different subnet than the 172 address of the docker0 interface).</li>
</ul></li>
<li>From ssh on another node, for a pod that is failing, we cannot access the failing pod on any ports (ERR_EMPTY_RESPONSE). This seems to be the same behaviour as the service/load balancer.</li>
</ul>
<p>What else could cause behaviour like this?</p>
| <p>After much investigation, we were fighting a number of issues:</p>
<ul>
<li>Our application didn't always behave the way we were expecting. Always check that first.</li>
<li>In our Kubernetes Service manifest, we had set <code>externalTrafficPolicy: Local</code>, which probably should work, but was causing us problems. (This was with using a Classic Load Balancer: <code>service.beta.kubernetes.io/aws-load-balancer-type: "clb"</code>.) So if you have problems with CLB, either remove the <code>externalTrafficPolicy</code> or explicitly set it to the default "Cluster" value.</li>
</ul>
<p>So our manifest is now:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: apollo-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "clb"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REDACTED"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  externalTrafficPolicy: Cluster
  selector:
    app: apollo
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
  type: LoadBalancer
</code></pre>
|
<p>I'm getting the following warning when trying to run the <code>aws</code> client with the <code>aws-iam-authenticator</code> for kubernetes:</p>
<pre><code>Warning: aws-iam-authenticator is not installed properly or is not in your path.
</code></pre>
<p>However, the <code>aws-iam-authenticator</code> is clearly in my path, since I can call <code>aws-iam-authenticator help</code> and it returns results:</p>
<pre><code>$ aws-iam-authenticator help
A tool to authenticate to Kubernetes using AWS IAM credentials
Usage:
heptio-authenticator-aws [command]
...
</code></pre>
<p>Oddly enough though, <code>which aws-iam-authenticator</code> does not return successfully. So something is odd with my <code>PATH</code>.</p>
<p>Here is a subset of my path:</p>
<pre><code>echo $PATH
/usr/local/sbin:~/work/helpers/bin:~/.rbenv/shims:...:/usr/bin:/bin:/usr/sbin:/sbin
</code></pre>
<p>The <code>aws-iam-authenticator</code> is located in <code>~/work/helpers/bin</code></p>
| <p>Turns out the issue is because I used the <code>~</code> in my <code>PATH</code>. I found <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/136#issuecomment-417417212" rel="nofollow noreferrer">this issue and comment</a> that pointed me in the correct direction. Updating my <code>PATH</code> to below solved my issue:</p>
<pre><code>echo $PATH
/usr/local/sbin:/$HOME/work/helpers/bin:/$HOME/.rbenv/shims:...:/usr/bin:/bin:/usr/sbin:/sbin
# Where $HOME is expanded properly
</code></pre>
<p>I think it may be best practice to prefer <code>$HOME</code> vs <code>~</code> in <code>PATH</code> exports, but I can't find anything on SO / internet to confirm or deny.</p>
|
<p>I am trying to use named arguments for the method because it has quite a lot of arguments. I am using java-client for Kubernetes API. However, I get <code>Cannot resolve symbol</code> compile error. I am writing the code in Intellij IDEA, could it be some plugins problem?</p>
<p>Here is the code:</p>
<pre><code> def createWatchOnPodsCalls() = {
client.getHttpClient.setReadTimeout(0, TimeUnit.MILLISECONDS)
val watchEvents: Watch[V1Namespace] = Watch.createWatch(client,
apiInstance.listClusterCustomObjectCall(group=null, version=null, plural=null, pretty="true", labelSelector = null, resourceVersion = null, watch = false, progressListener = null, progressRequestListener = null),
new TypeToken[Watch.Response[V1Namespace]]{}.getType)
watchEvents
}
</code></pre>
| <p>Scala 2.12 <a href="https://github.com/scala/scala/pull/4735" rel="nofollow noreferrer">supports named parameters for Java if they are available</a>, but <a href="https://docs.oracle.com/javase/tutorial/reflect/member/methodparameterreflection.html" rel="nofollow noreferrer">by default Java is compiled without storing parameter names</a>. So most Java libraries don't have them (because the benefit in Java itself is minimal), including even <a href="https://contributors.scala-lang.org/t/proposal-named-parameters-for-java-apis/812/6" rel="nofollow noreferrer">the standard library</a> (unless that has changed recently).</p>
|
<p>I'd like to get the list of kubernetes pods for a service that are "Running" and fully "Ready".</p>
<p>And by fully ready, I mean shows a full "READY" count in k9s, so if there are 4 conditions for the pod to run, I see "READY" with "4/4" listed in k9s.</p>
<p>How can I do this?</p>
| <p>For a particular service, <code>my-service</code>, this only shows pods that are fully ready</p>
<pre><code>$ kubectl get pods --selector=app=my-service -o json | select_ready_pods.py
</code></pre>
<p>Similar idea for all pods</p>
<pre><code>$ kubectl get pods --all-namespaces -o json | select_ready_pods.py
</code></pre>
<p>List pods that are NOT ready</p>
<pre><code>$ kubectl get pods --selector=app=my-service -o json | select_ready_pods.py --not_ready
</code></pre>
<h2>select_ready_pods.py</h2>
<pre><code>#!/usr/bin/env python
import sys
import json

try:
    a = json.load(sys.stdin)
except:
    print("The data from stdin doesnt appear to be valid json. Fix this!")
    sys.exit(1)

def main(args):
    for i in a['items']:
        length = len(i['status']['conditions'])
        count = 0
        for j in i['status']['conditions']:
            if (j['status'] == "True"):
                count = count + 1
        if (args.not_ready):
            if (count != length):
                print(i['metadata']['name'])
        else:
            if (count == length):
                print(i['metadata']['name'])

import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--not_ready", help="show pods that are NOT ready", action="store_true")
args = parser.parse_args()

main(args)
</code></pre>
</code></pre>
|
<p>I am attempting to build a pod in kubernetes that has files mounted to the pod from my local system, in a similar way to mounting volumes in <code>docker-compose</code> files</p>
<p>I have tried the following, in an attempt to mount the local folder <code>./test</code> and files to the pod under the <code>/blah/</code> folder. However kubernetes is complaining that <code>MountVolume.SetUp failed for volume "config-volume" : hostPath type check failed: ./test/ is not a directory</code></p>
<p>Here is my <code>yaml</code> file. Am I missing something?</p>
<pre><code>kind: Service
metadata:
  name: vol-test
  labels:
    app: vol-test
spec:
  type: NodePort
  ports:
    - port: 8200
      nodePort: 30008
  selector:
    app: vol-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vol-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vol-test
  template:
    metadata:
      labels:
        app: vol-test
    spec:
      containers:
        - name: vol-test
          image: nginx
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: config-volume
              mountPath: /blah/
          ports:
            - containerPort: 8200
      volumes:
        - name: config-volume
          hostPath:
            path: ./test/
            type: Directory
</code></pre>
| <p>If you just want to pass a file or directory to a Pod for the purpose of reading configuration values (which I assume from your choice of volume mount <code>config-volume</code>) and has no need to update the file/directory, then you can just put the file(s) in a ConfigMap like below:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-router-config
data:
  nginx.conf: |
    worker_processes 2;
    user nginx;
    events {
      worker_connections 1024;
    }
    http {
      include mime.types;
      charset utf-8;
      client_max_body_size 8M;
      server {
        server_name _;
        listen 80 default_server;
        location / {
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-Host $host;
          proxy_set_header X-Forwarded-Proto https;
          proxy_pass http://drupal:80/ ;
          proxy_redirect default;
        }
        location /api/ {
          proxy_pass http://api-gateway:8080/ ;
          proxy_redirect default;
        }
      }
    }
</code></pre>
<p>Or you can have the file content imported from where you run the <code>kubectl</code> command and execute (assuming the file name is <code>nginx.conf</code>):</p>
<pre><code>kubectl create configmap nginx-router-config --from-file=nginx.conf
</code></pre>
<p>Then, you can mount the file(s) by adding <code>volumes</code> and <code>volumeMount</code> to the Deployment spec:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-router
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-router-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: nginx-router-config
          configMap:
            name: nginx-router-config
            items:
              - key: nginx.conf
                path: nginx.conf
</code></pre>
<p>If you actually want read-write access to the file(s), then you can use a PersistentVolume and PersistentVolumeClaim as suggested by the other answer, although I would not recommend using <code>hostPath</code> if you have more than one worker node.</p>
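<p>For completeness, a minimal sketch of that PersistentVolumeClaim route (untested; it assumes the cluster has a default StorageClass, and the names and mount path are illustrative):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: site-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# Relevant fragment of the Deployment's pod spec
spec:
  containers:
    - name: nginx
      image: nginx:latest
      volumeMounts:
        - name: site-data
          mountPath: /data   # wherever the app expects its writable files
  volumes:
    - name: site-data
      persistentVolumeClaim:
        claimName: site-data-pvc
</code></pre>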
|
<p>I have a use case in which I need to run a <code>kubectl</code> command from <code>crontab</code>. I am getting an empty response from all <code>kubectl</code> commands.</p>
<p>I have tried this</p>
<pre><code>#!/bin/sh
/usr/bin/kubectl get po >> /cron.txt
</code></pre>
<p>Help will be appreciated!</p>
| <p>I don't know why you're not getting any output, but I think I know why it's not telling you why it's not getting output.</p>
<p>Your command line is only going to get stdout, and drop stderr on the floor. It's likely that stderr will tell you why it's not working.</p>
<p>Try changing the command line to:</p>
<pre><code>/usr/bin/kubectl get po >> /cron.txt 2>&1
</code></pre>
|
<p>Below are the setup details: KEDA is installed, Prometheus is deployed, and the application uses the ScaledObject trigger template below for its HPA:</p>
<pre><code>keda:
  triggers:
    - metadata:
        metricName: container_memory_usage
        query: avg(floor((container_memory_usage_bytes{pod=~"pythonb-.*",container=~"proxy"} / container_spec_memory_limit_bytes != +Inf) * 100))
        serverAddress: <serveraddress>
        threshold: '70'
      type: prometheus
</code></pre>
<p>Basically, we want to scale the deployment based on the given Prometheus query (container memory utilisation of particular pods; if it exceeds 70%, the HPA will scale the pods out).
When we try the above query in Prometheus it returns results like 8.*, 10.*, 25.3, i.e. a single-element response.
But through KEDA it gives the result below:</p>
<pre><code>kubectl get hpa -n integration keda-pythonb
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
keda-pythonb Deployment/pythonb 3500m/70 (avg), 34%/87% + 1 more... 2 10 2 14m
</code></pre>
<p>Instead of a single value it gives 3500m as the current value.
Does KEDA convert the data returned from the Prometheus query? Any pointers would be helpful.
I hope the Prometheus query is correct.</p>
| <p>We just solved this one after a lot of off-and-on hunting. Turns out KEDA has an option called <code>metricType</code> that you can specify under <code>triggers</code>. TLDR you need to set that to <code>"Value"</code>.</p>
<p>To understand why you need to dive into how HPA works in Kubernetes. When you define a <code>kind: HorizontalPodAutoscaler</code> you specify the metrics that are used for scaling. KEDA does this for you and creates an external metric like this:</p>
<pre class="lang-yaml prettyprint-override"><code>  metrics:
  - external:
      metric:
        name: ...
        selector:
          matchLabels:
            scaledobject.keda.sh/name: ...
      target:
        type: AverageValue
        averageValue: ...
    type: External
</code></pre>
<p>There are <code>Value</code> and <code>AverageValue</code> metric types. <code>AverageValue</code> is the default, meant for metrics like <code>http-requests-per-second</code>, which would need to be divided by the number of replicas before being compared to the target. <code>Value</code>, on the other hand, takes the direct value from your metric without averaging it.</p>
<p>Since your Prometheus query is returning an average across pods already, you need to use <code>Value</code>. The clue is in your <code>kubectl get hpa</code> output: <code>3500m/70 (avg)</code>.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects" rel="nofollow noreferrer">docs on HPA with external metrics</a>.</p>
<p>In KEDA that is specified using the <code>metricType</code> option under the <code>triggers</code> field.</p>
<p>See <a href="https://keda.sh/docs/2.9/concepts/scaling-deployments/#triggers" rel="nofollow noreferrer">KEDA: Scaling Deployments</a></p>
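<p>As a sketch, the trigger from the question would then look something like this (same query and server address assumed):</p>
<pre class="lang-yaml prettyprint-override"><code>keda:
  triggers:
    - type: prometheus
      metricType: Value   # use the query result as-is instead of averaging it per replica
      metadata:
        metricName: container_memory_usage
        query: avg(floor((container_memory_usage_bytes{pod=~"pythonb-.*",container=~"proxy"} / container_spec_memory_limit_bytes != +Inf) * 100))
        serverAddress: <serveraddress>
        threshold: '70'
</code></pre>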
|
<p>I'm using Kubernetes for a production environment (I'm new to these kinds of configurations). This is an example of one of my deployment files (with changes):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myProd
  labels:
    app: thisIsMyProd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: thisIsMyProd
  template:
    metadata:
      labels:
        app: thisIsMyProd
    spec:
      containers:
        - name: myProd
          image: DockerUserName/MyProdProject # <==== Latest
          ports:
            - containerPort: 80
</code></pre>
<p>Now I wanted to make it work with <code>travis ci</code>, so I made something similar to this:</p>
<pre><code>sudo: required
services:
  - docker
env:
  global:
    - LAST_COMMIT_SHA=$(git rev-parse HEAD)
    - SERVICE_NAME=myProd
    - DOCKER_FILE_PATH=.
    - DOCKER_CONTEXT=.
addons:
  apt:
    packages:
      - sshpass
before_script:
  - docker build -t $SERVICE_NAME:latest -f $DOCKER_FILE_PATH $DOCKER_CONTEXT
script:
  # Mocking run test cases
deploy:
  - provider: script
    script: bash ./deployment/deploy-production.sh
    on:
      branch: master
</code></pre>
<p>And finally here is the <code>deploy-production.sh</code> script:</p>
<pre><code>#!/usr/bin/env bash
# Log in to the docker CLI
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
# Build images
docker build -t $DOCKER_USERNAME/$SERVICE_NAME:latest -t $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA -f $DOCKER_FILE_PATH $DOCKER_CONTEXT
# Take those images and push them to docker hub
docker push $DOCKER_USERNAME/$SERVICE_NAME:latest
docker push $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA
# Run deployment script in deployment machine
export SSHPASS=$DEPLOYMENT_HOST_PASSWORD
ssh-keyscan -H $DEPLOYMENT_HOST >> ~/.ssh/known_hosts
# Run Kubectl commands
kubectl apply -f someFolder
kubectl set image ... # instead of the `...` the rest of the command that sets the image with the SHA on the deployments
</code></pre>
<p>Now here are my questions:</p>
<ol>
<li><p>When <code>travis</code> finishes its work, the <code>deploy-production.sh</code> script will run on merges to the <code>master</code> branch. Now I have a concern about the <code>kubectl</code> step: for the first deployment, when we <code>apply</code> the deployment it will <code>pull</code> the <code>image</code> from Docker Hub and try to run it; after that the set image command will run, changing the image of the deployment. Will this make the deployment happen twice?</p></li>
<li><p>When I tried to deploy for the second time, I noticed the deployment used an old version of the <code>latest</code> image because it found it locally. After searching I found <code>imagePullPolicy</code> and set it to <code>always</code>. But imagine I didn't use that <code>imagePullPolicy</code> attribute, what would really happen in that case? I know the old-version containers come from the first <code>apply</code> command, but wouldn't running the set image fix that? To clarify my question: does Kubernetes use some random way to select the pods that are going to go down? That is, it doesn't mark the pods with the order in which the commands ran, so how does it detect that the set image pods should remain and the <code>apply</code> pods are the ones that need to be terminated?</p></li>
<li><p>Isn't pulling every time harmful? Is it better to make the deployment image somehow not use <code>latest</code>, to avoid that hassle altogether?</p></li>
</ol>
<p>Thanks</p>
| <ol>
<li><p>If the image tag is the same in both <code>apply</code> and <code>set image</code>, then only the <code>apply</code> action re-deploys the Deployment (in which case you do not need the <code>set image</code> command). If they refer to different image tags, then yes, the deployment will be run twice.</p></li>
<li><p>If you use the <code>latest</code> tag, applying a manifest that uses the <code>latest</code> tag with no modification WILL NOT re-deploy the Deployment. You need to introduce a modification to the manifest file in order to force Kubernetes to re-deploy. In my case, I use the <code>date</code> command to generate a <code>TIMESTAMP</code> variable that is passed in the <code>env</code> spec of the pod container, which my container does not use in any way, just to force a re-deploy of the Deployment (see the sketch after this list). Or you can use <code>kubectl rollout restart deployment/name</code> if you are using Kubernetes 1.15 or later.</p></li>
<li><p>Other than wasted bandwidth, or if you are being charged by how many times you pull a docker image (poor you), there is no harm in an additional image pull just to be sure you are using the latest image version. Even if you use a specific image tag with version numbers like <code>1.10.112-rc5</code>, there will be cases where you or your fellow developers forget to update the version number when pushing a modified image version. IMHO, <code>imagePullPolicy=always</code> should be the default rather than explicitly required.</p></li>
</ol>
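<p>A minimal sketch of the forced re-deploy trick from point 2 (the variable name is arbitrary and the value is substituted by the deploy script; any change to the rendered manifest works):</p>
<pre><code># Deployment fragment: a dummy env var whose only job is to change on every deploy,
# forcing a rollout even when the image tag (latest) is unchanged.
spec:
  template:
    spec:
      containers:
        - name: myProd
          image: DockerUserName/MyProdProject:latest
          env:
            - name: FORCE_REDEPLOY_TIMESTAMP
              value: "2022-01-01T00:00:00Z"   # e.g. output of `date` injected by the CI script
</code></pre>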
|
<p>I've set up my Kubernetes cluster, and as part of that set up have set up an ingress rule to forward traffic to a web server.</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: alpha-ingress
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
tls:
- hosts:
- alpha.example.com
secretName: letsencrypt-prod
rules:
- host: alpha.example.com
http:
paths:
- backend:
serviceName: web
servicePort: 80
</code></pre>
<p>Eventually the browser times out with a 504 error and in the Ingress log I see </p>
<blockquote>
<p>2019/01/27 23:45:38 [error] 41#41: *4943 upstream timed out (110:
Connection timed out) while reading response header from upstream,
client: 10.131.24.163, server: alpha.example.com, request: "GET /
HTTP/2.0", upstream: "<a href="http://10.244.93.12:80/" rel="nofollow noreferrer">http://10.244.93.12:80/</a>", host:
"alpha.example.com"</p>
</blockquote>
<p>I don't have any services on that IP address ...</p>
<pre><code>╰─$ kgs --all-namespaces 130 ↵
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default database ClusterIP 10.245.181.187 <none> 5432/TCP 4d8h
default kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 9d
default user-api ClusterIP 10.245.41.8 <none> 9000/TCP 4d8h
default web ClusterIP 10.245.145.213 <none> 80/TCP,443/TCP 34h
ingress-nginx ingress-nginx LoadBalancer 10.245.25.107 <external-ip> 80:31680/TCP,443:32324/TCP 50m
kube-system grafana ClusterIP 10.245.81.91 <none> 80/TCP 6d1h
kube-system kube-dns ClusterIP 10.245.0.10 <none> 53/UDP,53/TCP,9153/TCP 9d
kube-system prometheus-alertmanager ClusterIP 10.245.228.165 <none> 80/TCP 6d2h
kube-system prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 6d2h
kube-system prometheus-node-exporter ClusterIP None <none> 9100/TCP 6d2h
kube-system prometheus-pushgateway ClusterIP 10.245.147.195 <none> 9091/TCP 6d2h
kube-system prometheus-server ClusterIP 10.245.202.186 <none> 80/TCP 6d2h
kube-system tiller-deploy ClusterIP 10.245.11.85 <none> 44134/TCP 9d
</code></pre>
<p>If I view the resolv.conf file on the ingress pod, it returns what it should ...</p>
<pre><code>╰─$ keti -n ingress-nginx nginx-ingress-controller-c595c6896-klw25 -- cat /etc/resolv.conf 130 ↵
nameserver 10.245.0.10
search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<p>dig/nslookup/host aren't available on that container, but if I create a simple busybox instance it gets the right IP with that same config:</p>
<pre><code>╰─$ keti busybox -- nslookup web
Server: 10.245.0.10
Address 1: 10.245.0.10 kube-dns.kube-system.svc.cluster.local
Name: web
Address 1: 10.245.145.213 web.default.svc.cluster.local
</code></pre>
<p>Can anyone give me any ideas what to try next?</p>
<p><strong>Update #1</strong></p>
<p>Here is the config for <code>web</code>, as requested in the comments. I'm also investigating why I can't directly <code>wget</code> anything from <code>web</code> using a busybox inside the cluster.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: web
app: web
name: web
spec:
ports:
- name: "80"
port: 80
targetPort: 80
- name: "443"
port: 443
targetPort: 443
selector:
io.kompose.service: web
status:
loadBalancer: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: web
name: web
spec:
replicas: 1
strategy:
type: RollingUpdate
selector:
matchLabels:
app: web
template:
metadata:
labels:
io.kompose.service: web
app: web
spec:
containers:
- image: <private docker repo>
imagePullPolicy: IfNotPresent
name: web
resources: {}
imagePullSecrets:
- name: gcr
status: {}
</code></pre>
<p><strong>Update 2</strong></p>
<p>As per Michael's comment below, the IP address that it has resolved for <code>web</code> is one of its endpoints:</p>
<pre><code>╰─$ k get endpoints web 130 ↵
NAME ENDPOINTS AGE
web 10.244.93.12:443,10.244.93.12:80 2d
</code></pre>
| <p>So, this all boiled down to the php-fpm service not having any endpoints, because I'd misconfigured the service selector!</p>
<p>Some of the more eagle-eyed readers might have spotted that my config began life as a conversion from a docker-compose config file (my dev environment), and I've built on it from there.</p>
<p>The problem came because I changed the labels & selector for the deployment, but not the service itself.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: user-api
labels:
io.kompose.service: user-api
app: user-api
spec:
ports:
- name: "9000"
port: 9000
targetPort: 9000
selector:
io.kompose.service: user-api
status:
loadBalancer: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: user-api
name: user-api
spec:
replicas: 1
selector:
matchLabels:
app: user-api
template:
metadata:
labels:
app: user-api
spec:
... etc
</code></pre>
<p>You can see I was still using the old selector that kompose created for me, <code>io.kompose.service: user-api</code>, instead of the newer <code>app: user-api</code>.</p>
<p>I followed the advice from @coderanger: while the nginx service was responding, the php-fpm one wasn't.</p>
<p>A quick look at the documentation for <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">Connecting Applications With Services</a> says :</p>
<blockquote>
<p>As mentioned previously, a Service is backed by a group of Pods. These Pods are exposed through endpoints. The Service’s selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named my-nginx.</p>
</blockquote>
<p>When I checked the selectors of both the service and the deployment template, I saw they were different; now they match and everything works as expected.</p>
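<p>For completeness, a minimal sketch of the corrected Service (the only change is the selector, so that it matches the label the Deployment's pod template actually carries); once they line up, <code>kubectl get endpoints user-api</code> lists the pod IPs instead of <code><none></code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: user-api
spec:
  ports:
  - name: "9000"
    port: 9000
    targetPort: 9000
  # Must match the labels on the Deployment's pod template,
  # otherwise the Service has no endpoints and upstream calls time out.
  selector:
    app: user-api
</code></pre>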
|
<p>Update:<br />
A colleague who works for Microsoft said:</p>
<blockquote>
<p>Changelog entry for this behaviour change is here: <a href="https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/docs-ref-conceptual/release-notes-azure-cli.md#aks-3" rel="nofollow noreferrer">https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/docs-ref-conceptual/release-notes-azure-cli.md#aks-3</a></p>
</blockquote>
<hr>
<p>I'm following the proper instructions and the documentation must be out of date.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-service-principal" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/kubernetes-service-principal</a></p>
<blockquote>
<p><strong>Automatically create and use a service principal.</strong><br />
When you create an AKS cluster in the Azure portal or using the az aks create command, Azure can automatically generate a service principal.
In the following Azure CLI example, a service principal is not specified. In this scenario, the Azure CLI creates a service principal for the AKS cluster. To successfully complete the operation, your Azure account must have the proper rights to create a service principal.</p>
</blockquote>
<pre><code>az aks create --name myAKSCluster --resource-group myResourceGroup
</code></pre>
<p>This is what happened a few months ago - see <strong>Finished service principal creation</strong>:</p>
<p><a href="https://i.stack.imgur.com/0IF56.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0IF56.png" alt="enter image description here" /></a></p>
<p>Now when I try I get <strong>Add role propagation</strong>:</p>
<p><a href="https://i.stack.imgur.com/blJZd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/blJZd.png" alt="enter image description here" /></a></p>
<p>The problem is that querying <code>servicePrincipalProfile.clientId</code> returns <strong>msi</strong>; I need the GUID of the service principal, not the Managed Service Identity.</p>
<pre><code>$CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)
echo $CLIENT_ID
</code></pre>
<p>Used to work:</p>
<p><a href="https://i.stack.imgur.com/tDZXj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDZXj.png" alt="enter image description here" /></a></p>
<p>Now it's changed:</p>
<p><a href="https://i.stack.imgur.com/GnNxh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GnNxh.png" alt="enter image description here" /></a></p>
<p>How do I create the Kubernetes cluster with a service principal, as the documentation states and as it used to work?</p>
<p>Repro steps:</p>
<p><a href="https://github.com/MeaningOfLights/AzureTraining/blob/master/Hands-On-Labs-That-Work/80-Kubernetes.md" rel="nofollow noreferrer">https://github.com/MeaningOfLights/AzureTraining/blob/master/Hands-On-Labs-That-Work/80-Kubernetes.md</a></p>
<p><a href="https://github.com/MeaningOfLights/AzureTraining/blob/master/Hands-On-Labs-That-Work/85-Kubernetes-Deployment.md" rel="nofollow noreferrer">https://github.com/MeaningOfLights/AzureTraining/blob/master/Hands-On-Labs-That-Work/85-Kubernetes-Deployment.md</a></p>
| <p>For reference: I hit the same issue, and following your <a href="https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/docs-ref-conceptual/release-notes-azure-cli.md#aks-3" rel="nofollow noreferrer">link</a> I found that this worked.</p>
<pre><code>az aks show -g aks -n cluster --query identityProfile.kubeletidentity.clientId -o tsv
</code></pre>
<p><a href="https://i.stack.imgur.com/FaaOL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FaaOL.png" alt="enter image description here" /></a></p>
<p>and this returned the appropriate GUID, which I could use for my RBAC assignment:</p>
<pre><code># get the clientId of the cluster
$clientId = (az aks show -g aks -n cluster --query identityProfile.kubeletidentity.clientId -o tsv)
# get the resourceId of the registry
$acrId=(az acr show -g acr -n myacr --query id -o tsv)
# grant the cluster's kubelet identity pull access to the registry
az role assignment create --assignee $clientId --role AcrPull --scope $acrId
</code></pre>
|
<p>I have deployed spark applications in cluster-mode in kubernetes. The spark application pod is getting restarted almost every hour.
The driver log has this message before restart:</p>
<pre><code>20/07/11 13:34:02 ERROR TaskSchedulerImpl: Lost executor 1 on x.x.x.x: The executor with id 1 was deleted by a user or the framework.
20/07/11 13:34:02 ERROR TaskSchedulerImpl: Lost executor 2 on y.y.y.y: The executor with id 2 was deleted by a user or the framework.
20/07/11 13:34:02 INFO DAGScheduler: Executor lost: 1 (epoch 0)
20/07/11 13:34:02 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
20/07/11 13:34:02 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, x.x.x.x, 44879, None)
20/07/11 13:34:02 INFO BlockManagerMaster: Removed 1 successfully in removeExecutor
20/07/11 13:34:02 INFO DAGScheduler: Shuffle files lost for executor: 1 (epoch 0)
20/07/11 13:34:02 INFO DAGScheduler: Executor lost: 2 (epoch 1)
20/07/11 13:34:02 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
20/07/11 13:34:02 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2, y.y.y.y, 46191, None)
20/07/11 13:34:02 INFO BlockManagerMaster: Removed 2 successfully in removeExecutor
20/07/11 13:34:02 INFO DAGScheduler: Shuffle files lost for executor: 2 (epoch 1)
20/07/11 13:34:02 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
20/07/11 13:34:16 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
</code></pre>
<p>And the Executor log has:</p>
<pre><code>20/07/11 15:55:01 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
20/07/11 15:55:01 INFO MemoryStore: MemoryStore cleared
20/07/11 15:55:01 INFO BlockManager: BlockManager stopped
20/07/11 15:55:01 INFO ShutdownHookManager: Shutdown hook called
</code></pre>
<p>How can I find out what's causing the executors to be deleted?</p>
<p>Deployment:</p>
<pre><code>Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 0 max surge
Pod Template:
Labels: app=test
chart=test-2.0.0
heritage=Tiller
product=testp
release=test
service=test-spark
Containers:
test-spark:
Image: test-spark:2df66df06c
Port: <none>
Host Port: <none>
Command:
/spark/bin/start-spark.sh
Args:
while true; do sleep 30; done;
Limits:
memory: 4Gi
Requests:
memory: 4Gi
Liveness: exec [/spark/bin/liveness-probe.sh] delay=300s timeout=1s period=30s #success=1 #failure=10
Environment:
JVM_ARGS: -Xms256m -Xmx1g
KUBERNETES_MASTER: https://kubernetes.default.svc
KUBERNETES_NAMESPACE: test-spark
IMAGE_PULL_POLICY: Always
DRIVER_CPU: 1
DRIVER_MEMORY: 2048m
EXECUTOR_CPU: 1
EXECUTOR_MEMORY: 2048m
EXECUTOR_INSTANCES: 2
KAFKA_ADVERTISED_HOST_NAME: kafka.default:9092
ENRICH_KAFKA_ENRICHED_EVENTS_TOPICS: test-events
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: test-spark-5c5997b459 (1/1 replicas created)
Events: <none>
</code></pre>
| <p>I did some quick research on running Spark on Kubernetes, and it seems that Spark by design terminates executor pods when they have finished running the Spark application. Quoting the official Spark website:</p>
<blockquote>
<p>When the application completes, the executor pods terminate and are cleaned up, but the driver pod persists logs and remains in “completed” state in the Kubernetes API until it’s eventually garbage collected or manually cleaned up.</p>
</blockquote>
<p>Therefore, I believe there is nothing to worry about regarding the restarts, as long as your Spark instance still manages to start executor pods as and when required.</p>
<p>Reference: <a href="https://spark.apache.org/docs/2.4.5/running-on-kubernetes.html#how-it-works" rel="nofollow noreferrer">https://spark.apache.org/docs/2.4.5/running-on-kubernetes.html#how-it-works</a></p>
|
<p>I'm not sure how to access the Pod which is running behind a Service.</p>
<p>I have Docker CE installed and running. With this, I have the Docker 'Kubernetes' running.</p>
<p>I created a Pod file and then <code>kubectl create</code>d it ... and then used port-forwarding to test that it's working, and it was. Tick!</p>
<p>Next I created a Service of type LoadBalancer and <code>kubectl create</code>d that too, and it's running ... but I'm not sure how to test it / access the Pod that is running.</p>
<p>Here's the terminal outputs:</p>
<pre><code>
Tests-MBP:k8s test$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
hornet-data 1/1 Running 0 4h <none>
Tests-MBP:k8s test$ kubectl get services --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
hornet-data-lb LoadBalancer 10.0.44.157 XX.XX.XX.XX 8080:32121/TCP 4h <none>
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 14d component=apiserver,provider=kubernetes
Tests-MBP:k8s test$
</code></pre>
<p>Not sure if the pod Label <code><none></code> is a problem? I'm using labels for the Service selector.</p>
<p>Here's the two files...</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hornet-data
labels:
app: hornet-data
spec:
containers:
- image: ravendb/ravendb
name: hornet-data
ports:
- containerPort: 8080
</code></pre>
<p>and</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hornet-data-lb
spec:
type: LoadBalancer
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: hornet-data
</code></pre>
<h3>Update 1:</h3>
<p>As requested by @vasily:</p>
<pre><code>Tests-MBP:k8s test$ kubectl get ep hornet-data-lb
NAME ENDPOINTS AGE
hornet-data-lb <none> 5h
</code></pre>
<h3>Update 2:</h3>
<p>More info for/from Vasily:</p>
<pre><code>Tests-MBP:k8s test$ kubectl apply -f hornet-data-pod.yaml
pod/hornet-data configured
Tests-MBP:k8s test$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
hornet-data 1/1 Running 0 5h app=hornet-data
Tests-MBP:k8s test$ kubectl get services --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
hornet-data-lb LoadBalancer 10.0.44.157 XX.XX.XX.XX 8080:32121/TCP 5h <none>
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 14d component=apiserver,provider=kubernetes
</code></pre>
| <p>@vailyangapov basically answered this via comments in the OP - this answer is in two parts.</p>
<ol>
<li><p><strong>I didn't <code>apply</code> my changes in my manifest</strong>. I made some changes to my services yaml file but didn't push these changes up. As such I needed to do <code>kubectl apply -f myPod.yaml</code>.</p></li>
<li><p><strong>I was in the wrong context</strong>. The current context was pointing to a test Azure Kubernetes Service. I thought it was all on my localhost cluster that comes with Docker-CE (called the <code>docker-for-desktop</code> cluster). As this is a new machine, I had failed to enable Kubernetes in Docker (it's a manual step AFTER Docker-CE is installed, with the default setting having it NOT enabled/not ticked). Once I noticed that, I ticked the option to enable Kubernetes and the <code>docker-for-desktop</code> cluster was installed. Then I manually changed over to that context: <code>kubectl config use-context docker-for-desktop</code>.</p></li>
</ol>
<p>Both of these mistakes were simple. The reason for writing them up as an answer is to hopefully help others review their own settings when something similar isn't working right for them.</p>
|
<p>It is clear from the documentation that whenever pods are in Pending state because there is no node that has enough free resources to respect the pods resource request - the cluster autoscaler will create another node within 30 seconds of the pod creation (for reasonably sized clusters).</p>
<p>However, consider the case where a node is pretty packed. Let's say the node has 2 CPU cores and contains 4 pods that each define a 0.5 CPU request and a 1.0 CPU limit.
Suddenly there is load, and all 4 pods try to use an additional 0.5 CPU that the node is not able to give, since all of its CPU is already taken by the 4 running pods.</p>
<p>In this situation, I'd expect Kubernetes to 'understand' that there are pending resource requests from running pods that cannot be served and 'move' (destroy and create) those pods to another node that can respect their request (plus the resources they are currently using). In case no such node exists, I'd expect Kubernetes to create an additional node and move the pods there.</p>
<p>However, I don't see this happening. I see that the pods are running on the same node (I guess that node can be called over-provisioned) regardless of resource requests that cannot be respected and performance suffers as a result.</p>
<p>My question is whether this behaviour is avoidable by any means, apart from setting the ratio between pod resource requests and limits to 1:1 (where a pod cannot request more resources than initially allocated). Obviously I would like to avoid setting requests and limits to be the same, so that I neither under-provision nor pay for more than I need.</p>
| <p>It's important to recognise the distinction here between the CPU <code>request</code> in a PodSpec, and the amount of cpu a process is trying to use. Kubernetes provisioning and cluster autoscaling is based purely on the <code>request</code> in the PodSpec. Actual use is irrelevant for those decisions.</p>
<p>In the case you're describing, the Pod still only requests 0.5 CPU - that field is immutable. The process is now trying to use 1 CPU - but this isn't looked at.</p>
<p>CPU limits being higher than requests allows the best-efforts use of that capacity, but it isn't a guarantee, as you're seeing.</p>
<p>In this scenario, it sounds like you might want to be using both the Horizontal Pod Autoscaler and the cluster autoscaler. In a situation with increased load (where the Pods start to use >80% of the CPU <code>request</code>, say), the HPA will increase the number of Pods for the service, to handle demand. If those Pods then have nowhere to fit, the cluster autoscaler will provision more Nodes. In this way, your Pods can still use up to the request value, and it's only when they start getting close to it that more Nodes are provisioned, so you won't over-provision resources up-front.</p>
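<p>As a rough sketch of that combination (the names and thresholds here are illustrative, not taken from your setup), an HPA that scales on CPU utilisation measured against the <code>request</code> could look like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-service              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 2
  maxReplicas: 10
  # Utilisation is measured against the CPU request (0.5 CPU in your example),
  # so new Pods are added well before the existing ones hit their limit.
  targetCPUUtilizationPercentage: 80
</code></pre>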
|
<p>I've recently set up a cron process that makes an internal request inside my K8s cluster to call my service internally. I'm now faced with the task of securing that endpoint. The simplest solution might be to pass a secret via the Authorization header, but that doesn't seem safe or like the proper way.</p>
<p>What are some correct ways to detect and allow a request to a specific endpoint on my RESTful API?</p>
| <p>Since any traffic from outside will go through an Ingress in order to reach your HTTP service, you can add a route to an error-page component for the API paths meant for your internal cron usage; the cron job calls the API via the Service directly, not through the Ingress.</p>
<p>For example:</p>
<pre><code>spec:
rules:
- host: api.domain.com
http:
paths:
- backend:
service:
name: api-service
port:
number: 80
path: /(.*)
pathType: ImplementationSpecific
- backend:
service:
name: error-page
port:
number: 80
path: /api/internal/(.*)
pathType: ImplementationSpecific
</code></pre>
|
<p>I see there are many GitHub pages for Gradle Kubernetes plugins, like <br>
<a href="https://github.com/bmuschko/gradle-kubernetes-plugin" rel="nofollow noreferrer">https://github.com/bmuschko/gradle-kubernetes-plugin</a>
<br>
<a href="https://github.com/kolleroot/gradle-kubernetes-plugin" rel="nofollow noreferrer">https://github.com/kolleroot/gradle-kubernetes-plugin</a>
<br>
<a href="https://github.com/qaware/gradle-cloud-deployer" rel="nofollow noreferrer">https://github.com/qaware/gradle-cloud-deployer</a><br>
None of these has any concrete example of <strong>how to connect to Kubernetes from Gradle and create a new deployment and service</strong>.<br> I tried everything from the GitHub links above in Gradle, but no luck...<br></p>
| <p>Since I also faced a lack of plugins that deal with Kubernetes I started working on a Gradle plugin to make deploying resources to a Kubernetes cluster easier: <a href="https://github.com/kuberig-io/kuberig" rel="nofollow noreferrer">https://github.com/kuberig-io/kuberig</a>.</p>
<p>In the user manual you will find details about how to connect to a kubernetes cluster here:
<a href="https://kuberig-io.github.io/kuberig/#/initializing-an-environment" rel="nofollow noreferrer">https://kuberig-io.github.io/kuberig/#/initializing-an-environment</a></p>
<p>It also includes an example of how to define a deployment here: <a href="https://kuberig-io.github.io/kuberig/#/defining-a-deployment" rel="nofollow noreferrer">https://kuberig-io.github.io/kuberig/#/defining-a-deployment</a>
And a service here:
<a href="https://kuberig-io.github.io/kuberig/#/defining-a-service" rel="nofollow noreferrer">https://kuberig-io.github.io/kuberig/#/defining-a-service</a></p>
<p>It may also be useful to go through the quickstart first <a href="https://kuberig-io.github.io/kuberig/#/quick-start" rel="nofollow noreferrer">https://kuberig-io.github.io/kuberig/#/quick-start</a>.</p>
<p>Hope it can be of use to you.</p>
|
<p>We have an application that requires secrets ONLY during the start-up or creation of the pod. Once the Pod is up and started, the secrets are no longer needed.</p>
<p>I've tried to load secrets from environment variables then unset them via a script however if you exec into the container the secret is available on each new session.</p>
<p>I'm currently looking into mounting files as secrets and would like to know if it is possible to somehow pass a secret to a container ONLY during runtime and remove it once the pod is up and running? Maybe I can unmount the secret once the pod is running?</p>
<p>I should also point out that the pod is running an application ontop of a stripped down version of a centos 8 image.</p>
| <p>You can't unmount a Secret while the Pod is running. (The design is that any updates to the secret will be reflected immediately.)</p>
<p>However, what you could do is use an initContainer which mounts the secret. That initContainer and your main container both also mount an emptyDir volume, which is ephemeral. The init container could copy the secrets across, and the main container could read them and then delete them (a minimal sketch is at the end of this answer).</p>
<p>I suspect this would react badly to failure though, you would likely need to change the Pods restartPolicy to be Never, because the initContainer won't run again if the main container fails, and when it's restarted it would now not have the secrets available to it.</p>
<p>All of that is on the assumption your main container needed to see the secrets. If you could do what's needed during the initContainer, then the main container will never see the secrets if they're not mounted - just use them in the initContainer.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
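<p>A minimal sketch of that init-container pattern, assuming a hypothetical Secret called <code>my-startup-secret</code> (the image and paths are placeholders too):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-startup-secret
spec:
  restartPolicy: Never            # see the caveat about restarts above
  volumes:
  - name: real-secret
    secret:
      secretName: my-startup-secret
  - name: scratch
    emptyDir: {}
  initContainers:
  - name: copy-secret
    image: busybox
    # Only this init container ever mounts the Secret itself.
    command: ["sh", "-c", "cp /secrets/* /scratch/"]
    volumeMounts:
    - name: real-secret
      mountPath: /secrets
    - name: scratch
      mountPath: /scratch
  containers:
  - name: app
    image: my-app:latest          # hypothetical image
    # The main container reads the copied files from the emptyDir during
    # start-up and deletes them afterwards; the Secret is never mounted here.
    volumeMounts:
    - name: scratch
      mountPath: /scratch
</code></pre>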
|
<p>I know there are multiple different solutions to do what I am looking for, but I am looking for a/the proper way to perform some requests in parallel. I am new to Go, and what I am doing at the moment feels cumbersome.</p>
<p><strong>Use case:</strong></p>
<p>I need to query 4 different REST endpoints (kubernetes client requests) in parallel. Once I got all these 4 results I need to do some processing.</p>
<p><strong>My problem:</strong></p>
<p>I know I need to use goroutines for that, but what's the best way to collect the results? What I am doing at the moment (see code sample below) is probably very cumbersome, but I am not sure what else I could do to improve the code.</p>
<p><strong>Code:</strong></p>
<p>This code is the easiest to understand but I don't want to perform the requests in sequence:</p>
<pre><code>// Get node resource usage metrics
nodeMetricsList, err := c.kubernetesClient.NodeMetricses()
if err != nil {
log.Warn("Failed to get node usage list from Kubernetes", err)
return err
}
// Get pod resource usage metrics
podMetricsList, err := c.kubernetesClient.PodMetricses()
if err != nil {
log.Warn("Failed to get pod usage list from Kubernetes", err)
return err
}
</code></pre>
<p>This is how I would run the requests in parallel. This makes the code so much less readable and it feels very cumbersome to implement too:</p>
<pre><code>var nodeMetricsList *v1beta1.NodeMetricsList
var nodeMetricsListError error
var podMetricsList *v1beta1.PodMetricsList
var podMetricsListError error
go func() {
nodeMetricsList, nodeMetricsListError = c.kubernetesClient.NodeMetricses()
}()
if nodeMetricsListError != nil {
log.Warn("Failed to get podList from Kubernetes", err)
return err
}
// Get pod resource usage metrics
go func() {
podMetricsList, podMetricsListError = c.kubernetesClient.PodMetricses()
}()
if podMetricsListError != nil {
log.Warn("Failed to get pod usage list from Kubernetes", err)
return err
}
</code></pre>
<p>What's the proper way to perform the requests in my given example in parallel?</p>
| <p>Your code has 2 race conditions, and likely will never correctly report an error.</p>
<p>You need to wait for the goroutines to complete before you can read the values they operate on, which is easily done with a <code>sync.WaitGroup</code> like so:</p>
<pre><code>var nodeMetricsList *v1beta1.NodeMetricsList
var podMetricsList *v1beta1.PodMetricsList
var nodeMetricsListError, podMetricsListError error
var wg sync.WaitGroup
// Get node resource usage metrics
wg.Add(1)
go func() {
defer wg.Done()
nodeMetricsList, nodeMetricsListError = c.kubernetesClient.NodeMetricses()
}()
// Get pod resource usage metrics
wg.Add(1)
go func() {
defer wg.Done()
podMetricsList, podMetricsListError = c.kubernetesClient.PodMetricses()
}()
wg.Wait()
if nodeMetricsListError != nil {
log.Warn("Failed to get podList from Kubernetes", err)
return err
}
if podMetricsListError != nil {
log.Warn("Failed to get pod usage list from Kubernetes", err)
return err
}
fmt.Println("Hello, playground")
</code></pre>
|
<p>Cert-manager (various versions, 0.15 and 0.16) is installed on both k3s version v1.18.8+k3s1 and docker-desktop version v1.16.6-beta.0 using the following command:</p>
<pre class="lang-sh prettyprint-override"><code>helm install cert-manager \
--namespace cert-manager jetstack/cert-manager \
--version v0.16.1 \
--set installCRDs=true \
--set 'extraArgs={--dns01-recursive-nameservers=1.1.1.1:53}'
</code></pre>
<p>I applied the following test yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: test
---
apiVersion: v1
kind: Secret
metadata:
name: cloudflare-api-token-secret
namespace: test
type: Opaque
stringData:
api-token: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
---
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
name: letsencrypt
namespace: test
spec:
acme:
email: [email protected]
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt
solvers:
- dns01:
cloudflare:
email: [email protected]
apiTokenSecretRef:
name: cloudflare-api-token-secret
key: api-token
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: example.com
namespace: test
spec:
secretName: example.com-tls
issuerRef:
name: letsencrypt
dnsNames:
- example.com
</code></pre>
<p>Result (I have even waited hours):</p>
<pre><code>kubectl -n test get certs,certificaterequests,order,challenges,ingress -o wide
NAME READY SECRET ISSUER STATUS AGE
certificate.cert-manager.io/example.com False example.com-tls letsencrypt Issuing certificate as Secret does not exist 57s
NAME READY ISSUER STATUS AGE
certificaterequest.cert-manager.io/example.com-rx7jg False letsencrypt Waiting on certificate issuance from order test/example.com-rx7jg-273779930: "pending" 56s
NAME STATE ISSUER REASON AGE
order.acme.cert-manager.io/example.com-rx7jg-273779930 pending letsencrypt 55s
NAME STATE DOMAIN REASON AGE
challenge.acme.cert-manager.io/example.com-rx7jg-273779930-625151916 pending example.com Cloudflare API error for POST "/zones/xxxxxxxxxxxxxxxxxxxxxxxxxx xxxxx/dns_records" 53s
</code></pre>
<p>The Cloudflare settings are the ones from
<a href="https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/" rel="nofollow noreferrer">https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/</a>, and I have tried with both a token and a key.</p>
<p>Cert-manager pod logs:</p>
<pre><code>I0828 08:34:51.370299 1 dns.go:102] cert-manager/controller/challenges/Present "msg"="presenting DNS01 challenge for domain" "dnsName"="example.com" "domain"="example.com" "resource_kind"="Challenge" "resource_name"="example.com-m72dq-3139291111-641020922" "resource_namespace"="test" "type"="dns-01"
E0828 08:34:55.251730 1 controller.go:158] cert-manager/controller/challenges "msg"="re-queuing item due to error processing" "error"="Cloudflare API error for POST \"/zones/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/dns_records\"" "key"="test/example.com-m72dq-3139291111-641020922"
I0828 08:35:35.251982 1 controller.go:152] cert-manager/controller/challenges "msg"="syncing item" "key"="test/example.com-m72dq-3139291111-641020922"
I0828 08:35:35.252131 1 dns.go:102] cert-manager/controller/challenges/Present "msg"="presenting DNS01 challenge for domain" "dnsName"="example.com" "domain"="example.com" "resource_kind"="Challenge" "resource_name"="example.com-m72dq-3139291111-641020922" "resource_namespace"="test" "type"="dns-01"
E0828 08:35:38.797954 1 controller.go:158] cert-manager/controller/challenges "msg"="re-queuing item due to error processing" "error"="Cloudflare API error for POST \"/zones/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/dns_records\"" "key"="test/example.com-m72dq-3139291111-641020922"
</code></pre>
<p>What's wrong?</p>
<p>Thank you!</p>
| <p>Not 100% sure if it'll resolve your issue, but I did come across this thread: <a href="https://github.com/jetstack/cert-manager/issues/1163" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager/issues/1163</a>. It shows <code>helm</code> being invoked like this, and the poster reports that it worked.</p>
<pre><code>$ helm install \
--name cert-manager \
--namespace cert-manager \
--version v0.7.0 \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerName=letsencrypt-staging-issuer \
--set extraArgs='{--dns01-recursive-nameservers-only,--dns01-self-check-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
jetstack/cert-manager
</code></pre>
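<p>If you would rather keep those flags in a values file than on the command line, the chart's <code>extraArgs</code> value (the same one used via <code>--set</code> in your install command) can be expressed roughly like this; whether it actually fixes the Cloudflare error still depends on the API token's zone permissions:</p>
<pre class="lang-yaml prettyprint-override"><code># values.yaml sketch - pass it with: helm install ... -f values.yaml
installCRDs: true
extraArgs:
  - "--dns01-recursive-nameservers-only"
  - "--dns01-recursive-nameservers=1.1.1.1:53"
</code></pre>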
|
<p>I have a pod that needs to clean up an external reference when it is terminated. (I can do this with a curl command.)</p>
<p>I looked into <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">container lifecycle events</a> and they seem just what I need.</p>
<p>But the example shows creating a Pod resource directly via yaml. With Helm, I just made deployments and the pods are auto created.</p>
<p>How can I define a PreStop container lifecycle hook in a Kubernetes deployment?</p>
| <p>I should have looked a bit longer.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#writing-a-deployment-spec" rel="nofollow noreferrer">Writing a Deployment Spec</a> section of the deployment documentation says:</p>
<blockquote>
<p>The .spec.template is a Pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind.</p>
</blockquote>
<p>So I can just add my hook in there as if it were the Pod yaml.</p>
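<p>For anyone landing here later, a minimal sketch of what that looks like inside a Deployment's pod template (the name, image and the curl target are placeholders for whatever your clean-up call is):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest    # hypothetical image
        lifecycle:
          preStop:
            exec:
              # Runs inside the container just before it is stopped,
              # giving it a chance to clean up the external reference.
              command: ["curl", "-X", "DELETE", "http://external-system.example.com/reference"]
</code></pre>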
|
<p>I have tried using the Patch class to scale the Deployments but was unable to do so. Please let me know how to do it. I have researched a lot but found no proper docs or examples of how to achieve it.</p>
<pre><code>public static async Task<V1Scale> Scale([FromBody] ReplicaRequest request)
{
try
{
// Use the config object to create a client.
using (var client = new Kubernetes(config))
{
// Create a json patch for the replicas
var jsonPatch = new JsonPatchDocument<V1Scale>();
// Set the new number of repplcias
jsonPatch.Replace(e => e.Spec.Replicas, request.Replicas);
// Creat the patch
var patch = new V1Patch(jsonPatch,V1Patch.PatchType.ApplyPatch);
var list = client.ListNamespacedPod("default");
//client.PatchNamespacedReplicaSetStatus(patch, request.Deployment, request.Namespace);
//var _result = await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace,null, null,true,default);
await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace,null, "M", true, default);
}
}
catch (Microsoft.Rest.HttpOperationException e)
{
Console.WriteLine(e.Response.Content);
}
return null;
}
public class ReplicaRequest
{
public string Deployment { get; set; }
public string Namespace { get; set; }
public int Replicas { get; set; }
}
</code></pre>
|
<p>The <code>JsonPatchDocument<T></code> you are using generates Json Patch, but you are specifying ApplyPatch.</p>
<p>Edit as of 2022-04-19:<br />
The <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">kubernetes-client/csharp</a> library changed the serializer to <code>System.Text.Json</code> and this doesn't support <code>JsonPatchDocument<T></code> serialization, hence we have to do it beforehand</p>
<h4><code>From version 7 of the client library:</code></h4>
<pre class="lang-cs prettyprint-override"><code>var jsonPatch = new JsonPatchDocument<V1Scale>();
jsonPatch.ContractResolver = new DefaultContractResolver
{
NamingStrategy = new CamelCaseNamingStrategy()
};
jsonPatch.Replace(e => e.Spec.Replicas, request.Replicas);
var jsonPatchString = Newtonsoft.Json.JsonConvert.SerializeObject(jsonPatch);
var patch = new V1Patch(jsonPatchString, V1Patch.PatchType.JsonPatch);
await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace);
</code></pre>
<h4><code>Works until version 6 of the client library:</code></h4>
<p>Either of these should work:</p>
<p>Json Patch:</p>
<pre class="lang-cs prettyprint-override"><code>var jsonPatch = new JsonPatchDocument<V1Scale>();
jsonPatch.Replace(e => e.Spec.Replicas, request.Replicas);
var patch = new V1Patch(jsonPatch, V1Patch.PatchType.JsonPatch);
await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace);
</code></pre>
<p>Json Merge Patch:</p>
<pre class="lang-cs prettyprint-override"><code>var jsonMergePatch = new V1Scale { Spec = new V1ScaleSpec { Replicas = request.Replicas } };
var patch = new V1Patch(jsonMergePatch, V1Patch.PatchType.MergePatch);
await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace);
</code></pre>
<h3>About the different patch options:</h3>
<p>They are described here (official C# client):
<a href="https://github.com/kubernetes-client/csharp/blob/master/src/KubernetesClient/Kubernetes.Header.cs#L32" rel="nofollow noreferrer">https://github.com/kubernetes-client/csharp/blob/master/src/KubernetesClient/Kubernetes.Header.cs#L32</a></p>
<ul>
<li>Nice article about the difference between JSON Patch and JSON Merge Patch: <a href="https://erosb.github.io/post/json-patch-vs-merge-patch/" rel="nofollow noreferrer">https://erosb.github.io/post/json-patch-vs-merge-patch/</a></li>
<li>Strategic merge patch is a custom k8s version of Json Patch:
<a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md</a></li>
<li>And ApplyPatch is the "server side apply" that replaces it in yaml format:
<a href="https://kubernetes.io/docs/reference/using-api/server-side-apply/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/server-side-apply/</a></li>
</ul>
|
<p>I am new to Kubernetes and I am trying to run ZoneMinder on it. From Google I found QuantumObject, which runs ZoneMinder on Docker, but resources on running it on Kubernetes are really scarce. I tried implementing it on Kubernetes, but somehow it doesn't show the video stream, and by tracing the logs I'm not able to find any solution. Has anyone out there tried it?</p>
<p>The tutorial I have referred to is <a href="https://github.com/QuantumObject/docker-zoneminder" rel="nofollow noreferrer">https://github.com/QuantumObject/docker-zoneminder</a>. It is a Docker way to run ZoneMinder in a container. So I used the images and created my own YAML files, which are:
<a href="https://i.stack.imgur.com/mhPLJ.png" rel="nofollow noreferrer">mysql yaml</a></p>
<p>This is the container for MySQL; after I create it, I pass its IP address to my ZoneMinder YAML file:
<a href="https://i.stack.imgur.com/8UyDR.png" rel="nofollow noreferrer">zm yaml</a></p>
<p>ZoneMinder manages to come up, but when I add IP cams, these are the errors:
<a href="https://i.stack.imgur.com/MLr9d.png" rel="nofollow noreferrer">error log</a></p>
| <p>I was just quickly researching this myself. Should run fine - it can even run multiple pods with shared storage per <a href="https://zoneminder.readthedocs.io/en/stable/installationguide/multiserver.html" rel="nofollow noreferrer">https://zoneminder.readthedocs.io/en/stable/installationguide/multiserver.html</a></p>
|
<p>I'm trying to install Kubernetes on my Ubuntu server/desktop version 18.04.1.
But when I try to add the Kubernetes apt repository using the following command:</p>
<pre><code>sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-bionic main"
</code></pre>
<p>I get the following error:</p>
<pre><code>Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Ign:3 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:5 http://dl.google.com/linux/chrome/deb stable Release
Hit:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:7 https://download.docker.com/linux/ubuntu bionic InRelease
Ign:8 https://packages.cloud.google.com/apt kubernetes-bionic InRelease
Err:10 https://packages.cloud.google.com/apt kubernetes-bionic Release
404 Not Found [IP: 216.58.211.110 443]
Reading package lists... Done
E: The repository 'http://apt.kubernetes.io kubernetes-bionic Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
</code></pre>
<p>If I then try to install <code>kubeadm</code>, it does not work because I don't have the repository added to apt</p>
<p>I hope someone can shed some light on my issue..</p>
<p>All of this is running inside a VM on Hyper-V</p>
<p>PS: I'm not a die hard Linux expert but coming from Windows!</p>
| <p>At the moment (nov. 2018) there is no bionic folder. You can see the supported distributions here: </p>
<p><a href="https://packages.cloud.google.com/apt/dists" rel="noreferrer">https://packages.cloud.google.com/apt/dists</a></p>
<p>The latest Kubernetes release folder listed there is <code>kubernetes-yakkety</code>.</p>
<p>This should still work with bionic.</p>
|
<p>I'm using Apache Ignite .Net v2.7. While resolving another issue (<a href="https://stackoverflow.com/questions/55388489/how-to-use-tcpdiscoverykubernetesipfinder-in-apache-ignite-net/55392149">How to use TcpDiscoveryKubernetesIpFinder in Apache Ignite .Net</a>) I've added a Spring configuration file where the Kubernetes configuration is specified. All other configuration goes in the C# code.</p>
<p>The content of the config file is below (taken from the docs): </p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="ignite"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
</code></pre>
<p>The C# code which references the file is as follows:</p>
<pre><code>var igniteConfig = new IgniteConfiguration
{
SpringConfigUrl = "./kubernetes.config",
JvmClasspath = string.Join(";",
new string[] {
"ignite-kubernetes-2.7.0.jar",
"jackson-core-2.9.6.jar",
"jackson-databind-2.9.6.jar"
}
.Select(c => System.IO.Path.Combine(Environment.CurrentDirectory, "Libs", c)))}
</code></pre>
<p>The Ignite node starts fine locally but when deployed to a Kubernetes cluster, it fails with this error: </p>
<pre><code> INFO: Loading XML bean definitions from URL [file:/app/./kubernetes.config]
Mar 28, 2019 10:43:55 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.GenericApplicationContext@1bc6a36e: startup date [Thu Mar 28 22:43:55 UTC 2019]; root of context hierarchy
Unhandled Exception: Apache.Ignite.Core.Common.IgniteException: Java exception occurred [class=java.lang.NoSuchFieldError, message=logger] ---> Apache.Ignite.Core.Com
mon.JavaException: java.lang.NoSuchFieldError: logger
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:727)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:381)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
at org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:751)
at org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:809)
at org.apache.ignite.internal.processors.platform.PlatformIgnition.configuration(PlatformIgnition.java:153)
at org.apache.ignite.internal.processors.platform.PlatformIgnition.start(PlatformIgnition.java:68)
at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.ExceptionCheck()
at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedUtils.IgnitionStart(Env env, String cfgPath, String gridName, Boolean clientMode, Boolean userLogger, Int64 igniteId,
Boolean redirectConsole)
at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
--- End of inner exception stack trace ---
at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
at UtilityClick.ProductService.Initializer.<>c__DisplayClass0_0.<Init>b__1(IServiceProvider sp) in /src/ProductService/Initializer.cs:line 102
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitFactory(FactoryCallSite factoryCallSite, ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitScoped(ScopedCallSite scopedCallSite, ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitSingleton(SingletonCallSite singletonCallSite, ServiceProviderEngineScope sc
ope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.DynamicServiceProviderEngine.<>c__DisplayClass1_0.<RealizeService>b__0(ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.GetService(Type serviceType, ServiceProviderEngineScope serviceProviderEngineScope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.GetService(Type serviceType)
at Microsoft.Extensions.DependencyInjection.ServiceProvider.GetService(Type serviceType)
at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetService[T](IServiceProvider provider)
at UtilityClick.ProductService.Initializer.Init(IServiceCollection serviceCollection) in /src/ProductService/Initializer.cs:line 123
at UtilityClick.ProductService.ApiStartup.ConfigureServices(IServiceCollection services) in /src/ProductService/ApiStartup.cs:line 50
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.ConfigureServices(IServiceCollection services)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.EnsureApplicationServices()
at Microsoft.AspNetCore.Hosting.Internal.WebHost.Initialize()
at Microsoft.AspNetCore.Hosting.WebHostBuilder.Build()
at UtilityClick.ProductService.Program.Main(String[] args) in /src/ProductService/Program.cs:line 14
</code></pre>
<p>Do you have any idea why it might happen? Which logger is it complaining about? </p>
<p>Kubernetes is running Linux containers, locally I'm using Windows 10. </p>
<p>Two more observations: </p>
<ol>
<li><p>When I specify the config file name as <code>kubernetes.config</code>, the node launches successfully locally, but in Kubernetes it fails with an error which suggests that the URL "kubernetes.config" is missing the scheme part.</p></li>
<li><p>JvmClasspath: I have to add "ignite-kubernetes-2.7.0.jar", otherwise the JAR is not found although it is located in the same dir as the rest of the Ignite classes. Adding the next two entries does not make any difference.</p></li>
</ol>
| <p>It looks like you have different versions of <code>spring-core</code> and <code>spring-beans</code> in your Java classpath for some reason. There's no reason for them not to match exactly. I think it should print the full classpath somewhere; you can look it up.</p>
<p>In 2.7, Apache Ignite ships a differing version of the Spring libs in the <code>ignite-spring-data_2.0</code> submodule. Maybe it got into your classpath by accident? Please remove that dir for good - you'll not need it when using .Net.</p>
<p><em>UPD:</em> With your reproducer project, it looks like I'm starting successfully:</p>
<pre><code>
Directories in the working directory:
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/libs
Files in the working directory:
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.dll
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/log4net.config
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.deps.json
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.pdb
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.runtimeconfig.dev.json
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/appsettings.json
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.runtimeconfig.json
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/appsettings.development.json
log4j:WARN No appenders could be found for logger (org.springframework.core.env.StandardEnvironment).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2019-04-03 17:29:02,095][INFO ][main][IgniteKernal]
>>> __________ ________________
>>> / _/ ___/ |/ / _/_ __/ __/
>>> _/ // (7 7 // / / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/
>>>
>>> ver. 2.7.0#20181130-sha1:256ae401
>>> 2018 Copyright(C) Apache Software Foundation
>>>
>>> Ignite documentation: http://ignite.apache.org
[2019-04-03 17:29:02,096][INFO ][main][IgniteKernal] Config URL: n/a
[2019-04-03 17:29:02,110][INFO ][main][IgniteKernal] IgniteConfiguration [igniteInstanceName=null, pubPoolSize=8, svcPoolSize=8, callbackPoolSize=8, stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, igfsPoolSize=8, dataStreamerPoolSize=8, utilityCachePoolSize=8, utilityCacheKeepAliveTime=60000, p2pPoolSize=2, qryPoolSize=8, igniteHome=/home/gridgain/Downloads/apache-ignite-2.7.0-bin, igniteWorkDir=/home/gridgain/Downloads/apache-ignite-2.7.0-bin/work, mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@3e58a80e, nodeId=f5a4c49b-82c9-44df-ba8b-5ff97cad0a1f, marsh=BinaryMarshaller [], marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000, sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=10000, metricsUpdateFreq=2000, metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=600000, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null], segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=10000, commSpi=TcpCommunicationSpi [connectGate=null, connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@4cc8eb05, enableForcibleNodeKill=false, enableTroubleshootingLog=false, locAddr=null, locHost=null, locPort=47100, locPortRange=100, shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=600000, connTimeout=5000, maxConnTimeout=600000, reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, nioSrvr=null, shmemSrv=null, usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32, unackedMsgsBufSize=0, sockWriteTimeout=2000, boundTcpPort=-1, boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null, ctxInitLatch=java.util.concurrent.CountDownLatch@51f116b8[Count = 1], stopping=false], evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@19d481b, colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [], indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@7690781, addrRslvr=null, encryptionSpi=org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi@77eca502, clientMode=false, rebalanceThreadPoolSize=1, txCfg=TransactionConfiguration [txSerEnabled=false, dfltIsolation=REPEATABLE_READ, dfltConcurrency=PESSIMISTIC, dfltTxTimeout=0, txTimeoutOnPartitionMapExchange=0, pessimisticTxLogSize=0, pessimisticTxLogLinger=10000, tmLookupClsName=null, txManagerFactory=null, useJtaSync=false], cacheSanityCheckEnabled=true, discoStartupDelay=60000, deployMode=SHARED, p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100, timeSrvPortRange=100, failureDetectionTimeout=10000, sysWorkerBlockedTimeout=null, clientFailureDetectionTimeout=30000, metricsLogFreq=60000, hadoopCfg=null, connectorCfg=ConnectorConfiguration [jettyPath=null, host=null, port=11211, noDelay=true, directBuf=false, sndBufSize=32768, rcvBufSize=32768, idleQryCurTimeout=600000, idleQryCurCheckFreq=60000, sndQueueLimit=0, selectorCnt=4, idleTimeout=7000, sslEnabled=false, sslClientAuth=false, sslCtxFactory=null, sslFactory=null, portRange=100, threadPoolSize=8, msgInterceptor=null], odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null, grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=PlatformDotNetConfiguration [binaryCfg=null], binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=DataStorageConfiguration 
[sysRegionInitSize=41943040, sysRegionMaxSize=104857600, pageSize=0, concLvl=0, dfltDataRegConf=DataRegionConfiguration [name=default, maxSize=6720133529, initSize=268435456, swapPath=null, pageEvictionMode=DISABLED, evictionThreshold=0.9, emptyPagesPoolSize=100, metricsEnabled=false, metricsSubIntervalCount=5, metricsRateTimeInterval=60000, persistenceEnabled=false, checkpointPageBufSize=0], dataRegions=null, storagePath=null, checkpointFreq=180000, lockWaitTime=10000, checkpointThreads=4, checkpointWriteOrder=SEQUENTIAL, walHistSize=20, maxWalArchiveSize=1073741824, walSegments=10, walSegmentSize=67108864, walPath=db/wal, walArchivePath=db/wal/archive, metricsEnabled=false, walMode=LOG_ONLY, walTlbSize=131072, walBuffSize=0, walFlushFreq=2000, walFsyncDelay=1000, walRecordIterBuffSize=67108864, alwaysWriteFullPages=false, fileIOFactory=org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory@59af0466, metricsSubIntervalCnt=5, metricsRateTimeInterval=60000, walAutoArchiveAfterInactivity=-1, writeThrottlingEnabled=false, walCompactionEnabled=false, walCompactionLevel=1, checkpointReadLockTimeout=null], activeOnStart=true, autoActivation=true, longQryWarnTimeout=3000, sqlConnCfg=null, cliConnCfg=ClientConnectorConfiguration [host=null, port=10800, portRange=100, sockSndBufSize=0, sockRcvBufSize=0, tcpNoDelay=true, maxOpenCursorsPerConn=128, threadPoolSize=8, idleTimeout=0, jdbcEnabled=true, odbcEnabled=true, thinCliEnabled=true, sslEnabled=false, useIgniteSslCtxFactory=true, sslClientAuth=false, sslCtxFactory=null], mvccVacuumThreadCnt=2, mvccVacuumFreq=5000, authEnabled=false, failureHnd=null, commFailureRslvr=null]
[2019-04-03 17:29:02,110][INFO ][main][IgniteKernal] Daemon mode: off
[2019-04-03 17:29:02,110][INFO ][main][IgniteKernal] OS: Linux 4.15.0-46-generic amd64
[2019-04-03 17:29:02,110][INFO ][main][IgniteKernal] OS user: gridgain
[2019-04-03 17:29:02,111][INFO ][main][IgniteKernal] PID: 8539
[2019-04-03 17:29:02,111][INFO ][main][IgniteKernal] Language runtime: Java Platform API Specification ver. 1.8
[2019-04-03 17:29:02,111][INFO ][main][IgniteKernal] VM information: Java(TM) SE Runtime Environment 1.8.0_144-b01 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.144-b01
[2019-04-03 17:29:02,112][INFO ][main][IgniteKernal] VM total memory: 0.48GB
[2019-04-03 17:29:02,112][INFO ][main][IgniteKernal] Remote Management [restart: off, REST: on, JMX (remote: off)]
[2019-04-03 17:29:02,113][INFO ][main][IgniteKernal] Logger: Log4JLogger [quiet=false, config=/home/gridgain/Downloads/apache-ignite-2.7.0-bin/config/ignite-log4j.xml]
[2019-04-03 17:29:02,113][INFO ][main][IgniteKernal] IGNITE_HOME=/home/gridgain/Downloads/apache-ignite-2.7.0-bin
[2019-04-03 17:29:02,113][INFO ][main][IgniteKernal] VM arguments: [-Djava.net.preferIPv4Stack=true, -Xms512m, -Xmx512m, -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true, -DIGNITE_QUIET=false]
[2019-04-03 17:29:02,113][INFO ][main][IgniteKernal] System cache's DataRegion size is configured to 40 MB. Use DataStorageConfiguration.systemRegionInitialSize property to change the setting.
[2019-04-03 17:29:02,119][INFO ][main][IgniteKernal] Configured caches [in 'sysMemPlc' dataRegion: ['ignite-sys-cache']]
[2019-04-03 17:29:02,122][INFO ][main][IgniteKernal] 3-rd party licenses can be found at: /home/gridgain/Downloads/apache-ignite-2.7.0-bin/libs/licenses
[2019-04-03 17:29:02,123][INFO ][main][IgniteKernal] Local node user attribute [service=ProductService]
[2019-04-03 17:29:02,157][INFO ][main][IgnitePluginProcessor] Configured plugins:
[2019-04-03 17:29:02,157][INFO ][main][IgnitePluginProcessor] ^-- None
[2019-04-03 17:29:02,157][INFO ][main][IgnitePluginProcessor]
[2019-04-03 17:29:02,158][INFO ][main][FailureProcessor] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]]]
[2019-04-03 17:29:02,186][INFO ][main][TcpCommunicationSpi] Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
[2019-04-03 17:29:07,195][WARN ][main][TcpCommunicationSpi] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[2019-04-03 17:29:07,232][WARN ][main][NoopCheckpointSpi] Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
[2019-04-03 17:29:07,254][WARN ][main][GridCollisionManager] Collision resolution is disabled (all jobs will be activated upon arrival).
[2019-04-03 17:29:07,293][INFO ][main][IgniteKernal] Security status [authentication=off, tls/ssl=off]
[2019-04-03 17:29:07,430][WARN ][main][IgniteCacheDatabaseSharedManager] DataRegionConfiguration.maxWalArchiveSize instead DataRegionConfiguration.walHistorySize would be used for removing old archive wal files
[2019-04-03 17:29:07,444][INFO ][main][PartitionsEvictManager] Evict partition permits=2
[2019-04-03 17:29:07,593][INFO ][main][ClientListenerProcessor] Client connector processor has started on TCP port 10800
[2019-04-03 17:29:07,624][INFO ][main][GridTcpRestProtocol] Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
[2019-04-03 17:29:07,643][WARN ][main][PlatformProcessorImpl] Marshaller is automatically set to o.a.i.i.binary.BinaryMarshaller (other nodes must have the same marshaller type).
[2019-04-03 17:29:07,675][INFO ][main][IgniteKernal] Non-loopback local IPs: 172.17.0.1, 172.25.4.188
[2019-04-03 17:29:07,675][INFO ][main][IgniteKernal] Enabled local MACs: 0242929A3D04, D481D72208BB
[2019-04-03 17:29:07,700][INFO ][main][TcpDiscoverySpi] Connection check threshold is calculated: 10000
[2019-04-03 17:29:07,702][INFO ][main][TcpDiscoverySpi] Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=f5a4c49b-82c9-44df-ba8b-5ff97cad0a1f]
[2019-04-03 17:29:07,844][ERROR][main][TcpDiscoverySpi] Failed to get registered addresses from IP finder on start (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries).
class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses.
at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
</code></pre>
<p>(but that's expected)</p>
<p>Have you tried moving <code>ignite-kubernetes</code> from <code>libs/optional/</code> to <code>libs/</code> instead of manually adding its JARs to the classpath? Do you have anything else in your <code>libs/</code>?</p>
|
<p>I want to deploy a service with three replicas, each having a readiness probe. One pod will start working (sending data at a port on which readiness is tested) only when the other two pods also spin up. All the three pods need to spin up, register their IP in an internal service, and then they will be discoverable.</p>
<p>The readiness probe appears to gate pod creation sequentially, so only one pod spins up. This creates a deadlock: the first pod waits for the other two pods to at least start running (even if they don't start the application), while Kubernetes does not spin up the other two pods until the readiness of the first pod is satisfied.</p>
<p>My readiness config is:</p>
<pre><code>readinessProbe=ExecProbe(
execute=ExecAction(command=["curl", "localhost:2004"]),
initialDelaySeconds=120,
timeoutSeconds=10,
periodSeconds=10,
successThreshold=1,
failureThreshold=10
)
</code></pre>
<p>I want my pods to spin up, even if the current ones are running but their readiness is not successful. Or maybe I should use something other than readiness?</p>
| <p>If you are using StatefulSet, use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management" rel="nofollow noreferrer">parallel pod management</a> to allow Kubernetes to create the replicas in parallel without waiting for previous pods to be ready.</p>
<p>Set <code>.spec.podManagementPolicy: Parallel</code> in the manifest of the StatefulSet.</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql-db
spec:
podManagementPolicy: Parallel
replicas: 3
<omitted>
</code></pre>
|
<p>I am trying to upgrade the nodes in my Kubernetes cluster. When I go to do that, I get a notification saying:</p>
<blockquote>
<p>PDB istio-ingressgateway in namespace istio-system allows 0 pod disruptions</p>
</blockquote>
<p>PDB is Pod Disruption Budget. Basically, Istio is saying that it can't lose that pod and keep things working right.</p>
<p>There is a really long <a href="https://github.com/istio/istio/issues/12602" rel="nofollow noreferrer">discussion</a> about this over on the Istio GitHub issues. This issue has been ongoing for over 2 years. Most of the discussions center around saying that the defaults are wrong. There are a few workaround suggestions, but most of them are pre 1.4 (and the introduction of Istiod). The closest workaround I could find that might be compatible with the current version is to <a href="https://github.com/istio/istio/issues/12602#issuecomment-701038118" rel="nofollow noreferrer">add some additional replicas</a> to the IstioOperator.</p>
<p>I tried that with a patch operation (run in PowerShell):</p>
<pre><code>kubectl patch IstioOperator installed-state --patch $(Get-Content istio-ha-patch.yaml -Raw) --type=merge -n istio-system
</code></pre>
<p>Where <code>istio-ha-patch.yaml</code> is:</p>
<pre><code>spec:
components:
egressGateways:
- enabled: true
k8s:
hpaSpec:
minReplicas: 2
name: istio-egressgateway
ingressGateways:
- enabled: true
k8s:
hpaSpec:
minReplicas: 2
name: istio-ingressgateway
pilot:
enabled: true
k8s:
hpaSpec:
minReplicas: 2
</code></pre>
<p>I applied that, and checked the yaml of the IstioOperator, and it did apply to the resource's yaml. But the replica count for the ingress pod did not go up. (It stayed at 1 of 1.)</p>
<p>At this point, my only option is to uninstall Istio, apply my update then re-install Istio. (Yuck)</p>
<p><strong>Is there anyway to get the replica count of Istio's ingress gateway up such that I can keep it running as I do a rolling node upgrade?</strong></p>
| <p>Turns out that if you did not install Istio using the Istio Kubernetes Operator, you cannot use the option I tried.</p>
<p>Once I uninstalled Istio and reinstalled it using the Operator, then I was able to get it to work.</p>
<p>Though I did not use the Patch operation, I just did a <code>kubectl apply -f istio-operator-spec.yaml</code> where <code>istio-operator-spec.yaml</code> is:</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
name: istio-controlplane
namespace: istio-system
spec:
components:
ingressGateways:
- enabled: true
k8s:
hpaSpec:
minReplicas: 2
name: istio-ingressgateway
pilot:
enabled: true
k8s:
hpaSpec:
minReplicas: 2
profile: default
</code></pre>
|
<p>I have a YAML file defining multiple Kubernetes resources of various types (separated with <code>---</code> according to the YAML spec):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
# ...
spec:
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
# ...
rules:
# ...
---
# etc
</code></pre>
<p>Now, I want to parse this list into a slice of <code>client.Object</code> instances, so I can apply some filtering and transforms, and eventually send them to the cluster using</p>
<pre class="lang-golang prettyprint-override"><code>myClient.Patch( # myClient is a client.Client instance
ctx,
object, # object needs to be a client.Object
client.Apply,
client.ForceOwnership,
client.FieldOwner("my.operator.acme.inc"),
)
</code></pre>
<p>However, I can't for the life of me figure out how to get from the YAML doc to <code>[]client.Object</code>. The following gets me <em>almost</em> there:</p>
<pre class="lang-golang prettyprint-override"><code>results := make([]client.Object, 0)
scheme := runtime.NewScheme()
clientgoscheme.AddToScheme(scheme)
apiextensionsv1beta1.AddToScheme(scheme)
apiextensionsv1.AddToScheme(scheme)
decode := serializer.NewCodecFactory(scheme).UniversalDeserializer().Decode
data, err := ioutil.ReadAll(reader)
if err != nil {
return nil, err
}
for _, doc := range strings.Split(string(data), "---") {
object, gvk, err := decode([]byte(doc), nil, nil)
if err != nil {
return nil, err
}
// object is now a runtime.Object, and gvk is a schema.GroupVersionKind
// taken together, they have all the information I need to expose a
// client.Object (I think) but I have no idea how to actually construct a
// type that implements that interface
result = append(result, ?????)
}
return result, nil
</code></pre>
<p>I am totally open to other parser implementations, of course, but I haven't found anything that gets me any further. But this seems like it <em>must</em> be a solved problem in the Kubernetes world... so how do I do it?</p>
| <p>I was finally able to make it work! Here's how:</p>
<pre class="lang-golang prettyprint-override"><code>import (
"k8s.io/client-go/kubernetes/scheme"
"sigs.k8s.io/controller-runtime/pkg/client"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)
func deserialize(data []byte) (client.Object, error) {
    // Register the apiextensions types on top of the default client-go scheme.
    apiextensionsv1.AddToScheme(scheme.Scheme)
    apiextensionsv1beta1.AddToScheme(scheme.Scheme)

    decoder := scheme.Codecs.UniversalDeserializer()
    runtimeObject, groupVersionKind, err := decoder.Decode(data, nil, nil)
    if err != nil {
        return nil, err
    }
    _ = groupVersionKind // available here if you need it; not required for the cast

    // For registered kinds the concrete types returned by the decoder also
    // implement client.Object, so this type assertion is safe in practice.
    return runtimeObject.(client.Object), nil
}
</code></pre>
<p>A couple of things that seem key (but I'm not sure my understanding is 100% correct here):</p>
<ul>
<li>while the declared return type of <code>decoder.Decode</code> is <code>(runtime.Object, *scheme.GroupVersionKind, error)</code>, the returned first item of that tuple is actually a <code>client.Object</code> and can be cast as such without problems.</li>
<li>By using <code>scheme.Scheme</code> as the baseline before adding the <code>apiextensions.k8s.io</code> groups, I get all the "standard" resources registered for free.</li>
<li>If I use <code>scheme.Codecs.UniversalDecoder()</code>, I get errors about <code> no kind "CustomResourceDefinition" is registered for the internal version of group "apiextensions.k8s.io" in scheme "pkg/runtime/scheme.go:100"</code>, and the returned <code>groupVersionKind</code> instance shows <code>__internal</code> for version. No idea why this happens, or why it <em>doesn't</em> happen when I use the <code>UniversalDeserializer()</code> instead.</li>
</ul>
|
<p>I have a deployment (starterservice) that deploys a single pod with a persistent volume claim. This works. However restart fails:</p>
<pre><code>kubectl rollout restart deploy starterservice
</code></pre>
<p>The new pod is started before the old one has terminated and it cannot attach the volume (Multi-Attach error for volume "pvc-..."). I can work around this by scaling to zero and then back up to 1 instead:</p>
<pre><code>kubectl scale --replicas=0 deployment/starterservice
kubectl scale --replicas=1 deployment/starterservice
</code></pre>
<p>I was wondering if there was a way to get <code>kubectl rollout restart</code> to wait for the old pod to terminate before starting a new one? Tx.</p>
| <p>You need to set the deployment strategy to <code>Recreate</code>.</p>
<pre><code>spec:
strategy:
type: Recreate
</code></pre>
<p>The difference between the <code>Recreate</code> strategy compared to <code>RollingUpdate</code> (default) is that <code>Recreate</code> will terminate the old pod before creating new one while <code>RollingUpdate</code> will create new pod before terminating the old one.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment</a></p>
|
<p>I am currently in the process of setting up <a href="https://sentry.io/" rel="nofollow noreferrer">sentry.io</a>, but I am having problems setting it up in OpenShift 3.11.</p>
<p>I have pods running for <code>sentry</code> itself, <code>postgresql</code>, <code>redis</code> and <code>memcache</code>, but according to the log messages they are not able to communicate with each other.</p>
<pre><code>sentry.exceptions.InvalidConfiguration: Error 111 connecting to 127.0.0.1:6379. Connection refused.
</code></pre>
<p>Do I need to create a network like in Docker, or should the pods (all in the same namespace) be able to talk to each other by default? I have admin rights for the complete project, so I can also work with the console and not only the web interface.</p>
<p>Best wishes</p>
<p><strong>EDIT:</strong> Adding the deployment config for sentry and its service and, for the sake of simplicity, the postgres config and service. I also blanked out some unnecessary information with the keyword <code>BLANK</code>; if I went overboard, please let me know and I'll look it up.</p>
<p>Deployment config for <code>sentry</code>:</p>
<pre><code>apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
generation: 20
labels:
app: sentry
name: sentry
namespace: test
resourceVersion: '506667843'
selfLink: BLANK
uid: BLANK
spec:
replicas: 1
selector:
app: sentry
deploymentconfig: sentry
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: sentry
deploymentconfig: sentry
spec:
containers:
- env:
- name: SENTRY_SECRET_KEY
value: Iamsosecret
- name: C_FORCE_ROOT
value: '1'
- name: SENTRY_FILESTORE_DIR
value: /var/lib/sentry/files/data
image: BLANK
imagePullPolicy: Always
name: sentry
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/sentry/files
name: sentry-1
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: sentry-1
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- sentry
from:
kind: ImageStreamTag
name: 'sentry:latest'
namespace: catcloud
lastTriggeredImage: BLANK
type: ImageChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: Deployment config has minimum availability.
status: 'True'
type: Available
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: replication controller "sentry-19" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 19
observedGeneration: 20
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
</code></pre>
<p>Service for <code>sentry</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
labels:
app: sentry
name: sentry
namespace: test
resourceVersion: '505555608'
selfLink: BLANK
uid: BLANK
spec:
clusterIP: BLANK
ports:
- name: 9000-tcp
port: 9000
protocol: TCP
targetPort: 9000
selector:
deploymentconfig: sentry
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>Deployment config for <code>postgresql</code>:</p>
<pre><code>apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
generation: 10
labels:
app: postgres
type: backend
name: postgres
namespace: test
resourceVersion: '506664185'
selfLink: BLANK
uid: BLANK
spec:
replicas: 1
selector:
app: postgres
deploymentconfig: postgres
type: backend
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: postgres
deploymentconfig: postgres
type: backend
spec:
containers:
- env:
- name: PGDATA
value: /var/lib/postgresql/data/sql
- name: POSTGRES_HOST_AUTH_METHOD
value: trust
- name: POSTGRESQL_USER
value: sentry
- name: POSTGRESQL_PASSWORD
value: sentry
- name: POSTGRESQL_DATABASE
value: sentry
image: BLANK
imagePullPolicy: Always
name: postgres
ports:
- containerPort: 5432
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: volume-uirge
subPath: sql
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 2000020900
terminationGracePeriodSeconds: 30
volumes:
- name: volume-uirge
persistentVolumeClaim:
claimName: postgressql
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- postgres
from:
kind: ImageStreamTag
name: 'postgres:latest'
namespace: catcloud
lastTriggeredImage: BLANK
type: ImageChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: Deployment config has minimum availability.
status: 'True'
type: Available
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: replication controller "postgres-9" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 9
observedGeneration: 10
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
</code></pre>
<p>Service config <code>postgresql</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
labels:
app: postgres
type: backend
name: postgres
namespace: catcloud
resourceVersion: '506548841'
selfLink: /api/v1/namespaces/catcloud/services/postgres
uid: BLANK
spec:
clusterIP: BLANK
ports:
- name: 5432-tcp
port: 5432
protocol: TCP
targetPort: 5432
selector:
deploymentconfig: postgres
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
| <p>Pods (even in the same namespace) are not able to talk <strong>directly</strong> to each other by default. You need to create a <code>Service</code> in order to allow a pod to receive connections from another pod. In general, one pod connects to another pod via the latter's service, as I illustrated below:</p>
<p><a href="https://i.stack.imgur.com/mV4ax.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mV4ax.png" alt="enter image description here" /></a></p>
<p>The connection info would look something like <code><servicename>:<serviceport></code> (e.g. <code>elasticsearch-master:9200</code>) rather than <code>localhost:port</code>.</p>
<p>You can read <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a> for further info on a service.</p>
<p>N.B: <code>localhost:port</code> will only work for containers running inside the same pod to connect to each other, just like how nginx connects to gravitee-mgmt-api and gravitee-mgmt-ui in my illustration above.</p>
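<p>As a concrete, hedged illustration — the exact environment variable names depend on the Sentry image you are running, so treat them as assumptions — the sentry container would then reference the <code>postgres</code> and <code>redis</code> Services by name instead of <code>127.0.0.1</code>:</p>
<pre><code>      containers:
        - name: sentry
          env:
            - name: SENTRY_POSTGRES_HOST
              value: postgres        # name of the postgresql Service shown above
            - name: SENTRY_POSTGRES_PORT
              value: '5432'
            - name: SENTRY_REDIS_HOST
              value: redis           # assumes a Service named "redis" exists for the redis pod
            - name: SENTRY_REDIS_PORT
              value: '6379'
</code></pre>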
|
<p>I have the following <code>RoleBinding</code> (it was deployed by Helm:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations:
meta.helm.sh/release-name: environment-namespaces
meta.helm.sh/release-namespace: namespace-metadata
creationTimestamp: "2021-04-23T17:16:50Z"
labels:
app.kubernetes.io/managed-by: Helm
name: SA-DevK8s-admin
namespace: dev-my-product-name-here
resourceVersion: "29221536"
selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/dev-my-product-name-here/rolebindings/SA-DevK8s-admin
uid: 4818d6ed-9320-408c-82c3-51e627d9f375
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: [email protected]
</code></pre>
<p>When I login to the cluster as <code>[email protected]</code> and run <code>kubectl get pods -n dev-my-product-name-here</code> it get the following error:</p>
<blockquote>
<p>Error from server (Forbidden): pods is forbidden: User "[email protected]" cannot list resource "pods" in API group "" in the namespace "dev-my-product-name-here"</p>
</blockquote>
<p><strong>Shouldn't a user who has the ClusterRole of admin in a namespace be able to list the pods for that namespace?</strong></p>
| <p><strong>Case Matters!!!!</strong></p>
<p>Once I changed the user to be <code>[email protected]</code> (instead of <code>[email protected]</code>), it all started working correctly!</p>
|
<p>I am reading the README file for <a href="https://github.com/prometheus-operator/kube-prometheus#prerequisites" rel="nofollow noreferrer">kube-prometheus</a> and is confused by the following passage:</p>
<blockquote>
<p>This stack provides resource metrics by deploying the Prometheus
Adapter. This adapter is an Extension API Server and Kubernetes needs
to be have this feature enabled, otherwise the adapter has no effect,
but is still deployed.</p>
</blockquote>
<p>What does it mean to have the Extension API Server feature enabled? I have consulted the page on <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">feature gates in K8s</a> and on <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/setup-extension-api-server/#setup-an-extension-api-server-to-work-with-the-aggregation-layer" rel="nofollow noreferrer">setting up an Extension API Server</a> but neither seem to indicate an existence of a dedicated feature to enable for Extension API Servers.</p>
<p>What am I missing?</p>
<p>P.S.</p>
<p>I use an Azure managed K8s cluster.</p>
| <p>I think the documentation you need is the section on enabling the aggregation layer via kube-apiserver flags:</p>
<p><a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#enable-kubernetes-apiserver-flags" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#enable-kubernetes-apiserver-flags</a></p>
<p>It looks like the flags needed are</p>
<pre><code>--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
</code></pre>
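<p>Since this is a managed (AKS) cluster where you typically cannot set kube-apiserver flags yourself, a quick way to check whether the aggregation layer is already active (my suggestion, not part of the linked docs) is to list the registered APIService objects:</p>
<pre><code># If this returns APIService resources, the aggregation layer is enabled
kubectl get apiservices
</code></pre>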
|
<p>I want to capture the <code>subdomain</code> and rewrite the URL with <code>/subdomain</code>. For example, <code>bhautik.bhau.tk</code> should rewrite to <code>bhau.tk/bhautik</code>.</p>
<p>I also tried the group syntax from <a href="https://github.com/google/re2/wiki/Syntax" rel="nofollow noreferrer">https://github.com/google/re2/wiki/Syntax</a>.</p>
<p>Here is my <code>nginx</code> ingress config:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: subdomain
namespace: subdomain
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
# nginx.ingress.kubernetes.io/rewrite-target: /$sub
nginx.ingress.kubernetes.io/server-snippet: |
set $prefix abcd;
if ($host ~ ^(\w+).bhau\.tk$) {
// TODO?
}
nginx.ingress.kubernetes.io/rewrite-target: /$prefix/$uri
spec:
rules:
- host: "*.bhau.tk"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: subdomain
port:
number: 80
</code></pre>
<p>How do I capture subdomain from $host?</p>
| <p>I believe you want a redirect instead of rewrite. Here is the <code>server-snippet</code> you need:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
if ($host ~ ^(?<subdom>\w+)\.(?<basedom>bhau\.tk)$) {
return 302 https://$basedom/$subdom/ ;
}
</code></pre>
<p>If you really want a rewrite where the URL that the user sees remains unchanged but instead the request will be routed to a subpath served by the same service:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
if ($host ~ ^(?<subdom>\w+)\.(?<basedom>bhau\.tk)$) {
rewrite ^/(.*)$ /$subdom/$1 ;
}
</code></pre>
<p>Remove the <code>rewrite-target</code> annotation that specifies <code>$prefix</code>. You don't need it.</p>
<p>The <code>?<capturename></code> and <code>$capturename</code> pair is the trick you are looking for.</p>
|
<p>I'll describe my target and then show what I have done to achieve it... my goal is to:</p>
<ul>
<li>create a configmap that holds a path for properties file</li>
<li>create a deployment, that has a volume mounting the file from the path configured in configmap</li>
</ul>
<p>What I had done:</p>
<p>ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
my_properties_file_name: "my.properties"
</code></pre>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-client-deployment
spec:
selector:
matchLabels:
app: my-client
  replicas: 1 # tells deployment to run 1 pod matching the template
template:
metadata:
labels:
app: my-client
spec:
containers:
- name: my-client-container
image: {{ .Values.image.client}}
imagePullPolicy: {{ .Values.pullPolicy.client }}
ports:
- containerPort: 80
env:
- name: MY_PROPERTIES_FILE_NAME
valueFrom:
configMapKeyRef:
name: my-configmap
key: my_properties_file_name
volumeMounts:
- name: config
mountPath: "/etc/config"
readOnly: true
imagePullSecrets:
- name: secret-private-registry
volumes:
# You set volumes at the Pod level, then mount them into containers inside that Pod
- name: config
configMap:
# Provide the name of the ConfigMap you want to mount.
name: my-configmap
# An array of keys from the ConfigMap to create as files
items:
- key: "my_properties_file_name"
path: "my.properties"
</code></pre>
<p>The result is a file named <code>my.properties</code> under <code>/etc/config</code>, BUT the content of that file is "my.properties" (the file name indicated in the ConfigMap), and not the content of the properties file as it actually exists on my local disk.</p>
<p>How can I mount that file, using it's path configured in a configmap?</p>
| <p>Put the content of the <code>my.properties</code> file directly inside the ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
my_properties_file_name: |
This is the content of the file.
It supports multiple lines but do take care of the indentation.
</code></pre>
<p>Or you can also use a <code>kubectl create configmap</code> command:</p>
<pre><code>kubectl create configmap my-configmap --from-file=my_properties_file_name=./my.properties
</code></pre>
<p>In either method, you are actually passing the snapshot of the content of the file on the localdisk to kubernetes to store. Any changes you make to the file on the localdisk won't be reflected unless you re-create the configmap.</p>
<p>The design of kubernetes allows running <code>kubectl</code> command against kubernetes cluster located on the other side of the globe so you can't simply mount a file on your localdisk to be accessible in realtime by the cluster. If you want such mechanism, you can't use a ConfigMap, but instead you would need to setup a shared volume that is mounted by both your local machine and the cluster for example using a NFS server.</p>
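<p>Once the pod is redeployed, you can verify what actually got mounted (deployment name taken from your manifest above):</p>
<pre><code>kubectl exec deploy/my-client-deployment -- cat /etc/config/my.properties
</code></pre>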
|
<p>I'm wondering about the best practices for architecting my Kubernetes clusters.
For 1 environment (e.g. production), what organisation should I have in my clusters?</p>
<p>Examples: 1 cluster per technology stack, 1 cluster per exposure area (internet, private...), 1 cluster with everything ... ?</p>
<p>Thanks for your help</p>
| <p>I'm not a Kubernetes expert, so I'll give you some generic guidance to help until someone who knows more weighs in.</p>
<ul>
<li>By technology stack - no. That wouldn't provide any value I can think of.</li>
<li>By 'exposure' - yes. If one cluster is compromised the damage will hopefully be limited to that cluster only.</li>
<li>By solution - yes.</li>
</ul>
<p><strong>Solution vs Technology Stack</strong></p>
<p>"Solution" is where you have a number of systems that exist to address a specific business problem or domain. This could be functional, e.g. finance vs CRM vs HR.</p>
<p>Technology stacks in the literal sense are not likely to be relevant. True, it's not uncommon for different solutions & systems to be composed of different technologies (is that what you meant?) - but that's usually a by-product, not the primary driver.</p>
<p>Let's say you have two major solutions (e.g. finance and CRM). It's likely that you will have situations that impact one but shouldn't impact the other.</p>
<ul>
<li><p>Planned functional changes: e.g. rolling out a major release. Object-oriented programmers and architects have had this nailed for years through designing systems that are cohesive but loosely coupled (see: <a href="https://stackoverflow.com/questions/3085285/difference-between-cohesion-and-coupling">Difference Between Cohesion and Coupling</a>), and through things like the Stable Dependencies Principle. Having both solutions dependent on the same cluster makes them coupled in that respect, which works against those principles.</p>
</li>
<li><p>Planned infrastructure changes: e.g. patching, maintenance, resource reallocation, etc.</p>
</li>
<li><p>Unplanned changes: e.g. un-planned outage, security breaches.</p>
</li>
</ul>
<p><strong>Conclusion</strong></p>
<p>Look at what will be running on the cluster(s), and what solutions they are part of, and consider separation along those lines.</p>
<p>The final answer might be a combination of both, some sort of balance between security concerns and solution (i.e. change) boundaries.</p>
|
<p>I have set up the Kubernetes CronJob to prevent concurrent runs <a href="https://stackoverflow.com/a/62892617/2096986">like here</a> using <code>parallelism: 1</code>, <code>completions: 1</code>, and <code>concurrencyPolicy: Forbid</code>. However, when I try to create a Job from it manually, I am allowed to do that.</p>
<pre><code>$ kubectl get cronjobs
...
$ kubectl create job new-cronjob-1642417446000 --from=cronjob/original-cronjob-name
job.batch/new-cronjob-1642417446000 created
$ kubectl create job new-cronjob-1642417446001 --from=cronjob/original-cronjob-name
job.batch/new-cronjob-1642417446001 created
</code></pre>
<p>I was expecting that a new Job would not be created, or that it would be created and fail with a state that references the <code>concurrencyPolicy</code>. Since the property <code>concurrencyPolicy</code> is part of the CronJob spec, not the PodSpec, it should prevent a new Job from being created. Why does it not?</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: cronjob-name
annotations:
argocd.argoproj.io/sync-wave: "1"
spec:
schedule: "0 * * * *"
suspend: false
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 3
concurrencyPolicy: Forbid
jobTemplate:
spec:
parallelism: 1
completions: 1
backoffLimit: 3
template:
spec:
restartPolicy: Never
</code></pre>
<p>After reading the <a href="https://kubernetes.io/docs/tasks/job/_print/#creating-a-cron-job" rel="nofollow noreferrer">official documentation</a> about the <code>kubectl create -f</code> I didn't find a way to prevent that. Is this behavior expected? If it is, I think I should check inside my Docker image (app written in Java) if there is already a cronjob running. How would I do that?</p>
| <p>The <code>concurrencyPolicy: Forbid</code> spec only prevents concurrent pod creations and executions of <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="nofollow noreferrer">the same <code>CronJob</code></a>. It does not apply across separate CronJobs even though they effectively execute the same commands using the same Docker image. As far as Kubernetes is concerned, they are different jobs. If this were Java, what Kubernetes does is <code>if (stringA == stringB)</code> and not <code>if (stringA.equals(stringB))</code>.</p>
<blockquote>
<p>If it is, I think I should check inside my Docker image (app written in Java) if there is already a cronjob running. How would I do that?</p>
</blockquote>
<p>One way to handle that is to use a distributed lock mechanism backed by a separate component such as Redis. Here is the link to the guide for using the Java Redis library Redisson for that purpose: <a href="https://github.com/redisson/redisson/wiki/8.-distributed-locks-and-synchronizers" rel="nofollow noreferrer">https://github.com/redisson/redisson/wiki/8.-distributed-locks-and-synchronizers</a>. Below is a code sample taken from that page:</p>
<pre><code>RLock lock = redisson.getLock("myLock");
// wait for lock aquisition up to 100 seconds
// and automatically unlock it after 10 seconds
boolean res = lock.tryLock(100, 10, TimeUnit.SECONDS);
if (res) {
// do operation
} else {
// some other execution is doing it so just chill
}
</code></pre>
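<p>For completeness, the <code>redisson</code> instance in that snippet is created from a config pointing at your Redis endpoint — inside the cluster that would normally be a Service name. A minimal sketch (the Service address below is an assumption):</p>
<pre><code>import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

Config config = new Config();
// Point at the in-cluster Redis Service (assumed name and namespace)
config.useSingleServer().setAddress("redis://redis.default.svc.cluster.local:6379");
RedissonClient redisson = Redisson.create(config);
</code></pre>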
|
<p>I have a few questions regarding how the random string part of a Kubernetes pod name is decided.</p>
<p>How is the pod-template-hash decided? (I understand that it is a random string generated by the Deployment controller.) But what exactly are the inputs which the Deployment controller considers before generating this random string, and
is there a maximum length this hash string is limited to?</p>
<p>The reason for asking: we are storing the complete pod name in a database and it cannot exceed a certain character length.</p>
<p>Most of the time I have seen the length be 10 characters. Can it go beyond 10 characters?</p>
| <p>10 characters? That is only the alphanumeric suffix of the ReplicaSet name. Pods under a ReplicaSet will have an additional suffix of a dash plus a 5-character alphanumeric string.</p>
<p>The name structure of a Pod will be different depending on the controller type:</p>
<ul>
<li>StatefulSet: StatefulSet name + "-" + ordinal number starting from 0</li>
<li>DaemonSet: DaemonSet name + "-" + 5 alphanumeric characters long string</li>
<li>Deployment: ReplicaSet name (which is Deployment name + "-" + 10 alphanumeric characters long string) + "-" + 5 alphanumeric characters long string</li>
</ul>
<p>But then, the full name of the Pods will also include the name of their controllers, which is rather arbitrary.</p>
<p>So, how do you proceed?</p>
<p>You just have to prepare the length of the column to be <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/overview/working-with-objects/names/" rel="nofollow noreferrer">the maximum length of a pod name, which is <strong>253 characters</strong></a>.</p>
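<p>To make the structure concrete, here are purely hypothetical example names (the hash strings are made up):</p>
<pre><code># Deployment "web"   -> ReplicaSet "web-5d8c7f9b6d" -> Pod "web-5d8c7f9b6d-x2k9p"
# StatefulSet "db"   -> Pods "db-0", "db-1", ...
# DaemonSet "logger" -> Pods "logger-ab1cd", "logger-ef2gh", ...
</code></pre>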
|
<p>When I add more than 50 paths in the Ingress file, I get the below error from Google Cloud Platform.</p>
<p><strong>"Error during sync: UpdateURLMap: googleapi: Error 413: Value for field 'resource.pathMatchers[0].pathRules' is too large: maximum size 50 element(s); actual size 51., fieldSizeTooLarge"</strong></p>
<p>We are using path-based Ingress through Traefik. This error comes from Google Cloud Platform.</p>
<p>A sample Ingress looks like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
traefik.frontend.rule.type: PathPrefixStrip
name: traefik-ingress
namespace: default
spec:
rules:
- host: domain-name.com
http:
paths:
- backend:
serviceName: default-http-backend
servicePort: 8080
path: /
- backend:
serviceName: foo1-service
servicePort: 8080
path: /foo1/*
- backend:
serviceName: foo2-service
servicePort: 8080
path: /foo2/*
- backend:
serviceName: foo3-service
servicePort: 8080
path: /foo3/*
</code></pre>
| <p>This is a hard limitation of the URLMap resource, <a href="https://cloud.google.com/load-balancing/docs/quotas" rel="nofollow noreferrer">which cannot be increased</a>.</p>
<blockquote>
<p>URL maps</p>
<p>Host rules per URL map - 50 - This limit cannot be increased.</p>
</blockquote>
<p>Here's a feature request to increase this limit: <a href="https://issuetracker.google.com/issues/126946582" rel="nofollow noreferrer">https://issuetracker.google.com/issues/126946582</a></p>
|
<p>In Kubernetes, I have one pod with two containers:</p>
<ul>
<li>container 1: nginx reverse proxy</li>
<li>container 2: myapp</li>
</ul>
<p>For testing purposes, I also have a docker-compose file that includes the two services:</p>
<ul>
<li>service 1: nginx reverse proxy</li>
<li>service 2: myapp</li>
</ul>
<p>The issue is that in Docker the nginx upstream host is the container name, while in Kubernetes it is localhost.
Here is a code snippet:</p>
<pre><code>//for docker, nginx.conf
...
upstream web{
server myapp:8080;
}
....
proxy_pass http://web;
//for Kube, nginx.conf
...
upstream web{
server localhost:8080;
}
....
proxy_pass http://web;
}
</code></pre>
<p>I would like to have one nginx.conf that supports both Kubernetes and docker-compose.
One way I can think of is to pass a runtime environment variable, so I can sed the upstream host in the entrypoint.sh.</p>
<p>Are there other ways to accomplish this?</p>
<p>Thank you</p>
| <p>I came across this question because we have the same issue.</p>
<p>I noticed the other answers suggested splitting nginx and the app-server into 2 different Services / Pods. Whilst that is certainly a solution, I rather like a self-contained Pod with both nginx and the app-server together. It works well for us, especially with php-fpm which can use a unix socket to communicate when in the same Pod which reduces internal http networking significantly.</p>
<p>Here is one idea:</p>
<p>Create a base nginx configuration file, for example, <code>proxy.conf</code> and setup docker to add it to the <code>conf.d</code> directory while building the image. The command is:</p>
<pre><code>ADD proxy.conf /etc/nginx/conf.d/proxy.conf
</code></pre>
<p>In the <code>proxy.conf</code>, omit the <code>upstream</code> configuration, leaving that for later. Create another file, a <code>run.sh</code> file and add it to the image using the <code>Dockerfile</code>. The file could be as follows:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/sh
(echo "upstream theservice { server $UPSTREAM_NAME:$UPSTREAM_PORT; }" && cat /etc/nginx/conf.d/proxy.conf) > proxy.conf.new
mv proxy.conf.new /etc/nginx/conf.d/proxy.conf
nginx -g 'daemon off;'
</code></pre>
<p>Finally, run the nginx from the <code>run.sh</code> script. The <code>Dockerfile</code> command:</p>
<pre><code>CMD /bin/sh run.sh
</code></pre>
<p>The trick is that since the container is initialized like that, the configuration file does not get permanently written and the configuration is updated accordingly. Set the ENV vars appropriately depending on whether using from docker-compose or Kubernetes.</p>
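<p>For example (a sketch only — the service and variable values are assumptions), in docker-compose you would point the proxy at the app service by name, while in the Kubernetes pod spec you would point it at localhost:</p>
<pre><code># docker-compose.yml (nginx service)
    environment:
      UPSTREAM_NAME: myapp
      UPSTREAM_PORT: "8080"

# Kubernetes pod spec (nginx container in the same pod as myapp)
    env:
      - name: UPSTREAM_NAME
        value: "127.0.0.1"
      - name: UPSTREAM_PORT
        value: "8080"
</code></pre>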
<hr>
<p>Let me also share a less <em>proper</em> solution which is more <em>hacky</em> but also simpler...</p>
<p>In Kubernetes, we change the docker image CMD so that it modifies the nginx config before the container starts. We use <code>sed</code> to update the upstream name to <code>localhost</code> to make it compatible with Kubernetes Pod networking. In our case it looks like this:</p>
<pre><code> - name: nginx
image: our_custom_nginx:1.14-alpine
command: ["/bin/ash"]
args: ["-c", "sed -i 's/web/127.0.0.1/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
</code></pre>
<p>While this workaround works, it breaks the immutable infrastructure principle, so may not be a good candidate for everyone.</p>
|
<p>We are using Spark & Cassandra in an application which is deployed on bare metal/VMs. To connect Spark to Cassandra, we are using the following properties in order to enable SSL:</p>
<pre><code>spark.cassandra.connection.ssl.keyStore.password
spark.cassandra.connection.ssl.keyStore.type
spark.cassandra.connection.ssl.protocol
spark.cassandra.connection.ssl.trustStore.path
spark.cassandra.connection.ssl.trustStore.password
spark.cassandra.connection.ssl.trustStore.type
spark.cassandra.connection.ssl.clientAuth.enabled
</code></pre>
<p>Now I am trying to migrate the same application to Kubernetes. I have the following questions:</p>
<ol>
<li>Do I need to change the above properties in order to connect Spark to the Cassandra cluster in Kubernetes?</li>
<li>Will the above properties work, or did I miss something?</li>
<li>Can anyone point to a document or link which can help me?</li>
</ol>
| <p>Yes, these properties will continue to work when you run your job on Kubernetes. The only thing that you need to take into account is that all properties with names ending in <code>.path</code> need to point to the actual files with the trust & key stores. On Kubernetes, you need to take care of exposing them as secrets, <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="nofollow noreferrer">mounted as files</a>. First you need to create a secret, like this:</p>
<pre><code>apiVersion: v1
data:
spark.truststore: base64-encoded truststore
kind: Secret
metadata:
name: spark-truststore
type: Opaque
</code></pre>
<p>and then in the spec, point to it:</p>
<pre><code> spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: "/some/path"
name: spark-truststore
readOnly: true
volumes:
- name: spark-truststore
secret:
secretName: spark-truststore
</code></pre>
<p>and point the configuration option to the given path, e.g. <code>/some/path/spark.truststore</code>.</p>
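<p>As a convenience (the file name is hypothetical), you can also create that secret straight from the truststore file instead of hand-encoding base64:</p>
<pre><code>kubectl create secret generic spark-truststore \
  --from-file=spark.truststore=./truststore.jks
</code></pre>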
|
<p>I can start an "interactive pod" using:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl run my-shell --rm -i --tty --image ubuntu -- bash
</code></pre>
<p>How can I add a <a href="https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/" rel="nofollow noreferrer">customized hosts file</a> for this pod?</p>
<p>That is, one or more entries in <code>hostAliases</code> which is defined in the pod manifest.</p>
<p>One option is to create a pod that runs some idle process:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: my-shell
spec:
restartPolicy: Never
hostAliases:
- ip: "8.8.8.8"
hostnames:
- "dns.google"
containers:
- name: my-shell
image: ubuntu
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
</code></pre>
<p>I can apply that using <code>kubectl apply</code> and then <code>kubectl exec</code> into the running pod.</p>
<p>Is it possible to more directly start an interactive pod with a specific pod spec?</p>
| <p>Add <code>--overrides='{ "spec": { "hostAliases": [ { "ip": "8.8.8.8", "hostnames": [ "dns.google" ] } ] } }'</code> to the <code>kubectl run</code> command:</p>
<pre><code>kubectl run my-shell --rm -i --tty --image ubuntu --overrides='{ "spec": { "hostAliases": [ { "ip": "8.8.8.8", "hostnames": [ "dns.google" ] } ] } }' -- bash
</code></pre>
<p>Reference: <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run</a></p>
|
<p>We have a <code>PVC</code> that is written to by many <code>k8s</code> <code>cronjobs</code>. We'd like to periodically copy this data locally. Ordinarily one would use <code>kubectl cp</code> to do such tasks, but since there's no actively running pod with the <code>PVC</code> mounted, this is not possible.</p>
<p>We've been using a modified version of this gist script <a href="https://gist.github.com/yuanying/3aa7d59dcce65470804ab43def646ab6" rel="nofollow noreferrer">kubectl-run-with-pvc.sh</a> to create a temporary pod (running <code>sleep 300</code>) and then <code>kubectl cp</code> from this temporary pod to get the <code>PVC</code> data. This "works" but seems kludgey.</p>
<p>Is there a more elegant way to achieve this?</p>
| <p>May I propose using NFS instead of mounting the PVC directly?</p>
<p>If you do not have an NFS server, you can run one inside k8s cluster using this image @ <a href="https://hub.docker.com/r/itsthenetwork/nfs-server-alpine" rel="nofollow noreferrer">https://hub.docker.com/r/itsthenetwork/nfs-server-alpine</a>. The in-cluster NFS server indeed uses PVC for its storage but your pods should mount using NFS instead.</p>
<p>Meaning, from <code>pod --> PVC</code> to <code>pod --> NFS --> PVC</code>.</p>
<p>Here is the script that I quite often use to created dedicated in-cluster NFS servers (just modify the variables at the top of the script accordingly):</p>
<pre><code>export NFS_NAME="nfs-share"
export NFS_SIZE="10Gi"
export NFS_SERVER_IMAGE="itsthenetwork/nfs-server-alpine:latest"
export STORAGE_CLASS="thin-disk"
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: ${NFS_NAME}
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
spec:
ports:
- name: tcp-2049
port: 2049
protocol: TCP
- name: udp-111
port: 111
protocol: UDP
selector:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
name: ${NFS_NAME}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: ${NFS_SIZE}
storageClassName: ${STORAGE_CLASS}
volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${NFS_NAME}
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
template:
metadata:
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
spec:
containers:
- name: nfs-server
image: ${NFS_SERVER_IMAGE}
ports:
- containerPort: 2049
name: tcp
- containerPort: 111
name: udp
securityContext:
privileged: true
env:
- name: SHARED_DIRECTORY
value: /nfsshare
volumeMounts:
- name: pvc
mountPath: /nfsshare
volumes:
- name: pvc
persistentVolumeClaim:
claimName: ${NFS_NAME}
EOF
</code></pre>
<p>To mount the NFS inside your pod, first you need to get its service IP:</p>
<pre><code>export NFS_NAME="nfs-share"
export NFS_IP=$(kubectl get --template={{.spec.clusterIP}} service/$NFS_NAME)
</code></pre>
<p>Then:</p>
<pre><code> containers:
- name: apache
image: apache
volumeMounts:
- mountPath: /var/www/html/
name: nfs-vol
subPath: html
volumes:
- name: nfs-vol
nfs:
server: $NFS_IP
path: /
</code></pre>
<p>This way, not only you have a permanently running pod (which is the NFS server pod) for you to do the <code>kubectl cp</code>, you also have the opportunity to mount the same NFS volume to multiple pods concurrently since NFS does not have the single-mount restriction that most PVC drivers have.</p>
<p>N.B: I have been using this in-cluster NFS server technique for almost 5 years with no issues supporting production-grade traffic volumes.</p>
|
<p>The official <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#cluster-mode" rel="nofollow noreferrer">spark documentation</a> only has information on the <code>spark-submit</code> method for deploying code to a spark cluster. It mentions we must prefix the address from kubernetes api server with <code>k8s://</code>. What should we do when deploying through <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator" rel="nofollow noreferrer">Spark Operator</a>?</p>
<p>For instance, if I have a basic PySpark application that starts up like this, how do I set the master:</p>
<pre><code>from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark import SparkConf, SparkContext
sc = SparkContext("local", "Big data App")
spark = SQLContext(sc)
spark_conf = SparkConf().setMaster('local').setAppName('app_name')
</code></pre>
<p>Here I have <code>local</code>; if I were running on a non-Kubernetes cluster I would specify the master address with the <code>spark://</code> prefix or <code>yarn</code>. Must I also use the <code>k8s://</code> prefix if deploying through the Spark Operator?
If not, what should be used for the master parameter?</p>
| <p>It's better not to use <code>setMaster</code> in the code, but instead specify it when running the code via spark-submit, something like this (see <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#submitting-applications-to-kubernetes" rel="nofollow noreferrer">documentation for details</a>):</p>
<pre class="lang-sh prettyprint-override"><code>./bin/spark-submit \
--master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
--deploy-mode cluster \
your_script.py
</code></pre>
<p>I haven't used Spark operator, but it should set master automatically, as I understand from the documentation.</p>
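<p>For illustration, with the operator you describe the job declaratively and never set the master yourself — a minimal <code>SparkApplication</code> sketch (image, file path, versions and service account are placeholders/assumptions):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: my-pyspark-app
spec:
  type: Python
  mode: cluster
  image: my-registry/my-spark-image:latest            # assumption
  mainApplicationFile: local:///opt/app/your_script.py # assumption
  sparkVersion: "3.1.1"
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark-operator-spark              # assumption
  executor:
    cores: 1
    instances: 2
    memory: 512m
</code></pre>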
<p>You also need to convert this code:</p>
<pre class="lang-py prettyprint-override"><code>sc = SparkContext("local", "Big data App")
spark = SQLContext(sc)
spark_conf = SparkConf().setMaster('local').setAppName('app_name')
</code></pre>
<p>to more modern (see <a href="https://spark.apache.org/docs/latest/sql-getting-started.html#starting-point-sparksession" rel="nofollow noreferrer">doc</a>):</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
</code></pre>
<p>as <code>SQLContext</code> is deprecated.</p>
<p>P.S. I recommend to get through first chapters of Learning Spark, 2ed that is <a href="https://databricks.com/p/ebook/learning-spark-from-oreilly" rel="nofollow noreferrer">freely available from the Databricks site</a>.</p>
|
<p>I've got a problem forwarding JMX from a Kubernetes pod to localhost.
Everything works fine when the port is forwarded to the same local port:</p>
<pre><code>kubectl port-forward ... 9010:9010 OR
kubectl port-forward ... 9010
</code></pre>
<p>However, when I try to forward to a different local port,</p>
<pre><code>kubectl port-forward ... 9011:9010
</code></pre>
<p>neither JConsole nor JMC can connect.</p>
<p>It is blocking me from profiling multiple applications at once.</p>
<p>JMC error:</p>
<pre><code>com.oracle.jmc.rjmx.ConnectionException caused by java.rmi.NoSuchObjectException: no such object in table
at com.oracle.jmc.rjmx.internal.RJMXConnection.connect(RJMXConnection.java:406)
at com.oracle.jmc.rjmx.internal.ServerHandle.doConnect(ServerHandle.java:88)
at com.oracle.jmc.rjmx.internal.ServerHandle.connect(ServerHandle.java:78)
at com.oracle.jmc.console.ui.editor.internal.ConsoleEditor$ConnectJob.run(ConsoleEditor.java:73)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.rmi.NoSuchObjectException: no such object in table
at java.rmi/sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:283)
at java.rmi/sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:260)
at java.rmi/sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
at java.management.rmi/javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source)
at java.management.rmi/javax.management.remote.rmi.RMIConnector.getConnection(RMIConnector.java:2105)
at java.management.rmi/javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:321)
at com.oracle.jmc.rjmx.internal.RJMXConnection.connectJmxConnector(RJMXConnection.java:451)
at com.oracle.jmc.rjmx.internal.RJMXConnection.establishConnection(RJMXConnection.java:427)
at com.oracle.jmc.rjmx.internal.RJMXConnection.connect(RJMXConnection.java:399)
... 4 more
</code></pre>
<p>jconsole error:</p>
<pre><code>Connection to localhost:9011 did not succeed.
</code></pre>
| <p>This is an RMI issue. The problem is that the RMI stub delivered to the JMX client is created for 9010 and so it fails when attempting to connect at 9011. There's a couple of decent solutions outlined <a href="https://stackoverflow.com/questions/15093376/jconsole-over-ssh-local-port-forwarding">here</a>. Another option is to switch to <a href="https://meteatamel.wordpress.com/2012/02/13/jmx-rmi-vs-jmxmp/" rel="nofollow noreferrer">JMXMP</a> which is a pure socket JMX protocol so port forwarding works without any additional workarounds.</p>
|
<p>Is their a way to create a ClusterRole using ClusterRolebinding that can provide permissions to create ClusterRoles/ClusterRolebindings and also add a condition somehow it can be limited to one namespace and cannot create resources in other namespaces?</p>
<p>Since, ClusterRole and ClusterRolebinding are not namespaced I'm looking for a way specifically for a way to provide permissions to create ClusterRole and ClusterRolebinding and then limit other resource creation specific to a namespace.</p>
<p>This cannot be achieved with RoleBinding since, it can only limit to namespace and cannot provide the permissions to create the non-namespaced resources.</p>
| <p>From what I understand this is what you want to achieve:</p>
<ol>
<li>You have a cluster admin access</li>
<li>You want to use this cluster admin access to create namespace admin(s)</li>
<li>You want these namespace admins to be able to grant access to other subject (e.g. users, groups or service accounts) to resources in that namespace.</li>
</ol>
<p>If yes, then here is something you can try.</p>
<p>By default, your Kubernetes cluster comes with a set of default <code>ClusterRole</code> objects. In particular there are two default cluster roles that you will focus on:</p>
<ol>
<li>edit</li>
<li>admin</li>
</ol>
<p>Binding <code>edit</code> cluster role to a subject either by using <code>RoleBinding</code> or <code>ClusterRoleBinding</code> gives the subject access to edit most common resources like pods, deployments, secrets etc.</p>
<p>The <code>admin</code> cluster role however contains the accesses contained by the <code>edit</code> cluster role as well as accesses to additional namespaced resources, in particular to two resources that would be useful to administer a namespace:</p>
<ol>
<li>Role</li>
<li>RoleBinding</li>
</ol>
<p>If you bind this <code>admin</code> cluster role using <code>RoleBinding</code> to a subject within a specific namespace, you effectively give that subject the capabilities to administer the namespace, including creating another <code>RoleBinding</code> within that namespace to give some other subjects accesses to that namespace.</p>
<p>To illustrate:</p>
<pre><code>You --(RoleBinding to admin ClusterRole)--> NamespaceAdmin
NamespaceAdmin --(RoleBinding to some Role or ClusterRole)--> OtherSubjects
</code></pre>
<p>Since <code>RoleBinding</code> is restricted to a specific namespace, the namespace admin will only have the <code>admin</code> accesses within that namespace only and cannot wreck havoc in other namespaces or at cluster level.</p>
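<p>For reference, a minimal sketch of such a binding (names are placeholders):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-admin
  namespace: your-namespace
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: namespace-admin-user   # placeholder subject
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
</code></pre>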
|
<p>I have recently started working with Kubernetes and Docker and still new with how it all works. I have made a ps1 script to run all the steps I need to do to build image and execute image on Kubernetes. </p>
<p>What I see is all steps work fine on ISE (except this one: "kubectl exec -it test-runner pwsh"). For this step alone, I have to run it on another PowerShell window. </p>
<p>When I run this step in ISE, the script keeps running without ever giving any error or stopping. </p>
<p>Does anyone know if it's a limitation with Kubernetes working in the ISE, or if there is a workaround to make it work?</p>
<p>Working with ISE is quick and saves me tons of time, so this really makes a difference when I have to copy, paste, enter this each time in a separate PowerShell window.</p>
<p>Thanks in advance for your help!</p>
<p>P.S: I looked at other suggested similar questions/answers and none of them seem to be related to Kubernetes not working on ISE. Hence this question. </p>
<p>command: </p>
<pre><code>kubectl exec -it test-runner pwsh
</code></pre>
<p>Expected (and actual when running from PowerShell console):</p>
<pre><code>----------------------
PS C:\windows\system32> kubectl exec -it test-runner pwsh
PowerShell 6.2.2
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/pscore6-docs
Type 'help' to get help.
PS /test>
-----------------------------
Actual (when running from PowerShell ISE):
PS C:\SourceCodeTLM\Apollo> kubectl exec -it test-runner pwsh
PowerShell 6.2.2
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/pscore6-docs
Type 'help' to get help.
(with a blinking cursor and script running without breaking and changing to the new path)...
-----------------------------------------------
</code></pre>
| <p><strong>The PowerShell <em>ISE</em> doesn't support <em>interactive</em> console applications</strong>, which notably means that you cannot start other <em>shells</em> from it.</p>
<p>The ISE tries to anticipate that problem by refusing to start well-known shells.
For instance, trying to start <code>cmd.exe</code> fails with the following error message:</p>
<pre class="lang-none prettyprint-override"><code>Cannot start "cmd". Interactive console applications are not supported.
To run the application, use the Start-Process cmdlet or use
"Start PowerShell.exe" from the File menu.
</code></pre>
<p>Note:</p>
<ul>
<li><code>pwsh.exe</code>, the CLI of PowerShell (Core) 7+, is <em>not</em> among the well-known shells, which indicates the <strong><a href="https://learn.microsoft.com/en-us/powershell/scripting/components/ise/introducing-the-windows-powershell-ise#support" rel="nofollow noreferrer">ISE's obsolescent status</a></strong>. It is <strong>being superseded by <a href="https://code.visualstudio.com/" rel="nofollow noreferrer">Visual Studio Code</a> with the <a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell" rel="nofollow noreferrer">PowerShell extension</a></strong>. Obsolesence aside, there are other pitfalls - see the bottom section of <a href="https://stackoverflow.com/a/57134096/45375">this answer</a>.</li>
</ul>
<p>However, it is impossible for the ISE to detect all cases where a given command (ultimately) invokes an interactive console application; when it doesn't, <strong>invocation of the command is attempted, resulting in obscure error messages or, as in your case, hangs</strong>.</p>
<p>As the error message shown for <code>cmd.exe</code> implies, <strong>you must run interactive console applications <em>outside</em> the ISE, in a regular console window.</strong></p>
<p><strong>From the ISE</strong> you can <strong>use <a href="https://learn.microsoft.com/powershell/module/microsoft.powershell.management/start-process" rel="nofollow noreferrer"><code>Start-Process</code></a> to launch a program in a new, regular console window</strong>; in the case at hand:</p>
<pre><code>Start-Process kubectl 'exec -it test-runner pwsh'
</code></pre>
<p><strong>Alternatively</strong>, run your PowerShell sessions outside the ISE to begin with, such as in a regular console window, <strong>Windows Terminal</strong>, or in <strong>Visual Studio Code's integrated terminal</strong>.</p>
|
<p>Our organisation runs Databricks on Azure that is used by data scientists & analysts primarily for Notebooks in order to do ad-hoc analysis and exploration.</p>
<p>We also run Kubernetes clusters for non spark-requiring ETL workflows.</p>
<p>We would like to use Delta Lakes as our storage layer where both Databricks and Kubernetes are able to read and write as first class citizens.<br />
Currently our Kubernetes jobs write parquets directly to blob store, with an additional job that spins up a Databricks cluster to load the parquet data into Databricks' table format. This is slow and expensive.</p>
<p>What I would like to do is write to Delta lake from Kubernetes python directly, as opposed to first dumping a parquet file to blob store and then triggering an additional Databricks job to load it into Delta lake format.<br />
Conversely, I'd like to also leverage Delta lake to query from Kubernetes.</p>
<hr />
<p>In short, how do I set up my Kubernetes python environment such that it has equal access to the existing Databricks Delta Lake for writes & queries?<br />
Code would be appreciated.</p>
| <p>You can <em>usually</em> write into the Delta table using the <a href="https://delta.io/" rel="nofollow noreferrer">Delta connector for Spark</a>. Just start a Spark job with the <a href="https://docs.delta.io/latest/quick-start.html#set-up-apache-spark-with-delta-lake" rel="nofollow noreferrer">necessary packages and configuration options</a>:</p>
<pre class="lang-sh prettyprint-override"><code>spark-submit --packages io.delta:delta-core_2.12:1.0.0 \
--conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension"
--conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog"
...
</code></pre>
<p>and write the same way as on Databricks:</p>
<pre class="lang-py prettyprint-override"><code>df.write.format("delta").mode("append").save("some_location")
</code></pre>
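<p>If you prefer to configure this from Python code running on Kubernetes rather than via <code>spark-submit</code> flags, the same options can be set on the <code>SparkSession</code> builder - a minimal sketch, where the storage path is a placeholder:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession

# these options must be set before the session (and its JVM) is created
spark = (SparkSession.builder
         .config("spark.jars.packages", "io.delta:delta-core_2.12:1.0.0")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.format("delta").mode("append").save("abfss://container@account.dfs.core.windows.net/delta/some_table")
</code></pre>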
<p>But by using the OSS version of Delta you may lose some of the optimizations that are available only on Databricks, like <a href="https://docs.databricks.com/delta/optimizations/file-mgmt.html#data-skipping" rel="nofollow noreferrer">Data Skipping</a>, etc. - in this case performance for the data written from Kubernetes <em>could be</em> lower (it really depends on how you access the data).</p>
<p>There could be cases where you can't write into a Delta table created by Databricks - when the table was written with a writer version higher than the one supported by the OSS Delta connector (see the <a href="https://github.com/delta-io/delta/blob/master/PROTOCOL.md#writer-version-requirements" rel="nofollow noreferrer">Delta Protocol documentation</a>). For example, this happens when you enable <a href="https://docs.databricks.com/delta/delta-change-data-feed.html" rel="nofollow noreferrer">Change Data Feed</a> on the Delta table, which performs additional actions when writing data.</p>
<p>Outside of Spark, there are plans to implement a so-called <a href="https://github.com/delta-io/connectors/issues/85" rel="nofollow noreferrer">Standalone writer</a> for JVM-based languages (in addition to the existing <a href="https://github.com/delta-io/connectors" rel="nofollow noreferrer">Standalone reader</a>). And there is the <a href="https://github.com/delta-io/delta-rs" rel="nofollow noreferrer">delta-rs project</a> implemented in Rust (with bindings for Python & Ruby) that should be able to write into a Delta table (but I haven't tested that myself).</p>
<p>Update 14.04.2022: Data Skipping is also available in OSS Delta, starting with version 1.2.0</p>
|
<p>I am using Apache Ignite .Net v2.7 in a Kubernetes environment. I use TcpDiscoverySharedFsIpFinder as a node discovery mechanism in the cluster.</p>
<p>I noticed a strange behaviour in a running cluster. The cluster starts up successfully and works fine for a couple of hours. Then a node goes offline and every other node writes a similar log:</p>
<pre><code>[20:03:44] Topology snapshot [ver=45, locNode=fd32d5d7, servers=3, clients=0, state=ACTIVE, CPUs=3, offheap=4.7GB, heap=1.5GB]
[20:03:44] Topology snapshot [ver=46, locNode=fd32d5d7, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=3.1GB, heap=1.0GB]
[20:03:44] Coordinator changed [prev=TcpDiscoveryNode [id=c954042e-5756-4fed-b82a-b8b1d79889ce, addrs=[10.0.0.28, 127.0.0.1], sockAddrs=[/10.0.0.28:47500, /127.0.0.1:47500], discPort=47500, order=36, intOrder=21, lastExchangeTime=1562009450041, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], cur=TcpDiscoveryNode [id=293902ba-b28d-4a44-8d5f-9cad23a9d7c4, addrs=[10.0.0.11, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.11:47500], discPort=47500, order=37, intOrder=22, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false]]
Jul 01, 2019 8:03:44 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to send message to remote node [node=TcpDiscoveryNode [id=c954042e-5756-4fed-b82a-b8b1d79889ce, addrs=[10.0.0.28, 127.0.0.1], sockAddrs=[/10.0.0.28:47500, /127.0.0.1:47500], discPort=47500, order=36, intOrder=21, lastExchangeTime=1562009450041, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtPartitionsSingleMessage [parts={-2100569601=GridDhtPartitionMap [moving=0, top=AffinityTopologyVersion [topVer=44, minorTopVer=1], updateSeq=107, size=100]}, partCntrs={-2100569601=CachePartitionPartialCountersMap {22=(0,32), 44=(0,31), 59=(0,31), 64=(0,35), 66=(0,31), 72=(0,31), 78=(0,35), 91=(0,35)}}, partsSizes={-2100569601={64=2, 66=2, 22=2, 72=2, 59=2, 91=2, 44=2, 78=2}}, partHistCntrs=null, err=null, client=false, compress=true, finishMsg=null, activeQryTrackers=null, super=GridDhtPartitionsAbstractMessage [exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=45, minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=f27d46f4-0700-4f54-b4b2-2c156152c49a, addrs=[10.0.0.42, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.42:47500], discPort=47500, order=42, intOrder=25, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], topVer=45, nodeId8=fd32d5d7, msg=Node failed: TcpDiscoveryNode [id=f27d46f4-0700-4f54-b4b2-2c156152c49a, addrs=[10.0.0.42, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.42:47500], discPort=47500, order=42, intOrder=25, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], type=NODE_FAILED, tstamp=1562011424092], nodeId=f27d46f4, evt=NODE_FAILED], lastVer=GridCacheVersion [topVer=173444804, order=1562009448132, nodeOrder=44], super=GridCacheMessage [msgId=69, depInfo=null, err=null, skipPrepare=false]]]]]
class org.apache.ignite.internal.cluster.ClusterTopologyCheckedException: Failed to send message (node left topology): TcpDiscoveryNode [id=c954042e-5756-4fed-b82a-b8b1d79889ce, addrs=[10.0.0.28, 127.0.0.1], sockAddrs=[/10.0.0.28:47500, /127.0.0.1:47500], discPort=47500, order=36, intOrder=21, lastExchangeTime=1562009450041, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false]
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3270)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2987)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2870)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2713)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2672)
at org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1656)
at org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1731)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1170)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendLocalPartitions(GridDhtPartitionsExchangeFuture.java:1880)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendPartitions(GridDhtPartitionsExchangeFuture.java:2011)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1501)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:806)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2667)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2539)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
[22:25:17] Topology snapshot [ver=47, locNode=fd32d5d7, servers=1, clients=0, state=ACTIVE, CPUs=1, offheap=1.6GB, heap=0.5GB]
[22:25:17] Coordinator changed [prev=TcpDiscoveryNode [id=293902ba-b28d-4a44-8d5f-9cad23a9d7c4, addrs=[10.0.0.11, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.11:47500], discPort=47500, order=37, intOrder=22, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], cur=TcpDiscoveryNode [id=fd32d5d7-720f-4c85-925e-01a845992df9, addrs=[10.0.0.60, 127.0.0.1], sockAddrs=[product-service-deployment-76bdb6fffb-bvjx9/10.0.0.60:47500, /127.0.0.1:47500], discPort=47500, order=44, intOrder=26, lastExchangeTime=1562019906752, loc=true, ver=2.7.0#20181130-sha1:256ae401, isClient=false]]
[22:28:29] Joining node doesn't have encryption data [node=adc204a0-3cc7-45da-b512-dd69b9a23674]
[22:28:30] Topology snapshot [ver=48, locNode=fd32d5d7, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=3.1GB, heap=1.0GB]
[22:31:42] Topology snapshot [ver=49, locNode=fd32d5d7, servers=1, clients=0, state=ACTIVE, CPUs=1, offheap=1.6GB, heap=0.5GB]
</code></pre>
<p>As you can see, the number of servers in the cluster steadily reduces until only one server remains in the cluster (Topology snapshot [.. servers=1..] on every node). If I understand the log correctly, the cluster collapses into a group of individual independent nodes where each node represents a separate cluster. I should emphasize that all other nodes (except for the crashed one) are up and running. </p>
<p>I guess that the failing node might be a cluster leader and when it dies, the cluster fails to elect a new leader and it disintegrates into a number of independent nodes. </p>
<p>Can you comment on this? Am I right in my guessing? Could you tell me what should I check to diagnose and resolve this problem? Thank you!</p>
| <p>Node segmentation usually means there are long pauses: either GC pauses, I/O pauses or network pauses.</p>
<p>You can try increasing <code>failureDetectionTimeout</code>, see if the problem goes away. Or, you can try getting rid of pauses.</p>
|
<p>As we know, by default HTTP 1.1 uses persistent connections, which are long-lived. Any service in <a href="https://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a>, for example one in ClusterIP mode, is an L4-based load balancer.</p>
<p>Suppose I have a service running a web server and backed by 3 pods; I am wondering whether HTTP/1.1 requests can be distributed across the 3 pods.</p>
<p>Could anybody help clarify it?</p>
| <p>This webpage perfectly address your question: <a href="https://learnk8s.io/kubernetes-long-lived-connections" rel="nofollow noreferrer">https://learnk8s.io/kubernetes-long-lived-connections</a></p>
<p>In the spirit of StackOverflow, let me summarize the webpage here:</p>
<ol>
<li><p>TLDR: Kubernetes doesn't load balance long-lived connections, and some Pods might receive more requests than others.</p>
</li>
<li><p>Kubernetes Services do not exist. There's no process listening on the IP address and port of a Service.</p>
</li>
<li><p>The Service IP address is used only as a placeholder that will be translated by iptables rules into the IP addresses of one of the destination pods using cleverly crafted randomization.</p>
</li>
<li><p>Any connections from clients (regardless from inside or outside cluster) are established directly with the Pods, hence for an HTTP 1.1 persistent connection, the connection will be maintained between the client to a specific Pod until it is closed by either side.</p>
</li>
<li><p>Thus, all requests that use a single persistent connection will be routed to a single Pod (that is selected by the iptables rule when establishing connection) and not load-balanced to the other Pods.</p>
</li>
</ol>
<p>Additional info:</p>
<p>Per RFC 2616 (<a href="https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.3" rel="nofollow noreferrer">https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.3</a>), any proxy server that sits between client and server must maintain separate HTTP 1.1 persistent connections: one from the client to itself and one from itself to the server.</p>
|
<p>Suppose that I have a <strong>Deployment</strong> which loads the env variables from a <strong>ConfigMap</strong>:</p>
<pre><code>spec.template.spec.containers[].envFrom.configMapRef
</code></pre>
<p>Now suppose that I change the data inside the ConfigMap.</p>
<p><strong>When exactly are the Pod env variables updated?</strong> (i.e. when the app running in a pod sees the new env variables)</p>
<p>For example (note that we are inside a Deployment):</p>
<ol>
<li>If a container inside the pod crashes and it is restarted does it read the new env or the old env?</li>
<li>If a pod is deleted (but not its ReplicaSet) and it is recreated, does it read the new env or the old env?</li>
</ol>
| <p>After some testing (with v1.20) I see that env variables in Pod template are updated immediately (it's just a reference to external values).</p>
<p>However, the container does not see the new env variables... You need to at least restart it (or otherwise delete and recreate the pod).</p>
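<p>A simple way to do that for every pod of a Deployment at once is a rolling restart, e.g.:</p>
<pre><code># new pods pick up the current ConfigMap values on startup
kubectl rollout restart deployment/<deployment-name>
</code></pre>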
|
<p>I need to set up a RabbitMQ cluster with queue mirroring enabled on all queues in Kubernetes.
The RabbitMQ plugin for Kubernetes peer discovery only provides a clustering mechanism based on peer discovery, as the plugin name indicates.
But how do I enable queue mirroring and achieve HA, so that if pods are restarted for any reason, or if I need to scale the RabbitMQ nodes, I can do it without any loss of messages?</p>
| <p>Add a definitions.json file into your ConfigMap and ensure that your pods mount the file (in /etc/rabbitmq). In that file, specify all exchanges/queues, and have a policy defined for mirroring that would be applied to those exchanges/queues.</p>
<p>It may be easier to manually set this up and export the definitions file from a running RabbitMQ node.</p>
<p>This way - your cluster is all configured when started.</p>
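<p>As an illustration (a fragment only, not a complete definitions file - the vhost, policy name and pattern are placeholders), a classic mirroring policy in definitions.json could look roughly like this:</p>
<pre><code>{
  "policies": [
    {
      "vhost": "/",
      "name": "ha-all",
      "pattern": ".*",
      "apply-to": "queues",
      "priority": 0,
      "definition": {
        "ha-mode": "all",
        "ha-sync-mode": "automatic"
      }
    }
  ]
}
</code></pre>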
|
<p>Sometimes I want to explore all deployments/daemonsets which mount a specific configmap/secret.</p>
<p>Is there any way can achieve this by <code>kubectl</code>?</p>
| <p>You need to have <code>jq</code> to do such complex queries.
Here you go:</p>
<pre><code>kubectl get -o json deploy,daemonset | jq '[.items[] | . as $parent | .spec.template.spec.volumes[]? | select(.configMap != null) | {kind:$parent.kind, name:$parent.metadata.name, configMap:.configMap.name}]'
</code></pre>
<p>The <code>jq</code> command de-constructed:</p>
<pre><code>[ // output as array
.items[] // iterate over root key 'items'
|
. as $parent // store the current entry as $parent to refer to it later
|
.spec.template.spec.volumes[]? // iterate over its volumes (the ? to prevent error if there is no volume
|
select(.configMap != null) // select only those volumes with configMap key
|
{kind:$parent.kind, name:$parent.metadata.name, configMap:.configMap.name} // now construct the output using $parent's kind and name and the configMap's name
]
</code></pre>
<p>Example output:</p>
<pre><code>[
{
"kind": "Deployment",
"name": "telemetry-agent",
"configMap": "telemetry-config-map"
},
{
"kind": "DaemonSet",
"name": "fluent-bit",
"configMap": "fluent-bit"
},
{
"kind": "DaemonSet",
"name": "telegraf",
"configMap": "telegraf"
}
]
</code></pre>
<p>N.B. If you want to find specific configMap, just replace the <code>select()</code> clause <code>.configMap != null</code> to <code>.configMap.name == "specific-configmap"</code>. Also, feel free to add <code>--all-namespaces</code> to the <code>kubectl get</code> command if you want to query from all namespaces</p>
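<p>For example, to list only the workloads mounting a specific ConfigMap:</p>
<pre><code>kubectl get -o json deploy,daemonset | jq '[.items[] | . as $parent | .spec.template.spec.volumes[]? | select(.configMap.name == "specific-configmap") | {kind:$parent.kind, name:$parent.metadata.name}]'
</code></pre>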
|
<p>I was testing Skaffold and it is a great tool for microservices development.
But I cannot find any tutorial on how to use it with Java. Is there any support for Maven builds?</p>
| <p>Skaffold now supports JIB out of the box which will be more efficient than multistage Dockerfile building! Check out the <a href="https://github.com/GoogleContainerTools/skaffold/tree/master/examples/jib" rel="nofollow noreferrer">JIB Maven example</a> in Skaffold.</p>
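<p>For orientation, a minimal <code>skaffold.yaml</code> using the Jib builder might look roughly like this (the apiVersion, image name and manifest path are placeholders - check the linked example for the exact schema of your Skaffold version):</p>
<pre><code>apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
  - image: my-java-app      # placeholder image name
    jib: {}                 # build with the Jib Maven/Gradle plugin instead of a Dockerfile
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml            # placeholder path to your Kubernetes manifests
</code></pre>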
|
<p>Some of our services in our K8s (EKS) environment use config files to drive functionality so we don't have to redeploy the whole image each time. Using the <code>kubectl cp</code> command allows us to copy new config files to the pod. So the command <code>kubectl cp settings.json myapi-76dc75f47c-lkvdm:/app/settings.json</code> copies the new <code>settings.json</code> file to the pod.</p>
<p>For fun I deleted the pod and k8s recreated it successfully with the old <code>settings.json</code> file. Anyone know a way of keeping the new <code>settings.json</code> file if the pod gets destroyed? Is there a way to update the deployment without redeploying the image?</p>
<p>Thanks, Tim</p>
| <p>Store the config file inside a ConfigMap and mount the ConfigMap into the Deployment's pod template (see the sketch after the list below). When the file needs updating, either:</p>
<ol>
<li>Re-create the ConfigMap (kubectl delete then kubectl create --from-file)</li>
<li>Or use the "dry-run kubectl create piped into kubectl replace" technique from <a href="https://stackoverflow.com/a/38216458/34586">https://stackoverflow.com/a/38216458/34586</a></li>
</ol>
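<p>As a rough sketch (names and paths are assumptions based on the question): create the ConfigMap with <code>kubectl create configmap myapi-settings --from-file=settings.json</code> and mount it in the pod template like this:</p>
<pre><code>spec:
  containers:
  - name: myapi
    volumeMounts:
    - name: settings
      mountPath: /app/settings.json
      subPath: settings.json       # mount only the single file at the expected path
  volumes:
  - name: settings
    configMap:
      name: myapi-settings
</code></pre>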
|
<p>I have just terminated an AWS K8S node.</p>
<p>K8S recreated a new one and installed new pods. Everything seems good so far.</p>
<p>But when I do:</p>
<pre><code>kubectl get po -A
</code></pre>
<p>I get: </p>
<pre><code>kube-system cluster-autoscaler-648b4df947-42hxv 0/1 Evicted 0 3m53s
kube-system cluster-autoscaler-648b4df947-45pcc 0/1 Evicted 0 47m
kube-system cluster-autoscaler-648b4df947-46w6h 0/1 Evicted 0 91m
kube-system cluster-autoscaler-648b4df947-4tlbl 0/1 Evicted 0 69m
kube-system cluster-autoscaler-648b4df947-52295 0/1 Evicted 0 3m54s
kube-system cluster-autoscaler-648b4df947-55wzb 0/1 Evicted 0 83m
kube-system cluster-autoscaler-648b4df947-57kv5 0/1 Evicted 0 107m
kube-system cluster-autoscaler-648b4df947-69rsl 0/1 Evicted 0 98m
kube-system cluster-autoscaler-648b4df947-6msx2 0/1 Evicted 0 11m
kube-system cluster-autoscaler-648b4df947-6pphs 0 18m
kube-system dns-controller-697f6d9457-zswm8 0/1 Evicted 0 54m
</code></pre>
<p>When I do:</p>
<pre><code>kubectl describe pod -n kube-system dns-controller-697f6d9457-zswm8
</code></pre>
<p>I get: </p>
<pre><code>➜ monitoring git:(master) ✗ kubectl describe pod -n kube-system dns-controller-697f6d9457-zswm8
Name: dns-controller-697f6d9457-zswm8
Namespace: kube-system
Priority: 0
Node: ip-172-20-57-13.eu-west-3.compute.internal/
Start Time: Mon, 07 Oct 2019 12:35:06 +0200
Labels: k8s-addon=dns-controller.addons.k8s.io
k8s-app=dns-controller
pod-template-hash=697f6d9457
version=v1.12.0
Annotations: scheduler.alpha.kubernetes.io/critical-pod:
Status: Failed
Reason: Evicted
Message: The node was low on resource: ephemeral-storage. Container dns-controller was using 48Ki, which exceeds its request of 0.
IP:
IPs: <none>
Controlled By: ReplicaSet/dns-controller-697f6d9457
Containers:
dns-controller:
Image: kope/dns-controller:1.12.0
Port: <none>
Host Port: <none>
Command:
/usr/bin/dns-controller
--watch-ingress=false
--dns=aws-route53
--zone=*/ZDOYTALGJJXCM
--zone=*/*
-v=2
Requests:
cpu: 50m
memory: 50Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from dns-controller-token-gvxxd (ro)
Volumes:
dns-controller-token-gvxxd:
Type: Secret (a volume populated by a Secret)
SecretName: dns-controller-token-gvxxd
Optional: false
QoS Class: Burstable
Node-Selectors: node-role.kubernetes.io/master=
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Evicted 59m kubelet, ip-172-20-57-13.eu-west-3.compute.internal The node was low on resource: ephemeral-storage. Container dns-controller was using 48Ki, which exceeds its request of 0.
Normal Killing 59m kubelet, ip-172-20-57-13.eu-west-3.compute.internal Killing container with id docker://dns-controller:Need to kill Pod
</code></pre>
<p>And: </p>
<pre><code>➜ monitoring git:(master) ✗ kubectl describe pod -n kube-system cluster-autoscaler-648b4df947-2zcrz
Name: cluster-autoscaler-648b4df947-2zcrz
Namespace: kube-system
Priority: 0
Node: ip-172-20-57-13.eu-west-3.compute.internal/
Start Time: Mon, 07 Oct 2019 13:26:26 +0200
Labels: app=cluster-autoscaler
k8s-addon=cluster-autoscaler.addons.k8s.io
pod-template-hash=648b4df947
Annotations: prometheus.io/port: 8085
prometheus.io/scrape: true
scheduler.alpha.kubernetes.io/tolerations: [{"key":"dedicated", "value":"master"}]
Status: Failed
Reason: Evicted
Message: Pod The node was low on resource: [DiskPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/cluster-autoscaler-648b4df947
Containers:
cluster-autoscaler:
Image: gcr.io/google-containers/cluster-autoscaler:v1.15.1
Port: <none>
Host Port: <none>
Command:
./cluster-autoscaler
--v=4
--stderrthreshold=info
--cloud-provider=aws
--skip-nodes-with-local-storage=false
--nodes=0:1:pamela-nodes.k8s-prod.sunchain.fr
Limits:
cpu: 100m
memory: 300Mi
Requests:
cpu: 100m
memory: 300Mi
Liveness: http-get http://:8085/health-check delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8085/health-check delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
AWS_REGION: eu-west-3
Mounts:
/etc/ssl/certs/ca-certificates.crt from ssl-certs (ro)
/var/run/secrets/kubernetes.io/serviceaccount from cluster-autoscaler-token-hld2m (ro)
Volumes:
ssl-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs/ca-certificates.crt
HostPathType:
cluster-autoscaler-token-hld2m:
Type: Secret (a volume populated by a Secret)
SecretName: cluster-autoscaler-token-hld2m
Optional: false
QoS Class: Guaranteed
Node-Selectors: kubernetes.io/role=master
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned kube-system/cluster-autoscaler-648b4df947-2zcrz to ip-172-20-57-13.eu-west-3.compute.internal
Warning Evicted 11m kubelet, ip-172-20-57-13.eu-west-3.compute.internal The node was low on resource: [DiskPressure].
</code></pre>
<p>It seems to be a resource issue. The weird thing is that before I killed my EC2 instance, I didn't have this issue.</p>
<p>Why is it happening and what should I do? Is it mandatory to add more resources?</p>
<pre><code>➜ scripts kubectl describe node ip-172-20-57-13.eu-west-3.compute.internal
Name: ip-172-20-57-13.eu-west-3.compute.internal
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t3.small
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=eu-west-3
failure-domain.beta.kubernetes.io/zone=eu-west-3a
kops.k8s.io/instancegroup=master-eu-west-3a
kubernetes.io/hostname=ip-172-20-57-13.eu-west-3.compute.internal
kubernetes.io/role=master
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 28 Aug 2019 09:38:09 +0200
Taints: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Wed, 28 Aug 2019 09:38:36 +0200 Wed, 28 Aug 2019 09:38:36 +0200 RouteCreated RouteController created a route
OutOfDisk False Mon, 07 Oct 2019 14:14:32 +0200 Wed, 28 Aug 2019 09:38:09 +0200 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Mon, 07 Oct 2019 14:14:32 +0200 Wed, 28 Aug 2019 09:38:09 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Mon, 07 Oct 2019 14:14:32 +0200 Mon, 07 Oct 2019 14:11:02 +0200 KubeletHasDiskPressure kubelet has disk pressure
PIDPressure False Mon, 07 Oct 2019 14:14:32 +0200 Wed, 28 Aug 2019 09:38:09 +0200 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Oct 2019 14:14:32 +0200 Wed, 28 Aug 2019 09:38:35 +0200 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.20.57.13
ExternalIP: 35.180.187.101
InternalDNS: ip-172-20-57-13.eu-west-3.compute.internal
Hostname: ip-172-20-57-13.eu-west-3.compute.internal
ExternalDNS: ec2-35-180-187-101.eu-west-3.compute.amazonaws.com
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 2
ephemeral-storage: 7797156Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2013540Ki
pods: 110
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 2
ephemeral-storage: 7185858958
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1911140Ki
pods: 110
System Info:
Machine ID: ec2b3aa5df0e3ad288d210f309565f06
System UUID: EC2B3AA5-DF0E-3AD2-88D2-10F309565F06
Boot ID: f9d5417b-eba9-4544-9710-a25d01247b46
Kernel Version: 4.9.0-9-amd64
OS Image: Debian GNU/Linux 9 (stretch)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.3
Kubelet Version: v1.12.10
Kube-Proxy Version: v1.12.10
PodCIDR: 100.96.1.0/24
ProviderID: aws:///eu-west-3a/i-03bf1b26313679d65
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-manager-events-ip-172-20-57-13.eu-west-3.compute.internal 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 40d
kube-system etcd-manager-main-ip-172-20-57-13.eu-west-3.compute.internal 200m (10%) 0 (0%) 100Mi (5%) 0 (0%) 40d
kube-system kube-apiserver-ip-172-20-57-13.eu-west-3.compute.internal 150m (7%) 0 (0%) 0 (0%) 0 (0%) 40d
kube-system kube-controller-manager-ip-172-20-57-13.eu-west-3.compute.internal 100m (5%) 0 (0%) 0 (0%) 0 (0%) 40d
kube-system kube-proxy-ip-172-20-57-13.eu-west-3.compute.internal 100m (5%) 0 (0%) 0 (0%) 0 (0%) 40d
kube-system kube-scheduler-ip-172-20-57-13.eu-west-3.compute.internal 100m (5%) 0 (0%) 0 (0%) 0 (0%) 40d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 200Mi (10%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasNoDiskPressure 55m (x324 over 40d) kubelet, ip-172-20-57-13.eu-west-3.compute.internal Node ip-172-20-57-13.eu-west-3.compute.internal status is now: NodeHasNoDiskPressure
Warning EvictionThresholdMet 10m (x1809 over 16d) kubelet, ip-172-20-57-13.eu-west-3.compute.internal Attempting to reclaim ephemeral-storage
Warning ImageGCFailed 4m30s (x6003 over 23d) kubelet, ip-172-20-57-13.eu-west-3.compute.internal (combined from similar events): wanted to free 652348620 bytes, but freed 0 bytes space with errors in image deletion: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete dd37681076e1 (cannot be forced) - image is being used by running container b1800146af29
</code></pre>
<p>I think a better command to debug it is:</p>
<pre><code>devops git:(master) ✗ kubectl get events --sort-by=.metadata.creationTimestamp -o wide
LAST SEEN TYPE REASON KIND SOURCE MESSAGE SUBOBJECT FIRST SEEN COUNT NAME
10m Warning ImageGCFailed Node kubelet, ip-172-20-57-13.eu-west-3.compute.internal (combined from similar events): wanted to free 653307084 bytes, but freed 0 bytes space with errors in image deletion: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete dd37681076e1 (cannot be forced) - image is being used by running container b1800146af29 23d 6004 ip-172-20-57-13.eu-west-3.compute.internal.15c4124e15eb1d33
2m59s Warning ImageGCFailed Node kubelet, ip-172-20-36-135.eu-west-3.compute.internal (combined from similar events): failed to garbage collect required amount of images. Wanted to free 639524044 bytes, but freed 0 bytes 7d9h 2089 ip-172-20-36-135.eu-west-3.compute.internal.15c916d24afe2c25
4m59s Warning ImageGCFailed Node kubelet, ip-172-20-33-81.eu-west-3.compute.internal (combined from similar events): failed to garbage collect required amount of images. Wanted to free 458296524 bytes, but freed 0 bytes 4d14h 1183 ip-172-20-33-81.eu-west-3.compute.internal.15c9f3fe4e1525ec
6m43s Warning EvictionThresholdMet Node kubelet, ip-172-20-57-13.eu-west-3.compute.internal Attempting to reclaim ephemeral-storage 16d 1841 ip-172-20-57-13.eu-west-3.compute.internal.15c66e349b761219
41s Normal NodeHasNoDiskPressure Node kubelet, ip-172-20-57-13.eu-west-3.compute.internal Node ip-172-20-57-13.eu-west-3.compute.internal status is now: NodeHasNoDiskPressure 40d 333 ip-172-20-57-13.eu-west-3.compute.internal.15bf05cec37981b6
</code></pre>
<p>Now <code>df -h</code></p>
<pre><code>admin@ip-172-20-57-13:/var/log$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 972M 0 972M 0% /dev
tmpfs 197M 2.3M 195M 2% /run
/dev/nvme0n1p2 7.5G 6.4G 707M 91% /
tmpfs 984M 0 984M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 984M 0 984M 0% /sys/fs/cgroup
/dev/nvme1n1 20G 430M 20G 3% /mnt/master-vol-09618123eb79d92c8
/dev/nvme2n1 20G 229M 20G 2% /mnt/master-vol-05c9684f0edcbd876
</code></pre>
| <p>It looks like your node/master is running low on storage - I see only about 1GB of ephemeral storage available.</p>
<p>You should free up some space on the node and master. That should get rid of your problem.</p>
|
<p>Just downloaded Lens 5.25. Windows 10, Docker Desktop 4/v20.10.8, Kubernetes v1.21.4.</p>
<p>I run <code>Docker Desktop</code>, wait until <code>Kubernetes</code> is ready and then open <code>Lens</code>.</p>
<p>It shows <code>docker-desktop</code> as being disconnected<a href="https://i.stack.imgur.com/8osMS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8osMS.jpg" alt="enter image description here" /></a>.</p>
<p>There is no way to get past this screen. How do I open the <code>docker-desktop</code> cluster?</p>
| <p>Click the cluster to open its information panel, then select the connect (chainlink) icon in the toolbar.</p>
<p><a href="https://i.stack.imgur.com/VeJEc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VeJEc.png" alt="enter image description here" /></a></p>
<p>You can then click the cluster's icon in the same panel to open it.</p>
|
<p>Currently I have an OKD/openshift template which exposes port 1883 on a specific container:</p>
<pre><code>ports:
- name: 1883-tcp
port: 1883
containerPort: 1883
protocol: TCP
hostPort: ${MQTT-PORT}
</code></pre>
<p>Is it possible to have an if/else clause depending on parameters? For example:</p>
<pre><code>ports:
- name: 1883-tcp
port: 1883
containerPort: 1883
protocol: TCP
{{ if ${MQTT-PORT} != 0 }}
hostPort: ${MQTT-PORT}
{{ /if }}
</code></pre>
<p>By doing this, I can have the same template in all my environments (e.g. development/testing/production), but based on the parameters given at creation, some ports are available for debugging or testing without having to forward them each time using the oc command.</p>
| <p>You can't do this kind of conditional processing at the template level. </p>
<p>But, to achieve your desired outcome, you can do one of 2 things.</p>
<p><strong>Option 1</strong>
Pass all the parameters required for the condition at the template level, like <code>MQTT-PORT</code>, and map the correct port number when building your service.
This might be the correct approach, as templates are designed to be as logic-less as possible; you do all the decision making at a much lower level.</p>
<p><strong>Option 2</strong>
If you can relax the "same template" constraint, we could have 2 flavors of the same template: one with the specific port and another with the parameterized port. The only issue with this option is having to change 2 templates every time you change your app/service specs, which violates the <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow noreferrer">DRY principle</a>.</p>
<p><strong>Update</strong></p>
<p>Using Helm with OpenShift might be the best option here. You can templatize your artifacts using Helm's conditionals and deploy a Helm app to OpenShift. Here's a <a href="https://github.com/sclorg/nodejs-ex/tree/master/helm/nodejs/templates" rel="nofollow noreferrer">repository</a> which has a Helm chart tailored for OpenShift.
Also, you need to point to the right namespace for Tiller to use Helm with OpenShift. You can find more details about it <a href="https://blog.openshift.com/getting-started-helm-openshift/" rel="nofollow noreferrer">here</a>.</p>
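<p>For illustration, here is the port snippet from the question expressed as a Helm template conditional (the value name <code>mqttHostPort</code> is an assumption):</p>
<pre><code>ports:
  - name: 1883-tcp
    port: 1883
    containerPort: 1883
    protocol: TCP
    {{- if .Values.mqttHostPort }}
    hostPort: {{ .Values.mqttHostPort }}
    {{- end }}
</code></pre>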
|
<p>We deployed a GridGain cluster in a Google Kubernetes cluster and it is working properly with persistence enabled. We need to enable auto scaling. At scale up there are no errors, but at scale down we get "Partition loss". We need to recover these lost partitions using the control.sh script, but that is not always possible.</p>
<p>What is the solution for this? Is scale down not supported for GridGain nodes?</p>
| <p>Usually you should have a backup factor sufficient to offset lost nodes (for example, with backups=2 you can lose at most 2 nodes at the same time).</p>
<p>Coupled with baselineAutoAdjust set to a reasonable value, this should provide the scale-down capability.</p>
<p>Scale down with data loss and persistence enabled will indeed require resetting lost partitions.</p>
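<p>For reference, the backup count is part of the cache configuration - a minimal Spring XML sketch (the cache name is a placeholder):</p>
<pre><code><bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- keep 2 extra copies of each partition so losing up to 2 nodes does not lose data -->
    <property name="name" value="myCache"/>
    <property name="backups" value="2"/>
</bean>
</code></pre>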
|
<p>We are getting warnings in our production logs for .Net Core Web API services that are running in Kubernetes.</p>
<blockquote>
<p>Storing keys in a directory '{path}' that may not be persisted outside
of the container. Protected data will be unavailable when container is
destroyed.","@l":"Warning","path":"/root/.aspnet/DataProtection-Keys",SourceContext:"Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository"</p>
</blockquote>
<p>We do not explicitly call services.AddDataProtection() in StartUp, but it seems that we are getting the warnings for services that use .Net Core 3.1 and .Net 5 (not .Net Core 2.1)
and that also have the following in StartUp:</p>
<pre><code>services.AddAuthentication Or
services.AddMvc()
</code></pre>
<p>(Maybe there are other conditions that I am missing.)</p>
<p>I am not able to identify exactly where it's called, but locally I can see the related DLLs that are loaded before XmlKeyManager accesses DataProtection-Keys:</p>
<pre><code> Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\Microsoft.Win32.Registry.dll'.
Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\System.Xml.XDocument.dll'.
Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\System.Private.Xml.Linq.dll'.
Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\System.Private.Xml.dll'.
Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\System.Resources.ResourceManager.dll'.
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager:
Using 'C:\Users\..\AppData\Local\ASP.NET\DataProtection-Keys' as key repository and Windows DPAPI to encrypt keys at rest.
</code></pre>
<p>Is it safe to ignore such warnings, considering that we do not use DataProtection explicitly, do not use authentication encryption for long periods and during testing we haven't seen any issues?</p>
<p>Or the error means that if different pods will be involved for the same client, authentication may fail and it will be better to do something that is suggested in <a href="https://stackoverflow.com/questions/61452280/how-to-store-data-protection-keys-outside-of-the-docker-container">How to store Data Protection-keys outside of the docker container?</a>?</p>
| <p>After analyzing how our applications use protected data (authentication cookies, CSRF tokens, etc.), our team
decided that “Protected data will be unavailable when container is destroyed." is just a warning that would have no customer impact, so we ignore it.</p>
<p>But <a href="https://www.howtogeek.com/693183/what-does-ymmv-mean-and-how-do-you-use-it/" rel="nofollow noreferrer">YMMV</a>.</p>
|
<p>I am trying to run the tutorial at <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/</a> locally on my Ubuntu 18 machine.</p>
<pre><code>$ minikube start
😄 minikube v1.0.1 on linux (amd64)
🤹 Downloading Kubernetes v1.14.1 images in the background ...
🔥 Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.39.247
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.3-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.14.1
💾 Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🚀 Launching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑 Configuring cluster permissions ...
🤔 Verifying component health .....
💗 kubectl is now configured to use "minikube"
🏄 Done! Thank you for using minikube!
</code></pre>
<p>So far, so good.
Next, I try to run </p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Similar response for </p>
<pre><code>$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>As also,</p>
<pre><code>$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>What am I missing?</p>
| <p>Ok so I was able to find the answer myself.</p>
<p>~/.kube/config was present before so I removed it first.</p>
<p>Next, when I ran the commands again, a config file was created again, and it specifies the port as 8443.</p>
<p>So, make sure there is no stale ~/.kube/config file present before starting minikube.</p>
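<p>In short, something along these lines:</p>
<pre><code># remove the stale kubeconfig and let minikube regenerate it
rm ~/.kube/config
minikube start
kubectl config current-context   # should print "minikube"
kubectl get nodes
</code></pre>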
|
<p>We are running Grafana on EKS Kubernetes v1.21 as a Helm deployment behind a Traefik reverse proxy.</p>
<p>Grafana version: <code>v9.0.3</code></p>
<p>Recently, Grafana has been posting this same log message every minute without fail:</p>
<pre><code>2022-08-24 15:52:47
logger=context traceID=00000000000000000000000000000000 userId=0 orgId=0 uname= t=2022-08-24T13:52:47.293094029Z level=info msg="Request Completed" method=GET path=/api/live/ws status=401 remote_addr=10.1.3.153 time_ms=4 duration=4.609805ms size=27 referer= traceID=00000000000000000000000000000000
2022-08-24 15:52:47
logger=context traceID=00000000000000000000000000000000 t=2022-08-24T13:52:47.290478899Z level=error msg="Failed to look up user based on cookie" error="user token not found"
</code></pre>
<p>I can't confirm whether these two log messages are related but I believe they are.</p>
<p>I cannot find any user with id <code>0</code>.</p>
<p>Another log error I see occasionally is</p>
<pre><code>2022-08-24 15:43:43
logger=ngalert t=2022-08-24T13:43:43.020553296Z level=error msg="unable to fetch orgIds" msg="context canceled"
</code></pre>
<p>What I can see, is that the <code>remote_addr</code> refers to the node in our cluster that Grafana is deployed on.</p>
<p>Can anyone explain why this is continually hitting the endpoint shown?</p>
<p>Thanks!</p>
| <p>The Grafana Live feature is real-time messaging that uses websockets. It is used in Grafana for notifying on events like someone else is editing the same dashboard as you. It can also be used for streaming data directly to Grafana. <a href="https://grafana.com/docs/grafana/latest/setup-grafana/set-up-grafana-live/" rel="nofollow noreferrer">Docs here</a></p>
<p>You can either turn off Grafana Live or configure your proxy to allow websockets.</p>
<ul>
<li><a href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#max_connections" rel="nofollow noreferrer">Turn it off by setting config option <code>max_connections</code> to zero</a></li>
<li><a href="https://grafana.com/tutorials/run-grafana-behind-a-proxy/" rel="nofollow noreferrer">Instructions on how to configure the Traefik proxy with Grafana</a></li>
<li><a href="https://grafana.com/docs/grafana/latest/setup-grafana/set-up-grafana-live/#configure-grafana-live" rel="nofollow noreferrer">Setup guide for Grafana Live</a></li>
</ul>
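<p>For example, to turn Grafana Live off entirely, set this in <code>grafana.ini</code>:</p>
<pre><code>[live]
# 0 disables Grafana Live and the /api/live/ws endpoint
max_connections = 0
</code></pre>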
|
<p>I have a demo golang program to list Pods without a particular label. I want to modify it so it also can add a label to each pod.</p>
<p>(I'm using the AWS hosted Kubernetes service, EKS so there's some boilerplate code specific to EKS )</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"fmt"
eksauth "github.com/chankh/eksutil/pkg/auth"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func main() {
cfg := &eksauth.ClusterConfig{ClusterName: "my_cluster_name"}
clientset, _ := eksauth.NewAuthClient(cfg)
api := clientset.CoreV1()
// Get all pods from all namespaces without the "sent_alert_emailed" label.
pods, _ := api.Pods("").List(metav1.ListOptions{LabelSelector: "!sent_alert_emailed"})
for i, pod := range pods.Items {
fmt.Println(fmt.Sprintf("[%2d] %s, Phase: %s, Created: %s, HostIP: %s", i, pod.GetName(), string(pod.Status.Phase), pod.GetCreationTimestamp(), string(pod.Status.HostIP)))
// Here I want to add a label to this pod
// e.g. something like:
// pod.addLabel("sent_alert_emailed=true")
}
}
</code></pre>
<p>I know kubectl can be used to add labels, e.g.</p>
<pre><code>kubectl label pod my-pod new-label=awesome # Add a Label
kubectl label pod my-pod new-label=awesomer --overwrite # Change a existing label
</code></pre>
<p>I was hoping there would be an equivalent method via the go-client?</p>
| <p>I'm hoping there is a more elegant way, but until I learn about it, I managed to add a label to a Pod using <code>Patch</code>. Here is my demo code (again it has some EKS boilerplate stuff you may be able to ignore): </p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"fmt"
"encoding/json"
"time"
"k8s.io/apimachinery/pkg/types"
eksauth "github.com/chankh/eksutil/pkg/auth"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type patchStringValue struct {
Op string `json:"op"`
Path string `json:"path"`
Value string `json:"value"`
}
func main() {
var updateErr error
cfg := &eksauth.ClusterConfig{ClusterName: "my cluster name"}
clientset, _ := eksauth.NewAuthClient(cfg)
api := clientset.CoreV1()
// Get all pods from all namespaces without the "sent_alert_emailed" label.
pods, _ := api.Pods("").List(metav1.ListOptions{LabelSelector: "!sent_alert_emailed"})
for i, pod := range pods.Items {
fmt.Println(fmt.Sprintf("[%2d] %s, Phase: %s, Created: %s, HostIP: %s", i, pod.GetName(), string(pod.Status.Phase), pod.GetCreationTimestamp(), string(pod.Status.HostIP)))
payload := []patchStringValue{{
Op: "replace",
Path: "/metadata/labels/sent_alert_emailed",
Value: time.Now().Format("2006-01-02_15.04.05"),
}}
payloadBytes, _ := json.Marshal(payload)
_, updateErr = api.Pods(pod.GetNamespace()).Patch(pod.GetName(), types.JSONPatchType, payloadBytes)
if updateErr == nil {
fmt.Println(fmt.Sprintf("Pod %s labelled successfully.", pod.GetName()))
} else {
fmt.Println(updateErr)
}
}
}
</code></pre>
|
<p>I'm using the Azure AKS addon for HTTP application routing as described <a href="https://learn.microsoft.com/en-us/azure/aks/http-application-routing" rel="noreferrer">here</a>. I deployed it using Terraform and it generally works:</p>
<pre><code>resource "kubernetes_ingress" "ingress" {
metadata {
name = "nurse-ingress"
namespace = kubernetes_namespace.nurse.metadata[0].name
annotations = {
"kubernetes.io/ingress.class" = "addon-http-application-routing"
"nginx.ingress.kubernetes.io/rewrite-target" = "/$1"
}
}
wait_for_load_balancer = true
spec {
backend {
service_name = "nurse-service"
service_port = 80
}
rule {
host = "nurseapp.${azurerm_kubernetes_cluster.main.addon_profile[0].http_application_routing[0].http_application_routing_zone_name}"
http {
path {
backend {
service_name = kubernetes_service.app.metadata[0].name
service_port = 80
}
path = "/app/(.*)"
}
path {
backend {
service_name = kubernetes_service.nurse.metadata[0].name
service_port = 80
}
path = "/nurse/(.*)"
}
}
}
}
}
</code></pre>
<p>However, it only works on the default backend (i.e. path=/). When I call the URL on /nurse or /app it does not work since the rewrite-target <code>/$1</code> does not seem to be taken into account. I will just get a 404 - since the nurse-service itself does expect calls on /foo and not on /nurse/foo</p>
<p>Should this be possible to configure to begin with and if so, any idea where my mistake is?</p>
<p><a href="https://i.stack.imgur.com/PPtzX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PPtzX.png" alt="enter image description here" /></a></p>
| <p>The following will rewrite the path so that requests for <code>/sample/suffix</code> will be rewritten as <code>/suffix</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: sample-ingress
annotations:
kubernetes.io/ingress.class: addon-http-application-routing
nginx.ingress.kubernetes.io/rewrite-target: $2
spec:
rules:
- host: sample.YOUR-ID.westeurope.aksapp.io
http:
paths:
- path: /sample/
pathType: Prefix
backend:
service:
name: sample
port:
number: 80
</code></pre>
|
<p>I've created a label with:</p>
<p><code>kubectl label pods <pod id> color=green</code></p>
<p>but removing it using:</p>
<p><code>kubectl label pods bar -color</code></p>
<p>gives:</p>
<p><code>unknown shorthand flag: 'c' in -color</code></p>
<p>Any suggestions?</p>
| <p>The dash goes at the end of the label name to remove it, per <code>kubectl help label</code>:</p>
<pre><code># Update pod 'foo' by removing a label named 'bar' if it exists.
# Does not require the --overwrite flag.
kubectl label pods foo bar-
</code></pre>
<p>So try <code>kubectl label pods bar color-</code>.</p>
|
<p>I have a cluster in GKE and everything seems to be working. If I forward the ports I am able to see that the containers are working.</p>
<p>I am not able to set up a domain I own on Namecheap.</p>
<p>These are the steps I followed</p>
<ol>
<li>In Namecheap I setup a custom dns for the domain</li>
</ol>
<pre><code>ns-cloud-c1.googledomains.com.
ns-cloud-c2.googledomains.com.
ns-cloud-c3.googledomains.com.
ns-cloud-c3.googledomains.com.
</code></pre>
<p>I used the letter <code>c</code> because the cluster is in a <code>c</code> zone (I am not sure if this is right)</p>
<ol start="2">
<li>Because I am trying to setup as secure website I installed nginx ingress controller</li>
</ol>
<pre><code>kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
</code></pre>
<p>and</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<ol start="3">
<li>I applied the <code>issuer.yml</code></li>
</ol>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
namespace: cert-manager
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: [email protected]
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-prod
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<ol start="4">
<li>I applied ingress</li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
namespace: staging
name: ingress
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
tls:
- hosts:
- www.stagingmyappsrl.com
- api.stagingmyappsrl.com
secretName: stagingmyappsrl-tls
rules:
- host: wwwstaging.myappsrl.com
http:
paths:
- backend:
serviceName: myappcatalogo-svc
servicePort: 80
- host: apistaging.stagingmyappsrl.com
http:
paths:
- backend:
serviceName: myappnodeapi-svc
servicePort: 80
</code></pre>
<p>It seems that everything is created and working when I check the GKE website, but when I try to access the domain I get <code>DNS_PROBE_FINISHED_NXDOMAIN</code>.</p>
<p>I am not sure if I am missing a step or if I am setting up something wrong.</p>
| <p>GKE should have created a cloud load balancer for your ingress service. Depending on your config, the LB can be internal or external. You can get your LB information by looking at the services:</p>
<pre><code>kubectl get svc -n ingress-nginx
</code></pre>
<p>Create a CNAME record in your DNS (namecheap) with the LB address and that should do it. Alternatively, if you have an IP address of the LB, create an A record in your DNS.</p>
<p>Cert-manager will create an ingress resource to resolve <code>HTTPS01</code> challenges. Make sure your ingresses are reachable over the Internet for the <code>HTTPS01</code> challenges to work. Alternatively, you could explore other solvers.</p>
|
<p>Does Kubernetes support connection draining?</p>
<p>For example, my deployment rolls out a new version of my web app container.
In connection draining mode Kubernetes should spin up a new container from the new image and route all new traffic coming to my service to this new instance. The old instance should remain alive long enough to send a response for existing connections.</p>
| <p>Kubernetes <strong>does</strong> support connection draining, but how it happens is controlled by the Pods, and is called <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="noreferrer">graceful termination</a>.</p>
<h2>Graceful Termination</h2>
<p>Let's take an example of a set of Pods serving traffic through a Service. This is a simplified example, the full details can be found in the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="noreferrer">documentation</a>.</p>
<ol>
<li>The system (or a user) notifies the API that the Pod needs to stop.</li>
<li>The Pod is set to the <code>Terminating</code> state. This removes it from a Service serving traffic. Existing connections are maintained, but new connections should stop as soon as the load balancers recognize the change.</li>
<li>The system sends SIGTERM to all containers in the Pod.</li>
<li>The system waits <code>terminationGracePeriodSeconds</code> (default 30s), or until the Pod completes on it's own.</li>
<li>If containers in the Pod are still running, they are sent SIGKILL and terminated immediately. At this point the Pod is forcefully terminated if it is still running.</li>
</ol>
<p>This not only covers the simple termination case, but the exact same process is used in rolling update deployments, each Pod is terminated in the exact same way and is given the opportunity to clean up.</p>
<h2>Using Graceful Termination For Connection Draining</h2>
<p><strong>If you do not handle SIGTERM in your app, your Pods will immediately terminate</strong>, since the default action of SIGTERM is to terminate the process immediately, and the grace period is not used since the Pod exits on its own.</p>
<p>If you need "connection draining", this is the basic way you would implement it in Kubernetes:</p>
<ol>
<li>Handle the SIGTERM signal, and clean up your connections in a way that your application decides. This may simply be "do nothing" to allow in-flight connections to clear out. Long running connections may be terminated in a way that is (more) friendly to client applications.</li>
<li>Set the <code>terminationGracePeriodSeconds</code> long enough for your Pod to clean up after itself (see the sketch after this list).</li>
</ol>
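<p>As a small illustration, a pod template that gives the app time to drain might look like this (the 60s grace period and the <code>preStop</code> sleep are assumptions, not required values):</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 60    # how long Kubernetes waits before SIGKILL
  containers:
  - name: web
    image: my-web-app:latest           # placeholder image
    lifecycle:
      preStop:
        exec:
          # brief pause so load balancers stop routing new traffic
          # before the app receives SIGTERM
          command: ["sleep", "10"]
</code></pre>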
|
<p>How do I automatically restart Kubernetes pods and pods associated with deployments when their configmap is changed/updated?</p>
<hr>
<p>I know there's been talk about the ability to automatically restart pods when a config map changes, but to my knowledge this is not yet available in Kubernetes 1.2.</p>
<p>So what (I think) I'd like to do is a "rolling restart" of the <a href="http://kubernetes.io/docs/user-guide/deployments/" rel="noreferrer">deployment</a> resource associated with the pods consuming the config map. Is it possible, and if so how, to force a rolling restart of a deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it or is there a better option?</p>
| <p>The current best solution to this problem (referenced deep in <a href="https://github.com/kubernetes/kubernetes/issues/22368" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/22368</a> linked in the sibling answer) is to use Deployments, and consider your ConfigMaps to be immutable.</p>
<p>When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.</p>
<p>Not quite as quick as just editing the ConfigMap in place, but much safer.</p>
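<p>As a rough sketch of the workflow (the names below are illustrative, not from the question):</p>
<pre><code># Create a new, versioned ConfigMap instead of editing the old one in place.
kubectl create configmap app-config-v2 --from-file=config.properties

# Point the Deployment at app-config-v2 (e.g. in its volume or envFrom section)
# and re-apply; the Deployment then performs a normal rolling update.
kubectl apply -f deployment.yaml

# Once the rollout succeeds, the old ConfigMap can be deleted.
kubectl delete configmap app-config-v1
</code></pre>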
|
<p>I'm deploying a Spring Boot app in minikube that connects to a database running on the host. Following the 12 factor app recommendations I use environment variables for the necessary configuration:</p>
<pre><code>SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.postgresql.Driver
SPRING_DATASOURCE_PASSWORD=...
SPRING_DATASOURCE_URL=jdbc:postgresql://<HOST_IP_FROM_K8S>:5432/myservice
SPRING_DATASOURCE_USERNAME=...
</code></pre>
<p>The kubernetes docs only show how to set environment variables in the service and deployment .yaml files which I don't want to do. Is there a way to pass environment variables on the command line for minikube or kubectl when I create the deployment? (In Docker I do this with -e.)</p>
<p>Note that the environment variables have to be set before starting the app or it crashes.</p>
| <p>Following Ansil's comment above I used <code>configmap</code> and <code>secret</code> to pass the configuration like this:</p>
<pre><code>kubectl create secret generic springdatasourcepassword --from-literal=SPRING_DATASOURCE_PASSWORD=postgres
kubectl create secret generic springdatasourceusername --from-literal=SPRING_DATASOURCE_USERNAME=postgres
kubectl create configmap springdatasourcedriverclassname --from-literal=SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.postgresql.Driver
kubectl create configmap springdatasourceurl --from-literal=SPRING_DATASOURCE_URL=jdbc:postgresql://172.18.0.1:5432/bookservice
</code></pre>
<p>These are referenced in the deployment.yaml file like this:</p>
<pre><code>env:
- name: GET_HOSTS_FROM
value: dns
- name: SPRING_DATASOURCE_DRIVER_CLASS_NAME
valueFrom:
configMapKeyRef:
name: springdatasourcedriverclassname
key: SPRING_DATASOURCE_DRIVER_CLASS_NAME
- name: SPRING_DATASOURCE_URL
valueFrom:
configMapKeyRef:
name: springdatasourceurl
key: SPRING_DATASOURCE_URL
- name: SPRING_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: springdatasourcepassword
key: SPRING_DATASOURCE_PASSWORD
- name: SPRING_DATASOURCE_USERNAME
valueFrom:
secretKeyRef:
name: springdatasourceusername
key: SPRING_DATASOURCE_USERNAME
</code></pre>
<p>A full explanation can be found <a href="https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have a PowerShell script that takes my variable and delivers it to my helm upgrade command:</p>
<pre><code>param
(
[Parameter(Mandatory = $false)]
$HELM_SET
)
helm upgrade --install myrelease -n dev my_service.tgz $HELM_SET
</code></pre>
<p>My HELM_SET var contains:<br />
<code>--set config.vali=x --set config.spring=v1</code></p>
<p>But helm said after the upgrade:</p>
<pre><code>Error: unknown flag: --set config.vali
helm.go:88: [debug] unknown flag: --set config.vali
</code></pre>
<p>If I add <code>--set</code> to the command:
<code>helm upgrade --install myrelease -n dev my_service.tgz --set $HELM_SET</code>
and my HELM_SET var now contains:
<code>config.vali=x --set config.spring=v1</code></p>
<p>then after the upgrade I find that my <code>config.vali</code> value is <code>x --set config.spring=v1</code>.</p>
<p>Can someone explain what I'm doing wrong?</p>
|
<p><strong>If you're passing <code>$HELM_SET</code> as a <em>single string</em> encoding <em>multiple arguments</em>, you cannot pass it as-is to a command</strong>.</p>
<p>Instead, you'll need to parse this string into an <em>array</em> of <em>individual</em> arguments.</p>
<p>In the <strong><em>simplest</em> case</strong>, using the unary form of the <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Split" rel="nofollow noreferrer"><code>-split</code> operator</a>, which splits the string into an array of tokens by whitespace:</p>
<pre class="lang-sh prettyprint-override"><code>helm upgrade --install myrelease -n dev my_service.tgz (-split $HELM_SET)
</code></pre>
<p>However, <strong>if your arguments include <em>quoted strings</em></strong> (e.g. <code>--set config.spring="v 1"</code>), more work is needed, because the quoted strings must be recognized as such, so as not to break them into multiple tokens by their <em>embedded</em> whitespace:</p>
<pre class="lang-sh prettyprint-override"><code># Note: Use of Invoke-Expression is safe here, but should generally be avoided.
$passThruArgs = (Invoke-Expression ('Write-Output -- ' + $HELM_SET -replace '\$', "`0")) -replace "`0", '$$'
helm upgrade --install myrelease -n dev my_service.tgz $passThruArgs
</code></pre>
<p>See <a href="https://stackoverflow.com/a/67423062/45375">this answer</a> for an explanation of this technique.</p>
<hr />
<p><strong>If you control the <em>invocation</em> of your script, a <em>simpler solution</em> is available:</strong></p>
<p>As <a href="https://stackoverflow.com/users/15339544/santiago-squarzon">Santiago Squarzon</a> points out, you can use the <code>ValueFromRemainingArguments</code> property of a parameter declaration to collect all arguments (that aren't bound to other parameters):</p>
<pre class="lang-sh prettyprint-override"><code>param
(
[Parameter(ValueFromRemainingArguments)]
$__passThruArgs # Choose an exotic name that won't be used in the actual arguments
)
helm upgrade --install myrelease -n dev my_service.tgz $__passThruArgs
</code></pre>
<p>Instead of passing the pass-through arguments <em>as a single string</em>, you would then pass them <em>as individual arguments</em>:</p>
<pre class="lang-sh prettyprint-override"><code>./yourScript.ps1 --set config.vali=x --set config.spring=v1
</code></pre>
|
<p>I'm working on a Python script to update ConfigMaps programmatically.</p>
<p>An example script is shown below.</p>
<pre><code>import requests
headers = {"Content-Type": "application/json-patch+json"}
configData = {
"apiVersion": "v1",
"kind": "ConfigMap",
"data": {
"test2.load": "testimtest"
},
"metadata": {
"name": "nginx2"
}
}
r = requests.patch("http://localhost:8080/api/v1/namespaces/default/configmaps/nginx2", json=configData)
</code></pre>
<p>The interesting side of this problem is that I have no problem with the POST and GET methods, but when I want to update Kubernetes ConfigMaps with the HTTP PATCH method I'm getting</p>
<pre><code> "reason":"UnsupportedMediaType" //STATUS_CODE 415
</code></pre>
<p>How can I handle this problem?</p>
| <p>I suggest you use a Kubernetes client library, instead of making the raw HTTP calls yourself. Then you don't need to figure out the low-level connection stuff, as the library will abstract that away for you.</p>
<p>I've been using <a href="https://github.com/kelproject/pykube" rel="nofollow noreferrer">Pykube</a>, which provides a nice pythonic API, though it does appear to be abandoned now.</p>
<p>You can also use the official <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">client-python</a>, which is actively maintained. The library is a bit more clunky, as it's based on an autogenerated OpenAPI client, but it covers lots of use-cases like streaming results.</p>
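<p>For example, a minimal sketch of the same ConfigMap patch using the official client (assuming a working kubeconfig, or an in-cluster service account when running inside a pod):</p>
<pre><code>from kubernetes import client, config

# Load credentials: use config.load_incluster_config() when running inside a pod.
config.load_kube_config()

v1 = client.CoreV1Api()

# Only the fields to change need to be supplied; the client takes care of
# setting an appropriate patch content type for you.
body = {"data": {"test2.load": "testimtest"}}
v1.patch_namespaced_config_map(name="nginx2", namespace="default", body=body)
</code></pre>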
|
<p>I have a two node k8s cluster working. I added another node to the cluster and the <code>sudo kubeadm join ...</code> command reported that the node had joined the cluster. The new node is stuck in the NotReady state:</p>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
msi-ubuntu18 NotReady <none> 29m v1.19.0
tv Ready master 131d v1.18.6
ubuntu-18-extssd Ready <none> 131d v1.17.4
</code></pre>
<p>The <code>journalctl -u kubelet</code> shows this error:</p>
<pre><code>Started kubelet: The Kubernetes Node Agent.
22039 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/l...
</code></pre>
<p>But the file /var/lib/kubelet/config.yaml exists and looks OK.</p>
<p>The <code>sudo systemctl status kubelet</code> shows a different error:</p>
<pre><code>kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plu
cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
</code></pre>
<p>And there is no /etc/cni/ directory on the new node. (The existing node has /etc/cni/net.d/ with calico files in it.) If I run</p>
<p><code>kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml</code></p>
<p>on the master again it doesn't solve the problem. There is still no /etc/cni/ dir on the new node.</p>
<p>I must have missed a step when creating the new node. How do I get the /etc/cni/ directory on the new node? It's also puzzling that the <code>kubeadm join ...</code> command indicates success when the new node is stuck in NotReady.</p>
| <p>For anyone else running into this problem, I was finally able to solve this by doing</p>
<pre><code>kubectl delete -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
</code></pre>
<p>followed by</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
<p>There must have been some version incompatibility between version 3.11, which I had installed a few months ago, and the new node.</p>
|
<p>When I run Kubernetes commands, PowerShell wants me to use the path to kubectl.exe instead of just the command kubectl.</p>
<p>I'm told using an alias would work, but I'm not sure how to do that in this case with PowerShell, and my attempts have come up fruitless.</p>
<p>This is what I tried:</p>
<p><a href="https://stackoverflow.com/questions/65855456/how-to-make-an-alias-for-kubectl-in-windows-using-env-variables">How to make an alias for Kubectl in Windows using ENV Variables?</a></p>
<p>I tried running this:
<code>C:\Aliases> New-Item -ItemType File -Path C:\Aliases\"K.bat" -Value "doskey k=kubectl $*" -Force</code></p>
<p>And made a system Environment Variable with Aliases as the name and C:\Aliases as the value.</p>
<p>Typing K, k, kubectl, etc. did not return anything suggesting an alias had been set.</p>
| <p>Place the following in your <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Profiles" rel="nofollow noreferrer"><code>$PROFILE</code></a> file (open it for editing with, e.g., <code>notepad $PROFILE</code>; if it doesn't exist, create it with <code>New-Item -Force $PROFILE</code> first):</p>
<pre><code>Set-Alias k kubectl.exe
</code></pre>
<p>If <code>kubectl.exe</code> isn't in a directory listed in <code>$env:PATH</code>, specify the full path instead (substitute the real directory path below):</p>
<pre><code>Set-Alias k 'C:\path\to\kubectl.exe'
</code></pre>
<p>This allows you to invoke <code>kubectl.exe</code> with <code>k</code> in future sessions.</p>
<p>(The post you link to is for <code>cmd.exe</code> (Command Prompt), not PowerShell.)</p>
|
<p>I have two services running in Kubernetes with URLs, say, <code>https://service1.abc.cloud.com</code> and <code>https://service2.abc.cloud.com</code>. I have routes defined in the Ingress of both services; for example, my swagger path is <code>https://service1.abc.cloud.com/docs</code>, and in the ingress I have defined /docs to go to my service at SSO port 8082. Once SSO is successful, SSO redirects to my service1/docs route.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "service1.fullname" . }}
labels:
{{ include "service1.labels" . | indent 4 }}
{{ include "chart.labels" . | indent 4 }}
annotations:
{{ include "service1.annotations" . | indent 4 }}
spec:
rules:
- host: {{ template "service1.dns" . }}
http:
paths:
- path: /docs
backend:
serviceName: {{ template "service1.fullname" . }}
servicePort: 8082
</code></pre>
<p>Similar ingress file is for service2.</p>
<p>Till now the story was all good, but a requirement has popped up that we need a common URL for all the services, where the path of the common URL carries the service name. So we now have 2 ingress files, one for service1 and one for service2.</p>
<p>The new url for service1 and service2 will be:-</p>
<pre><code>https://mynamespace.abc.cloud.com/service1
https://mynamespace.abc.cloud.com/service2
</code></pre>
<p>I created a common ingress file for the mynamespace host as below. So now I have 3 ingress files: one for mynamespace, one for service1 and one for service2.
I know it should be one ingress file doing all the route mapping, but until we deprecate the service1 and service2 URLs we need to keep supporting them. Once we deprecate them, we will merge all the code into one deployment and ingress.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "mynamespace.fullname" . }}
labels:
{{ include "mynamespace.labels" . | indent 4 }}
{{ include "chart.labels" . | indent 4 }}
annotations:
{{ include "mynamespace.ingress.annotations" . | indent 4 }}
spec:
rules:
- host: {{ template "iot.dns" . }}
http:
paths:
- path: /mynamespace/docs
backend:
serviceName: {{ template "service1.fullname" . }}
servicePort: 8082
</code></pre>
<p>The problem is that I'm not able to route /mynamespace/docs of the mynamespace host to /docs of the service1 host. I also have an SSO sidecar at port 8082, and my service1 and service2 are at port 8080.</p>
<p>End goal to be achieved: when a user hits the URL <code>https://mynamespace.abc.cloud.com/service1/docs</code>, it should internally open the service1 swagger file hosted at <code>https://service1.abc.barco.cloud.com/docs</code>. The URL in the browser should not change, so the rewrite should happen at the Kubernetes level.
Since I need to support the old URLs too, they should also keep working, i.e. <code>https://service1.abc.cloud.com</code> and <code>https://service2.abc.cloud.com</code>.</p>
<p>Old design below:-
<a href="https://i.stack.imgur.com/eHEQn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eHEQn.png" alt="enter image description here" /></a></p>
<p>New design below:-
<a href="https://i.stack.imgur.com/9hX9E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9hX9E.png" alt="enter image description here" /></a></p>
| <p>I am putting my best guess here:</p>
<blockquote>
<p>You are trying to route <code>mynamespace.abc.cloud.com/service1/docs</code> to <code>service1:8082/docs</code>. You are having difficulty to convert the external path <code>/service1/docs</code> to the internal path <code>/docs</code> of service1</p>
</blockquote>
<p>If that is the case, what your ingress needs is the <code>rewrite-target</code> and <code>use-regex</code> annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /docs/$2
nginx.ingress.kubernetes.io/use-regex: "true"
</code></pre>
<p>And the path definition needs to be suffixed with <code>(/|$)(.*)</code>:</p>
<pre><code>- path: /service1/docs(/|$)(.*)
</code></pre>
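<p>Putting it together, a minimal sketch of what the common-host Ingress could look like (using the same <code>extensions/v1beta1</code> API as the question; the service and host names here are illustrative, not your chart templates):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mynamespace-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 captures whatever follows /service1/docs and appends it to /docs/
    nginx.ingress.kubernetes.io/rewrite-target: /docs/$2
spec:
  rules:
  - host: mynamespace.abc.cloud.com
    http:
      paths:
      - path: /service1/docs(/|$)(.*)
        backend:
          serviceName: service1
          servicePort: 8082
</code></pre>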
|
<p>Can anybody point me to the workflow by which I can direct traffic to my domain through an Ingress on EKS?</p>
<p>I have this:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-world
labels:
app: hello-world
annotations:
kubernetes.io/ingress.class: nginx
ingress.kubernetes.io/rewrite-target: /
spec:
backend:
serviceName: hello-world
servicePort: 80
rules:
- host: DOMAIN-I-OWN.com
http:
paths:
- path: /
backend:
serviceName: hello-world
servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
labels:
app: hello-world
spec:
ports:
- port: 80
targetPort: 32000
protocol: TCP
name: http
selector:
app: hello-world
</code></pre>
<p>And able to hit <code>DOMAIN-I-OWN.com</code> using minikube</p>
<pre><code>kubectl config use-context minikube
echo "$(minikube ip) DOMAIN-I-OWN.com" | sudo tee -a /etc/hosts
</code></pre>
<p>But, I can't find tutorials how to do the same thing on AWS EKS?</p>
<p>I have set up an EKS cluster with 3 nodes running, and have pods deployed with the Ingress and Service specs above.
Let's say I own "DOMAIN-I-OWN.com" through Google Domains or GoDaddy.</p>
<p>What would be the next step to set up the DNS?</p>
<p>Do I need an ingress controller? Do I need to install it separately to make this work?</p>
<p>Any help would be appreciated! Got stuck on this several days...</p>
| <p>You need to wire up something like <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns</a> to automatically point DNS names to your cluster's published services' IPs.</p>
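<p>A rough sketch of the usual pattern on EKS (names are illustrative): install an nginx ingress controller whose <code>LoadBalancer</code> Service gets an AWS ELB, and let external-dns create the DNS record for your host, for example via its hostname annotation:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # the ingress controller's front service
  annotations:
    # external-dns watches this annotation (and Ingress hosts) and creates the
    # corresponding record in your DNS provider, pointing at the ELB.
    external-dns.alpha.kubernetes.io/hostname: DOMAIN-I-OWN.com
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
</code></pre>
<p>At your registrar (Google Domains, GoDaddy, etc.) you would then typically delegate DOMAIN-I-OWN.com to the DNS zone that external-dns manages, for example a Route 53 hosted zone on AWS.</p>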
|
<p>I created an Ingress for my React application hosted in an nginx Docker container.</p>
<p>Ingress config</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: my-ingress
labels:
helm.sh/chart: home-service
app.kubernetes.io/name: home-service
app.kubernetes.io/instance: xrayed-vulture
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: Tiller
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: "home-service.net"
http:
paths:
- path: "/pmr"
backend:
serviceName: my-service
servicePort: 8089
</code></pre>
<p>Service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
labels:
helm.sh/chart: home-service
app.kubernetes.io/name: home-service
app.kubernetes.io/instance: xrayed-vulture
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: Tiller
spec:
type: ClusterIP
ports:
- port: 8089
targetPort: 8089
protocol: TCP
name: http
- port: 443
targetPort: https
protocol: TCP
name: https
selector:
app.kubernetes.io/name: pmr-ui-app
</code></pre>
<p>Nginx <code>/etc/nginx/conf.d/default.conf</code> config in my react-app which is hosted in nginx:stable-alpine container.</p>
<pre><code>server {
listen 8089;
listen 443 default ssl;
server_name localhost;
root /usr/share/nginx/html;
ssl_certificate /etc/nginx/certs/nginx.crt;
ssl_certificate_key /etc/nginx/certs/nginx.key;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers HIGH:!aNULL:!MD5;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri /index.html;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log warn;
}
</code></pre>
<p>Making a curl call to the below address, the ingress works fine and returns the right page:</p>
<pre><code>curl -v http://home-service.net/pmr
message from my nginx controller container
10.251.128.1 - - [02/Sep/2020:16:33:30 +0000] "GET /pmr HTTP/1.1" 200 3009 "-" "curl/7.64.0" 103 0.002 [nc-my-service-8089] [] 10.x.x.26:8089 3009 0.000 200 e2407a01ffcf7607b958522574979b29
message from the react app container itself
10.x.x.27 - - [02/Sep/2020:16:33:30 +0000] "GET /pmr HTTP/1.1" 200 3009 "-" "curl/7.64.0" "10.251.128.1"
</code></pre>
<p>But visiting in the browser, I see 404s when loading some .js and .css files.</p>
<p>Chrome with <code>http://home-service.net/pmr</code></p>
<pre><code>Nginx controller logs
10.x.x.198 - - [02/Sep/2020:14:45:36 +0000] "GET /pmr HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" 611 0.001 [nc-my-service-8089] [] 10.x.x.26:8089 0 0.000 304 f994cdb21f962002e73ce6d967f82550
10.x.x.200 - - [02/Sep/2020:14:51:11 +0000] "GET /wf-bridge/wf-bridge/wf-bridge.esm.js HTTP/1.1" 404 21 "http://home-service.net/pmr" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" 495 0.001 [upstream-default-backend] [] 10.x.x.18:8080 21 0.000 404 0ba3519a8f55673cdcbb391d6102609a
10.x.x.1 - - [02/Sep/2020:14:51:11 +0000] "GET /static/js/2.ae30464f.chunk.js HTTP/1.1" 404 21 "http://home-service.net/pmr" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" 435 0.001 [upstream-default-backend] [] 10.x.x.18:8080 21 0.000 404 b01b7b33a961df5be413f10ec14909c1
10.x.x.198 - - [02/Sep/2020:14:51:11 +0000] "GET /static/js/main.d76be073.chunk.js HTTP/1.1" 404 21 "http://home-service.net/pmr" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" 438 0.000 [upstream-default-backend] [] 10.x.x.18:8080 21 0.000 404 c333c6dca261a75133944c6e308874af
10.210.x.200 - - [02/Sep/2020:14:51:11 +0000] "GET /static/css/main.9d650a52.chunk.css HTTP/1.1" 404 21 "http://home-service.net/pmr" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" 455 0.000 [upstream-default-backend] [] 10.x.x.18:8080 21 0.000 404 0d8a5a6aa7d549294fcb1c147dd01294
10.x.x.198 - - [02/Sep/2020:14:51:11 +0000] "GET /static/js/2.ae30464f.chunk.js HTTP/1.1" 404 21 "http://kda9dc8056pmr01.aks.azure.ubsdev.net/pmr" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" 435 0.000 [upstream-default-backend] [] 10.x.x.18:8080 21 0.000 404 1d8ee795cfa4e52225cf51e9fc1989d6
10.x.x.198 - - [02/Sep/2020:14:51:11 +0000] "GET /static/js/main.d76be073.chunk.js HTTP/1.1" 404 21 "http://home-service.net/pmr" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" 438 0.001 [upstream-default-backend] [] 10.x.x.18:8080 21 0.004 404 a0952e173bfb58f418f5d600e377f18c
React Nginx container will recieve the request with 200 like this
10.x.x.25 - - [02/Sep/2020:14:25:27 +0000] "GET / HTTP/1.1" 200 3009 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" "10.210.62.200"
</code></pre>
<p>Why aren't all the requests handled by <code>my-service</code>? Why are the CSS and .js files handled by the <code>upstream-default-backend</code>, resulting in 404s?</p>
<p>Nginx ingress controller version: <code>nginx-ingress-controller:0.26.1</code></p>
| <p>Your resources are rendered with the root path <code>/static</code>. That’s a separate HTTP request and will get default backend since it does not satisfy your <code>/pmr</code> path rule. You should use relative paths inside the web app for the reverse proxy to work correctly. For example drop the <code>/</code> root and just use <code>static/css/main.css</code>. Or use hostname rule instead of path rules.</p>
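<p>If the app happens to be built with Create React App (an assumption, not stated in the question), one way to get prefixed asset URLs is to build with <code>PUBLIC_URL</code> set, so the generated references start with <code>/pmr</code> and match the Ingress path rule:</p>
<pre><code># Assumption: Create React App build; adjust for your bundler of choice.
PUBLIC_URL=/pmr npm run build
</code></pre>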
|
<p>I have two Kubernetes clusters in datacenters and I'm looking to create a third in public cloud. Both of my clusters use Azure AD for authentication by way of OIDC. I start my API server with the following: </p>
<pre><code>--oidc-issuer-url=https://sts.windows.net/TENAND_ID/
--oidc-client-id=spn:CLIENT_ID
--oidc-username-claim=upn
</code></pre>
<p>I created a Kubernetes cluster on GKE, and I'm trying to figure out how to use my OIDC provider there. I know that GKE fully manages the control plane.</p>
<p>Is it possible to customize a GKE cluster to use my own OIDC provider, which is Azure AD in this case?</p>
| <p>This is now supported! Check out <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/oidc" rel="nofollow noreferrer">the documentation</a> on how to configure an external OIDC provider.</p>
|
<p>Could someone help me please and point me what configuration should I be doing for my use-case?</p>
<p>I'm building a development k8s cluster, and one of the steps is to generate security files (private keys) that are created in a number of pods during deployment (let's say for a simple setup I have 6 pods that each build their own security keys). I need to have access to all these files, and they must persist after the pod goes down.</p>
<p>I'm now trying to figure out how to set it up locally for internal testing. From what I understand, local PersistentVolumes only allow a 1:1 binding with PersistentVolumeClaims, so I would have to create a separate PersistentVolume and PersistentVolumeClaim for each pod that gets configured. I would prefer to avoid this and use one PersistentVolume for all.</p>
<p>Could someone be so nice and help me or point me to the right setup that should be used?</p>
<p><strong>-- Update: 26/11/2020</strong>
So this is my setup:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hlf-nfs--server
spec:
replicas: 1
selector:
matchLabels:
app: hlf-nfs--server
template:
metadata:
labels:
app: hlf-nfs--server
spec:
containers:
- name: hlf-nfs--server
image: itsthenetwork/nfs-server-alpine:12
ports:
- containerPort: 2049
name: tcp
- containerPort: 111
name: udp
securityContext:
privileged: true
env:
- name: SHARED_DIRECTORY
value: "/opt/k8s-pods/data"
volumeMounts:
- name: pvc
mountPath: /opt/k8s-pods/data
volumes:
- name: pvc
persistentVolumeClaim:
claimName: shared-nfs-pvc
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hlf-nfs--server
labels:
name: hlf-nfs--server
spec:
type: ClusterIP
selector:
app: hlf-nfs--server
ports:
- name: tcp-2049
port: 2049
protocol: TCP
- name: udp-111
port: 111
protocol: UDP
</code></pre>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: shared-nfs-pvc
spec:
accessModes:
- ReadWriteMany
storageClassName: nfs
resources:
requests:
storage: 1Gi
</code></pre>
<p>These three are being created at once, after that, I'm reading the IP of the service and adding it to the last one:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: shared-nfs-pv
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
nfs:
path: /opt/k8s-pods/data
server: <<-- IP from `kubectl get svc -l name=hlf-nfs--server`
</code></pre>
<p>The problem I'm trying to resolve is that the PVC does not get bound to the PV and the deployment stays stuck waiting to become ready.</p>
<p>Did I miss anything?</p>
| <p>You can create a NFS and have the pods use NFS volume. Here is the manifest file to create such in-cluster NFS server (make sure you modify STORAGE_CLASS and the other variables below):</p>
<pre><code>export NFS_NAME="nfs-share"
export NFS_SIZE="10Gi"
export NFS_IMAGE="itsthenetwork/nfs-server-alpine:12"
export STORAGE_CLASS="thin-disk"
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: ${NFS_NAME}
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
spec:
ports:
- name: tcp-2049
port: 2049
protocol: TCP
- name: udp-111
port: 111
protocol: UDP
selector:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
name: ${NFS_NAME}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: $STORAGE_CLASS
volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${NFS_NAME}
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
template:
metadata:
labels:
app.kubernetes.io/name: nfs-server
app.kubernetes.io/instance: ${NFS_NAME}
spec:
containers:
- name: nfs-server
image: ${NFS_IMAGE}
ports:
- containerPort: 2049
name: tcp
- containerPort: 111
name: udp
securityContext:
privileged: true
env:
- name: SHARED_DIRECTORY
value: /nfsshare
volumeMounts:
- name: pvc
mountPath: /nfsshare
volumes:
- name: pvc
persistentVolumeClaim:
claimName: ${NFS_NAME}
EOF
</code></pre>
<p>Below is an example how to point the other pods to this NFS. In particular, refer to the <code>volumes</code> section at the end of the YAML:</p>
<pre><code>export NFS_NAME="nfs-share"
export NFS_IP=$(kubectl get --template={{.spec.clusterIP}} service/$NFS_NAME)
kubectl apply -f - <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
name: apache
labels:
app: apache
spec:
replicas: 2
selector:
matchLabels:
app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: apache
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: nfs-vol
          subPath: html
      volumes:
      - name: nfs-vol
        nfs:
          server: $NFS_IP
          path: /
EOF
</code></pre>
|
<p>I have deployed an application on Kubernetes and exposed it with the Istio service mesh. There are 2 components in the application, UI and API. I am trying to set up a canary-based setup to enable A/B testing. So, for these 2 components, 2 versions (v1 and v2) have been deployed, meaning (at least) 4 pods are running.</p>
<p>Assume v1 is the stable version and v2 is the release version. Version v1 will serve real internet traffic and version v2 will serve requests from a specific IP address, to make sure the promotion of version v2 will not impact the real production environment. Refer to the attached image for clarity on the traffic flow within the application.</p>
<p><a href="https://i.stack.imgur.com/YXe14.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YXe14.png" alt="enter image description here" /></a></p>
<p>Testing of UI v2 (release version) is quite easy by filtering the real client IP address of the user with a VirtualService:</p>
<pre><code> - headers:
x-forwarded-for:
regex: .*1.2.3.4.*
</code></pre>
<p>Testing of API v2 (release version) is complex, because it is not exposed to the internet and must serve traffic only from UI v2 (release version) internally, but I am unable to do so.</p>
<pre><code> url = "http://service-api"
hdr = { 'CALLER_POD' : 'ui_v2_pod_hostname_with_release' }
req = urllib.request.Request(url, headers=hdr)
response = urllib.request.urlopen(req)
</code></pre>
<p>One hacky trick I have applied in the application: I added a custom HTTP request header <strong>"CALLER_POD"</strong> while calling the API from the UI v2 pod, so that the API VirtualService can filter requests based on <strong>"CALLER_POD"</strong>. But this looks overly complex because it needs broader code refactoring and more manual maintenance if anything changes in the future.</p>
<p><strong>Is there any way to add the UI v2 pod identity (preferably the hostname) to the HTTP request header while calling the API service internally, at the Kubernetes or Istio level?</strong></p>
| <p>Have you tried using <code>sourceLabels</code> based <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest" rel="nofollow noreferrer">routing</a>? For example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: socks-com
spec:
hosts:
- sock.com
  http:
  - match:
    - sourceLabels:
        UI: v2
    route:
    - destination:
        host: API
        subset: v2
  - route:
    - destination:
        host: API
        subset: v1
</code></pre>
<p>It would also require <code>DestinationRule</code> update with two <code>subsets</code>.</p>
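<p>A sketch of such a <code>DestinationRule</code> (the <code>version</code> labels are an assumption about how your API pods are labelled):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api-destination
spec:
  host: API
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
</code></pre>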
|
<ul>
<li>I need to see the logs of all the pods in a deployment with N worker pods</li>
<li>When I do <code>kubectl logs deployment/name --tail=0 --follow</code> the command syntax makes me assume that it will tail all pods in the deployment</li>
<li>However when I go to process I don't see any output as expected until I manually view the logs for all N pods in the deployment</li>
</ul>
<h3>Does <code>kubectl logs deployment/name</code> get all pods or just one pod?</h3>
| <h3>Only one pod seems to be the answer.</h3>
<ul>
<li>I went here: <a href="https://stackoverflow.com/questions/33069736/how-do-i-get-logs-from-all-pods-of-a-kubernetes-replication-controller#comment124606499_56258727">How do I get logs from all pods of a Kubernetes replication controller?</a> and it seems that the command <code>kubectl logs deployment/name</code> only shows one pod of the N</li>
<li>Also, when you execute <code>kubectl logs</code> on a deployment, it prints to the console that it is only showing logs for one pod (not all the pods); a label-selector workaround is sketched below</li>
</ul>
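<p>A common workaround (a sketch, assuming your pods carry an <code>app</code> label you can match on) is to select the pods by label instead of going through the deployment:</p>
<pre><code># Follows logs from every pod matching the label, prefixing each line with the
# pod name; --prefix requires a reasonably recent kubectl.
kubectl logs -l app=my-app --all-containers --prefix --follow
</code></pre>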
|
<p>We are finding that our Kubernetes cluster tends to have hot-spots where certain nodes get far more instances of our apps than other nodes.</p>
<p>In this case, we are deploying lots of instances of Apache Airflow, and some nodes have 3x more web or scheduler components than others.</p>
<p>Is it possible to use anti-affinity rules to force a more even spread of pods across the cluster?</p>
<p><strong>E.g. "prefer the node with the least pods of label <code>component=airflow-web</code>?"</strong></p>
<p>If anti-affinity does not work, are there other mechanisms we should be looking into as well?</p>
| <p>Try adding this to the Deployment/StatefulSet <code>.spec.template</code>:</p>
<pre><code> affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "component"
operator: In
values:
- airflow-web
topologyKey: "kubernetes.io/hostname"
</code></pre>
|
<p>I have two services in kubernetes sharing the same namespace.</p>
<p>I am trying to connect to <code>service B</code> from inside a pod that is associated with <code>service A</code>.</p>
<p>I exec into the pod that is associated with <code>service A</code> then try to send <code>curl</code> request to <code>service B</code>:</p>
<pre><code>curl service-b-beta.common-space.svc.cluster.local:7000
</code></pre>
<p>However, it returns the following error:</p>
<pre><code>curl: (6) Could not resolve host: service-b-beta.common-space.svc.cluster.local
</code></pre>
<p>service A:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: service-a
namespace: common-space
name: service-a-beta
spec:
ports:
- name: http
port: 7200
protocol: TCP
targetPort: 7200
selector:
name: service-a-beta
sessionAffinity: None
type: ClusterIP
</code></pre>
<p>Service B:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: service-b
namespace: common-space
name: service-b-beta
spec:
ports:
- name: http
port: 7000
protocol: TCP
targetPort: 7000
selector:
name: service-b-beta
sessionAffinity: None
type: ClusterIP
</code></pre>
| <p>Here are some debugging tips:</p>
<ol>
<li>If running on multiple nodes, please make sure nodes can talk to each other.</li>
<li>Check if <code>coredns</code> pod on <code>master</code> is running and is healthy. See logs for any issues.</li>
<li>Run a test pod in the cluster and see if you can resolve internet domains. If that fails, check your <code>coredns</code> logs (see the example after this list).</li>
<li>Run a test pod and check /etc/resolv.conf and see if it makes sense.</li>
<li>Check the <code>coredns</code> config map in the kube-system namespace. See if it looks normal.</li>
<li>Describe the endpoints of the target service and see if it's bound correctly to the target pod.</li>
</ol>
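<p>For example, a quick way to run such a test pod and check in-cluster DNS resolution (the busybox image here is just a convenient assumption):</p>
<pre><code># One-off pod that resolves the service name and is removed afterwards.
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup service-b-beta.common-space.svc.cluster.local
</code></pre>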
|
<p>I configured a Kubernetes cluster with one master and one node; the machines that run the master and the node aren't in the same network. For networking I installed Calico and all the pods are running. For testing the cluster I used the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">get shell example</a>, and when I run the following command from the master machine:</p>
<pre><code>kubectl exec -it shell-demo -- /bin/bash
</code></pre>
<p>I received the error:</p>
<pre><code>Error from server: error dialing backend: dial tcp 10.138.0.2:10250: i/o timeout
</code></pre>
<p>The ip 10.138.0.2 is on eth0 interface on the node machine. </p>
<p>What configuration do I need to make to access the pod from master?</p>
<p><strong>EDIT</strong></p>
<p>kubectl get all --all-namespaces -o wide output:</p>
<pre><code>default shell-demo 1/1 Running 0 10s 192.168.4.2 node-1
kube-system calico-node-7wlqw 2/2 Running 0 49m 10.156.0.2 instance-1
kube-system calico-node-lnk6d 2/2 Running 0 35s 10.132.0.2 node-1
kube-system coredns-78fcdf6894-cxgc2 1/1 Running 0 50m 192.168.0.5 instance-1
kube-system coredns-78fcdf6894-gwwjp 1/1 Running 0 50m 192.168.0.4 instance-1
kube-system etcd-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-apiserver-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-controller-manager-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-proxy-b64b5 1/1 Running 0 50m 10.156.0.2 instance-1
kube-system kube-proxy-xxkn4 1/1 Running 0 35s 10.132.0.2 node-1
kube-system kube-scheduler-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
</code></pre>
<p>Thanks!</p>
| <p>I had this issue too. Don't know if you're on Azure, but I am, and I solved this by deleting the tunnelfront pod and letting Kubernetes restart it:</p>
<pre><code>kubectl -n kube-system delete po -l component=tunnel
</code></pre>
<p>which is a solution I got from <a href="https://github.com/Azure/AKS/issues/232#issuecomment-403484459" rel="nofollow noreferrer">here</a></p>
|
<p>I'm trying to set up an Ingress rule for a service (Kibana) running in my microk8s cluster but I'm having some problems.</p>
<p>The first rule set up is</p>
<pre><code>Name: web-ingress
Namespace: default
Address: 127.0.0.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
k8s-ingress-tls terminates web10
Rules:
Host Path Backends
---- ---- --------
*
/ web-service:8080 (10.1.72.26:8080,10.1.72.27:8080)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
</code></pre>
<p>I'm trying to set up the Kibana service to be served on the path /kibana.</p>
<pre><code>Name: kibana
Namespace: default
Address: 127.0.0.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
k8s-ingress-tls terminates web10
Rules:
Host Path Backends
---- ---- --------
*
/kibana(/|$)(.*) kibana:5601 (10.1.72.39:5601)
Annotations: nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 17m nginx-ingress-controller Ingress default/kibana
Normal UPDATE 17m nginx-ingress-controller Ingress default/kibana
</code></pre>
<p>My problem is that the first thing Kibana does is return a 302 redirect to host/login?next=%2F, which gets resolved by the first Ingress rule because now I've lost my /kibana path.</p>
<p>I tried adding <code>nginx.ingress.kubernetes.io/rewrite-target: /kibana/$2</code> but redirect then just looks like <code>host/login?next=%2Fkibana%2F</code> which is not what I want at all.</p>
<p>If I delete the first rule, I just get 404 once Kibana does a redirect to host/login?next=%2F</p>
| <p>Add the following annotation to the <code>kibana</code> ingress so that nginx-ingress interprets the <code>/kibana(/|$)(.*)</code> path using regex:</p>
<pre><code> nginx.ingress.kubernetes.io/use-regex: "true"
</code></pre>
<p>Additional detail:
To let kibana know that it runs on <code>/kibana</code> path, add the following env variable to the kibana pod/deployment:</p>
<pre><code> - name: SERVER_BASEPATH
value: /kibana
</code></pre>
|
<p>I'm creating a plain vanilla AKS cluster with an ACR container registry and deploying a dummy service, something I've done a number of times before and that should work, but it's not working - the service deploys without errors, I see that the pod and the service are alive, and the ports seem to match - but I fail to reach the app running in the pod.</p>
<p>Here is my YAML file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: dummyapp-prep
spec:
selector:
app: dummyapp-prep
ports:
- protocol: TCP
port: 80
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dummyapp-prep
spec:
selector:
matchLabels:
run: dummyapp-prep
replicas: 1
template:
metadata:
labels:
run: dummyapp-prep
spec:
containers:
- name: dummyapp-prep
image: dummyappregistry.azurecr.io/dummyappregistry.azurecr.io/dummyapp-prep:dummyapp-prep-18
ports:
- containerPort: 80
imagePullSecrets:
- name: secret
</code></pre>
<p>Everything deploys fine - I see the service and it gets an external IP:</p>
<pre><code>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dummyapp-prep LoadBalancer 10.0.230.4 52.149.106.85 80:32708/TCP 4m24s
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 26h
</code></pre>
<p>The pod is fine; I can connect to it and curl the app on localhost:80. Still, browsing <a href="http://52.149.106.85:80" rel="nofollow noreferrer">http://52.149.106.85:80</a> times out.</p>
<p>I check the Azure Load Balancer - the IP is registered.</p>
<p>What else could be wrong?</p>
| <p>You have the wrong label applied. Service is looking for <code>app: dummyapp-prep</code> while the pod and deployment have <code>run: dummyapp-prep</code>. Notice <code>run</code> vs <code>app</code> label names.</p>
<p>You can also check if the service is bound by checking the endpoint object the API server creates for you by running <code>kubectl describe endpoints dummyapp-prep</code>. If it doesn't list the pod IPs in the <code>Subsets</code> section then it would mean the service can't find the pods.</p>
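<p>A minimal sketch of one possible fix, making the Service selector match the label the pods actually carry (you could equally change the Deployment to use <code>app:</code> everywhere instead):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: dummyapp-prep
spec:
  selector:
    run: dummyapp-prep   # was "app: dummyapp-prep", which matched no pods
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer
</code></pre>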
|
<p>How can I schedule a Kubernetes <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">cron job</a> to run at a specific time and just once?</p>
<p>(Or alternatively, a Kubernetes job which is not scheduled to run right away, but delayed for some amount of time – what is in some scheduling systems referred to as "earliest time to run".)</p>
<p>The documentation says:</p>
<blockquote>
<p>Cron jobs can also schedule individual tasks for a specific time [...]</p>
</blockquote>
<p>But how does that work in terms of job history; is the control plane smart enough to know that the scheduling is for a specific time and won't be recurring?</p>
| <p>You can always put specific minute, hour, day, month in the schedule cron expression, for example 12:15am on 25th of December:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
spec:
schedule: "15 0 25 12 *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
<p>Unfortunately it does not support specifying the year (the single <code>*</code> in the cron expression is for the day of the week) but you have one year to remove the cronjob before the same date & time comes again for the following year.</p>
|
<p>I have a scaler service that was working fine, until my recent kubernetes version upgrade. Now I keep getting the following error. (some info redacted)</p>
<p><code>Error from server (Forbidden): deployments.extensions "redacted" is forbidden: User "system:serviceaccount:namesspace:saname" cannot get resource "deployments/scale" in API group "extensions" in the namespace "namespace"</code></p>
<p>I have below cluster role:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: redacted
chart: redacted
heritage: Tiller
release: redacted
name: redacted
rules:
- apiGroups:
- '*'
resources: ["configmaps", "endpoints", "services", "pods", "secrets", "namespaces", "serviceaccounts", "ingresses", "daemonsets", "statefulsets", "persistentvolumeclaims", "replicationcontrollers", "deployments", "replicasets"]
verbs: ["get", "list", "watch", "edit", "delete", "update", "scale", "patch", "create"]
- apiGroups:
- '*'
resources: ["nodes"]
verbs: ["list", "get", "watch"]
</code></pre>
| <p>scale is a subresource, not a verb. Include "deployments/scale" in the resources list. </p>
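<p>For example, the rule from the question could be extended like this (trimmed to the relevant resources for brevity):</p>
<pre><code>rules:
- apiGroups:
  - '*'
  resources: ["deployments", "deployments/scale", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch", "update", "patch"]
</code></pre>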
|
<p>If I perform the following command it looks in "https://github.com/grafana/" instead of the one I specified - "https://grafana.github.io/helm-charts"</p>
<p>Here is what I run and the results:</p>
<pre><code>helm3 upgrade --install grafana grafana --dry-run --repo https://grafana.github.io/helm-charts --wait
Release "grafana" does not exist. Installing it now.
Error: failed to download "https://github.com/grafana/helm-charts/releases/download/grafana-6.16.14/grafana-6.16.14.tgz"
</code></pre>
<p>Why is it looking in "github.com/grafana" instead of where I told it to look with the repo flag - "grafana.github.io"?</p>
<p>My coworker runs the same command and it works.
I list the repositories and grafana is not there, so I would assume the --repo flag should force this to work?</p>
<pre><code>helm3 repo list
NAME URL
stable https://charts.helm.sh/stable
local http://127.0.0.1:8879/charts
eks https://aws.github.io/eks-charts
bitnami https://charts.bitnami.com/bitnami
cluster-autoscaler https://kubernetes.github.io/autoscaler
kube-dns-autoscaler https://kubernetes-sigs.github.io/cluster-proportional-autoscaler
cluster-proportional-autoscaler https://kubernetes-sigs.github.io/cluster-proportional-autoscaler
external-dns https://charts.bitnami.com/bitnami
kube2iam https://jtblin.github.io/kube2iam/
kubernetes-dashboard https://kubernetes.github.io/dashboard/
incubator https://charts.helm.sh/incubator
</code></pre>
<p>My coworker has the same repo list output as above.</p>
<p>The commands below work on my system; however, I want to know why it does not work for me when I use the --repo flag as in the example above (all of our code has that flag in it and they do not want to change it):</p>
<pre><code>helm3 repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories
kconfig_et helm3 upgrade --install grafana grafana/grafana --dry-run --wait
</code></pre>
| <p>I executed your Helm command but with <code>--debug</code> flag to get this error:</p>
<pre><code>helm upgrade --install grafana grafana --dry-run --repo https://grafana.github.io/helm-charts --wait --debug
history.go:56: [debug] getting history for release grafana
Release "grafana" does not exist. Installing it now.
install.go:178: [debug] Original chart version: ""
Error: no cached repo found. (try 'helm repo update')
</code></pre>
<p>Then I simply executed <code>helm repo update</code> as suggested. I then retried the same <code>helm upgrade</code> command and it successfully installed the chart.</p>
<p>You coworker did not encounter the error because at some point he/she has executed <code>helm repo update</code> at least once. (Mine was a freshly installed Helm)</p>
|
<p>I am using helm/k8s to deploy a third-party (<a href="https://prisma.io" rel="nofollow noreferrer">prisma</a>) container. The container expects an environment variable in the shape of YAML similar to:</p>
<pre><code>port: 4466
managementApiSecret: $PRISMA_SECRET
databases:
default:
connector: postgres
host: postgresql
port: 5432
user: postgres
password: $PG_SECRET
migrations: true
</code></pre>
<p>I have access to the postgres password and managementApiSecret as values in a separate secret. I am trying to create a pod that fetches the two secrets and uses them to create an environment variable. My current attempt at a solution looks like this:</p>
<pre><code>containers:
- name: prisma
image: 'prismagraphql/prisma:1.14'
ports:
- name: prisma-4466
containerPort: 4466
env:
- name: PG_SECRET
valueFrom:
secretKeyRef:
name: postgresql
key: postgres-password
- name: PRISMA_CONFIG
value: |
port: 4466
managementApiSecret: $PRISMA_SECRET
databases:
default:
connector: postgres
host: postgresql
port: 5432
user: postgres
password: $PG_SECRET
migrations: true
</code></pre>
<p>This does not seem to work (because the secret is evaluated at kubectl apply time?). Is there an alternative way of creating env variables with secret information?</p>
| <p>From the envvar doc: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#envvar-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#envvar-v1-core</a></p>
<blockquote>
<p>Variable references $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables.</p>
</blockquote>
<p>Your second envvar can use the value of the earlier envvar as <code>$(PG_SECRET)</code></p>
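<p>Applied to the snippet from the question, that would look roughly like this (a <code>PRISMA_SECRET</code> entry would need to be defined the same way before it is referenced):</p>
<pre><code>env:
- name: PG_SECRET
  valueFrom:
    secretKeyRef:
      name: postgresql
      key: postgres-password
# $(PG_SECRET) below is expanded by Kubernetes because PG_SECRET is defined first.
- name: PRISMA_CONFIG
  value: |
    port: 4466
    databases:
      default:
        connector: postgres
        host: postgresql
        port: 5432
        user: postgres
        password: $(PG_SECRET)
        migrations: true
</code></pre>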
|
<p>I am trying to install Operator Lifecycle Manager (OLM) — a tool to help manage the Operators running on your cluster — from the <a href="https://operatorhub.io/operator/gitlab-runner-operator" rel="nofollow noreferrer">official documentation</a>, but I keep getting the error below. What could possibly be wrong?</p>
<p>This is the result from the command:</p>
<pre class="lang-bash prettyprint-override"><code>curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
</code></pre>
<pre class="lang-bash prettyprint-override"><code>/bin/bash: line 2: $'\r': command not found
/bin/bash: line 3: $'\r': command not found
/bin/bash: line 5: $'\r': command not found
: invalid option6: set: -
set: usage: set [-abefhkmnptuvxBCEHPT] [-o option-name] [--] [-] [arg ...]
/bin/bash: line 7: $'\r': command not found
/bin/bash: line 9: $'\r': command not found
/bin/bash: line 60: syntax error: unexpected end of file
</code></pre>
<p>I've tried removing the existing curl and downloading and installing another version, but the issue still persists. Most solutions online are for Linux users, and they all lead to Windows path settings and file issues.</p>
<p>I haven't found one tackling installing a file using <code>curl</code>.</p>
<p>I'll gladly accept any help.</p>
| <p><strong>Using PowerShell <em>on Windows</em></strong>, <strong>you must explicitly ensure that the stdout lines emitted by <code>curl.exe</code> are separated with Unix-format LF-only newlines, <code>\n</code>, <em>when PowerShell passes them on to <code>bash</code></em></strong>, given that <code>bash</code>, like other Unix shells, doesn't recognize Windows-format CRLF newlines, <code>\r\n</code>:</p>
<p>The <strong>simplest way to <em>avoid</em> the problem is to call via <code>cmd /c</code></strong>:</p>
<pre><code>cmd /c 'curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0'
</code></pre>
<p><code>cmd.exe</code>'s pipeline (<code>|</code>) (as well as its redirection operator, <code>></code>), unlike PowerShell's (see below), acts as a <em>raw byte conduit</em>, so it simply streams whatever bytes <code>curl.exe</code> outputs to the receiving <code>bash</code> call, unaltered.</p>
<p><strong>Fixing the problem on the <em>PowerShell</em> side</strong> requires more work, and is inherently <em>slower</em>:</p>
<pre><code>(
(
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh
) -join "`n"
) + "`n" | bash -s v0.22.0
</code></pre>
<p><sup>Note: <code>`n</code> is a <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Special_Characters" rel="nofollow noreferrer">PowerShell escape sequence</a> that produces a literal LF character, analogous to <code>\n</code> in certain <code>bash</code> contexts.</sup></p>
<p>Note:</p>
<ul>
<li><p>It is important to note that, <strong>as of PowerShell 7.2.x, passing <em>raw bytes</em> through the <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Pipelines" rel="nofollow noreferrer">pipeline</a> is <em>not</em> supported</strong>: external-program stdout output is invariably <em>decoded into .NET strings</em> on <em>reading</em>, and <em>re-encoded</em> based on the <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Preference_Variables#outputencoding" rel="nofollow noreferrer"><code>$OutputEncoding</code> preference variable</a> when <em>writing</em> to an(other) external program.</p>
<ul>
<li>See <a href="https://stackoverflow.com/a/59118502/45375">this answer</a> for more information, and <a href="https://github.com/PowerShell/PowerShell/issues/1908" rel="nofollow noreferrer">GitHub issue #1908</a> for potential <em>future</em> support for raw byte streaming between external programs and on redirection to a file.</li>
</ul>
</li>
<li><p>That is, <strong>PowerShell invariably interprets output from external programs, such as <code>curl.exe</code>, as <em>text</em>, and sends it <em>line by line</em> through the pipeline, <em>as .NET string objects</em></strong> (the PowerShell pipeline in general conducts (.NET) <em>objects</em>).</p>
<ul>
<li>Note that these lines (strings) do <em>not</em> have a trailing newline themselves; that is, the information about what specific newline sequences originally separated the lines is <em>lost</em> at that point (PowerShell itself recognizes CRLF and LF newlines interchangeably).</li>
</ul>
</li>
<li><p>However, <strong>if the receiving command is <em>also</em> an external program</strong>, <strong>PowerShell <em>adds a trailing platform-native newline</em> to each line, which on Windows is a CRLF newline</strong> - this is what caused the problem.</p>
</li>
<li><p>By collecting the lines in an array up front, using <code>(...)</code>, they can be sent as a <em>single, LF-separated multi-line string</em>, using the <a href="https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Join" rel="nofollow noreferrer"><code>-join</code>operator</a>, as shown above.</p>
<ul>
<li><p>Note that PowerShell appends a trailing platform-native newline to this single, multi-line string too, but a stray <code>\r\n</code> at the <em>very end</em> of the input is in effect ignored by <code>bash</code>, assuming that the last true input line ends in <code>\n</code>, which is what the extra <code>+ "`n"</code> at the end of the expression ensures.</p>
</li>
<li><p>However, there are scenarios where this trailing CRLF newline <em>does</em> cause problems - see <a href="https://stackoverflow.com/a/48372333/45375">this answer</a> for an example and workarounds via the platform-native shell.</p>
</li>
</ul>
</li>
</ul>
|
<p>I am running kubectl version 1.7</p>
<p>I am trying to add an init container to my deployment via <code>kubectl patch</code>, but no matter how I try it, it simply returns "not patched".</p>
<p><code>kubectl patch deployment my-deployment --patch "$(cat ./init-patch.yaml)"</code>
<strong>deployment "my-deployment" not patched</strong></p>
<pre><code>spec:
  template:
    spec:
      initContainers:
      - name: my-mount-init
        image: "my-image"
        command:
        - "sh"
        - "-c"
        - "mkdir /mnt/data && chmod -R a+rwx /mnt/data"
        volumeMounts:
        - name: "my-volume"
          mountPath: "/mnt/data"
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: "0.2"
            memory: "256Mi"
          requests:
            cpu: "0.1"
            memory: "128Mi"
</code></pre>
<p>This is to allow a custom linux user rights to read and write to the volume instead of needing to be the root user.</p>
<p>I wish there were a better response explaining why it is not being patched.</p>
| <p>Kubectl is not idempotent. If the element to be patched already contains the patch, kubectl patch fails. </p>
<p>The solution can be read in Natim's comment, but it took me a while to realise that was indeed my problem.</p>
|
<p>I have a React app that I want to pull data from a springboot endpoint on load. If I run it locally (via intellij and webstorm) it loads and I see my page render. It looks like this:</p>
<p><a href="https://i.stack.imgur.com/b9NQq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b9NQq.png" alt="enter image description here" /></a></p>
<p>The 1: course1 and 2: course2 are the response being read from my springboot call.</p>
<p>The ONLY thing I'm changing in the code between local run and local cluster is the endpoint that react calls. I change from http://localhost:8080/greeting to http://bootdemo:8080/greeting (bootdemo being the name of the service). When I run it in my local cluster (Im using docker-desktop and navigating my browser to http://localhost:31000/) I get a page that just says "Loading..." and nothing ever loads. Using google developer tools on the page I can see this:</p>
<pre><code>[HMR] Waiting for update signal from WDS...
App.js:12 Fetching from http://bootdemo:8080/greeting
App.js:13 GET http://bootdemo:8080/greeting net::ERR_NAME_NOT_RESOLVED
App.js:24 error: TypeError: Failed to fetch
</code></pre>
<p>Any help would be appreciated. Below is the react code and the k8's files and output I thought relevant to the problem. Let me know what else I can add if needed!</p>
<p>React App.js:</p>
<pre><code>import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  state = {
    isLoading: true,
    groups: []
  };

  async componentDidMount() {
    console.log("Fetching from http://bootdemo:8080/greeting")
    await fetch('http://bootdemo:8080/greeting').then((response) => {
      if (!response.ok) throw new Error("Couldnt make the connection: " + response.status.toString());
      else return response.json();
    })
    .then((data) => {
      console.log("Testing Testing")
      this.setState({ isLoading: false, groups: data });
      console.log(data)
      console.log("DATA STORED");
    })
    .catch((error) => {
      console.log('error: ' + error);
      this.setState({ requestFailed: true });
    });
  }

  render() {
    const {groups, isLoading} = this.state;
    if (isLoading) {
      return <p>Loading...</p>;
    }
    console.log("Made it here")
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <div className="App-intro">
            <h2>Attempted Response</h2>
            <div>
              {groups.map((group) => (
                <p>{group.id} : {group.name}</p>
              ))}
            </div>
          </div>
        </header>
      </div>
    );
  }
}

export default App;
</code></pre>
<p>K8s Spring Boot setup:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: bootdemo
  labels:
    name: bootdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bootdemo
  template:
    metadata:
      labels:
        app: bootdemo
    spec:
      containers:
        - name: bootdemo
          image: demo:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: bootdemo
  labels:
    app: bootdemo
spec:
  selector:
    app: bootdemo
  ports:
    - name: http
      port: 8080
      targetPort: 80
</code></pre>
<p>K8s React setup:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: reactdemo
          image: reactdemo:latest
          imagePullPolicy: Never
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - name: ingressport
      port: 3000
      targetPort: 3000
      nodePort: 31000
</code></pre>
<p>Kubernetes after applying the deployment YAMLs:</p>
<pre><code>NAME                            READY   STATUS    RESTARTS   AGE
pod/bootdemo-7f84f48fc6-2zw4t   1/1     Running   0          30m
pod/demo-c7fbcd87b-w77xh        1/1     Running   0          30m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/bootdemo     ClusterIP   10.102.241.8   <none>        8080/TCP         30m
service/demo         NodePort    10.108.61.5    <none>        3000:31000/TCP   30m
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          23d

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/bootdemo   1/1     1            1           30m
deployment.apps/demo       1/1     1            1           30m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/bootdemo-7f84f48fc6   1         1         1       30m
replicaset.apps/demo-c7fbcd87b        1         1         1       30m
</code></pre>
<p>It seems that the React service just can't reach the Spring Boot service. I suspect it is one or more of the following:</p>
<ol>
<li>a naming/mapping issue in the deployment files</li>
<li>using two services instead of just one</li>
<li>The React app served to my browser doesn't know how to reach back into my cluster and get to Spring Boot. Do I need something like NGINX in the mix?</li>
</ol>
| <p>Your browser won't be able to resolve <code>http://bootdemo:8080/</code> because <code>bootdemo</code> is a cluster-internal DNS name: it is only resolvable by pods running inside the cluster (and, as a short name, only within the same namespace), not by your browser on the host.</p>
<p>You see, even if you run a Kubernetes cluster on your local machine via Docker Desktop, the cluster and your local machine are effectively isolated by their separate networking layers, so any access from your local machine to a service in your cluster requires a bridging mechanism such as a <code>NodePort</code> or <code>LoadBalancer</code> service (as I can see, you are already using a <code>NodePort</code> service for <code>demo</code>).</p>
<p>In proper clusters served by Kubernetes distributions like AKS, GKE, EKS, PKS etc., however, you would need to design how the cluster accepts and routes inbound traffic to the components within the cluster, which commonly requires setting up an ingress controller (NGINX being the most popular, though you can also use Traefik or HAProxy) as well as creating Ingress resources that map specific URLs and paths to specific services inside the cluster and namespace.</p>
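<p>As a minimal sketch of that bridging idea on Docker Desktop (keeping the two-service layout from the question), you could temporarily expose <code>bootdemo</code> on a NodePort and have the React app fetch via the host instead of the cluster-internal name:</p>
<pre><code># Switch the bootdemo service to NodePort so the host (and therefore the browser)
# can reach it; Docker Desktop maps NodePorts onto localhost.
kubectl patch service bootdemo -p '{"spec": {"type": "NodePort"}}'

# Note the allocated port (shown as 8080:3xxxx/TCP) ...
kubectl get service bootdemo

# ... and verify from the host; the React code would then fetch
# http://localhost:<nodePort>/greeting rather than http://bootdemo:8080/greeting.
curl http://localhost:<nodePort>/greeting
</code></pre>
<p>One more thing worth checking: if the Spring Boot app listens on its default port 8080 inside the container, the Service's <code>targetPort: 80</code> and the Deployment's <code>containerPort: 80</code> may also need to be changed to 8080.</p>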
|
<p>Hello, I am trying to have a Pod with 2 containers: a C++ app and a MySQL database. I used to have MySQL deployed behind its own service, but I got latency issues, so I want to try a multi-container pod.</p>
<p>But I've been struggling to connect my app to MySQL through localhost. It says:</p>
<blockquote>
<p>Can't connect to local MySQL server through socket
'/var/run/mysqld/mysqld.sock'</p>
</blockquote>
<p>Here is my kubernetes.yaml. Please I need help :(</p>
<pre><code># Database setup
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storage-camera
  labels:
    group: camera
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: camera-pv
  labels:
    group: camera
spec:
  storageClassName: db-camera
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: storage-camera
---
# Service setup
apiVersion: v1
kind: Service
metadata:
  name: camera-service
  labels:
    group: camera
spec:
  ports:
    - port: 50052
      targetPort: 50052
  selector:
    group: camera
    tier: service
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: camera-service
  labels:
    group: camera
    tier: service
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 60
  template:
    metadata:
      labels:
        group: camera
        tier: service
    spec:
      containers:
        - image: asia.gcr.io/test/db-camera:latest
          name: db-camera
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: camera-persistent-storage
              mountPath: /var/lib/mysql
        - name: camera-service
          image: asia.gcr.io/test/camera-service:latest
          env:
            - name: DB_HOST
              value: "localhost"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "camera"
            - name: DB_ROOT_PASS
              value: "password"
          ports:
            - name: http-cam
              containerPort: 50052
      volumes:
        - name: camera-persistent-storage
          persistentVolumeClaim:
            claimName: camera-pv
      restartPolicy: Always
</code></pre>
| <p>Your MySQL client is configured to use a socket and not talk over the network stack, cf. the <a href="https://dev.mysql.com/doc/refman/5.5/en/connecting.html" rel="nofollow noreferrer">MySQL documentation</a>:</p>
<blockquote>
<p>On Unix, MySQL programs treat the host name localhost specially, in a
way that is likely different from what you expect compared to other
network-based programs. For connections to localhost, MySQL programs
attempt to connect to the local server by using a Unix socket file.
This occurs even if a --port or -P option is given to specify a port
number. To ensure that the client makes a TCP/IP connection to the
local server, use --host or -h to specify a host name value of
127.0.0.1, or the IP address or name of the local server. You can also specify the connection protocol explicitly, even for localhost, by
using the --protocol=TCP option.</p>
</blockquote>
<p>If you still want <code>camera-service</code> to talk over the file-system socket, you need to mount the directory containing the socket file into the <code>camera-service</code> container as well. Currently you only mount a volume into <code>db-camera</code>.</p>
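<p>A minimal sketch of the first option, forcing a TCP/IP connection instead of the socket, using the port and root password from the manifest in the question:</p>
<pre><code># From inside the camera-service container: name the host explicitly (or force
# the protocol) so the client uses TCP/IP and no socket file is needed.
mysql --host=127.0.0.1 --port=3306 --protocol=TCP -uroot -proot -e 'SELECT 1'
</code></pre>
<p>Accordingly, setting <code>DB_HOST</code> to <code>127.0.0.1</code> instead of <code>localhost</code> (and making sure the C++ client passes that host through rather than falling back to the socket) is usually enough, since both containers share the pod's network namespace.</p>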
|
<p>I created an Ingress resource but it is not working.
Please note: I have not deployed my own ingress controller and am relying on the GKE default (is it mandatory to deploy an ingress controller in managed GKE?).</p>
<h1>I have created two nginx deployments with NodePort services respectively;
here is kubectl get all</h1>
<pre><code>NAME                        READY   STATUS    RESTARTS   AGE
pod/app1-57df48bcd9-d48l9   1/1     Running   0          69m
pod/app2-8478484c64-khn5w   1/1     Running   0          69m
pod/test-d4df74fc9-kkzjd    1/1     Running   0          42m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/app1         NodePort    10.121.13.120   <none>        8080:32252/TCP   67m
service/app2         NodePort    10.121.15.112   <none>        80:31412/TCP     58m
service/kubernetes   ClusterIP   10.121.0.1      <none>        443/TCP          79m
service/test         NodePort    10.121.13.108   <none>        6060:32493/TCP   42m

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/app1   1/1     1            1           69m
deployment.apps/app2   1/1     1            1           69m
deployment.apps/test   1/1     1            1           42m

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/app1-57df48bcd9   1         1         1       69m
replicaset.apps/app2-8478484c64   1         1         1       69m
replicaset.apps/test-d4df74fc9    1         1         1       42m
</code></pre>
<p>=========================</p>
<h2>and deployed the Ingress resource as per the below
YAML</h2>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: connection
spec:
  rules:
  - http:
      paths:
      - path: /app1
        backend:
          serviceName: app1
          servicePort: 8080
      - path: /app2
        backend:
          serviceName: app2
          servicePort: 80
</code></pre>
<p>Ingress description</p>
<pre><code>Name:             connection
Namespace:        default
Address:          xxxxxxxxxxxxx
Default backend:  default-http-backend:80 (10.56.0.9:8080)
Rules:
  Host        Path   Backends
  ----        ----   --------
  *
              /      app1:8080 (10.56.2.4:80)
              /app2  app2:80 (10.56.1.4:80)
Annotations:  ingress.kubernetes.io/backends:
                {"k8s-be-31412--b52155807170af3c":"HEALTHY","k8s-be-32252--b52155807170af3c":"HEALTHY","k8s-be-32504--b52155807170af3c":"HEALTHY"}
              ingress.kubernetes.io/forwarding-rule: k8s-fw-default-connection--b52155807170af3c
              ingress.kubernetes.io/rewrite-target: /
              ingress.kubernetes.io/target-proxy: k8s-tp-default-connection--b52155807170af3c
              ingress.kubernetes.io/url-map: k8s-um-default-connection--b52155807170af3c
              ingress.kubernetes.io/use-regex: true
              kubernetes.io/ingress.class: gce
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     58m  loadbalancer-controller  default/connection
  Normal  CREATE  57m  loadbalancer-controller  ip: xxxxxxxxxx
</code></pre>
<p>When I try to access <code>http://ingressLBip/app1</code> or <code>http://ingressLBip/app2</code> I get a 404 Not Found error.</p>
<p>If I configure the Ingress resource with a single backend then it works for that single service.</p>
<p>Has anyone faced this issue on GKE?
Do I need to install the nginx ingress controller as well?</p>
| <p>I am assuming your applications are listening on <code>/</code> and not on the <code>/app1</code> or <code>/app2</code> paths. That would explain why the single backend is working. There <a href="https://github.com/kubernetes/ingress-gce/issues/109" rel="nofollow noreferrer">seems</a> to be a limitation in the <code>gce-ingress</code> controller in that it doesn't support the <code>rewrite-target</code> annotation. If that thread is current, then either you have to update the applications to work with the prefixed paths, or you will have to ditch <code>gce-ingress</code> and use <code>nginx-ingress</code>.</p>
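<p>If you go the <code>nginx-ingress</code> route, a sketch of what the resource could look like (assuming the NGINX ingress controller is already installed; the rewrite annotation below is specific to that controller and its exact behaviour varies between controller versions):</p>
<pre><code># Apply an Ingress handled by the NGINX controller; rewrite-target rewrites the
# /app1 and /app2 prefixes to / before the requests reach the backends.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: connection
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app1
        backend:
          serviceName: app1
          servicePort: 8080
      - path: /app2
        backend:
          serviceName: app2
          servicePort: 80
EOF
</code></pre>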
|
<p>I have deployed 10+ Java/JS microservices to production in GKE; all is good, and none use external volumes. It's a simple pipeline process: generate a new image, push it to the container registry, and when upgrading the app to a new version, just deploy a new Deployment with the new image; the pods are upgraded via a rolling update.</p>
<p>My question is how this would look with a Common Lisp application. The main benefit of the language is that the code can be changed at runtime. Should the config <code>.lisp</code> files be attached as a ConfigMap? (An update to a ConfigMap still requires recreating the pods for the new ConfigMap changes to be applied.) Or maybe as some volume? (But what about there being 10x pods of the same deployment all reading from the same volume? What if there are 50 pods or more - won't there be some problems?) And should the deploy of a new version of the application look like v1 and v2 (new pods), or do we somehow use the benefits of runtime changes (with the solutions I mentioned above), so that the pod version stays the same while the new code is added via some external solution?</p>
| <p>I would probably generate an image with the compiled code, and possibly a post-dump image, then rely on Kubernetes to restart pods in your Deployment or StatefulSet in a sensible way. If necessary (and web-based), use Readiness checks to gate what pods will be receiving requests.</p>
<p>As an aside, the projected contents of a ConfigMap should show up inside the container, unless you have specified the filename(s) of the projected keys from the ConfigMap, so it should be possible to keep the source that way, and then have either the code itself check for updates or some other mechanism to signal "time for a reload". But unless you pair that with compilation, you would probably end up with interpreted code.</p>
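<p>For the ConfigMap route, a rough sketch (the names and paths below are purely illustrative, not from the question) would be to publish the <code>.lisp</code> files as a ConfigMap and project them into every replica:</p>
<pre><code># Create or refresh a ConfigMap holding the .lisp sources; every pod that mounts
# it sees the same projected files, however many replicas the Deployment has.
kubectl create configmap lisp-sources --from-file=./src/ \
  --dry-run=client -o yaml | kubectl apply -f -

# In the Deployment spec the ConfigMap would be mounted roughly like this
# (shown as comments to keep the snippet self-contained):
#   volumes:
#   - name: lisp-sources
#     configMap:
#       name: lisp-sources
#   containers:
#   - name: app
#     volumeMounts:
#     - name: lisp-sources
#       mountPath: /app/src
</code></pre>
<p>Kubernetes eventually syncs updated ConfigMap contents into the projected volume (as long as the files are not mounted via <code>subPath</code>), so a running image could re-load the changed files without a rollout, with the caveat about interpreted versus compiled code mentioned above.</p>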
|