
Installing k3s on Ubuntu


I needed to start studying Kubernetes, but installing full k8s looked difficult without a cloud environment or similar, so I tried k3s instead.

Environment

I installed Ubuntu 18.04 on a local Vagrant VM and put k3s on top of it.


Installation

It's a single command.

vagrant@vagrant:~$ curl -sfL https://get.k3s.io | sh -
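Incidentally, the installer takes options via environment variables; if you want to pin a release instead of taking whatever is latest, a minimal sketch (INSTALL_K3S_VERSION is an option of the get.k3s.io script; the version string here just mirrors the one picked up in the log below):

# INSTALL_K3S_VERSION pins the release; optional, shown as a sketch
vagrant@vagrant:~$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.17.0+k3s.1" sh -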

Here's the log from running that one-liner.

vagrant@vagrant:~$ curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v1.17.0+k3s.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.17.0+k3s.1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.17.0+k3s.1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
vagrant@vagrant:~$

Did it really install with just that?! I checked the service status to find out...

vagrant@vagrant:~$ systemctl status k3s
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-01-12 15:20:54 UTC; 10s ago
     Docs: https://k3s.io
  Process: 2704 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 2694 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
 Main PID: 2705 (k3s-server)
    Tasks: 9
   CGroup: /system.slice/k3s.service
           ├─2705 /usr/local/bin/k3s server
           └─2731 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/contai

Jan 12 15:20:59 vagrant k3s[2705]: I0112 15:20:59.297095    2705 cronjob_controller.go:97] Starting CronJob Manager
Jan 12 15:20:59 vagrant k3s[2705]: I0112 15:20:59.345490    2705 node_ipam_controller.go:94] Sending events to api server.
Jan 12 15:21:00 vagrant k3s[2705]: time="2020-01-12T15:21:00.706531677Z" level=info msg="waiting for node vagrant CIDR not assigned yet"
Jan 12 15:21:00 vagrant k3s[2705]: E0112 15:21:00.959616    2705 resource_quota_controller.go:407] unable to retrieve the complete list of server AP
Jan 12 15:21:01 vagrant k3s[2705]: time="2020-01-12T15:21:01.391307620Z" level=info msg="Tunnel endpoint watch event: [10.0.2.15:6443]"
Jan 12 15:21:01 vagrant k3s[2705]: time="2020-01-12T15:21:01.391551538Z" level=info msg="Connecting to proxy" url="wss://10.0.2.15:6443/v1-k3s/conne
Jan 12 15:21:01 vagrant k3s[2705]: time="2020-01-12T15:21:01.393301593Z" level=info msg="Handling backend connection request [vagrant]"
Jan 12 15:21:01 vagrant k3s[2705]: W0112 15:21:01.819166    2705 garbagecollector.go:639] failed to discover some groups: map[metrics.k8s.io/v1beta1
Jan 12 15:21:02 vagrant k3s[2705]: time="2020-01-12T15:21:02.709358038Z" level=info msg="waiting for node vagrant CIDR not assigned yet"
Jan 12 15:21:04 vagrant k3s[2705]: time="2020-01-12T15:21:04.711747572Z" level=info msg="waiting for node vagrant CIDR not assigned yet"
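Since k3s is registered as an ordinary systemd unit, following the rest of these startup messages is just standard journalctl:

# follow the k3s service log (plain systemd, nothing k3s-specific)
vagrant@vagrant:~$ sudo journalctl -u k3s -f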

It really is installed. Amazing.

kubectl was installed along with it.

vagrant@vagrant:~$ which kubectl
/usr/local/bin/kubectl
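As a quick smoke test you can ask the cluster for its nodes; sudo is needed because the kubeconfig under /etc/rancher/k3s/ is root-only (see the bonus section below):

# should list the single node "vagrant" once startup has finished
vagrant@vagrant:~$ sudo kubectl get nodes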


Summary

Next time I'll use this setup to actually run some workloads.
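(And if it ever needs to go away, the install log above shows an uninstall script was created, so cleanup is one command too:)

vagrant@vagrant:~$ sudo /usr/local/bin/k3s-uninstall.sh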


Bonus

The k3s authentication info (the kubeconfig).

vagrant@vagrant:~$ cat /etc/rancher/k3s/k3s.yaml
cat: /etc/rancher/k3s/k3s.yaml: Permission denied
vagrant@vagrant:~$ sudo cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFUzT0RnME1qUTBOVEFlRncweU1EQXhNVEl4TlRJd05EVmFGdzB6TURBeE1Ea3hOVEl3TkRWYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFUzT0RnME1qUTBOVEJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkNVSlJuQ1pqbEtEVDY0NTVGY1FPYlFacVZ3eitSaGYrYnhBK0ZmdytXQVEKbityVW4yVzhoaFpLZldWN2E5TDJYTjduWUFwcS91aDc2T29GbTAzRTJ0aWpJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFEMnBISnZKalN4CmFOb2Y5Ni83YitWellYZlNjc0ZIMld4eTJTWVZPZDVHYkFJZ2VBTUk0SWFRb0c1K3JwSFVsSGkvOVZ1N3lSd0QKNlRyUnNQaEFoeW9GQkdVPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: a06bf9624f6f38e555c22e3bf97a036f
    username: admin
vagrant@vagrant:~$
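Because the file is root-only, a non-root kubectl needs its own copy. A minimal sketch for using it as the vagrant user (the copy location and the KUBECONFIG approach are my own choice, not anything k3s mandates):

# copy the kubeconfig somewhere readable and point kubectl at it
vagrant@vagrant:~$ sudo cp /etc/rancher/k3s/k3s.yaml ~/k3s.yaml
vagrant@vagrant:~$ sudo chown vagrant:vagrant ~/k3s.yaml
vagrant@vagrant:~$ export KUBECONFIG=~/k3s.yaml
vagrant@vagrant:~$ kubectl get nodes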