Universal 2nd Factor (U2F) is an open standard that strengthens and simplifies two-factor authentication (2FA) using specialized USB or NFC devices based on similar security technology found in smart cards. While initially developed by Google and Yubico, with contribution from NXP Semiconductors, the standard is now hosted by the FIDO Alliance. -- from arch wiki
The YubiKey team developed the pam_u2f module, which is open source at GitHub (Yubico/pam-u2f). With pam_u2f, we can easily configure authentication methods on Linux. Although the YubiKey does much more than U2F authentication (OTP, OpenPGP, ...), this post will focus only on U2F.
Install it on Arch Linux:
pacman -S pam_u2f
Generate a key for your user:
mkdir ~/.config/Yubico
pamu2fcfg -o pam://hostname -i pam://hostname > ~/.config/Yubico/u2f_keys
pam_u2f looks for the key file at $XDG_CONFIG_HOME/Yubico/u2f_keys. If $XDG_CONFIG_HOME is not set, $HOME/.config/Yubico/u2f_keys is used.
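If you own more than one key, you can register a backup key into the same file. pamu2fcfg's -n flag omits the username so its output can be appended (a sketch, assuming the same origin/appid as above):
# append a second (backup) key for the same user
pamu2fcfg -o pam://hostname -i pam://hostname -n >> ~/.config/Yubico/u2f_keys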
Before configuring PAM rules, be sure to back up the current settings and keep a console open with edit privileges for the PAM files. You may get locked out (since it's too secure) during the process. If this happens, plug your storage device into another computer and fix it there. Keep in mind that the order of the auth rules in a PAM config matters.
Here's how to set up passwordless sudo: open /etc/pam.d/sudo and add
auth sufficient pam_u2f.so cue origin=pam://hostname appid=pam://hostname
before any auth required or auth include lines.
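For reference, the resulting file might look like this (a sketch assuming Arch's stock /etc/pam.d/sudo; replace hostname with the name you passed to pamu2fcfg):
#%PAM-1.0
# tap the key to skip the password; fall through to system-auth otherwise
auth      sufficient  pam_u2f.so cue origin=pam://hostname appid=pam://hostname
auth      include     system-auth
account   include     system-auth
session   include     system-auth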
You can also unlock 1Password without a password by adding the same line to /etc/pam.d/polkit-1.
2FA login to the GNOME desktop:
Open /etc/pam.d/gdm-password and add
auth required pam_u2f.so cue nouserok origin=pam://hostname appid=pam://hostname
after the existing auth lines. The nouserok flag is for users that don't have a security key (or have no u2f_keys file at the expected path), so they can still log in.
OpenSSH supports FIDO/U2F hardware tokens natively since 8.2. Both the client and server must support the ecdsa-sk/ed25519-sk key types. Generate a security key backed key pair with:
- for ECDSA key
ssh-keygen -t ecdsa-sk
- for Ed25519 key
ssh-keygen -t ed25519-sk
-- from arch wiki
After generating the keys, copy the public key to ~/.ssh/authorized_keys on the destination server.
Do note that ssh does not prompt for a tap on the security key; touch the key when its light flashes.
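A typical end-to-end flow might look like this (user@server and the key path are placeholders):
# generate a hardware-backed key pair; tap the key when it blinks
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
# install the public half on the server
ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@server
# log in; the security key must be plugged in and tapped
ssh -i ~/.ssh/id_ed25519_sk user@server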
Some distributions that use SELinux may encounter trouble accessing the credential files: 2FA will be denied, or may even be bypassed if the nouserok flag is set.
Check out the details at https://access.redhat.com/security/cve/CVE-2020-24612
Unfortunately, they rolled out the new YubiKey Bio series a week after I bought this one. I'm guessing the YubiKey Bio will work with the authentication methods mentioned above, since the current module already contains a userverification flag. As a fallback, I tested with pinverification=1 and the user interface correctly prompted for the PIN code. However, it would be disappointing if fingerprint verification doesn't work, since we can achieve the same level of security with PIN verification and a "normal" YubiKey.
A drawback of the YubiKey Bio is that it does not provide NFC, which I guess is understandable because of power constraints (or maybe they'll add it in the future?). Thus we cannot use it with the various mobile devices that aren't compatible with USB-A/USB-C. For me this drawback is negligible since I only use the key with a laptop/PC for now. I'm waiting for more details from the webinar at 10 a.m. PT on Mon. Oct. 18.
Might buy one if it gets a discount in a Black Friday sale.
I don't have one.
The Solo key is an open-source implementation of a FIDO-standard security key. It should work fine with the U2F PAM module, and it's much cheaper than the Google Titan or YubiKeys.
Though I have experience with docker and docker-compose, I'm totally new to Kubernetes. My impression of k8s is a bunch of YAML files. In this post, I followed this tutorial, but with a little twist to deploy the ELK stack, so that I could learn to debug k8s problems instead of simply doing copy pastas.
First we need to create a Kubernetes cluster: go to the DigitalOcean dashboard and create a k8s cluster with 3 nodes. After the cluster is created, generate an API token for connecting to the cluster using doctl.
# download and extract doctl bin
wget https://github.com/digitalocean/doctl/releases/download/v1.66.0/doctl-1.66.0-linux-amd64.tar.gz
tar xf doctl-1.66.0-linux-amd64.tar.gz
mv doctl /usr/local/bin
# authenticate
doctl auth init # paste the api token here
doctl account get # verify the api token
# get k8s config file
doctl kubernetes cluster kubeconfig save k8s-first-proj
Next step: install kubectl to control the Kubernetes cluster manager. To install kubectl on Arch Linux, simply run pacman -S kubectl.
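To verify that kubectl can reach the new cluster through the kubeconfig saved by doctl:
kubectl cluster-info # shows the control plane endpoint
kubectl get nodes # should list the 3 nodes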
This part consists of a StatefulSet and a Service. A k8s StatefulSet gives pods stable identities and persistent volumes. Normally, data is lost after we shut a container down, so persistent volumes ensure stored content is not discarded when the container's lifetime ends. In this YAML file, be sure not to remove the increase-vm-max-map section: it raises the kernel's vm.max_map_count so Elasticsearch has enough memory-map areas for its mmapfs store. Also, the resources section defines that this service can use a minimum of 0.1 vCPU up to a maximum of 1 vCPU.
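Two excerpts from the manifest to illustrate (field values follow the tutorial's conventions, so treat the exact names and numbers as assumptions):
# init container that raises the kernel mmap limit before Elasticsearch starts
initContainers:
- name: increase-vm-max-map
  image: busybox
  command: ["sysctl", "-w", "vm.max_map_count=262144"]
  securityContext:
    privileged: true
# CPU bounds on the Elasticsearch container
resources:
  requests:
    cpu: 100m # minimum: 0.1 vCPU
  limits:
    cpu: 1000m # maximum: 1 vCPU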
A k8s Service exposes ports from the StatefulSet. Here we expose port 9200 for Kibana and Logstash to reach Elasticsearch.
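A sketch of elasticsearch_svc.yaml (a headless Service, following the tutorial's pattern; the 9300 inter-node port is my assumption from the standard ES setup):
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-logging
spec:
  clusterIP: None # headless: stable DNS names per pod
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    name: rest # used by Kibana and Logstash
  - port: 9300
    name: inter-node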
# deploy the statefulset
kubectl create -f elasticsearch_statefulset.yaml
# check the progress of deployment
kubectl rollout status sts/es-cluster
kubectl get pods -n kube-logging # this should show 3 pods running (es-cluster-{0,1,2})
# deploy the service
kubectl create -f elasticsearch_svc.yaml
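To confirm the cluster actually formed, one quick check is to port-forward to a pod and query the REST API:
# forward local 9200 to the first ES pod, then inspect the cluster state
kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging &
curl "http://localhost:9200/_cluster/state?pretty"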
In this part, we define a Deployment with the Kibana service connected to ES, and expose port 5601 for the user interface. To interact with the UI, we need to port-forward 5601 to localhost, then open the browser at http://localhost:5601
# deploy
kubectl create -f kibana.yaml
# port-forwarding to localhost
kubectl port-forward svc/kibana 5601:5601 --namespace=kube-logging
This part consists of a Deployment, a ConfigMap, and a Service. The ConfigMap here is for modifying the input and output config.
In our setup, Logstash gets messages from Filebeat and forwards them to Elasticsearch. Logs passed into Logstash go through the input-filter-output pipeline. In the input section, we collect logs from beats on port 5044. In the output section, we set the Elasticsearch endpoint to http://elasticsearch:9200
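A sketch of what the ConfigMap might contain (resource names are my assumptions, not necessarily the tutorial's exact ones):
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: kube-logging
data:
  logstash.conf: |
    input {
      beats { port => 5044 } # receive from Filebeat
    }
    output {
      elasticsearch { hosts => ["http://elasticsearch:9200"] }
    }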
# deploy
kubectl create -f logstash.yaml
# check
kubectl rollout status deployment/logstash -n kube-logging
In this part, we define a DaemonSet, which will be deployed on every node to collect logs. For discovering logs in pods, we also need to define a ServiceAccount with several read permissions and assign it to the DaemonSet.
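A sketch of the permission pieces (names and rules are assumptions based on the standard Filebeat manifests):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "nodes"] # what Filebeat watches to discover logs
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io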
# deploy
kubectl create -f filebeat.yaml
kubectl rollout status ds/filebeat -n kube-logging
This part gave me a headache. I followed every tutorial I could find and nothing really worked. I turned on every debug flag and tried to figure out what was happening. After that, I determined that everything other than Filebeat worked. Then I found that there were no logs in /var/lib/docker/containers. I came across this GitHub Issue, followed the solution, and boom! Everything was running! So I guess if we get stuck on a problem, we should RTFM before doing anything stupid.
There are some commands that I found useful when debugging this problem:
- Check logs: kubectl logs ds/filebeat -n kube-logging
- Run commands in a pod: kubectl exec (POD | TYPE/NAME) -t -- [COMMAND], like kubectl exec -n kube-logging ds/filebeat -t -- ls /var/log
- Show details: kubectl describe ds filebeat -n kube-logging