Deploy Gitea On Kubernetes
Gitea is a completely self-hosted Git service, and it’s what I use for all my source-control needs. I’ve also paired it with DroneCI for automated software builds and tests.
There is a Helm chart available for installing Gitea on Kubernetes, but I’m not a huge fan of Helm charts, so we’ll deploy Gitea via YAML manifest files instead. This is a bit more involved than deploying a simple app like Evobot, but still very rewarding!
Namespace
Let’s start off by creating the namespace. As before, we’ll create a gitea folder in the ~/k3s/ directory to keep all of our manifest files tidy and clean. Once the folder has been created, create the namespace.gitea.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: gitea
  labels:
    name: gitea
Once the file has been created, let’s apply the file with $ kubectl apply -f namespace.gitea.yaml
We can confirm the namespace was created successfully by running the following command:
$ kubectl get ns
NAME                   STATUS   AGE
default                Active   44h
kube-system            Active   44h
kube-public            Active   44h
kube-node-lease        Active   44h
metallb-system         Active   42h
cert-manager           Active   15h
ingress-nginx          Active   14h
kubernetes-dashboard   Active   14h
pihole                 Active   5h39m
evobot                 Active   133m
gitea                  Active   5s
If successful, you should now see the gitea namespace in the cluster.
Secrets
Next, let’s create our MySQL password and save it in a Kubernetes Secret.
First, generate a random string to use as the database password with $ openssl rand -base64 32. Next, create a yaml file titled secret.gitea.yaml.
Once you’ve generated the random string, place it in the following secret file next to “password”. Note that values under data: must be base64-encoded; since openssl rand -base64 already outputs base64, the string can be pasted in as-is, though Kubernetes will decode it, so the actual password handed to MySQL is the decoded bytes.
apiVersion: v1
kind: Secret
metadata:
  name: mysqlpass
  namespace: gitea
type: Opaque
data:
  password: '3uR3rPUJ65bKuZR67xxXq9pD8QKsP6YiAeuhMGMYLKQ='
We’ll then want to apply the file with $ kubectl apply -f secret.gitea.yaml and confirm the secret was created successfully by running the following command:
$ kubectl get secrets -n gitea
NAME                  TYPE                                  DATA   AGE
default-token-lfclp   kubernetes.io/service-account-token   3      2m46s
mysqlpass             Opaque                                1      5s
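Since the data: values in a Secret are base64-encoded, it’s worth seeing the round trip once. A quick sketch in the shell (the example password here is made up):

```shell
# Base64-encode a plaintext password for a Secret's data field
echo -n 'S3cureP@ss' | base64
# prints: UzNjdXJlUEBzcw==

# Decode it back, the same way Kubernetes does before handing it to the pod
echo -n 'UzNjdXJlUEBzcw==' | base64 -d
# prints: S3cureP@ss
```

As an alternative to writing the yaml by hand, $ kubectl create secret generic mysqlpass -n gitea --from-literal=password='YourPasswordHere' will do the encoding for you.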
Storage
With the secret created, we can now create the PVCs for both Gitea and MySQL. Remember: if you’re building this on a bare-metal setup with NFS and you don’t have an NFS provisioner, you’ll need to manually create the “gitea” and “gitea_mysql” folders on the share, along with manually creating a PV for each claim.
We’ll keep each PV and PVC together in one yaml file for simplicity’s sake. I’ve named my yaml file storage.gitea.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: "gitea"
  name: "gitea"
spec:
  storageClassName: "nfs-client"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "5Gi" # Keep in mind, if you decide to alter this, you'll need to modify the storage in the PV as well
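If you’re on bare metal without an NFS provisioner, the PV mentioned above has to be created by hand. A minimal sketch, assuming your NFS server sits at 192.168.122.2 and exports /mnt/storage (both are placeholders, so adjust for your setup):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea                   # PV name the claim will bind to
spec:
  storageClassName: nfs-client
  capacity:
    storage: 5Gi                # keep in sync with the PVC's request
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.122.2       # placeholder: your NFS server's IP
    path: /mnt/storage/gitea    # placeholder: the folder you created on the share
```

The same pattern, with the name and path swapped, covers the MySQL volume below.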
Now we can create the PV and PVC for MySQL. I’ve titled this file storage.mysql.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-mysql-pvc
  namespace: gitea
spec:
  storageClassName: "nfs-client"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
With the two yaml files created, let’s apply both by running the following two commands:
$ kubectl apply -f storage.gitea.yaml
$ kubectl apply -f storage.mysql.yaml
Now let’s confirm the PVCs were created successfully:
$ kubectl get pvc -n gitea
NAME              STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
gitea             Bound    gitea            5Gi        RWO            nfs-client     63s
gitea-mysql-pvc   Bound    gitea-mysql-pv   5Gi        RWO            nfs-client     57s
Both the PVs and PVCs should show a status of “Bound”.
Services
Let’s create the services: one for the Gitea web frontend, one opening port 22 for SSH git pulls/pushes, and a headless service for MySQL. In the ~/k3s/gitea folder, create a yaml file called svc.gitea.yaml and input the following:
apiVersion: v1
kind: Service
metadata:
  name: gitea-web
  namespace: gitea
  labels:
    app: gitea
spec:
  selector:
    app: gitea
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-ssh
  namespace: gitea
spec:
  ports:
    - port: 22
      targetPort: 22
      name: gitea-ssh
  selector:
    app: gitea
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-mysql
  namespace: gitea
spec:
  ports:
    - port: 3306
  selector:
    app: gitea-mysql
  clusterIP: None
Now let’s apply the yaml file with $ kubectl apply -f svc.gitea.yaml and confirm the services are up:
$ kubectl get svc -n gitea
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
gitea-mysql   ClusterIP      None           <none>            3306/TCP       8s
gitea-web     LoadBalancer   10.43.103.94   192.168.122.203   80:31449/TCP   8s
gitea-ssh     LoadBalancer   10.43.130.30   192.168.122.204   22:31392/TCP   8s
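The External-IPs above were handed out by MetalLB. If you’d rather pin gitea-web to a specific address instead of taking whatever the pool assigns next, one option is the Service’s loadBalancerIP field. A sketch, where the address is an assumption and must sit inside your MetalLB pool:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gitea-web
  namespace: gitea
  labels:
    app: gitea
spec:
  selector:
    app: gitea
  type: LoadBalancer
  loadBalancerIP: 192.168.122.203   # assumed address from your MetalLB range
  ports:
    - port: 80
      targetPort: 3000
```

This keeps the web UI reachable at the same IP across service re-creations, which is handy once you start pointing DNS at it.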
Deployment
We’re now ready to deploy our app. Let’s start with the MySQL StatefulSet. Create a sfs.mysql.yaml file in the ~/k3s/gitea folder and input the following:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitea-mysql
  namespace: gitea
spec:
  selector:
    matchLabels:
      app: gitea-mysql
  serviceName: gitea-mysql
  template:
    metadata:
      labels:
        app: gitea-mysql
    spec:
      containers:
        - image: mysql/mysql-server:latest
          imagePullPolicy: IfNotPresent
          name: gitea-mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqlpass
                  key: password
          ports:
            - containerPort: 3306
              name: gitea-mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: gitea-mysql-pvc
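One optional addition worth knowing about: the mysql/mysql-server image can pre-create the database and user Gitea will connect as, via extra environment variables on the container above. The gitea names below are my own assumptions, not something Gitea requires:

```yaml
          # Extra entries for the gitea-mysql container's env: list
          # (same indentation as MYSQL_ROOT_PASSWORD above)
            - name: MYSQL_DATABASE    # created on first start
              value: gitea            # assumed name; pick your own
            - name: MYSQL_USER        # granted access to MYSQL_DATABASE
              value: gitea            # assumed name; pick your own
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqlpass
                  key: password
```

Without something like this, you’ll need to exec into the pod and create the database yourself before running through Gitea’s setup page.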
Next we’ll want to create a Deployment file for Gitea. I’ve titled mine deploy.gitea.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: gitea
  name: gitea
  namespace: gitea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: gitea
    spec:
      volumes:
        - name: gitea-volume
          persistentVolumeClaim:
            claimName: gitea
      containers:
        - name: gitea
          image: gitea/gitea:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/data"
              name: gitea-volume
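Since the web UI listens on port 3000 inside the container, a readiness probe lets the LoadBalancer only route traffic once Gitea is actually up. A sketch of what you could add under the gitea container above (the timings are assumptions, tune them for your hardware):

```yaml
        # Optional additions under the gitea container spec
          ports:
            - containerPort: 3000   # web UI, targeted by gitea-web
            - containerPort: 22     # built-in SSH, targeted by gitea-ssh
          readinessProbe:
            tcpSocket:
              port: 3000
            initialDelaySeconds: 10  # assumed startup allowance
            periodSeconds: 10
```

The Services work without the explicit containerPort entries, but listing them documents what the pod exposes.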
We’re now ready to deploy MySQL and Gitea. Let’s deploy them with $ kubectl apply -f sfs.mysql.yaml and then $ kubectl apply -f deploy.gitea.yaml
Let’s confirm everything is working as expected:
$ kubectl get all -n gitea
NAME                        READY   STATUS    RESTARTS   AGE
pod/gitea-8479fd768-777gk   1/1     Running   0          11m
pod/gitea-mysql-0           1/1     Running   0          60s

NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
service/gitea-mysql   ClusterIP      None           <none>            3306/TCP       19m
service/gitea-web     LoadBalancer   10.43.103.94   192.168.122.203   80:31449/TCP   19m
service/gitea-ssh     LoadBalancer   10.43.130.30   192.168.122.204   22:31392/TCP   19m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/gitea   1/1     1            1           11m

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/gitea-8479fd768   1         1         1       11m

NAME                           READY   AGE
statefulset.apps/gitea-mysql   1/1     60s
Your pods and StatefulSet pods should be running without any issues. If you’re receiving errors, try a $ kubectl describe pod PodNameHere -n gitea
or $ kubectl logs PodNameHere -n gitea
If everything is working as expected, browse to the External-IP for gitea-web and you should land on the setup page for Gitea! When configuring the database there, use gitea-mysql:3306 as the host; the headless service resolves it inside the cluster.