To launch a live-stream website on Kubernetes, you need a system that can handle real-time video. A typical live-streaming platform consists of a frontend UI for viewers, a backend API for managing streams and users, and a media server for ingesting and serving live video. Popular options include FFmpeg, NGINX with the RTMP module, and more advanced streaming servers such as Wowza, Red5, or AWS Media Services.
Here’s a step-by-step guide to deploying a live stream website on Kubernetes:
1. Components of a Live Streaming Architecture
Frontend: A web application (React, Angular, etc.) where users watch and interact with the stream.
Backend: A backend service (Node.js, Django, etc.) to handle user authentication, stream management, and metadata.
Media Server: A streaming server to manage video ingest and serving, like NGINX with RTMP module, Wowza, or Red5.
Database: To store user info, stream metadata, etc. (e.g., MySQL, PostgreSQL).
Object Storage: (Optional) For storing recorded streams, e.g., S3 or MinIO.
2. Choosing the Streaming Protocol
You’ll need to use a protocol for ingesting and serving live streams, such as:
RTMP (Real-Time Messaging Protocol) for video ingestion.
HLS (HTTP Live Streaming) for serving video to end-users.
3. NGINX RTMP Media Server Setup
You can set up an NGINX server with the RTMP module to handle video streams.
NGINX RTMP Dockerfile
```dockerfile
FROM alfg/nginx-rtmp
COPY nginx.conf /etc/nginx/nginx.conf
```
NGINX RTMP Configuration (nginx.conf)
This configuration will allow you to stream live using RTMP and serve the stream via HLS.
```nginx
worker_processes auto;

events {
    worker_connections 1024;
}

rtmp {
    server {
        listen 1935;

        application live {
            live on;
            record off;

            hls on;
            hls_path /mnt/hls;
            hls_fragment 5s;
            hls_playlist_length 60s;
        }
    }
}

http {
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80;

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            alias /mnt/hls;
            add_header Cache-Control no-cache;
            # Allow cross-origin playback, since the frontend is served
            # from a different origin than the media server
            add_header Access-Control-Allow-Origin *;
        }
    }
}
```
4. Containerize the Components
a. Frontend Dockerfile (React Example)
```dockerfile
FROM node:16-alpine as build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
b. Backend Dockerfile (Node.js Example)
```dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```
5. Kubernetes Deployment and Service Files
Once the components are containerized, you need Kubernetes manifests to deploy them.
a. NGINX RTMP Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rtmp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-rtmp
  template:
    metadata:
      labels:
        app: nginx-rtmp
    spec:
      containers:
        - name: nginx-rtmp
          image: your-repo/nginx-rtmp:latest
          ports:
            - containerPort: 1935  # RTMP port
            - containerPort: 80    # HTTP port for HLS
          volumeMounts:
            - name: hls-storage
              mountPath: /mnt/hls
      volumes:
        - name: hls-storage
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-rtmp-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
    - protocol: TCP
      port: 1935
      targetPort: 1935
  selector:
    app: nginx-rtmp
  type: LoadBalancer
```
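After applying this manifest, you can confirm that your cloud provider has assigned an external IP to the LoadBalancer service (the service name matches the manifest above):

```shell
# Wait until EXTERNAL-IP changes from <pending> to a real address
kubectl get service nginx-rtmp-service --watch
```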
b. Frontend Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: your-repo/frontend:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```
c. Backend Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: your-repo/backend:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
  type: LoadBalancer
```
d. Database Deployment (Optional)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mysql
          # mysql:5.7 is end-of-life; consider mysql:8.0 for new deployments.
          # Note: without a PersistentVolumeClaim, data is lost on pod restart.
          image: mysql:5.7
          env:
            # Use a Kubernetes Secret rather than a plaintext value in production
            - name: MYSQL_ROOT_PASSWORD
              value: yourpassword
            - name: MYSQL_DATABASE
              value: livestream
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  selector:
    app: db
```
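The plaintext password above is fine for a quick demo, but in practice you would store the credentials in a Kubernetes Secret and reference them from the Deployment. A minimal sketch (the Secret name `mysql-credentials` is an example, not from the original manifest):

```shell
# Create a Secret holding the MySQL root password
kubectl create secret generic mysql-credentials \
  --from-literal=MYSQL_ROOT_PASSWORD='yourpassword'

# The Deployment can then reference it instead of a literal value:
#   env:
#     - name: MYSQL_ROOT_PASSWORD
#       valueFrom:
#         secretKeyRef:
#           name: mysql-credentials
#           key: MYSQL_ROOT_PASSWORD
```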
6. Ingress Setup (Optional)
You can configure an Ingress resource to expose the frontend, backend, and streaming services externally under a single domain, each on its own path.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: livestream-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: your-livestream-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
          - path: /backend
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 3000
          - path: /stream
            pathType: Prefix
            backend:
              service:
                name: nginx-rtmp-service
                port:
                  number: 80
```
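Note that an Ingress only takes effect if an ingress controller (such as ingress-nginx) is running in the cluster. Once the resource is applied, you can verify it and check its assigned address:

```shell
# Confirm the Ingress exists and has been assigned an address
kubectl get ingress livestream-ingress
kubectl describe ingress livestream-ingress
```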
7. Deploy to Kubernetes
Once all your YAML files are prepared, you can deploy them to your Kubernetes cluster:
```shell
kubectl apply -f frontend.yaml
kubectl apply -f backend.yaml
kubectl apply -f nginx-rtmp.yaml
kubectl apply -f ingress.yaml   # Optional, if you're using Ingress
```
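After applying the manifests, verify that every Deployment rolls out cleanly before testing the stream (the deployment names match the manifests above):

```shell
# Wait for each Deployment to finish rolling out
kubectl rollout status deployment/frontend-deployment
kubectl rollout status deployment/backend-deployment
kubectl rollout status deployment/nginx-rtmp-deployment

# List pods and services to confirm everything is Running
kubectl get pods
kubectl get services
```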
8. Streaming Workflow
Ingest: Use software like OBS Studio to stream video using the RTMP endpoint provided by your NGINX RTMP service. The RTMP URL would be something like:
```
rtmp://<nginx-rtmp-service-ip>/live/streamkey
```
Serve: Users can access the stream via HLS at:
```
http://<nginx-rtmp-service-ip>/hls/streamkey.m3u8
```
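If you don't want to set up OBS just to test the pipeline, you can push a test stream with FFmpeg and fetch the generated playlist. This is a sketch: `sample.mp4` and the service IP are placeholders you would substitute with your own file and address.

```shell
# Push a local file to the RTMP ingest endpoint as the stream "streamkey"
ffmpeg -re -i sample.mp4 -c:v libx264 -c:a aac -f flv \
  rtmp://<nginx-rtmp-service-ip>/live/streamkey

# In another terminal, confirm the HLS playlist is being generated
curl http://<nginx-rtmp-service-ip>/hls/streamkey.m3u8
```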
9. Monitoring and Scaling
Use Kubernetes’ Horizontal Pod Autoscaler (HPA) to scale your services based on resource consumption:
```shell
kubectl autoscale deployment nginx-rtmp-deployment --cpu-percent=70 --min=1 --max=5
```
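Note that the HPA needs the metrics-server add-on installed in the cluster to read CPU usage. Once created, you can watch the autoscaler react to load:

```shell
# TARGETS shows current vs. target CPU; REPLICAS shows the scaled pod count
kubectl get hpa nginx-rtmp-deployment --watch
```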
For monitoring, consider integrating Prometheus and Grafana to get insights into resource usage and traffic.
Summary
This setup provides a basic framework for launching a live-streaming website on Kubernetes. The media server (NGINX with the RTMP module) handles ingest and HLS delivery, the frontend and backend handle the user-facing experience, and Kubernetes provides deployment, service discovery, and scaling. From here, you can harden the setup with persistent storage, TLS, secrets management, and autoscaling as your audience grows.