Deploying Easysearch with a Helm Chart
2023-09-18
Easysearch can now be deployed quickly with Helm. Come and take a look!
The Easysearch Chart repository is located at https://helm.infinilabs.com.
There are two prerequisites for deploying Easysearch with Helm: cert-manager must be installed in the cluster (the chart relies on it to issue TLS certificates), and a StorageClass must be available for the data volumes.
Let's first follow the Chart repository's instructions to do a quick deployment.
```bash
~ helm repo add infinilabs https://helm.infinilabs.com
~ cat << EOF | kubectl apply -n test -f -
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: easysearch-ca-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: easysearch-ca-certificate
spec:
  commonName: easysearch-ca-certificate
  duration: 87600h0m0s
  isCA: true
  issuerRef:
    kind: Issuer
    name: easysearch-ca-issuer
  privateKey:
    algorithm: ECDSA
    size: 256
  renewBefore: 2160h0m0s
  secretName: easysearch-ca-secret
EOF
~ helm install easysearch infinilabs/easysearch -n test
```
After running the two commands above, check the deployment status:
```bash
~ kubectl get pod -n test
NAME           READY   STATUS    RESTARTS   AGE
easysearch-0   1/1     Running   0          38s
~ kubectl get svc -n test
NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
easysearch-svc-headless   ClusterIP   None         <none>        9200/TCP,9300/TCP   67s
~ kubectl exec -n test easysearch-0 -it -- curl -ku'admin:admin' https://localhost:9200
Defaulted container "easysearch" out of: easysearch, init-config (init)
{
  "name" : "easysearch-0",
  "cluster_name" : "infinilabs",
  "cluster_uuid" : "JwhwwWHMQKy8l6_US7rB1A",
  "version" : {
    "distribution" : "easysearch",
    "number" : "1.5.0",
    "distributor" : "INFINI Labs",
    "build_hash" : "5b5b117bc43e6793e7bb0cd8bd83567a5ef35be0",
    "build_date" : "2023-09-07T14:55:21.232870Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.2",
    "minimum_wire_lucene_version" : "7.7.0",
    "minimum_lucene_index_compatibility_version" : "7.7.0"
  },
  "tagline" : "You Know, For Easy Search!"
}
```
The verification above shows that Easysearch has been deployed. Convenient, isn't it?
Following the Chart repository's instructions gives us a single-node cluster. What if we want a multi-node deployment? Let's dig into the source of the Easysearch Chart package: https://github.com/infinilabs/helm-charts/tree/main/charts/easysearch.
As anyone familiar with Chart packaging knows, a chart's variables are usually configured in its values.yaml file.
Let's look at the default values.yaml first (only the settings you are likely to change are excerpted here; see the source for the full file):
- Pod replica count and resource settings

```yaml
replicaCount: 1
resources:
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 1000m
    memory: 2Gi
```

- Storage class and volume size

```yaml
storageClassName: local-path
dataVolumeStorage: 100Gi
```

- Cluster name, master host list, and node roles

```yaml
clusterName: infinilabs
masterHosts: '"easysearch-0"'
discoverySeedHosts: '"easysearch-0.easysearch-svc-headless"'
nodeRoles: '"master","data","ingest","remote_cluster_client"'
```
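When scaling `replicaCount` up, the `masterHosts` and `discoverySeedHosts` strings must enumerate every master pod. A minimal POSIX sh sketch can build them; the `STS` and `SVC` values are illustrative, following the `<release>-easysearch` / `<release>-easysearch-svc-headless` naming pattern the chart uses:

```bash
# Build the quoted host lists that masterHosts/discoverySeedHosts expect.
# STS and REPLICAS are assumptions for illustration; adjust to your release.
STS="es-test-master-easysearch"
SVC="${STS}-svc-headless"
REPLICAS=3

master_hosts=""
seed_hosts=""
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  master_hosts="${master_hosts}\"${STS}-${i}\","
  seed_hosts="${seed_hosts}\"${STS}-${i}.${SVC}\","
  i=$((i + 1))
done
master_hosts="${master_hosts%,}"   # trim trailing comma
seed_hosts="${seed_hosts%,}"

echo "masterHosts: '${master_hosts}'"
echo "discoverySeedHosts: '${seed_hosts}'"
```

The two printed lines can be pasted directly into a per-role values file.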
From reading the source, a multi-node deployment only requires adjusting a few settings: the pod replica count, the cluster name, the master host list, and the node roles. Let's put this into practice:
1. Cluster plan
Cluster name: es-test
Topology: 3 master nodes + 3 data nodes + 2 coordinating nodes
2. Helm release names
Master nodes: es-test-master
Data nodes: es-test-data
Coordinating nodes: es-test-coordinate
3. Create a values.yaml file for each node role
- es-test-master.yaml

```yaml
replicaCount: 3
clusterName: es-test
masterHosts: '"es-test-master-easysearch-0","es-test-master-easysearch-1","es-test-master-easysearch-2"'
discoverySeedHosts: '"es-test-master-easysearch-0.es-test-master-easysearch-svc-headless","es-test-master-easysearch-1.es-test-master-easysearch-svc-headless","es-test-master-easysearch-2.es-test-master-easysearch-svc-headless"'
nodeRoles: '"master","ingest","remote_cluster_client"'
```

- es-test-data.yaml

```yaml
replicaCount: 3
clusterName: es-test
masterHosts: '"es-test-master-easysearch-0","es-test-master-easysearch-1","es-test-master-easysearch-2"'
discoverySeedHosts: '"es-test-master-easysearch-0.es-test-master-easysearch-svc-headless","es-test-master-easysearch-1.es-test-master-easysearch-svc-headless","es-test-master-easysearch-2.es-test-master-easysearch-svc-headless"'
nodeRoles: '"data","ingest","remote_cluster_client"'
```

- es-test-coordinate.yaml

```yaml
replicaCount: 2
clusterName: es-test
masterHosts: '"es-test-master-easysearch-0","es-test-master-easysearch-1","es-test-master-easysearch-2"'
discoverySeedHosts: '"es-test-master-easysearch-0.es-test-master-easysearch-svc-headless","es-test-master-easysearch-1.es-test-master-easysearch-svc-headless","es-test-master-easysearch-2.es-test-master-easysearch-svc-headless"'
nodeRoles: ""
```
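Since the three values files differ only in `replicaCount` and `nodeRoles`, they can also be generated with a short shell sketch (the file contents mirror the listings above):

```bash
# Generate one values file per node role. masterHosts/discoverySeedHosts are
# identical across roles, because every node discovers the same master set.
MASTER_HOSTS='"es-test-master-easysearch-0","es-test-master-easysearch-1","es-test-master-easysearch-2"'
SEED_HOSTS='"es-test-master-easysearch-0.es-test-master-easysearch-svc-headless","es-test-master-easysearch-1.es-test-master-easysearch-svc-headless","es-test-master-easysearch-2.es-test-master-easysearch-svc-headless"'

write_values() {
  # $1 = file name, $2 = replica count, $3 = nodeRoles value
  cat > "$1" << EOF
replicaCount: $2
clusterName: es-test
masterHosts: '${MASTER_HOSTS}'
discoverySeedHosts: '${SEED_HOSTS}'
nodeRoles: $3
EOF
}

write_values es-test-master.yaml     3 "'\"master\",\"ingest\",\"remote_cluster_client\"'"
write_values es-test-data.yaml       3 "'\"data\",\"ingest\",\"remote_cluster_client\"'"
write_values es-test-coordinate.yaml 2 '""'
```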
4. Deploy each node role with its values file

```bash
~ helm install es-test-master infinilabs/easysearch -n test -f es-test-master.yaml
~ helm install es-test-data infinilabs/easysearch -n test -f es-test-data.yaml
~ helm install es-test-coordinate infinilabs/easysearch -n test -f es-test-coordinate.yaml
```
5. Verify

```bash
~ kubectl get pod -n test|grep es-test
es-test-master-easysearch-0       1/1   Running   0   5m57s
es-test-data-easysearch-0         1/1   Running   0   5m29s
es-test-coordinate-easysearch-0   1/1   Running   0   5m10s
es-test-master-easysearch-1       1/1   Running   0   4m57s
es-test-data-easysearch-1         1/1   Running   0   4m29s
es-test-coordinate-easysearch-1   1/1   Running   0   4m10s
es-test-master-easysearch-2       1/1   Running   0   3m56s
es-test-data-easysearch-2         1/1   Running   0   3m29s
~ kubectl exec -n test es-test-master-easysearch-0 -it -- curl -ku'admin:admin' https://localhost:9200/_cat/nodes?v
Defaulted container "easysearch" out of: easysearch, init-config (init)
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.42.0.130            12          63  12    1.53    2.67     2.11 -         -      es-test-coordinate-easysearch-0
10.42.0.136            53          65  52    1.53    2.67     2.11 dir       -      es-test-data-easysearch-1
10.42.0.139             6          63  14    1.53    2.67     2.11 -         -      es-test-coordinate-easysearch-1
10.42.0.133            10          63  14    1.53    2.67     2.11 imr       -      es-test-master-easysearch-1
10.42.0.149            58          65  59    1.53    2.67     2.11 dir       -      es-test-data-easysearch-2
10.42.0.124            53          68  35    1.53    2.67     2.11 imr       *      es-test-master-easysearch-0
10.42.0.127            56          65  46    1.53    2.67     2.11 dir       -      es-test-data-easysearch-0
10.42.0.146            15          63  18    1.53    2.67     2.11 imr       -      es-test-master-easysearch-2
```
With that, the multi-node cluster deployment is complete.
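As a final sanity check, the node.role column (column 8) of the _cat/nodes output can be tallied to confirm the 3 + 3 + 2 topology. A small shell sketch over the output captured above (in a live cluster you would pipe the curl output into awk instead):

```bash
# Save the data rows of the _cat/nodes output above (header omitted).
cat > nodes.txt << 'EOF'
10.42.0.130 12 63 12 1.53 2.67 2.11 - - es-test-coordinate-easysearch-0
10.42.0.136 53 65 52 1.53 2.67 2.11 dir - es-test-data-easysearch-1
10.42.0.139 6 63 14 1.53 2.67 2.11 - - es-test-coordinate-easysearch-1
10.42.0.133 10 63 14 1.53 2.67 2.11 imr - es-test-master-easysearch-1
10.42.0.149 58 65 59 1.53 2.67 2.11 dir - es-test-data-easysearch-2
10.42.0.124 53 68 35 1.53 2.67 2.11 imr * es-test-master-easysearch-0
10.42.0.127 56 65 46 1.53 2.67 2.11 dir - es-test-data-easysearch-0
10.42.0.146 15 63 18 1.53 2.67 2.11 imr - es-test-master-easysearch-2
EOF
# Column 8 is node.role: "imr" = master-eligible, "dir" = data, "-" = coordinating-only.
awk '{ counts[$8]++ } END { for (r in counts) print r, counts[r] }' nodes.txt
```

For the sample above this reports 3 `imr`, 3 `dir`, and 2 `-` nodes, matching the cluster plan.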