• Deploying Elasticsearch and Kibana with elastic-operator
    • Installing elastic-operator
    • Deploying Elasticsearch
    • Deploying Kibana
      • Creating a Service

    Deploying Elasticsearch and Kibana with elastic-operator

    Official references:

    • https://www.elastic.co/cn/elasticsearch-kubernetes
    • https://www.elastic.co/cn/blog/introducing-elastic-cloud-on-kubernetes-the-elasticsearch-operator-and-beyond

    Installing elastic-operator

    One-command installation:

    kubectl apply -f https://download.elastic.co/downloads/eck/0.9.0/all-in-one.yaml
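    Before moving on, you can confirm the operator is up. This is a quick check that assumes the default layout of the all-in-one manifest, which installs the operator into the elastic-system namespace (the pod is typically named elastic-operator-0):

    kubectl -n elastic-system get pods
    # The operator pod should be Running; its logs help if the resources
    # created below are not reconciled (pod name is an assumption):
    kubectl -n elastic-system logs -f elastic-operator-0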

    Deploying Elasticsearch

    Prepare a namespace for the elasticsearch deployment; here we use the monitoring namespace:

    kubectl create ns monitoring

    Create the custom resource to deploy Elasticsearch. The simplest possible deployment:

    cat <<EOF | kubectl apply -f -
    apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
    kind: Elasticsearch
    metadata:
      name: es
      namespace: monitoring
    spec:
      version: 7.2.0
      nodes:
      - nodeCount: 1
        config:
          node.master: true
          node.data: true
          node.ingest: true
    EOF

    A multi-node, highly available elasticsearch cluster:

    cat <<EOF | kubectl apply -f -
    apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
    kind: Elasticsearch
    metadata:
      name: es
      namespace: monitoring
    spec:
      version: 7.2.0
      nodes:
      - nodeCount: 1
        config:
          node.master: true
          node.data: true
          node.ingest: true
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
        podTemplate:
          spec:
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                    - key: elasticsearch.k8s.elastic.co/cluster-name
                      operator: In
                      values:
                      - es
                  topologyKey: "kubernetes.io/hostname"
      - nodeCount: 2
        config:
          node.master: false
          node.data: true
          node.ingest: true
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 80Gi
        podTemplate:
          spec:
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                    - key: elasticsearch.k8s.elastic.co/cluster-name
                      operator: In
                      values:
                      - es
                  topologyKey: kubernetes.io/hostname
    EOF
    • metadata.name is the elasticsearch cluster name
    • A nodeCount greater than 1 (multiple replicas) plus pod anti-affinity (so replicas are not scheduled onto the same node) avoids single points of failure and keeps the cluster highly available
    • node.master: true marks a node group as master-eligible
    • Adjust nodeCount (replica count) and storage (data disk size) to your needs
    • The anti-affinity labelSelector.matchExpressions.values must contain the elasticsearch cluster name; if you rename the cluster, remember to update it here as well
    • SSL is forced on and cannot be disabled: https://github.com/elastic/cloud-on-k8s/blob/576f07faaff4393f9fb247e58b87517f99b08ebd///pkg/controller/elasticsearch/settings/fields.go#L51

    Check the deployment status:

    $ kubectl -n monitoring get es
    NAME   HEALTH   NODES   VERSION   PHASE         AGE
    es     green    3       7.2.0     Operational   3m
    $
    $ kubectl -n monitoring get pod -o wide
    NAME               READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE
    es-es-c7pwnt5kz8   1/1     Running   0          4m3s   172.16.4.6   10.0.0.24   <none>
    es-es-qpk7kkpdxh   1/1     Running   0          4m3s   172.16.5.6   10.0.0.48   <none>
    es-es-vl56nv78hd   1/1     Running   0          4m3s   172.16.3.9   10.0.0.32   <none>
    $
    $ kubectl -n monitoring get svc
    NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    es-es-http   ClusterIP   172.16.15.74   <none>        9200/TCP   7m3s

    The default elasticsearch username is elastic. Retrieve the password:

    $ kubectl -n monitoring get secret es-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d
    rhd6jdw9brbj69d49k46px9j

    Use this username/password pair for all later connections to elasticsearch:

    • username: elastic
    • password: rhd6jdw9brbj69d49k46px9j
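    Since SSL is forced on, clients have to connect over HTTPS. As a quick sanity check (a minimal sketch, assuming curl is available locally; -k skips verification of the operator's self-signed certificate), port-forward the es-es-http service created above and query the cluster with the elastic credentials:

    # Forward the Elasticsearch HTTP service to localhost (run in a separate terminal)
    kubectl -n monitoring port-forward svc/es-es-http 9200:9200

    # Read the elastic user's password from the secret shown above
    PASSWORD=$(kubectl -n monitoring get secret es-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d)

    # Query the cluster with basic auth; a JSON banner with the cluster name is expected
    curl -k -u "elastic:${PASSWORD}" https://localhost:9200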

    Deploying Kibana

    You can also deploy a Kibana cluster as the UI:

    cat <<EOF | kubectl apply -f -
    apiVersion: kibana.k8s.elastic.co/v1alpha1
    kind: Kibana
    metadata:
      name: kibana
      namespace: monitoring
    spec:
      version: 7.2.0
      nodeCount: 2
      podTemplate:
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: kibana.k8s.elastic.co/name
                    operator: In
                    values:
                    - kibana
                topologyKey: kubernetes.io/hostname
      elasticsearchRef:
        name: es
        namespace: monitoring
    EOF
    • A nodeCount greater than 1 (multiple replicas) plus pod anti-affinity (so replicas are not scheduled onto the same node) avoids single points of failure and keeps Kibana highly available
    • The anti-affinity labelSelector.matchExpressions.values must contain the kibana name; if you rename it, remember to update it here as well
    • elasticsearchRef references the elasticsearch cluster deployed earlier; name and namespace are the name and namespace of that elasticsearch cluster

    Check the deployment status:

    $ kubectl -n monitoring get kb
    NAME     HEALTH   NODES   VERSION   AGE
    kibana   green    2       7.2.0     3m
    $
    $ kubectl -n monitoring get pod -o wide
    NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE
    kibana-kb-58dc8994bf-224bl   1/1     Running   0          93s   172.16.0.92   10.0.0.3    <none>
    kibana-kb-58dc8994bf-nchqt   1/1     Running   0          93s   172.16.3.10   10.0.0.32   <none>
    $
    $ kubectl -n monitoring get svc
    NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
    kibana-kb-http   ClusterIP   172.16.8.71   <none>        5601/TCP   4m35s
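    Before exposing Kibana externally, you can sanity-check it locally with a port-forward. This is a quick sketch using the kibana-kb-http service listed above; Kibana is also served over HTTPS with the operator's self-signed certificate:

    # Forward the Kibana service to localhost
    kubectl -n monitoring port-forward svc/kibana-kb-http 5601:5601
    # Then open https://localhost:5601 in a browser, accept the self-signed
    # certificate, and log in as the elastic user with the password above.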

    We still need to expose an external address for Kibana so it can be reached from a browser; this can be done with a Service or an Ingress.

    The operator already creates a ClusterIP Service for Kibana by default, and you could customize the service type to LoadBalancer by adding a service field to the Kibana CRD spec. I don't recommend that, though: once the CRD object is deleted, the service is deleted along with it, which on cloud platforms usually means the corresponding load balancer is automatically deleted and its IP address reclaimed, so the IP changes the next time you create it. It is better to maintain a separate Service or Ingress for external exposure.

    Creating a Service

    First, look at the current kibana service:

    $ kubectl -n monitoring get svc -o yaml kibana-kb-http
    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: 2019-09-17T09:20:04Z
      labels:
        common.k8s.elastic.co/type: kibana
        kibana.k8s.elastic.co/name: kibana
      name: kibana-kb-http
      namespace: monitoring
      ownerReferences:
      - apiVersion: kibana.k8s.elastic.co/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: Kibana
        name: kibana
        uid: 54fd304b-d92c-11e9-89f7-be8690a7fdcf
      resourceVersion: "5668802758"
      selfLink: /api/v1/namespaces/monitoring/services/kibana-kb-http
      uid: 55a1198f-d92c-11e9-89f7-be8690a7fdcf
    spec:
      clusterIP: 172.16.8.71
      ports:
      - port: 5601
        protocol: TCP
        targetPort: 5601
      selector:
        common.k8s.elastic.co/type: kibana
        kibana.k8s.elastic.co/name: kibana
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}

    Keep only the port and selector configuration. If the cluster supports LoadBalancer-type services, change the service type to LoadBalancer (port 443 is exposed and forwarded to Kibana's port 5601, since Kibana is served over HTTPS):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: kibana
      namespace: monitoring
    spec:
      ports:
      - port: 443
        protocol: TCP
        targetPort: 5601
      selector:
        common.k8s.elastic.co/type: kibana
        kibana.k8s.elastic.co/name: kibana
      type: LoadBalancer
    EOF

    Get the load balancer's IP address:

    $ kubectl -n monitoring get svc
    NAME             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)         AGE
    kibana           LoadBalancer   172.16.10.71   150.109.27.60   443:32749/TCP   47s
    kibana-kb-http   ClusterIP      172.16.15.39   <none>          5601/TCP        118s

    Open it in a browser: https://150.109.27.60:443

    Log in with the elasticsearch username and password obtained earlier.
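    As mentioned above, an Ingress is the other option for exposing Kibana. Below is a rough sketch, assuming an nginx ingress controller is installed and using kibana.example.com as a placeholder hostname; the backend-protocol annotation is needed because Kibana itself serves HTTPS, and on newer clusters the Ingress apiVersion and backend fields differ (networking.k8s.io/v1):

    cat <<EOF | kubectl apply -f -
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: kibana
      namespace: monitoring
      annotations:
        # Kibana serves TLS itself, so proxy to the backend over HTTPS
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    spec:
      rules:
      - host: kibana.example.com   # placeholder hostname
        http:
          paths:
          - path: /
            backend:
              serviceName: kibana-kb-http
              servicePort: 5601
    EOF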