INFINI Gateway can track and record every request that passes through it. These logs can be used to analyze the requests sent to Elasticsearch, both to study request performance and to understand how the business is running.
Set Up the Gateway Router #
To enable query log analysis in INFINI Gateway, configure the tracing_flow parameter on the router, specifying a flow that records request logs.
router:
  - name: default
    tracing_flow: request_logging
    default_flow: cache_first
The configuration above defines a router named default: the default request flow is cache_first, and the flow used for request logging is request_logging.
Define the Logging Flow #
The logging flow, request_logging, is defined as follows:
flow:
  - name: request_logging
    filter:
      - request_path_filter:
          must_not: # any match will be filtered
            prefix:
              - /favicon.ico
      - request_header_filter:
          exclude:
            - app: kibana # to filter out Kibana's own access log, add `elasticsearch.customHeaders: { "app": "kibana" }` to Kibana's config file `/config/kibana.yml`
      - logging:
          queue_name: request_logging
The flow above uses several filters:
- request_path_filter drops the useless /favicon.ico requests
- request_header_filter excludes requests coming from Kibana
- logging writes the request log to the local disk queue request_logging, which the pipeline defined below consumes to build the index
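To spot-check these filters, you can send requests through the gateway with and without the excluded header and see which ones show up in the request log. A minimal sketch with curl, assuming the gateway entry listens on plain HTTP at 192.168.3.98:8000 (the entry address shown in the startup log later in this page):

# Matches no exclusion rule, so it is traced and logged:
curl -u elastic:pass "http://192.168.3.98:8000/_search?size=0"

# Carries the header excluded by request_header_filter,
# so it should not appear in the request log:
curl -u elastic:pass -H "app: kibana" "http://192.168.3.98:8000/_search?size=0"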
Define the Logging Pipeline #
INFINI Gateway uses a pipeline task to consume these logs asynchronously and index them. The pipeline is defined as follows:
pipeline:
  - name: request_logging_index
    auto_start: true
    keep_running: true
    processor:
      - json_indexing:
          index_name: "gateway_requests"
          elasticsearch: "dev"
          input_queue: "request_logging"
          idle_timeout_in_seconds: 1
          worker_size: 1
          bulk_size_in_mb: 10 # in MB
The configuration above defines a pipeline named request_logging_index that consumes the disk queue request_logging and indexes the documents into the index gateway_requests on the target cluster dev, using a single worker and a bulk submit size of 10 MB.
Define the Indexing Cluster #
Next, configure the cluster used for indexing:
elasticsearch:
  - name: dev
    enabled: true
    endpoint: https://192.168.3.98:9200 # if your elasticsearch uses https, your gateway entry should listen on https as well
    basic_auth: # used to discover the full cluster's nodes and to check elasticsearch's health and version
      username: elastic
      password: pass
    discovery: # auto-discover elasticsearch cluster nodes
      enabled: true
      refresh:
        enabled: true
The configuration above defines an Elasticsearch cluster named dev and enables the Elastic module to handle the cluster's automatic configuration.
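Before going further, it is worth confirming that the endpoint and credentials work; a quick check with curl (-k is used here only because the example endpoint is assumed to use a self-signed certificate):

# Should return the cluster's name and version banner:
curl -k -u elastic:pass "https://192.168.3.98:9200/"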
Configure the Index Template #
Now you can configure the index template on the Elasticsearch cluster. Run the following command on the dev cluster to create the template for the log index.
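The full template body is not reproduced here; as a placeholder, here is a minimal sketch of a legacy index template for the gateway_requests index. The settings and the timestamp field are illustrative assumptions, not the template shipped with the gateway:

# Create a minimal legacy template matching the pipeline's target index:
curl -k -u elastic:pass -X PUT "https://192.168.3.98:9200/_template/gateway_requests" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["gateway_requests*"],
  "settings": { "number_of_shards": 1, "number_of_replicas": 0 },
  "mappings": {
    "properties": {
      "timestamp": { "type": "date" }
    }
  }
}'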
Configure the Index Lifecycle #
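To cap how much disk the request logs consume, you can attach an ILM policy to the index. A minimal sketch (the policy name, rollover thresholds, and retention period are all assumptions; tune them to your own request volume):

# Roll the log index over as it grows, and expire old data after 30 days:
curl -k -u elastic:pass -X PUT "https://192.168.3.98:9200/_ilm/policy/gateway_requests" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}'

For rollover to actually apply, the index template would also need to reference the policy via index.lifecycle.name and set index.lifecycle.rollover_alias, with writes going through that alias.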
Import the Dashboards #
Download the latest dashboards for Kibana 7.9, INFINI-Gateway-7.9.2-2021-01-15.ndjson.zip, and import them into Kibana on the dev cluster.
Start the Gateway #
Next, start the gateway:
➜ ./bin/gateway
___ _ _____ __ __ __ _
/ _ \ /_\ /__ \/__\/ / /\ \ \/_\ /\_/\
/ /_\///_\\ / /\/_\ \ \/ \/ //_\\\_ _/
/ /_\\/ _ \/ / //__ \ /\ / _ \/ \
\____/\_/ \_/\/ \__/ \/ \/\_/ \_/\_/
[GATEWAY] A light-weight, powerful and high-performance elasticsearch gateway.
[GATEWAY] 1.0.0_SNAPSHOT, a17be4c, Wed Feb 3 00:12:02 2021 +0800, medcl, add extra retry for bulk_indexing
[02-03 13:51:35] [INF] [instance.go:24] workspace: data/gateway/nodes/0
[02-03 13:51:35] [INF] [api.go:255] api server listen at: http://0.0.0.0:2900
[02-03 13:51:35] [INF] [runner.go:59] pipeline: request_logging_index started with 1 instances
[02-03 13:51:35] [INF] [entry.go:267] entry [es_gateway] listen at: http://0.0.0.0:8000
[02-03 13:51:35] [INF] [app.go:297] gateway now started.
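Before switching applications over, a quick smoke test can confirm that both the gateway's API port and the proxy path are working; a sketch assuming the addresses from the startup log above and the dev cluster's credentials:

# The gateway's own API (port 2900 per the startup log):
curl "http://192.168.3.98:2900/"

# Elasticsearch reached through the gateway entry (port 8000):
curl -u elastic:pass "http://192.168.3.98:8000/"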
Update Application Configuration #
Switch applications that previously pointed at the Elasticsearch address (such as Beats, Logstash, and Kibana) to the gateway's address instead.
Assuming the gateway's IP is 192.168.3.98, modify the Kibana configuration as follows:
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://192.168.3.98:8000"]
elasticsearch.customHeaders: { "app": "kibana" }
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
Save the configuration and restart Kibana.
Check the Results #
Now every request that reaches Elasticsearch through the gateway can be monitored.
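You can also query the log index directly to inspect what has been captured; a sketch assuming the dev cluster's endpoint and credentials from earlier (the exact document fields depend on your template):

# How many requests have been logged so far:
curl -k -u elastic:pass "https://192.168.3.98:9200/gateway_requests/_count?pretty"

# A few raw request-log documents:
curl -k -u elastic:pass "https://192.168.3.98:9200/gateway_requests/_search?size=5&pretty"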