Preface

  • If you've read this far, you presumably already have the basics covered. I couldn't really find a docker-compose ELK setup online, and the ones I did find weren't very tidy (not that mine is all that tidy either). I happened to need an ELK stack recently, so here's a quick write-up.

Initialize the environment

  • Create the corresponding files according to the directory structure below
ELK
├── docker-compose.yml
├── elasticsearch
│   ├── data
│   ├── logs
│   └── plugins
├── kibana
│   └── config
│       └── kibana.yml
└── logstash
    ├── config
    │   └── logstash.yml
    └── pipeline
        └── logstash.conf

Edit docker-compose.yml

  • The point to watch is that all three services have to be on the same network
version: '3'

services:
  elasticsearch:
    image: elasticsearch:7.17.2
    container_name: elasticsearch
    networks:
      - network_elk_test
    ports:
      - "9200:9200"
    environment:
      cluster.name: elasticsearch
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    volumes:
      - ./elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs

  kibana:
    image: kibana:7.17.2
    container_name: kibana
    networks:
      - network_elk_test
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    environment:
      I18N_LOCALE: zh-CN
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml

  logstash:
    image: logstash:7.17.2
    container_name: logstash
    networks:
      - network_elk_test
    ports:
      - "4560:4560"
    volumes:
      - ./logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    depends_on:
      - elasticsearch

networks:
  network_elk_test:
    name: ELKTest
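
  • After bringing the stack up with docker-compose up -d, it's worth confirming that Elasticsearch actually answers before going further. A minimal sketch, assuming Java 11+ and that port 9200 is mapped to the local machine as in the compose file above (the class name is made up):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sanity check: hit the Elasticsearch root endpoint and print the cluster
// info JSON. Requires Java 11+ for java.net.http.
public class EsHealthCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200")) // port mapped in docker-compose.yml
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode()); // expect 200
        System.out.println(response.body()); // should contain "cluster_name" : "elasticsearch"
    }
}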

Edit the kibana.yml file

  • Since all three containers sit on the same network, they can reach each other by service name
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
server.publicBaseUrl: "http://kibana:5601"

Edit logstash.yml

  • The service name is used to reach Elasticsearch here as well. Since I'm running 7.x, the third line (pipeline.ecs_compatibility) has to be added; if you're on a version before 7.x, you can leave it out.
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
pipeline.ecs_compatibility: v1

Edit logstash.conf

  • Pay attention to the index name format here: appName is filled in from the value of spring.application.name, with the date appended after it, so every service gets its own index for every day, which makes it much easier to find the right logs when troubleshooting.
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "%{appName}-%{+YYYY.MM.dd}" # index name: the appName field plus the date
  }
}
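
  • Before wiring up a real service, you can smoke-test the pipeline by pushing a single JSON line straight at the TCP input. A minimal sketch, assuming Logstash is reachable on localhost:4560; the appName value test-app is made up:

import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sends one newline-delimited JSON event to the Logstash tcp input. The
// json_lines codec parses it, and the appName field feeds the
// %{appName}-%{+YYYY.MM.dd} index pattern above.
public class LogstashTcpSmokeTest {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 4560);
             Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8)) {
            out.write("{\"appName\":\"test-app\",\"message\":\"hello from the smoke test\"}\n");
            out.flush();
        }
    }
}

  If it worked, the stdout { codec => rubydebug } output prints the event in the container log (docker logs logstash), and an index like test-app-<date> shows up in Elasticsearch.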

Add the logstash dependency to the Java service

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.1.1</version>
</dependency>

Create logback-spring.xml in the resources directory

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <property name="LOG_HOME" value="logs/demo.log"/>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
        </encoder>
    </appender>

    <springProperty scope="context" name="appName" source="spring.application.name"/>
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- the Logstash IP and port -->
        <destination>XX.XX.XX.XX:4560</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <!-- the field name must match the %{appName} reference in logstash.conf,
                 otherwise the index name never resolves -->
            <customFields>{"appName":"${appName}"}</customFields>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="logstash"/>
    </root>
</configuration>
  • With everything configured, start the service and an index named after the service plus the current date is created automatically; you can see it in Kibana. A quick way to generate a few test entries is sketched below.
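
  • For an end-to-end check, here is a minimal sketch of a Spring Boot entry point that logs a couple of lines at startup. It assumes a standard Spring Boot project with spring.application.name set; the class name, message, and orderId field are made up for illustration. Plain SLF4J calls are all it takes, since the logstash appender does the shipping, and StructuredArguments.kv from logstash-logback-encoder can attach extra searchable fields:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import static net.logstash.logback.argument.StructuredArguments.kv;

@SpringBootApplication
public class DemoApplication implements CommandLineRunner {

    private static final Logger log = LoggerFactory.getLogger(DemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Override
    public void run(String... args) {
        // A plain SLF4J call; the logstash appender turns it into a JSON event.
        log.info("ELK smoke test");
        // kv() adds an extra structured field to the JSON event, searchable in Kibana.
        log.info("order created, {}", kv("orderId", 42));
    }
}

  Once the index shows up, create a matching index pattern in Kibana (Stack Management → Index Patterns) to browse the entries.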