Getting started with the Elastic Stack and Docker-Compose
Reposted from: https://www.elastic.co/cn/blog/getting-started-with-the-elastic-stack-and-docker-compose
By Eddie Mitchell, May 17, 2023

As the Elastic Stack has grown over the years and the feature sets have increased, so has the complexity of getting started or attempting a proof of concept (POC) locally. And while Elastic Cloud is still the fastest and easiest way to get started with Elastic, the need for local development and testing is still widely abundant. As developers, we are drawn to quick setups and rapid development with low-effort results. Nothing screams fast setup and POC quite like Docker, which is what we’ll be focusing on to get started with an entire Elastic Stack build-out for your local enjoyment.
In part one of this two-part series, we’ll dive into configuring the components of a standard Elastic Stack consisting of Elasticsearch, Logstash, Kibana, and Beats (ELK-B), on which we can immediately begin developing.
In part two, we’ll enhance our base configuration and add many of the different features that power our evolving stack, such as APM, Agent, Fleet, Integrations, and Enterprise Search. We will also look at instrumenting these in our new local environment for development and POC purposes.
For those who have been through some of this before, you’re welcome to TL;DR and head over to the repo to grab the files.
As a prerequisite, Docker Desktop or Docker Engine with Docker-Compose will need to be installed and configured. For this tutorial, we will be using Docker Desktop.
Our focus for these Docker containers will primarily be Elasticsearch and Kibana. However, we’ll also be utilizing Metricbeat to give us some cluster insight, plus Filebeat and Logstash for some ingestion basics.
File structure
First, let’s start by defining the outline of our file structure.
├── .env
├── docker-compose.yml
├── filebeat.yml
├── logstash.conf
└── metricbeat.yml
We’ll keep it simple initially. Elasticsearch and Kibana will be able to start from the docker-compose file, while Filebeat, Metricbeat, and Logstash will all need additional configuration from their yml files.
Environment file
Next, we’ll define variables to pass to docker-compose via the .env file. These parameters will help us establish ports, memory limits, component versions, and more.
.env
# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=changeme
# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=changeme
# Version of Elastic products
STACK_VERSION=8.7.1
# Set the cluster name
CLUSTER_NAME=docker-cluster
# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial
# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
# Port to expose Kibana to the host
KIBANA_PORT=5601
# Increase or decrease based on the available host memory (in bytes)
ES_MEM_LIMIT=1073741824
KB_MEM_LIMIT=1073741824
LS_MEM_LIMIT=1073741824
# SAMPLE Predefined Key only to be used in POC environments
ENCRYPTION_KEY=c34d38b3a14956121ff2170e5030b471551370178f43e5626eec58b04a30fae2
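Since the memory limits are specified in bytes, they are easy to misread. A quick shell sanity check (nothing Elastic-specific here) confirms that the value above is exactly 1 GiB, and shows what you would set for 2 GiB:

```shell
# The 1073741824-byte limit used above is exactly 1 GiB
echo "$((1073741824 / 1024 / 1024 / 1024)) GiB"   # prints "1 GiB"

# To raise a service's limit to 2 GiB, set it to this byte count instead
echo "$((2 * 1024 * 1024 * 1024)) bytes"          # prints "2147483648 bytes"
```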
Note that the placeholder word “changeme” for all the passwords and the sample key are used for demonstration purposes only. These should be changed even for your local POC needs.
As you can see here, we specify ports 9200 and 5601 for Elasticsearch and Kibana, respectively. This is also where you can change from the “basic” to the “trial” license type in order to test additional features.
We make use of the STACK_VERSION environment variable here in order to pass it to each of the services (containers) in our docker-compose.yml file. When using Docker, hard-coding the version number, as opposed to using something like the :latest tag, is a good way to maintain positive control over the environment. For components of the Elastic Stack, the :latest tag is not supported, and version numbers are required to pull the images.
Setup and Elasticsearch node
One of the first bits of trouble that’s often run into when getting started is security configuration. As of 8.0, security is enabled by default. Therefore, we’ll need to make sure the certificate CA is set up correctly by utilizing a “setup” node to establish the certificates. Having security enabled is a recommended practice and should not be disabled, even in POC environments.
docker-compose.yml (‘setup’ container)
version: "3.8"

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local
  metricbeatdata01:
    driver: local
  filebeatdata01:
    driver: local
  logstashdata01:
    driver: local

networks:
  default:
    name: elastic
    external: false

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: kibana\n"\
          "    dns:\n"\
          "      - kibana\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions";
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120
At the top of the docker-compose.yml, we set the compose file version, followed by the volumes and default networking configuration that will be used throughout our different containers.
We also see that we’re standing up a container labeled “setup” with some bash magic to specify our cluster nodes. This allows us to call elasticsearch-certutil, passing the server names in yml format in order to create the CA cert and node certs. If you wanted to have more than one Elasticsearch node in your stack, this is where you would add the server name to allow the cert creation.
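For illustration, here is what the generated instances.yml would look like with a hypothetical second node named es02 added. (The es02 entry is not part of this tutorial; you would also need a matching es02 service, volume, and certificate paths in docker-compose.yml.)

```yaml
instances:
  - name: es01
    dns:
      - es01
      - localhost
    ip:
      - 127.0.0.1
  # Hypothetical second node for a multi-node cluster
  - name: es02
    dns:
      - es02
      - localhost
    ip:
      - 127.0.0.1
  - name: kibana
    dns:
      - kibana
      - localhost
    ip:
      - 127.0.0.1
```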
Note: In a future post, we’ll adopt the recommended method of using a keystore to keep secrets, but for now, this will allow us to get the cluster up and running.
This setup container will start up first, wait for the es01 container to come online, and then use our environment variables to set up the passwords we want in our cluster. We’re also saving all certificates to the “certs” volume so that all other containers can have access to them.
Since the setup container is dependent on the es01 container, let’s take a quick look at the next configuration so we can start them both up:
docker-compose.yml (‘es01’ container)
  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    labels:
      co.elastic.logs/module: elasticsearch
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
This will be the single-node Elasticsearch cluster that we’re using for testing.
Notice we’ll be using the CA cert and node certificates that were generated.
You will also notice that we’re storing the Elasticsearch data in a volume outside of the container by specifying - esdata01:/usr/share/elasticsearch/data. The two primary reasons for this are performance and data persistence. If we were to leave the data directory inside the container, we would see a significant degradation in the performance of our Elasticsearch node, and we would lose data any time we needed to change the configuration of the container within our docker-compose file.
With both configurations in place, we can perform our first docker-compose up command.
Docker Compose tips
If you’re new to Docker Compose, or it’s been a while since you’ve had to remember some of the commands, let’s quickly review the primary ones you will want to know for this adventure.
You will want to run all these commands in a terminal from the folder in which your docker-compose.yml file resides. My example folder:

Let’s take a look at those commands.
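In short, the ones you’ll reach for most during this walkthrough are the following (es01 is used as an example service name from our docker-compose.yml):

```shell
docker-compose up            # create and start all services, streaming their logs
docker-compose up -d         # the same, but detached so you get your terminal back
docker-compose ps            # list the stack's containers and their health status
docker-compose logs -f es01  # follow the logs of a single service
docker-compose stop          # stop the containers without removing them
docker-compose down          # stop and remove the containers and the network
docker-compose down -v       # ...and the named volumes too (wipes data and certs!)
```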

Now, let’s run docker-compose up.

At this point, if the syntax is correct, Docker will begin downloading all the images and building the environment listed in the docker-compose.yml file. This may take a few minutes depending on your internet speed. If you want to see the images outside of Docker Desktop, you can always find them in the official Elastic Docker Hub.
Troubleshooting virtual memory misconfigurations
When starting up the Elasticsearch node for the first time, many users get stuck on the virtual memory configuration and receive an error message such as:
{"@timestamp":"2023-04-14T13:16:22.148Z", "log.level":"ERROR", "message":"node validation exception\n[1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.\nbootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
The key takeaway here is max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144].
Ultimately, the command sysctl -w vm.max_map_count=262144 needs to be run on the host where the containers are running.
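On a Linux host, or inside the VM backing Docker Desktop or WSL2, the fix typically looks like the following. The exact entry point differs per platform, so follow the platform-specific instructions below for yours:

```shell
# Raise the limit for the current boot
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf

# Verify the running value
sysctl vm.max_map_count
```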
In the case of Mac, check these instructions for Docker for Mac. Follow these instructions for Docker Desktop. For Linux users, see these instructions. Windows users, if you have Docker Desktop, you can try these instructions. However, if you’re using WSLv2 with Docker Desktop, take a look here.
Once complete, you can reboot Docker Desktop and retry your docker-compose up command.

Remember, the setup container will exit on purpose after it has finished generating the certs and passwords.
So far so good, but let’s test.
We can use a command to copy the ca.crt out of the es01-1 container. Remember, the name of the set of containers is based on the folder from which the docker-compose.yml is running. For example, my directory is “elasticstack_docker”; therefore, my command would look like this, based on the screenshot above:
docker cp elasticstack_docker-es01-1:/usr/share/elasticsearch/config/certs/ca/ca.crt /tmp/.
Once the certificate is downloaded, run a curl command to query the Elasticsearch node:
curl --cacert /tmp/ca.crt -u elastic:changeme https://localhost:9200

Success!
Notice that we’re accessing Elasticsearch using localhost:9200. This is thanks to the ports section of docker-compose.yml, which maps ports on the container to ports on the host and allows traffic to pass through your machine into the Docker container on the specified port.
Kibana
For the Kibana config, we will utilize the certificate output from earlier. We will also specify that this node doesn’t start until it sees that the Elasticsearch node above is up and running correctly.
docker-compose.yml (‘kibana’ container)
  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    labels:
      co.elastic.logs/module: kibana
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}
    mem_limit: ${KB_MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
Notice in our environment section that we’re specifying ELASTICSEARCH_HOSTS=https://es01:9200. We’re able to specify the container name here for our es01 Elasticsearch container since we’re utilizing the default Docker networking. All containers using the “elastic” network that was specified at the beginning of our docker-compose.yml file will be able to properly resolve other container names and communicate with each other.
Let’s load up Kibana and see if we can access it.

The containers are green. We should now be able to reach http://localhost:5601.

A quick login with the username and password that were specified should drop us right into a brand-new instance of Kibana. Excellent!
Metricbeat
Now that we have Kibana and Elasticsearch up, running, and communicating, let’s configure Metricbeat to help us keep an eye on things. This will require configuration both in our docker-compose file and in a standalone metricbeat.yml file.
Note: For Logstash, Filebeat, and Metricbeat, the configuration files use bind mounts. Bind-mounted files retain the same permissions and ownership within the container that they have on the host system. Be sure to set permissions such that the files are readable and, ideally, not writable by the container’s user; otherwise, you will receive an error in the container. Removing the write permissions on your host may suffice.
docker-compose.yml (‘metricbeat01’ container)
  metricbeat01:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
    user: root
    volumes:
      - certs:/usr/share/metricbeat/certs
      - metricbeatdata01:/usr/share/metricbeat/data
      - "./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro"
      - "/proc:/hostfs/proc:ro"
      - "/:/hostfs:ro"
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
      - KIBANA_HOSTS=http://kibana:5601
      - LOGSTASH_HOSTS=http://logstash01:9600
Here, we’re exposing host information regarding processes, the filesystem, and the Docker daemon to the Metricbeat container in a read-only fashion. This enables Metricbeat to collect the data to send to Elasticsearch.
metricbeat.yml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

metricbeat.modules:
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ${ELASTIC_HOSTS}
  ssl.certificate_authorities: "certs/ca/ca.crt"
  ssl.certificate: "certs/es01/es01.crt"
  ssl.key: "certs/es01/es01.key"
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
  ssl.enabled: true
- module: logstash
  xpack.enabled: true
  period: 10s
  hosts: ${LOGSTASH_HOSTS}
- module: kibana
  metricsets:
    - stats
  period: 10s
  hosts: ${KIBANA_HOSTS}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
  xpack.enabled: true
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "healthcheck"
    - "info"
    #- "image"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

processors:
  - add_host_metadata: ~
  - add_docker_metadata: ~

output.elasticsearch:
  hosts: ${ELASTIC_HOSTS}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
  ssl:
    certificate: "certs/es01/es01.crt"
    certificate_authorities: "certs/ca/ca.crt"
    key: "certs/es01/es01.key"
Our Metricbeat depends on the es01 and kibana services being healthy before starting. The notable configurations here are in the metricbeat.yml file. We have enabled four modules for gathering metrics: Elasticsearch, Kibana, Logstash, and Docker. This means that once we verify Metricbeat is up, we can hop into Kibana and navigate to “Stack Monitoring” to see how things look.
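Before heading into Kibana, you can sanity-check from the command line that Metricbeat documents are arriving. This sketch assumes the ca.crt was copied to /tmp as shown earlier and the default “changeme” password; the docker module writes to the metricbeat-* data stream, while the xpack-mode modules feed the hidden .monitoring-* indices that back Stack Monitoring:

```shell
# A non-zero count indicates the docker module is shipping metrics
curl -s --cacert /tmp/ca.crt -u elastic:changeme \
  "https://localhost:9200/metricbeat-*/_count"
```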

Don’t forget to set up your out-of-the-box rules!


Metricbeat is also configured to monitor the container’s host through /var/run/docker.sock. Checking Elastic Observability allows you to see metrics coming in from your host.

Filebeat
Now that the cluster is stable and monitored with Metricbeat, let’s look at Filebeat for log ingestion. Here, our Filebeat will be utilized in two different ways: reading custom log files dropped into a bind-mounted folder, and collecting the logs of the Docker containers themselves via autodiscover:
docker-compose.yml (‘filebeat01’ container)
  filebeat01:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/beats/filebeat:${STACK_VERSION}
    user: root
    volumes:
      - certs:/usr/share/filebeat/certs
      - filebeatdata01:/usr/share/filebeat/data
      - "./filebeat_ingest_data/:/usr/share/filebeat/ingest_data/"
      - "./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "/var/lib/docker/containers:/var/lib/docker/containers:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
      - KIBANA_HOSTS=http://kibana:5601
      - LOGSTASH_HOSTS=http://logstash01:9600
filebeat.yml
filebeat.inputs:
- type: filestream
  id: default-filestream
  paths:
    - ingest_data/*.log

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
- add_docker_metadata: ~

setup.kibana:
  host: ${KIBANA_HOSTS}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}

output.elasticsearch:
  hosts: ${ELASTIC_HOSTS}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
  ssl.enabled: true
  ssl.certificate_authorities: "certs/ca/ca.crt"
First, we set a bind mount to map the folder “filebeat_ingest_data” into the container. If this folder doesn’t exist on your host, it will be created when the container spins up. If you’d like to test the Logs Stream viewer within Elastic Observability with your custom logs, you can easily drop any file with a .log extension into /filebeat_ingest_data/ and the logs will be read into the default Filebeat datastream.
Alongside this, we also map in /var/lib/docker/containers and /var/run/docker.sock, which, combined with the filebeat.autodiscover section and hints-based autodiscover, allows Filebeat to pull in the logs for all the containers. These logs will also appear in the Logs Stream viewer mentioned above.
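To try this out, you can generate a small sample log on the host. The file name sample.log is arbitrary; anything ending in .log in this folder will be picked up by the filestream input:

```shell
# Create the ingest folder (if compose hasn't already) and drop in a sample log
mkdir -p filebeat_ingest_data
printf '%s\n' "first test line" "second test line" > filebeat_ingest_data/sample.log

# Count the lines we just wrote; Filebeat will ship each one as an event
wc -l < filebeat_ingest_data/sample.log
```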

Logstash
Our final container to bring to life is none other than Logstash.
docker-compose.yml (‘logstash01’ container)
  logstash01:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    labels:
      co.elastic.logs/module: logstash
    user: root
    volumes:
      - certs:/usr/share/logstash/certs
      - logstashdata01:/usr/share/logstash/data
      - "./logstash_ingest_data/:/usr/share/logstash/ingest_data/"
      - "./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro"
    environment:
      - xpack.monitoring.enabled=false
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
logstash.conf
input {
  file {
    # https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
    # Default is "tail", which assumes more data will come into the file.
    # Change to mode => "read" if the file is a complete file. By default, the file
    # will be removed once reading is complete -- back up your files if you need them.
    mode => "tail"
    path => "/usr/share/logstash/ingest_data/*"
  }
}

filter {
}

output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => "${ELASTIC_HOSTS}"
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
    cacert => "certs/ca/ca.crt"
  }
}
The Logstash configuration is very similar to the Filebeat configuration. Again, we’re using a bind mount and mapping a folder called /logstash_ingest_data/ from the host into the Logstash container. Here, you can test out some of Logstash’s many input plugins and filter plugins by modifying the logstash.conf file, then drop your data into the /logstash_ingest_data/ folder. You may need to restart your Logstash container after modifying the logstash.conf file.
Note that the Logstash output index name is “logstash-%{+YYYY.MM.dd}”. To see the data, you will need to create a Data View for the “logstash-*” pattern, as seen below.
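The %{+YYYY.MM.dd} portion of the index name is a Logstash date-format reference resolved per event, so each day’s data lands in its own index. Roughly, today’s index name corresponds to what the shell’s date command would render:

```shell
# Preview today's Logstash index name
# (Joda-style YYYY.MM.dd roughly maps to strftime %Y.%m.%d)
date "+logstash-%Y.%m.%d"
```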


Now, with Filebeat and Logstash both up and running, if you navigate back to Cluster Monitoring, you will see Logstash being monitored, as well as some metrics and links for Elasticsearch logs.

Conclusion

Part one of this series has covered a full active cluster with monitoring and ingestion as the foundation of our stack. This will act as your local playground to test some of the features of the Elastic ecosystem.
Stay tuned for part two! We’ll dive into optimizing this foundation, along with setting up additional features such as APM Server, Elastic Agents, Elastic Integrations, and Enterprise Search. We will also deploy and test an application that you can instrument with some of these pieces.
All files discussed here are available on GitHub, along with some sample data to ingest for Filebeat and Logstash.