Before we start on the source code, a quick word on what filebeat is. Beats is part of the well-known ELK log-analysis stack; its predecessor, logstash-forwarder, collected logs and forwarded them to a backend (logstash, elasticsearch, redis, kafka, and so on). filebeat is one of the beats in the beats project, and its job is to collect new content appended to log files. The code analyzed here is the latest 6.x branch.
Let's start with an example of the configuration file our service launches with; this one uses filebeat to collect Kubernetes logs:
filebeat.prospectors:
- type: log
  paths:
    - /var/lib/docker/containers/*/*-json.log
    - /var/log/filelog/containers/*/*/*/*.log
processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"
- add_fields:
    fields:
      log: '{message}'
- decode_json_fields:
    when:
      regexp:
        message: "{*}"
    fields: ["message"]
    overwrite_keys: true
    target: ""
- drop_fields:
    fields: ["docker.container.labels.annotation.io.kubernetes.container.terminationMessagePath", "docker.container.labels.annotation.io.kubernetes.container.hash", "docker.container.labels.annotation.io.kubernetes.container.terminationMessagePolicy", "docker.container.labels.annotation.io.kubernetes.pod.terminationGracePeriod", "beat.version", "docker.container.labels.annotation.io.kubernetes.container.ports", "docker.container.labels.io.kubernetes.container.terminationMessagePath", "docker.container.labels.io.kubernetes.container.restartCount", "docker.container.labels.io.kubernetes.container.ports", "docker.container.labels.io.kubernetes.container.hash", "docker.container.labels.io.kubernetes.pod.terminationGracePeriod", "docker.container.labels.annotation.io.kubernetes.container.restartCount", "message"]
- parse_level:
    levels: ["fatal", "error", "warn", "info", "debug"]
    field: "log"
logging.level: info
setup.template.enabled: true
setup.template.name: "filebeat-%{+yyyy.MM.dd}"
setup.template.pattern: "filebeat-*"
setup.template.fields: "${path.config}/fields.yml"
setup.template.overwrite: true
setup.template.settings:
  index:
    analysis:
      analyzer:
        enncloud_analyzer:
          filter: ["standard", "lowercase", "stop"]
          char_filter: ["my_filter"]
          type: custom
          tokenizer: standard
      char_filter:
        my_filter:
          type: mapping
          mappings: ["-=>_"]
output:
  elasticsearch:
    hosts: ["paasdev.enncloud.cn:9200"]
    index: "filebeat-%{+yyyy.MM.dd}"
filebeat loads this configuration file when it starts. Now look at the interface everything boils down to, in libbeat/beat/beat.go:
type Beater interface {
    // The main event loop. This method should block until signalled to stop by an
    // invocation of the Stop() method.
    Run(b *Beat) error

    // Stop is invoked to signal that the Run method should finish its execution.
    // It will be invoked at most once.
    Stop()
}
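These two methods are the entire lifecycle contract. As a warm-up, here is a minimal hypothetical beat showing the usual shape: Stop closes a done channel and Run selects on it until then. Countbeat and everything inside it are illustrative names, not code from the beats repository:

package main

import (
    "time"

    "github.com/elastic/beats/libbeat/beat"
)

// Countbeat is a hypothetical beat; only the lifecycle wiring matters here.
type Countbeat struct {
    done chan struct{}
}

// Run blocks until Stop is invoked, ticking once per second in between.
func (cb *Countbeat) Run(b *beat.Beat) error {
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-cb.done: // Stop() was called: unblock and return
            return nil
        case <-ticker.C:
            // collect and publish events here
        }
    }
}

// Stop signals Run to finish; it is invoked at most once.
func (cb *Countbeat) Stop() {
    close(cb.done)
}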
Every beat has to implement this pair of methods, and filebeat is no exception; its concrete implementation lives in filebeat/beater/filebeat.go. Space is limited, so what follows is an abridged paste:
// Run allows the beater to be run as a beat.
func (fb *Filebeat) Run(b *beat.Beat) error {
    var err error
    config := fb.config

    if !fb.moduleRegistry.Empty() {
        err = fb.loadModulesPipelines(b)
        if err != nil {
            return err
        }
    }

    // Setup registrar to persist state
    registrar, err := registrar.New(config.RegistryFile, config.RegistryFlush, finishedLogger)
    if err != nil {
        logp.Err("Could not init registrar: %v", err)
        return err
    }

    err = b.Publisher.SetACKHandler(beat.PipelineACKHandler{
        ACKEvents: newEventACKer(registrarChannel).ackEvents,
    })
    if err != nil {
        logp.Err("Failed to install the registry with the publisher pipeline: %v", err)
        return err
    }

    crawler, err := crawler.New(
        channel.NewOutletFactory(outDone, b.Publisher, wgEvents).Create,
        config.Prospectors,
        b.Info.Version,
        fb.done,
        *once)
    if err != nil {
        logp.Err("Could not init crawler: %v", err)
        return err
    }

    err = registrar.Start()
    if err != nil {
        return fmt.Errorf("Could not start registrar: %v", err)
    }

    var pipelineLoaderFactory fileset.PipelineLoaderFactory
    if b.Config.Output.Name() == "elasticsearch" {
        pipelineLoaderFactory = newPipelineLoaderFactory(b.Config.Output.Config())
    } else {
        logp.Warn(pipelinesWarning)
    }

    err = crawler.Start(registrar, config.ConfigProspector, config.ConfigModules, pipelineLoaderFactory)
    if err != nil {
        crawler.Stop()
        return err
    }

    var adiscover *autodiscover.Autodiscover
    if fb.config.Autodiscover != nil {
        adapter := NewAutodiscoverAdapter(crawler.ProspectorsFactory, crawler.ModulesFactory)
        adiscover, err = autodiscover.NewAutodiscover("filebeat", adapter, config.Autodiscover)
        if err != nil {
            return err
        }
    }
    adiscover.Start()

    return nil
}
// Stop is called on exit to stop the crawling, spooling and registration processes.
func (fb *Filebeat) Stop() {
    logp.Info("Stopping filebeat")

    // Stop Filebeat
    close(fb.done)
}
The code above walked, in abridged form, through the Run and Stop functions. Stop is quite simple: it is just the master off switch, closing the done channel, so there is nothing more to say about it. Let's instead analyze the startup (Run) method in detail; it is the core of the whole of filebeat.
filebeat can collect the logs of specific applications such as redis and nginx. That support comes from modules, so the first thing the program does is check that Elasticsearch has the associated pipeline/ingest definitions:
if !fb.moduleRegistry.Empty() {
    err = fb.loadModulesPipelines(b)
    if err != nil {
        return err
    }
}
Let's dig into that registration method:
func (fb *Filebeat) loadModulesPipelines(b *beat.Beat) error {
    if b.Config.Output.Name() != "elasticsearch" {
        logp.Warn(pipelinesWarning)
        return nil
    }

    // Register a connect callback: every time a connection to ES is
    // established, the pipelines are confirmed with ES again.
    callback := func(esClient *elasticsearch.Client) error {
        return fb.moduleRegistry.LoadPipelines(esClient)
    }
    elasticsearch.RegisterConnectCallback(callback)

    return nil
}
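The registry behind RegisterConnectCallback is a simple pattern: the ES client keeps a list of functions and runs every one of them each time a connection is (re)established. A stripped-down sketch of the idea, illustrative rather than libbeat's actual code:

package main

import "fmt"

// Client stands in for elasticsearch.Client; illustrative only.
type Client struct{ URL string }

// connectCallbacks mimics the registry behind RegisterConnectCallback.
var connectCallbacks []func(*Client) error

// RegisterConnectCallback remembers a function to run on every new connection.
func RegisterConnectCallback(cb func(*Client) error) {
    connectCallbacks = append(connectCallbacks, cb)
}

// onConnect is what the client would run each time a connection to ES is
// (re)established, e.g. after a network hiccup or an ES restart.
func onConnect(c *Client) error {
    for _, cb := range connectCallbacks {
        if err := cb(c); err != nil {
            return err
        }
    }
    return nil
}

func main() {
    RegisterConnectCallback(func(c *Client) error {
        fmt.Println("re-loading module pipelines into", c.URL)
        return nil
    })
    _ = onConnect(&Client{URL: "http://paasdev.enncloud.cn:9200"})
}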
So this step mainly confirms the pipelines with ES up front. Continuing with startup, the next step creates the registrar. What is the registrar?
registrar, err := registrar.New(config.RegistryFile, config.RegistryFlush, finishedLogger)
if err != nil {
    logp.Err("Could not init registrar: %v", err)
    return err
}
It records how far each log has been read by persisting its offset. Below is an excerpt from a registry file:
{"source":"/var/lib/docker/containers/e892ad615535e877c8af5856bd27631937d050d00b4ca55554bec41e3391685f/e892ad615535e877c8af5856bd27631937d050d00b4ca55554bec41e3391685f-json.log","offset":0,"timestamp":"2017-11-29T17:01:28.645203497Z","ttl":-1,"type":"log","FileStateOS":{"inode":526963,"device":64769}}
This JSON file stores each source file and its offset, which lets filebeat pick up where it left off after a restart.
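For reference, here is a sketch of Go structs that would deserialize such an entry; the field names are read off the JSON above, while filebeat's real state type differs in details (it uses time.Time and time.Duration, for instance):

package main

import (
    "encoding/json"
    "fmt"
)

// FileStateOS mirrors the OS-specific identity in the registry entry; the
// inode/device pair lets filebeat recognize a file even after a rename.
type FileStateOS struct {
    Inode  uint64 `json:"inode"`
    Device uint64 `json:"device"`
}

// State is a simplified stand-in for filebeat's persisted file state.
type State struct {
    Source      string      `json:"source"`
    Offset      int64       `json:"offset"`
    Timestamp   string      `json:"timestamp"`
    TTL         int64       `json:"ttl"`
    Type        string      `json:"type"`
    FileStateOS FileStateOS `json:"FileStateOS"`
}

func main() {
    raw := `{"source":"/var/lib/docker/containers/x/x-json.log","offset":0,
      "timestamp":"2017-11-29T17:01:28.645203497Z","ttl":-1,"type":"log",
      "FileStateOS":{"inode":526963,"device":64769}}`
    var s State
    if err := json.Unmarshal([]byte(raw), &s); err != nil {
        panic(err)
    }
    fmt.Printf("resume %s at offset %d\n", s.Source, s.Offset)
}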
Next the crawler is created; it is the component responsible for log collection.
crawler, err := crawler.New(
    channel.NewOutletFactory(outDone, b.Publisher, wgEvents).Create,
    config.Prospectors,
    b.Info.Version,
    fb.done,
    *once)
Through config.Prospectors the crawler knows which targets to collect from. It then runs
err = crawler.Start(registrar, config.ConfigProspector, config.ConfigModules, pipelineLoaderFactory)
if err != nil {
    crawler.Stop()
    return err
}
to start the collection tasks. Let's look at where the tasks actually get started, in filebeat/crawler/crawler.go:
for _, prospectorConfig := range c.prospectorConfigs {
    err := c.startProspector(prospectorConfig, r.GetStates())
    if err != nil {
        return err
    }
}
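Each startProspector call builds a prospector from its config, hands it the states the registrar has persisted, and lets it run concurrently, with fb.done acting as the shared stop signal. A hedged sketch of that fan-out shape, with illustrative names rather than filebeat's actual types:

package main

import (
    "fmt"
    "sync"
    "time"
)

// prospector is an illustrative stand-in; each one loops until done closes.
type prospector struct{ id int }

func (p *prospector) run(done <-chan struct{}) {
    for {
        select {
        case <-done:
            return
        case <-time.After(100 * time.Millisecond):
            fmt.Printf("prospector %d: scanning\n", p.id)
        }
    }
}

func main() {
    done := make(chan struct{})
    var wg sync.WaitGroup

    // One goroutine per configured prospector, all sharing one done channel.
    for i := 0; i < 3; i++ {
        p := &prospector{id: i}
        wg.Add(1)
        go func() {
            defer wg.Done()
            p.run(done)
        }()
    }

    time.Sleep(300 * time.Millisecond)
    close(done) // the same master-switch role fb.done plays in filebeat
    wg.Wait()
}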
Following startProspector brings us to filebeat/prospector/prospector.go:
func (p *Prospector) Run() {
    // Initial prospector run
    p.prospectorer.Run()

    // Shuts down after the first complete run of all prospectors
    if p.Once {
        return
    }

    for {
        select {
        case <-p.done:
            logp.Info("Prospector ticker stopped")
            return
        case <-time.After(p.config.ScanFrequency):
            logp.Debug("prospector", "Run prospector")
            p.prospectorer.Run()
        }
    }
}
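A small design note: time.After allocates a fresh timer on every loop iteration. The same shape can be written with one reusable ticker; this is an illustrative alternative, not what filebeat's code does:

package main

import (
    "fmt"
    "time"
)

// runLoop has the same shape as Prospector.Run above, but reuses one ticker
// instead of creating a new timer per iteration via time.After.
func runLoop(done <-chan struct{}, scanFrequency time.Duration, scan func()) {
    scan() // initial run, mirroring p.prospectorer.Run() above

    ticker := time.NewTicker(scanFrequency)
    defer ticker.Stop()
    for {
        select {
        case <-done:
            return
        case <-ticker.C:
            scan()
        }
    }
}

func main() {
    done := make(chan struct{})
    go runLoop(done, 50*time.Millisecond, func() { fmt.Println("scan") })
    time.Sleep(200 * time.Millisecond)
    close(done)
}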
prospectorer.Run here is an interface method, with implementations that read logs directly from UDP, stdin, plain log files, Redis, or Docker. Let's take the log one, filebeat/prospector/log/prospector.go:
func (p *Prospector) Run() {
    ...
    p.scan()
    ...
}
When the scan finds a file that needs collecting, a harvester task is created for it:
if lastState.IsEmpty() {
    logp.Debug("prospector", "Start harvester for new file: %s", newState.Source)
    err := p.startHarvester(newState, 0)
    if err != nil {
        logp.Err("Harvester could not be started on new file: %s, Err: %s", newState.Source, err)
    }
} else {
    p.harvestExistingFile(newState, lastState)
}
startHarvester launches the actual log collection, following the familiar pattern: first createHarvester, then harvesters.Start(h).
Inside the harvester, an endless for loop repeatedly executes
message, err := h.reader.Next()
reading the file one message at a time, batch after batch. That covers the startup flow; the loop's shape is sketched below, and the many remaining details will be covered in later posts.
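Here is that sketch of the read loop's shape: keep calling Next until EOF or an error, handling one message per iteration. The reader interface and lineReader below are illustrative stand-ins, not filebeat's actual harvester/reader types:

package main

import (
    "bufio"
    "fmt"
    "io"
    "os"
)

// reader mirrors the shape of the h.reader used above: each call to Next
// yields the next message or an error that ends the harvest.
type reader interface {
    Next() (message string, err error)
}

// lineReader is a trivial implementation that treats each line as a message.
type lineReader struct{ s *bufio.Scanner }

func (r *lineReader) Next() (string, error) {
    if r.s.Scan() {
        return r.s.Text(), nil
    }
    if err := r.s.Err(); err != nil {
        return "", err
    }
    return "", io.EOF
}

func main() {
    f, err := os.Open("/var/log/syslog") // illustrative path
    if err != nil {
        panic(err)
    }
    defer f.Close()

    var r reader = &lineReader{s: bufio.NewScanner(f)}
    for {
        message, err := r.Next() // same shape as h.reader.Next() above
        if err != nil {
            break // EOF or read error ends the harvest
        }
        fmt.Println(message) // a real harvester would forward this as an event
    }
}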