Amazon CloudWatch Logs Output Plugin
This plugin writes log metrics to the Amazon CloudWatch Logs service.
Introduced in: Telegraf v1.19.0
Tags: cloud, logging
OS support: all
Amazon Authentication
This plugin uses a credential chain to authenticate with the CloudWatch Logs API endpoint. The plugin attempts authentication in the following order (a configuration sketch follows the list):
- Web identity provider credentials via STS, if role_arn and web_identity_token_file are specified
- Assumed credentials via STS, if the role_arn attribute is specified (source credentials are evaluated from the subsequent rules). The endpoint_url attribute is used only for the CloudWatch Logs service; when retrieving credentials, the STS global endpoint is used.
- Explicit credentials from the access_key, secret_key, and token attributes
- Shared profile from the profile attribute
- Environment variables
- Shared credentials file
- EC2 instance profile
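As a hedged sketch of how these options fit together, the snippet below shows two alternative ways to supply credentials to this output; the role ARN, session name, profile name, and file path are placeholder values, not taken from this page.

# Sketch only: credential options for outputs.cloudwatch_logs.
# The ARN, session name, profile name, and file path below are placeholders.
[[outputs.cloudwatch_logs]]
region = "us-east-1"
log_group = "my-group-name"
log_stream = "tag:location"
log_data_metric_name = "docker_log"
log_data_source = "field:message"

## Option A: assume a role via STS. The source credentials for the STS call
## are resolved from the later rules (profile, environment variables, ...).
# role_arn = "arn:aws:iam::123456789012:role/telegraf-cloudwatch-writer"
# role_session_name = "telegraf"

## Option B: use a named profile from the shared credentials file.
# profile = "telegraf"
# shared_credential_file = "/home/telegraf/.aws/credentials"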
The IAM user needs the following permissions (see this reference for more information):
- logs:DescribeLogGroups - required to check that the configured log group exists
- logs:DescribeLogStreams - required to view all log streams associated with a log group
- logs:CreateLogStream - required to create a new log stream in a log group
- logs:PutLogEvents - required to upload a batch of log events to a log stream
Global configuration options
In addition to plugin-specific settings, plugins support global and plugin-level configuration settings for tasks such as modifying metrics, tags, and fields, creating aliases, and configuring plugin ordering. See CONFIGURATION.md for more details.
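As an illustration only, the hedged snippet below applies two of those common plugin-level settings, alias and namepass, to this output; the values shown are placeholders, and CONFIGURATION.md remains the authoritative reference.

# Illustration of common plugin-level settings described in CONFIGURATION.md;
# the values here are placeholders.
[[outputs.cloudwatch_logs]]
## Name this plugin instance in Telegraf's own logs and internal metrics.
alias = "cloudwatch-logs-docker"
## Only pass metrics whose measurement name matches this list.
namepass = ["docker_log"]
region = "us-east-1"
log_group = "my-group-name"
log_stream = "tag:location"
log_data_metric_name = "docker_log"
log_data_source = "field:message"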
Configuration
# Configuration for AWS CloudWatchLogs output.
[[outputs.cloudwatch_logs]]
## The region is the Amazon region that you wish to connect to.
## Examples include but are not limited to:
## - us-west-1
## - us-west-2
## - us-east-1
## - ap-southeast-1
## - ap-southeast-2
## ...
region = "us-east-1"
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and
## web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
## 5) environment variables
## 6) shared credentials file
## 7) EC2 Instance Profile
#access_key = ""
#secret_key = ""
#token = ""
#role_arn = ""
#web_identity_token_file = ""
#role_session_name = ""
#profile = ""
#shared_credential_file = ""
## Endpoint to make request against, the correct endpoint is automatically
## determined and this option should only be set if you wish to override the
## default, e.g. endpoint_url = "http://localhost:8000"
# endpoint_url = ""
## CloudWatch log group. Must be created in AWS CloudWatch Logs upfront!
## For example, you can specify the name of the k8s cluster here to group logs
## from all clusters in one place
log_group = "my-group-name"
## Log stream in the log group
## Either a log stream name or a reference to a metric attribute from which it
## can be parsed: tag:<TAG_NAME> or field:<FIELD_NAME>. If the log stream does
## not exist, it will be created. Since AWS does not automatically delete log
## streams with expired log entries (empty log streams), you need to put
## appropriate house-keeping in place
## (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
log_stream = "tag:location"
## Source of log data - metric name
## Specify the name of the metric from which the log data should be
## retrieved. E.g., if you are using the docker_log plugin to stream logs from
## containers, then specify log_data_metric_name = "docker_log"
log_data_metric_name = "docker_log"
## Specify from which metric attribute the log data should be retrieved:
## tag:<TAG_NAME> or field:<FIELD_NAME>.
## E.g., if you are using the docker_log plugin to stream logs from containers,
## then specify log_data_source = "field:message"
log_data_source = "field:message"
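To tie the options together, here is a hedged end-to-end sketch pairing the inputs.docker_log plugin with this output. It assumes the docker_log metric carries a container_name tag and a message field, as referenced in the comments above; the group name and endpoint are placeholders.

# Hedged end-to-end sketch: stream container logs into CloudWatch Logs.
# Assumes the docker_log metric carries a "container_name" tag and a
# "message" field; the log group name and endpoint are placeholders.
[[inputs.docker_log]]
## Connect to the local Docker daemon.
endpoint = "unix:///var/run/docker.sock"

[[outputs.cloudwatch_logs]]
region = "us-east-1"
## The log group must already exist in CloudWatch Logs.
log_group = "my-group-name"
## One log stream per container, named after the container_name tag.
log_stream = "tag:container_name"
log_data_metric_name = "docker_log"
log_data_source = "field:message"

With this layout, each container gets its own log stream named after its container_name tag, while all streams share the pre-created log group.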