
Remote File Output Plugin

This plugin uses the rclone library to write metrics to files at a remote location. For the list of currently supported backends, see https://rclone.org/#providers.

Introduced in: Telegraf v1.32.0 · Tags: datastore · Operating system support: all

Global configuration options

In addition to plugin-specific settings, plugins support additional global and plugin configuration options for tasks such as modifying metrics, tags, and fields, creating aliases, and configuring plugin ordering. See CONFIGURATION.md for more details.
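A minimal sketch of combining such options with this plugin; alias, namepass, and tagexclude are generic Telegraf plugin options, and the "cpu" metric name and "host" tag are only assumed examples:

# Only archive cpu metrics, drop the host tag, and name the instance in logs
[[outputs.remotefile]]
  alias = "remotefile-archive"
  namepass = ["cpu"]
  tagexclude = ["host"]
  files = ['{{.Name}}-{{.Time.Format "2006-01-02"}}']
  data_format = "influx"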

Secret-store support

This plugin supports secrets from secret-stores for the remote option. For details on how to use them, refer to the secret-store documentation.
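A hedged sketch of referencing such a secret inside the remote option; the secret-store id "mystore", the secret names, and the S3 parameters below are hypothetical examples:

# Resolve credentials from a secret-store using @{<store-id>:<secret-name>}
[[outputs.remotefile]]
  remote = "s3,provider=AWS,access_key_id=@{mystore:s3_key_id},secret_access_key=@{mystore:s3_secret},region=us-east-1:mybucket"
  files = ['{{.Name}}-{{.Time.Format "2006-01-02"}}']
  data_format = "influx"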

Configuration

# Send telegraf metrics to file(s) in a remote filesystem
[[outputs.remotefile]]
  ## Remote location according to https://rclone.org/#providers
  ## Check the backend configuration options and specify them in
  ##   <backend type>[,<param1>=<value1>[,...,<paramN>=<valueN>]]:[root]
  ## for example:
  ##   remote = 's3,provider=AWS,access_key_id=...,secret_access_key=...,session_token=...,region=us-east-1:mybucket'
  ## By default, remote is the local current directory
  # remote = "local:"

  ## Files to write in the remote location
  ## Each file can be a Golang template for generating the filename from metrics.
  ## See https://pkg.go.dev/text/template for a reference and use the metric
  ## name (`{{.Name}}`), tag values (`{{.Tag "name"}}`), field values
  ## (`{{.Field "name"}}`) or the metric time (`{{.Time}}`) to derive the
  ## filename.
  ## The 'files' setting may contain directories relative to the root path
  ## defined in 'remote'.
  files = ['{{.Name}}-{{.Time.Format "2006-01-02"}}']
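  ## For example, to additionally split the output per host into daily files
  ## (assuming the metrics carry a 'host' tag; the '.lp' suffix is arbitrary):
  # files = ['{{.Tag "host"}}/{{.Name}}-{{.Time.Format "2006-01-02"}}.lp']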

  ## Use batch serialization format instead of line based delimiting.
  ## The batch format allows for the production of non-line-based output formats
  ## and may more efficiently encode metrics.
  # use_batch_format = false

  ## Cache settings
  ## Time to wait for all writes to complete on shutdown of the plugin.
  # final_write_timeout = "10s"

  ## Time to wait between writing to a file and uploading to the remote location
  # cache_write_back = "5s"

  ## Maximum size of the cache on disk (infinite by default)
  # cache_max_size = -1

  ## Forget files after not being touched for longer than the given time
  ## This is useful to prevent memory leaks when using time-based filenames
  ## as it allows internal structures to be cleaned up.
  ## Note: When writing to a file after it has been forgotten, the file is
  ##       treated as a new file which might cause file-headers to be appended
  ##       again by certain serializers like CSV.
  ## By default files will be kept indefinitely.
  # forget_files_after = "0s"

  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "influx"
  
  ## Compress output data with the specified algorithm.
  ## If empty, compression will be disabled and files will be plain text.
  ## Supported algorithms are "zstd", "gzip" and "zlib".
  # compression_algorithm = ""

  ## Compression level for the algorithm above.
  ## Please note that different algorithms support different levels:
  ##   zstd  -- supports levels 1, 3, 7 and 11.
  ##   gzip -- supports levels 0, 1 and 9.
  ##   zlib -- supports levels 0, 1, and 9.
  ## By default the default compression level for each algorithm is used.
  # compression_level = -1
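
As an end-to-end sketch, the following writes gzip-compressed daily line-protocol files below a local directory; the path /var/lib/telegraf/metrics and the '.lp.gz' naming are assumed examples, not defaults:

[[outputs.remotefile]]
  remote = "local:/var/lib/telegraf/metrics"
  files = ['{{.Name}}-{{.Time.Format "2006-01-02"}}.lp.gz']
  data_format = "influx"
  compression_algorithm = "gzip"
  compression_level = 9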

Available custom functions

The following functions can be used in the templates:

  • now: returns the current time (example: {{now.Format "2006-01-02"}})
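
A short sketch of using now in a files template; in contrast to {{.Time}}, which is the metric's own timestamp, now returns the current time when the template is evaluated (the file name below is only an assumed example):

[[outputs.remotefile]]
  files = ['{{.Name}}-written-{{now.Format "2006-01-02"}}.lp']
  data_format = "influx"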
