
ClickHouse Input Plugin

This plugin gathers statistics from a ClickHouse server. Users of ClickHouse Cloud will not see ZooKeeper metrics, as they likely do not have permission to query those tables.

Introduced in: Telegraf v1.14.0  Tags: server  OS support: all

Global Configuration Options

In addition to plugin-specific settings, the plugin supports global configuration options for modifying metrics, tags, and fields, creating aliases, and configuring plugin ordering. See CONFIGURATION.md for more details.

Configuration

# Read metrics from one or many ClickHouse servers
[[inputs.clickhouse]]
  ## Username for authorization on ClickHouse server
  username = "default"

  ## Password for authorization on ClickHouse server
  # password = ""

  ## HTTP(s) timeout while getting metrics values
  ## The timeout includes connection time, any redirects, and reading the
  ## response body.
  # timeout = "5s"

  ## List of servers for metrics scraping
  ## Metrics are scraped via the ClickHouse HTTP(s) interface
  ## https://clickhouse.ac.cn/docs/en/interfaces/http/
  servers = ["http://127.0.0.1:8123"]

  ## Server Variant
  ## When set to "managed", some queries are excluded from being run. This is
  ## useful for instances hosted in ClickHouse Cloud where certain tables are
  ## not available.
  # variant = "self-hosted"

  ## If "auto_discovery" is "true", the plugin tries to connect to all servers
  ## available in the cluster, using the same credentials given in the
  ## "username" and "password" parameters, and obtains the server hostname
  ## list from the "system.clusters" table. See
  ## - https://clickhouse.ac.cn/docs/en/operations/system_tables/#system-clusters
  ## - https://clickhouse.ac.cn/docs/en/operations/server_settings/settings/#server_settings_remote_servers
  ## - https://clickhouse.ac.cn/docs/en/operations/table_engines/distributed/
  ## - https://clickhouse.ac.cn/docs/en/operations/table_engines/replication/#creating-replicated-tables
  # auto_discovery = true

  ## Filter cluster names in "system.clusters" when "auto_discovery" is "true".
  ## When this filter is present, a "WHERE cluster IN (...)" filter is applied.
  ## Use only full cluster names here; regexp and glob filters are not
  ## allowed. For example, given "/etc/clickhouse-server/config.d/remote.xml":
  ## <yandex>
  ##  <remote_servers>
  ##    <my-own-cluster>
  ##        <shard>
  ##          <replica><host>clickhouse-ru-1.local</host><port>9000</port></replica>
  ##          <replica><host>clickhouse-ru-2.local</host><port>9000</port></replica>
  ##        </shard>
  ##        <shard>
  ##          <replica><host>clickhouse-eu-1.local</host><port>9000</port></replica>
  ##          <replica><host>clickhouse-eu-2.local</host><port>9000</port></replica>
  ##        </shard>
  ##    </my-own-cluster>
  ##  </remote_servers>
  ##
  ## </yandex>
  ##
  ## example: cluster_include = ["my-own-cluster"]
  # cluster_include = []

  ## Filter cluster names in "system.clusters" when "auto_discovery" is
  ## "true". When this filter is present, a "WHERE cluster NOT IN (...)"
  ## filter is applied.
  ##    example: cluster_exclude = ["my-internal-not-discovered-cluster"]
  # cluster_exclude = []

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
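The plugin collects its statistics by sending SQL queries to the ClickHouse HTTP(s) interface listed in `servers`. A minimal sketch of that interaction, assuming a server at the address above; the query, helper names, and the two-column TabSeparated response shape are illustrative, not the plugin's actual internals:

```python
from urllib.parse import urlencode

def scrape_url(server: str, query: str) -> str:
    """Build the HTTP-interface URL a scrape request would hit."""
    return server + "/?" + urlencode({"query": query})

def parse_tsv(body: str) -> dict:
    """Parse a two-column TabSeparated response body into {metric: value}."""
    result = {}
    for line in body.strip().splitlines():
        metric, value = line.split("\t")
        result[metric] = int(value)
    return result

url = scrape_url("http://127.0.0.1:8123",
                 "SELECT metric, toInt64(value) FROM system.metrics FORMAT TabSeparated")
# A response body for such a query looks like "Query\t1\nTCPConnection\t1\n"
fields = parse_tsv("Query\t1\nTCPConnection\t1\n")
```

Fetching `url` with any HTTP client and feeding the body to `parse_tsv` yields the metric/value pairs that become fields on the measurements described below.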

Metrics

  • clickhouse_events (see system.events for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
    • fields
  • clickhouse_metrics (see system.metrics for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
    • fields
  • clickhouse_asynchronous_metrics (see system.asynchronous_metrics for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
    • fields
  • clickhouse_tables

    • tags
      • source (ClickHouse server hostname)
      • table
      • database
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
    • fields
      • bytes
      • parts
      • rows
  • clickhouse_zookeeper (see system.zookeeper for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
    • fields
      • root_nodes (count of nodes with path=/)
  • clickhouse_replication_queue (see system.replication_queue for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
    • fields
      • too_many_tries_replicas (count of replicas with num_tries > 1)
  • clickhouse_detached_parts (see system.detached_parts for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
    • fields
  • clickhouse_dictionaries (see system.dictionaries for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
      • dict_origin (xml file name when the dictionary is created from a *_dictionary.xml file, database.table when the dictionary is created via DDL)
    • fields
      • is_loaded (1 when the dictionary data is loaded successfully, 0 when loading failed)
      • bytes_allocated (bytes allocated in RAM after the dictionary is loaded)
  • clickhouse_mutations (see system.mutations for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
    • fields
      • running - the number of currently incomplete mutations
      • failed - the total number of failed mutations since clickhouse-server was first started
      • completed - the total number of successfully completed mutations since clickhouse-server was first started
  • clickhouse_disks (see system.disks for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
      • name (disk name in the storage configuration)
      • path (disk path)
    • fields
      • free_space_percent - 0-100, current free disk space as a percentage of total disk space
      • keep_free_space_percent - 0-100, disk space that must be kept free as a percentage of total disk space
  • clickhouse_processes (see system.processes for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
    • fields
      • percentile_50 - float, the 50th percentile (0.5 quantile) of the elapsed field of running processes
      • percentile_90 - float, the 90th percentile (0.9 quantile) of the elapsed field of running processes
      • longest_running - float, the maximum value of the elapsed field of running processes
  • clickhouse_text_log (see system.text_log for details)

    • tags
      • source (ClickHouse server hostname)
      • cluster (cluster name, optional)
      • shard_num (shard number in the cluster, optional)
      • level (message level; only messages with a level less than or equal to Notice are collected)
    • fields
      • messages_last_10_min - the number of messages collected
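The field keys in the example output below are snake_case versions of ClickHouse's CamelCase metric names (for example, ReadCompressedBytes becomes read_compressed_bytes). A sketch of one common way to perform that conversion; this is an illustration, not the plugin's exact code:

```python
import re

def snake_case(name: str) -> str:
    """Convert a CamelCase ClickHouse metric name to a snake_case field key."""
    # Insert an underscore before each capital that starts a new word,
    # handling acronym runs like "TCPConnection" -> "tcp_connection".
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s)
    return s.lower()

snake_case("ReadCompressedBytes")  # -> "read_compressed_bytes"
snake_case("TCPConnection")        # -> "tcp_connection"
```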

Example Output

clickhouse_events,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 read_compressed_bytes=212i,arena_alloc_chunks=35i,function_execute=85i,merge_tree_data_writer_rows=3i,rw_lock_acquired_read_locks=421i,file_open=46i,io_buffer_alloc_bytes=86451985i,inserted_bytes=196i,regexp_created=3i,real_time_microseconds=116832i,query=23i,network_receive_elapsed_microseconds=268i,merge_tree_data_writer_compressed_bytes=1080i,arena_alloc_bytes=212992i,disk_write_elapsed_microseconds=556i,inserted_rows=3i,compressed_read_buffer_bytes=81i,read_buffer_from_file_descriptor_read_bytes=148i,write_buffer_from_file_descriptor_write=47i,merge_tree_data_writer_blocks=3i,soft_page_faults=896i,hard_page_faults=7i,select_query=21i,merge_tree_data_writer_uncompressed_bytes=196i,merge_tree_data_writer_blocks_already_sorted=3i,user_time_microseconds=40196i,compressed_read_buffer_blocks=5i,write_buffer_from_file_descriptor_write_bytes=3246i,io_buffer_allocs=296i,created_write_buffer_ordinary=12i,disk_read_elapsed_microseconds=59347044i,network_send_elapsed_microseconds=1538i,context_lock=1040i,insert_query=1i,system_time_microseconds=14582i,read_buffer_from_file_descriptor_read=3i 1569421000000000000
clickhouse_asynchronous_metrics,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 jemalloc.metadata_thp=0i,replicas_max_relative_delay=0i,jemalloc.mapped=1803177984i,jemalloc.allocated=1724839256i,jemalloc.background_thread.run_interval=0i,jemalloc.background_thread.num_threads=0i,uncompressed_cache_cells=0i,replicas_max_absolute_delay=0i,mark_cache_bytes=0i,compiled_expression_cache_count=0i,replicas_sum_queue_size=0i,number_of_tables=35i,replicas_max_merges_in_queue=0i,replicas_max_inserts_in_queue=0i,replicas_sum_merges_in_queue=0i,replicas_max_queue_size=0i,mark_cache_files=0i,jemalloc.background_thread.num_runs=0i,jemalloc.active=1726210048i,uptime=158i,jemalloc.retained=380481536i,replicas_sum_inserts_in_queue=0i,uncompressed_cache_bytes=0i,number_of_databases=2i,jemalloc.metadata=9207704i,max_part_count_for_partition=1i,jemalloc.resident=1742442496i 1569421000000000000
clickhouse_metrics,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 replicated_send=0i,write=0i,ephemeral_node=0i,zoo_keeper_request=0i,distributed_files_to_insert=0i,replicated_fetch=0i,background_schedule_pool_task=0i,interserver_connection=0i,leader_replica=0i,delayed_inserts=0i,global_thread_active=41i,merge=0i,readonly_replica=0i,memory_tracking_in_background_schedule_pool=0i,memory_tracking_for_merges=0i,zoo_keeper_session=0i,context_lock_wait=0i,storage_buffer_bytes=0i,background_pool_task=0i,send_external_tables=0i,zoo_keeper_watch=0i,part_mutation=0i,disk_space_reserved_for_merge=0i,distributed_send=0i,version_integer=19014003i,local_thread=0i,replicated_checks=0i,memory_tracking=0i,memory_tracking_in_background_processing_pool=0i,leader_election=0i,revision=54425i,open_file_for_read=0i,open_file_for_write=0i,storage_buffer_rows=0i,rw_lock_waiting_readers=0i,rw_lock_waiting_writers=0i,rw_lock_active_writers=0i,local_thread_active=0i,query_preempted=0i,tcp_connection=1i,http_connection=1i,read=2i,query_thread=0i,dict_cache_requests=0i,rw_lock_active_readers=1i,global_thread=43i,query=1i 1569421000000000000
clickhouse_tables,cluster=test_cluster_two_shards_localhost,database=system,host=kshvakov,source=localhost,shard_num=1,table=trace_log bytes=754i,parts=1i,rows=1i 1569421000000000000
clickhouse_tables,cluster=test_cluster_two_shards_localhost,database=default,host=kshvakov,source=localhost,shard_num=1,table=example bytes=326i,parts=2i,rows=2i 1569421000000000000
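Each line above is an InfluxDB line-protocol point: measurement name with comma-separated tags, a space, comma-separated fields, a space, and a nanosecond timestamp. A simplified parser sketch that handles lines like these (it ignores the escaping rules the full line protocol allows):

```python
def parse_line(line: str) -> dict:
    """Split an InfluxDB line-protocol point into measurement, tags,
    fields, and timestamp. Simplified: does not handle escaped
    commas or spaces inside tag/field values.
    """
    head, field_part, timestamp = line.rsplit(" ", 2)
    measurement, *tag_pairs = head.split(",")
    tags = dict(p.split("=", 1) for p in tag_pairs)
    fields = {}
    for pair in field_part.split(","):
        key, value = pair.split("=", 1)
        # Integer fields carry an "i" suffix in line protocol.
        fields[key] = int(value[:-1]) if value.endswith("i") else float(value)
    return {"measurement": measurement, "tags": tags,
            "fields": fields, "timestamp": int(timestamp)}

point = parse_line("clickhouse_tables,cluster=test_cluster_two_shards_localhost,"
                   "database=system,host=kshvakov,source=localhost,shard_num=1,"
                   "table=trace_log bytes=754i,parts=1i,rows=1i 1569421000000000000")
# point["measurement"] == "clickhouse_tables"; point["fields"]["bytes"] == 754
```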
