• Telegraf learning, part 1: basic installation


    Telegraf is a plugin-driven server agent developed by InfluxData that makes it easy to collect and report system metrics.

    I am on macOS, so for this test I installed it with brew.

    Installation

    • Download

      Note that the official releases page also provides macOS builds:

    https://github.com/influxdata/telegraf/releases

    • Linux installation
      Download the release matching your platform; a sketch follows after this list.
    • macOS installation
     
     brew update
     brew install telegraf
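
      For Linux, a minimal sketch of a tarball install (the version and architecture in the URL are
      only examples; pick the matching asset from the releases page above):

     # example release; adjust version/arch as needed
     wget https://dl.influxdata.com/telegraf/releases/telegraf-1.11.3_linux_amd64.tar.gz
     tar xf telegraf-1.11.3_linux_amd64.tar.gz

      Either way, the install can be verified with the version subcommand (listed in the help output
      later in this post):

     telegraf version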

    Basic usage

    • Generate a runtime configuration file
      The installed binary already includes a command for generating configuration files. The following
      generates a simple config that collects CPU and memory usage and writes the metrics to InfluxDB:
     
    telegraf -sample-config -input-filter cpu:mem -output-filter influxdb > telegraf.conf
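
      Newer releases document a subcommand form for the same operation (see the examples in the help
      output at the end of this post); this is equivalent:

     telegraf --input-filter cpu:mem --output-filter influxdb config > telegraf.conf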
     

    The generated content is as follows:

    # Telegraf Configuration
    #
    # Telegraf is entirely plugin driven. All metrics are gathered from the
    # declared inputs, and sent to the declared outputs.
    #
    # Plugins must be declared in here to be active.
    # To deactivate a plugin, comment out the name and any variables.
    #
    # Use 'telegraf -config telegraf.conf -test' to see what metrics a config
    # file would generate.
    #
    # Environment variables can be used anywhere in this config file, simply surround
    # them with ${}. For strings the variable must be within quotes (ie, "${STR_VAR}"),
    # for numbers and booleans they should be plain (ie, ${INT_VAR}, ${BOOL_VAR})
    # Global tags can be specified here in key="value" format.
    [global_tags]
      # dc = "us-east-1" # will tag all metrics with dc=us-east-1
      # rack = "1a"
      ## Environment variables can be used as tags, and throughout the config file
      # user = "$USER"
    # Configuration for telegraf agent
    [agent]
      ## Default data collection interval for all inputs
      interval = "10s"
      ## Rounds collection interval to 'interval'
      ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
      round_interval = true
      ## Telegraf will send metrics to outputs in batches of at most
      ## metric_batch_size metrics.
      ## This controls the size of writes that Telegraf sends to output plugins.
      metric_batch_size = 1000
      ## Maximum number of unwritten metrics per output.
      metric_buffer_limit = 10000
      ## Collection jitter is used to jitter the collection by a random amount.
      ## Each plugin will sleep for a random time within jitter before collecting.
      ## This can be used to avoid many plugins querying things like sysfs at the
      ## same time, which can have a measurable effect on the system.
      collection_jitter = "0s"
      ## Default flushing interval for all outputs. Maximum flush_interval will be
      ## flush_interval + flush_jitter
      flush_interval = "10s"
      ## Jitter the flush interval by a random amount. This is primarily to avoid
      ## large write spikes for users running a large number of telegraf instances.
      ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
      flush_jitter = "0s"
      ## By default or when set to "0s", precision will be set to the same
      ## timestamp order as the collection interval, with the maximum being 1s.
      ##   ie, when interval = "10s", precision will be "1s"
      ##       when interval = "250ms", precision will be "1ms"
      ## Precision will NOT be used for service inputs. It is up to each individual
      ## service input to set the timestamp at the appropriate precision.
      ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
      precision = ""
      ## Log at debug level.
      # debug = false
      ## Log only error level messages.
      # quiet = false
      ## Log file name, the empty string means to log to stderr.
      # logfile = ""
      ## The logfile will be rotated after the time interval specified.  When set
      ## to 0 no time based rotation is performed.
      # logfile_rotation_interval = "0d"
      ## The logfile will be rotated when it becomes larger than the specified
      ## size.  When set to 0 no size based rotation is performed.
      # logfile_rotation_max_size = "0MB"
      ## Maximum number of rotated archives to keep, any older logs are deleted.
      ## If set to -1, no archives are removed.
      # logfile_rotation_max_archives = 5
      ## Override default hostname, if empty use os.Hostname()
      hostname = ""
      ## If set to true, do no set the "host" tag in the telegraf agent.
      omit_hostname = false
    ###############################################################################
    #                            OUTPUT PLUGINS                                   #
    ###############################################################################
    # Configuration for sending metrics to InfluxDB
    [[outputs.influxdb]]
      ## The full HTTP or UDP URL for your InfluxDB instance.
      ##
      ## Multiple URLs can be specified for a single cluster, only ONE of the
      ## urls will be written to each interval.
      # urls = ["unix:///var/run/influxdb.sock"]
      # urls = ["udp://127.0.0.1:8089"]
      # urls = ["http://127.0.0.1:8086"]
      ## The target database for metrics; will be created as needed.
      ## For UDP url endpoint database needs to be configured on server side.
      # database = "telegraf"
      ## The value of this tag will be used to determine the database.  If this
      ## tag is not set the 'database' option is used as the default.
      # database_tag = ""
      ## If true, no CREATE DATABASE queries will be sent.  Set to true when using
      ## Telegraf with a user without permissions to create databases or when the
      ## database already exists.
      # skip_database_creation = false
      ## Name of existing retention policy to write to.  Empty string writes to
      ## the default retention policy.  Only takes effect when using HTTP.
      # retention_policy = ""
      ## Write consistency (clusters only), can be: "any", "one", "quorum", "all".
      ## Only takes effect when using HTTP.
      # write_consistency = "any"
      ## Timeout for HTTP messages.
      # timeout = "5s"
      ## HTTP Basic Auth
      # username = "telegraf"
      # password = "metricsmetricsmetricsmetrics"
      ## HTTP User-Agent
      # user_agent = "telegraf"
      ## UDP payload size is the maximum packet size to send.
      # udp_payload = "512B"
      ## Optional TLS Config for use on HTTP connections.
      # tls_ca = "/etc/telegraf/ca.pem"
      # tls_cert = "/etc/telegraf/cert.pem"
      # tls_key = "/etc/telegraf/key.pem"
      ## Use TLS but skip chain & host verification
      # insecure_skip_verify = false
      ## HTTP Proxy override, if unset values the standard proxy environment
      ## variables are consulted to determine which proxy, if any, should be used.
      # http_proxy = "http://corporate.proxy:3128"
      ## Additional HTTP headers
      # http_headers = {"X-Special-Header" = "Special-Value"}
      ## HTTP Content-Encoding for write request body, can be set to "gzip" to
      ## compress body or "identity" to apply no encoding.
      # content_encoding = "identity"
      ## When true, Telegraf will output unsigned integers as unsigned values,
      ## i.e.: "42u".  You will need a version of InfluxDB supporting unsigned
      ## integer values.  Enabling this option will result in field type errors if
      ## existing data has been written.
      # influx_uint_support = false
    ###############################################################################
    #                            PROCESSOR PLUGINS                                #
    ###############################################################################
    # # Convert values to another metric value type
    # [[processors.converter]]
    #   ## Tags to convert
    #   ##
    #   ## The table key determines the target type, and the array of key-values
    #   ## select the keys to convert.  The array may contain globs.
    #   ##   <target-type> = [<tag-key>...]
    #   [processors.converter.tags]
    #     string = []
    #     integer = []
    #     unsigned = []
    #     boolean = []
    #     float = []
    #
    #   ## Fields to convert
    #   ##
    #   ## The table key determines the target type, and the array of key-values
    #   ## select the keys to convert.  The array may contain globs.
    #   ##   <target-type> = [<field-key>...]
    #   [processors.converter.fields]
    #     tag = []
    #     string = []
    #     integer = []
    #     unsigned = []
    #     boolean = []
    #     float = []
    # # Map enum values according to given table.
    # [[processors.enum]]
    #   [[processors.enum.mapping]]
    #     ## Name of the field to map
    #     field = "status"
    #
    #     ## Name of the tag to map
    #     # tag = "status"
    #
    #     ## Destination tag or field to be used for the mapped value.  By default the
    #     ## source tag or field is used, overwriting the original value.
    #     dest = "status_code"
    #
    #     ## Default value to be used for all values not contained in the mapping
    #     ## table.  When unset, the unmodified value for the field will be used if no
    #     ## match is found.
    #     # default = 0
    #
    #     ## Table of mappings
    #     [processors.enum.mapping.value_mappings]
    #       green = 1
    #       amber = 2
    #       red = 3
    # # Apply metric modifications using override semantics.
    # [[processors.override]]
    #   ## All modifications on inputs and aggregators can be overridden:
    #   # name_override = "new_name"
    #   # name_prefix = "new_name_prefix"
    #   # name_suffix = "new_name_suffix"
    #
    #   ## Tags to be added (all values must be strings)
    #   # [processors.override.tags]
    #   #   additional_tag = "tag_value"
    # # Parse a value in a specified field/tag(s) and add the result in a new metric
    # [[processors.parser]]
    #   ## The name of the fields whose value will be parsed.
    #   parse_fields = []
    #
    #   ## If true, incoming metrics are not emitted.
    #   drop_original = false
    #
    #   ## If set to override, emitted metrics will be merged by overriding the
    #   ## original metric using the newly parsed metrics.
    #   merge = "override"
    #
    #   ## The dataformat to be read from files
    #   ## Each data format has its own unique set of configuration options, read
    #   ## more about them here:
    #   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
    #   data_format = "influx"
    # # Print all metrics that pass through this filter.
    # [[processors.printer]]
    # # Transforms tag and field values with regex pattern
    # [[processors.regex]]
    #   ## Tag and field conversions defined in a separate sub-tables
    #   # [[processors.regex.tags]]
    #   #   ## Tag to change
    #   #   key = "resp_code"
    #   #   ## Regular expression to match on a tag value
    #   #   pattern = "^(\d)\d\d$"
    #   #   ## Pattern for constructing a new value (${1} represents first subgroup)
    #   #   replacement = "${1}xx"
    #
    #   # [[processors.regex.fields]]
    #   #   key = "request"
    #   #   ## All the power of the Go regular expressions available here
    #   #   ## For example, named subgroups
    #   #   pattern = "^/api(?P<method>/[\w/]+)\S*"
    #   #   replacement = "${method}"
    #   #   ## If result_key is present, a new field will be created
    #   #   ## instead of changing existing field
    #   #   result_key = "method"
    #
    #   ## Multiple conversions may be applied for one field sequentially
    #   ## Let's extract one more value
    #   # [[processors.regex.fields]]
    #   #   key = "request"
    #   #   pattern = ".*category=(\w+).*"
    #   #   replacement = "${1}"
    #   #   result_key = "search_category"
    # # Rename measurements, tags, and fields that pass through this filter.
    # [[processors.rename]]
    # # Perform string processing on tags, fields, and measurements
    # [[processors.strings]]
    #   ## Convert a tag value to uppercase
    #   # [[processors.strings.uppercase]]
    #   #   tag = "method"
    #
    #   ## Convert a field value to lowercase and store in a new field
    #   # [[processors.strings.lowercase]]
    #   #   field = "uri_stem"
    #   #   dest = "uri_stem_normalised"
    #
    #   ## Trim leading and trailing whitespace using the default cutset
    #   # [[processors.strings.trim]]
    #   #   field = "message"
    #
    #   ## Trim leading characters in cutset
    #   # [[processors.strings.trim_left]]
    #   #   field = "message"
    #   #   cutset = "\t"
    #
    #   ## Trim trailing characters in cutset
    #   # [[processors.strings.trim_right]]
    #   #   field = "message"
    #   #   cutset = "\r\n"
    #
    #   ## Trim the given prefix from the field
    #   # [[processors.strings.trim_prefix]]
    #   #   field = "my_value"
    #   #   prefix = "my_"
    #
    #   ## Trim the given suffix from the field
    #   # [[processors.strings.trim_suffix]]
    #   #   field = "read_count"
    #   #   suffix = "_count"
    #
    #   ## Replace all non-overlapping instances of old with new
    #   # [[processors.strings.replace]]
    #   #   measurement = "*"
    #   #   old = ":"
    #   #   new = "_"
    # # Print all metrics that pass through this filter.
    # [[processors.topk]]
    #   ## How many seconds between aggregations
    #   # period = 10
    #
    #   ## How many top metrics to return
    #   # k = 10
    #
    #   ## Over which tags should the aggregation be done. Globs can be specified, in
    #   ## which case any tag matching the glob will aggregated over. If set to an
    #   ## empty list is no aggregation over tags is done
    #   # group_by = ['*']
    #
    #   ## Over which fields are the top k are calculated
    #   # fields = ["value"]
    #
    #   ## What aggregation to use. Options: sum, mean, min, max
    #   # aggregation = "mean"
    #
    #   ## Instead of the top k largest metrics, return the bottom k lowest metrics
    #   # bottomk = false
    #
    #   ## The plugin assigns each metric a GroupBy tag generated from its name and
    #   ## tags. If this setting is different than "" the plugin will add a
    #   ## tag (which name will be the value of this setting) to each metric with
    #   ## the value of the calculated GroupBy tag. Useful for debugging
    #   # add_groupby_tag = ""
    #
    #   ## These settings provide a way to know the position of each metric in
    #   ## the top k. The 'add_rank_field' setting allows to specify for which
    #   ## fields the position is required. If the list is non empty, then a field
    #   ## will be added to each and every metric for each string present in this
    #   ## setting. This field will contain the ranking of the group that
    #   ## the metric belonged to when aggregated over that field.
    #   ## The name of the field will be set to the name of the aggregation field,
    #   ## suffixed with the string '_topk_rank'
    #   # add_rank_fields = []
    #
    #   ## These settings provide a way to know what values the plugin is generating
    #   ## when aggregating metrics. The 'add_agregate_field' setting allows to
    #   ## specify for which fields the final aggregation value is required. If the
    #   ## list is non empty, then a field will be added to each every metric for
    #   ## each field present in this setting. This field will contain
    #   ## the computed aggregation for the group that the metric belonged to when
    #   ## aggregated over that field.
    #   ## The name of the field will be set to the name of the aggregation field,
    #   ## suffixed with the string '_topk_aggregate'
    #   # add_aggregate_fields = []
    ###############################################################################
    #                            AGGREGATOR PLUGINS                               #
    ###############################################################################
    # # Keep the aggregate basicstats of each metric passing through.
    # [[aggregators.basicstats]]
    #   ## The period on which to flush & clear the aggregator.
    #   period = "30s"
    #   ## If true, the original metric will be dropped by the
    #   ## aggregator and will not get sent to the output plugins.
    #   drop_original = false
    #
    #   ## Configures which basic stats to push as fields
    #   # stats = ["count", "min", "max", "mean", "stdev", "s2", "sum"]
    # # Report the final metric of a series
    # [[aggregators.final]]
    #   ## The period on which to flush & clear the aggregator.
    #   period = "30s"
    #   ## If true, the original metric will be dropped by the
    #   ## aggregator and will not get sent to the output plugins.
    #   drop_original = false
    #
    #   ## The time that a series is not updated until considering it final.
    #   series_timeout = "5m"
    # # Create aggregate histograms.
    # [[aggregators.histogram]]
    #   ## The period in which to flush the aggregator.
    #   period = "30s"
    #
    #   ## If true, the original metric will be dropped by the
    #   ## aggregator and will not get sent to the output plugins.
    #   drop_original = false
    #
    #   ## If true, the histogram will be reset on flush instead
    #   ## of accumulating the results.
    #   reset = false
    #
    #   ## Example config that aggregates all fields of the metric.
    #   # [[aggregators.histogram.config]]
    #   #   ## The set of buckets.
    #   #   buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
    #   #   ## The name of metric.
    #   #   measurement_name = "cpu"
    #
    #   ## Example config that aggregates only specific fields of the metric.
    #   # [[aggregators.histogram.config]]
    #   #   ## The set of buckets.
    #   #   buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
    #   #   ## The name of metric.
    #   #   measurement_name = "diskio"
    #   #   ## The concrete fields of metric
    #   #   fields = ["io_time", "read_time", "write_time"]
    # # Keep the aggregate min/max of each metric passing through.
    # [[aggregators.minmax]]
    #   ## General Aggregator Arguments:
    #   ## The period on which to flush & clear the aggregator.
    #   period = "30s"
    #   ## If true, the original metric will be dropped by the
    #   ## aggregator and will not get sent to the output plugins.
    #   drop_original = false
    # # Count the occurrence of values in fields.
    # [[aggregators.valuecounter]]
    #   ## General Aggregator Arguments:
    #   ## The period on which to flush & clear the aggregator.
    #   period = "30s"
    #   ## If true, the original metric will be dropped by the
    #   ## aggregator and will not get sent to the output plugins.
    #   drop_original = false
    #   ## The fields for which the values will be counted
    #   fields = []
    ###############################################################################
    #                            INPUT PLUGINS                                    #
    ###############################################################################
    # Read metrics about cpu usage
    [[inputs.cpu]]
      ## Whether to report per-cpu stats or not
      percpu = true
      ## Whether to report total system cpu stats or not
      totalcpu = true
      ## If true, collect raw CPU time metrics.
      collect_cpu_time = false
      ## If true, compute and report the sum of all non-idle CPU states.
      report_active = false
    # Read metrics about memory usage
    [[inputs.mem]]
      # no configuration
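
      The connection settings in [[outputs.influxdb]] are commented out; when left that way Telegraf
      falls back to defaults equivalent to them (the startup log later in this post shows writes going
      to localhost:8086 and db=telegraf). To target a different server, uncomment and edit them; a
      minimal sketch using the documented defaults:

    [[outputs.influxdb]]
      urls = ["http://127.0.0.1:8086"]
      database = "telegraf"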
     
     
    • Run in test mode

      This gathers metrics once and prints them to stdout, which is convenient for debugging:

    telegraf --config telegraf.conf --test
     

    Output:

    2019-07-27T15:36:34Z I! Starting Telegraf 1.11.3
    > cpu,cpu=cpu0,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=38,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=6,usage_user=56 1564241795000000000
    > cpu,cpu=cpu1,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=96,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=2,usage_user=2 1564241795000000000
    > cpu,cpu=cpu2,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=44,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=4,usage_user=52 1564241795000000000
    > cpu,cpu=cpu3,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=100,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=0,usage_user=0 1564241795000000000
    > cpu,cpu=cpu4,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=54,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=2,usage_user=44 1564241795000000000
     
    • Start

      Since I have not installed InfluxDB, the log below contains connection errors. One way to install
      InfluxDB is to run it in a container; a sketch follows.
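
      A minimal sketch of running InfluxDB 1.x in Docker (the image tag is just an example; 8086 is the
      default HTTP port the generated config expects):

     docker run -d --name influxdb -p 8086:8086 influxdb:1.7

      Then start Telegraf with the generated config: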

    telegraf --config telegraf.conf

    Log output:

    2019-07-27T15:34:27Z I! Starting Telegraf 1.11.3
    2019-07-27T15:34:27Z I! Loaded inputs: cpu mem
    2019-07-27T15:34:27Z I! Loaded aggregators: 
    2019-07-27T15:34:27Z I! Loaded processors: 
    2019-07-27T15:34:27Z I! Loaded outputs: influxdb
    2019-07-27T15:34:27Z I! Tags enabled: host=dalongrong.local
    2019-07-27T15:34:27Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"dalongrong.local", Flush Interval:10s
    2019-07-27T15:34:27Z W! [outputs.influxdb] when writing to [http://localhost:8086]: database "" creation failed: Post http://localhost:8086/query: dial tcp [::1]:8086: connect: connection refused
    2019-07-27T15:34:40Z E! [outputs.influxdb] when writing to [http://localhost:8086]: Post http://localhost:8086/write?db=telegraf: dial tcp [::1]:8086: connect: connection refused
     
     

    Commands included with telegraf

    • The help command
      Going through the built-in commands and flags is a quick way to learn how the tool is used.
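
      The output below comes from running:

     telegraf --help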
     
    Telegraf, The plugin-driven server agent for collecting and reporting metrics.
    Usage:
      telegraf [commands|flags]
    The commands & flags are:
      config                          print out full sample configuration to stdout
      version                         print the version to stdout

      --aggregator-filter <filter>    filter the aggregators to enable, separator is :
      --config <file>                 configuration file to load
      --config-directory <directory>  directory containing additional *.conf files
      --debug                         turn on debug logging
      --input-filter <filter>         filter the inputs to enable, separator is :
      --input-list                    print available input plugins.
      --output-filter <filter>        filter the outputs to enable, separator is :
      --output-list                   print available output plugins.
      --pidfile <file>                file to write our pid to
      --pprof-addr <address>          pprof address to listen on, don't activate pprof if empty
      --processor-filter <filter>     filter the processors to enable, separator is :
      --quiet                         run in quiet mode
      --section-filter                filter config sections to output, separator is :
                                      Valid values are 'agent', 'global_tags', 'outputs',
                                      'processors', 'aggregators' and 'inputs'
      --sample-config                 print out full sample configuration
      --test                          gather metrics, print them out, and exit;
                                      processors, aggregators, and outputs are not run
      --usage <plugin>                print usage for a plugin, ie, 'telegraf --usage mysql'
      --version                       display the version and exit
    Examples:
      # generate a telegraf config file:
      telegraf config > telegraf.conf
      # generate config with only cpu input & influxdb output plugins defined
      telegraf --input-filter cpu --output-filter influxdb config
      # run a single telegraf collection, outputing metrics to stdout
      telegraf --config telegraf.conf --test
      # run telegraf with all plugins defined in config file
      telegraf --config telegraf.conf
      # run telegraf, enabling the cpu & memory input, and influxdb output plugins
      telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb
      # run telegraf with pprof
      telegraf --config telegraf.conf --pprof-addr localhost:6060
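
      For example, to list the available input plugins and print the documented options for one of them
      (the cpu input used above):

     telegraf --input-list
     telegraf --usage cpu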

    Notes

    The above covers a simple installation and basic usage; later posts will look at the input, output,
    aggregator, and processor plugins.

    References

    https://github.com/influxdata/telegraf
    https://docs.influxdata.com/telegraf/v1.11/
