Telegraf Installation and Usage

1 Installation

1.1 Create a User

  (1) Add the user

# useradd tigk
# passwd tigk
Changing password for user tigk.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.

  (2) Grant sudo privileges

  A regular user only has full permissions under its own home directory; access anywhere else must be granted. Since root privileges are frequently needed, grant them by editing the sudoers file so the user can run sudo.

# Grant the owner write permission on /etc/sudoers
# chmod -v u+w /etc/sudoers
mode of ‘/etc/sudoers’ changed from 0440 (r--r-----) to 0640 (rw-r-----)

  Edit the sudoers file (vi /etc/sudoers) and add the entry "tigk ALL=(ALL) ALL":

## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
tigk    ALL=(ALL)       ALL

  Revoke the write permission:

#  chmod -v u-w /etc/sudoers
mode of ‘/etc/sudoers’ changed from 0640 (rw-r-----) to 0440 (r--r-----)

Create the tigk installation directory:

# su - tigk
$ mkdir /home/tigk/.local

  (3) Create directories for the TIGK components

# mkdir /data/tigk
# chown tigk:tigk /data/tigk
# su - tigk
$ mkdir /data/tigk/telegraf
$ mkdir /data/tigk/influxdb
$ mkdir /data/tigk/kapacitor

1.2 Tarball Installation

1.2.1 Download the tarball

wget https://dl.influxdata.com/telegraf/releases/telegraf-1.14.4_linux_amd64.tar.gz

1.2.2 Extract the tarball

$ tar xf /opt/package/telegraf-1.14.4_linux_amd64.tar.gz -C /home/tigk/.local/

1.2.3 Generate a basic configuration

  The executable is at {telegraf root}/usr/bin/telegraf. A sample configuration file ships under the extracted etc directory, or you can generate one directly:

  View the help: telegraf --help

  Generate a configuration file: telegraf config > telegraf.conf

  Generate a configuration file with the cpu, mem, http_listener, and influxdb plugins:
telegraf --input-filter cpu:mem:http_listener --output-filter influxdb config > telegraf.conf

  Run the program: telegraf --config telegraf.conf

  Start it in the background: nohup telegraf --config telegraf.conf > /dev/null 2>&1 &

$ cd /home/tigk/.local/telegraf/usr/bin
$ ./telegraf --help
$ ./telegraf config > telegraf.conf
$ ./telegraf --input-filter cpu:mem:http_listener --output-filter influxdb config > telegraf.conf

1.2.4 Edit the configuration file

$ mkdir /data/tigk/telegraf/logs
$ mkdir /data/tigk/telegraf/conf
$ cp /home/tigk/.local/telegraf/usr/bin/telegraf.conf /data/tigk/telegraf/conf
$ vim /data/tigk/telegraf/conf/telegraf.conf
Find the [outputs.influxdb] section, supply the username and password, and set the agent log file:

[[outputs.influxdb]]
  urls = ["http://10.0.165.2:8085"]
  timeout = "5s"
  username = "tigk"
  password = "tigk"

[agent]
  logfile = "/data/tigk/telegraf/logs/telegraf.log"

Start it:

$ cd /home/tigk/.local/telegraf/usr/bin
$ nohup ./telegraf --config /data/tigk/telegraf/conf/telegraf.conf   &

1.3 RPM Installation

  (1) Download the RPM package

wget https://dl.influxdata.com/telegraf/releases/telegraf-1.14.4-1.x86_64.rpm

  (2) Install the RPM package

sudo yum localinstall telegraf-1.14.4-1.x86_64.rpm

  (3) Start the service and enable it at boot

systemctl start telegraf.service
systemctl status telegraf.service
systemctl enable telegraf.service

  (4) Check the version and edit the configuration file

telegraf --version

  Default configuration file location: /etc/telegraf/telegraf.conf
Edit the telegraf configuration file:

vim /etc/telegraf/telegraf.conf 

  (5) Start the service

systemctl start telegraf.service

2 Usage

2.1 Common Commands and Configuration

  (1) Command help: telegraf -h

$ ./telegraf -h
Telegraf, The plugin-driven server agent for collecting and reporting metrics.

Usage:

  telegraf [commands|flags]

The commands & flags are:

  config              print out full sample configuration to stdout
  version             print the version to stdout

  --aggregator-filter <filter>   filter the aggregators to enable, separator is :
  --config <file>                configuration file to load
  --config-directory <directory> directory containing additional *.conf files
  --plugin-directory             directory containing *.so files, this directory will be
                                 searched recursively. Any Plugin found will be loaded
                                 and namespaced.
  --debug                        turn on debug logging
  --input-filter <filter>        filter the inputs to enable, separator is :
  --input-list                   print available input plugins.
  --output-filter <filter>       filter the outputs to enable, separator is :
  --output-list                  print available output plugins.
  --pidfile <file>               file to write our pid to
  --pprof-addr <address>         pprof address to listen on, don't activate pprof if empty
  --processor-filter <filter>    filter the processors to enable, separator is :
  --quiet                        run in quiet mode
  --section-filter               filter config sections to output, separator is :
                                 Valid values are 'agent', 'global_tags', 'outputs',
                                 'processors', 'aggregators' and 'inputs'
  --sample-config                print out full sample configuration
  --test                         gather metrics, print them out, and exit;
                                 processors, aggregators, and outputs are not run
  --test-wait                    wait up to this many seconds for service
                                 inputs to complete in test mode
  --usage <plugin>               print usage for a plugin, ie, 'telegraf --usage mysql'
  --version                      display the version and exit

Examples:

  # generate a telegraf config file:
  telegraf config > telegraf.conf

  # generate config with only cpu input & influxdb output plugins defined
  telegraf --input-filter cpu --output-filter influxdb config

  # run a single telegraf collection, outputing metrics to stdout
  telegraf --config telegraf.conf --test

  # run telegraf with all plugins defined in config file
  telegraf --config telegraf.conf

  # run telegraf, enabling the cpu & memory input, and influxdb output plugins
  telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb

  # run telegraf with pprof
  telegraf --config telegraf.conf --pprof-addr localhost:6060

  (2) Command usage

telegraf --help
    Show the help text.
telegraf config > telegraf.conf
    Print a sample configuration template to stdout (redirected to a file here).
telegraf --input-filter cpu --output-filter influxdb config
    Generate a configuration template containing only the cpu input and influxdb output plugins.
telegraf --config telegraf.conf --test
    Run a single collection with the given configuration file and print the metrics to stdout.
telegraf --config telegraf.conf
    Start telegraf with the given configuration file.
telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb
    Start telegraf with the given configuration file, enabling only the cpu and mem inputs and the influxdb output.

  (3) Configuration file locations

Linux RPM package: default config /etc/telegraf/telegraf.conf, additional configs in /etc/telegraf/telegraf.d
Linux tarball: default config {install dir}/etc/telegraf/telegraf.conf, additional configs in {install dir}/etc/telegraf/telegraf.d

  (4) How configuration is loaded
  By default telegraf loads telegraf.conf plus every config file under /etc/telegraf/telegraf.d; the --config and --config-directory options change this behavior. Each input block in the configuration is collected by its own thread, so duplicated input blocks waste resources.

  (5) Global tags
  Key/value pairs of the form key = "value" defined in the [global_tags] section of the configuration file are added as tags to every metric collected.
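As a minimal sketch (the tag names and values here are made up for illustration):

```toml
[global_tags]
  # added as tags to every metric this agent collects
  dc = "dc-01"
  team = "ops"
```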
  (6) Agent configuration
  The [agent] section configures the collection agent for the whole host.

interval
    Data collection interval.
round_interval
    Whether to align collection to the interval boundary; e.g. with interval = "10s", collection happens at :00, :10, :20, ... of each minute.
metric_batch_size
    Batch size of metrics sent to outputs.
metric_buffer_limit
    Size of the buffer of metrics pending for each output.
collection_jitter
    Maximum random sleep before each collection, to keep agents from all collecting at the same instant.
flush_interval
    Interval between flushes to outputs.
flush_jitter
    Maximum random sleep before each flush, to avoid a large write spike when many agents flush together.
precision
    Timestamp precision.
logfile
    Log file name.
debug
    Whether to run in debug mode.
quiet
    Quiet mode; log only error messages.
hostname
    Defaults to os.Hostname(); overrides it if set.
omit_hostname
    Whether to omit the host tag from collected metrics.
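Putting these options together, an [agent] section might look like the sketch below (all values are illustrative, not recommendations):

```toml
[agent]
  interval = "10s"          # collect every 10 seconds
  round_interval = true     # align collection to :00, :10, :20, ...
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  logfile = "/data/tigk/telegraf/logs/telegraf.log"
  debug = false
  quiet = false
  hostname = ""             # empty: defaults to os.Hostname()
  omit_hostname = false
```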

  (7) Common input plugin options

interval
    Collection interval for this input; overrides the agent setting if present.
name_override
    Replace the output measurement name.
name_prefix
    Prefix prepended to the measurement name.
name_suffix
    Suffix appended to the measurement name.
tags
    A map of tags added to the output measurement.
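For example, the sketch below (values are ours, for illustration) makes the cpu input report under a renamed measurement with an extra static tag:

```toml
[[inputs.cpu]]
  interval = "30s"          # overrides the agent-level interval for this input
  name_prefix = "myhost_"   # points are written to measurement "myhost_cpu"
  [inputs.cpu.tags]
    team = "ops"            # added to every point produced by this input
```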

  (8) Common output plugin options: none.
  (9) Measurement filtering; filters can be defined in input, output, and other plugin blocks.

namepass
    Only points whose measurement name matches one of these patterns pass.
namedrop
    Points whose measurement name matches are dropped.
fieldpass
    Only fields whose key matches pass.
fielddrop
    Fields whose key matches are dropped.
tagpass
    Only points whose tags match pass.
tagdrop
    Points whose tags match are dropped.
taginclude
    Keep only the matching tags on each point.
tagexclude
    Remove the matching tags from each point.
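A sketch combining these filters (the patterns are illustrative):

```toml
[[inputs.disk]]
  # only pass points whose path tag is the root mount point
  [inputs.disk.tagpass]
    path = ["/"]

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  # only write the cpu and mem measurements to this output
  namepass = ["cpu", "mem"]
```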

  (10) Typical configuration examples
  ①Input - System – cpu

# Read metrics about cpu usage
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics.
  collect_cpu_time = false
  ## If true, compute and report the sum of all non-idle CPU states.
  report_active = false

  ②Input - System – disk

# Read metrics about disk usage by mount point
[[inputs.disk]]
  ## By default stats will be gathered for all mount points.
  ## Set mount_points will restrict the stats to only the specified mount points.
  # mount_points = ["/"]

  ## Ignore mount points by filesystem type.
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

  ③Input - System – kernel

# Get kernel statistics from /proc/stat
[[inputs.kernel]]
  # no configuration

  ④Input - System – MEM

# Read metrics about memory usage
[[inputs.mem]]
  # no configuration

  ⑤Input - System – netstat

# # Read TCP metrics such as established, time wait and sockets counts.
# [[inputs.netstat]]
#   # no configuration

  ⑥Input - System – processes

# Get the number of processes and group them by status
[[inputs.processes]]
  # no configuration

  ⑦Input - System – system

# Read metrics about system load & uptime
[[inputs.system]]
  ## Uncomment to remove deprecated metrics.
  # fielddrop = ["uptime_format"]

  ⑧Input - System – ping

# # Ping given url(s) and return statistics
# [[inputs.ping]]
#   ## Hosts to send ping packets to.
#   urls = ["example.org"]
#
#   ## Method used for sending pings, can be either "exec" or "native".  When set
#   ## to "exec" the systems ping command will be executed.  When set to "native"
#   ## the plugin will send pings directly.
#   ##
#   ## While the default is "exec" for backwards compatibility, new deployments
#   ## are encouraged to use the "native" method for improved compatibility and
#   ## performance.
#   # method = "exec"
#
#   ## Number of ping packets to send per interval.  Corresponds to the "-c"
#   ## option of the ping command.
#   # count = 1
#
#   ## Time to wait between sending ping packets in seconds.  Operates like the
#   ## "-i" option of the ping command.
#   # ping_interval = 1.0
#
#   ## If set, the time to wait for a ping response in seconds.  Operates like
#   ## the "-W" option of the ping command.
#   # timeout = 1.0
#
#   ## If set, the total ping deadline, in seconds.  Operates like the -w option
#   ## of the ping command.
#   # deadline = 10
#
#   ## Interface or source address to send ping from.  Operates like the -I or -S
#   ## option of the ping command.
#   # interface = ""
#
#   ## Specify the ping executable binary.
#   # binary = "ping"
#
#   ## Arguments for ping command. When arguments is not empty, the command from
#   ## the binary option will be used and other options (ping_interval, timeout,
#   ## etc) will be ignored.
#   # arguments = ["-c", "3"]
#
#   ## Use only IPv6 addresses when resolving a hostname.
#   # ipv6 = false

  ⑨Input - App – procstat

# [[inputs.procstat]]
#   ## PID file to monitor process
#   pid_file = "/var/run/nginx.pid"
#   ## executable name (ie, pgrep <exe>)
#   # exe = "nginx"
#   ## pattern as argument for pgrep (ie, pgrep -f <pattern>)
#   # pattern = "nginx"
#   ## user as argument for pgrep (ie, pgrep -u <user>)
#   # user = "nginx"
#   ## Systemd unit name
#   # systemd_unit = "nginx.service"
#   ## CGroup name or path
#   # cgroup = "systemd/system.slice/nginx.service"
#
#   ## Windows service name
#   # win_service = ""
#
#   ## override for process_name
#   ## This is optional; default is sourced from /proc/<pid>/status
#   # process_name = "bar"
#
#   ## Field name prefix
#   # prefix = ""
#
#   ## When true add the full cmdline as a tag.
#   # cmdline_tag = false
#
#   ## Add PID as a tag instead of a field; useful to differentiate between
#   ## processes whose tags are otherwise the same.  Can create a large number
#   ## of series, use judiciously.
#   # pid_tag = false
#
#   ## Method to use when finding process IDs.  Can be one of 'pgrep', or
#   ## 'native'.  The pgrep finder calls the pgrep executable in the PATH while
#   ## the native finder performs the search directly in a manor dependent on the
#   ## platform.  Default is 'pgrep'
#   # pid_finder = "pgrep"

  ⑩Input – App – redis


# # Read metrics from one or many redis servers
# [[inputs.redis]]
#   ## specify servers via a url matching:
#   ##  [protocol://][:password]@address[:port]
#   ##  e.g.
#   ##    tcp://localhost:6379
#   ##    tcp://:password@192.168.99.100
#   ##    unix:///var/run/redis.sock
#   ##
#   ## If no servers are specified, then localhost is used as the host.
#   ## If no port is specified, 6379 is used
#   servers = ["tcp://localhost:6379"]
#
#   ## specify server password
#   # password = "s#cr@t%"
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = true

  ⑪Input – App – kafka_consumer

# # Read metrics from Kafka topics
# [[inputs.kafka_consumer]]
#   ## Kafka brokers.
#   brokers = ["localhost:9092"]
#
#   ## Topics to consume.
#   topics = ["telegraf"]
#
#   ## When set this tag will be added to all metrics with the topic as the value.
#   # topic_tag = ""
#
#   ## Optional Client id
#   # client_id = "Telegraf"
#
#   ## Set the minimal supported Kafka version.  Setting this enables the use of new
#   ## Kafka features and APIs.  Must be 0.10.2.0 or greater.
#   ##   ex: version = "1.1.0"
#   # version = ""
#
#   ## Optional TLS Config
#   # enable_tls = true
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = false
#
#   ## SASL authentication credentials.  These settings should typically be used
#   ## with TLS encryption enabled using the "enable_tls" option.
#   # sasl_username = "kafka"
#   # sasl_password = "secret"
#
#   ## SASL protocol version.  When connecting to Azure EventHub set to 0.
#   # sasl_version = 1
#
#   ## Name of the consumer group.
#   # consumer_group = "telegraf_metrics_consumers"
#
#   ## Initial offset position; one of "oldest" or "newest".
#   # offset = "oldest"
#
#   ## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
#   # balance_strategy = "range"
#
#   ## Maximum length of a message to consume, in bytes (default 0/unlimited);
#   ## larger messages are dropped
#   max_message_len = 1000000
#
#   ## Maximum messages to read from the broker that have not been written by an
#   ## output.  For best throughput set based on the number of metrics within
#   ## each message and the size of the output's metric_batch_size.
#   ##
#   ## For example, if each message from the queue contains 10 metrics and the
#   ## output metric_batch_size is 1000, setting this to 100 will ensure that a
#   ## full batch is collected and the write is triggered immediately without
#   ## waiting until the next flush_interval.
#   # max_undelivered_messages = 1000
#
#   ## Data format to consume.
#   ## Each data format has its own unique set of configuration options, read
#   ## more about them here:
#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
#   data_format = "influx"

  ⑫Input – App – exec

# # Read metrics from one or more commands that can output to stdout
# [[inputs.exec]]
#   ## Commands array
#   commands = [
#     "/tmp/test.sh",
#     "/usr/bin/mycollector --foo=bar",
#     "/tmp/collect_*.sh"
#   ]
#
#   ## Timeout for each command to complete.
#   timeout = "5s"
#
#   ## measurement name suffix (for separating different commands)
#   name_suffix = "_mycollector"
#
#   ## Data format to consume.
#   ## Each data format has its own unique set of configuration options, read
#   ## more about them here:
#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
#   data_format = "influx"

  ⑬Output – influxdb_v2

# # Configuration for sending metrics to InfluxDB
# [[outputs.influxdb_v2]]
#   ## The URLs of the InfluxDB cluster nodes.
#   ##
#   ## Multiple URLs can be specified for a single cluster, only ONE of the
#   ## urls will be written to each interval.
#   ##   ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
#   urls = ["http://127.0.0.1:9999"]
#
#   ## Token for authentication.
#   token = ""
#
#   ## Organization is the name of the organization you wish to write to; must exist.
#   organization = ""
#
#   ## Destination bucket to write into.
#   bucket = ""
#
#   ## The value of this tag will be used to determine the bucket.  If this
#   ## tag is not set the 'bucket' option is used as the default.
#   # bucket_tag = ""
#
#   ## If true, the bucket tag will not be added to the metric.
#   # exclude_bucket_tag = false
#
#   ## Timeout for HTTP messages.
#   # timeout = "5s"
#
#   ## Additional HTTP headers
#   # http_headers = {"X-Special-Header" = "Special-Value"}
#
#   ## HTTP Proxy override, if unset values the standard proxy environment
#   ## variables are consulted to determine which proxy, if any, should be used.
#   # http_proxy = "http://corporate.proxy:3128"
#
#   ## HTTP User-Agent
#   # user_agent = "telegraf"
#
#   ## Content-Encoding for write request body, can be set to "gzip" to
#   ## compress body or "identity" to apply no encoding.
#   # content_encoding = "gzip"
#
#   ## Enable or disable uint support for writing uints influxdb 2.0.
#   # influx_uint_support = false
#
#   ## Optional TLS Config for use on HTTP connections.
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Use TLS but skip chain & host verification
#   # insecure_skip_verify = false

2.2 Collecting from Applications Without an Official Input Plugin

  For example, to collect the applications running in YARN and store them in InfluxDB: ① use the exec input plugin to run a script whose standard output prints lines in the InfluxDB line protocol; ② inside the script, call the YARN REST API to fetch the running applications.

#!/usr/bin/python
# Python 2 script: query the YARN ResourceManager REST API for running
# Flink applications and print them in InfluxDB line protocol.
import json
import urllib
import httplib

host = "10.0.165.3:8088"
path = "/ws/v1/cluster/apps"
data = urllib.urlencode({'state': "RUNNING", "applicationTypes": "Apache Flink"})
path = path + "?" + data
headers = {"Accept": "application/json"}
conn = httplib.HTTPConnection(host)
conn.request("GET", path, headers=headers)
result = conn.getresponse()
if result.status:
    content = result.read()
    apps = json.loads(content)["apps"]["app"]
    for app in apps:
        if "test" in app["name"] or "TEST" in app["name"] or "Test" in app["name"]:
            continue
        # escape spaces in the tag value; quoted field values are left as-is
        app["escaped_name"] = app["name"].replace(' ', '\\ ')
        print "APPLICATION.RUNNING,appname=%s,appid=%s field_appname=\"%s\",field_appid=\"%s\" " % (app["escaped_name"], app["id"], app["name"], app["id"])

  The output looks like: APPLICATION.RUNNING,appname=iot_road_traffic,appid=application_1592979353214_0175 field_appname="iot_road_traffic",field_appid="application_1592979353214_0175"
  Configure the exec input plugin as follows:
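The script escapes spaces in tag values with a backslash but leaves spaces as-is inside quoted field values, as the line protocol requires. A small Python 3 sketch of that rule (the to_line helper is ours for illustration, not part of Telegraf):

```python
def to_line(measurement, tags, fields):
    """Build one InfluxDB line-protocol record with string fields.

    Tag keys/values escape commas, equals signs and spaces;
    string field values are double-quoted instead of escaped.
    """
    def esc_tag(s):
        return s.replace(",", "\\,").replace("=", "\\=").replace(" ", "\\ ")

    tag_part = ",".join("%s=%s" % (esc_tag(k), esc_tag(v)) for k, v in tags)
    field_part = ",".join('%s="%s"' % (k, v.replace('"', '\\"')) for k, v in fields)
    return "%s,%s %s" % (measurement, tag_part, field_part)

line = to_line(
    "APPLICATION.RUNNING",
    [("appname", "iot road traffic"), ("appid", "application_1592979353214_0175")],
    [("field_appname", "iot road traffic"), ("field_appid", "application_1592979353214_0175")],
)
print(line)
# APPLICATION.RUNNING,appname=iot\ road\ traffic,appid=application_1592979353214_0175 field_appname="iot road traffic",field_appid="application_1592979353214_0175"
```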

[[inputs.exec]]
  ## Commands array
  commands = ["python /data/tigk/telegraf/exec/getRunningFlinkJob.py"]

  ## Timeout for each command to complete.
  timeout = "5s"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
