A blockchain experiment with Apache Kafka

by Luc Russell

Blockchain technology and Apache Kafka share characteristics which suggest a natural affinity. For instance, both share the concept of an ‘immutable append only log’. In the case of a Kafka partition:

Each partition is an ordered, immutable sequence of records that is continually appended to — a structured commit log. The records in the partitions are each assigned a sequential id number called the offset that uniquely identifies each record within the partition [Apache Kafka]

Whereas a blockchain can be described as:

a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically contains a hash pointer as a link to a previous block, a timestamp and transaction data [Wikipedia]

Clearly, these technologies share the parallel concepts of an immutable sequential structure, with Kafka being particularly optimized for high throughput and horizontal scalability, and blockchain excelling in guaranteeing the order and structure of a sequence.

By integrating these technologies, we can create a platform for experimenting with blockchain concepts.

Kafka provides a convenient framework for distributed peer to peer communication, with some characteristics particularly suitable for blockchain applications. While this approach may not be viable in a trustless public environment, there could be practical uses in a private or consortium network. See Scaling Blockchains with Apache Kafka for further ideas on how this could be implemented.

Additionally, with some experimentation, we may be able to draw on concepts already implemented in Kafka (e.g. sharding by partition) to explore solutions to blockchain challenges in public networks (e.g. scalability problems).

The purpose of this experiment is therefore to take a simple blockchain implementation and port it to the Kafka platform; we’ll take Kafka’s concept of a sequential log and guarantee immutability by chaining the entries together with hashes. The blockchain topic on Kafka will become our distributed ledger.

Introduction to Kafka

Kafka is a streaming platform designed for high-throughput, real-time messaging, i.e. it enables publication and subscription to streams of records. In this respect it is similar to a message queue or a traditional enterprise messaging system. Some of the characteristics are:

  • High throughput: Kafka brokers can absorb gigabytes of data per second, translating into millions of messages per second. You can read more about the scalability characteristics in Benchmarking Apache Kafka: 2 Million Writes Per Second.

  • Competing consumers: Simultaneous delivery of messages to multiple consumers, typically expensive in traditional messaging systems, is no more complex than for a single consumer. This means we can design for competing consumers, guaranteeing that each consumer will receive only one of the messages and achieving a high degree of horizontal scalability.

  • Fault tolerance: By replicating data across multiple nodes in a cluster, the impact of individual node failures is minimized.

  • Message retention and replay: Kafka brokers maintain a record of consumer offsets — a consumer’s position in the stream of messages. Using this, consumers can rewind to a previous position in the stream even if the messages have already been delivered, allowing them to recreate the status of the system at a point in time. Brokers can be configured to retain messages indefinitely, which is necessary for blockchain applications.

In Kafka, each topic is split into partitions, where each partition is a sequence of records which is continually appended to. This is similar to a text log file, where new lines are appended to the end. The entries in the partition are each assigned a sequential id, called an offset, which uniquely identifies the record.

The Kafka broker can be queried by offset, i.e. a consumer can reset its offset to some arbitrary point in the log to retrieve records from that point forward.

Tutorial

Full source code is available here.

Prerequisites

  • Some understanding of blockchain concepts: The tutorial below is based on implementations from Daniel van Flymen and Gerald Nash, both excellent practical introductions. The following tutorial builds heavily on these concepts, while using Kafka as the message transport. In effect, we’ll port a Python blockchain to Kafka, while maintaining most of the current implementation.

  • Basic knowledge of Python: the code is written for Python 3.6.

  • Docker: docker-compose is used to run the Kafka broker.

  • kafkacat: This is a useful tool for interacting with Kafka (e.g. publishing messages to topics).

On startup, our Kafka consumer will try to do three things: initialize a new blockchain if one has not yet been created; build an internal representation of the current state of the blockchain topic; then begin reading transactions in a loop:
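
A minimal sketch of this startup sequence might look like the following, assuming the pykafka client (the consumer parameters discussed later in the article match its API); the helper names are illustrative and are expanded in the sections below:

import json
from time import time

from pykafka import KafkaClient
from pykafka.common import OffsetType

def main(partition_id):
    client = KafkaClient(hosts='kafka:9092')
    initialize(client)  # create the blockchain if it doesn't exist yet
    chain, latest_offset = read_and_validate_chain(client)  # rebuild our copy of the chain
    consume_transactions(client, chain, latest_offset, partition_id)  # read transactions in a loop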

The initialization step looks like this:
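
(A sketch; chain_exists and publish_genesis_block are illustrative helper names, filled in below.)

def initialize(client):
    blockchain_topic = client.topics[b'blockchain']
    if not chain_exists(blockchain_topic):
        publish_genesis_block(blockchain_topic)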

First, we find the highest available offset on the blockchain topic. If nothing has ever been published to the topic, the blockchain is new, so we start by creating and publishing the genesis block:
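
A sketch of those two operations, assuming pykafka’s latest_available_offsets() and the genesis block shape from van Flymen’s implementation:

def chain_exists(topic):
    # The latest available offset is 0 on a partition nothing has been published to
    offsets = topic.latest_available_offsets()
    return any(response.offset[0] > 0 for response in offsets.values())

def publish_genesis_block(topic):
    genesis_block = {'index': 1, 'timestamp': time(), 'transactions': [],
                     'proof': 100, 'previous_hash': '1'}
    with topic.get_sync_producer() as producer:
        producer.produce(json.dumps(genesis_block).encode('utf-8'))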

In read_and_validate_chain(), we’ll first create a consumer to read from the blockchain topic:
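
(A sketch, assuming pykafka; the parameters are explained below.)

consumer = client.topics[b'blockchain'].get_simple_consumer(
    consumer_group=b'blockchain',
    auto_offset_reset=OffsetType.EARLIEST,
    auto_commit_enable=True,
    reset_offset_on_start=True,
    consumer_timeout_ms=5000)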

Some notes on the parameters we’re creating this consumer with:

  • Setting the consumer group to the blockchain group allows the broker to keep track of the offset the consumers have reached for a given partition and topic.

  • auto_offset_reset=OffsetType.EARLIEST indicates that we’ll begin downloading messages from the start of the topic.

  • auto_commit_enable=True periodically notifies the broker of the offset we’ve just consumed (as opposed to committing manually).

  • reset_offset_on_start=True is a switch which activates the auto_offset_reset for the consumer.

  • consumer_timeout_ms=5000 will cause the consumer to return from the method after five seconds if no new messages are being read (i.e. we’ve reached the end of the chain).

Then we begin reading block messages from the blockchain topic:
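
A sketch of the read loop; valid_block is an illustrative stand-in for the hash-and-proof validation in the original implementation:

chain = []
latest_offset = 0
for message in consumer:
    block = json.loads(message.value.decode('utf-8'))
    if not chain:
        chain.append(block)              # genesis block: nothing to validate against
    elif valid_block(chain[-1], block):  # check previous_hash and proof against the last block
        chain.append(block)
    latest_offset = message.offset       # keep a note of how far we've read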

For each message we receive:

  1. If it’s the first block in the chain, skip validation and add to our internal copy (this is the genesis block)

  2. Otherwise, check the block is valid with respect to the previous block, and append it to our copy

  3. Keep a note of the offset of the block we just consumed

At the end of this process, we’ll have downloaded the whole chain, discarding any invalid blocks, and we’ll have a reference to the offset of the latest block.

At this point, we’re ready to create a consumer on the transactions topic:
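
A sketch, again assuming pykafka; partition_id is the partition number passed on the command line (see “In Action” below):

transactions_topic = client.topics[b'transactions']
consumer = transactions_topic.get_simple_consumer(
    consumer_group=b'transactions',
    auto_offset_reset=OffsetType.LATEST,  # only transactions published from now on
    reset_offset_on_start=True,
    partitions=[transactions_topic.partitions[partition_id]])  # pin to one partition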

Our example topic has been created with two partitions, to demonstrate how partitioning works in Kafka. The partitions are set up in the docker-compose.yml file, with this line:

KAFKA_CREATE_TOPICS=transactions:2:1,blockchain:1:1

transactions:2:1 specifies the number of partitions and the replication factor (i.e. how many brokers will maintain a copy of the data on this partition).

This time, our consumer will start from OffsetType.LATEST so we only get transactions published from the current time onwards.

By pinning the consumer to a specific partition of the transactions topic, we can increase the total throughput of all consumers on the topic. The Kafka broker will evenly distribute incoming messages across the two partitions of the transactions topic, unless we specify a partition when we publish to the topic. This means each consumer will be responsible for processing 50% of the messages, doubling the potential throughput of a single consumer.

Now we can begin consuming transactions:
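
A sketch of the consumption loop; mine() is described next:

transactions = []
for message in consumer:
    transactions.append(json.loads(message.value.decode('utf-8')))
    if len(transactions) == 3:  # every three transactions, cut a new block
        mine(client, chain, latest_offset, transactions)
        transactions = []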

As transactions are received, we’ll add them to an internal list. Every three transactions, we’ll create a new block and call mine():
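
A sketch of mine(); proof_of_work, new_block, and hash_block stand in for the corresponding pieces of van Flymen’s implementation, node_id is an illustrative node identifier, and the helper signatures are illustrative. The comments map to the numbered steps below:

def mine(client, chain, latest_offset, transactions):
    # 1-2. Consensus: fetch and validate any blocks other nodes have published past our saved offset
    newer_blocks, latest_offset = read_and_validate_chain(client, from_offset=latest_offset + 1)
    chain.extend(newer_blocks)
    last_block = chain[-1]
    # 3. Calculate the proof of work, based on the proof from the latest block
    proof = proof_of_work(last_block['proof'])
    # 4. Pay ourselves a small block reward
    transactions.append({'sender': '0', 'recipient': node_id, 'amount': 1})
    # 5. Publish the new block onto the blockchain topic
    block = new_block(proof, transactions, previous_hash=hash_block(last_block))
    publish_block(client, block)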

  1. First, we’ll check if our blockchain is the longest one in the network; is our saved offset the latest, or have other nodes already published later blocks to the blockchain? This is our consensus step.

  2. If new blocks have already been appended, we’ll make use of the read_and_validate_chain from before, this time supplying our latest known offset to retrieve only the newer blocks.

  3. At this point, we can attempt to calculate the proof of work, basing it on the proof from the latest block.

  4. To reward ourselves for solving the proof of work, we can insert a transaction into the block, paying ourselves a small block reward.

  5. Finally, we’ll publish our block onto the blockchain topic. The publish method looks like this:
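
(A sketch, assuming a synchronous pykafka producer.)

def publish_block(client, block):
    blockchain_topic = client.topics[b'blockchain']
    with blockchain_topic.get_sync_producer() as producer:
        producer.produce(json.dumps(block).encode('utf-8'))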

In Action

1. First start the broker:

docker-compose up -d

2. Run a consumer on partition 0:

python kafka_blockchain.py 0

3. Publish 3 transactions directly to partition 0:
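
A publication script along these lines will do it (the sender/recipient values are illustrative; the transaction shape follows van Flymen’s schema):

for amount in 1 2 3; do
  echo "{\"sender\": \"alice\", \"recipient\": \"bob\", \"amount\": $amount}" \
    | kafkacat -P -b kafka:9092 -t transactions -p 0
done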

4. Check the transactions were added to a block on the blockchain topic:

kafkacat -C -b kafka:9092 -t blockchain

You should see the chain printed as JSON: first the genesis block, then a newly mined block containing your three transactions plus the block reward.

To balance transactions across two consumers, start a second consumer on partition 1, and remove -p 0 from the publication script above.

Conclusion

Kafka can provide the foundation for a simple framework for blockchain experimentation. We can take advantage of features built into the platform, and associated tools like kafkacat, to experiment with distributed peer to peer transactions.

While scaling transactions in a public setting presents one set of issues, within a private network or consortium, where real-world trust is already established, transaction scaling might be achieved via an implementation which takes advantage of Kafka concepts.

Translated from: https://www.freecodecamp.org/news/a-blockchain-experiment-with-apache-kafka-97ee0ab6aefc/


emoji .pngby Evaristo Caraballo通过Evaristo Caraballo 根据我对3.5GB聊天记录的分析&#xff0c;Emoji开发人员使用最多 (The Emoji developers use most — based on my analysis of 3.5GB of chat logs) Emoji have drastically changed the way we communicate in socia…