Creating a Private IPFS Network with IPFS-Cluster

Introduction


IPFS networks come in two kinds:

  1. Public
  2. Private

Most commercial applications, especially enterprise solutions, need full control over their own data, and the public IPFS network is not suitable for that. For this class of application, building a private IPFS network is often a hard requirement.

In this article we walk through the process of creating a private IPFS network:

Create a private IPFS network with an IPFS cluster on top of it for data replication.

IPFS itself does not provide data replication between nodes. To replicate data in an IPFS network, there are two options:

  1. Filecoin
  2. IPFS-Cluster

In this article we use IPFS-Cluster.

We will implement the private network with three virtual machines. Here are the relevant references:

  1. IPFS: A protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia in a distributed file system. Read more

  2. Private IPFS: Peers in a private network share a single secret key, and can then communicate only with other peers in the same private IPFS network. Read more

  3. IPFS-Cluster: An IPFS cluster is a standalone application plus a CLI client. It allocates, replicates, and tracks pins across a swarm of IPFS daemons. IPFS-Cluster uses a leader-based consensus algorithm, Raft, to coordinate storage of a pinset, distributing the set of data across the participating nodes.

    • A cluster peer application: ipfs-cluster-service, to be run along with go-ipfs.
    • A client CLI application: ipfs-cluster-ctl, which allows easily interacting with the peer's HTTP API.
    • An additional "follower" peer application: ipfs-cluster-follow, focused on simplifying the process of configuring and running follower peers.

    Read more

Note that:

  1. Private networks are a default feature of IPFS core, while IPFS-Cluster is a separate application.
  2. IPFS and IPFS-Cluster are installed as different packages and started as different processes.
  3. IPFS and IPFS-Cluster have different peer IDs, different API endpoints, and use different ports.
  4. The IPFS-Cluster daemon depends on the IPFS daemon: start the IPFS daemon first, then the IPFS-Cluster daemon.
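The last point (the cluster daemon requires a running IPFS daemon) can be enforced explicitly in a startup script. A minimal sketch; the guard function is our own convenience, not part of either tool:

```shell
# Guard the startup order: refuse to start the cluster daemon
# unless an ipfs process is already running (hypothetical helper).
ensure_ipfs_running() {
  if pgrep -x ipfs >/dev/null 2>&1; then
    echo "ipfs daemon is running; ok to start ipfs-cluster-service daemon"
  else
    echo "start the ipfs daemon first"
    return 1
  fi
}

ensure_ipfs_running || true
```

The systemd units created later in this guide encode the same dependency declaratively.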

Setting Up a Private IPFS Network


By default, IPFS and IPFS-Cluster use the following ports:

IPFS

  • 4001 – communication with other nodes
  • 5001 – API server
  • 8080 – gateway server

IPFS-Cluster

  • 9094 – HTTP API endpoint
  • 9095 – IPFS proxy endpoint
  • 9096 – cluster swarm, used for communication between cluster nodes
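On cloud VMs these ports usually need explicit firewall rules between the nodes. A dry-run sketch that only prints the rules (it assumes ufw on Ubuntu; adjust for your provider's security groups, and remove the echo to actually apply):

```shell
# Print (not apply) ufw rules for the default IPFS and IPFS-Cluster ports.
print_fw_rules() {
  for port in 4001 5001 8080 9094 9095 9096; do
    echo "sudo ufw allow ${port}/tcp"
  done
}

print_fw_rules
```

In a real deployment you would restrict 5001 (the API) and 9094/9095 to localhost or a trusted network rather than opening them broadly.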

We will use three freshly created virtual machines (in my case from DigitalOcean) running Ubuntu Linux 16.04, with the command line as the main tool for installing the necessary packages and settings. Depending on your cloud provider (AWS, Azure, Google, etc.), you may need to adjust some additional settings, like firewall or security group configuration, to let your peers see each other.

Let’s suppose that we have three VMs with the following IP addresses:

  • Node0: 192.168.10.1
  • Node1: 192.168.10.2
  • Node2: 192.168.10.3

Let’s start with the zero node (Node0) which will be our bootstrap node.

Step 1: Install Go

First of all, let’s install Go as we will need it during our deployment process. Update Linux packages and dependencies:

sudo apt-get update
sudo apt-get -y upgrade

Download and unpack the latest version of Go:

wget https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz
sudo tar -xvf go1.11.4.linux-amd64.tar.gz
sudo mv go /usr/local

Create a path for Go and set the environment variables.

1. Create the folder:

mkdir $HOME/gopath

Open the .bashrc file and add the three variables GOROOT, GOPATH, and PATH at the end. Open the file:

sudo nano $HOME/.bashrc

Insert to the end of the .bashrc file:

export GOROOT=/usr/local/go
export GOPATH=$HOME/gopath
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

2. Update the .bashrc file and check the Go version:

source ~/.bashrc
go version

Step 2: Install IPFS

We will install the latest version of go-ipfs. At the time of writing this article, it was v0.4.18 for Linux. You can check for the latest version here: https://dist.ipfs.io/#go-ipfs

Download IPFS, unpack the tar file, move the binary under /usr/local/bin, and initialise the IPFS node:

wget https://dist.ipfs.io/go-ipfs/v0.4.18/go-ipfs_v0.4.18_linux-amd64.tar.gz
tar xvfz go-ipfs_v0.4.18_linux-amd64.tar.gz
sudo mv go-ipfs/ipfs /usr/local/bin/ipfs
ipfs init
ipfs version

Repeat steps 1 and 2 for all your VMs.

Step 3: Create the Private Network

Once you have Go and IPFS installed on all of your nodes, run the following command to install the swarm key generation utility. A swarm key allows us to create a private network and tells network peers to communicate only with peers who share this secret key.

This command should be run only on your Node0. We generate swarm.key on the bootstrap node and then just copy it to the rest of the nodes.

go get -u github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen

Now run this utility on your first node to generate swarm.key under the .ipfs folder:

ipfs-swarm-key-gen > ~/.ipfs/swarm.key

Copy the generated swarm.key file to the IPFS directory (~/.ipfs) of each node participating in the private network. Next, you need to remove the default bootstrap node entries from all the nodes you have created.
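Copying the key out to every node can be scripted. A dry-run sketch that only prints the scp commands; the node IPs, root login, and the default ~/.ipfs path are assumptions from this setup, and removing the echo would execute the copies:

```shell
# Print the scp commands that would copy swarm.key to the other nodes.
print_copy_cmds() {
  for host in 192.168.10.2 192.168.10.3; do
    echo "scp ~/.ipfs/swarm.key root@${host}:~/.ipfs/swarm.key"
  done
}

print_copy_cmds
```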

Step 4: Bootstrap the IPFS Nodes

ipfs bootstrap rm --all

Add the hash address of your bootstrap to each of the nodes including the bootstrap.

ipfs bootstrap add /ip4/192.168.10.1/tcp/4001/ipfs/QmQVvZEmvjhYgsyEC7NvMn8EWf131EcgTXFFJQYGSz4Y83

The IP part (192.168.10.1) should be changed to your Node0 machine’s IP. The last part is the peer ID, which is generated when you initialise your peer (ipfs init). You can see it in the init output where it shows “peer identity:”

QmQVvZEmvjhYgsyEC7NvMn8EWf131EcgTXFFJQYGSz4Y83

or if you run the ipfs id command in the console. So, you need to change the IP and peer ID according to your Node0. Do this for all of your nodes.
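The bootstrap entry is simply a multiaddr composed of the node's IP, the swarm port (4001), and its peer ID. A small helper to build it; the function is our own convenience, not part of IPFS:

```shell
# Compose a bootstrap multiaddr from a node IP and an IPFS peer ID.
build_bootstrap_addr() {
  local ip="$1" peer_id="$2"
  echo "/ip4/${ip}/tcp/4001/ipfs/${peer_id}"
}

build_bootstrap_addr 192.168.10.1 QmQVvZEmvjhYgsyEC7NvMn8EWf131EcgTXFFJQYGSz4Y83
# prints /ip4/192.168.10.1/tcp/4001/ipfs/QmQVvZEmvjhYgsyEC7NvMn8EWf131EcgTXFFJQYGSz4Y83
```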

We also need to set the environment variable “LIBP2P_FORCE_PNET” to force our network to Private mode:

export LIBP2P_FORCE_PNET=1

Configuring IP for communication

Inside the .ipfs folder, there is a “config” file. It contains many settings, including the network details our IPFS nodes will work on. Open this config file and find “Addresses”. It will look like this:

"Addresses": {
  "API": "/ip4/192.168.10.1/tcp/5001",
  "Announce": [],
  "Gateway": "/ip4/192.168.10.1/tcp/8080",
  "NoAnnounce": [],
  "Swarm": [
    "/ip4/0.0.0.0/tcp/4001",
    "/ip6/::/tcp/4001"
  ]
},

The IP mentioned in API is the one IPFS binds to for communication. By default, it’s localhost (127.0.0.1), so to let our nodes “see” each other we need to set this parameter to each node’s own IP. The Gateway parameter is for access from a browser.
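Rather than editing the config file by hand on every node, the per-node edit can be scripted. A sketch that emits the two address lines for a given node IP; the function name is ours, and you could equally paste the output into ~/.ipfs/config or set the values with the `ipfs config` command:

```shell
# Emit the API and Gateway multiaddrs for a given node IP,
# to be placed in the "Addresses" section of ~/.ipfs/config.
addresses_for() {
  local ip="$1"
  printf '"API": "/ip4/%s/tcp/5001",\n' "$ip"
  printf '"Gateway": "/ip4/%s/tcp/8080",\n' "$ip"
}

addresses_for 192.168.10.2
```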

Step 5: Start the Nodes and Test

We are done with all the configuration, and now it is time to start all the nodes and see whether everything went well and they form a closed private network. Run the IPFS daemon on all of your nodes:

ipfs daemon

Now let’s add the file from one of the nodes and try to access it from another.

mkdir test-files
echo "hello IPFS" > file.txt
ipfs add file.txt

Take the printed hash and try to cat the file from another node:

ipfs cat QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN

You should see the contents of the file added on the first node. To check and be sure that we have a private network, we can try to access our file by its CID from a public IPFS gateway. You can choose one of the public gateways from this list: https://ipfs.github.io/public-gateway-checker.

If you did everything right, the file won’t be accessible. You can also run the ipfs swarm peers command to display the list of peers the node is connected to. In our example, each peer sees the two others.
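The public-gateway check above can be scripted with curl. A sketch that composes the gateway URL for a CID (ipfs.io is one of the public gateways; the curl line is commented out because on a working private network it should time out rather than return the file):

```shell
# Build a public-gateway URL for a CID; on a private network the
# content should NOT be reachable there.
gateway_url() {
  echo "https://ipfs.io/ipfs/$1"
}

gateway_url QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN
# To verify (expect a timeout or gateway error, not the file contents):
# curl --max-time 15 "$(gateway_url QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN)"
```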

Step 6: Run the IPFS Daemon as a Service

For the IPFS daemon to keep running even after we exit the console session, we will create a systemd service. Before doing so, stop/kill your IPFS daemon. Create a file for the new service:

sudo nano /etc/systemd/system/ipfs.service

And add to it the following settings:

[Unit]
Description=IPFS Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
ExecStart=/usr/local/bin/ipfs daemon --enable-namesys-pubsub
User=root
[Install]
WantedBy=multi-user.target

Save and close the file. Apply the new service.

sudo systemctl daemon-reload
sudo systemctl enable ipfs
sudo systemctl start ipfs
sudo systemctl status ipfs

Reboot your system and check that the IPFS daemon is active and running; then you can again try to add a file on one node and access it from another.

We have completed the part about creating a private IPFS network and running its daemons as services. At this phase, you should have three IPFS nodes organised in one private network. Now let’s create our IPFS-Cluster on top of it for data replication.

Deploying IPFS-Cluster


After creating a private IPFS network, we can deploy IPFS-Cluster on top of IPFS for automated data replication and better management of our data.

There are two ways to organise an IPFS cluster: with a fixed peerset (you will not be able to extend the cluster with more peers after creation) or with bootstrapping (you can add new peers after the cluster is created).

IPFS-Cluster includes two components:

  • ipfs-cluster-service, mostly to initialise a cluster peer and run its daemon
  • ipfs-cluster-ctl, for managing nodes and data across the cluster

Step 1: Install IPFS-Cluster

There are several ways to install IPFS-Cluster. In this manual, we install from source. You can see all the supported methods here.

Run the following commands in your terminal to install the IPFS-Cluster components:

git clone https://github.com/ipfs/ipfs-cluster.git $GOPATH/src/github.com/ipfs/ipfs-cluster
cd $GOPATH/src/github.com/ipfs/ipfs-cluster
make install

Check successful installation by running:

ipfs-cluster-service --version
ipfs-cluster-ctl --version

Repeat this step for all of your nodes.

Step 2: Generate and Set the CLUSTER_SECRET Variable

Now we need to generate CLUSTER_SECRET and set it as an environment variable on all peers participating in the cluster. Sharing the same CLUSTER_SECRET lets peers understand that they are part of the same IPFS-Cluster. We will generate this key on the zero node and then copy it to all other nodes. On your first node, run the following commands:

export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
echo $CLUSTER_SECRET

You should see something like this:

9a420ec947512b8836d8eb46e1c56fdb746ab8a78015b9821e6b46b38344038f

To keep CLUSTER_SECRET from disappearing after you exit the console session, you must add it as a permanent environment variable in the .bashrc file. Copy the key printed by the echo command and add it to the end of the .bashrc file on all of your nodes.

It should look like this:

export CLUSTER_SECRET=9a420ec947512b8836d8eb46e1c56fdb746ab8a78015b9821e6b46b38344038f

And don’t forget to reload your .bashrc file with:

source ~/.bashrc
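Before copying the secret around, it is worth sanity-checking its shape: 32 random bytes rendered as exactly 64 lowercase hex characters. A sketch using the same od pipeline as above:

```shell
# Generate a cluster secret the same way as above and inspect its length:
# 32 bytes from /dev/urandom rendered as 64 hex characters.
gen_cluster_secret() {
  od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n'
}

SECRET=$(gen_cluster_secret)
echo "${#SECRET}"
# prints 64
```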

Step 3: Initialise and Start the Cluster

After installing the IPFS-Cluster service and setting the CLUSTER_SECRET environment variable, we are ready to initialise and start the first cluster peer (Node0).

Note: make sure that your IPFS daemon is running before you start the ipfs-cluster-service daemon. To initialise the cluster peer, run:

ipfs-cluster-service init

To start the cluster peer, run:

ipfs-cluster-service daemon

You should see the output in the console:

INFO cluster: IPFS Cluster is ready cluster.go:461

Now open a new console window and connect to your second VM (Node1). Note: make sure that your IPFS daemon is running before you start the ipfs-cluster-service daemon.

You need to install the IPFS-Cluster components and set the CLUSTER_SECRET environment variable (copied from Node0), as we did for the first node. Run the following commands to initialise IPFS-Cluster and bootstrap it to Node0:

ipfs-cluster-service init
ipfs-cluster-service daemon --bootstrap /ip4/192.168.10.1/tcp/9096/ipfs/QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn

The IP part (192.168.10.1) should be changed to your Node0 machine’s IP. The last part is the cluster peer ID, which is generated when you initialise your cluster peer (ipfs-cluster-service init). Bear in mind that it should be the IPFS-Cluster peer ID, not an IPFS peer ID.

You can run the ipfs-cluster-service id command in the console to get it. You need to change the IP and cluster peer ID according to your Node0. Do this for all of your nodes. To check that we have two peers in our cluster, run:

ipfs-cluster-ctl peers ls

And you should see the list of cluster peers:

node1> ipfs-cluster-ctl peers ls
QmYFYwnFUkjFhJcSJJGN72wwedZnpQQ4aNpAtPZt8g5fCd | Sees 1 other peers
Addresses:
- /ip4/127.0.0.1/tcp/10096/ipfs/QmYFYwnFUkjFhJcSJJGN72wwedZnpQQ4aNpAtPZt8g5fCd
- /ip4/192.168.1.3/tcp/10096/ipfs/QmYFYwnFUkjFhJcSJJGN72wwedZnpQQ4aNpAtPZt8g5fCd
IPFS: Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- /ip4/127.0.0.1/tcp/4001/ipfs/Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- /ip4/192.168.1.3/tcp/4001/ipfs/Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn | Sees 1 other peers
Addresses:
- /ip4/127.0.0.1/tcp/9096/ipfs/QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn
- /ip4/192.168.1.2/tcp/9096/ipfs/QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn
IPFS: Qmbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
- /ip4/127.0.0.1/tcp/4001/ipfs/Qmbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
- /ip4/192.168.1.2/tcp/4001/ipfs/Qmbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb

Repeat this step for the third node and any other nodes you want to join to the cluster.

Step 4: Run the IPFS-Cluster Daemon as a Service

For the IPFS-Cluster daemon to keep running even after we close the console session, we will create a systemd service for it. Run the following command to create the service file:

sudo nano /etc/systemd/system/ipfs-cluster.service

And insert to it:

[Unit]
Description=IPFS-Cluster Daemon
Requires=ipfs.service
After=syslog.target network.target remote-fs.target nss-lookup.target ipfs.service
[Service]
Type=simple
ExecStart=/home/ubuntu/gopath/bin/ipfs-cluster-service daemon
User=root
[Install]
WantedBy=multi-user.target

Apply new service and run it:

sudo systemctl daemon-reload
sudo systemctl enable ipfs-cluster
sudo systemctl start ipfs-cluster
sudo systemctl status ipfs-cluster

Reboot your machine and check that both IPFS and IPFS-Cluster services are running.

Step 5: Test IPFS-Cluster and Data Replication

To test data replication, create a file and add it to the cluster:

ipfs-cluster-ctl add myfile.txt

Take the CID of the newly added file and check its status:

ipfs-cluster-ctl status CID

You should see that this file has been PINNED on all cluster nodes.
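Checking the status by eye works for three nodes; for more, the output can be grepped. A sketch; the exact status line format is an assumption and may vary between ipfs-cluster versions:

```shell
# Count how many peers report PINNED in `ipfs-cluster-ctl status CID` output
# read from stdin, e.g.: ipfs-cluster-ctl status CID | count_pinned
count_pinned() {
  grep -c 'PINNED'
}

printf 'node0 : PINNED\nnode1 : PINNED\nnode2 : PINNED\n' | count_pinned
# prints 3
```

Comparing the count against the expected replication factor gives a quick pass/fail check in scripts.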

Conclusion


Are you wondering how you can apply this IPFS tutorial to support your real-life needs? This article describes how we started with an internal PoC and ended up with a real prototype allowing us to share files on the blockchain with IPFS securely.

If you have any questions regarding IPFS networks and their potential use for data replication and secure data sharing, don’t hesitate to get in touch!


