Introduction

IPFS networks come in two flavors:

- public
- private

Most commercial applications, and enterprise solutions in particular, need full control over their own data. The public IPFS network is not suitable for that, so building a private IPFS network is often a hard requirement for such applications.

In this article we walk through creating a private IPFS network with an IPFS-Cluster on top of it for data replication. IPFS itself does not replicate data between nodes; to replicate data in an IPFS network there are two options: Filecoin and IPFS-Cluster. In this article we use IPFS-Cluster.
We will build the private network on three virtual machines. Relevant reference documentation:

- IPFS: a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia in a distributed file system. Read more
- Private IPFS: peers in a private network share a single secret key, and can then communicate only with other peers inside that private IPFS network. Read more
- IPFS-Cluster: an IPFS cluster is a standalone application and a CLI client that allocates, replicates, and tracks pins across a swarm of IPFS daemons. IPFS-Cluster uses a leader-based consensus algorithm, Raft, to coordinate storage of a pinset, distributing the set of data across the participating nodes. It consists of:
  - A cluster peer application, ipfs-cluster-service, to be run along with go-ipfs.
  - A client CLI application, ipfs-cluster-ctl, which allows easy interaction with the peer's HTTP API.
  - An additional "follower" peer application, ipfs-cluster-follow, focused on simplifying the process of configuring and running follower peers.
  Read more
Note that private networking is a built-in feature of core IPFS, while IPFS-Cluster is a separate application. IPFS and IPFS-Cluster are installed as different packages and run as different processes; they have different peer IDs and API endpoints, and use different ports. The IPFS-Cluster daemon depends on the IPFS daemon: start the IPFS daemon first, then the IPFS-Cluster daemon.
Setting up a private IPFS network

By default, IPFS and IPFS-Cluster use the following ports:

IPFS
- 4001 – communication with other nodes
- 5001 – API server
- 8080 – gateway server

IPFS-Cluster
- 9094 – HTTP API endpoint
- 9095 – IPFS proxy endpoint
- 9096 – cluster swarm, used for communication between cluster nodes
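If a host firewall is enabled, the node-to-node ports above must be reachable between the VMs. A sketch using Ubuntu's ufw, assuming this article's example subnet 192.168.10.0/24 (adjust to your own network):

```shell
# allow IPFS swarm (4001) and IPFS-Cluster swarm (9096) traffic between the nodes
for port in 4001 9096; do
  sudo ufw allow from 192.168.10.0/24 to any port "$port" proto tcp
done
# 5001/9094 (APIs) and 8080 (gateway) should stay restricted to trusted hosts
```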
We will use three freshly created virtual machines (in my case on DigitalOcean) running Ubuntu Linux 16.04, with the command line as the main tool for installing the necessary packages and applying settings. Depending on your cloud provider (AWS, Azure, Google, etc.), you may need to adjust additional settings, such as firewall or security-group configuration, so that your peers can see each other.

Let's suppose we have three VMs with the following IP addresses:

- Node0: 192.168.10.1
- Node1: 192.168.10.2
- Node2: 192.168.10.3

Let's start with the zero node (Node0), which will be our bootstrap node.
Step 1: Install Go
First of all, let’s install Go as we will need it during our deployment process. Update Linux packages and dependencies:
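On Ubuntu 16.04 the usual commands are:

```shell
sudo apt-get update
sudo apt-get -y upgrade
```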
Download the latest version and unzip Go:

```shell
wget https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz
sudo tar -xvf go1.11.4.linux-amd64.tar.gz
sudo mv go /usr/local   # so that GOROOT=/usr/local/go below is valid
```
Create a path for Go and set the environment variables.

1. Create the folder for GOPATH:
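A folder matching the GOPATH configured below can be created with:

```shell
mkdir -p $HOME/gopath
```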
Open the .bashrc file and add three variables, GOROOT, GOPATH, and PATH, at the end. Open the file:

```shell
sudo nano $HOME/.bashrc
```

Insert at the end of the .bashrc file:

```shell
export GOROOT=/usr/local/go
export GOPATH=$HOME/gopath
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
```
2. Reload the .bashrc file and check the Go version:
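Reloading and verifying typically looks like this (assuming the tarball above was installed):

```shell
source $HOME/.bashrc
go version
```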
Step 2: Install IPFS
We will install the latest version of go-ipfs. At the time of writing it was v0.4.18 for Linux; you can check for the latest version here: https://dist.ipfs.io/#go-ipfs

Download IPFS, unpack the tar file, move the binary under /usr/local/bin, and initialise the IPFS node:

```shell
wget https://dist.ipfs.io/go-ipfs/v0.4.18/go-ipfs_v0.4.18_linux-amd64.tar.gz
tar xvfz go-ipfs_v0.4.18_linux-amd64.tar.gz
sudo mv go-ipfs/ipfs /usr/local/bin/ipfs
ipfs init
```
Repeat steps 1 and 2 for all your VMs.
Step 3: Create the private network
Once you have Go and IPFS installed on all of your nodes, run the following command to install the swarm key generation utility. The swarm key allows us to create a private network and tells network peers to communicate only with peers who share this secret key.
This command should be run only on your Node0. We generate swarm.key on the bootstrap node and then just copy it to the rest of the nodes.
```shell
go get -u github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
```
Now run this utility on your first node to generate swarm.key under the .ipfs folder:

```shell
ipfs-swarm-key-gen > ~/.ipfs/swarm.key
```
Copy the generated swarm.key file to the IPFS directory (~/.ipfs) of each node participating in the private network. Then remove the default bootstrap node entries on every node you have created.
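go-ipfs ships a bootstrap subcommand for this; run it on every node:

```shell
ipfs bootstrap rm --all
```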
Step 4: Bootstrap the IPFS nodes
Add the multiaddress of your bootstrap node to each of the nodes, including the bootstrap node itself:
```shell
ipfs bootstrap add /ip4/192.168.10.1/tcp/4001/ipfs/QmQVvZEmvjhYgsyEC7NvMn8EWf131EcgTXFFJQYGSz4Y83
```
Change the IP part (192.168.10.1) to your Node0 machine's IP. The last part is the peer ID, which was generated when you initialised your peer (ipfs init); you can see it in the init output where it shows "peer identity":

```
QmQVvZEmvjhYgsyEC7NvMn8EWf131EcgTXFFJQYGSz4Y83
```

or by running the *ipfs id* command in the console. Change the IP and peer ID according to your Node0, and do this for all of your nodes.
We also need to set the environment variable LIBP2P_FORCE_PNET to force our network into private mode:

```shell
export LIBP2P_FORCE_PNET=1
```
Configuring IPs for communication

Inside the .ipfs folder there is a config file. It contains many settings, including the network addresses on which our IPFS nodes will work. Open the config file and find "Addresses". It will look similar to this (surrounding fields shown as in go-ipfs v0.4.x; the exact layout may vary by version):

```json
"Addresses": {
  "API": "/ip4/192.168.10.1/tcp/5001",
  "Announce": [],
  "Gateway": "/ip4/192.168.10.1/tcp/8080",
  "NoAnnounce": [],
  "Swarm": [
    "/ip4/0.0.0.0/tcp/4001",
    "/ip6/::/tcp/4001"
  ]
},
```

The IP mentioned in API is the one IPFS binds to for API communication. By default it is localhost (127.0.0.1), so to let our nodes "see" each other we set this parameter to each node's own IP. The Gateway parameter is for access from a browser.
Step 5: Start and test the nodes
We are done with all the configuration; now it is time to start all the nodes and check that everything went well and that they are closed off in the private network. Run the IPFS daemon on all of your nodes.
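For a first manual test the daemon can simply be started in the foreground on each node:

```shell
# with LIBP2P_FORCE_PNET=1 set, the daemon aborts if the swarm key is missing
ipfs daemon
```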
Now let's add a file on one of the nodes and try to access it from another:

```shell
echo "hello IPFS" > file.txt
ipfs add file.txt
```
Take the printed hash and try to cat the file from another node:
```shell
ipfs cat QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN
```
You should see the contents of the file added on the first node. To be sure we really have a private network, we can try to access the file by its CID from a public IPFS gateway; you can pick one from this list: https://ipfs.github.io/public-gateway-checker. If you did everything right, the file will not be accessible. You can also run the *ipfs swarm peers* command to display the list of peers the node is connected to; in our example, each peer sees the two others.
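One way to run the gateway check from the command line, using the example CID added above and one of the public gateways (any gateway from the list behaves the same):

```shell
curl -m 30 https://ipfs.io/ipfs/QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN
# expect a timeout or a gateway error: the CID exists only inside the private network
```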
Step 6: Run the IPFS daemon as a service
For the IPFS daemon to keep running even after we exit the console session, we will create a systemd service. Before doing so, stop/kill your IPFS daemon. Create a file for the new service:
```shell
sudo nano /etc/systemd/system/ipfs.service
```
And add the following settings to it (set User to the account that owns ~/.ipfs):

```ini
[Unit]
Description=IPFS Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
ExecStart=/usr/local/bin/ipfs daemon --enable-namesys-pubsub
User=ubuntu

[Install]
WantedBy=multi-user.target
```
Save and close the file. Apply the new service.
```shell
sudo systemctl daemon-reload
sudo systemctl enable ipfs
sudo systemctl start ipfs
sudo systemctl status ipfs
```
Reboot your system and check that the IPFS daemon is active and running; then you can again try to add a file on one node and access it from another.

We have completed the part about creating a private IPFS network and running its daemons as services. At this point you should have three IPFS nodes organised into one private network. Now let's create our IPFS-Cluster on top of it for data replication.
Deploying IPFS-Cluster
After creating a private IPFS network, we can deploy IPFS-Cluster on top of it for automated data replication and better management of our data.

There are two ways to organise an IPFS cluster: with a fixed peerset (you cannot grow the cluster with more peers after creation) or with bootstrapping nodes (new peers can be added after the cluster is created). In this article we use the bootstrap approach.
IPFS-Cluster includes two components:

- ipfs-cluster-service, mainly to initialise a cluster peer and run its daemon
- ipfs-cluster-ctl, for managing nodes and data across the cluster
Step 1: Install IPFS-Cluster
There are many ways to install IPFS-Cluster; in this manual we install it from source. You can see all the supported methods here. Run the following commands in your terminal to build and install the IPFS-Cluster components:
```shell
git clone https://github.com/ipfs/ipfs-cluster.git $GOPATH/src/github.com/ipfs/ipfs-cluster
cd $GOPATH/src/github.com/ipfs/ipfs-cluster
make install   # builds ipfs-cluster-service and ipfs-cluster-ctl into $GOPATH/bin
```
Check successful installation by running:
```shell
ipfs-cluster-service --version
ipfs-cluster-ctl --version
```
Repeat this step for all of your nodes.
Step 2: Generate and set the CLUSTER_SECRET variable
Now we need to generate a CLUSTER_SECRET and set it as an environment variable on all peers participating in the cluster. Sharing the same CLUSTER_SECRET lets peers understand that they are part of the same IPFS-Cluster. We will generate this key on the zero node and then copy it to all other nodes. On your first node, run:
```shell
export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
echo $CLUSTER_SECRET
```
You should see something like this:
```
9a420ec947512b8836d8eb46e1c56fdb746ab8a78015b9821e6b46b38344038f
```
So that CLUSTER_SECRET does not disappear when you exit the console session, add it as a permanent environment variable in the .bashrc file: copy the key printed by the echo command and append it to the end of .bashrc on all of your nodes.
It should look like this:
```shell
export CLUSTER_SECRET=9a420ec947512b8836d8eb46e1c56fdb746ab8a78015b9821e6b46b38344038f
```
And don't forget to reload your .bashrc file afterwards.
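Reloading is the same one-liner as before:

```shell
source ~/.bashrc
```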
Step 3: Initialise and start the cluster
After installing the IPFS-Cluster service and setting the CLUSTER_SECRET environment variable, we are ready to initialise and start the first cluster peer (Node0).

Note: make sure your IPFS daemon is running before you start the ipfs-cluster-service daemon.

To initialise the cluster peer, run:

```shell
ipfs-cluster-service init
```

To start the cluster peer, run:

```shell
ipfs-cluster-service daemon
```

You should see output like this in the console:

```
INFO cluster: IPFS Cluster is ready cluster.go:461
```
Now open a new console window and connect to your second VM (Node1). Note: again, make sure the IPFS daemon is running before you start the ipfs-cluster-service daemon.

Install the IPFS-Cluster components and set the CLUSTER_SECRET environment variable (copied from Node0), as we did on the first node. Then run the following commands to initialise IPFS-Cluster and bootstrap it to Node0:
```shell
ipfs-cluster-service init
ipfs-cluster-service daemon --bootstrap \
  /ip4/192.168.10.1/tcp/9096/ipfs/QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn
```
Change the IP part (192.168.10.1) to your Node0 machine's IP. The last part is the cluster peer ID generated when you initialised your cluster peer (ipfs-cluster-service init); bear in mind that it must be the IPFS-Cluster peer ID, not the IPFS peer ID. You can get it by running the *ipfs-cluster-service id* command in the console. Change the IP and cluster peer ID according to your Node0, and do this on all of your nodes. To check that we have two peers in our cluster, run:
```shell
ipfs-cluster-ctl peers ls
```
And you should see the list of cluster peers:

```
node1> ipfs-cluster-ctl peers ls
QmYFYwnFUkjFhJcSJJGN72wwedZnpQQ4aNpAtPZt8g5fCd | Sees 1 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/10096/ipfs/QmYFYwnFUkjFhJcSJJGN72wwedZnpQQ4aNpAtPZt8g5fCd
    - /ip4/192.168.1.3/tcp/10096/ipfs/QmYFYwnFUkjFhJcSJJGN72wwedZnpQQ4aNpAtPZt8g5fCd
  > IPFS: Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    - /ip4/127.0.0.1/tcp/4001/ipfs/Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    - /ip4/192.168.1.3/tcp/4001/ipfs/Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn | Sees 1 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/ipfs/QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn
    - /ip4/192.168.1.2/tcp/9096/ipfs/QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn
  > IPFS: Qmbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
    - /ip4/127.0.0.1/tcp/4001/ipfs/Qmbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
    - /ip4/192.168.1.2/tcp/4001/ipfs/Qmbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
```
Repeat this step for the third node and any other nodes you want to join to the cluster.
Step 4: Run the IPFS-Cluster daemon as a service
For the IPFS-Cluster daemon to keep running after we close the console session, we will create a systemd service for it as well. Run the following command to create the IPFS-Cluster system service file:
```shell
sudo nano /etc/systemd/system/ipfs-cluster.service
```
And insert into it (Requires/After reference the ipfs service created earlier; set User and the ExecStart path to match your account and GOPATH):

```ini
[Unit]
Description=IPFS-Cluster Daemon
Requires=ipfs
After=syslog.target network.target remote-fs.target nss-lookup.target ipfs

[Service]
ExecStart=/home/ubuntu/gopath/bin/ipfs-cluster-service daemon
User=ubuntu

[Install]
WantedBy=multi-user.target
```
Apply the new service and run it:
```shell
sudo systemctl daemon-reload
sudo systemctl enable ipfs-cluster
sudo systemctl start ipfs-cluster
sudo systemctl status ipfs-cluster
```
Reboot your machine and check that both the IPFS and IPFS-Cluster services are running.
Step 5: Test IPFS-Cluster and data replication
To test data replication, create a file and add it to the cluster:

```shell
echo "hello cluster" > myfile.txt   # any test content will do
ipfs-cluster-ctl add myfile.txt
```
Take the CID of the newly added file and check its status:

```shell
ipfs-cluster-ctl status CID
```
You should see that the file is PINNED on all cluster nodes.
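By default, IPFS-Cluster pins every item on all peers (replication factor -1). If you want each pin kept on only a subset of peers, the cluster section of service.json exposes replication factors (key names as in the upstream ipfs-cluster configuration reference; verify against your installed version):

```json
{
  "cluster": {
    "replication_factor_min": 2,
    "replication_factor_max": 3
  }
}
```

With these values, each added item is pinned on at most three peers, and the cluster re-pins elsewhere if fewer than two copies remain.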
Summary
Wondering how to apply this IPFS tutorial to your real-life needs? This article describes how we started with an internal PoC and ended up with a real prototype that lets us share files on the blockchain with IPFS securely.
If you have any questions regarding IPFS
networks and their potential use for data replication and secure data sharing, don’t hesitate to get in touch!