Teach yourself - Hyperledger Fabric : Hour 06:00 — Network Setup — Part 2
Hello, thanks for taking the time to read this series. I assume you have started from Hour 00:00; if not, please do that before you proceed further.
In Hour 05:00 we saw how each binary is used to generate the various files necessary to bootstrap the network; we covered how to read the crypto-config.yaml & configtx.yaml files and generated the respective artifacts. Here, we will review how to wire these things together using a YAML file & fire it up using “docker-compose”.
Let’s understand what “docker-compose” is first. As we know, to set up a Hyperledger network we need multiple role players such as the orderer, peers, clients, CouchDB, endorsing peers & so on, and these are combined to form a network. Managing all of these services (role players) individually is a tedious process; you need something to control & execute them in one go, under one umbrella. That’s where “DOCKER-COMPOSE” comes into the picture.
As per the official doc : Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.
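To make the idea concrete, here is a minimal sketch of a compose file. This is not a Fabric file; the service name and image are placeholders for illustration only:

```yaml
# docker-compose.yaml — a minimal sketch, not a Fabric file.
# A single "docker-compose up -d" starts every service declared below.
version: '2'

networks:
  byfn:            # containers attached to this network can reach each other by name

services:
  hello:           # placeholder service, just to show the shape of the file
    image: alpine  # placeholder image
    command: echo "hello from compose"
    networks:
      - byfn
```

The Fabric files we explore next follow exactly this shape, just with orderer & peer services instead of a toy one.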
Here, we are going to use a file from the “first-network” sample in your fabric-samples folder and start exploring what it contains.
Go to the fabric-samples/first-network folder and copy the “docker-compose-cli.yaml” file & the “base” folder into the folder where you have worked till this point. Your folder should now have these files.
Now, let’s explore what the docker-compose-cli.yaml file contains & how it is used.
Basically it contains the following:
Volumes: declares a named volume for each domain we defined in “Part 1”, for the orderer & peers: orderer.example.com, peer0.org1.example.com & so on.
Networks: byfn (build your first network). Please keep in mind that it is very important to declare the network name & use it consistently, as communication between the containers happens through this network (it is one option for enabling the communication bridge between them).
Services: this is the core/heart of this file. Here we declare each service & map it to its respective container_name. Let’s review the first one.
Here, the extends parameter points to a file in the “base” folder; we declare this service as “orderer.example.com” & give it the container_name “orderer.example.com”. After you launch the respective containers using docker-compose, you can view all of them with the “docker ps” command (we will see this shortly).
networks: byfn: here we attach this container to the byfn network.
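Putting the pieces above together, the orderer entry in docker-compose-cli.yaml looks roughly like this (a sketch based on the first-network sample; your version of fabric-samples may differ slightly):

```yaml
# docker-compose-cli.yaml (excerpt, sketch)
volumes:
  orderer.example.com:
  peer0.org1.example.com:

networks:
  byfn:                                    # declared once, referenced by every service

services:
  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml  # the actual parameters live in the base file
      service: orderer.example.com
    container_name: orderer.example.com    # the name you will see in "docker ps"
    networks:
      - byfn
```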
Similarly for peers..
Let’s explore the base yaml file, which contains the actual parameters that wire all the files together.
Note: if you have a different domain name than “orderer.example.com”, you can very well change it, but it should then be changed in all the other places accordingly.
container_name here is the same as in the docker-compose-cli.yaml file.
image: hyperledger/fabric-orderer: these images would have already been downloaded when you first downloaded fabric-samples using the curl command.
Environment: this contains a lot of parameters & each one plays a special role in wiring up the respective services. Let’s discuss them one by one.
FABRIC_LOGGING_SPEC → INFO, ERROR & DEBUG are the options. This sets the logging level. In production never use DEBUG, as it prints everything to the terminal, which can leak information useful to an attacker. So for production use INFO or ERROR, whereas for development go with DEBUG.
ORDERER_GENERAL_LISTENADDRESS → 0.0.0.0 means the orderer listens on all network interfaces. If you want the orderer to listen on a specific IP address, you can override it here.
ORDERER_GENERAL_GENESISMETHOD → file: indicates that the genesis block is provided as a file.
ORDERER_GENERAL_GENESISFILE → a path inside the container (image). Please note that this path is not on your local disk; it is inside the container. Then how do we map our local disk file here? We will see this shortly.
ORDERER_GENERAL_LOCALMSPID = OrdererMSP → remember this ID? We provided the same one in “Part 1”, please check.
ORDERER_GENERAL_LOCALMSPDIR → again a path inside the container (image), not on your local disk; the mapping from your local disk is covered shortly.
The parameters below are for TLS (Transport Layer Security, to enable encryption); again, the paths refer to locations inside the container.
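The environment block just described looks roughly like this in base/docker-compose-base.yaml (a sketch; exact values & paths may differ between fabric-samples versions, so check your copy):

```yaml
# base/docker-compose-base.yaml — orderer service (excerpt, sketch)
services:
  orderer.example.com:
    image: hyperledger/fabric-orderer
    environment:
      - FABRIC_LOGGING_SPEC=INFO                # use DEBUG only in development
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0   # listen on all interfaces
      - ORDERER_GENERAL_GENESISMETHOD=file      # genesis block supplied as a file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP   # must match the MSP ID from Part 1
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true        # TLS settings; paths are in-container
```

Note that every path on the right-hand side is inside the container image, which is exactly why the volumes section discussed next is needed.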
Volumes: these are what actually map your local disk to paths inside the container. Let’s talk about the first one.
../channel-artifacts/genesis.block => the local path gets mapped to the corresponding path inside the container. So basically, you are creating a link between your local disk and the container. If your local path is different, map it accordingly.
— 7050:7050 -> these two ports indicate (HOST:Container). For example, this orderer can be reached as orderer.example.com:7050, and the same port is used inside the container. It is not mandatory that both be the same; you can define your own, but for simplicity & ease of reference across the files, always go with the same port on both sides.
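The volume mappings & port publication for the orderer look roughly like this (again a sketch from the first-network sample; the right-hand sides are paths inside the container, and the long crypto-config path is abbreviated here for readability):

```yaml
# base/docker-compose-base.yaml — orderer volumes & ports (excerpt, sketch)
    volumes:
      # local disk (left side of the colon) : inside the container (right side)
      - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - orderer.example.com:/var/hyperledger/production/orderer  # named volume from the cli file
    ports:
      - 7050:7050   # HOST:CONTAINER
```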
Now let’s explore peer0.org1.example.com…
Here, similar to the orderer, we declare a few parameters & extend the peer-base.yaml file to override its parameters. The good thing is you need not take any action in peer-base.yaml itself, since the parameters to be overridden are available in docker-compose-base.yaml under the peer services.
So let’s go & explore each of these now..
CORE_PEER_ID=peer0.org1.example.com → straightforward, we give our peer0 domain here.
CORE_PEER_ADDRESS=peer0.org1.example.com:7051 → we map the address with the port number; remember, all peers will be communicating via port 7051.
CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.example.com:7051 → peers in Hyperledger communicate via the gossip protocol, which runs over gRPC (Remote Procedure Call). This parameter tells peer0 to bootstrap its gossip by contacting peer1.
CORE_PEER_LOCALMSPID=Org1MSP → this is the ID we mapped in Part 1, please check.
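Assembled, the peer0 entry in the base file looks roughly like this (a sketch; your fabric-samples version may use slightly different ports or extra variables):

```yaml
# base/docker-compose-base.yaml — peer0.org1 service (excerpt, sketch)
services:
  peer0.org1.example.com:
    extends:
      file: peer-base.yaml                 # shared defaults; no need to edit that file
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.example.com:7051  # gossip entry point
      - CORE_PEER_LOCALMSPID=Org1MSP       # must match the MSP ID from Part 1
    ports:
      - 7051:7051   # HOST:CONTAINER
```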
To learn more on gossip & how it works, see the official Fabric documentation on the gossip data dissemination protocol.
I found these comments from Stack Overflow very easy to understand:
Gossip can be used just between peers in the same organisation or between peers in different organisations. It is always scoped to specific channels.
1) Communication between the peers in a single organisation
- One peer may be the leader and connect to the ordering service and deliver blocks to other peers in its own organisation
- A peer can connect to other peers in its organisation to obtain missing blocks
2) Communication between peers in different organisations
- In v1.2 when using the private data feature, gossip is used to distribute the private data to other peers in the org at endorsement time
- Peers can get missing blocks that have been already committed, from peers in other organisations
- Peers can get missing private data from peers in other organisations at commit time
Similarly, volumes in the peers are mapped to your local disk; please make sure you have mapped your local disk correctly.
Now, do the same across the other peers & you should be good. Please review the GOSSIP_BOOTSTRAP address of each peer & understand how it communicates with the other peers.
Having covered this file end to end, it is time to launch our network! Yes, the most exciting moment: kick-starting the network & launching the services.
docker-compose -f docker-compose-cli.yaml up -d
You should see that the orderer & 4 peers have been started. To confirm & view the same, issue “docker ps” in the terminal.
It should show you all these services running in the background! Now your network is set up, listening & most importantly, waiting for your action!!
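For reference, the “docker ps” output should look roughly like this (a trimmed sketch; container IDs, timings & extra columns will obviously differ on your machine):

```
CONTAINER ID   IMAGE                        ...   NAMES
xxxxxxxxxxxx   hyperledger/fabric-orderer   ...   orderer.example.com
xxxxxxxxxxxx   hyperledger/fabric-peer      ...   peer0.org1.example.com
xxxxxxxxxxxx   hyperledger/fabric-peer      ...   peer1.org1.example.com
xxxxxxxxxxxx   hyperledger/fabric-peer      ...   peer0.org2.example.com
xxxxxxxxxxxx   hyperledger/fabric-peer      ...   peer1.org2.example.com
```

Notice that the NAMES column matches the container_name values we declared in docker-compose-cli.yaml.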
By now, you should have understood the key concepts of each file, its parameters & their usage, and how it all works. Let’s connect in my next article.