Spring Boot Fullstack Blockchain Application With Hyperledger Fabric running on Kubernetes (Part 6) — Orderer


Hello everyone, in this article series we are exploring Hyperledger Fabric integration with Spring Boot. In this article, we will look into the orderer, and I will also explain how to deploy the orderer service on Kubernetes.

Other articles on Hyperledger Fabric integration with Spring Boot can be accessed from the links below.

Part 1 — Introduction

Part 2 — Kubernetes Cluster Setup

Part 3 — Fabric CA Server

Part 4 — Generating Certificates and Artifacts

Part 5 — Kafka

Part 6 — Orderer

Orderer

The Orderer is responsible for packaging transactions into blocks and distributing them to anchor peers across the network.

The transaction flow of Fabric has the steps Proposal, Packaging, and Validation. The orderer is responsible for Packaging and is involved in the Validation step through the distribution of new blocks on the network.

The ordering service provides a shared communication channel to clients and peers, offering a broadcast service for messages containing transactions. Clients connect to the channel and may broadcast messages on the channel, which are then delivered to all peers. The channel supports atomic delivery of all messages, that is, message communication with total-order delivery and (implementation-specific) reliability. In other words, the channel outputs the same messages to all connected peers and outputs them to all peers in the same logical order.

The ordering service is not capable of transaction validation; its primary goal is to provide a total order for published transactions and to cut blocks containing the ordered transactions.

Ordering Service Implementations

While every ordering service currently available handles transactions and configuration updates the same way, there are nevertheless several different implementations for achieving consensus on the strict ordering of transactions between ordering service nodes.

Raft

New as of v1.4.1, Raft is a crash fault tolerant (CFT) ordering service based on an implementation of the Raft protocol in etcd.

Raft follows a “leader and follower” model, where a leader node is elected (per channel) and its decisions are replicated by the followers.

Raft ordering services should be easier to set up and manage than Kafka-based ordering services, and their design allows different organizations to contribute nodes to a distributed ordering service.

Kafka

Similar to Raft-based ordering, Apache Kafka is a CFT implementation that uses a “leader and follower” node configuration.

Kafka utilizes a ZooKeeper ensemble for management purposes. We will use Kafka in the asset transfer project.

Solo (deprecated in v2.x)

The Solo implementation of the ordering service is intended for test only and consists of a single ordering node. It has been deprecated and may be removed entirely in a future release.

Existing users of Solo should move to a single node Raft network for equivalent function.
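For reference, the equivalent single-node Raft setup is declared in configtx.yaml. The sketch below is illustrative only; the consenter host name and certificate paths are placeholders, not values taken from this project:

```yaml
# Sketch of a single-node Raft ordering section in configtx.yaml.
# Host name and TLS certificate paths below are placeholders.
Orderer:
  OrdererType: etcdraft
  EtcdRaft:
    Consenters:
      - Host: orderer.example.com
        Port: 7050
        ClientTLSCert: ../organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
        ServerTLSCert: ../organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
```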

Installation of Orderer on Kubernetes

For the asset transfer project, we will set up the orderer service to run 5 pods on Kubernetes.

Let’s open the project we downloaded from this link and go to the directory where the k8s is located.

    $ cd deploy/k8s

Orderer Deployment

Let’s create a deployment for each orderer.

The yaml files that create them are located at the following paths in the project.

deploy/k8s/orderer/orderer.yaml

deploy/k8s/orderer/orderer2.yaml

deploy/k8s/orderer/orderer3.yaml

deploy/k8s/orderer/orderer4.yaml

deploy/k8s/orderer/orderer5.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orderer
      labels:
        app: orderer

  • The Deployment for orderer 1, named orderer.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orderer5
      labels:
        app: orderer5

  • The Deployment for orderer 5, named orderer5. For the other orderers, this value is assigned as orderer2, orderer3, and orderer4.

    spec:
      selector:
        matchLabels:
          app: orderer
      replicas: 1

  • The Deployment has a Spec that indicates that 1 replica of the orderer container will be launched in a unique Pod. This value is the same for each orderer.

    volumes:
      - name: fabricfiles
        persistentVolumeClaim:
          claimName: fabricfiles-pvc

  • The fabricfiles volume provides stable storage through a PersistentVolumeClaim, backed by a PersistentVolume provisioned by a PersistentVolume provisioner. The claimName must match the PVC metadata name. This value is the same for each orderer.
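As a reminder, the claimName refers to a claim along the lines of the sketch below; the actual storage class and size were configured in the earlier parts of this series, so the values here are assumptions for illustration only:

```yaml
# Hypothetical sketch of the fabricfiles-pvc claim the volume refers to;
# the access mode must allow sharing across the orderer pods (e.g. on NFS).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fabricfiles-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```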

    volumeMounts:
      - name: fabricfiles
        mountPath: /organizations
        subPath: organizations # required for certificates.
      - name: fabricfiles
        mountPath: /system-genesis-block
        subPath: system-genesis-block # It needs a genesis block.
      - name: fabricfiles
        mountPath: /var/hyperledger/production/orderer
        subPath: state/orderer # orderer persistence data

The Orderer container mounts the PV at /organizations for certificates.

The Orderer container mounts the PV at /system-genesis-block for the genesis block it needs at startup.

The Orderer container mounts the PV at /var/hyperledger/production/orderer for orderer persistence data. This data is kept in the state/orderer subdirectory on the NFS server.

    - name: fabricfiles
      mountPath: /var/hyperledger/production/orderer
      subPath: state/orderer5

The persistence data of the other orderers is likewise kept in subdirectories such as state/orderer2, state/orderer5 on the NFS server.

    livenessProbe:
      httpGet:
        port: 9444
        path: /healthz
      initialDelaySeconds: 60
      timeoutSeconds: 5
      failureThreshold: 6
    readinessProbe:
      httpGet:
        port: 9444
        path: /healthz
      initialDelaySeconds: 5
      timeoutSeconds: 3
      periodSeconds: 5

When a container gets into the ready state, Kubernetes starts to route traffic to the relevant pod. But the application in the container may not yet be ready to accept traffic. Therefore, we need to specify "liveness" and "readiness" probes so that Kubernetes can manage this process more reliably.

The kubelet will check whether the container is alive and healthy by sending requests to the /healthz path on port 9444 and expecting a success result code.

    spec:
      containers:
        - name: orderer
          image: hyperledger/fabric-orderer:2.3
          imagePullPolicy: IfNotPresent

2.3 is used as the Hyperledger Fabric orderer Docker image version.

Setting imagePullPolicy to IfNotPresent (or Never) lets you pre-pull images: pull the image manually on each cluster node so it is cached, then do a rolling restart of the Pods.

The following environment variables are assigned for orderer.

    env:
      - name: CONFIGTX_ORDERER_ADDRESSES
        value: "orderer:7050"
      - name: ORDERER_GENERAL_LISTENADDRESS
        value: "0.0.0.0"
      - name: ORDERER_GENERAL_LISTENPORT
        value: "7050"
      - name: ORDERER_GENERAL_LOGLEVEL
        value: debug
      - name: ORDERER_GENERAL_LOCALMSPDIR
        value: /organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp
      - name: ORDERER_GENERAL_LOCALMSPID
        value: OrdererMSP
      - name: ORDERER_GENERAL_GENESISMETHOD
        value: file
      - name: ORDERER_GENERAL_GENESISFILE
        value: /system-genesis-block/genesis.block
      - name: ORDERER_GENERAL_TLS_ENABLED
        value: "true"
      - name: ORDERER_GENERAL_TLS_PRIVATEKEY
        value: /organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
      - name: ORDERER_GENERAL_TLS_CERTIFICATE
        value: /organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
      - name: ORDERER_GENERAL_TLS_ROOTCAS
        value: /organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt
      - name: ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY
        value: /organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
      - name: ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE
        value: /organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
      - name: ORDERER_OPERATIONS_LISTENADDRESS # metric endpoint
        value: 0.0.0.0:9444
      - name: ORDERER_METRICS_PROVIDER
        value: prometheus
      - name: CONFIGTX_ORDERER_ORDERERTYPE
        value: kafka
      - name: CONFIGTX_ORDERER_KAFKA_BROKERS
        value: "broker-0.broker:9092,broker-1.broker:9092"
      - name: ORDERER_KAFKA_RETRY_SHORTINTERVAL
        value: 1s
      - name: ORDERER_KAFKA_RETRY_SHORTTOTAL
        value: 30s
      - name: ORDERER_KAFKA_VERBOSE
        value: "true"

ORDERER_GENERAL_GENESISFILE: path to the genesis block file

ORDERER_GENERAL_LOCALMSPID: ID to load the MSP definition

ORDERER_GENERAL_LOCALMSPDIR: MSPDir is the filesystem path which contains the MSP configuration

ORDERER_GENERAL_TLS_ENABLED: enables TLS on the orderer's listen port

ORDERER_GENERAL_TLS_PRIVATEKEY: fully qualified path of the file that contains the server private key

ORDERER_GENERAL_TLS_CERTIFICATE: fully qualified path of the file that contains the server certificate

ORDERER_GENERAL_TLS_ROOTCAS : fully qualified path of the file that contains the certificate chain of the CA that issued TLS server certificate

ORDERER_GENERAL_GENESISMETHOD: file is used when you want provide the genesis block as file to the container

ORDERER_GENERAL_TLS_CLIENTROOTCAS: fully qualified path of the file that contains the certificate chain of the CA used to verify client certificates, when client authentication is required

CONFIGTX_ORDERER_ORDERERTYPE: The orderer implementation to start. Available types are solo, kafka, and etcdraft.

ORDERER_KAFKA_RETRY_SHORTINTERVAL: The orderer node may fail to connect to Kafka; this value is the interval between retries.

ORDERER_KAFKA_RETRY_SHORTTOTAL: The total amount of time to spend on these short retries.

CONFIGTX_ORDERER_KAFKA_BROKERS: Instructs the orderer how to get in touch with Kafka.

ORDERER_GENERAL_LISTENPORT: This value is the port that the orderer listens on.

broker-0.broker: the DNS name of Kafka broker pod broker-0 through the broker headless service

9092: the Kafka service port

Orderer metrics will be exposed on the port below.

    - name: ORDERER_OPERATIONS_LISTENADDRESS # metric endpoint
      value: 0.0.0.0:9444
    - name: ORDERER_METRICS_PROVIDER
      value: prometheus

Orderer Service

Let’s create a service for orderer.

The yaml files that create them are located at the following paths in the project.

deploy/orderer/order-svc.yaml

deploy/orderer/order2-svc.yaml

deploy/orderer/order3-svc.yaml

deploy/orderer/order4-svc.yaml

deploy/orderer/order5-svc.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: orderer
      labels:
        app: orderer

This specification creates a new Service object named "orderer" for orderer 1. The Service for orderer 5 is named orderer5; for the other orderers, this value is assigned as orderer2, orderer3, and orderer4.

    ports:
      - name: grpc
        protocol: TCP
        targetPort: 7050
        port: 7050

targetPort: the container port. 7050 is assigned.

port: the Kubernetes service port. 7050 is assigned.
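Inside the cluster, clients and peers can then reach the orderer through this service name. Purely as an illustration (the actual client configuration is covered with the Spring Boot application), an orderer entry in a Fabric connection profile might look like the fragment below; the entry name is an assumption:

```yaml
# Hypothetical connection-profile fragment; the grpcs URL uses the
# in-cluster service name and port defined by the Service above.
orderers:
  orderer.example.com:
    url: grpcs://orderer:7050
    tlsCACerts:
      path: /organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt
```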

    apiVersion: v1
    kind: Service
    metadata:
      name: orderer-metrics
      labels:
        app: orderer
        metrics-service: "true"

A new Service object named "orderer-metrics" has been created to retrieve orderer metric information. The metrics service for orderer 5 is named orderer5-metrics.

    spec:
      type: ClusterIP
      selector:
        app: orderer
      ports:
        - name: "orderer-metrics"
          targetPort: 9444 # container metric port
          port: 9444

The target port 9444 is the container metric port, and the Service itself also exposes port 9444.

The Service maps its port 9444 to port 9444 of the containers in all pods with the label app: orderer. As a ClusterIP service, it is reachable only from inside the cluster.
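Since ORDERER_METRICS_PROVIDER is set to prometheus, a Prometheus server can scrape this service. A minimal scrape job might look like the sketch below; the job name and the assumption that the service lives in the default namespace are illustrative, not part of this project:

```yaml
# Sketch of a Prometheus scrape job for the orderer metrics service.
# Job name and namespace are assumptions for illustration.
scrape_configs:
  - job_name: "fabric-orderer"
    metrics_path: /metrics
    static_configs:
      - targets: ["orderer-metrics.default.svc:9444"]
```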

Deploy Orderer on Kubernetes

Let’s connect to the Kubernetes master node virtual machine with the vagrant ssh command.

    $ vagrant ssh k8smaster

Let’s go to the directory where the Kubernetes installation scripts are located. This directory is the same as the deploy/k8s folder in the project; with Vagrant, it is synchronized to the virtual machine.

    $ cd /vagrant/k8s

Deploying the Deployments and Services for the orderers.

    $ kubectl apply -f orderer/

Wait for the orderer creation to complete.

    $ kubectl wait --for condition=available --timeout=300s deployment -l "app in (orderer,orderer2,orderer3,orderer4,orderer5)"

Orderer created successfully.

Finally, let’s check the status of the pods from the Lens IDE.

The orderer pods appear to be in the Running state.

My article ends here. In general, I introduced the orderer and explained its deployment on Kubernetes.

See you in the next articles.

Project Links

Spring Boot Hlf Starter Project details and installation can be accessed via this link.

Asset Transfer Project details and installation can be accessed via this link.

Şuayb Şimşek

Software Engineer at Yapı ve Kredi Bankası via Infonal | Java, Devops, Blockchain developer
