logo

OpenWare OPEX

rk_logo

# OPEX Technical Architecture

Version 2.2 - July 2019

# Table of contents

[TOC]

# Introduction

Cryptocurrency (CCY) is gaining ground on traditional financial models at a steady and seemingly relentless pace, and its influence on the world's financial markets and service industry keeps growing. With increased media coverage and volatile market behavior, it continues to attract businesses, traders, crypto-enthusiasts and everyday world citizens who believe in the long-term value of a relatively small investment today.

However, a list of serious problems has plagued this industry throughout its early stages, and they are typically related to the centralized nature of every exchange: unbalanced loads, lack of tech support, inability to manage registration parameters, undetected or unresolved security holes and the like. These problems invariably lead to financial losses, with hundreds of millions of euros lost in stand-alone incidents. Billions have found their way into hackers' wallets or been misappropriated for lack of internal security measures.

Understanding the landscape of technology enabling the cryptocurrency market, our team of FinTech-seasoned DevOps and cryptocurrency experts decided to create OpenWare OPEX. OPEX is a complete cryptocurrency exchange solution, designed and crafted specifically for high-load, zero-downtime, autoscaling and secure deployments.

This document aims to describe every aspect of OPEX in a comprehensive and understandable way.

# System description

# System Overview

System Overview

The OPEX system consists of 6 main subsystems:

  • Peatio is the core cryptoexchange engine which handles all the trading and accounting operations
  • Barong is an authentication server created and maintained by our team. It exposes an API for user management and also acts as an authenticator for all the API requests
  • Applogic is an intermediate application connected to both the Barong and Peatio Management APIs. It is designed for extending the stack's functionality, for example adding a payment gateway or a custom AML provider
  • Baseapp frontend application is one of the components that can be seen by the end user. It handles all the outside system interaction and is connected directly to the Applogic component
  • Tower is the frontend application designed specifically for KYC and user base administration
  • Kite Infrastructure is the system's foundation consisting of a set of DevOps tools, cloud components and deployment files, held together and managed by Kite, our next-gen cloud environment management tool

# Sub-system Description

# Components description

### Network overview

Users access services through a cloud load balancer, which distributes requests over the Kubernetes cluster nodes. Application components run in pods (Docker containers) in the Kubernetes cluster, and communication between pods takes place on a private network layer inside the cluster. All cluster nodes are secured behind a NAT without any public IPs, so they cannot be accessed from the outside. A Cloud SQL service is used as the central database; a secure TCP tunnel is established between the Kubernetes private network layer and the Cloud SQL service to ensure the privacy of connections.

Network Overview

All the traffic flowing into the stack can be separated into two directions:

  • Frontend application requests that are routed directly to Kubernetes services responsible for serving Baseapp and Tower
  • API requests that go through the Envoy API gateway and are authenticated by Barong Authz before reaching any API services
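
As an illustration of this split, below is a minimal sketch of how such routing could be expressed with a Kubernetes Ingress. The host, service names and ports are hypothetical and only illustrate sending UI traffic to the frontend services and API traffic to the Envoy gateway; the actual OPEX deployment files may organize this differently.

```yaml
# Hypothetical Ingress: "/" goes to the Baseapp frontend, "/api" to the Envoy API gateway.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: opex-routing-example        # illustrative name
spec:
  rules:
    - host: exchange.example.com    # placeholder domain
      http:
        paths:
          - path: /
            backend:
              serviceName: baseapp  # frontend Service (assumed name)
              servicePort: 80
          - path: /api
            backend:
              serviceName: envoy    # API gateway Service (assumed name)
              servicePort: 8080
```
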

Network Traffic Overview

### Baseapp

The Baseapp frontend application is the user interface component for interacting with the other parts of OPEX.

It is a React-based application with all the tools necessary for wallet and order management, connecting the UI to actions on the Peatio and Barong backends.

The frontend component has the following structure:

img

Trade Page

Trade page is one of the main parts of the Frontend component and provides the following functionality:

  1. Creating orders for buying or selling CCY
  2. Choosing the most appropriate market and creating orders within it
  3. Monitoring of:
     • current market trades
     • the order book, displaying real-time transactions
     • the user's open orders
     • the user's trades
  4. Creating orders based on the order book of the current market
  5. A chart displaying completed buy and sell actions for CCY

The summary flow is shown in the following scheme:

```sequence
Title: Trade page
User->Market: Input amount and price of asked CCY
Market-->User: Match ask with appropriate bid
User->Market: Input amount and price of bid CCY
Market-->User: Match bid with appropriate ask
```

Wallets page

The wallets page gives the user the ability to manage their fiat and CCY wallets. This component is responsible for the deposit and withdrawal processes.

Deposit process:

```sequence
Title: Fiat Deposit
User->Frontend: Get information about bank account and SN
User->Bank: Include SN in payment description
Peatio->Frontend: Administrator accepts deposit
User->Frontend: Refresh page
Frontend-->User: Update balance
```

### Peatio
Peatio is the main cryptocurrency exchange component, facilitating the trade of cryptocurrencies for assets, conventional fiat money and an ever-growing array of other digital currencies.

![img](images/peatio-logo.png)

Crypto-Currency exchange component requirements:

- Website and server safety
- High performance
- Usability and scalability
- Highly configurable and extendable
- Support multiple digital currencies
- Support for FIAT currency
- Industry standard security

The Peatio.tech version of Peatio strives to meet and exceed all of these requirements.

#### Peatio subcomponents

##### Peatio daemons:

All Peatio daemons can be divided into two groups by functionality.

**Trading Daemons** are the ones that perform all trading actions from order creation to ticker and k-line updates:

- *Market Ticker* - updates market ticker when some orders or trades are created or updated.
- *Matching* - matches orders and sends them to amqp:trade_executor.
- *Order Processor* - processes cancellation and submission of orders.
- *Pusher Market* - delivers new public and private trade events to Ranger.
- *Pusher Member* - delivers private member events.
- *Slave Book* - periodically caches market depth in Redis. Market depth is needed for trading UI and market order estimation.
- *Global State* - sends orderbook to Ranger every 5 seconds.
- *K* - updates k-lines every 15 seconds. K-line data is used by the trading chart.
- *Trade Executor* - performs partial or full fulfillment of two orders, updates their state in DB and creates trades.

**Deposit-Withdraw Daemons** are the ones that perform all deposit and withdrawal operations from deposit detection to sending transactions to a blockchain:

- *Deposit Collection* - transfers incoming deposits from the deposit wallet to withdraw wallets (hot, warm, cold).
- *Deposit Collection Fees* - performs custom actions which are required before deposit collection (coin specific ones e.g. transfer ETH for sending ERC20).
- *Deposit Coin Address* - cryptocurrency deposit wallet address generation.
- *Withdraw Coin* - publishes signed transactions to a blockchain network.
- *Blockchain* - monitors a blockchain for incoming deposits and outgoing withdrawals and updates their state in the database.
- *Withdraw Audit* - validates withdrawals and submits them to the Withdraw Coin daemon.

##### Pluggable Coin API:

Peatio Plugin API v2 makes it possible to extend Peatio with any coin that fits the basic Blockchain and Wallet interfaces. This API lets you integrate new coins into Peatio without touching the source code or core Peatio business logic. To develop a new plugin, you just need to create a gem, inherit the Blockchain and Wallet abstract classes and put your coin's business logic there.

##### Admin panel:

- Currencies Summary

- Deposit/Withdraw management and processing

- Blockchains/Wallets/Currencies management

- Member Accounts and Funds management

- Accounting reports

#### Cryptocurrency exchange in action

##### Automated cryptocurrency deposit

```sequence
participant End User
participant Peatio
participant Peatio Worker
End User->Peatio: Visit cryptocurrency\ndeposit page
Peatio-->End User: Deposit address
Note left of End User: Blockchain\ntransaction
Note right of Peatio Worker: Deposit process
Peatio Worker-->Peatio: Confirm deposit
Note right of Peatio: Wallet stats\nrefresh
Peatio-->End User: Update balance
```

##### Trade flow

```sequence
participant End User
participant Peatio
participant Peatio Workers
End User->Peatio: Trade page
Peatio-->End User: Trade page
End User->Peatio: Bid/Ask order
Peatio->Peatio Workers: Process order request
Peatio Workers-->End User: Created order notification
Note right of Peatio Workers: Order matching
Peatio Workers-->Peatio: Stats refresh
Note right of Peatio: Close order
Peatio Workers-->End User: Processed order notification
```

### Barong

Barong is a KYC/AML component which acts as a central authentication and authorization system in OPEX.

KYC controls typically include the following:

  • Collection and analysis of basic identity information such as Identity documents (referred to in US regulations and practice as a "Customer Identification Program" or CIP)
  • Name matching against lists of known parties (such as "politically exposed person" or PEP)
  • Determination of the customer's risk in terms of propensity to commit money laundering, terrorist finance, or identity theft
  • Creation of an expectation of a customer's transactional behavior
  • Monitoring of a customer's transactions against expected behavior and recorded profile as well as that of the customer's peers

Barong is designed to be customizable using plugins and Applogic integrations so that any market regulator's requirements can be met with ease.

Core Barong features include:

  • KYC verification for users
  • Level-based KYC process
  • Role-based access control (RBAC)
  • Phone number verification
  • Two-factor authentication (2FA)
  • Transaction Signature support
  • Flexible Applogic and mobile app integration

Barong also acts as an authenticator (authz) for all incoming API requests, verifying them and only letting through those that pass all the security filters.

Registration flow

This flow is customer-specific and must comply with the actual market regulations.

Barong provides email verification, phone verification and document upload support out of the box.

Authentication flow

Authentication Flow Diagram

TOTP sign flow

TOTP Sign Flow Diagram

```sequence
Title: TOTP Sign withdraw and withdraw destination
User->Applogic: (Header JWT) TOTP sign request: action, data, nbf
Applogic->Barong: Forward to Barong
Barong->Barong: Action and payload saved
Barong->Vault: Create TOTP
Vault-->Barong: Send OTP to User via Barong
Barong->>User: Send OTP to User (sms/email/ga)
Barong-->Applogic: Ready to accept TOTP
Applogic-->User: TOTP form
User->Applogic: TOTP
Applogic->Barong: Check TOTP verified
Barong->Vault: Check TOTP verified
Vault-->Barong: Accepted
Barong-->Applogic: Signed document with nbf
Applogic->Peatio: Put document to queue
Peatio->Peatio: Lock funds / create withdraw destination
Peatio-->Applogic: Document accepted
Applogic-->User: Document accepted
```
Example create withdraw destination payload (coin):

```json
{
  "action": "withdraw_destination#create",
  "data": {
    "currency": "btc",
    "label": "My Bitcoin Wallet",
    "type": "coin",
    "address": "18VTUbTmBoXhZ9BJRKy2YMYuNbo8Xta8SQ"
  }
}
```
Example create withdraw destination payload (fiat):

```json
{
  "action": "withdraw_destination#create",
  "data": {
    "currency": "usd",
    "label": "My Bank Account",
    "type": "fiat",
    "bank_name": "International Bank",
    "bank_branch_name": "International Bank (branch #12345)",
    "bank_branch_address": "Planet Earth",
    "bank_identifier_code": "IB_12345_67890",
    "bank_account_number": "BAN123456789",
    "bank_account_holder_name": "John Doe"
  }
}
```
Example create withdraw payload:

```json
{
  "action": "withdraw#create",
  "data": {
    "amount": "10.0",
    "fee": "0.0005",
    "destination_id": "1"
  },
  "params": {
    "exp": 123456789,
    "nbf": 123456252
  }
}
```

Application registration flow

New Application Flow Diagram

```sequence
User->Barong: Login to /admin
Barong-->User: Ok
User->Barong: Navigate to applications
Barong-->User: Ok
User->Barong: Fill create application form
Barong-->User: Ok
```

### Applogic Application

Applogic is a component acting both as a proxy to the Barong and Peatio APIs used by the frontend, and as an extendable base application containing extra logic, e.g. a payment gateway or an interface to a third-party KYC/AML provider.

Business Cases

Third Party KYC/AML provider

```sequence
End User->Barong: Submit documents
Barong-->End User: Confirm upload
Barong->App Logic: Push Document upload event
App Logic->AML Provider: Submit document to AML Api
AML Provider-->App Logic: AML approval
App Logic->Barong: Update document state
Note right of Barong: Compliance approves
Barong->App Logic: Push KYC Approval event
App Logic-->End User: Notify user by email
```
Multi-signature withdrawal request

The process below describes an automated cryptocurrency withdrawal.

```sequence
End User->App Logic: Withdrawal request with OTP
App Logic->Barong: Ask Barong to verify OTP and sign
App Logic-->End User: Email confirmation link
End User-->App Logic: Click on confirmation link
App Logic->Peatio: Request withdraw with Barong and Applogic signature
Peatio->Peatio Worker: Submit withdrawal on network
Peatio Worker-->Peatio: TxID is returned
Peatio-->End User: Transaction confirmations
```
FIAT Withdrawal

```sequence
participant End User
participant Accountant
participant App Logic
participant Barong
End User->Barong: Request signed bank details
Barong-->End User: Returns OTP signed payload
End User->App Logic: Create beneficiary
App Logic-->End User: Beneficiary status
Note right of App Logic: Beneficiary is reviewed
End User->Barong: Request signed withdrawal
Barong-->End User: Returns OTP signed payload
End User->App Logic: Submit Withdrawal request
Note right of App Logic: Withdrawal is opened
App Logic-->End User: Return status
Note right of App Logic: Withdrawal is reviewed
Note left of Accountant: Issue bank transfer
Accountant->App Logic: Update withdrawal status
App Logic->Peatio: Submit withdrawal signed payload
Peatio-->App Logic: Confirm transaction
Note right of App Logic: Withdrawal is closed
App Logic-->End User: Email Confirmation
```
Payment Gateway

Deposit through a third party payment gateway:

```sequence
End User->App Logic: Select payment option
App Logic->Payment Gate: Enter payment details
Payment Gate-->App Logic: Webhook Callback
Note right of App Logic: Update order state
App Logic->Peatio: Submit signed deposit payload
Peatio-->App Logic: Return status
Note right of App Logic: Close Order
```
Manual Bank SWIFT deposit

```sequence
participant End User
participant Accountant
End User->App Logic: Visit bank deposit page
App Logic-->End User: Returns deposit SN
Note left of End User: Issue bank wire
Accountant->App Logic: Consolidate payment received
Note right of App Logic: Deposit confirmed
App Logic->Peatio: Submit signed deposit request
Peatio-->App Logic: Return status
Note right of App Logic: Close Deposit
App Logic-->End User: Send Email confirmation
```

### Backend services

#### Cloud SQL

img

Cloud SQL for MySQL is a fully-managed database service that makes it easy to set up, maintain, manage, and administer your MySQL relational databases on GCP.

img

Features:

  • Fully managed MySQL Community Edition databases in the cloud.
  • Second Generation instances support MySQL 5.6 or 5.7, and provide up to 416 GB of RAM and 10 TB of data storage, with the option to automatically increase the storage size as needed.
  • First Generation instances support MySQL 5.5 or 5.6, and provide up to 16 GB of RAM and 500 GB data storage.
  • Create and manage instances in the Google Cloud Platform Console.
  • Instances available in US, EU, or Asia.
  • Customer data encrypted on Google’s internal networks and in database tables, temporary files, and backups.
  • Support for secure external connections with the Cloud SQL Proxy or with the Secure Sockets Layer (SSL) protocol.
  • Data replication between multiple zones with automatic failover.
  • Import and export databases using mysqldump, or import and export CSV files.
  • Support for MySQL wire protocol and standard MySQL connectors.
  • Automated and on-demand backups, and point-in-time recovery.
  • Instance cloning.
  • Integration with Stackdriver logging and monitoring.
  • ISO/IEC 27001 compliant.
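
The secure tunnel between the cluster's private network and Cloud SQL mentioned in the network overview is typically established with the Cloud SQL Proxy. Below is a minimal, hypothetical pod spec fragment showing the common sidecar pattern; the application image, proxy image tag and instance connection name are placeholders, not the actual OPEX deployment values.

```yaml
# Hypothetical pod: the application talks to 127.0.0.1:3306, while the
# cloudsql-proxy sidecar keeps an encrypted tunnel to the Cloud SQL instance.
apiVersion: v1
kind: Pod
metadata:
  name: peatio-example                  # illustrative name
spec:
  containers:
    - name: app
      image: example/peatio:latest      # placeholder application image
      env:
        - name: DATABASE_HOST
          value: "127.0.0.1"            # DB traffic never leaves the pod
    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.16   # public proxy image (version is illustrative)
      command:
        - "/cloud_sql_proxy"
        - "-instances=my-project:europe-west1:opex-db=tcp:3306"  # placeholder instance connection name
```
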

#### Redis

img

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

In order to achieve its outstanding performance, Redis works with an in-memory dataset. Depending on your use case, you can persist it either by dumping the dataset to disk every once in a while, or by appending each command to a log. Persistence can be optionally disabled, if you just need a feature-rich, networked, in-memory cache.

Redis also supports trivial-to-setup master-slave asynchronous replication, with very fast non-blocking first synchronization, auto-reconnection with partial resynchronization on net split.

img

Other features include:

  • Transactions
  • Pub/Sub
  • Lua scripting
  • Keys with a limited time-to-live
  • LRU eviction of keys
  • Automatic failover

#### Vault

img

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, etc. Understanding who is accessing what secrets is already very difficult and platform-specific. Adding on key rolling, secure storage, and detailed audit logs is almost impossible without a custom solution. This is where Vault steps in.

The key features of Vault are:

  • Secure Secret Storage: Arbitrary key/value secrets can be stored in Vault. Vault encrypts these secrets prior to writing them to persistent storage, so gaining access to the raw storage isn't enough to access your secrets. Vault can write to disk, Consul, and more.

  • Dynamic Secrets: Vault can generate secrets on-demand for some systems, such as AWS or SQL databases. For example, when an application needs to access an S3 bucket, it asks Vault for credentials, and Vault will generate an AWS keypair with valid permissions on demand. After creating these dynamic secrets, Vault will also automatically revoke them after the lease is up.

  • Data Encryption: Vault can encrypt and decrypt data without storing it. This allows security teams to define encryption parameters and developers to store encrypted data in a location such as SQL without having to design their own encryption methods.

  • Leasing and Renewal: All secrets in Vault have a lease associated with them. At the end of the lease, Vault will automatically revoke that secret. Clients are able to renew leases via built-in renew APIs.

  • Revocation: Vault has built-in support for secret revocation. Vault can revoke not only single secrets, but a tree of secrets, for example all secrets read by a specific user, or all secrets of a particular type. Revocation assists in key rolling as well as locking down systems in the case of an intrusion.

img

Use case

At a bare minimum, Vault can be used for the storage of any secrets. For example, Vault would be a fantastic way to store sensitive environment variables, database credentials, API keys, etc.

Compare this with the current way to store these which might be plaintext in files, configuration management, a database, etc. It would be much safer to query these using vault read or the API. This protects the plaintext version of these secrets as well as records access in the Vault audit log.

In addition to being able to store secrets, Vault can be used to encrypt/decrypt data that is stored elsewhere. The primary use of this is to allow applications to encrypt their data while still storing it in the primary data store.

The benefit of this is that developers do not need to worry about how to properly encrypt data. The responsibility of encryption is on Vault and the security team managing it, and developers just encrypt/decrypt data as needed.

#### RabbitMQ

RabbitMQ is a messaging broker that supports multiple messaging protocols. It is lightweight and easy to deploy. RabbitMQ runs on many operating systems and cloud environments, and provides a wide range of developer tools for most popular languages. It ships in a state where it can be used straight away in simple cases such as development and QA environments - just start the server and it's ready to go.

img

Messaging brokers receive messages from publishers (applications that publish them, also known as producers) and route them to consumers (applications that process them).

Since messaging is based on a network protocol (AMQP), the publishers, consumers and the broker can all reside on different machines.

Features
  • Asynchronous Messaging - Supports multiple messaging protocols, message queuing, delivery acknowledgement, flexible routing to queues, and multiple exchange types.
  • Developer Experience - Deploy with BOSH, Chef, Docker and Puppet. Develop cross-language messaging with your favorite programming languages, such as Java, .NET, PHP, Python, JavaScript, Ruby, Go and many others.
  • Distributed Deployment - Deploy as clusters for high availability and throughput; federate across multiple availability zones and regions.
  • Enterprise & Cloud Ready - Pluggable authentication and authorization, support for TLS and LDAP. Lightweight and easy to deploy in public and private clouds.
  • Tools & Plugins - A diverse array of tools and plugins supporting continuous integration, operational metrics, and integration with other enterprise systems. A flexible plug-in approach for extending RabbitMQ functionality.
  • Management & Monitoring - HTTP API, command line tool, and UI for managing and monitoring RabbitMQ.

#### BitGo

Every cryptoexchange needs an ultimately secure wallet solution, and that is not the easiest thing to build in a cloud environment. However, BitGo comes to the rescue, offering maximum-security multisignature wallets as a service.

img

BitGo offers a very secure wallet solution which remains approachable by the average consumer. Convenience and security often do not mix well, but BitGo shows it can be done. It does not provide users with much anonymity, but that is not something most users are looking for. There are also no extra features to take advantage of, but that is not a deal breaker either.

Its technology solves the most difficult security, compliance and custodial problems associated with blockchain-based currencies, enabling the integration of digital currency into the global financial system.

For any Ethereum users there is an Ethereum-based wallet which uses the same Bitgo technology. In fact, it is built by most of the engineers who are working on the Bitcoin wallet. This project is known as Ether.li, and features all of the same aspects one can find in the Bitgo Bitcoin wallet. It provides multisignature solutions for Ethereum users, which will be appreciated by some users. Bringing this functionality to different cryptocurrencies is a smart strategy by Bitgo.

Using the web version means private keys are not generated in a secure manner, even though users can protect them with a password. The mobile apps work as one would expect, without any unnecessary bells and whistles. Advanced users not looking for multisignature support may find Bitgo a bit boring, but for novice users, it is a very powerful solution. There is no reason not to give Bitgo wallet a try.

Supported currencies:

img

#### Envoy

Envoy is an L7 proxy and communication bus designed for large modern service oriented architectures.

Envoy proxy

Envoy uses a single process with multiple threads architecture. A single master thread controls various sporadic coordination tasks while some number of worker threads perform listening, filtering, and forwarding.

img

Scaling and availability

Envoy relies on Kubernetes for scaling, high availability, and persistence. All Envoy configuration is stored directly in Kubernetes as a configmap; there is no database. Envoy Proxy is packaged as a single container. By default, Envoy is deployed as a Kubernetes deployment and can be scaled and managed like any other Kubernetes deployment.

Usage flow
  1. Configuration for services is defined in envoy.yaml (a minimal sketch is shown after this list).
  2. When Envoy is deployed, the configuration is stored in a configmap and mounted into the container.
  3. When the configmap changes or Envoy is redeployed, the container is recreated with the new configuration.
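
The snippet below is a minimal sketch of what such an envoy.yaml could look like for a single API route. It follows the Envoy v2 configuration style; the filter layout is standard, but the cluster, service names and ports are illustrative, not the exact OPEX gateway configuration.

```yaml
# Hypothetical envoy.yaml fragment: route /api/v2/barong to the barong cluster.
static_resources:
  listeners:
    - address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                stat_prefix: ingress_http
                route_config:
                  virtual_hosts:
                    - name: api
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/api/v2/barong" }
                          route: { cluster: barong }
                http_filters:
                  - name: envoy.router
  clusters:
    - name: barong                      # assumed upstream service name
      connect_timeout: 0.5s
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      hosts:
        - socket_address: { address: barong, port_value: 8080 }
```
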

### Kite infrastructure

The OpenWare OPEX infrastructure is a diverse and expansive topic. To work as a fail-safe, zero-downtime platform, it relies on a cloud platform proven by years of usage and a carefully chosen set of DevOps tools. However, it is hard to keep all these things under control and in harmony. Thus, we have created one tool to rule them all and help people get up to the clouds: Kite.

Kite architecture

Kite is a CLI for scaffolding and managing DevOps environments, enhancing different tools by fusing them into one workflow.

Purpose

Kite follows Ruby on Rails guidelines in terms of environment handling and file scaffolding.

Main Kite features are:

  • Modular architecture
  • Fast and simple code scaffolding
  • Support for isolated DevOps environments
  • Provides a Terraform wrapper
  • Ability to interconnect most existing DevOps tools
  • Designed for convenient team collaboration

Components

img

A diagram depicting the Kite project structure

Every Kite deployment consists of the following components:

  • Project is a base skeleton which consists only of a configuration file for future environments, a Kite executable and initial documentation
  • Environment is a separated directory with its own credentials and modules which contains base files for connection to the IaaS
  • Module is a repository consisting of scripts, configs and basically any types of files which form a component or a whole stack when coupled together

Workflow

img

Typical Kite workflow is as follows:

  • Create a new project
  • Generate an environment
  • Initialize a module in this environment
  • Fill in all the module’s variables and render it

#### Docker

Every cloud deployment needs a virtualization provider and our team has found Docker to be the best fit.

Docker logo

The Docker platform is the only container platform to build, secure and manage the widest array of applications from development to production both on premises and in the cloud.

Some of its key features are:

  • Agility - accelerate software development and deployment by 13X and respond instantly to customer needs
  • Portability - eliminate the “works on my machine” once and for all. Gain independence across on-prem and cloud environments
  • Security - deliver applications safer across the entire lifecycle with built in security capabilities and configurations out of the box
  • Cost savings - optimize the use of your infrastructure resources and streamline operations to save 50% in total costs.
  • Simplicity - Docker makes powerful tools for application creation and orchestration, accessible to everyone
  • Openness - built with open source technology and a modular design makes it easy to integrate into your existing environment
  • Independence - Docker creates a separation of concerns between developers and IT and between applications and infrastructure to unlock innovation

What is a Container

Docker composition

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.

Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.

Key points of containers:

  • Lightweight

    Docker containers running on a single machine share that machine's operating system kernel; they start instantly and use less compute and RAM. Images are constructed from filesystem layers and share common files. This minimizes disk usage and image downloads are much faster

  • Standard

    Docker containers are based on open standards and run on all major Linux distributions, Microsoft Windows, and on any infrastructure including VMs, bare-metal and in the cloud

  • Secure

    Docker containers isolate applications from one another and from the underlying infrastructure. Docker provides the strongest default isolation to limit app issues to a single container instead of the entire machine

Comparing Containers and VMs

Containers and virtual machines have similar resource isolation and allocation benefits, but function differently because containers virtualize the operating system instead of hardware. Containers are more portable and efficient.

Docker container

Containers

Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), and start almost instantly.

VM

Virtual Machines

Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries - taking up tens of GBs. VMs can also be slow to boot.

Summary

Docker is a breakthrough technology allowing for complex cloud deployments which are fast, reliable and immutable. Because of this Docker is a base for more complex cloud technologies, including Kubernetes and Docker Compose.
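
As a small illustration of the Compose style mentioned above, here is a hypothetical docker-compose.yml running two containers side by side; the image names and ports are placeholders, not part of the actual OPEX stack definition.

```yaml
# Hypothetical two-service stack: a frontend container and a Redis cache.
version: "3"
services:
  frontend:
    image: example/baseapp:latest   # placeholder image
    ports:
      - "8080:80"                   # host port 8080 -> container port 80
  redis:
    image: redis:5-alpine           # official Redis image
```
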

#### Google Cloud Platform

Every cloud platform needs a reliable infrastructure provider. Our team did thorough research on the topic and chose the Google Cloud Platform (GCP) as the best candidate because of its speed, competitive pricing and progressive design.

GCP logo

GCP resources

GCP consists of a set of physical assets, such as computers and hard disk drives, and virtual resources, such as virtual machines (VMs), that are contained in Google's data centers around the globe. Each data center location is in a global region. Regions include Central US, Western Europe, and East Asia. Each region is a collection of zones, which are isolated from each other within the region. Each zone is identified by a name that combines a letter identifier with the name of the region. For example, zone a in the East Asia region is named asia-east1-a.

This distribution of resources provides several benefits, including redundancy in case of failure and reduced latency by locating resources closer to clients. This distribution also introduces some rules about how resources can be used together.

Accessing resources through services

In cloud computing, what you might be used to thinking of as software and hardware products become services. These services provide access to the underlying resources. The list of available GCP services is long, and it keeps growing. When you develop your website or application on GCP, you mix and match these services into combinations that provide the infrastructure you need, and then add your code to enable the scenarios you want to build.

Global, regional, and zonal resources

Some resources can be accessed by any other resource, across regions and zones. These global resources include preconfigured disk images, disk snapshots, and networks. Some resources can be accessed only by resources that are located in the same region. These regional resources include static external IP addresses. Other resources can be accessed only by resources that are located in the same zone. These zonal resources include VM instances, their types, and disks.

The following diagram shows the relationship between global scope, regions and zones, and some of their resources:

GCP global scope

Projects

Any GCP resources that you allocate and use must belong to a project. You can think of a project as the organizing entity for what you're building. A project is made up of the settings, permissions, and other metadata that describe your applications. Resources within a single project can work together easily, for example by communicating through an internal network, subject to the regions-and-zones rules. The resources that each project contains remain separate across project boundaries; you can only interconnect them through an external network connection.

Each GCP project has:

  • A project name, which you provide.
  • A project ID, which you can provide or GCP can provide for you.
  • A project number, which GCP provides.

Google Cloud Platform Console

The Google Cloud Platform Console provides a web UI

The Google Cloud Platform Console provides a web-based, graphical user interface that you can use to manage your GCP projects and resources. When you use the GCP Console, you create a new project, or choose an existing project, and use the resources that you create in the context of that project. You can create multiple projects, so you can use projects to separate your work in whatever way makes sense for you. For example, you might start a new project if you want to make sure only certain team members can access the resources in that project, while all team members can continue to access resources in another project.

Command-line interface

If you prefer to work in a terminal window, the Google Cloud SDK provides the gcloud command-line tool, which gives you access to the commands you need. The gcloud tool can be used to manage both your development workflow and your GCP resources. See the gcloud reference for the complete list of available commands.

GCP also provides Cloud Shell, a browser-based, interactive shell environment for GCP. You can access Cloud Shell from the GCP console.

GCP Cloud Shell interface

Cloud Shell provides:

  • A temporary Compute Engine virtual machine instance.
  • Command-line access to the instance from a web browser.
  • A built-in code editor.
  • 5 GB of persistent disk storage.
  • Pre-installed Google Cloud SDK and other tools.
  • Language support for Java, Go, Python, Node.js, PHP, Ruby and .NET.
  • Web preview functionality.
  • Built-in authorization for access to GCP Console projects and resources.

Computing and hosting services

GCP gives you options for computing and hosting. You can choose to:

  • Work in a serverless environment.
  • Use a managed application platform.
  • Leverage container technologies to gain lots of flexibility.
  • Build your own cloud-based infrastructure to have the most control and flexibility.

You can imagine a spectrum where, at one end, you have most of the responsibilities for resource management and, at the other end, Google has most of those responsibilities:

GCP Ops Workflow

Containers

With container-based computing, you can focus on your application code, instead of on deployments and integration into hosting environments. Google Kubernetes Engine, GCP's containers-as-a-service (CaaS) offering, is built on the open source Kubernetes system, which gives you the flexibility of on-premises or hybrid clouds, in addition to GCP's public cloud infrastructure.

When you build with Kubernetes Engine, you can:

  • Create and manage groups of Compute Engine instances running Kubernetes, called clusters. Kubernetes Engine uses Compute Engine instances as nodes in a cluster. Each node runs the Docker runtime, a Kubelet agent that monitors the health of the node, and a simple network proxy.
  • Use Google Container Registry for secure, private storage of Docker images. You can push images to your registry and then you can pull images to any Compute Engine instance or your own hardware by using an HTTP endpoint.
  • Create an external network load balancer.

Virtual machines

GCP's unmanaged compute service is Google Compute Engine. You can think of Compute Engine as providing an infrastructure as a service (IaaS), because the system provides a robust computing infrastructure, but you must choose and configure the platform components that you want to use. With Compute Engine, it's your responsibility to configure, administer, and monitor the systems. Google will ensure that resources are available, reliable, and ready for you to use, but it's up to you to provision and manage them. The advantage, here, is that you have complete control of the systems and unlimited flexibility.

When you build on Compute Engine, you can:

  • Use virtual machines (VMs), called instances, to build your application, much like you would if you had your own hardware infrastructure. You can choose from a variety of instance types to customize your configuration to meet your needs and your budget.
  • Choose which global regions and zones to deploy your resources in, giving you control over where your data is stored and used.
  • Choose which operating systems, development stacks, languages, frameworks, services, and other software technologies you prefer.
  • Create instances from public or private images.
  • Use GCP storage technologies or any third-party technologies you prefer.
  • Create instance groups to more easily manage multiple instances together.
  • Use autoscaling with an instance group to automatically add and remove capacity.
  • Attach and detach disks as needed.
  • Use SSH to connect directly to your instances.

Storage services

Whatever your application, you'll probably need to store some data. GCP provides a variety of storage services, including:

  • A SQL database in Cloud SQL, which provides either MySQL or PostgreSQL databases.
  • A fully managed, mission-critical, relational database service in Cloud Spanner that offers transactional consistency at global scale, schemas, SQL querying, and automatic, synchronous replication for high availability.
  • Two options for NoSQL data storage: Cloud Datastore and Cloud Bigtable.
  • Consistent, scalable, large-capacity data storage in Cloud Storage. Cloud Storage comes in several flavors:
    • Multi-Regional provides maximum availability and geo-redundancy.
    • Regional provides high availability and a localized storage location.
    • Nearline provides low-cost archival storage ideal for data accessed less than once a month.
    • Coldline provides the lowest-cost archival storage for backup and disaster recovery.
  • Persistent disks on Compute Engine, for use as primary storage for your instances. Compute Engine offers both hard-disk-based persistent disks, called standard persistent disks, and solid-state persistent disks (SSD).

Summary

Google Cloud Platform is a progressive Infrastructure-as-a-Service provider with a diverse product lineup and an innovative approach to cloud computing. Our team's experience shows that, like a good wine, GCP keeps getting better over the years, and every one of our DevOps engineers would choose GCP over any other cloud IaaS.

#### Terraform

For most of the initial IaaS interaction (resource creation and provisioning), Kite relies on Terraform.

Terraform logo

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing and popular service providers as well as custom in-house solutions.

The key features of Terraform are:

  • Infrastructure as Code

    Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

  • Execution Plans

    Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

  • Resource Graph

    Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

  • Change Automation

    Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

#### Packer

Every VM-based deployment greatly benefits from having standardized, stored-as-code, pre-baked machine images which can be deployed in a matter of seconds. For this purpose Kite relies on Packer.

Packer logo

Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer does not replace configuration management like Chef or Puppet. In fact, when building images, Packer is able to use tools like Chef or Puppet to install software onto the image.

A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines. Machine image formats change for each platform. Some examples include AMIs for EC2, VMDK/VMX files for VMware, OVF exports for VirtualBox, etc.

Advantages of using Packer
  • Super fast infrastructure deployment. Packer images allow you to launch completely provisioned and configured machines in seconds, rather than several minutes or hours. This benefits not only production, but development as well, since development virtual machines can also be launched in seconds, without waiting for a typically much longer provisioning time.
  • Multi-provider portability. Because Packer creates identical images for multiple platforms, you can run production in AWS, staging/QA in a private cloud like OpenStack, and development in desktop virtualization solutions such as VMware or VirtualBox. Each environment is running an identical machine image, giving ultimate portability.
  • Improved stability. Packer installs and configures all the software for a machine at the time the image is built. If there are bugs in these scripts, they'll be caught early, rather than several minutes after a machine is launched.
  • Greater testability. After a machine image is built, that machine image can be quickly launched and smoke tested to verify that things appear to be working. If they are, you can be confident that any other machines launched from that image will function properly.

Summary

Packer plays a major role in keeping deployments stable, persistent and predictable. The ability to create images fit for any platform allows companies to manage multi-provider environments with ease when it comes to VM management.

#### Drone

Drone is a self-service Continuous Delivery platform for busy development teams.

img

Configuration as code

Drone searches for a configuration file, drone.yml, in the repository that is authorized within your Drone server. An example drone.yml:

```yaml
kind: pipeline
name: default

steps:
- name: frontend
  image: node
  commands:
  - npm install
  - npm test

- name: backend
  image: golang
  commands:
  - go build
  - go test
```

Pipeline

Pipelines are configured with a simple, easy‑to‑read file that you commit to your git repository.

Each Pipeline step is executed inside an isolated Docker container that is automatically downloaded at runtime.

Plugins

Plugins are docker containers that encapsulate commands, and can be shared and re-used in your pipeline. Examples of plugins include sending Slack notifications, building and publishing Docker images, and uploading artifacts to S3.

Example Slack plugin:

```yaml
- name: notify
  image: plugins/slack
  settings:
    room: general
    webhook: https://...
```

Advantages
  • Streamlined development process

  • Smaller gaps between development, QA and deployment

  • Optimized build process

  • Reduced probability of shipping broken code

  • Faster code review process

    img

Example pipelines

Pipeline example

Example review pipeline

Pipeline example

Example release candidate pipeline

Pipeline example

#### Kubernetes

Kubernetes Logo

Production-Grade Container Orchestration

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Google open-sourced the Kubernetes project in 2014. Kubernetes groups the containers that make up an application into logical units for easy management and discovery, and builds upon 15 years of Google's experience running production workloads at scale, combined with best-of-breed ideas and practices from the community.

  • Planet Scale

    Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your ops team.

  • Never Outgrow

    Whether testing locally or running a global enterprise, Kubernetes' flexibility grows with you to deliver your applications consistently and easily, no matter how complex your needs are.

  • Run Anywhere

    Kubernetes is open source, giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.

Why containers

Why containers Image

Looking for reasons why you should be using containers?

The Old Way to deploy applications was to install the applications on a host using the operating-system package manager. This had the disadvantage of entangling the applications’ executables, configuration, libraries, and lifecycles with each other and with the host OS. One could build immutable virtual-machine images in order to achieve predictable rollouts and rollbacks, but VMs are heavyweight and non-portable.

The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.

Because containers are small and fast, one application can be packed in each container image. This one-to-one application-to-image relationship unlocks the full benefits of containers. With containers, immutable container images can be created at build/release time rather than deployment time, since each application doesn’t need to be composed with the rest of the application stack, nor married to the production infrastructure environment. Generating container images at build/release time enables a consistent environment to be carried from development into production. Similarly, containers are vastly more transparent than VMs, which facilitates monitoring and management. This is especially true when the containers’ process lifecycles are managed by the infrastructure rather than hidden by a process supervisor inside the container. Finally, with a single application per container, managing the containers becomes tantamount to managing deployment of the application.

Summary of container benefits:

  • Agile application creation and deployment: Increased ease and efficiency of container image creation compared to VM image use.

  • Continuous development, integration, and deployment: Provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).

  • Dev and Ops separation of concerns: Create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.

  • Observability: Not only surfaces OS-level information and metrics, but also application health and other signals.

  • Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.

  • Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else.

  • Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.

  • Loosely coupled, distributed, elastic, liberated micro-services: Applications are broken into smaller, independent pieces and can be deployed and managed dynamically, not as a fat monolithic stack running on one big single-purpose machine.

  • Resource isolation: Predictable application performance.

  • Resource utilization: High efficiency and density.

For more information about Kubernetes, visit the official Kubernetes documentation.
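
To illustrate the declarative configuration mentioned above, here is a minimal, hypothetical Deployment manifest; the name, labels and image are placeholders. Applying it asks Kubernetes to keep three identical replicas of the container running and to replace them automatically if they fail.

```yaml
# Hypothetical Deployment: three replicas of a stateless web component.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: baseapp-example               # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: baseapp-example
  template:
    metadata:
      labels:
        app: baseapp-example
    spec:
      containers:
        - name: web
          image: example/baseapp:latest   # placeholder image
          ports:
            - containerPort: 80
```
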

#### Helm

Helm Logo

The package manager for Kubernetes

Helm is the best way to find, share, and use software built for Kubernetes. Helm makes working with Kubernetes much faster and more flexible. Helm helps you manage Kubernetes applications: Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

Charts are easy to create, version, share, and publish - so start using Helm and stop the copy-and-paste madness. Helm already has a huge library of ready-made, highly flexible charts that can be deployed to your Kubernetes cluster in no time.

The latest version of Helm is maintained by the CNCF - in collaboration with Microsoft, Google, Bitnami and the Helm contributors community.

Purpose at a high level

Helm Architecture

Helm is a tool for managing Kubernetes packages called charts. Helm can do the following:

  • Create new charts from scratch
  • Package charts into chart archive (tgz) files
  • Interact with chart repositories where charts are stored
  • Install and uninstall charts into an existing Kubernetes cluster
  • Manage the release cycle of charts that have been installed with Helm

For Helm, there are three important concepts:

  1. The chart is a bundle of information necessary to create an instance of a Kubernetes application.
  2. The config contains configuration information that can be merged into a packaged chart to create a releasable object.
  3. A release is a running instance of a chart, combined with a specific config.
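
As a minimal sketch of these concepts, a chart is essentially a directory of metadata, default values and templates. The fragments below use the Helm 2 chart format and purely hypothetical names; they are not an actual OPEX chart.

```yaml
# Chart.yaml - chart metadata (name and version are illustrative)
apiVersion: v1
name: my-app
version: 0.1.0
description: A hypothetical OPEX component packaged as a Helm chart
```

```yaml
# values.yaml - default config merged into the chart's templates at install time;
# overriding these values when installing produces a differently configured release.
image:
  repository: example/my-app      # placeholder image
  tag: latest
replicaCount: 2
```
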
Helm Components

Helm has two major components:

  1. The Helm Client is a command-line client for end users. The client is responsible for the following domains:
    • Local chart development
    • Managing repositories
    • Interacting with the Tiller server:
      • Sending charts to be installed
      • Asking for information about releases
      • Requesting upgrading or uninstalling of existing releases
  2. The Tiller Server is an in-cluster server that interacts with the Helm client, and interfaces with the Kubernetes API server. The server is responsible for the following:
    • Listening for incoming requests from the Helm client
    • Combining a chart and configuration to build a release
    • Installing charts into Kubernetes, and then tracking the subsequent release
    • Upgrading and uninstalling charts by interacting with Kubernetes

In a nutshell, the client is responsible for managing charts, and the server is responsible for managing releases.