[TOC]
The influence of cryptocurrency (CCY) on the world's financial markets and service industry is gaining ground on traditional financial models at a steady and seemingly relentless pace. With increased media coverage and volatile market behavior, it continues to attract businesses, traders, crypto-enthusiasts and everyday world citizens who believe in the long-term value of a relatively small investment today.
However, a list of serious problems has plagued this industry throughout its early stages, and they are typically related to the centralized nature of every exchange: unbalanced loads, lack of tech support, inability to manage registration parameters, undetected or unresolved security holes and the like. These problems invariably lead to financial losses, with hundreds of millions of euros lost in standalone incidents. Billions have found their way into hackers' wallets or been misappropriated due to a lack of internal security measures.
Understanding the landscape of technology enabling the cryptocurrency market, our team of seasoned FinTech DevOps engineers and cryptocurrency experts decided to create OpenWare OPEX. OPEX is a complete cryptocurrency exchange solution, designed and crafted specifically for high-load, zero-downtime, autoscaling and secure deployments.
This document aims to describe every aspect of OPEX in a comprehensive and understandable way.
The OPEX system consists of 6 main subsystems:
You can find the dedicated documentation for the platform installation and administration here.
Users access services through a cloud load balancer, which distributes requests across Kubernetes cluster nodes. Application components run in pods (Docker containers) in the Kubernetes cluster, and communication between pods happens on a private network layer inside the cluster. All the cluster nodes are secured behind a NAT without public IPs, so they cannot be accessed from the outside. A Cloud SQL service is used as the central database; a secure TCP tunnel is established between the Kubernetes private network layer and the Cloud SQL service to keep connections private.
All the traffic flowing into the stack can be separated into two directions:
The Baseapp frontend application is the user interface component for interacting with the other parts of OPEX.
It is a React-based application with all the tools necessary for wallet and order management; it connects the UI to actions on the Peatio and Barong backends.
The frontend component has the following structure:
The trade page is one of the main parts of the frontend component and provides the following functionality:
The summary architecture is shown in the following diagram:
```sequence
Title: Trade page
User->Market: Input amount and price of asked CCY
Market-->User: Match ask with appropriate bid
User->Market: Input amount and price of bid CCY
Market-->User: Match bid with appropriate ask
```
The wallets page gives the user the ability to manage their fiat and CCY wallets. This component is responsible for the deposit and withdrawal process.
```sequence
Title: Fiat Deposit
User->Frontend: Get information about bank account and SN
User->Bank: Include SN in payment description
Peatio->Frontend: Administrator accepts deposit
User->Frontend: Refresh page
Frontend-->User: Update balance
```
### Peatio
Peatio is the main cryptocurrency exchange component, facilitating the trade of cryptocurrencies for assets, conventional fiat and an ever-growing array of digital currencies.
![img](images/peatio-logo.png)
Crypto-Currency exchange component requirements:
- Website and server safety
- High performance
- Usability and scalability
- Highly configurable and extendable
- Support for multiple digital currencies
- Support for FIAT currency
- Industry standard security
The Peatio.tech version of Peatio strives to meet and go beyond all of these requirements.
#### Peatio subcomponents
##### Peatio daemons:
All Peatio daemons can be divided into two groups by functionality.
**Trading Daemons** are the ones that perform all trading actions from order creation to ticker and k-line updates:
- *Market Ticker* - updates market ticker when some orders or trades are created or updated.
- *Matching* - matches orders and sends them to amqp:trade_executor.
- *Order Processor* - processes cancellation and submission of orders.
- *Pusher Market* - delivers new public and private trade events to Ranger.
- *Pusher Member* - delivers private member events.
- *Slave Book* - periodically caches market depth in Redis. Market depth is needed for trading UI and market order estimation.
- *Global State* - sends orderbook to Ranger every 5 seconds.
- *K* - updates k-lines every 15 seconds. K-line data is used by the trading chart.
- *Trade Executor* - performs partial or full fulfillment of two orders, updates their state in DB and creates trades.
**Deposit-Withdraw Daemons** are the ones that perform all deposit and withdrawal operations from deposit detection to sending transactions to a blockchain:
- *Deposit Collection* - transfers incoming deposits from the deposit wallet to withdraw wallets (hot, warm, cold).
- *Deposit Collection Fees* - performs custom actions required before deposit collection (coin-specific ones, e.g. transferring ETH needed to send ERC20 tokens).
- *Deposit Coin Address* - cryptocurrency deposit wallet address generation.
- *Withdraw Coin* - publishes signed transactions to a blockchain network.
- *Blockchain* - monitors a blockchain for incoming deposits and outgoing withdrawals and updates their state in the database.
- *Withdraw Audit* - validates withdrawals and submits them to the Withdraw Coin daemon.
##### Pluggable Coin API:
Peatio Plugin API v2 gives the ability to extend Peatio with any coin that fits the basic Blockchain and Wallet interfaces. This API lets you integrate new coins into Peatio without touching the source code or core Peatio business logic. To develop a new plugin, you just need to create a gem, inherit from the Blockchain and Wallet abstract classes and implement your coin's business logic there.
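To illustrate, a minimal plugin gem might look like the sketch below. The abstract base classes come from the Peatio plugin API; the coin name, file layout and method bodies are illustrative, and the exact abstract method set may differ between Peatio versions.

```ruby
# lib/my_coin.rb -- hypothetical plugin gem for a coin called "MyCoin"
require "peatio"

module MyCoin
  class Blockchain < Peatio::Blockchain::Abstract
    def configure(settings = {})
      @settings = settings # node endpoint, currency IDs, etc. from the admin panel
    end

    def latest_block_number
      # Query the coin node (e.g. over JSON-RPC) for the current chain height.
    end

    def fetch_block!(block_number)
      # Return the block's deposit/withdrawal transactions to the Blockchain daemon.
    end
  end

  class Wallet < Peatio::Wallet::Abstract
    def configure(settings = {})
      @settings = settings
    end

    def create_address!(options = {})
      # Ask the wallet node for a fresh deposit address.
    end

    def create_transaction!(transaction, options = {})
      # Build, sign and broadcast a withdrawal transaction.
    end
  end
end

# Registration (typically done from the gem's Railtie) makes the coin
# selectable from the Peatio admin panel:
# Peatio::Blockchain.registry[:mycoin] = MyCoin::Blockchain
# Peatio::Wallet.registry[:mycoin]     = MyCoin::Wallet
```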
##### Admin panel:
- Currencies Summary
- Deposit/Withdraw management and processing
- Blockchains/Wallets/Currencies management
- Member Accounts and Funds management
- Accounting reports
#### Cryptocurrency exchange in action
##### Automated cryptocurrency deposit
```sequence
participant End User
participant Peatio
participant Peatio Worker
End User->Peatio: Visit cryptocurrency\ndeposit page
Peatio-->End User: Deposit address
Note left of End User: Blockchain\ntransaction
Note right of Peatio Worker: Deposit process
Peatio Worker-->Peatio: Confirm deposit
Note right of Peatio: Wallet stats\nrefresh
Peatio-->End User: Update balance
```

##### Order submission and matching

```sequence
participant End User
participant Peatio
participant Peatio Workers
End User->Peatio: Trade page
Peatio-->End User: Trade page
End User->Peatio: Bid/Ask order
Peatio->Peatio Workers: Process order request
Peatio Workers-->End User: Created order notification
Note right of Peatio Workers: Order matching
Peatio Workers-->Peatio: Stats refresh
Note right of Peatio: Close order
Peatio Workers-->End User: Processed order notification
```
Barong is a KYC/AML component which acts as a central authentication and authorization system in OPEX.
KYC controls typically include the following:
Barong is designed to be customizable using plugins and Applogic integrations so that any market regulator's requirements can be met with ease.
Core Barong features include:
Barong also acts as an authentication and authorization service (authz) for all incoming API requests, verifying them and only letting through the ones that pass all the security filters.
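For illustration, a downstream Ruby service could verify a Barong-issued JWT roughly as in the sketch below. It uses the ruby-jwt gem; the key file name and the listed claims are assumptions for the example, not the exact Barong implementation.

```ruby
require "jwt"
require "openssl"

# Public key of the Barong signing keypair, distributed to downstream services.
BARONG_PUBLIC_KEY = OpenSSL::PKey::RSA.new(File.read("barong_public.pem"))

# Returns the decoded payload (e.g. uid, email, role) or nil if the request
# should be rejected.
def verified_payload(token)
  payload, _header = JWT.decode(token, BARONG_PUBLIC_KEY, true, algorithm: "RS256")
  payload
rescue JWT::DecodeError
  nil # signature invalid, token expired or malformed
end
```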
This flow is customer-specific and must comply with the actual regulations of the target market.
Barong provides email verification, phone verification and document upload support out of the box.
```sequence
Title: TOTP Sign withdraw and withdraw destination
User->Applogic: (Header JWT) TOTP sign request: action, data, nbf
Applogic->Barong: Forward to Barong
Barong->Barong: Action and payload saved
Barong->Vault: Create TOTP
Vault-->Barong: Send OTP to User via Barong
Barong->>User: Send OTP to User (sms/email/ga)
Barong-->Applogic: Ready to accept TOTP
Applogic-->User: TOTP form
User->Applogic: TOTP
Applogic->Barong: Check TOTP verified
Barong->Vault: Check TOTP verified
Vault-->Barong: Accepted
Barong-->Applogic: Signed document with nbf
Applogic->Peatio: Put document to queue
Peatio->Peatio: Lock funds / create withdraw destination
Peatio-->Applogic: Document accepted
Applogic-->User: Document accepted
```
```json
{
  "action": "withdraw_destination#create",
  "data": {
    "currency": "btc",
    "label": "My Bitcoin Wallet",
    "type": "coin",
    "address": "18VTUbTmBoXhZ9BJRKy2YMYuNbo8Xta8SQ"
  }
}
```

```json
{
  "action": "withdraw_destination#create",
  "data": {
    "currency": "usd",
    "label": "My Bank Account",
    "type": "fiat",
    "bank_name": "International Bank",
    "bank_branch_name": "International Bank (branch #12345)",
    "bank_branch_address": "Planet Earth",
    "bank_identifier_code": "IB_12345_67890",
    "bank_account_number": "BAN123456789",
    "bank_account_holder_name": "John Doe"
  }
}
```

```json
{
  "action": "withdraw#create",
  "data": {
    "amount": "10.0",
    "fee": "0.0005",
    "destination_id": "1"
  },
  "params": {
    "exp": 123456789,
    "nbf": 123456252
  }
}
```
```sequence
User->Barong: Login to /admin
Barong-->User: Ok
User->Barong: Navigate to applications
Barong-->User: Ok
User->Barong: Fill create application form
Barong-->User: Ok
```
Applogic is a component acting both as a proxy to the Barong and Peatio APIs used by the frontend and as an extendable base application containing extra logic, e.g. a payment gateway or an interface to a third-party KYC/AML provider.
```sequence
End User->Barong: Submit documents
Barong-->End User: Confirm upload
Barong->App Logic: Push Document upload event
App Logic->AML Provider: Submit document to AML Api
AML Provider-->App Logic: AML approval
App Logic->Barong: Update document state
Note right of Barong: Compliance approves
Barong->App Logic: Push KYC Approval event
App Logic-->End User: Notify user by email
```
The process below describes an automated cryptocurrency withdrawal.
```sequence
End User->App Logic: Withdrawal request with OTP
App Logic->Barong: Ask Barong to verify OTP and sign
App Logic-->End User: Email confirmation link
End User-->App Logic: Click on confirmation link
App Logic->Peatio: Request withdraw with Barong and Applogic signature
Peatio->Peatio Worker: Submit withdrawal on network
Peatio Worker-->Peatio: TxID is returned
Peatio-->End User: Transaction confirmations
```
```sequence
participant End User
participant Accountant
participant App Logic
participant Barong
End User->Barong: Request signed bank details
Barong-->End User: Returns OTP signed payload
End User->App Logic: Create beneficiary
App Logic-->End User: Beneficiary status
Note right of App Logic: Beneficiary is reviewed
End User->Barong: Request signed withdrawal
Barong-->End User: Returns OTP signed payload
End User->App Logic: Submit Withdrawal request
Note right of App Logic: Withdrawal is opened
App Logic-->End User: Return status
Note right of App Logic: Withdrawal is reviewed
Note left of Accountant: Issue bank transfer
Accountant->App Logic: Update withdrawal status
App Logic->Peatio: Submit withdrawal signed payload
Peatio-->App Logic: Confirm transaction
Note right of App Logic: Withdrawal is closed
App Logic-->End User: Email Confirmation
```
Deposit through a third-party payment gateway:

```sequence
End User->App Logic: Select payment option
App Logic->Payment Gate: Enter payment details
Payment Gate-->App Logic: Webhook Callback
Note right of App Logic: Update order state
App Logic->Peatio: Submit signed deposit payload
Peatio-->App Logic: Return status
Note right of App Logic: Close Order
```
```sequence
participant End User
participant Accountant
End User->App Logic: Visit bank deposit page
App Logic-->End User: Returns deposit SN
Note left of End User: Issue bank wire
Accountant->App Logic: Consolidate payment received
Note right of App Logic: Deposit confirmed
App Logic->Peatio: Submit signed deposit request
Peatio-->App Logic: Return status
Note right of App Logic: Close Deposit
App Logic-->End User: Send Email confirmation
```
Cloud SQL for MySQL is a fully-managed database service that makes it easy to set up, maintain, manage, and administer your MySQL relational databases on GCP.
Features:
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
In order to achieve its outstanding performance, Redis works with an in-memory dataset. Depending on your use case, you can persist it either by dumping the dataset to disk every once in a while, or by appending each command to a log. Persistence can be optionally disabled, if you just need a feature-rich, networked, in-memory cache.
Redis also supports trivial-to-setup master-slave asynchronous replication, with very fast non-blocking first synchronization, auto-reconnection with partial resynchronization on net split.
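As a simple illustration of how a component might use Redis as a short-lived cache (a hypothetical sketch using the redis Ruby gem; the key name and data layout are made up and do not mirror Peatio's actual implementation):

```ruby
require "redis"
require "json"

redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0"))

# Cache a market depth snapshot for the trading UI, expiring after 5 seconds.
depth = { bids: [["100.5", "0.3"]], asks: [["101.0", "0.7"]] }
redis.setex("demo:btcusd:depth", 5, depth.to_json)

# Read it back; returns nil once the TTL has elapsed.
cached = redis.get("demo:btcusd:depth")
puts JSON.parse(cached) if cached
```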
Other features include:
Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.
A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, etc. Understanding who is accessing what secrets is already very difficult and platform-specific. Adding on key rolling, secure storage, and detailed audit logs is almost impossible without a custom solution. This is where Vault steps in.
The key features of Vault are:
Secure Secret Storage: Arbitrary key/value secrets can be stored in Vault. Vault encrypts these secrets prior to writing them to persistent storage, so gaining access to the raw storage isn't enough to access your secrets. Vault can write to disk, Consul, and more.
Dynamic Secrets: Vault can generate secrets on-demand for some systems, such as AWS or SQL databases. For example, when an application needs to access an S3 bucket, it asks Vault for credentials, and Vault will generate an AWS keypair with valid permissions on demand. After creating these dynamic secrets, Vault will also automatically revoke them after the lease is up.
Data Encryption: Vault can encrypt and decrypt data without storing it. This allows security teams to define encryption parameters and developers to store encrypted data in a location such as SQL without having to design their own encryption methods.
Leasing and Renewal: All secrets in Vault have a lease associated with them. At the end of the lease, Vault will automatically revoke that secret. Clients are able to renew leases via built-in renew APIs.
Revocation: Vault has built-in support for secret revocation. Vault can revoke not only single secrets, but a tree of secrets, for example all secrets read by a specific user, or all secrets of a particular type. Revocation assists in key rolling as well as locking down systems in the case of an intrusion.
At a bare minimum, Vault can be used for the storage of any secrets. For example, Vault would be a fantastic way to store sensitive environment variables, database credentials, API keys, etc.
Compare this with the current way to store these which might be plaintext in files, configuration management, a database, etc. It would be much safer to query these using vault read or the API. This protects the plaintext version of these secrets as well as records access in the Vault audit log.
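For example, with the vault Ruby gem an application could read and write such a secret programmatically. This is a minimal sketch assuming a KV version 1 secrets engine mounted at `secret/`; the path and credentials shown are invented.

```ruby
require "vault"

Vault.address = ENV.fetch("VAULT_ADDR", "http://127.0.0.1:8200")
Vault.token   = ENV["VAULT_TOKEN"]

# Store database credentials in Vault instead of a plaintext config file.
Vault.logical.write("secret/opex/database", username: "peatio", password: "s3cr3t")

# Read them back at boot time; every access is recorded in the audit log.
db_secret = Vault.logical.read("secret/opex/database")
puts db_secret.data[:username]
```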
In addition to being able to store secrets, Vault can be used to encrypt/decrypt data that is stored elsewhere. The primary use of this is to allow applications to encrypt their data while still storing it in the primary data store.
The benefit of this is that developers do not need to worry about how to properly encrypt data. The responsibility of encryption is on Vault and the security team managing it, and developers just encrypt/decrypt data as needed.
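A sketch of this encryption-as-a-service pattern with Vault's transit secrets engine and the same Ruby gem, assuming the transit engine is enabled and a key named `opex-key` has already been created:

```ruby
require "vault"
require "base64"

Vault.address = ENV.fetch("VAULT_ADDR", "http://127.0.0.1:8200")
Vault.token   = ENV["VAULT_TOKEN"]

# Vault encrypts the value but never stores it; only the ciphertext goes to our database.
plaintext  = Base64.strict_encode64("4111 1111 1111 1111")
ciphertext = Vault.logical.write("transit/encrypt/opex-key", plaintext: plaintext)
                  .data[:ciphertext]

# Decrypt later: Vault returns the base64-encoded plaintext.
decoded = Vault.logical.write("transit/decrypt/opex-key", ciphertext: ciphertext)
               .data[:plaintext]
puts Base64.strict_decode64(decoded)
```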
RabbitMQ is a messaging broker that supports multiple messaging protocols. It is lightweight and easy to deploy. RabbitMQ runs on many operating systems and cloud environments, and provides a wide range of developer tools for most popular languages. It ships in a state where it can be used straight away in simple cases such as development and QA environments - just start the server and it's ready to go.
Messaging brokers receive messages from publishers (applications that publish them, also known as producers) and route them to consumers (applications that process them).
Since it is a network protocol, the publishers, consumers and the broker can all reside on different machines.
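A minimal publisher/consumer pair written with the bunny Ruby gem illustrates this flow. It is a hypothetical sketch: the queue name and payload are invented and do not mirror Peatio's actual AMQP topology.

```ruby
require "bunny"
require "json"

conn = Bunny.new(ENV.fetch("AMQP_URL", "amqp://guest:guest@localhost:5672"))
conn.start
channel = conn.create_channel
queue   = channel.queue("demo.trade_events", durable: true)

# Consumer: a worker processes messages as the broker routes them in.
queue.subscribe do |_delivery_info, _properties, body|
  puts "processing #{JSON.parse(body)}"
end

# Publisher: another component sends a message to the same queue.
channel.default_exchange.publish(
  { market: "btcusd", price: "100.5", amount: "0.3" }.to_json,
  routing_key: queue.name
)

sleep 1 # give the consumer a moment to handle the message before closing
conn.close
```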
Feature | Description
---|---
Asynchronous Messaging | Supports multiple messaging protocols, message queuing, delivery acknowledgement, flexible routing to queues, multiple exchange types. |
Developer Experience | Deploy with BOSH, Chef, Docker and Puppet. Develop cross-language messaging with favorite programming languages such as: Java, .NET, PHP, Python, JavaScript, Ruby, Go and many others. |
Distributed Deployment | Deploy as clusters for high availability and throughput; federate across multiple availability zones and regions. |
Enterprise & Cloud Ready | Pluggable authentication, authorization, supports TLS and LDAP. Lightweight and easy to deploy in public and private clouds. |
Tools & Plugins | Diverse array of tools and plugins supporting continuous integration, operational metrics, and integration to other enterprise systems. Flexible plug-in approach for extending RabbitMQ functionality. |
Management & Monitoring | HTTP-API, command line tool, and UI for managing and monitoring RabbitMQ. |
Every crypto exchange needs a highly secure wallet solution, and that is not the easiest thing to build in a cloud environment. However, BitGo comes to the rescue, offering maximum-security multisignature wallets as a service.
BitGo offers a very secure wallet solution which remains approachable for the average consumer. Convenience and security often do not mix well, but BitGo shows it can be done. It does not provide users with much anonymity, but that is not something most people are looking for. There are also no extra features to take advantage of, but that is not a deal breaker either.
Its technology solves the most difficult security, compliance and custodial problems associated with blockchain-based currencies, enabling the integration of digital currency into the global financial system.
For any Ethereum users there is an Ethereum-based wallet which uses the same Bitgo technology. In fact, it is built by most of the engineers who are working on the Bitcoin wallet. This project is known as Ether.li, and features all of the same aspects one can find in the Bitgo Bitcoin wallet. It provides multisignature solutions for Ethereum users, which will be appreciated by some users. Bringing this functionality to different cryptocurrencies is a smart strategy by Bitgo.
Using the web version means private keys are not generated in a secure manner, even though users can protect them with a password. The mobile apps work as one would expect, without any unnecessary bells and whistles. Advanced users not looking for multisignature support may find Bitgo a bit boring, but for novice users, it is a very powerful solution. There is no reason not to give Bitgo wallet a try.
Supported currencies:
Envoy is an L7 proxy and communication bus designed for large modern service oriented architectures.
Envoy uses a single process with multiple threads architecture. A single master thread controls various sporadic coordination tasks while some number of worker threads perform listening, filtering, and forwarding.
Envoy relies on Kubernetes for scaling, high availability, and persistence. All Envoy configuration is stored directly in Kubernetes as a configmap; there is no database. Envoy Proxy is packaged as a single container. By default, Envoy is deployed as a Kubernetes deployment and can be scaled and managed like any other Kubernetes deployment.
OpenWare OPEX infrastructure is a diverse and expansive topic. To work as a fail-safe, zero-downtime platform, it relies on a cloud platform proven by years of use and a set of carefully chosen DevOps tools. However, it's hard to keep all these things under control and in harmony. Thus, we've created one tool to rule them all and help people get up to the clouds: Kite.
Kite is a CLI for scaffolding and managing DevOps environments and for enhancing different tools by fusing them into one workflow.
Kite follows Ruby on Rails guidelines in terms of environment handling and file scaffolding.
Main Kite features are:
A diagram depicting the Kite project structure
Every Kite deployment consists of the following components:
Workflow
Typical Kite workflow is as follows:
Every cloud deployment needs a virtualization provider and our team has found Docker to be the best fit.
The Docker platform is the only container platform to build, secure and manage the widest array of applications from development to production both on premises and in the cloud.
Some of its key features are:
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.
Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.
Key points of containers:
Lightweight
Docker containers running on a single machine share that machine's operating system kernel; they start instantly and use less compute and RAM. Images are constructed from filesystem layers and share common files. This minimizes disk usage, and image downloads are much faster.
Standard
Docker containers are based on open standards and run on all major Linux distributions, Microsoft Windows, and on any infrastructure including VMs, bare metal and in the cloud.
Secure
Docker containers isolate applications from one another and from the underlying infrastructure. Docker provides the strongest default isolation to limit app issues to a single container instead of the entire machine.
Containers and virtual machines have similar resource isolation and allocation benefits, but function differently because containers virtualize the operating system instead of hardware. Containers are more portable and efficient.
Containers
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), and start almost instantly.
Virtual Machines
Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries - taking up tens of GBs. VMs can also be slow to boot.
Docker is a breakthrough technology allowing for complex cloud deployments which are fast, reliable and immutable. Because of this Docker is a base for more complex cloud technologies, including Kubernetes and Docker Compose.
Every cloud platform needs a reliable infrastructure provider. Our team did thorough research on the topic and chose Google Cloud Platform (GCP) as the best candidate because of its speed, competitive pricing and progressive design.
GCP consists of a set of physical assets, such as computers and hard disk drives, and virtual resources, such as virtual machines (VMs), that are contained in Google's data centers around the globe. Each data center location is in a global region. Regions include Central US, Western Europe, and East Asia. Each region is a collection of zones, which are isolated from each other within the region. Each zone is identified by a name that combines a letter identifier with the name of the region. For example, zone `a` in the East Asia region is named `asia-east1-a`.
This distribution of resources provides several benefits, including redundancy in case of failure and reduced latency by locating resources closer to clients. This distribution also introduces some rules about how resources can be used together.
In cloud computing, what you might be used to thinking of as software and hardware products, become services. These services provide access to the underlying resources. The list of available GCP services is long, and it keeps growing. When you develop your website or application on GCP, you mix and match these services into combinations that provide the infrastructure you need, and then add your code to enable the scenarios you want to build.
Some resources can be accessed by any other resource, across regions and zones. These global resources include preconfigured disk images, disk snapshots, and networks. Some resources can be accessed only by resources that are located in the same region. These regional resources include static external IP addresses. Other resources can be accessed only by resources that are located in the same zone. These zonal resources include VM instances, their types, and disks.
The following diagram shows the relationship between global scope, regions and zones, and some of their resources:
Any GCP resources that you allocate and use must belong to a project. You can think of a project as the organizing entity for what you're building. A project is made up of the settings, permissions, and other metadata that describe your applications. Resources within a single project can work together easily, for example by communicating through an internal network, subject to the regions-and-zones rules. The resources that each project contains remain separate across project boundaries; you can only interconnect them through an external network connection.
Each GCP project has:
The Google Cloud Platform Console provides a web-based, graphical user interface that you can use to manage your GCP projects and resources. When you use the GCP Console, you create a new project, or choose an existing project, and use the resources that you create in the context of that project. You can create multiple projects, so you can use projects to separate your work in whatever way makes sense for you. For example, you might start a new project if you want to make sure only certain team members can access the resources in that project, while all team members can continue to access resources in another project.
If you prefer to work in a terminal window, the Google Cloud SDK provides the `gcloud` command-line tool, which gives you access to the commands you need. The `gcloud` tool can be used to manage both your development workflow and your GCP resources. See the gcloud reference for the complete list of available commands.
GCP also provides Cloud Shell, a browser-based, interactive shell environment for GCP. You can access Cloud Shell from the GCP console.
Cloud Shell provides:
GCP gives you options for computing and hosting. You can choose to:
You can imagine a spectrum where, at one end, you have most of the responsibilities for resource management and, at the other end, Google has most of those responsibilities:
With container-based computing, you can focus on your application code, instead of on deployments and integration into hosting environments. Google Kubernetes Engine, GCP's containers as a service (CaaS) offering, is built on the open source Kubernetes system, which gives you the flexibility of on-premises or hybrid clouds, in addition to GCP's public cloud infrastructure.
When you build with Kubernetes Engine, you can:
GCP's unmanaged compute service is Google Compute Engine. You can think of Compute Engine as providing an infrastructure as a service (IaaS), because the system provides a robust computing infrastructure, but you must choose and configure the platform components that you want to use. With Compute Engine, it's your responsibility to configure, administer, and monitor the systems. Google will ensure that resources are available, reliable, and ready for you to use, but it's up to you to provision and manage them. The advantage, here, is that you have complete control of the systems and unlimited flexibility.
When you build on Compute Engine, you can:
Whatever your application, you'll probably need to store some data. GCP provides a variety of storage services, including:
Google Cloud Platform is a progressive Infrastructure as a Service provider with a diverse product lineup and an innovative approach to cloud computing. Our team's experience shows that over the years GCP keeps getting better, just like a good wine, and every one of our DevOps engineers would choose GCP over any other cloud IaaS.
For most of the initial IaaS interaction (resource creation and provisioning), Kite relies on Terraform.
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing and popular service providers as well as custom in-house solutions.
The key features of Terraform are:
Infrastructure as Code
Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.
Execution Plans
Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.
Resource Graph
Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
Change Automation
Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.
Every VM-based deployment greatly benefits from having standardized, stored-as-code, pre-baked machine images which can be deployed in a matter of seconds. For this purpose Kite relies on Packer.
Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer does not replace configuration management like Chef or Puppet. In fact, when building images, Packer is able to use tools like Chef or Puppet to install software onto the image.
A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines. Machine image formats change for each platform. Some examples include AMIs for EC2, VMDK/VMX files for VMware, OVF exports for VirtualBox, etc.
Packer plays a major role in keeping deployments stable, persistent and predictable. The ability to create images that fit any platform allows companies to manage multi-provider environments with ease when it comes to VM management.
Drone is a self-service Continuous Delivery platform for busy development teams.
Drone searches for a configuration file `drone.yml` in the repository that is authorized within your Drone server. Example of a `drone.yml`:

```yaml
kind: pipeline
name: default

steps:
- name: frontend
  image: node
  commands:
  - npm install
  - npm test

- name: backend
  image: golang
  commands:
  - go build
  - go test
```
Pipelines are configured with a simple, easy‑to‑read file that you commit to your git repository.
Each Pipeline step is executed inside an isolated Docker container that is automatically downloaded at runtime.
Plugins are docker containers that encapsulate commands, and can be shared and re-used in your pipeline. Examples of plugins include sending Slack notifications, building and publishing Docker images, and uploading artifacts to S3.
Example Slack plugin:
```yaml
- name: notify
  image: plugins/slack
  settings:
    room: general
    webhook: https://...
```
- Streamlined development process
- Smaller gaps between development, QA and deployment
- Optimized build process
- Reduced probability of shipping broken code
- Faster code review process
Example review pipeline
Example release candidate pipeline
Production-Grade Container Orchestration
Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.
It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team.
Whether testing locally or running a global enterprise, Kubernetes flexibility grows with you to deliver your applications consistently and easily no matter how complex your need is.
Kubernetes is open source giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.
Looking for reasons why you should be using containers?
The Old Way to deploy applications was to install the applications on a host using the operating-system package manager. This had the disadvantage of entangling the applications’ executables, configuration, libraries, and lifecycles with each other and with the host OS. One could build immutable virtual-machine images in order to achieve predictable rollouts and rollbacks, but VMs are heavyweight and non-portable.
The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.
Because containers are small and fast, one application can be packed in each container image. This one-to-one application-to-image relationship unlocks the full benefits of containers. With containers, immutable container images can be created at build/release time rather than deployment time, since each application doesn’t need to be composed with the rest of the application stack, nor married to the production infrastructure environment. Generating container images at build/release time enables a consistent environment to be carried from development into production. Similarly, containers are vastly more transparent than VMs, which facilitates monitoring and management. This is especially true when the containers’ process lifecycles are managed by the infrastructure rather than hidden by a process supervisor inside the container. Finally, with a single application per container, managing the containers becomes tantamount to managing deployment of the application.
Summary of container benefits:
- **Agile application creation and deployment**: Increased ease and efficiency of container image creation compared to VM image use.
- **Continuous development, integration, and deployment**: Provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
- **Dev and Ops separation of concerns**: Create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
- **Observability**: Not only surfaces OS-level information and metrics, but also application health and other signals.
- **Environmental consistency across development, testing, and production**: Runs the same on a laptop as it does in the cloud.
- **Cloud and OS distribution portability**: Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else.
- **Application-centric management**: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
- **Loosely coupled, distributed, elastic, liberated micro-services**: Applications are broken into smaller, independent pieces and can be deployed and managed dynamically – not a fat monolithic stack running on one big single-purpose machine.
- **Resource isolation**: Predictable application performance.
- **Resource utilization**: High efficiency and density.
For more information about Kubernetes, visit the Kubernetes official documentation.
The package manager for Kubernetes
Helm is the best way to find, share, and use software built for Kubernetes. Helm makes working with Kubernetes much faster and more flexible. Helm helps you manage Kubernetes applications: Helm charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish, so start using Helm and stop the copy-and-paste madness. Helm already has a huge library of ready-made, highly flexible charts that can be deployed to your Kubernetes cluster in no time.
The latest version of Helm is maintained by the CNCF - in collaboration with Microsoft, Google, Bitnami and the Helm contributors community.
Helm is a tool for managing Kubernetes packages called charts. Helm can do the following:
For Helm, there are three important concepts:
Helm has two major components:
In a nutshell, the client is responsible for managing charts, and the server is responsible for managing releases.