We consume the platform's API to feed trading data into the trading widget, which uses only the TradingView frontend library; tradingview.com does not interact with the platform at all.
Every system component is described in depth in the architecture documentation; here's a quick overview of each of them:
Our deployment platform of choice is Kubernetes, with clusters deployed across multiple hosts in multiple availability zones.
All microservices sit behind an API gateway (Envoy), and all incoming API calls are authorized by the Barong AuthZ service, which injects a JWT into every successful request.
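Because the gateway has already had the token verified by Barong, a downstream microservice only needs to read the claims out of the injected JWT. A minimal stdlib-only sketch of that step (the `uid` and `role` claim names here are illustrative, not the platform's documented claim set):

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims segment of a JWT without re-verifying the signature.

    Safe only behind the gateway, where Barong has already authorized
    the request; never skip verification on an untrusted edge.
    """
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWT encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build an example token carrying hypothetical claims.
header = _b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
payload = _b64url(json.dumps({"uid": "ID123", "role": "member"}).encode())
token = f"{header}.{payload}.signature"

claims = decode_jwt_claims(token)
```

A real service would also check standard claims such as `exp` before trusting the token's contents.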
All inter-microservice messaging is done via Event API, an event-based protocol built on RabbitMQ and AMQP with signed JWT payloads. Event API lets components announce new events (e.g. deposit creation) to all consumers listening for them.
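The "signed JWT payload" part can be sketched with the standard library alone. This example uses HS256 for brevity (no external crypto dependency); the event name and field layout are illustrative, not the platform's actual Event API schema:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_event(payload: dict, secret: bytes) -> str:
    """Wrap an event payload in a signed JWT (HS256 here for illustration)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    mac = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(mac)}"

# Hypothetical deposit-creation event.
event = {"event": "deposit.created", "currency": "btc", "amount": "0.5"}
token = sign_event(event, b"demo-secret")
```

A producer would publish this token to a RabbitMQ exchange, and each consumer would verify the signature before acting on the event, so a forged message is rejected even if it reaches the queue.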
For production setups we use Google Cloud SQL or Amazon RDS database instances replicated across multiple availability zones.
Alternatively, you can use any database deployment you'd like, either cloud-based or deployed inside the cluster.
The system is scalable since we're using Kubernetes clusters, which can grow to thousands of nodes running hundreds of thousands of containers.
JWT payloads are signed with RS256 keys, database encryption keys are provided by the cloud KMS, and data at rest is encrypted using GCS/S3-provided encryption keys.
Customer-uploaded KYC documents are stored inside encrypted GCS/S3 buckets.
Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to health checks, and doesn't send them traffic until they are ready to serve.
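Those decisions are driven by liveness and readiness probes declared on each container. A minimal sketch of such a probe configuration (the port and paths are placeholders, not the platform's actual endpoints):

```yaml
# Liveness: restart the container when this endpoint starts failing.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
# Readiness: withhold traffic until this endpoint succeeds.
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
```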
We're using Prometheus, Grafana and Alertmanager to monitor the deployment.
Each of these components is part of the Cloud Native Computing Foundation (CNCF) ecosystem, designed with flexibility, extensibility and cloud workflows in mind.
We're using the ELK stack for logging across our deployments.
Each cluster node runs a Fluentd Pod that gathers all Docker container and Kubernetes component logs and pushes them to an Elasticsearch deployment; the logs are then visualized with Kibana.
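One Fluentd pipeline implementing that flow might look like the sketch below. The log path, tag, and Elasticsearch host are placeholders, and the `elasticsearch` output requires the fluent-plugin-elasticsearch plugin:

```
# Tail Docker container logs written on the node.
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

# Ship everything to the in-cluster Elasticsearch service.
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc
  port 9200
  logstash_format true
</match>
```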
Since all logging and monitoring components are deployed on the same cluster, there are no additional operating costs beyond the infrastructure costs.
The system tracks the user's device and browser via the user agent. If the user has enabled 2FA on their account (which is recommended), they are required to enter the 2FA code regardless of the device, which adds another authentication factor.
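The 2FA codes used by authenticator apps generally follow RFC 6238 (TOTP). A stdlib-only sketch of how such a code is derived, verified against the RFC's own test vector (this is the generic algorithm, not the platform's specific implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if timestamp is None else timestamp) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the MAC.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59 -> "94287082".
code = totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59, digits=8)
```

Since the code depends only on a shared secret and the clock, the server can recompute and compare it without any per-login state.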
We're using deployment tools built from scratch specifically for platform deployment, along with Terraform, Packer and Helm.
Our deployment system provisions the cloud infrastructure, deploys all dependencies to the cluster, and installs all the applications and tools required for a full-fledged platform.
We're using Drone as our preferred CI system; all of our components have comprehensive CI/CD pipelines with tests included.
We have a load testing guide which covers the whole process of benchmarking the deployed platform: