Kyvos builds an OLAP-based BI acceleration layer directly on AWS, consisting of two main components: BI Servers and Query Engines. The Kyvos BI Server is deployed on a standalone EC2 instance, while the query engines are deployed in an Auto Scaling group whose capacity can be configured to grow or shrink with query load.
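As an illustration of this split, the query-engine tier could be described by an Auto Scaling group definition like the sketch below. The group name, launch template, subnets, and capacity values are hypothetical examples, not Kyvos defaults.

```python
# Sketch: build a CreateAutoScalingGroup request body for the query-engine
# tier. All names, subnets, and sizes are hypothetical examples.

def query_engine_asg_request(min_size: int, max_size: int, desired: int) -> dict:
    """Return the parameters you would pass to the EC2 Auto Scaling
    CreateAutoScalingGroup API (e.g. via boto3 or the AWS CLI)."""
    return {
        "AutoScalingGroupName": "kyvos-query-engines",      # hypothetical name
        "LaunchTemplate": {
            "LaunchTemplateName": "kyvos-query-engine-lt",  # hypothetical
            "Version": "$Latest",
        },
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": desired,
        "VPCZoneIdentifier": "subnet-aaaa,subnet-bbbb",     # placeholder subnets
    }

params = query_engine_asg_request(min_size=2, max_size=10, desired=4)
print(params["MinSize"], params["MaxSize"])
```

Keeping the BI Server on a standalone instance while only the query engines live in the Auto Scaling group means the stateless query tier can scale without disturbing the server that coordinates it.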
Once the cubes are built, they are persisted in S3. To achieve high performance, Kyvos replicates the cuboids and their metadata on shared storage, which delivers much higher performance than querying cubes directly on S3.
The auto-scaling feature enables Kyvos to scale up and down on AWS during cube builds using the Amazon EMR service.
- Kyvos reads data from S3 and processes it using the EMR cluster. It launches a series of MapReduce or Spark jobs for cube building.
- At the time of Kyvos deployment, EMR is configured so that the cluster can scale in or scale out, using only the resources that are needed.
- This ensures that only the required number of instances run in the on-demand EMR cluster during a cube build.
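One way to express "use only the resources that are needed" on EMR is a managed scaling policy that keeps the cube-build cluster between a floor and a ceiling. The sketch below builds the payload accepted by the EMR PutManagedScalingPolicy API; the instance counts are illustrative assumptions, not values Kyvos prescribes.

```python
# Sketch: compute-limit payload for EMR managed scaling, so the cube-build
# cluster scales between a minimum and maximum. Values are illustrative.

def managed_scaling_policy(min_units: int, max_units: int) -> dict:
    """Return the ManagedScalingPolicy structure accepted by the EMR
    PutManagedScalingPolicy API (e.g. via boto3's emr client)."""
    if min_units < 1 or max_units < min_units:
        raise ValueError("require 1 <= min_units <= max_units")
    return {
        "ComputeLimits": {
            "UnitType": "Instances",            # scale by instance count
            "MinimumCapacityUnits": min_units,  # floor during idle periods
            "MaximumCapacityUnits": max_units,  # ceiling during heavy builds
        }
    }

policy = managed_scaling_policy(min_units=2, max_units=20)
print(policy["ComputeLimits"])
```

Because the ceiling bounds spend and the floor keeps a small core warm, a build burst can fan out to many nodes and then fall back once the MapReduce or Spark jobs finish.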
Kyvos supports query elasticity through scheduled scaling. Based on expected load, you can specify the days and times when resources should scale up or down, which helps reduce costs during lean periods.
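A minimal sketch of such a schedule, assuming a hypothetical group name and cron times: the function below builds the parameter sets for the EC2 Auto Scaling PutScheduledUpdateGroupAction API, scaling the query engines up for business hours and back down in the evening.

```python
# Sketch: scheduled scale-up/scale-down actions for the query-engine
# Auto Scaling group. Group name, times, and sizes are hypothetical.

def scheduled_actions(group: str, peak_size: int, lean_size: int) -> list:
    """Return parameter sets for the EC2 Auto Scaling
    PutScheduledUpdateGroupAction API: scale up on weekday mornings,
    scale back down in the evening (cron fields are in UTC)."""
    return [
        {
            "AutoScalingGroupName": group,
            "ScheduledActionName": "scale-up-business-hours",
            "Recurrence": "0 8 * * MON-FRI",   # 08:00 UTC, weekdays
            "DesiredCapacity": peak_size,
        },
        {
            "AutoScalingGroupName": group,
            "ScheduledActionName": "scale-down-off-hours",
            "Recurrence": "0 20 * * MON-FRI",  # 20:00 UTC, weekdays
            "DesiredCapacity": lean_size,
        },
    ]

for action in scheduled_actions("kyvos-query-engines", peak_size=8, lean_size=2):
    print(action["ScheduledActionName"], action["Recurrence"])
```

Pinning capacity to the clock rather than to reactive metrics fits the "expected loads" model described above: you pay for the larger fleet only during the windows when you know queries will arrive.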
Kyvos' modern architecture enables deep integration with AWS, as summarized below.
The cluster runs in one of these modes: Running, Suspended, or Default.
In each case, starting or stopping the cluster creates or terminates the EMR cluster, creates a default schedule if applicable, and sends the appropriate notifications, such as cluster start or cluster down.
- Separate storage and computation layers
- Elastic architecture for optimal utilization of resources
- Amazon S3 leveraged for cube storage
- Elastic cube building using Amazon EMR service
- High-performance, elastic querying using Auto Scaling group