25 Apr, 2023 / Case Study / By: Aswathy Raj

To fulfill the client’s requirement of developing an AI-powered fleet management system, we received data from approximately 5,000 GPS devices located in regions such as Kenya, Rwanda, and Mumbai. Initially, there were fewer than 1,000 IoT devices, managed through a Windows .NET application running on AWS. We scaled the Windows applications to ensure high availability across the various application components. The back-end endpoints, which receive data from the IoT devices, were autoscaled using an AWS Auto Scaling group based on the application’s network throughput, CPU, and memory usage. The frontend dashboard was hosted on Windows IIS web servers load-balanced behind an ALB, with three IIS servers distributed across different availability zones for high availability. As the device count exceeded 1,000, we transitioned from .NET to the Angular and Node.js frameworks, replacing the data-receiving endpoints with wrappers written in Node.js and deploying them on AWS ECS.
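As a sketch, a target-tracking autoscaling policy on CPU for the ingest fleet could look like the following CloudFormation fragment. The resource names here are illustrative, not taken from the project, and memory-based scaling would additionally require a custom metric published by the CloudWatch agent:

```yaml
# Hypothetical CloudFormation fragment: target-tracking autoscaling on CPU.
# "IngestAutoScalingGroup" is an illustrative name, not from the actual stack.
IngestCpuScalingPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref IngestAutoScalingGroup
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      PredefinedMetricSpecification:
        # ASGAverageNetworkIn / ASGAverageNetworkOut are also predefined types
        PredefinedMetricType: ASGAverageCPUUtilization
      TargetValue: 60.0   # keep average CPU around 60%
```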

The frontend sites were redesigned and hosted on S3, with CloudFront as a CDN to cache the website’s content worldwide so the site could be delivered faster and more efficiently. We implemented other components such as S3, AWS Elasticsearch, Kafka, Kinesis, Lambda, and CloudWatch to support the platform, which was capable of handling 5,000 GPS devices sending data at 3-second intervals. The stack ran successfully on AWS for over three years, until our client requested a migration to a different cloud provider, prompting us to make the platform cloud-independent. Since the platform relied heavily on AWS services, we initiated a POC to decouple it: replacing ECS with Kubernetes, S3 with MinIO, Lambda with Docker services, AWS Elasticsearch with self-hosted Elasticsearch, and managed databases with self-hosted ones, among other changes. The POC was successful, and we were able to replace all AWS components with open-source alternatives.

We managed our entire codebase with a self-hosted GitLab instance and used Jenkins and GitLab CI for CI/CD, depending on the situation. Our Kubernetes cluster was provisioned with compute-optimized instance types to handle heavy computations, and we ran three separate Elasticsearch clusters configured with Helm charts. We switched from S3 to MinIO and moved our Lambda functions to Docker services deployed on Kubernetes. External traffic was routed through the Traefik ingress controller, and we used Graylog and Prometheus in place of CloudWatch for monitoring, with Kibana to keep tabs on the Elasticsearch clusters. To keep costs down, we ran the Kubernetes cluster on five master nodes and twenty worker nodes, using a mix of general-purpose and compute-optimized instance types. We used the nodeSelector field in the Helm charts to ensure that services ran on the correct instance type, and Nagios for alerting. Through the POC, we successfully replaced all AWS components with open-source alternatives, making our infrastructure cloud-independent.
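The nodeSelector approach described above can be sketched as a pod-spec fragment of the kind a Helm chart would render; the service name, image, and node label below are hypothetical:

```yaml
# Hypothetical Deployment fragment: pin compute-heavy services to
# compute-optimized worker nodes via a node label (all names illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gps-ingest
spec:
  replicas: 3
  selector:
    matchLabels:
      app: gps-ingest
  template:
    metadata:
      labels:
        app: gps-ingest
    spec:
      nodeSelector:
        workload-type: compute-optimized   # label applied to the compute node group
      containers:
        - name: gps-ingest
          image: registry.example.com/gps-ingest:latest
```

In a Helm chart the nodeSelector map would typically be templated from values.yaml, so each service can target the right node group without editing the template.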


  1. The migration of the Elasticsearch clusters posed a risk of data loss, which could lead to errors in the frontend dashboard and incorrect vehicle location details in the daily run report. This was particularly concerning as we were dealing with a daily influx of 100 GB of GPS data.


To prevent data loss during the migration of the Elasticsearch clusters, we took the precaution of configuring the new Elasticsearch cluster as a secondary endpoint. Concurrently, we reindexed the historical data from the old Elasticsearch clusters, allowing us to process incoming data and reindex old data simultaneously.

With this approach, we migrated the data from AWS Elasticsearch to the self-hosted Elasticsearch with no data loss, ensuring the accuracy of the vehicle daily run report and preventing false location details from appearing on the frontend dashboard.
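One way to implement the reindexing step is Elasticsearch’s reindex-from-remote API. The hostnames and index name below are illustrative, and the new cluster must whitelist the old one via the `reindex.remote.whitelist` setting in `elasticsearch.yml`:

```
# Hypothetical sketch: pull an index from the old cluster into the new one.
curl -X POST "https://new-es.internal:9200/_reindex?wait_for_completion=false" \
  -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": { "host": "https://old-es.example.com:9200" },
    "index": "gps-telemetry"
  },
  "dest": { "index": "gps-telemetry" }
}'
```

Running the call with `wait_for_completion=false` returns a task ID, so a long-running reindex of historical data can proceed in the background while the new cluster keeps absorbing live writes.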

  2. To optimize costs across the architecture, we initially separated the data-related services into private subnets and the frontend into a public subnet. However, we faced significant NAT gateway data-processing charges, and our AWS Lambda functions were configured to log to CloudWatch, which added to the expense.


When we migrated to Kubernetes, we eliminated this cost by leveraging service discovery and CoreDNS for internal traffic distribution within the cluster. Instead of AWS Lambda we used Docker services, and we replaced CloudWatch with Graylog; both were configured to communicate within the cluster, resulting in significant cost savings. Overall, we reduced the monthly running cost by $5,000.
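Internal traffic of this kind is typically exposed through a ClusterIP Service, which CoreDNS resolves inside the cluster so no NAT gateway is involved; the names and namespace below are hypothetical:

```yaml
# Hypothetical ClusterIP Service: in-cluster endpoint for Graylog's GELF input,
# reachable via CoreDNS at graylog.logging.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: graylog
  namespace: logging
spec:
  type: ClusterIP
  selector:
    app: graylog
  ports:
    - name: gelf-tcp
      port: 12201
      targetPort: 12201
```

Application pods then ship logs to the stable DNS name rather than an external endpoint, keeping all log traffic on the cluster network.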

  1. Cloud Platform
    1. AWS
  2. Source Code Management
    1. Bitbucket
    2. GitLab
  3. Continuous Integration & Deployment
    1. CodeBuild
    2. GitLab CI
    3. Jenkins
  4. Databases
    1. MS SQL
    2. MySQL
    3. Elasticsearch
    4. PostgreSQL
  5. Infra Provisioning Tools & Configuration Management
    1. Terraform
    2. AWS CLI
    3. Rancher
    4. Helm Charts
  6. Containerization & Deployment
    1. Docker
    2. ECS – EC2
    3. Kubernetes
  7. Message Queuing
    1. RabbitMQ
    2. Kafka
    3. VerneMQ
  8. Authentication
    1. Keycloak
  9. Logging, Monitoring & Alerting
    1. Kibana
    2. Nagios
    3. Graylog
    4. Prometheus
    5. AWS CloudWatch
  10. Load Balancing
    1. AWS ALB
  11. Ingress Controller
    1. Traefik
  12. Content Delivery Network
    1. AWS CloudFront
  13. Web Hosting
    1. AWS S3
    2. AWS Lightsail
    3. Windows IIS
  14. Storage
    1. S3
    2. MinIO
    3. EBS
  15. Collaboration and Ticketing
    1. Mantis
    2. Skype