We built a solution for a US-based digital healthcare company that provides 24/7 telehealth treatment, triage, and navigation through its digital front door platform. In addition to offering robust clinical assistance and educational resources for patients, the solution was dedicated to delivering empathetic care that prioritizes the needs of the patient. Due to the sensitive nature of PHI data, our client prioritized security and confidentiality alongside HIPAA-compliant platform performance and scalability.

To meet these requirements, we chose AWS as the cloud provider and N. Virginia (us-east-1) as the region. From the initial stages, we designed the project's base architecture with a focus on strong data security, and every layer of the architecture was equipped with its own security controls.

To ensure high availability, Terraform templates with Terragrunt were used to provision the VPC infrastructure across multiple Availability Zones. Public subnets held the services that had to be exposed to the outside world, and two types of private subnets spanned the Availability Zones. Network isolation kept the database reachable only by the components that require access to it: resources that needed internet access were provisioned in a private subnet attached to a NAT gateway, with incoming connections from the outside network blocked, while data services such as the RDS instances and Redis were provisioned in a second private subnet accessible only from within the VPC. Security group outbound rules were locked down to prevent potential data leaks. To control inbound and outbound traffic at the subnet level, network ACLs acted as firewalls, with a network-ACL-protected subnet in each Availability Zone; these ACLs provide individually customizable rules that add a further layer of defense.
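The subnet layout described above can be sketched in Terraform roughly as follows. This is a minimal illustration, not the project's actual templates; all names, CIDRs, ports, and the Availability Zone are assumptions.

```hcl
# Illustrative sketch only; CIDRs, AZ, and names are assumptions.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Public subnet for internet-facing components.
resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.0.0/24"
  availability_zone = "us-east-1a"
}

# Private app subnet: outbound internet via NAT, no inbound from outside.
resource "aws_subnet" "private_app" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

# Private data subnet: reachable only from within the VPC (RDS, Redis).
resource "aws_subnet" "private_data" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# Subnet-level firewall: allow database traffic only from inside the VPC.
resource "aws_network_acl" "data" {
  vpc_id     = aws_vpc.main.id
  subnet_ids = [aws_subnet.private_data.id]

  ingress {
    rule_no    = 100
    protocol   = "tcp"
    action     = "allow"
    cidr_block = aws_vpc.main.cidr_block
    from_port  = 5432
    to_port    = 5432
  }

  egress {
    rule_no    = 100
    protocol   = "tcp"
    action     = "allow"
    cidr_block = aws_vpc.main.cidr_block
    from_port  = 1024
    to_port    = 65535
  }
}
```

In a multi-AZ setup, the subnet and NACL resources would be repeated (or generated with `for_each`) per Availability Zone.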
Encryption at rest was a priority for our client: the volumes underpinning the databases, along with their snapshots, were encrypted using customer-managed KMS keys wherever sensitive data was involved. Sensitive PHI data was encrypted across S3, the databases, passwords and secret keys, EBS volumes, and so on. Backend services ran on serverless technologies such as AWS Lambda, and the Serverless Framework let us scale the architecture in a cost-optimized way.
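A hedged Terraform sketch of encryption at rest with a customer-managed KMS key might look like the following; the key description, bucket name, and RDS sizing are illustrative assumptions.

```hcl
# Customer-managed key for PHI data stores (illustrative).
resource "aws_kms_key" "phi" {
  description         = "CMK for PHI databases and snapshots"
  enable_key_rotation = true
}

# RDS instance with encrypted storage (and therefore encrypted snapshots).
resource "aws_db_instance" "postgres" {
  identifier                  = "phi-db"          # hypothetical name
  engine                      = "postgres"
  instance_class              = "db.r5.large"
  allocated_storage           = 100
  username                    = "app"
  manage_master_user_password = true
  storage_encrypted           = true
  kms_key_id                  = aws_kms_key.phi.arn
}

# S3 bucket with default SSE-KMS encryption under the same CMK.
resource "aws_s3_bucket" "phi" {
  bucket = "example-phi-bucket"                   # hypothetical name
}

resource "aws_s3_bucket_server_side_encryption_configuration" "phi" {
  bucket = aws_s3_bucket.phi.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.phi.arn
    }
  }
}
```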
Lambda is compliant with various security standards such as SOC 1, SOC 2, SOC 3, PCI DSS, and HIPAA. The project's source code was managed in AWS CodeCommit, which automatically encrypts files in transit and at rest and integrates with AWS IAM for user-specific repository access. To further secure the CodeCommit repository, access was restricted through a VPN to users within the organization. For development, the Cloud9 IDE was used, with individual logins for developers to prevent unauthorized access or data leaks. SonarQube was integrated into the code pipeline to enforce code quality and application security, with quality-gate status changes emailed to developers for code review. Build jobs in the QA environment were integrated with functionality-based automation testing using Selenium. Access to Cloud9 was provided through a VDI instance reached over a VPN tunnel, and multi-factor authentication was enforced for AWS user logins, secured with one-time passwords having a limited lifetime. Login email alerts notified on each user login, and IAM password policies enforced complexity requirements and mandatory rotation periods. For better management and visibility, Pritunl OpenVPN was used, and CodeCommit integrated with CodePipeline served as the CI/CD for the Lambda deployments.
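The IAM password policy described above can be expressed in Terraform along these lines; the specific length, reuse, and age values are assumptions, not the client's actual settings.

```hcl
# Account-wide IAM password policy (values are illustrative assumptions).
resource "aws_iam_account_password_policy" "strict" {
  minimum_password_length        = 14
  require_uppercase_characters   = true
  require_lowercase_characters   = true
  require_numbers                = true
  require_symbols                = true
  password_reuse_prevention      = 24   # remember last 24 passwords
  max_password_age               = 90   # force rotation every 90 days
  allow_users_to_change_password = true
}
```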
Our deployment process for Lambda functions used CloudFormation with a Blue/Green strategy, allowing us to switch seamlessly between versions in critical situations using Lambda versioning and aliasing. To keep data safe, backups were automated and scheduled with retention periods. We also prioritized security by never hardcoding secrets for databases and applications; instead, they were pulled from the secure secrets store in AWS, with automatic rotation of both database credentials and application user credentials every 30 or 90 days.
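As one way to picture the versioning-and-alias mechanism behind such a Blue/Green switch, a Terraform sketch of a weighted Lambda alias is shown below. The function name, version numbers, and traffic weight are hypothetical, and the project itself drove deployments through CloudFormation rather than this exact resource.

```hcl
# A "live" alias pointing at the current (blue) version, with a small
# share of traffic shifted to the new (green) version for validation.
resource "aws_lambda_alias" "live" {
  name             = "live"
  function_name    = "telehealth-api"   # hypothetical function name
  function_version = "8"                # current "blue" version

  routing_config {
    additional_version_weights = {
      "9" = 0.1                         # 10% of traffic to "green"
    }
  }
}
```

Rolling back amounts to repointing the alias at the previous version, with no redeploy of code.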
Liquibase, an open-source, database-independent library, was used for tracking, managing, and applying database schema changes. The Postgres RDS instances were set up with Multi-AZ to ensure availability and reliability in case of a disaster: each Availability Zone runs on separate, independent infrastructure designed for high reliability, and Amazon RDS performs an automatic failover to the standby on infrastructure failure to avoid database operation outages. Dockerized microservices were deployed on ECS Fargate for rapid scalability, with the Docker images versioned, tagged, and stored in the Elastic Container Registry for safe rollback in critical situations. Frontend sites were hosted in S3 buckets, with CloudFront as a CDN to cache the websites globally; a CloudFront Origin Access Identity prevented unauthorized public access to the S3 buckets while improving the websites' performance and efficiency.
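The Origin Access Identity pattern mentioned above can be sketched in Terraform as follows; the bucket name is a placeholder, and the bucket's public-access-block settings are assumed to be configured elsewhere.

```hcl
# Identity that CloudFront uses to read from the private bucket.
resource "aws_cloudfront_origin_access_identity" "site" {
  comment = "Access identity for the frontend bucket"
}

# Bucket policy granting read access to the OAI only.
data "aws_iam_policy_document" "site" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-frontend-bucket/*"]  # hypothetical bucket

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.site.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "site" {
  bucket = "example-frontend-bucket"                        # hypothetical bucket
  policy = data.aws_iam_policy_document.site.json
}
```

With this in place, requests that bypass CloudFront and hit the bucket directly are denied.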
In our infrastructure setup, AWS Route 53 handled DNS management, and for alerting we integrated Datadog with SNS, Lambda, and Amazon Connect. We conducted periodic internal security audits with the help of dedicated tools. To ensure security and compliance, we used AWS Security Hub, which collects and prioritizes security findings and performs automated, continuous configuration and compliance checks based on industry standards and best practices. Amazon Inspector performed security assessments and checked for unintended network accessibility and vulnerabilities on our Amazon EC2 instances. IAM Access Analyzer continuously monitored and analyzed the permissions granted through policies on AWS services such as S3 buckets, KMS keys, SQS queues, IAM roles, and Lambda functions. AWS CloudTrail continuously logged, monitored, and retained API activity related to actions across our company's AWS infrastructure. To protect our accounts, workloads, and data stored in Amazon S3, we enabled Amazon GuardDuty, a threat detection service that uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats; GuardDuty is integrated with Security Hub, and its findings are displayed and aligned in the Security Hub scorecard. We also ran periodic Qualys scans of our security operations to verify the compliance of critical components across the entire architecture. In addition, we hosted Jitsi, a secure and flexible video conferencing solution, in a highly available cluster behind HAProxy for our meetings, and OpenProject was configured for ticketing and collaboration.
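Enabling the GuardDuty and Security Hub pieces of this stack is a few lines of Terraform; the standards ARN below is the AWS Foundational Security Best Practices standard in us-east-1, and subscribing to it is an illustrative choice rather than the client's documented configuration.

```hcl
# Turn on GuardDuty threat detection for the account.
resource "aws_guardduty_detector" "main" {
  enable = true
}

# Enable Security Hub so findings (including GuardDuty's) are aggregated.
resource "aws_securityhub_account" "main" {}

# Subscribe to a managed compliance standard (illustrative choice).
resource "aws_securityhub_standards_subscription" "fsbp" {
  depends_on    = [aws_securityhub_account.main]
  standards_arn = "arn:aws:securityhub:us-east-1::standards/aws-foundational-security-best-practices/v/1.0.0"
}
```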
To address the strain of high connection churn on the database, we implemented RDS Proxy. RDS Proxy maintains a pool of established connections to the RDS database instances, which reduces the stress on the database's compute and memory resources that occurs each time a new connection is opened. Additionally, RDS Proxy shares infrequently used database connections, so fewer connections reach the RDS database directly. This connection pooling enables the database to efficiently support a large number and frequency of application connections while maintaining optimal performance, allowing the application to scale without compromise.
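An RDS Proxy with a tuned connection pool can be sketched in Terraform as below; the role ARN, subnet IDs, secret ARN, and pool percentages are all hypothetical placeholders.

```hcl
# Proxy in front of the Postgres instances; auth comes from Secrets Manager.
resource "aws_db_proxy" "postgres" {
  name           = "phi-db-proxy"                                 # hypothetical
  engine_family  = "POSTGRESQL"
  role_arn       = "arn:aws:iam::123456789012:role/rds-proxy"     # hypothetical
  vpc_subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]         # hypothetical
  require_tls    = true

  auth {
    auth_scheme = "SECRETS"
    iam_auth    = "DISABLED"
    secret_arn  = "arn:aws:secretsmanager:us-east-1:123456789012:secret:phi-db-creds"  # hypothetical
  }
}

# Pooling behavior: cap borrowed connections and limit idle ones.
resource "aws_db_proxy_default_target_group" "postgres" {
  db_proxy_name = aws_db_proxy.postgres.name

  connection_pool_config {
    max_connections_percent      = 90   # illustrative tuning values
    max_idle_connections_percent = 10
  }
}
```

The application then connects to the proxy endpoint instead of the database endpoint, and the proxy multiplexes those connections over its pool.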
To automate credential rotation, we implemented AWS Secrets Manager and integrated it with RDS, which allowed database passwords to be rotated automatically on a set schedule.
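A scheduled rotation like this can be declared in Terraform roughly as follows; the secret name, rotation Lambda ARN, and 30-day interval are illustrative (the source mentions both 30- and 90-day rotation periods).

```hcl
resource "aws_secretsmanager_secret" "db" {
  name = "phi-db-credentials"   # hypothetical secret name
}

# Rotate the secret automatically via a rotation Lambda.
resource "aws_secretsmanager_secret_rotation" "db" {
  secret_id           = aws_secretsmanager_secret.db.id
  rotation_lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret"  # hypothetical

  rotation_rules {
    automatically_after_days = 30
  }
}
```

For RDS secrets, AWS provides ready-made rotation Lambda templates, so the rotation function rarely needs to be written from scratch.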