Hello 👋, I'm

Deependra Kumar

Software Developer



About Me

Introduction

Passionate Backend Developer with 1.5+ years of hands-on experience designing, building, and deploying enterprise-grade, scalable systems. Specialized in the Python ecosystem, with expertise in FastAPI, PostgreSQL, Redis, and cloud technologies including AWS (SageMaker, Lambda, Step Functions) and Google Cloud Platform (GKE).

Proven track record in microservices architecture, containerization with Docker and Kubernetes, and implementing comprehensive monitoring solutions. Known for taking end-to-end ownership of projects from conception to production deployment, with a strong focus on performance optimization and system reliability.

Recognized for exceptional problem-solving abilities with HackerRank Gold badges in Python and Problem Solving, and recipient of multiple awards including the K Vasudevan Award and Best Team Award. Committed to continuous learning and contributing to high-impact projects in collaborative engineering environments.

Deependra Kumar

+91-9685468824
kdeep05.dg@gmail.com

Education

Academic Background

Bachelor of Engineering in Electrical Engineering

SHRI VAISHNAV INSTITUTE OF TECHNOLOGY AND SCIENCE (SVITS)

Location: Indore, India

CGPA: 8.1/10

Achievements & Awards

Recognition & Honors

Python Gold Badge

Issued by HackerRank

2023

Problem Solving Gold Badge

Issued by HackerRank

2023

SQL Silver Badge

Issued by HackerRank

2023

Best Team Award

Team achievement with client appreciation

Professional

K Vasudevan Award

Among top 1000 final year students, SVITS Indore

2019

Certifications

My Certificates

Technical Skills

Skills & Tools

Backend Technologies

Python
FastAPI
PostgreSQL
Redis
Redis Streams
REST APIs
Microservices

DevOps & Infrastructure

Docker
Kubernetes
Git
CI/CD
Logging & Monitoring
Alerting

Cloud Platforms

AWS SageMaker
AWS Lambda
AWS Step Functions
AWS ECS
Google Cloud (GKE)
GCP Monitoring

Tools & OS

GitHub
SQL
VS Code
Cursor
Linux
Windows

Soft Skills

Team Player
Result-Oriented
Ownership & Accountability

Professional Projects & Technical Leadership

Enterprise-Grade Backend Systems & Cloud Architecture

LLM-Powered Customer Support Chatbot - Sean

Nov 2024 - Present · Production System
Live Demo

System Design & Scalable Backend Development

  • Built backend architecture from scratch using FastAPI, Redis, Redis Streams, and PostgreSQL, optimized to handle up to 2000 requests per minute (RPM)
  • Designed and implemented a modular, scalable backend structure to support complex business logic and performant database operations
  • Translated product designs and wireframes into production-ready code using FastAPI and PostgreSQL
  • Developed and deployed microservices with strong focus on scalability, modularity, and observability
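The modular structure described above can be sketched as a simple router → service → repository layering. This is a minimal, hypothetical illustration: the names (`ConversationRepo`, `ChatService`) are invented for the sketch, and an in-memory dict stands in for PostgreSQL.

```python
# Hypothetical sketch of a layered backend: router -> service -> repository.
# An in-memory dict stands in for PostgreSQL; names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ConversationRepo:
    """Stand-in for the PostgreSQL access layer."""
    _rows: dict = field(default_factory=dict)

    def save(self, session_id: str, message: str) -> None:
        self._rows.setdefault(session_id, []).append(message)

    def history(self, session_id: str) -> list:
        return list(self._rows.get(session_id, []))

class ChatService:
    """Business logic; depends only on the repository interface."""
    def __init__(self, repo: ConversationRepo):
        self.repo = repo

    def handle_message(self, session_id: str, text: str) -> dict:
        self.repo.save(session_id, text)
        return {"session": session_id, "turns": len(self.repo.history(session_id))}

# A FastAPI router would call ChatService.handle_message from an endpoint.
service = ChatService(ConversationRepo())
print(service.handle_message("s1", "hello"))  # {'session': 's1', 'turns': 1}
```

Because the service depends only on the repository interface, swapping the in-memory repo for a real PostgreSQL implementation requires no changes to the business logic.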

Cloud-Native Deployment & Infrastructure (GCP)

  • Set up Google Kubernetes Engine (GKE) from scratch and managed end-to-end deployment of backend services using Docker and Kubernetes
  • Integrated Redis & Redis Streams for real-time message transformation and conversation state tracking across services
  • Enabled WebSocket communication at scale via Nginx Ingress Controller in Kubernetes, supporting real-time bidirectional flows

Auto-Scaling, Reliability, Monitoring & Observability

  • Implemented Horizontal Pod Autoscaling (HPA) driven by custom Prometheus metrics, dynamically scaling WebSocket services based on active connections
  • Configured a Pod Disruption Budget (PDB) to avoid downtime during voluntary disruptions
  • Set up distributed tracing and logging using OpenTelemetry (OTEL) to monitor service health and debug across microservices
  • Established proactive alerting and anomaly detection using GCP Monitoring Alerts
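The connection-based autoscaling above follows Kubernetes' standard HPA rule, desired = ceil(current × metric / target). A small sketch of that arithmetic, where the target of 100 connections per pod is an assumed example rather than the production configuration:

```python
# Sketch of the Kubernetes HPA scaling rule for an averageValue metric:
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
# Here the metric is "active WebSocket connections per pod"; the target of
# 100 connections/pod is an assumed example value.

import math

def desired_replicas(current_replicas: int,
                     avg_connections_per_pod: float,
                     target_per_pod: float = 100.0) -> int:
    """Mirror of the HPA formula, clamped to a minimum of one replica."""
    return max(1, math.ceil(current_replicas * avg_connections_per_pod / target_per_pod))

print(desired_replicas(3, 180))  # 3 pods at 180 conns/pod vs target 100 -> 6
```

When the per-pod connection count exceeds the target, the deployment scales out until the average falls back under it; the Prometheus adapter is what exposes the custom metric to the HPA controller.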

CI/CD & DevOps

  • Set up CI/CD pipelines using GitHub Actions for automated builds and deployments
FastAPI · Redis Streams · PostgreSQL · Kubernetes · GKE · Docker · Prometheus · OpenTelemetry · WebSocket · Nginx · GitHub Actions

Serverless ML Inference Pipeline on AWS

May 2024 - Nov 2024 · AWS Architecture

End-to-End Serverless Architecture

  • Architected and deployed an end-to-end ML inference pipeline on AWS for a retail use case, designed to process batch JSON inputs derived from CSV files uploaded to S3
  • Implemented a Lambda-triggered workflow with automated CSV to JSON conversion and timestamp-based file tracking
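The conversion and tracking step can be sketched as follows. This is a hedged stand-in: the real version runs inside a Lambda triggered by S3, whereas here the S3 read/write is replaced with plain strings so only the transformation logic is shown, and the key naming scheme is illustrative.

```python
# Sketch of the CSV-to-JSON conversion step with timestamp-based file
# tracking. S3 I/O is replaced with plain strings; key naming is illustrative.

import csv, io, json, time

def csv_to_json_records(csv_text: str) -> str:
    """Convert CSV rows into a JSON batch for the inference endpoints."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)

def tracked_key(prefix: str = "batches") -> str:
    """Timestamp-based key so each upload maps to a traceable batch file."""
    return f"{prefix}/{time.strftime('%Y%m%dT%H%M%S')}.json"

sample = "sku,qty\nA1,3\nB2,5\n"
print(csv_to_json_records(sample))  # [{"sku": "A1", "qty": "3"}, {"sku": "B2", "qty": "5"}]
```

In the real pipeline, the Lambda would read the uploaded object, write the JSON under the timestamped key, and that key becomes the handle the rest of the workflow tracks.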

Automated Workflow & Concurrency Management

  • Designed workflow: users upload a CSV → a Lambda converts it to JSON → a second Lambda invokes a Step Functions workflow that calls 7 SageMaker endpoints serving distinct ONNX models → final output
  • Managed concurrency using AWS Step Functions with max concurrency of 2 parallel branches, processing 500 records concurrently across all 7 endpoints
  • Implemented intelligent result aggregation with custom business rules and automated cleanup mechanisms
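The fan-out with bounded parallelism can be simulated locally: 7 "endpoints" invoked with at most 2 branches running at once, mirroring a Step Functions Map state's max concurrency of 2. The `invoke` function here is a stand-in for a real SageMaker endpoint call.

```python
# Local simulation of the Step Functions fan-out: 7 endpoints, at most 2
# branches in flight at a time. invoke() is a stand-in for a SageMaker call.

from concurrent.futures import ThreadPoolExecutor

ENDPOINTS = [f"model-{i}" for i in range(7)]

def invoke(endpoint: str, records: list) -> dict:
    """Stand-in for invoking one SageMaker endpoint with a record batch."""
    return {"endpoint": endpoint, "processed": len(records)}

def run_pipeline(records: list, max_concurrency: int = 2) -> list:
    # max_workers bounds parallelism the way maxConcurrency does in a Map state.
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        return list(pool.map(lambda e: invoke(e, records), ENDPOINTS))

results = run_pipeline([{"id": i} for i in range(500)])
print(len(results), results[0]["processed"])  # 7 500
```

Bounding concurrency at 2 keeps at most two endpoints under load at once while the batch of 500 records still reaches all 7 models.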

Fault Tolerance & Cost Optimization

  • Designed robust cleanup mechanism using Lambda functions to automatically destroy all 7 SageMaker endpoints after successful completion or failure scenarios
  • Ensured complete serverless, event-driven automation with strong emphasis on cost-efficiency, scalability, and fault-tolerance across all pipeline stages
  • Handled OOM errors, runtime exceptions, and timeout triggers with automated recovery
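The cleanup guarantee above reduces to a try/finally around the run: endpoints are torn down whether the job succeeds or raises (OOM, timeout, runtime error). The client below is a fake; in the real pipeline this role is played by the SageMaker API via Boto3.

```python
# Sketch of cleanup-on-any-outcome for the 7 SageMaker endpoints.
# FakeSageMaker is a stand-in for the real Boto3 SageMaker client.

class FakeSageMaker:
    def __init__(self):
        self.live = set()
    def create_endpoint(self, name):
        self.live.add(name)
    def delete_endpoint(self, name):
        self.live.discard(name)

def run_with_cleanup(client, endpoints, job):
    for name in endpoints:
        client.create_endpoint(name)
    try:
        return job()
    finally:
        # Runs on success AND on any exception, so no endpoint keeps billing.
        for name in endpoints:
            client.delete_endpoint(name)

def oom_job():
    raise MemoryError("simulated OOM")

sm = FakeSageMaker()
names = [f"model-{i}" for i in range(7)]
try:
    run_with_cleanup(sm, names, oom_job)
except MemoryError:
    pass
print(sm.live)  # set() -- all endpoints destroyed despite the failure
```

The same pattern covers the timeout and runtime-exception cases: as long as the failure surfaces as an exception, the finally block destroys every endpoint.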
AWS Lambda · SageMaker · Step Functions · Amazon S3 · Event Scheduler · ONNX Models · Python · Boto3 · CloudWatch

GitHub Statistics & Calendar

GitHub Activity Graph
📊

150+

Pull Requests

Across Private & Organization Repos
💻

500+

Commits

1.5+ Years of Active Development
πŸ†

Pull Shark x3

GitHub Achievement

Significant PR Contributions

Note: Most GitHub activity is in private/organization repositories. View Public Profile →

Public Repository Activity

GitHub Streak Stats
View full profile on GitHub →