Introduction
There’s a common misconception that moving from VMs to containers automatically reduces resource requirements. Some also assume that microservices are inherently small, leading to unrealistic expectations about infrastructure sizing.
However, containerization is not about reducing resource usage—it’s about flexibility, automation, cloud adoption, and efficient resource utilization. Similarly, “micro” in microservices refers to modularity, not minimal resource footprint. Let’s clear up these misunderstandings using a simple house metaphor and discuss how to approach sizing effectively.
Misconception 1: “Deploying to Containers Means Smaller Sizing”
Imagine you have a house representing an application running on a Virtual Machine (VM). Now, you move that house into a containerized environment. Does the house suddenly shrink? No! The number of people inside the house remains the same, and they still need the same space, resources, and utilities.

📌 Key takeaway: If an app requires X amount of resources on a VM, it will still require X in a container. Containerization does not magically reduce CPU, RAM, or storage needs. The workload defines the resource needs, not the deployment model.
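To make this concrete, here is a minimal back-of-the-envelope sizing sketch in Python. All of the numbers (requests per second, CPU and memory cost per request, pod request size) are hypothetical placeholders, not measurements; the point is only that the totals are driven by the workload, whether capacity is provisioned as one large VM or as many container replicas.

```python
import math

# Hypothetical workload profile -- replace with measured values for a real app.
peak_requests_per_second = 500
cpu_millicores_per_request = 4          # average CPU cost of handling one request
memory_mib_per_concurrent_request = 2
avg_request_duration_seconds = 0.2

# Total capacity the workload needs, independent of the deployment model.
total_cpu_millicores = peak_requests_per_second * cpu_millicores_per_request
concurrent_requests = peak_requests_per_second * avg_request_duration_seconds
total_memory_mib = concurrent_requests * memory_mib_per_concurrent_request

print(f"Total CPU needed:    {total_cpu_millicores} millicores")
print(f"Total memory needed: {total_memory_mib:.0f} MiB")

# Packing the same capacity into container replicas changes the shape,
# not the total: e.g. pods that each request 500m of CPU.
pod_cpu_request_millicores = 500
replicas = math.ceil(total_cpu_millicores / pod_cpu_request_millicores)
print(f"Equivalent pod count at 500m each: {replicas}")
```

Whether the answer is one 2-core VM or four 500m pods, the house is the same size; only the floor plan changes.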
🔹 What does containerization offer instead?
- Portability – Move workloads seamlessly across environments, whether on-premises or cloud.
- Automation – Deploy, scale, and manage applications efficiently with orchestration tools like Kubernetes.
- Cloud-native benefits – Leverage cloud elasticity, managed services, and optimized cost strategies.
- Faster deployments & updates – Containers simplify CI/CD pipelines, reducing downtime and increasing agility.
- Resource efficiency – While containerization doesn’t shrink app needs, it allows for better resource pooling and dynamic scaling.
🚀 What can be optimized? While the base resource need remains the same, autoscaling, workload profiling, and optimized instance selection can help manage infrastructure costs over time. Instead of focusing on reducing size, teams should focus on better resource utilization using Kubernetes features like Horizontal Pod Autoscaler (HPA) and Cluster Autoscaler.
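As an illustration of how the Horizontal Pod Autoscaler reasons about this, the sketch below implements a simplified version of the scaling rule from the Kubernetes HPA documentation, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to configured minimum and maximum replica counts. The metric values and limits used in the example are hypothetical.

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Simplified HPA rule: scale in proportion to how far the observed
    metric is from the target, clamped to [min_replicas, max_replicas]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# Hypothetical: 4 pods averaging 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(current_replicas=4, current_metric=90, target_metric=60))
```

The base resource need per request has not changed; the autoscaler simply matches the number of replicas to the demand curve so that capacity is not paid for around the clock.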
Misconception 2: “Microservices Means Tiny Apps”
The word “micro” in microservices refers to modularity, not size. A microservice is independent, meaning it needs its own resources, runtime, and supporting components.

Using the house metaphor again, consider two approaches when moving from VMs to containers:
- Keeping the entire house as-is inside a container → This is similar to running a monolithic application in a container, where all functionalities remain tightly coupled.
- Breaking the house into smaller units → This is the microservices approach. Each smaller house (service) is independent, meaning it needs its own kitchen, bathrooms, and utilities. While this approach enables better scalability and flexibility, it also adds overhead because each unit must function autonomously.
📌 Key takeaway: Microservices are not necessarily “tiny.” While they offer flexibility, independent scaling, and fault isolation, each service adds weight to the overall system. A service that seems small in terms of functionality may still have a significant runtime footprint due to dependencies, API communication, and state management needs.
🚀 Additional Considerations:
- Operational Complexity – Managing many independent services increases deployment, monitoring, and troubleshooting efforts.
- Communication Overhead – Unlike monoliths, microservices need inter-service communication, often adding latency and requiring API gateways.
- Infrastructure Cost – Each service has its own resource allocation, logging, and security configurations, which can lead to higher cumulative costs if not managed well.
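A rough way to see this cumulative cost is to add a fixed per-service baseline (language runtime, service-mesh sidecar, metrics and logging agents) on top of the business-logic footprint. The sketch below does exactly that; every figure is an illustrative assumption, not a measurement.

```python
# Hypothetical per-service baseline: runtime, sidecar, logging/metrics agents.
BASELINE_CPU_MILLICORES = 100
BASELINE_MEMORY_MIB = 150

# Business-logic footprint of the workload (the same in both architectures).
logic_cpu_millicores = 2000
logic_memory_mib = 1024

def total_footprint(service_count: int) -> tuple[int, int]:
    """Business logic plus a fixed baseline for every independent service."""
    cpu = logic_cpu_millicores + service_count * BASELINE_CPU_MILLICORES
    mem = logic_memory_mib + service_count * BASELINE_MEMORY_MIB
    return cpu, mem

for services in (1, 8, 20):  # monolith vs. increasingly fine-grained splits
    cpu, mem = total_footprint(services)
    print(f"{services:>2} services -> {cpu} millicores, {mem} MiB minimum")
```

The business logic stays constant; the minimum footprint grows with every extra kitchen and bathroom the smaller houses need.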
So, How Should We Think About Sizing?
- Base sizing is determined by workload needs, not just the deployment type.
  - The infrastructure footprint is driven by peak loads, concurrency requirements, and projected usage patterns.
- Can we start small? Yes!
  - Autoscalers can adjust resource allocation dynamically to match demand.
  - Example: increase teller service pods during the day when traffic is high, then repurpose that capacity for batch processing at night (see the sketch after this list).
- Microservices introduce overhead but enable flexibility.
  - Not all applications benefit from microservices; monoliths are still valid for certain workloads.
  - The minimum footprint is higher because each service requires its own dependencies, networking, and scaling logic.
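The teller/batch example can be expressed as a simple time-based scaling policy. In practice this is typically wired up through Kubernetes CronJobs or an autoscaler's scheduled rules; the service names, replica counts, and time windows below are hypothetical illustrations of the idea, not a specific product's API.

```python
from datetime import time

# Hypothetical scaling windows: favor interactive traffic by day,
# shift the freed-up capacity to batch processing at night.
DAY_START, DAY_END = time(8, 0), time(20, 0)

def planned_replicas(now: time) -> dict[str, int]:
    """Return the desired replica count per workload for a given time of day."""
    if DAY_START <= now < DAY_END:
        return {"teller-service": 10, "batch-processor": 1}
    return {"teller-service": 2, "batch-processor": 8}

print(planned_replicas(time(11, 30)))  # business hours: scale tellers out
print(planned_replicas(time(23, 0)))   # overnight: repurpose capacity for batch
```

The total cluster capacity can stay flat; what changes is which workload is allowed to use it at any given time.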
✅ Best Practice: Instead of focusing solely on microservices or containers, teams should take a hybrid approach, carefully evaluating what should be split into microservices and what should remain together to balance flexibility with efficiency.
Final Thoughts
Understanding what containerization and microservices truly offer helps in setting the right expectations. Moving to containers does not automatically shrink application resource needs, and microservices are not inherently lightweight—they offer different advantages that must be balanced with architectural considerations.
By taking a **measured approach to sizing and scaling**, teams can set realistic expectations, keep infrastructure costs under control, and still capture the real benefits of containers and microservices: flexibility, automation, and cloud-native agility.