🔍 .NET Memory Types – What They Are, When to Use Them, and Why They Matter
Memory issues in .NET rarely come from "not enough RAM".
They come from using the wrong memory model for the job.
Understanding how .NET manages memory helps you:
- Avoid performance traps
- Reduce GC pressure
- Write safer, faster code
Let's break it down clearly and practically 👇
🧠 The Big Picture
In .NET, memory is mainly divided into:
- Stack
- Managed Heap
- Unmanaged Memory
And inside the managed heap, the GC uses generations and specialized heaps.
1️⃣ Stack Memory – Fast, Scoped, Automatic
Used for:
- Local variables
- Method parameters
- Value types (usually)
- References to heap objects
```csharp
void Process()
{
    int count = 10;          // value type: lives on the stack
    var order = new Order(); // reference on the stack, object on the heap
}
```
✅ Pros
- Extremely fast
- Automatically cleaned up
- No GC involvement
⚠️ Limits
- Small size
- Short lifetime
- Not suitable for large objects
👉 Think: temporary, short-lived data
2️⃣ Managed Heap – Flexible, GC-Controlled
Used for:
- Reference types
- Objects created with new
- Long-lived data
```csharp
var customer = new Customer();
```
✅ Pros
- Flexible lifetimes
- Automatic memory management
- Safer than manual allocation
⚠️ Trade-offs
- Garbage collection pauses
- Fragmentation risks
- Allocation overhead under memory pressure
3️⃣ Garbage Collector Generations (Gen 0, 1, 2)
The GC optimizes based on object lifetime.
- Gen 0 → short-lived objects
- Gen 1 → survivors of a Gen 0 collection
- Gen 2 → long-lived objects (caches, singletons)
Gen 0 → Gen 1 → Gen 2
💡 Most objects die young, and the GC is optimized for that.
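You can watch promotion happen with GC.GetGeneration. A minimal sketch (exact promotion timing can vary with GC mode, but surviving a collection normally moves an object up a generation):

```csharp
using System;

class GenDemo
{
    static void Main()
    {
        var obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // freshly allocated: Gen 0

        GC.Collect(); // obj survives, so it is promoted
        Console.WriteLine(GC.GetGeneration(obj)); // typically Gen 1

        GC.Collect(); // surviving again promotes it further
        Console.WriteLine(GC.GetGeneration(obj)); // typically Gen 2

        GC.KeepAlive(obj); // keep obj reachable through the collections above
    }
}
```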
4️⃣ Large Object Heap (LOH)
Objects of 85,000 bytes (~85 KB) or larger go to the LOH.
```csharp
var buffer = new byte[100_000];
```
⚠️ Important characteristics
- Collected only during Gen 2 GC
- Historically not compacted by default
- Fragmentation can be expensive
👉 Avoid frequent allocation of large arrays.
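The standard way to avoid churning the LOH is to rent and return large buffers via ArrayPool<T> from System.Buffers, rather than allocating a fresh array per operation. A minimal sketch:

```csharp
using System;
using System.Buffers;

class PoolDemo
{
    static void Main()
    {
        // Rent instead of "new byte[100_000]" on every call.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(100_000);
        try
        {
            // Note: the pool may return a larger array than requested.
            Console.WriteLine(buffer.Length >= 100_000); // True
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```

Renting reuses buffers across calls, so a hot path stops producing new LOH allocations entirely.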
5️⃣ Pinned Object Heap (POH)
Used when objects must not move in memory.
```csharp
var handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
// ... pass handle.AddrOfPinnedObject() to native code ...
handle.Free(); // unpin as soon as possible, or the handle leaks
```
Used for:
- Interop
- Native APIs
- Fixed buffers
⚠️ Overuse causes heap fragmentation.
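On .NET 5 and later you can allocate directly on the POH with GC.AllocateArray, which avoids juggling a GCHandle for buffers that must stay pinned for their whole lifetime. A minimal sketch:

```csharp
using System;

class PohDemo
{
    static void Main()
    {
        // .NET 5+: the array lives on the Pinned Object Heap and never moves,
        // so no explicit GCHandle pin/unpin is needed.
        byte[] buffer = GC.AllocateArray<byte>(256, pinned: true);
        Console.WriteLine(buffer.Length); // 256
    }
}
```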
6️⃣ Unmanaged Memory – Manual but Predictable
Allocated outside the GC.
```csharp
IntPtr ptr = Marshal.AllocHGlobal(1024);
// ... use the unmanaged buffer ...
Marshal.FreeHGlobal(ptr);
```
✅ Pros
- No GC impact
- Predictable lifetime
- Required for interop
❌ Cons
- Manual cleanup
- Memory leaks if misused
- More error-prone
👉 Use only when necessary.
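Because nothing frees unmanaged memory for you, the allocation should be paired with a try/finally (or wrapped in a SafeHandle). A minimal sketch using Marshal read/write helpers:

```csharp
using System;
using System.Runtime.InteropServices;

class UnmanagedDemo
{
    static void Main()
    {
        IntPtr ptr = Marshal.AllocHGlobal(1024);
        try
        {
            // Write and read a value through the raw unmanaged pointer.
            Marshal.WriteInt32(ptr, 0, 42);
            int value = Marshal.ReadInt32(ptr, 0);
            Console.WriteLine(value); // 42
        }
        finally
        {
            // Free in finally: skipping this on an exception path is a leak.
            Marshal.FreeHGlobal(ptr);
        }
    }
}
```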
7️⃣ Span<T> – Stack-Friendly, Zero Allocation
Represents a slice of memory.
```csharp
Span<byte> buffer = stackalloc byte[256];
```
✅ Benefits
- No heap allocation
- Extremely fast
- Great for parsing, IO, serialization
⚠️ Constraints
- Stack-only (it is a ref struct)
- Cannot escape the method or be stored on the heap
- Cannot be used across async/await
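The parsing win comes from slicing without allocating substrings. A minimal sketch (the "id=12345" input is just an illustrative example):

```csharp
using System;

class SpanDemo
{
    static void Main()
    {
        ReadOnlySpan<char> line = "id=12345".AsSpan();

        // Slice around '=' without allocating any intermediate strings.
        int sep = line.IndexOf('=');
        ReadOnlySpan<char> value = line.Slice(sep + 1);

        int id = int.Parse(value); // int.Parse accepts ReadOnlySpan<char>
        Console.WriteLine(id); // 12345
    }
}
```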
8️⃣ Memory<T> – Heap-Friendly Span
Safe alternative to Span<T> for async scenarios.
```csharp
Memory<byte> buffer = new byte[256];
await stream.ReadAsync(buffer);
```
✔️ Can live on the heap
✔️ Async-safe
❌ Slightly slower than Span<T>
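Here is the snippet above made self-contained with a MemoryStream, showing the key property: the Memory<T> buffer survives across the await, which a Span<T> cannot do:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class MemoryDemo
{
    static async Task Main()
    {
        var stream = new MemoryStream(new byte[] { 1, 2, 3, 4 });
        Memory<byte> buffer = new byte[4];

        // The buffer crosses the await boundary; Span<T> would not compile here.
        int read = await stream.ReadAsync(buffer);

        Console.WriteLine(read);           // 4
        Console.WriteLine(buffer.Span[0]); // 1
    }
}
```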
🧭 Choosing the Right Memory Type
| Scenario | Best Choice |
|---|---|
| Short-lived locals | Stack |
| Normal objects | Managed Heap |
| Large buffers | Pooling / LOH-aware |
| Interop | Unmanaged / POH |
| High-performance parsing | Span<T> |
| Async buffers | Memory<T> |
⚠️ Common Memory Mistakes
❌ Allocating large arrays repeatedly
❌ Ignoring LOH fragmentation
❌ Pinning too much memory
❌ Overusing unmanaged memory
❌ Not pooling buffers
Most performance bugs are allocation bugs.
🎯 Final Takeaway
.NET memory management is automatic, but performance is still your responsibility.
Know where your data lives.
Know how long it lives.
And choose the right memory tool.
That's how you write high-performance .NET code.
#dotnet #csharp #memorymanagement #performance #gc #softwareengineering #backend