## Which calculator should I use?
| | Basic | Advanced |
| --- | --- | --- |
| Best for | Quick estimates, early planning, procurement conversations | Detailed sizing, matching your actual service portfolio |
| You know | How many cores you have (or are buying) | How many services you want to run |
| You get back | Recommended memory and maximum services | Recommended cores and memory |
| Direction | Top-down: cores → services + memory | Bottom-up: services → cores + memory |
| ArcSOC memory | Fixed Esri rule of thumb (4 GB/core) | Configurable per-instance value; use your own measurements |
| Accuracy | Conservative estimate, good for initial budgeting | More precise, reflects your specific workload |
Tip: Use the Basic calculator first to get a ballpark, then use the Advanced calculator to refine. At 25% concurrency with default settings, both calculators agree - the Advanced calculator confirms the Basic's 4 GB/core rule of thumb.
## How the Basic calculator works
The Basic calculator applies Esri best-practice rules of thumb directly to your core count.
| Output | Formula | Basis |
| --- | --- | --- |
| Memory | Cores × 4 GB (physical/cloud) or × 5 GB (on-prem VM), rounded up to the nearest 8 GB | Esri recommendation. The +1 GB/core for on-premises VMs covers hypervisor headroom; cloud IaaS providers (Azure, AWS) guarantee dedicated memory, so no overhead applies. |
| Recommended Max Services | Cores × 3 | ~4 ArcSOC instances per core (Esri Architecture Center). At default min=1/max=2 instances, 3 services × ~1.33 avg instances ≈ 4 per core. |
| Real World Max Services | Cores × 4 | All services loaded at minimum 1 instance each. Optimistic: assumes no service ever scales to max instances. |
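The rules of thumb above can be sketched as a small function. This is a sketch, not the calculator's actual code; the "round up to the nearest 8 GB" granularity and the function/parameter names are assumptions.

```python
import math

def basic_sizing(cores: int, on_prem_vm: bool = False) -> dict:
    """Apply the Basic calculator's rules of thumb to a core count (sketch)."""
    gb_per_core = 5 if on_prem_vm else 4                  # +1 GB/core hypervisor headroom on VMs
    memory_gb = math.ceil(cores * gb_per_core / 8) * 8    # round up to nearest 8 GB (assumed granularity)
    return {
        "memory_gb": memory_gb,
        "recommended_max_services": cores * 3,            # ~4 ArcSOC instances per core
        "real_world_max_services": cores * 4,             # every service at min 1 instance
    }
```

For a 4-core physical machine this yields 16 GB, 12 recommended services, and 16 real-world maximum services, matching the table.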
### I like to live dangerously
Enabling this mode assumes only a percentage of services are active at any one time -
the rest have zero running instances. This allows you to configure significantly more
services on the same hardware.
| Concurrency | Services (4 cores) |
| --- | --- |
| Conservative (best practice) | 12 |
| 50% concurrent | 32 |
| 25% concurrent | 64 |
| 10% concurrent | 160 |
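The concurrent rows appear to follow from dividing the real-world maximum (cores × 4) by the assumed active fraction; the conservative row is the recommended maximum (cores × 3). This formula is inferred from the table, not taken from the calculator's source:

```python
def max_services_at_concurrency(cores: int, concurrency: float) -> int:
    """Services that fit if only `concurrency` of them are active at once.

    Inferred relationship: real-world max (cores × 4) divided by the
    fraction of services assumed to be active simultaneously.
    """
    return round(cores * 4 / concurrency)
```

For 4 cores: 50% → 32, 25% → 64, 10% → 160, matching the table.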
The risk: When a request hits a service with zero instances, ArcGIS Server
must cold-start an ArcSOC process. This can take several seconds to over a minute
depending on the service's data and complexity. Requests during this startup window
will queue or fail. This approach is only appropriate for services with relaxed
response-time requirements and highly predictable, low-concurrency load.
## How the Advanced calculator works
The Advanced calculator models the actual ArcSOC process lifecycle to derive memory
and core requirements from your service count and configuration.
### Memory model
Memory is calculated from all ArcSOC processes that are loaded in RAM - including idle ones:
```
Loaded instances = (min_instances × (services + system_services))
                 + (busy% × loaded instances × (max − min))   ← burst
Memory = OS overhead + ArcGIS overhead
       + (loaded instances × ArcSOC memory per instance)
```
Idle ArcSOC processes (at minimum instances) consume RAM even when not serving requests.
This is the key difference from CPU - RAM is always consumed, CPU is only consumed under load.
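The memory model can be sketched in Python. This is a sketch under stated assumptions, not the calculator's code: the burst term is applied to the baseline min-instance pool (the formula as written references loaded instances circularly), and the OS/ArcGIS overhead defaults, parameter names, and default values here are illustrative.

```python
def memory_gb(services: int, arcsoc_gb: float,
              min_inst: int = 1, max_inst: int = 2,
              busy: float = 0.25, system_services: int = 6,
              os_overhead_gb: float = 4.0, arcgis_overhead_gb: float = 2.0) -> float:
    """Sketch of the Advanced calculator's memory model (assumed defaults)."""
    baseline = min_inst * (services + system_services)     # idle ArcSOCs pinned in RAM
    burst = busy * baseline * (max_inst - min_inst)        # assumption: busy% of baseline scales up
    loaded = baseline + burst
    return os_overhead_gb + arcgis_overhead_gb + loaded * arcsoc_gb
```

For example, 12 services at 0.5 GB per ArcSOC with these defaults gives 18 baseline instances, 4.5 burst instances, and 17.25 GB total.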
### Cores model
Cores are calculated only from instances actively consuming CPU at peak concurrency:
```
CPU-active instances = busy% × (services + system_services) × max_instances
Cores = CPU-active instances ÷ 4 instances/core
        (rounded up to the nearest 4 for Esri licensing)
```
Idle min-instance ArcSOC processes sit in RAM but consume negligible CPU.
Sizing cores to all loaded instances would over-provision significantly.
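The cores model can be sketched the same way; again a sketch with assumed parameter names and defaults, not the calculator's source:

```python
import math

def cores_needed(services: int, busy: float = 0.25,
                 max_inst: int = 2, system_services: int = 6,
                 instances_per_core: int = 4) -> int:
    """Sketch of the Advanced calculator's cores model (assumed defaults)."""
    cpu_active = busy * (services + system_services) * max_inst
    raw = math.ceil(cpu_active / instances_per_core)
    return math.ceil(raw / 4) * 4          # round up to nearest 4 cores (Esri licensing)
```

For example, 26 services at 25% busy gives 16 CPU-active instances → 4 cores; 60 services gives 33 active instances → 9 raw cores, rounded up to 12.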
### System services
ArcGIS Server starts approximately 6 system ArcSOC processes by default
(CachingTools, PublishingTools ×2, SyncTools, ReportingTools, SceneCachingTools).
These are included in the calculation and cannot be disabled on a standard installation.
### Min Instances = 0
Setting minimum instances to 0 means no ArcSOC processes are pre-loaded.
Memory drops significantly - only services actively handling requests consume RAM.
The calculator treats the busy percentage as the fraction of services that are
hot (loaded) at any given time.
The trade-off is cold-start latency on the first request to any idle service.
This is appropriate for infrequently-used services where response time is not critical.
## Measuring your ArcSOC memory
The included PowerShell script Get-ArcSOCMemory.ps1 scans all running
ArcSOC.exe processes on the current machine and reports working set and peak working
set statistics including min, median, mean, and max. Run it on your ArcGIS Server
machine during a representative period of normal load:
```powershell
.\Get-ArcSOCMemory.ps1
```
The script outputs a suggested value for the ArcSOC Memory field in the Advanced
calculator - the peak working set median, rounded up to the nearest 100 MB.
Using peak rather than current working set accounts for services that have been
under load at some point since the server started.
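The rounding rule the script applies (peak working set median, rounded up to the nearest 100 MB) can be reproduced as follows. The script itself is PowerShell; this Python sketch only illustrates the arithmetic, and the sample values are hypothetical:

```python
import math
import statistics

def suggested_arcsoc_mb(peak_working_sets_mb: list[float]) -> int:
    """Median of per-process peak working sets, rounded up to the nearest 100 MB."""
    median = statistics.median(peak_working_sets_mb)
    return math.ceil(median / 100) * 100
```

For example, peaks of 180, 220, and 310 MB yield a median of 220 MB and a suggested value of 300 MB.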