ArcGIS Server Memory Calculator - Help

Help and guidance

Which calculator should I use?

                | Basic                                                      | Advanced
Best for        | Quick estimates, early planning, procurement conversations | Detailed sizing, matching your actual service portfolio
You know        | How many cores you have (or are buying)                    | How many services you want to run
You get back    | Recommended memory and maximum services                    | Recommended cores and memory
Direction       | Top-down: cores → services + memory                        | Bottom-up: services → cores + memory
ArcSOC memory   | Fixed Esri rule of thumb (4 GB/core)                       | Configurable per-instance value - use your own measurements
Accuracy        | Conservative estimate, good for initial budgeting          | More precise, reflects your specific workload
Tip: Use the Basic calculator first to get a ballpark, then use the Advanced calculator to refine. At 25% concurrency with default settings, both calculators agree - the Advanced calculator confirms the Basic's 4 GB/core rule of thumb.

How the Basic calculator works

The Basic calculator applies Esri best-practice rules of thumb directly to your core count.

Output                   | Formula                                                                            | Basis
Memory                   | Cores × 4 GB (physical/cloud) or × 5 GB (on-prem VM), rounded to the nearest 8 GB | Esri recommendation. +1 GB/core for on-premises VMs covers hypervisor headroom. Cloud IaaS providers (Azure, AWS) guarantee dedicated memory, so no overhead applies.
Recommended Max Services | Cores × 3                                                                          | ~4 ArcSOC instances per core (Esri Architecture Center). At default min=1/max=2 instances, 3 services × 1.33 avg instances ≈ 4 per core.
Real World Max Services  | Cores × 4                                                                          | All services loaded at minimum 1 instance each. Optimistic - assumes no service ever scales to max instances.
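The rules above can be sketched in a few lines of Python. This is an illustration, not the calculator's source: the function names are ours, and rounding memory up to the next 8 GB multiple is an assumption (the calculator rounds to the nearest 8 GB).

```python
import math

def basic_memory_gb(cores: int, on_prem_vm: bool = False) -> int:
    """4 GB/core for physical/cloud, 5 GB/core for on-prem VMs,
    rounded up to the next multiple of 8 GB."""
    gb_per_core = 5 if on_prem_vm else 4
    return math.ceil(cores * gb_per_core / 8) * 8

def basic_max_services(cores: int) -> tuple[int, int]:
    """Returns (recommended max, real-world max) service counts."""
    return cores * 3, cores * 4

print(basic_memory_gb(4))        # 16 GB for a 4-core physical/cloud box
print(basic_memory_gb(4, True))  # 24 GB for a 4-core on-prem VM (20 rounded up)
print(basic_max_services(4))     # (12, 16)
```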

I like to live dangerously

Enabling this mode assumes only a percentage of services are active at any one time - the rest have zero running instances. This allows you to configure significantly more services on the same hardware.

Concurrency                  | Services (4 cores)
Conservative (best practice) | 12
50% concurrent               | 32
25% concurrent               | 64
10% concurrent               | 160
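The concurrent rows follow a simple relation: the real-world maximum (cores × 4) stretched by the inverse of the concurrency fraction. This reconstruction is an assumption inferred from the table, and the function name is ours:

```python
def dangerous_max_services(cores: int, concurrency: float) -> int:
    # Only `concurrency` of services run at once, so the real-world
    # maximum (cores x 4) stretches by 1/concurrency.
    return round(cores * 4 / concurrency)

for pct in (0.5, 0.25, 0.10):
    print(f"{pct:.0%}: {dangerous_max_services(4, pct)} services")
```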

The risk: When a request hits a service with zero instances, ArcGIS Server must cold-start an ArcSOC process. This can take several seconds to over a minute depending on the service's data and complexity. Requests during this startup window will queue or fail. This approach is only appropriate for services with tolerant response time requirements and highly predictable, low-concurrency load.

How the Advanced calculator works

The Advanced calculator models the actual ArcSOC process lifecycle to derive memory and core requirements from your service count and configuration.

Memory model

Memory is calculated from all ArcSOC processes that are loaded in RAM - including idle ones:

Loaded instances = (min_instances × (services + system_services))
                 + (busy% × (services + system_services) × (max − min))   ← burst

Memory = OS overhead + ArcGIS overhead + (loaded instances × ArcSOC memory per instance)

Idle ArcSOC processes (at minimum instances) consume RAM even when not serving requests. This is the key difference from CPU - RAM is always consumed, CPU is only consumed under load.
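The memory model translates directly to code. A minimal sketch, using the defaults discussed elsewhere on this page (6 system services, min=1, max=2, 25% busy, 300 MB per ArcSOC, 3000 MB OS + 1000 MB ArcGIS overhead); the functions are illustrative, not the calculator's actual source:

```python
def loaded_instances(services: int, system_services: int = 6,
                     min_inst: int = 1, max_inst: int = 2,
                     busy_pct: float = 0.25) -> float:
    """Instances resident in RAM: every service holds min_inst,
    and the busy fraction bursts up to max_inst."""
    total = services + system_services
    return min_inst * total + busy_pct * total * (max_inst - min_inst)

def memory_mb(services: int, arcsoc_mb: int = 300,
              os_mb: int = 3000, arcgis_mb: int = 1000, **kwargs) -> float:
    return os_mb + arcgis_mb + loaded_instances(services, **kwargs) * arcsoc_mb

# 30 services with defaults: (1 x 36) + (0.25 x 36 x 1) = 45 loaded instances
print(loaded_instances(30))   # 45.0
print(memory_mb(30))          # 3000 + 1000 + 45 x 300 = 17500.0 MB
```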

Cores model

Cores are calculated only from instances actively consuming CPU at peak concurrency:

CPU-active instances = busy% × (services + system_services) × max_instances

Cores = CPU-active instances ÷ 4 instances/core
        (rounded up to the nearest 4 for Esri licensing)

Idle min-instance ArcSOC processes sit in RAM but consume negligible CPU. Sizing cores to all loaded instances would over-provision significantly.
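The cores model in the same sketch form (again an illustration with assumed defaults, not the calculator's source; both steps round up, and the final 4-core step mirrors Esri's licensing increment):

```python
import math

def required_cores(services: int, system_services: int = 6,
                   max_inst: int = 2, busy_pct: float = 0.25) -> int:
    cpu_active = busy_pct * (services + system_services) * max_inst
    raw_cores = math.ceil(cpu_active / 4)  # ~4 active instances per core
    return math.ceil(raw_cores / 4) * 4    # round up to the 4-core licensing step

print(required_cores(30))  # 0.25 x 36 x 2 = 18 active -> 4.5 -> 5 -> 8 cores
```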

System services

ArcGIS Server starts approximately 6 system ArcSOC processes by default (CachingTools, PublishingTools ×2, SyncTools, ReportingTools, SceneCachingTools). These are included in the calculation and cannot be disabled on a standard installation.

Min Instances = 0

Setting minimum instances to 0 means no ArcSOC processes are pre-loaded. Memory drops significantly - only services actively handling requests consume RAM. The calculator treats the busy percentage as the fraction of services that are hot (loaded) at any given time.

The trade-off is cold-start latency on the first request to any idle service. This is appropriate for infrequently-used services where response time is not critical.
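A worked example of the difference, under the same assumed defaults as above (30 services, 6 system services, max=2, 25% busy, 300 MB per ArcSOC, 4000 MB combined OS + ArcGIS overhead):

```python
services, system_services, max_inst, busy = 30, 6, 2, 0.25
overhead_mb, arcsoc_mb = 3000 + 1000, 300

# min=1: every service keeps one instance resident; busy ones burst to 2
loaded_min1 = 1 * (services + system_services) + busy * (services + system_services) * 1
# min=0: only the hot fraction of services holds RAM at all
loaded_min0 = busy * (services + system_services) * max_inst

print(overhead_mb + loaded_min1 * arcsoc_mb)  # 17500.0 MB
print(overhead_mb + loaded_min0 * arcsoc_mb)  # 9400.0 MB
```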

Key parameters explained

Parameter               | Guidance
ArcSOC Memory           | The single most important parameter. Typical ranges: simple feature/tile services 80–200 MB; standard map services 200–500 MB; complex MSD/GP/image services 500–900 MB+. Use the included Get-ArcSOCMemory.ps1 script to measure your actual processes. Default is 300 MB - a reasonable general average.
Busy ArcSOC %           | The percentage of services that are simultaneously under enough load to have spawned additional instances up to their maximum. 25% is a reasonable default for mixed enterprise deployments. Use 100% for worst-case capacity planning.
Min / Max Instances     | Esri defaults are min=1, max=2. Min=1 keeps one ArcSOC per service always loaded (fast first response). Max=2 allows each service to handle two concurrent requests before queuing. Increasing max improves concurrency at the cost of memory. From ArcGIS Server 10.9+, ArcGIS Pro–published services default to the shared instance pool - min/max settings only apply to dedicated-instance services.
Deployment Type         | Physical and cloud IaaS (Azure, AWS EC2) both use 4 GB/core. Cloud providers guarantee dedicated, non-overcommitted memory - the hypervisor overhead that justified the old +1 GB/core rule is absorbed by the provider. On-premises VMs (VMware, Hyper-V) still apply +1 GB/core because the customer manages the hypervisor host and must leave headroom for it.
ArcGIS Service Overhead | Memory used by ArcGIS Server's own processes (javaw.exe ×1 + ArcGISServer.exe). 1000 MB is a reasonable floor. Portal-integrated hosting servers (with the Hosting Server role) may see javaw grow significantly under load - consider 1500–2000 MB for hosting server deployments.
OS Overhead             | ~3000 MB for Windows Server 2019/2022 at idle. Linux deployments typically use 1000–1500 MB.

Limitations

  1. Both calculators model the dedicated instance pool only. From ArcGIS Server 10.9+, new sites use a shared instance pool by default for ArcGIS Pro–published services. Shared pool planning requires a different approach - pool size is typically set to twice the number of physical cores.
  2. Calculations assume a homogeneous service portfolio. If you have a mix of lightweight feature services and heavy GP services, consider running two scenarios with different ArcSOC memory values.
  3. Throughput is not modelled. These calculators estimate memory and core count, not requests-per-second capacity. Use Esri System Designer or CPT for throughput planning.
  4. No allowance is made for caching, long-running geoprocessing, or raster analysis workloads, which can consume significantly more memory and CPU.
  5. Memory is rounded to the nearest 8 GB to align with common DIMM configurations.
  6. Cores are rounded to the nearest 4 to align with Esri's per-4-core licensing model.

Measuring your ArcSOC memory

The included PowerShell script Get-ArcSOCMemory.ps1 scans all running ArcSOC.exe processes on the current machine and reports working set and peak working set statistics including min, median, mean, and max. Run it on your ArcGIS Server machine during a representative period of normal load:

.\Get-ArcSOCMemory.ps1

The script outputs a suggested value for the ArcSOC Memory field in the Advanced calculator - the peak working set median, rounded up to the nearest 100 MB. Using peak rather than current working set accounts for services that have been under load at some point since the server started.