
02-23-2025 07:08 AM
SaurabhUpadhyaya
Frequent Contributor

Hello Everyone,

I need guidance on ArcGIS Enterprise 11.1 capacity planning with Postgres RDS as the backend.

Scenario:

  • Services Published:

    1. Feature Service (Min Instances: 2, Max Instances: 5)
    2. Map Service
    3. Geoprocessing Service
  • Data:

    • ~2 million point features stored in Postgres RDS.
    • Each team will load ~2000 point features at a time through the ArcGIS JS API application.
  • Application Functionalities:

    • Load filtered point features per team.
    • Search and buffer operations.
    • Export to PDF.

Question:
How many concurrent users can a single ArcGIS Server support under this setup?

Would appreciate any insights on capacity planning and optimization strategies.

Thanks in advance!

1 Reply
ArchitSrivastava
Frequent Contributor

Hello @SaurabhUpadhyaya ,

Your question touches on a broad range of factors.

Capacity planning can get quite detailed, and there are many variables at play. Rather than jumping straight into complex planning, I would suggest starting with a few key questions, gathering insights during testing, and scaling accordingly.

If it were up to me, I would start with the following key questions:

  • How much computing power (CPU, RAM) do we need on AWS for ArcGIS Server?
  • How many users can work simultaneously without performance issues?
  • How much load can PostgreSQL RDS handle efficiently?

We can use the following tools to estimate answers to these questions:

  • Esri's Capacity Planning Tool (CPT) for predicting system requirements.
  • Amazon CloudWatch (since you mentioned AWS) to monitor ArcGIS Server and database performance.
  • JMeter for simulating real-world load and identifying bottlenecks (a quick stand-in sketch follows this list).
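
Before building a full JMeter test plan, you can get a quick feel for response times with a small script that fires concurrent queries at the feature service's REST query endpoint. This is a minimal Python sketch, not a JMeter replacement; the service URL and WHERE clause are placeholders to swap for your own.

```python
# Minimal concurrent-load sketch against a feature service query endpoint.
# The URL and query parameters are placeholders; replace them with your own
# service and a realistic per-team filter.
import concurrent.futures
import statistics
import time

import requests

SERVICE_URL = "https://your-server/arcgis/rest/services/Points/FeatureServer/0/query"
PARAMS = {"where": "TEAM_ID = 42", "outFields": "*", "returnGeometry": "true", "f": "json"}

def one_request(_):
    start = time.perf_counter()
    resp = requests.get(SERVICE_URL, params=PARAMS, timeout=60)
    resp.raise_for_status()
    return time.perf_counter() - start

def run(users=50, requests_per_user=20):
    # Each worker thread plays one "user"; total requests = users x requests_per_user.
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        times = sorted(pool.map(one_request, range(users * requests_per_user)))
    print(f"requests: {len(times)}")
    print(f"median:   {statistics.median(times):.2f}s")
    print(f"p95:      {times[int(len(times) * 0.95)]:.2f}s")

if __name__ == "__main__":
    run()
```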

To size the system properly and choose an AWS instance size, we would also need to figure out the following.

How many requests does the system handle, i.e. the "Request Volume"?

  • If 50 users interact with the system at once,
  • And each user makes 20 requests over a one-hour session,
  • That's 50 × 20 = 1,000 requests per hour (Transactions Per Hour, or TPH). The sketch below turns this into an instance estimate.
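
To turn those numbers into a rough instance count, a Little's Law style calculation (busy instances ≈ request rate × average service time) works as a first pass. The service time and peak factor below are illustrative assumptions, not measurements; substitute figures from your own logs.

```python
# Back-of-envelope capacity math (Little's Law: busy instances = arrival rate x service time).
# The service time and peak factor are illustrative assumptions, not measurements.
users = 50
requests_per_user_per_hour = 20

tph = users * requests_per_user_per_hour          # 1,000 transactions per hour
tps = tph / 3600.0                                # ~0.28 requests per second on average

avg_service_time_s = 1.5   # assumed average time ArcGIS Server spends per request
peak_factor = 4            # assume peak traffic is 4x the hourly average

peak_tps = tps * peak_factor
busy_instances = peak_tps * avg_service_time_s    # service instances busy at any moment

print(f"TPH: {tph}, avg TPS: {tps:.2f}, peak TPS: {peak_tps:.2f}")
print(f"Service instances busy at peak: {busy_instances:.1f}")
# ~1.7 busy instances here, so Min 2 / Max 5 on the feature service is a sane starting range.
```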

What are the most resource-heavy operations (these map to your application's functionalities)?

  • Feature Service: serves ~2 million points, though each team loads only ~2,000 at a time.
  • Search and Buffer: these queries hit the database the hardest.
  • Export to PDF: consumes a lot of CPU and memory.

What can help in understanding the load:

  • ArcGIS Server logs → identify which requests take the longest
  • PostgreSQL slow query logs → pinpoint heavy database operations and bottlenecks
  • JMeter load tests → simulate concurrent users and measure response times
  • AWS CloudWatch → track CPU, memory, and database performance (sketch below)
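
For the CloudWatch piece, a small boto3 sketch like this can pull daily CPU for the ArcGIS Server EC2 instance and connection counts for the RDS instance. Both identifiers are placeholders to substitute with your own.

```python
# Pull key CloudWatch metrics for the ArcGIS Server EC2 box and the RDS instance.
# The instance identifiers are placeholders; substitute your own.
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

def hourly_averages(namespace, metric, dim_name, dim_value):
    resp = cw.get_metric_statistics(
        Namespace=namespace,
        MetricName=metric,
        Dimensions=[{"Name": dim_name, "Value": dim_value}],
        StartTime=start,
        EndTime=end,
        Period=3600,            # hourly datapoints over the last 24 hours
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    return [round(p["Average"], 1) for p in points]

print("EC2 CPU %:", hourly_averages("AWS/EC2", "CPUUtilization", "InstanceId", "i-0123456789abcdef0"))
print("RDS connections:", hourly_averages("AWS/RDS", "DatabaseConnections", "DBInstanceIdentifier", "my-postgres-rds"))
```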

You can also set up some "Optimization Strategies" beforehand.

ArcGIS Server Optimization

  • Feature Service Performance
    • Keep maxRecordCount modest and filter server-side (definition queries) so clients never pull all ~2 million points.
    • Use scale-dependent rendering so points only draw at appropriate zoom levels.
  • Search & Buffer Optimization
    • Index spatial columns in PostgreSQL using GIST indexes for faster queries.
    • Optimize queries: use SQL EXPLAIN ANALYZE to check query efficiency (see the sketch after this list).
  • Export to PDF
    • Run exports in a separate ArcGIS instance to avoid slowing down others.
    • Use asynchronous processing to handle large PDF exports.
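
On the GIST index and EXPLAIN ANALYZE points above, here is a short psycopg2 sketch that creates the index and inspects a buffer-style query plan. It assumes the geometry is stored as PostGIS in a projected SRID with meter units; the table and column names (points, shape, team_id), SRID, and connection string are all placeholders to adapt to your schema.

```python
# Create a GIST spatial index and inspect a buffer-style query plan.
# Assumes a PostGIS geometry column in a projected SRID (meters); the table,
# columns, and connection string are placeholders to adapt to your schema.
import psycopg2

conn = psycopg2.connect("host=my-rds-endpoint dbname=gis user=gis_user password=...")
conn.autocommit = True

with conn.cursor() as cur:
    # GIST index so spatial filters stop scanning all ~2M rows.
    cur.execute("CREATE INDEX IF NOT EXISTS points_shape_gist ON points USING GIST (shape);")

    # Check that a typical search+buffer query actually uses the index.
    cur.execute("""
        EXPLAIN ANALYZE
        SELECT objectid
        FROM points
        WHERE team_id = %s
          AND ST_DWithin(shape, ST_SetSRID(ST_MakePoint(%s, %s), 3857), %s);
    """, (42, 8639500.0, 1459700.0, 500))  # 500 = buffer distance in layer units (meters)

    for (line,) in cur.fetchall():
        print(line)  # look for "Index Scan using points_shape_gist" rather than "Seq Scan"

conn.close()
```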

AWS Infrastructure Optimization

| Concurrent Users | Suggested EC2 Instance |
| --- | --- |
| 50 users | `m6i.xlarge` (4 vCPU, 16 GB RAM) |
| 100 users | `m6i.2xlarge` (8 vCPU, 32 GB RAM) |
| 200+ users | `m6i.4xlarge` (16 vCPU, 64 GB RAM) |

Closely monitor all traffic using AWS CloudWatch to track CPU and memory usage.
If CPU is consistently over 80%, increase the EC2 instance size or add more servers. (Remember: always start with a small instance type and move up.)

PostgreSQL RDS Performance

  • Enable Performance Insights on the RDS instance to see top queries and wait events.
  • Watch connection counts: each running ArcGIS Server service instance holds database connections, so higher max-instance settings translate into more connection load.
  • Tune memory parameters (e.g., work_mem, shared_buffers) via the RDS parameter group rather than leaving defaults for a ~2M-feature workload. A quick pg_stat_statements check follows.
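
If pg_stat_statements is enabled on the RDS instance (it must be loaded via the parameter group), a quick check like this surfaces the heaviest queries. The column names assume PostgreSQL 13+; the connection string is a placeholder.

```python
# List the most expensive queries via pg_stat_statements.
# Assumes the extension is enabled in the RDS parameter group;
# column names are PostgreSQL 13+. Connection string is a placeholder.
import psycopg2

conn = psycopg2.connect("host=my-rds-endpoint dbname=gis user=gis_user password=...")
with conn.cursor() as cur:
    cur.execute("""
        SELECT calls,
               round(mean_exec_time::numeric, 1)  AS avg_ms,
               round(total_exec_time::numeric, 0) AS total_ms,
               left(query, 80)                    AS query_start
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10;
    """)
    for calls, avg_ms, total_ms, query_start in cur.fetchall():
        print(f"{calls:>8} calls  {avg_ms:>8} ms avg  {total_ms:>10} ms total  {query_start}")
conn.close()
```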

Testing and Scaling (use these as suggestions):

  • Baseline Testing
    • Deploy a basic setup.
    • Use ArcGIS Server logs and CloudWatch to measure usage.
  • Load Testing
    • Ramp up simulated users with JMeter (or the script above) until response times degrade; that user count is your current capacity ceiling.
  • Scaling Plan
    • If CPU is above 80%, increase EC2 size (the alarm sketch below codifies this rule).
    • If the database is slow, upgrade RDS or add read replicas.
    • If requests queue up, increase ArcGIS Server service instances.
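
To make the 80% CPU rule actionable rather than something you watch by hand, you can codify it as a CloudWatch alarm. A boto3 sketch, with the instance ID and SNS topic ARN as placeholders:

```python
# Codify the "CPU consistently over 80%" scaling trigger as a CloudWatch alarm.
# The instance ID and SNS topic ARN are placeholders.
import boto3

cw = boto3.client("cloudwatch")
cw.put_metric_alarm(
    AlarmName="arcgis-server-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,              # 5-minute datapoints
    EvaluationPeriods=6,     # sustained for 30 minutes, not a brief spike
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:capacity-alerts"],
    AlarmDescription="Sustained CPU > 80% on ArcGIS Server; consider a larger instance.",
)
```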

Furthermore, these are just baseline recommendations based on standard performance expectations. The actual system behavior depends on multiple factors like data complexity, network latency, and real-world user interaction. I would suggest starting with these guidelines, monitoring system performance, and then adjusting based on real findings.

Additionally, let me know your thoughts on this, or if you want to dive into any specific concerns.

Hope it helps!