
GIS ROI StoryMap

Posted by acarnow-esristaff Employee Jul 18, 2019

Return on investment (ROI) is an important goal for any GIS project and should be calculated, documented, and shared. I put this StoryMap together to share information and examples of GIS ROI. It includes many real-world examples, articles, videos, and additional resources to help you find the ROI in your GIS work.

Numerous technical sessions, spotlight talks, and conversations referenced the "Architecting the ArcGIS Platform: Best Practices" whitepaper. This document presents implementation guidelines in the form of a conceptual reference architecture diagram and associated best practice briefs. You can use these guidelines to maximize the value of your ArcGIS implementation and meet your organizational goals.

If you're headed to the 2019 Esri International User Conference and would like to connect: on Tuesday I will be at the GIS Managers' Open Summit (GISMOS), and for most of Wednesday and Thursday you can find me in the “Guiding your Geospatial Journey” area in the Expo.

GIS has evolved at a rapid pace ever since computers took up the challenge of providing spatial capabilities. The evolution of GIS from a map-creation platform to a location-intelligence platform requires systems that are robust, reliable, and elastic, because they host solutions your business relies on to be successful. GIS runs everywhere: on servers in traditional data centers, in the cloud, and on mobile and IoT devices. Supporting such a diverse landscape creates unique architectural challenges that require a systematic approach to designing your GIS.


Esri system architecture design is based upon traditional architecture patterns centered on multiple tiers, namely a Client, Presentation, Services, and Data tier. Each tier aligns with Esri products and solution components, as depicted by the example logical architecture below. Following a systematic approach, this blog series will explore the various architectural tiers and their related solution components in support of building modern GIS platforms that meet today's business and location intelligence needs.


I work with a lot of Esri users to help them successfully implement ArcGIS within their organizations. Customers often reach out to us because they are challenged in a few key areas:

  • Seeking ways to reduce the time it takes to install and configure our products
  • Getting started and building proficiency
  • Maximizing effectiveness and productivity of the implementation (which could include data, maps, apps, etc.)

These are common challenges, as our users often support many roles within their organization (analysts, managers, system administrators, etc.) and have limited bandwidth to roll out the latest version of our products.

I find our users can experience a much quicker time to value and improved productivity through a services package. In these services packages, we often focus on a few best practices:

  • Considering the needs of all stakeholders
  • Supporting quick wins through understanding customer needs and configuring maps and apps to meet those needs
  • Building a better user experience

Please find attached a flyer that can be used as a resource to learn more about some of our product- and subject-matter-expert-specific services packages (ArcGIS GeoEvent Server, ArcGIS Monitor, Insights for ArcGIS, etc.).


What is Infrastructure as Code?

Taken directly from Microsoft, 

"Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model".

In the simplest terms, Infrastructure as Code (IaC) is a methodology for treating cloud infrastructure the same as application source code. Changes are no longer made directly to the infrastructure, but to the source code for a given environment or deployment. This change in behavior leads to environments that are easily reproducible, quickly deployable, and accurate.



Note:   This post is one in a series on engineering with ArcGIS.


What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage components such as virtual machines, networking, storage, firewall rules, and many others. To define what needs to be deployed or changed, Terraform uses what is called a Terraform configuration, which can be made up of one or more individual files within the same folder. Terraform files use the .tf file extension and HCL (HashiCorp Configuration Language) as the configuration language. HCL is a structured configuration language that is both human- and machine-friendly for use with command-line tools, but specifically targeted toward DevOps tools, servers, etc.
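As a minimal illustration of HCL's block-and-argument syntax (the block name and values here are arbitrary, not from the template discussed below):

```hcl
# A block type ("variable"), a label, and key = value arguments inside braces
variable "location" {
  description = "Azure region to deploy into"
  default     = "westus2"
}
```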


In complex deployments, administrators may find it useful to split their environment across a separate configuration file for each aspect, whereas for simple deployments a single file may be sufficient.
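For example, a split layout might look like the following (these file names are illustrative, not from the original template):

```
deployment/
├── main.tf        # provider and shared settings
├── variables.tf   # input variables such as deployInfo
├── network.tf     # virtual network, subnets, security groups
└── compute.tf     # virtual machines and extensions
```

For a simple deployment, all of this could live in a single main.tf.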


To actually create and manage infrastructure, Terraform has a number of constructs that allow users to define Infrastructure as Code; the two most important are Providers and Resources.



Resources are the mechanism that tells Terraform how the infrastructure should be deployed and configured. Each cloud provider has its own list of resources available to users. The following example creates an Azure resource group and a virtual network within it.


resource "azurerm_resource_group" "rg" {
   name     = "prd"
   location = "westus2"
}

resource "azurerm_virtual_network" "vnet" {
   name                = "prd-vnet"
   location            = azurerm_resource_group.rg.location
   address_space       = ["10.0.0.0/16"]  # example address space
   resource_group_name = azurerm_resource_group.rg.name
}



Providers are the mechanism for defining which cloud or on-premises resources are available for use. Each provider offers a set of resources and defines, for each resource, which arguments it accepts, which attributes it exports, and how changes to resources of that type are applied to remote APIs. Most available providers correspond to one cloud or on-premises infrastructure platform and offer resource types corresponding to that platform's features. In simpler terms, if the goal is to define infrastructure on Azure, an Azure-based provider must be configured before resources can be defined.


A provider example for Azure would look something like this,

provider "azurerm" {
   subscription_id = "this-is-not-a-real-subscription-id"
   client_id       = "this-is-not-a-real-client-id"
   client_secret   = "this-is-not-a-real-client-secret"
   tenant_id       = "this-is-not-a-real-tenant-id"
}

Terraform has multiple methods for authenticating to a given cloud provider; in this example, a Service Principal is used.
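As an aside, the azurerm provider can also read the Service Principal credentials from environment variables, which keeps secrets out of the configuration files. The values below are placeholders:

```shell
# Placeholders only - substitute your own Service Principal values
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="not-a-real-secret"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
```

With these set, the provider block can be left empty of credentials entirely.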



Lastly, Terraform makes use of a state file that keeps track of the infrastructure that has been deployed and configured. The state file is what allows Terraform to compare the last recorded state of an environment against the current run and show users the delta, so changes can be validated before they are made. This is very important: it is what makes Terraform idempotent, which is a key aspect of IaC.


Infrastructure Life-cycle

For the purposes of this introduction, we will use the attached Terraform template as the basis for the following examples. The template will be posted to GitHub in the near future for future articles to build on. It is designed to deploy the following within an Azure subscription:


Before the template can be deployed, the deployInfo variable map will need to be populated. When creating the Service Principal, the following documentation (Service Principal creation) can be referenced for assistance. Terraform will also need to be available locally; the steps to ensure it is can be found here.


  • Resource Group
  • Virtual Network
  • Subnet
  • Network Security Group
    • Rule to allow 80 and 443 traffic from the internet into the virtual network
    • Rule to allow RDP access (3389) into the virtual network from the public IP of the person deploying the template. This is accomplished by querying the web during deployment to retrieve your public IP. This workflow is not recommended for production environments and is used here for example purposes only.
    • Rule to block all other internet traffic into the virtual network.
  • Storage account
  • Availability Set
  • Public IP
  • Network Interface
  • Virtual Machine
    • Windows Defender Extension
    • Recovery Services Vault Extension
  • Key Vault
  • Recovery Services Vault
    • Backup Policy
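The public-IP lookup mentioned in the RDP rule above could be sketched with Terraform's http data source. This is a hypothetical sketch; the lookup URL and resource names are assumptions, not taken from the actual template:

```hcl
# Hypothetical sketch of the RDP rule's public-IP lookup (not the actual template)
data "http" "deployer_ip" {
  url = "https://ifconfig.co/ip"   # example IP-echo service
}

resource "azurerm_network_security_rule" "allow_rdp" {
  name                        = "allow-rdp-from-deployer"
  priority                    = 300
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "3389"
  source_address_prefix       = chomp(data.http.deployer_ip.body)
  destination_address_prefix  = "*"
  resource_group_name         = "prd"
  network_security_group_name = "prd-nsg"
}
```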



Once Terraform is available locally and the deployInfo variable map has been completed, the first step in deploying infrastructure is to initialize Terraform. Navigate to the directory in which you saved the above template (with the .tf file extension) and run the following command, which prepares various local settings and data used by subsequent commands.


terraform init

The output from initializing terraform will resemble the following.


With Terraform successfully initialized, the next step is to have Terraform review the template and determine what changes need to take place. Terraform compares the template with the current state file and produces an output showing the deltas. To do so, use the following command.

terraform plan



As this is the first run, Terraform only sees resources it needs to add (create). Later in this post, we will walk through updating existing resources.


With Terraform successfully prepared to deploy our infrastructure, we can begin the deployment with the following command, which starts the process of creating the resources within Azure and provides output when complete.


terraform apply




As the deployment is used, it may be determined that the current infrastructure is not adequately sized and resources need to be increased. Because our resources are defined as code, so is the virtual machine sizing. Within the "arcgisEnterpriseSpecs" variable map, a variable named "size" holds the Azure machine size that Terraform uses when deploying resources.


If this variable is changed to "Standard_D8s_v3" and terraform plan is run again, Terraform will detect a delta between what is defined in code and what is actually deployed within Azure, and present it in the output.
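The change might look like the following sketch. The variable map's other entries are omitted and its exact shape is an assumption, as the full template is not reproduced in this post:

```hcl
variable "arcgisEnterpriseSpecs" {
  default = {
    # Resize the VMs by editing this value, then run terraform plan/apply
    size = "Standard_D8s_v3"
  }
}
```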


Once this delta is planned and the changes are ready to be pushed to the actual infrastructure within Azure, simply run terraform apply again to begin the process.



The last phase of a deployment's life-cycle is decommissioning. Once infrastructure has reached the end of its life, it must be terminated and removed; thankfully, Terraform provides an easy way to do so with the following command.

terraform destroy


The output of destroying the infrastructure will resemble the following.


In conclusion, as teams aim to become more agile and move at a faster pace, adopting modern methodologies such as Infrastructure as Code (IaC) is a great first step; it should not be viewed as unnecessary, but as a crucial step in the right direction.


I hope you find this helpful. Do not hesitate to post your questions here: Engineering ArcGIS Series: Tools of an Engineer


Note: The contents presented above are examples and should be reviewed and modified as needed for each specific environment.

The Managed Cloud Services team in Professional Services is pleased to announce a new series that will be highlighting various tools and best practices for implementing ArcGIS Enterprise using modern methodologies.


System implementation, configuration, and management of a deployment is a fun challenge, similar to Tetris. As an ArcGIS Enterprise or ArcGIS Server administrator, you are likely tasked with standing up new systems, ensuring they are configured correctly, and maintaining them over time. Traditionally, these tasks were done manually: an administrator would procure a new virtual machine, install the required software, work through the configuration steps, and ensure users were able to access it. Over time, the needs of the users may change, and the administrator would need to further modify the system and its software to meet those needs.


Here at Managed Services, we are faced with managing hundreds of customer implementations. This series will cover best practices we have developed over time. Each entry will cover a specific aspect of automating the deployment, configuration, and life-cycle management (updates, monitoring, scaling, etc.) of both infrastructure and the ArcGIS suite.


Tags: automation, devops, implementation, engineering, cloud

As cloud adoption evolves from Web GIS to full GIS deployments, questions continue to be raised such as, “What about the desktops?” That is, when moving desktops to the cloud, what technologies should be used to support Esri desktop GIS? The cloud offers multiple desktop options, and the following provides some high-level guidance on how and when these technologies should be used. It is important to realize that each deployment is unique, and deciding which of these technologies to deploy involves multiple factors. The purpose of this information is primarily to share all of the potential solutions and highlight some of their key characteristics. They are not ranked in any way, as deciding on one approach over another requires more detailed analysis and discussion based on specific requirements, costs, and constraints. Further, this list could likely be expanded, but the solutions below represent the most common options that Esri has encountered.


Note: ArcGIS Pro requires a GPU-enabled machine type for the underlying host VM. Examples include the NV6 for Azure and a Graphics Design Instance for AWS AppStream.


Virtual Machines - Azure and AWS

  • Use Case: Typically used to support administrative functions or small number of desktop users
  • Client Connectivity: Utilizes the Remote Desktop Connection client and the RDP protocol
  • User Experience: Published desktop with growing visual latency as geographic distance increases
  • Scalability: Limited due to no more than two concurrent users per VM
  • Management: Typically deployed without a base image
  • User Profiles: Locally stored per VM


Remote Desktop Services - Azure and AWS

  • Use Case: Supporting users at scale where the users are not globally distributed
  • Connectivity: Utilizes the Remote Desktop Connection client and the RDP protocol
  • User Experience: Published desktop or apps with growing visual latency as geographic distance increases
  • Scalability: Limited for ArcGIS Pro based on the number of concurrent sessions that can share a GPU
  • Management: Can be used with snapshot technology to create a base image
  • User Profiles: Roaming profile or equivalent, assuming at least two servers deployed


Citrix Virtual Apps and Desktops (XenApp) - Azure and AWS

  • Use Case: Supporting users at scale where the users could be globally distributed
  • Connectivity: Utilizes the Citrix Workspace app and the HDX protocol
  • User Experience: Supports both published desktops and apps and performs well with high-latency
  • Scalability: Limited for ArcGIS Pro based on the number of concurrent sessions that can share a GPU
  • Management: Can be used with snapshot technology to create a base image
  • User Profiles: Roaming profile or equivalent, assuming at least two servers deployed
  • Other: Can utilize Citrix Cloud to manage the "back-end" (e.g., Controllers/Licensing)


Amazon WorkSpaces - AWS

  • Use Case: Supporting users at scale where client bandwidth is not a limiting factor
  • Connectivity: Utilizes either a desktop or web client with the PCoIP protocol
  • User Experience: Supports a published desktop to an assigned WorkSpace instance
  • Scalability: Can scale as needed as users increase, but uses a 1:1 user-to-VM assignment
  • Management: Cannot be used with snapshot technology so each WorkSpace is an independent deployment
  • User Profiles: Locally stored on each WorkSpace


Amazon AppStream 2.0 - AWS

  • Use Case: Supporting users at scale, but not all use cases are known since this is a newer offering
  • Connectivity: Utilizes either a desktop or web client with the NICE DCV protocol
  • User Experience: Supports published desktop applications
  • Scalability: Can scale as needed as back-end infrastructure capacity is managed by AWS
  • Management: Based on creating base images for different application configurations as needed
  • User Profiles: Saved to a Virtual Hard Disk (VHD) and synchronized to Amazon S3
  • Other: Esri / AWS AppStream 2.0 Deployment Guide


Windows Virtual Desktop - Azure*

  • Use Case: Supporting users at scale, but not all use cases are known since this is a newer offering
  • Connectivity: Utilizes the Remote Desktop Connection client and the RDP protocol
  • User Experience: Supports both published desktops and apps
  • Scalability: Can scale as needed and either be deployed as one user per VM or concurrent sessions
  • Management: Can be used with snapshot technology to create a base image
  • User Profiles: Roaming profile or equivalent, assuming at least two servers deployed
  • Other: The only solution supporting multi-session Windows 10


* Third-party vendors are working to extend the core Windows Virtual Desktop capabilities and providing additional client options and management features. Current examples include CloudJumper Cloud Workspace and Citrix Managed Desktops.
