
Implementing ArcGIS


A common request I get from people in the GIS industry who are current or future managers/leaders is: where can I go to find resources to help me with the business/culture/people/management side of GIS? I have found no single place that contains the most useful information I know of, so I thought I would share a list of resources I have collected over the years, in no particular order. Please feel free to join the discussion and add your own resources to share.

 

  • Follow, and participate in, the Implementing ArcGIS community on GeoNet. Implementing ArcGIS is a public community group where you can get tips and advice on implementation best practices, discuss challenging topics, learn from the innovative minds of GIS professionals, share your experiences, and stay connected. It covers six categories:
    • Strategy & Planning
    • Architecture & Security
    • Geodata Engineering
    • Configuration & Integration
    • Workforce Development
    • Operational Support
  • One of the most important documents Esri has produced is Architecting the ArcGIS Platform: Best Practices - this document is updated at least once a year, so be sure to keep up with the latest version.  Reach out to your Esri Account Team for additional guidance on how to implement these.
  • Every organization that uses GIS needs a Geospatial Strategy in order to maximize the impact the technology can have on the organization. Check out this Introduction to Geospatial Strategy presentation from the 2019 Esri International User Conference to learn about the best practices Esri has gathered from our work with customers across the globe over the years. Reach out to your Esri Account Team if you'd like to learn more about this.
  • Consider some assistance with Change Management; it is "people-focused planning that drives technology adoption."
  • Join, and participate in, the Managers in GIS group on LinkedIn.
  • The GIS Success program provides a lot of great content. It is "a blog where together, we’ll leverage Geographical Information Systems (GIS) Strategies, Technologies, and Techniques to stimulate opportunities to advance GIS in the local government." It includes webcasts, training and free guides.
  • URISA runs a GIS Leadership Academy (GLA) that provides "five days of targeted GIS leadership training taught by GIS leaders."
  • Here are eight videos from the GIS Manager Track from the 2018 Esri International User Conference covering:
    • Enterprise GIS: Strategic Planning for Success
    • Communicating the Value of GIS
    • Architecting the ArcGIS Platform: Best Practices
    • Increase Adoption by Integrating Change Management
    • Governance for GIS
    • Moving Beyond Anecdotal GIS Success: An ROI Conversation
    • Workforce Development Planning: A People Strategy for Organizations
    • Supporting Government Transformation and Innovation
  • This is a great article: GIS should be about digital transformation
  • This is an excellent report from Esri Canada, Winning with Location Intelligence: The Essential Practices.
  • Read the articles from these people, and follow them on social media:
  • GIS maturity models can be helpful with identifying ways to improve. Here are three to investigate:
  • Keep your eyes open for:
    • A GIS Manager/Leadership Workshop near you,
    • GIS Manager/Leadership sessions at GIS events,
    • And if you're headed to the Esri International User Conference, consider attending the GIS Managers' Open Summit (GISMOS).
  • My own resources include:

Interesting discussion in this recent report from the US DOT regarding Commercial-Off-The-Shelf (COTS) vs. Custom mobile app development in section 3.3.2, page 6:

 

"Those interviewees from agencies that have put forth the effort to build custom mobile applications all concluded that, had they known then what they know now, they would have foregone application development altogether, and used a COTS product from the beginning."

Case Studies | GIS in Transportation | Planning, Environment & Realty | FHWA 

Really interesting report from Esri Canada on Winning with Location Intelligence - it identifies the commonalities & best practices from organizations that are successful with the tech.  Definitely worth a read.

 

Winning with Location Intelligence | Esri Canada 

 

In this entry, we will be looking at what a deployment looks like from the infrastructure as code (IaC) perspective with Terraform as well as the configuration management side with PowerShell DSC (Desired State Configuration). Both play important roles in automating ArcGIS Enterprise deployments, so let's jump in.

 

This deployment will follow a single machine model as described in the ArcGIS Enterprise documentation. It will consist of the following.

 

 

  • Portal for ArcGIS 10.7.1

  • ArcGIS Server 10.7.1 (Set as Hosting Server)

    • Services Directory is disabled

  • ArcGIS Data Store 10.7.1 (Relational Storage)

  • Two (2) Web Adaptors within IIS

    • Portal context: portal

    • Server context: hosted

    • Self-Signed certificate matching the public DNS

 

Additional Configurations

  • WebGISDR is configured, via Task Scheduler, to perform weekly full backups that are stored within Azure Blob Storage

  • Virtual machine is configured for nightly backups to an Azure Recovery Services Vault

  • RDP (3389) access is restricted via Network Security Group to the public IP of the machine from which Terraform is run (a sketch of such a rule follows this list).

  • Internet access (80, 443) is configured for ArcGIS Enterprise via Network Security Group

  • Azure Anti-malware is configured for the virtual machine
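To illustrate how that RDP restriction can be expressed in Terraform, here is a minimal sketch that queries a public "what's my IP" service at deploy time and feeds the result into a network security rule. The lookup URL, the rule priority, and the resource names (deployerIp, azurerm_network_security_group.nsg) are assumptions for illustration only; the attached template may implement this differently.

 

data "http" "deployerIp" {
  # The response body is the bare public IP of the machine running Terraform
  url = "http://ipv4.icanhazip.com"
}

resource "azurerm_network_security_rule" "rdp" {
  name                        = "allow-rdp-from-deployer"
  priority                    = 200
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "3389"
  # chomp() strips the trailing newline from the HTTP response
  source_address_prefix       = "${chomp(data.http.deployerIp.body)}/32"
  destination_address_prefix  = "*"
  resource_group_name         = "${azurerm_resource_group.rg.name}"
  network_security_group_name = "${azurerm_network_security_group.nsg.name}"
}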

 

The complete code and configurations can be found attached below. You will, however, need to provide your own ArcGIS Enterprise licenses.

 

Note:   This is post two (2) in a series on engineering with ArcGIS.

 

Infrastructure Deployment

If you are not already familiar with Terraform and how it can be used to efficiently handle the lifecycle of your infrastructure, I would recommend taking the time to read through the first entry in this series which can be found here. The Terraform code in that first entry will be used as the basis for the work that will be done in this posting.

As discussed in the first entry, Terraform is a tool designed to help manage the lifecycle of your infrastructure. Instead of rehashing the benefits of Terraform, however, we will jump straight into the code and review what is being done. As mentioned above, the template from the first entry in this series is used again here, with additional code added to perform the specific actions needed for configuring ArcGIS Enterprise. Let's take a look at those additions.

 

These additions create two blob containers: one ("artifacts") used for uploading deployment resources, and an empty one ("webgisdr") used when configuring WebGISDR backups. They also handle uploading the license files, the PowerShell DSC archive and, lastly, the web adaptor installer.

 

resource "azurerm_storage_container" "artifacts" {
  name                  = "${var.deployInfo["projectName"]}${var.deployInfo["environment"]}-deployment"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  storage_account_name  = "${azurerm_storage_account.storage.name}"
  container_access_type = "private"
}

resource "azurerm_storage_container" "webgisdr" {
  name                  = "webgisdr"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  storage_account_name  = "${azurerm_storage_account.storage.name}"
  container_access_type = "private"
}

resource "azurerm_storage_blob" "serverLicense" {
  name                   = "${var.deployInfo["serverLicenseFileName"]}"
  resource_group_name    = "${azurerm_resource_group.rg.name}"
  storage_account_name   = "${azurerm_storage_account.storage.name}"
  storage_container_name = "${azurerm_storage_container.artifacts.name}"
  type                   = "block"
  source                 = "./${var.deployInfo["serverLicenseFileName"]}"
}

resource "azurerm_storage_blob" "portalLicense" {
  name                   = "${var.deployInfo["portalLicenseFileName"]}"
  resource_group_name    = "${azurerm_resource_group.rg.name}"
  storage_account_name   = "${azurerm_storage_account.storage.name}"
  storage_container_name = "${azurerm_storage_container.artifacts.name}"
  type                   = "block"
  source                 = "./${var.deployInfo["portalLicenseFileName"]}"
}

resource "azurerm_storage_blob" "dscResources" {
  name                   = "dsc.zip"
  resource_group_name    = "${azurerm_resource_group.rg.name}"
  storage_account_name   = "${azurerm_storage_account.storage.name}"
  storage_container_name = "${azurerm_storage_container.artifacts.name}"
  type                   = "block"
  source                 = "./dsc.zip"
}

resource "azurerm_storage_blob" "webAdaptorInstaller" {
  name                   = "${var.deployInfo["marketplaceImageVersion"]}-iiswebadaptor.exe"
  resource_group_name    = "${azurerm_resource_group.rg.name}"
  storage_account_name   = "${azurerm_storage_account.storage.name}"
  storage_container_name = "${azurerm_storage_container.artifacts.name}"
  type                   = "block"
  source                 = "./${var.deployInfo["marketplaceImageVersion"]}-iiswebadaptor.exe"
}

 

This addition handles the generation of a short-lived SAS token from the storage account, which is then used during the configuration management portion to securely retrieve the needed files from storage. In this situation, we could simplify the deployment by marking our containers as public and not requiring a token, but that is not recommended.

 

data "azurerm_storage_account_sas" "token" {
  connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
  https_only        = true
  start             = "${timestamp()}"
  expiry            = "${timeadd(timestamp(), "5h")}"

  resource_types {
    service   = false
    container = false
    object    = true
  }

  services {
    blob  = true
    queue = false
    table = false
    file  = false
  }

  permissions {
    read    = true
    write   = true
    delete  = true
    list    = true
    add     = true
    create  = true
    update  = true
    process = true
  }
}

 

The final change is the addition of an extension to the virtual machine that will handle the configuration management task using PowerShell DSC. Instead of reviewing this in depth here, just know that the data under the settings and protected_settings JSON blocks is passed to PowerShell DSC as parameters, for the configuration file to use as needed.

 

resource "azurerm_virtual_machine_extension" "arcgisEnterprise-dsc" {
  name                       = "dsc"
  location                   = "${azurerm_resource_group.rg.location}"
  resource_group_name        = "${azurerm_resource_group.rg.name}"
  virtual_machine_name       = "${element(azurerm_virtual_machine.arcgisEnterprise.*.name, count.index)}"
  publisher                  = "Microsoft.Powershell"
  type                       = "DSC"
  type_handler_version       = "2.9"
  auto_upgrade_minor_version = true
  count                      = "${var.arcgisEnterpriseSpecs["count"]}"

  settings = <<SETTINGS
     {
          "configuration": {
          "url": "${azurerm_storage_blob.dscResources.url}${data.azurerm_storage_account_sas.token.sas}",
          "function": "enterprise",
            "script": "enterprise.ps1"
          },
          "configurationArguments": {
          "webAdaptorUrl": "${azurerm_storage_blob.webAdaptorInstaller.url}${data.azurerm_storage_account_sas.token.sas}",
          "serverLicenseUrl": "${azurerm_storage_blob.serverLicense.url}${data.azurerm_storage_account_sas.token.sas}",
          "portalLicenseUrl": "${azurerm_storage_blob.portalLicense.url}${data.azurerm_storage_account_sas.token.sas}",
          "externalDNS": "${azurerm_public_ip.arcgisEnterprise.fqdn}",
            "arcgisVersion" : "${var.deployInfo["marketplaceImageVersion"]}",
          "BlobStorageAccountName": "${azurerm_storage_account.storage.name}",
          "BlobContainerName": "${azurerm_storage_container.webgisdr.name}",
          "BlobStorageKey": "${azurerm_storage_account.storage.primary_access_key}"
      }
     }
     SETTINGS
  protected_settings = <<PROTECTED_SETTINGS
     {
          "configurationArguments": {
          "serviceAccountCredential": {
            "username": "${var.deployInfo["serviceAccountUsername"]}",
            "password": "${var.deployInfo["serviceAccountPassword"]}"
      },
          "arcgisAdminCredential": {
               "username": "${var.deployInfo["arcgisAdminUsername"]}",
               "password": "${var.deployInfo["arcgisAdminPassword"]}"
               }
          }
     }
     PROTECTED_SETTINGS
}

Configuration Management

 

As touched on above, we are utilizing PowerShell DSC (Desired State Configuration) to handle the configuration of ArcGIS Enterprise as well as a few other tasks on the instance. To simplify things, I have included v2.1 of the ArcGIS module within the archive, but the public repo can be found here. The ArcGIS module provides a means of interacting with ArcGIS Enterprise in a controlled manner by providing various "resources" that perform specific tasks. One of the major benefits of PowerShell DSC is that it is idempotent. This means that we can continually run our configuration and nothing will be modified if the system matches our code. This gives administrators the ability to push changes and updates without altering existing resources, as well as to detect configuration drift over time.

 

To highlight the use of one of these resources, let's take a quick look at the ArcGIS_Portal resource, which is designed to configure a new portal site without having to do so manually through the typical browser-based workflow. In this deployment, our ArcGIS_Portal resource looks exactly like the code below. The resource specifies the parameters that must be provided to successfully configure the portal site and will error out if any required parameters are missing.

 

ArcGIS_Portal arcgisPortal {
    PortalEndPoint = (Get-FQDN $env:COMPUTERNAME)
    PortalContext = 'portal'
    ExternalDNSName = $externalDNS
    Ensure = 'Present'
    PortalAdministrator = $arcgisAdminCredential
    AdminEMail = 'example@esri.com'
    AdminSecurityQuestionIndex = '12'
    AdminSecurityAnswer = 'none'
    ContentDirectoryLocation = $portalContentLocation
    LicenseFilePath = (Join-Path $(Get-Location).Path (Get-FileNameFromUrl $portalLicenseUrl))
    DependsOn = $Depends
}
$Depends += '[ArcGIS_Portal]arcgisPortal'

 

Because of the scope of what is being done within the configuration script here, we will not be doing a deep dive. This will come in a later article.

Putting it together

 

With the changes to the Terraform template in place, and with a high-level overview of PowerShell DSC and its purpose behind us, we can deploy the environment using the same commands mentioned in the first entry in the series. Within the terminal of your choosing, navigate into the extracted archive that contains your licenses, template, and DSC archive, and start by initializing Terraform with the following command.

 

terraform init

 

Next, you can run the following to start the deployment process. Keep in mind that we are not only deploying the infrastructure but also configuring ArcGIS Enterprise, so the time to complete will vary. When it completes, it will output the public-facing URL used to access your ArcGIS Enterprise portal.

 

terraform apply
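
 

That public-facing URL is surfaced through a Terraform output. A minimal sketch of what such an output could look like, assuming the resource names used in the snippets above (the output name and URL path here are illustrative; the attached template may differ):

output "arcgisEnterpriseUrl" {
  # Public endpoint served through the IIS Web Adaptor configured with the "portal" context
  value = "https://${azurerm_public_ip.arcgisEnterprise.fqdn}/portal/home"
}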


Summary

 

As you can quickly see, by removing the manual aspects of both software configuration and infrastructure deployment, a large portion of problems can be mitigated by moving toward IaC and configuration management. There are many solutions out there that handle both aspects, and these are just two options. Explore what works for you and start moving toward a more DevOps-centric approach.

 

I hope you find this helpful. Do not hesitate to post your questions here: Engineering ArcGIS Series: Tools of an Engineer

 

Note: The contents presented above are examples and should be reviewed and modified as needed for each specific environment.

It’s a long-standing dilemma for many organizations: devote resources to custom software application development, or go with a commercial off-the-shelf (COTS) solution? In this blog post, we will highlight some fundamental strategies that will help any GIS department choose what best fits its user needs.


In short, the answer to this question links back to what your requirements are and what your current infrastructure looks like. An application implementation strategy is an approach to delivering capabilities that meet your business needs with technology.

 

Gathering requirements is crucial to the success of your application, whether it's COTS or custom. You should work closely with the sponsor, stakeholders, users, and IT, focusing on the requirements rather than the solution. The types of requirements you will have to gather are:

 

  • Business: High-level vision statements (e.g., share information with the public)
  • Functional: What the application should do (from a user perspective)
  • Non-functional: How the application does it (usability, security, performance, etc.)

 

Following the requirements, there are many other factors to consider when deciding the best way to deliver new capabilities through apps. These factors include resourcing, initial development effort, ongoing app maintenance, user training, and technical support. In addition, users now expect frequent updates to their apps, which increases demand for resources to develop and maintain custom apps. As a result, it’s best to select the approach that delivers the capabilities you need with the least cost and effort. An ideal strategy will minimize cost and optimize the use of development resources.

 

 

By applying a “configure first” philosophy that prioritizes commercial off-the-shelf (COTS) apps and least-effort design patterns, you can reduce the cost and effort needed to deploy and maintain applications for your users. Organizations that adopt a configure-first philosophy start by configuring COTS apps, then extend and customize apps only when needed. Using this least-effort approach in your application implementation strategy lets you deliver capabilities faster and reserve your development resources for more complex tasks.

 

Depending on your specific requirements, you can:

 

  • Configure COTS apps to meet your business needs. ArcGIS provides many configurable COTS apps that support key workflows out of the box, including Web AppBuilder, Story Maps, Operations Dashboard, Collector, GeoForm, and Survey123. Using COTS apps requires the least effort and the lowest ongoing cost.

 

  • Extend existing apps, either by modifying templates or by creating widgets for COTS apps. Esri offers app templates at solutions.arcgis.com and github.com/esri that provide focused solutions for specific problems; you can modify the source code for these templates to add discrete capabilities. In addition, several ArcGIS COTS apps use modular frameworks that let you create custom widgets and plug them into the apps. Extending existing apps lets you develop only the additional functionality you need, saving money and effort.

 

  • Customize apps using ArcGIS APIs and SDKs. These APIs and SDKs provide objects like the Identity Manager to manage credentials within custom apps that expose parts of the ArcGIS platform (such as secure web maps). Because you don’t have to code those parts yourself, you can build business-focused apps to take advantage of ArcGIS COTS capabilities, reducing the overhead for app development and maintenance. Check out developers.esri.com for more information on the wide selection of APIs and SDKs available.

 

 

In conclusion, to establish an effective application implementation strategy for your organization, look deep into your requirements and available resources, and try to follow these 3 simple guidelines:

 

  1. Adopt a configure-first philosophy, configuring COTS apps when possible to deliver the capabilities you need.
  2. If you have a requirement that cannot be met with configuration alone, extend existing apps with discrete capabilities and widgets.
  3. When you need capabilities that you can’t provide by configuring and extending existing apps, customize apps using ArcGIS APIs and SDKs.

GIS ROI StoryMap

Posted by acarnow-esristaff Employee Jul 18, 2019

Return on Investment, commonly abbreviated as ROI, is an important goal for any GIS project and should be calculated, documented, and shared. I put this StoryMap together to share information and examples of GIS ROI. It includes many real-world examples, articles, videos, and additional resources to help you find the ROI in your GIS work.

 

https://arcg.is/1Wi0CO

Numerous technical sessions, spotlight talks, and conversations referenced the "Architecting the ArcGIS Platform: Best Practices" whitepaper (https://go.esri.com/bp). This document presents some implementation guidelines in the form of a conceptual reference architecture diagram and associated best practice briefs. You can use these guidelines to maximize the value of your ArcGIS implementation and meet your organizational goals.

If you're headed to the 2019 Esri International User Conference and would like to connect: on Tuesday I will be at the GIS Managers' Open Summit (GISMOS), and for most of Wednesday and Thursday you can find me in the “Guiding your Geospatial Journey” area of the Expo.

GIS has evolved at a rapid pace ever since computers took up the challenge of providing spatial capabilities. The evolution of GIS from a map creation platform to a Location Intelligence platform drives the need to build systems that are robust, reliable, and elastic, as they host solutions your business relies on to be successful. GIS is everywhere, running on servers within traditional data centers, in the cloud, and on mobile and IoT devices. Supporting such a diverse landscape creates unique architectural challenges that require a systematic approach to designing your GIS.

 

Esri system architecture design is based upon traditional architecture patterns centered around multiple tiers, namely a Client, Presentation, Services, and Data tier. Each tier aligns with Esri products and solution components as depicted by the example logical architecture below. Following a systematic approach, this blog series will explore the various architectural tiers and their related solution components in support of building modern GIS platforms to meet today's business and Location Intelligence needs.

 

I work with a lot of Esri users to help them successfully implement ArcGIS within their organization. Customers reach out to us because they are often challenged in a few key areas:

  • Seeking ways to reduce the time it takes to install and configure our products
  • Getting started and building proficiency
  • Maximizing effectiveness and productivity of the implementation (which could include data, maps, apps, etc.)

These are common challenges as our users are often supporting many roles within their organization (analysts, managers, system administrators, etc.) and often have limited bandwidth to roll out the latest version of our products.

I find our users can experience a much quicker time to value and improved productivity through a services package. In these services packages, we often focus on a few best practices:

  • Considering the needs of all stakeholders
  • Supporting quick wins through understanding customer needs and configuring maps and apps to meet those needs
  • Building a better user experience

Please find attached a flyer that can be used as a resource to read and understand more about some of our services packages focused on specific products and subject matter expertise (ArcGIS GeoEvent Server, ArcGIS Monitor, Insights for ArcGIS, etc.).

 

What is Infrastructure as Code?

Taken directly from Microsoft, 

"Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model".

In the simplest terms, Infrastructure as Code (IaC) is a methodology to begin treating cloud infrastructure the same as application source code. No longer are changes made directly to the infrastructure, but to the source code for a given environment or deployment. This change in behavior will lead to environments that are easily reproducible, quickly deployable and accurate.

 

 

Note:   This post is one (1) in a series on engineering with ArcGIS.

 

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage components such as virtual machines, networking, storage, firewall rules, and many others. To define what needs to be deployed or changed, Terraform uses what is called a Terraform configuration, which can be made up of one or more individual files within the same folder. Terraform files use the .tf file extension and are written in HCL (HashiCorp Configuration Language). HCL is a structured configuration language that is both human and machine friendly for use with command-line tools, and is specifically targeted toward DevOps tools, servers, etc.

 

In complex deployments, administrators may find it useful to split the configuration into a separate file for each aspect of their environment, like so:

/environment_a

   /production

      network.tf

      machines.tf

      storage.tf

      security.tf

   /staging

      network.tf

      machines.tf

      storage.tf

      security.tf

 

whereas with simple deployments, a single file may be sufficient, such as

/environment_a

   main.tf


To actually create and manage infrastructure, Terraform has a number of constructs that allow users to define Infrastructure as Code, but the two most important are Providers and Resources.

 

Resources

Resources are the mechanism that tells Terraform how the infrastructure should be deployed and configured. Each cloud provider has its own list of Resources that users have access to. An example that creates an Azure resource group and a virtual network within it would look something like this:

 

resource "azurerm_resource_group" "rg" {
   name = "prd"
   location = "westus2"

}

 

resource "azurerm_virtual_network" "vnet" {
   name = "prd-vnet"
   location = "westus2"
   address_space = ["10.0.0.0/16"]
   resource_group_name = "prd"
}

 

Providers

Providers are the mechanism for defining which cloud provider or on-premises Resources are available for use. Each provider offers a set of resources and defines, for each resource, which arguments it accepts, which attributes it exports, and how changes to resources of that type are actually applied to remote APIs. Most of the available providers correspond to one cloud or on-premises infrastructure platform and offer resource types that correspond to the features of that platform. In simpler terms, if the goal is to define infrastructure on Azure, an Azure-based provider must be used before Resources can be defined.

 

A provider example for Azure would look something like this,

provider "azurerm" {
   subscription_id = "this-is-not-a-real-subscription-id"
   client_id = "this-is-not-a-real-client-id"
   client_secret = "this-is-not-a-real-client-secret"
   tenant_id = "this-is-not-a-real-tenant-id"
}

Terraform has multiple methods for authenticating to a given cloud provider and in this example, a Service Principal is being utilized.

 

State

Lastly, Terraform makes use of a state file that keeps track of the infrastructure that has been deployed and configured. This state file is what allows Terraform to compare the last recorded state of an environment with the current run and show users the delta, so changes can be validated before they are applied. This aspect is very important in that it allows Terraform to be idempotent, which is a key aspect of IaC.

 


Infrastructure Life-cycle

For the purposes of this introduction, we will be using the attached Terraform template as the basis for the following examples. This template will be posted to GitHub in the near future for future articles to utilize and build on. It is designed to deploy the following within an Azure subscription.

 

Before it can be deployed, the deployInfo variable map will need to be populated (a sketch of this map follows the list below). When creating the Service Principal, the following documentation (Service Principal creation) can be referenced for assistance. Terraform will also need to be available locally; the steps to ensure it is can be found here.

 

  • Resource Group
  • Virtual Network
  • Subnet
  • Network Security Group
    • Rule to allow 80 and 443 traffic from the internet into the virtual network
    • Rule to allow RDP access (3389) into the virtual network from the public IP of the person who deploys the template. This is accomplished by querying the web during deployment and retrieving your public IP. This workflow is not recommended in production environments and is only being used for example purposes.
    • Rule to block all other internet traffic into the virtual network.
  • Storage account
  • Availability Set
  • Public IP
  • Network Interface
  • Virtual Machine
    • Windows Defender Extension
    • Recovery Services Vault Extension
  • Key Vault
  • Recovery Services Vault
    • Backup Policy
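
 

As mentioned above, here is a minimal sketch of what the deployInfo variable map could look like. The values are hypothetical placeholders, and the key list is illustrative (it uses key names referenced elsewhere in this series); the attached template defines the authoritative set of keys.

variable "deployInfo" {
  type = "map"

  default = {
    # Placeholder values for illustration only
    projectName             = "gisdemo"
    environment             = "dev"
    marketplaceImageVersion = "10.7.1"
    serviceAccountUsername  = "arcgis-service"
    serviceAccountPassword  = "change-me"
    arcgisAdminUsername     = "siteadmin"
    arcgisAdminPassword     = "change-me"
  }
}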

 

Creation

Once terraform is available locally and the deployInfo variable map has been completed, the first step in deploying infrastructure is to initialize terraform. This can be accomplished by navigating to the directory in which you have saved the above template with the .tf file extension and running the following command, which will prepare various local settings and data that will be used by subsequent commands.

 

terraform init

The output from initializing terraform will resemble the following.

 

With Terraform successfully initialized, the next step in the process is to have Terraform review the template and determine what changes need to take place. This step compares the template with the current state file and produces an output showing the deltas. To do so, use the following command.

terraform plan

 

Output has been truncated.

As this is the first run of terraform, terraform is only able to see resources it needs to add (create). Later in this post, we will walk through updating existing resources.

 

With Terraform successfully prepared to deploy our infrastructure, we can begin the deployment by using the following command, which will start the process of creating the resources within Azure and provide the following output when complete.

 

terraform apply

 

 

Updates

As the deployment is utilized, it may be determined that the current infrastructure is not adequately sized and resources need to be increased. Because our resources are defined as code, so is the virtual machine sizing. Within the "arcgisEnterpriseSpecs" variable map, a variable named "size" defines the Azure machine size that Terraform uses when deploying resources.

 

If this variable is changed to "Standard_D8s_v3" and terraform plan is run again, it will detect a delta between what is defined in code and what is actually deployed within Azure, and present this in the output.
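
 

For reference, here is a minimal sketch of what that variable map could look like after the change. The original size value and any additional keys are not shown in this post, so treat this as illustrative rather than a copy of the attached template.

variable "arcgisEnterpriseSpecs" {
  type = "map"

  default = {
    # Number of ArcGIS Enterprise virtual machines to deploy
    count = 1
    # Azure VM size used when deploying the virtual machine (updated from the original, smaller size)
    size  = "Standard_D8s_v3"
  }
}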

 

Once this delta is planned and the changes are ready to be pushed to the actual infrastructure within Azure, simply running terraform apply again will begin the process.

 

Decommission

The last phase of a given deployment's life-cycle is decommissioning. Once infrastructure has reached the end of its life, it must be terminated and removed, and thankfully, Terraform provides an easy method to do so with the following command.

terraform destroy

 

The output of destroying the infrastructure will resemble the following.

 

In conclusion, as teams aim to become more agile and move at a much faster pace, adopting modern methodologies such as infrastructure as code (IaC) is a great first step and should not be viewed as unnecessary, but as a crucial step in the right direction.

 

I hope you find this helpful. Do not hesitate to post your questions here: Engineering ArcGIS Series: Tools of an Engineer

 

Note: The contents presented above are examples and should be reviewed and modified as needed for each specific environment.

The Managed Cloud Services team in Professional Services is pleased to announce a new series that will be highlighting various tools and best practices for implementing ArcGIS Enterprise using modern methodologies.

 

System implementation, configuration and management of the deployment is a fun challenge, similar to Tetris. As an ArcGIS Enterprise or ArcGIS Server administrator, you are likely tasked with standing up new systems, ensuring they are configured correctly and maintaining them over time. Traditionally, these tasks were done manually where an administrator would work through procuring a new virtual machine, install the required software, work through the configuration steps needed and then ensure users were able to access it. Over time, the needs of the users may change and as such, the administrator would need to further modify the system and its software to meet those needs.

 

Here at Managed Services, we are faced with managing hundreds of customer implementations. This series will cover best practices we have developed over time. Each entry will cover a specific aspect of automating the deployment, configuration, and life-cycle management (updates, monitoring, scaling, etc.) of both the infrastructure and the ArcGIS suite.

 

 

Tags: automation, devops, implementation, engineering, cloud

As cloud adoption evolves from Web GIS to full GIS deployments, questions continue to be raised, such as “What about the desktops?” That is, when moving desktops to the cloud, what technologies should be used to support Esri desktop GIS? The cloud offers multiple desktop options, and the following provides some high-level guidance on how and when these technologies should be used. It is important to realize that each deployment is unique, and deciding which of these technologies to deploy involves multiple factors. The purpose here is primarily to share information about all of the potential solutions and highlight some of their key characteristics. They will not be ranked in any way, as deciding on one approach over another requires more detailed analysis and discussion based on specific requirements, costs, and constraints. Further, this list can likely be expanded, but the solutions below represent the most common options that Esri has encountered.

 

Note: ArcGIS Pro requires a GPU-enabled machine type for the underlying host VM. Examples include the NV6 for Azure and a Graphics Design Instance for AWS AppStream.

 

Virtual Machines - Azure and AWS

  • Use Case: Typically used to support administrative functions or small number of desktop users
  • Client Connectivity: Utilizes the Remote Desktop Connection client and the RDP protocol
  • User Experience: Published desktop with growing visual latency as geographic distance increases
  • Scalability: Limited due to no more than two concurrent users per VM
  • Management: Typically deployed without a base image
  • User Profiles: Locally stored per VM

 

Remote Desktop Services - Azure and AWS

  • Use Case: Supporting users at scale where the users are not globally distributed
  • Connectivity: Utilizes the Remote Desktop Connection client and the RDP protocol
  • User Experience: Published desktop or apps with growing visual latency as geographic distance increases
  • Scalability: Limited for ArcGIS Pro based on the number of concurrent sessions that can share a GPU
  • Management: Can be used with snapshot technology to create a base image
  • User Profiles: Roaming profile or equivalent, assuming at least two servers deployed

 

Citrix Virtual Apps and Desktops (XenApp) - Azure and AWS

  • Use Case: Supporting users at scale where the users could be globally distributed
  • Connectivity: Utilizes the Citrix Workspace app and the HDX protocol
  • User Experience: Supports both published desktops and apps and performs well with high-latency
  • Scalability: Limited for ArcGIS Pro based on the number of concurrent sessions that can share a GPU
  • Management: Can be used with snapshot technology to create a base image
  • User Profiles: Roaming profile or equivalent, assuming at least two servers deployed
  • Other: Can utilize Citrix Cloud to manage the "back-end" (e.g., Controllers/Licensing)

 

Amazon WorkSpaces - AWS

  • Use Case: Supporting users at scale where client bandwidth is not a limiting factor
  • Connectivity: Utilizes either a desktop or web client with the PCoIP protocol
  • User Experience: Supports a published desktop to an assigned WorkSpace instance
  • Scalability: Can scale as needed as users increase but is 1:1 user to VM assignment
  • Management: Cannot be used with snapshot technology so each WorkSpace is an independent deployment
  • User Profiles: Locally stored on each WorkSpace

 

Amazon AppStream 2.0 - AWS

  • Use Case: Supporting users at scale, but not all use cases are known since this is a newer offering
  • Connectivity: Utilizes either a desktop or web client with the NICE DCV protocol
  • User Experience: Supports published desktop applications
  • Scalability: Can scale as needed as back-end infrastructure capacity is managed by AWS
  • Management: Based on creating base images for different application configurations as needed
  • User Profiles: Saved to a Virtual Hard Disk (VHD) and synchronized to Amazon S3
  • Other: Esri / AWS AppStream 2.0 Deployment Guide

 

Windows Virtual Desktop - Azure (Currently in Preview)*

  • Use Case: Supporting users at scale, but not all use cases are known since this is a newer offering
  • Connectivity: Utilizes the Remote Desktop Connection client and the RDP protocol
  • User Experience: Supports both published desktops and apps
  • Scalability: Can scale as needed and either be deployed as one user per VM or concurrent sessions
  • Management: Can be used with snapshot technology to create a base image
  • User Profiles: Roaming profile or equivalent, assuming at least two servers deployed
  • Other: The only solution supporting multi-session Windows 10

 

* Third-party vendors are working to extend the core Windows Virtual Desktop capabilities and providing additional client options and management features. Current examples include CloudJumper Cloud Workspace and Citrix Managed Desktops.

There are many impactful change management components that can increase a technology solution's adoption rate among users. People coming to UC get excited about so many new capabilities or expanded uses of their platform that will bring value to their organization. Leverage these components to increase your success rate:

 

  • Preparing for Change is a strategic activity that should happen at the beginning of each project where you create strong alignment between people, the goals of the project, and the sponsors who will advocate for the technology’s use. 
  • Managing Change is a series of initiatives that are integrated into your project plan to assist people in accepting and embracing the new capabilities or workflows of each project. 
  • Reinforcing Change is the best practice to cement the new workflows into an everyday routine.  People interacting with technology brings your geospatial efforts to fruition. 

 

Come by the Guiding Your Geospatial Journey area to talk with experts about people focused change management activities that can enhance and increase your technology adoption rates.  Additionally, here are several people-oriented sessions that are happening at UC:

 

Technical Workshops (SDCC)

Tuesday, July 9
  • 8:30 am - Helping the Workforce Survive and Thrive in Times of Technology Change (Room 16 A)

Wednesday, July 10
  • 8:30 am - Get the C-Suite’s Attention with Strategic Workforce Planning (Room 31 A)
  • 1:00 pm - Increase GIS Adoption the Agile Way (Ballroom 06 F)
  • 2:30 pm - Workforce Development Planning in Three Simple Steps (Ballroom 06 F)
  • 4:00 pm - Helping the Workforce Survive and Thrive in Times of Technology Change (Room 10)

Spotlight Talks (SDCC – Expo: Guiding Your Geospatial Journey Spotlight Theater)

Tuesday, July 9
  • 10:30 am - Making It Real: Use Training to Make Your Tech Dreams Come True

Wednesday, July 10
  • 4:00 pm - Focus on Training to Go the Distance

Expo Area

Stop by to connect 1-on-1 with Esri Staff and talk more about Change Management and Adoption Strategies in our Guiding Your Geospatial Journey area.
  • Tuesday, July 9: 9:00 AM–6:00 PM
  • Wednesday, July 10: 9:00 AM–6:00 PM
  • Thursday, July 11: 8:00 AM–4:00 PM

As the Esri platform continues to evolve, it is critical that organizations maintain a capable GIS system architecture that will support new GIS/IT capabilities and scale to support growing user demand. There are further key considerations that we have seen most recently such as moving to the cloud, migrating to ArcGIS Pro, and expanding the adoption of GIS capabilities within the organization. Esri Architects can work with you to understand your organization’s needs and provide guidance. We will be available at UC2019 at the Guiding Your Geospatial Journey area of the Esri Showcase and participating in several sessions throughout the conference. Meanwhile, have a look at the Architecture and Security section of GeoNet where you will discover valuable information related to GIS system architecture!

 

Technical Workshops (SDCC)

Tuesday, July 9
  • 8:30 am - Esri Best Practices: Architecting Your ArcGIS Implementation (Room 33 C)
  • 8:30 am - Moving to a Managed Cloud Services Environment (Room 30 A)

Wednesday, July 10
  • 8:00 am - Designing an Enterprise GIS Security Strategy (Room 30 A)
  • 1:00 pm - Moving to a Managed Cloud Services Environment (Room 31 A)
  • 4:00 pm - Esri Best Practices: Architecting Your ArcGIS Implementation (Room 31 A)

Thursday, July 11
  • 10:00 am - How to be Successful with Esri Managed Cloud Services (Room 16 A)

Spotlight Talks (SDCC – Expo: Guiding Your Geospatial Journey Spotlight Theater)

Tuesday, July 9
  • 11:15 am - ArcGIS Enterprise: Architecture Best Practices
  • 1:00 pm - ArcGIS in the Cloud
  • 1:30 pm - Build Security into Your System

Wednesday, July 10
  • 10:00 am - Are You Cloud Ready?
  • 11:15 am - Designing a Robust Environment - Workload Separation
  • 1:30 pm - Considerations for a Highly Available Enterprise
  • 4:30 pm - Designing a Robust Environment - Environment Isolation
  • 5:45 pm - Distributed Web GIS - A Modern Approach to Sharing

Appointments (Tuesday, July 9 – Thursday, July 11, SDCC – Expo: Guiding Your Geospatial Journey Spotlight Theater)

Architecture Maturity Review
Get expert advice and feedback on your enterprise implementation, including best practices. Leave with recommendations for meeting security and architecture needs.
Schedule an appointment

Stop by to connect 1-on-1 with Esri Staff that can talk more about your system architecture plans in our Guiding Your Geospatial Journey area.
  • Tuesday, July 9: 9:00 AM–6:00 PM
  • Wednesday, July 10: 9:00 AM–6:00 PM
  • Thursday, July 11: 9:00 AM–4:00 PM