
Geo Developers


I have been a user of Microsoft Visual Studio since version 6.0, and I prefer it as a one-stop shop for as many kinds of development as possible, including C++, VB, C#, Python, and HTML5/TypeScript projects. One feature of VS that I really like is the ability to create project templates. VS2015 included a project template for TypeScript, but it was ugly as sin. VS2017 dropped it, but failed to provide a viable alternative; being lazy, I continued to use the same old version available online.

This must stop! Sometimes, you just have to get your hands dirty, so I decided to create my own project template from scratch. Fortunately, the TypeScript documentation has sections on Integrating with Build Tools, and Compiler Options in MSBuild, which provided valuable assistance. Also, see the MSBuild documentation and How to: Create project templates for more information.



The TypeScript website has download links to install the latest version for a number of IDEs, including VS2017. In addition, since the TypeScript folks now prefer you to use npm to install typings, you should install Node.

Warning! If you are behind a corporate firewall, you may run into an SSL certificate error when you try to use npm to install typings.


If you see that, try setting this configuration at the command prompt:

   npm config set strict-ssl false


Create a generic TypeScript project:

While, formally, the best approach would be to create a new project type, my lazy approach recycles the C# project type and redefines the build targets (but there is a disadvantage – see below). The first step is to create a blank solution in VS2017 named “TypeScriptProjectTemplates.” In Explorer or the Command Prompt, navigate to the solution folder and create a subfolder named “BasicTypeScriptTemplate.” In that folder, create a file named “BasicTypeScriptTemplate.csproj,” containing the following text:


<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props')" />
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <StartupObject />
    <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">12.0</VisualStudioVersion>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
    <!-- TypeScript compiler options for Debug builds go here -->
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Release'">
    <!-- TypeScript compiler options for Release builds go here -->
  </PropertyGroup>
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets')" />
  <Target Name="Build" DependsOnTargets="CompileTypeScript" />
  <Target Name="Rebuild" DependsOnTargets="CompileTypeScript" />
  <Target Name="Clean" Condition="Exists('$(TSDefaultOutputLog)')">
    <ItemGroup>
      <TSOutputLogsToDelete Include="$(TSDefaultOutputLog)" />
    </ItemGroup>
    <ReadLinesFromFile File="@(TSOutputLogsToDelete)">
      <Output TaskParameter="Lines" ItemName="TSCompilerOutput" />
    </ReadLinesFromFile>
    <Delete Files="@(TSCompilerOutput)" Condition=" '@(TSCompilerOutput)' != '' " />
    <Delete Files="@(TSOutputLogsToDelete)" />
    <!-- <RemoveDir Directories="$(IntermediateOutputPath)" /> -->
  </Target>
</Project>


In VS2017, add the existing project to the solution. Within the project, create an “app” subfolder, and add a new TypeScript file named “main.ts,” containing the following text:

class Student {
     fullName: string;
     constructor(public firstName: string, public middleInitial: string, public lastName: string) {
          this.fullName = firstName + " " + middleInitial + " " + lastName;
     }
}

interface Person {
     firstName: string;
     lastName: string;
}

function greeter(person: Person) {
     return "Hello, " + person.firstName + " " + person.lastName;
}

let user = new Student("Jane", "M.", "User");

document.body.textContent = greeter(user);


In the project folder, add an HTML Page file named “index.html,” containing the following text:

<!DOCTYPE html>
<html>
<head>
     <title>TypeScript Greeter</title>
</head>
<body>
     <script src="./app/main.js"></script>
</body>
</html>


At this point, the project should look like this in Solution Explorer:


Building or rebuilding the project will generate TypeScript compiler output, and a file named “Tsc.out” will be created in a subfolder named “vs”. The “Tsc.out” file defines the compiler output to delete when cleaning the project; cleaning the project will also delete that file. [Note that if you build and clean “Release” without cleaning “Debug” beforehand, the source map files will still remain.]


Export the project template:

At this point, if you export the project to a template, you have a generic TypeScript project template. However, it will be displayed under the “Visual C#” category. If you want it to appear under the “TypeScript” category, there are additional steps to take. First, unzip the template to a new folder and edit “MyTemplate.vstemplate:”


Change the “ProjectType” value from “CSharp” to “TypeScript”. Zip the contents of the folder to a new zip file, and the template is ready to use.


Building a JSAPI project template:

Now that we have a basic TypeScript project template, the next step is to use it to create a template for a simple JavaScript API project.

First, create a new project, called “ArcGIS4xTypeScriptTemplate,” using the template created above. Open a Command Prompt, navigate to the project folder, and enter the following commands:

   npm init --yes
   npm install --save @types/arcgis-js-api

Back in Solution Explorer, select the project and click the “Show All Files” button. Select the “node_modules” folder, “package.json,” and “package-lock.json,” right-click, and select “Include In Project.” Finally, replace the contents of “main.ts” and “index.html” with the text given at the JSAPI TypeScript walk-through. Your project should now look like this:


You may notice that the “import” statements in “main.ts” are marked as errors, even though the esModuleInterop flag is set in the project:


This appears to be a defect in the Visual Studio extension. The project will build without errors, and the resulting page will load correctly. If it annoys you, you can always revert to the older AMD style statements:


At this point, you’re ready to export the template.


On a final note:

The JavaScript API is updated frequently, which means that you may also want to keep your project templates up to date. Rather than updating the source project and repeating the export steps, you might want to consider keeping the unzipped template folders in a standard location, and updating those directly. Then, all you have to do is zip them to create the updated template.
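That final zip step is easy to script. Here is a minimal Python sketch (the folder and file names are hypothetical) that zips the *contents* of an unzipped template folder, so the .vstemplate file ends up at the root of the zip where Visual Studio expects it:

```python
# Zip the contents of an unzipped template folder (not the folder itself)
# into a template zip file. Folder and zip names below are hypothetical.
import os
import zipfile

def zip_template(template_dir, zip_path):
    """Zip the contents of template_dir into zip_path."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for dirpath, dirnames, filenames in os.walk(template_dir):
            for name in filenames:
                full = os.path.join(dirpath, name)
                # Store paths relative to the template folder, so the
                # .vstemplate file sits at the root of the zip.
                zf.write(full, os.path.relpath(full, template_dir))
```

Point it at the unzipped template folder and write the zip into your Visual Studio templates folder, and the template is updated in one step.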

Recently, I found myself painted into a corner.  Some time ago, I'd created custom tile caches for use with Runtime .NET, which had one scale level defined.  They worked just fine in 10.2.7, but on preparing to upgrade to 100.x, I discovered that they caused 100.6 to hang.  The workaround was simple enough, namely to define additional scale levels, even if they weren't populated.  However, the task of modifying the caches for nearly 150 users proved so daunting that I decided to let the app itself make the modification.  Updates to the app are automatically detected and downloaded, which provides a simpler mechanism than deploying a script to everyone's machine.  The check isn't a perceptible performance hit, and later on, as part of another update, I can simply deactivate it.  So here's the code:



using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

namespace NavAddin
{
     /*
      * Runtime 100.6 BUG: Tile caches that have only one scale level defined will hang up on loading.
      * WORKAROUND: Define additional scale levels
      * [Assumes that 100 < scale level < 24000]
      */
     public static class TileCacheHelper
     {
          public const string L0Scale = "24000";
          public const string L0Resolution = "20.833375000083333";
          public const string L2Scale = "100";
          public const string L2Resolution = "0.086805729167013887";

          public static bool CheckTileCache(string sPath)
          {
               // Check if tile cache (i.e. a folder)

               if (!Directory.Exists(sPath))
                    return true; // Not a tile cache

               // Check if one scale level defined

               string sConfigPath = Path.Combine(sPath, "conf.xml");
               XDocument xDoc;
               using (StreamReader sr = new StreamReader(sConfigPath))
                    xDoc = XDocument.Load(sr);
               XElement xRoot = xDoc.Root;
               XElement xTCInfo = xRoot.Element("TileCacheInfo");
               XElement xLODInfos = xTCInfo.Element("LODInfos");
               int iLevelCount = xLODInfos.Elements("LODInfo").Count();
               if (iLevelCount > 1)
                    return true; // Not a problem
               if (iLevelCount < 1)
                    return false; // This should never happen?

               // Check if scale level is between 100 (L2) and 24000 (L0)

               XElement xLODInfo, xLevelID, xScale, xResolution;

               xLODInfo = xLODInfos.Element("LODInfo");
               xScale = xLODInfo.Element("Scale");
               string sScale = xScale.Value;
               double dScale = Convert.ToDouble(sScale);
               double dL0Scale = Convert.ToDouble(L0Scale);
               double dL2Scale = Convert.ToDouble(L2Scale);
               if (dScale >= dL0Scale)
                    return false;
               if (dScale <= dL2Scale)
                    return false;

               // Redefine scale levels: the existing level becomes L1,
               // and copies of it become L0 and L2

               xLevelID = xLODInfo.Element("LevelID");
               xLevelID.Value = "1";
               XElement xLOD0 = new XElement(xLODInfo);
               xLevelID = xLOD0.Element("LevelID");
               xLevelID.Value = "0";
               xScale = xLOD0.Element("Scale");
               xScale.Value = L0Scale;
               xResolution = xLOD0.Element("Resolution");
               xResolution.Value = L0Resolution;
               XElement xLOD2 = new XElement(xLODInfo);
               xLevelID = xLOD2.Element("LevelID");
               xLevelID.Value = "2";
               xScale = xLOD2.Element("Scale");
               xScale.Value = L2Scale;
               xResolution = xLOD2.Element("Resolution");
               xResolution.Value = L2Resolution;
               xLODInfo.AddBeforeSelf(xLOD0);
               xLODInfo.AddAfterSelf(xLOD2);

               // Write config file

               using (StreamWriter sw = new StreamWriter(sConfigPath))
                    xDoc.Save(sw);

               // Rename L00 folder to L01

               string sLayersPath = Path.Combine(sPath, "_alllayers");
               string sL00Path = Path.Combine(sLayersPath, "L00");
               string sL01Path = Path.Combine(sLayersPath, "L01");
               Directory.Move(sL00Path, sL01Path);

               return true;
          }
     }
}


At some point in the 100.x lifespan of ArcGIS Runtime SDK for .NET, the old tried-and-true method of treating a MapView as just another WPF Visual went sailing out the window.  Granted, the ExportImageAsync method should have been a simple workaround, but for one drawback: overlay items are not included!


Now I don't know about you, but I find the OverlayItemsControl to be a great way to add interactive text to a map.  You can have it respond to a mouse-over:



Bring up a context menu:



Modify properties:



And so on.  In the old days, when you created an image of the MapView, the overlays would just come right along:


          private RenderTargetBitmap GetMapImage(MapView mv)
          {
               // Save map transform

               System.Windows.Media.Transform t = mv.LayoutTransform;
               Rect r = System.Windows.Controls.Primitives.LayoutInformation.GetLayoutSlot(mv);
               mv.LayoutTransform = null;
               Size sz = new Size(mv.ActualWidth, mv.ActualHeight);
               mv.Arrange(new Rect(sz));

               // Output map

               RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
                    (int)sz.Width, (int)sz.Height, 96d, 96d, PixelFormats.Pbgra32);
               rtBitmap.Render(mv);

               // Restore map transform

               mv.LayoutTransform = t;
               mv.Arrange(r);

               return rtBitmap;
          }



Not so today!  Try that approach in 100.6 and you just get a black box.    


My workaround:


  1. Create a Canvas
  2. Create an Image for the MapView and add it to the Canvas
  3. Create an Image for every overlay and add it to the Canvas
  4. Create a bitmap from the Canvas


Step 3 is trickier than you would think, however, because of two issues:  1) relating the anchor point to the overlay, and 2) taking any RenderTransform into account.


As far as I can tell, this is the rule for determining the relationship between the overlay and the anchor point:


HorizontalAlignment: Center or Stretch, anchor point is at the center; Left, anchor point is at the right; Right, anchor point is at the left.

VerticalAlignment: Center or Stretch, anchor point is at the center; Top, anchor point is at the bottom; Bottom, anchor point is at the top.

For a Canvas element, the anchor point is at 0,0 -- however, I have not found a good way to create an Image from a Canvas [if the actual width and height are unknown].
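The alignment rule above can be sketched as a small helper (the function name is mine, and the Canvas special case is omitted):

```python
def anchor_offset(h_align, v_align, width, height):
    """Offset from the overlay's anchor point to its top-left corner,
    following the alignment rule above (Canvas special case omitted)."""
    # Center/Stretch: anchor at center; Left: anchor at right; Right: anchor at left
    dx = {"Center": -width / 2, "Stretch": -width / 2,
          "Left": -width, "Right": 0.0}[h_align]
    # Center/Stretch: anchor at center; Top: anchor at bottom; Bottom: anchor at top
    dy = {"Center": -height / 2, "Stretch": -height / 2,
          "Top": -height, "Bottom": 0.0}[v_align]
    return (dx, dy)

# e.g. a 100x40 overlay, centered horizontally, VerticalAlignment = Top:
# anchor_offset("Center", "Top", 100, 40) -> (-50.0, -40)
```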


To create an Image from the element, any RenderTransform must be removed before generating the RenderTargetBitmap.  Then, the Transform must be reapplied to the Image.  Also, you need to preserve HorizontalAlignment and VerticalAlignment if you're creating a page layout using a copy of the MapView, so that the anchor point placement is correct.


So here it is, the code for my workaround:


using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

using Esri.ArcGISRuntime.Geometry;
using Esri.ArcGISRuntime.UI;
using Esri.ArcGISRuntime.UI.Controls;

namespace Workarounds
{
     public struct MapOverlayExport
     {
          public Image OverlayImage;
          public MapPoint Anchor;
          public MapPoint TopLeft;
     }

     public static class MapExportHelper
     {
          // Export bitmap from map with XAML graphics overlays

          public static async Task<ImageSource> GetMapImage(MapView mv)
          {
               RuntimeImage ri = await mv.ExportImageAsync();
               ImageSource src = await ri.ToImageSourceAsync();
               if (mv.Overlays.Items.Count == 0)
                    return src; // No XAML overlays

               // Create canvas

               double dWidth = mv.ActualWidth;
               double dHeight = mv.ActualHeight;
               Rect rMap = new Rect(0, 0, dWidth, dHeight);
               Size szMap = new Size(dWidth, dHeight);
               Canvas c = new Canvas();

               // Add map image

               Image imgMap = new Image()
               {
                    Height = dHeight,
                    Width = dWidth,
                    Source = src
               };
               Canvas.SetTop(imgMap, 0);
               Canvas.SetLeft(imgMap, 0);
               c.Children.Add(imgMap);

               // Add map overlays

               List<MapOverlayExport> Overlays = GetMapOverlays(mv);
               foreach (MapOverlayExport overlay in Overlays)
               {
                    // Get Image and location

                    Image img = overlay.OverlayImage;
                    MapPoint ptMap = overlay.TopLeft;
                    Point ptScreen = mv.LocationToScreen(ptMap);

                    // Create and place image of element

                    Canvas.SetTop(img, ptScreen.Y);
                    Canvas.SetLeft(img, ptScreen.X);
                    c.Children.Add(img);
               }

               // Create RenderTargetBitmap

               c.Measure(szMap);
               c.Arrange(rMap);
               RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
                    (int)dWidth, (int)dHeight, 96d, 96d, PixelFormats.Pbgra32);
               rtBitmap.Render(c);
               return rtBitmap;
          }

          public static List<MapOverlayExport> GetMapOverlays(MapView mv)
          {
               List<MapOverlayExport> Overlays = new List<MapOverlayExport>();
               foreach (object obj in mv.Overlays.Items)
               {
                    // Get element and location

                    if (!(obj is FrameworkElement elem))
                    {
                         Debug.Print("MapExportHelper: Non-FrameworkElement encountered.");
                         continue;
                    }
                    double dW = elem.ActualWidth;
                    double dH = elem.ActualHeight;
                    if ((dH == 0) || (dW == 0))
                    {
                         Debug.Print("MapExportHelper: Unsupported FrameworkElement encountered.");
                         continue;
                    }

                    // Remove RenderTransform and RenderTransformOrigin

                    Transform tRender = elem.RenderTransform;
                    Point ptOrigin = elem.RenderTransformOrigin;
                    elem.RenderTransform = null;
                    elem.RenderTransformOrigin = new Point(0, 0);
                    elem.Measure(new Size(dW, dH));
                    elem.Arrange(new Rect(0, 0, dW, dH));

                    // Create image of element

                    ImageSource src = null;
                    if (elem is Image imgSrc)
                    {
                         src = imgSrc.Source;
                    }
                    else
                    {
                         RenderTargetBitmap bmp = new RenderTargetBitmap(
                              (int)dW, (int)dH, 96d, 96d, PixelFormats.Pbgra32);
                         bmp.Render(elem);
                         src = bmp;
                    }
                    Image img = new Image()
                    {
                         Height = dH,
                         Width = dW,
                         Source = src,
                         HorizontalAlignment = elem.HorizontalAlignment,
                         VerticalAlignment = elem.VerticalAlignment,
                         RenderTransform = tRender,
                         RenderTransformOrigin = ptOrigin
                    };

                    // Restore RenderTransform and RenderTransformOrigin

                    elem.RenderTransform = tRender;
                    elem.RenderTransformOrigin = ptOrigin;

                    // Find top left location in map coordinates

                    MapPoint ptMap = MapView.GetViewOverlayAnchor(elem);
                    Point ptScreen = mv.LocationToScreen(ptMap);
                    double dY = 0;
                    double dX = 0;
                    switch (elem.VerticalAlignment)
                    {
                         case VerticalAlignment.Center:
                         case VerticalAlignment.Stretch:
                              dY = -dH / 2;
                              break;
                         case VerticalAlignment.Top:
                              dY = -dH;
                              break;
                    }
                    switch (elem.HorizontalAlignment)
                    {
                         case HorizontalAlignment.Center:
                         case HorizontalAlignment.Stretch:
                              dX = -dW / 2;
                              break;
                         case HorizontalAlignment.Left:
                              dX = -dW;
                              break;
                    }
                    Point ptTopLeftScreen = new Point(ptScreen.X + dX, ptScreen.Y + dY);
                    MapPoint ptTopLeftMap = mv.ScreenToLocation(ptTopLeftScreen);

                    // Add exported overlay to list

                    Overlays.Add(new MapOverlayExport()
                    {
                         OverlayImage = img,
                         Anchor = ptMap,
                         TopLeft = ptTopLeftMap
                    });
               }

               return Overlays;
          }
     }
}



P.S. -- If you want ExportImageAsync to include overlays, vote up this idea:  GeoView.ExportImageAsync should include overlays 

Originally posted by Courtney Kirkham, September 18, 2019 from the MapThis! Blog

While OAuth 2.0 is Esri’s recommended methodology for handling security and authentication for their ArcGIS platform, not everyone using it understands what it does or how to implement it. Here at GEO Jobe, we’ve had to explain it to more than a few of the people we’ve worked with. As such, we thought we’d lay out a quick guide to what OAuth is and how it works.

OAuth 2.0 handles security and authentication for the ArcGIS platform.

What is OAuth 2.0?

OAuth 2.0 is the protocol that ensures only users you give permission to can access your ArcGIS content. Esri chooses to use OAuth 2.0 for a number of reasons, including this list they’ve provided:

  • OAuth 2.0 meets the needs of both users and applications.
  • There are strong security practices around OAuth 2.0.
  • OAuth 2.0 is designed to function at Internet-scale across domains, networks, cloud services, and applications.
  • As a widely accepted standard OAuth 2.0 has many libraries and helpers for a variety of languages and platforms.

This is an important part of security for controlling who can access or edit content, as well as managing credit usage. By using OAuth 2.0 in your applications, you can make a map of company assets available to anyone in your company while still keeping it hidden from the public. A company working on building a new neighborhood could create a map to track the progress of the homes being built, while ensuring only supervisors can edit the status of the houses.

Perhaps the most important way OAuth 2.0 manages security is controlling access to premium content and services. Since interacting with these resources consumes credits, and credits cost real money, OAuth 2.0 is an important part of making sure that only the people you want accessing those resources are able to do so.
(Bonus: For additional control over security while reducing the overhead in your org, check out security.manager)

You’re not getting that data without valid credentials.

How does OAuth 2.0 work?

Here at GEO Jobe, we’ve found the best way to explain how OAuth 2.0 works is with an analogy. Say your friend, Chris, got access to some exclusive event – a networking opportunity, a party, or something like that. There is a private guest list for the event, and the doormen are checking everyone. Your friend tells you all you need to do is tell the doorman you’re there with Chris, and the doorman will let you in.

When you get to the event and check in with the doorman, one of three things can happen. We’ve outlined them each below, and explained what they mean in the context of OAuth 2.0.

The Doorman Finds Your Friend; You Get a Wristband and Go In

This is what happens when OAuth 2.0 works. You’re able to get in and see your friend. In the case of ArcGIS, this means you requested access to content that you have permission to see. After OAuth checks your credentials, you’re given a token (the wristband) that’s added to all your requests for content after that. Then, you get whatever you need (that you have permission to view), and everything is good.

The Doorman Finds Your Friend and You Don’t Get In

This is when the doorman comes back and tells you they found Chris, but Chris says they don’t know you. While this may be an awkward social situation, in OAuth 2.0, it’s pretty simple. It means you tried to access content, and OAuth 2.0 doesn’t think you are supposed to be able to see it. This will often result in an “Invalid Redirect URI” error.

In terms of development, this happens because the request is coming from a URL the app doesn’t recognize. To fix it, go to the app in your ArcGIS Org that was used to register for OAuth 2.0. Then, in the Settings menu, view the “Registered Info”. The domain sending the request will need to be included in the Redirect URIs.

The Doorman Can’t Find Your Friend

Maybe your friend left the party. Maybe the doorman thought the “Chris” they were looking for was a “Christopher” instead of a “Christine”. Regardless of the reason, the doorman can’t find your friend, and they’re not letting you into the party. When this happens, OAuth 2.0 will return an error stating that there is an “Invalid Client ID”. This is also easy for a developer to fix.

This situation occurs because there isn’t an app in the ArcGIS Organization in question with an App ID that matches what OAuth 2.0 was told to look for. This can happen if the app was deleted from your ArcGIS Org, or if the code where the App ID was specified was altered. To fix it, check where the App ID is specified in the code for the OAuth 2.0 call, and check the application in the ArcGIS Org used to register for OAuth 2.0. If the application was deleted, you will need to create and register a new application, then use that App ID. If the application exists, check under the “Settings” menu and the “Registered Info” to find the App ID. This should match the value for the App ID in the code. If it doesn’t, recopy the App ID from the application in the ArcGIS Org, then paste the value into the code where the OAuth 2.0 information is initialized.

How to Implement an OAuth 2.0 Application

Setting up an OAuth 2.0 application in your ArcGIS Organization is fairly simple. In fact, it only takes five steps! It’s so easy, we’ve outlined the process below.

1. To start, sign into your ArcGIS Org and go to the Content menu. From there, click on “Add Item” and choose the option for “An Application”.

2. Next, you’ll select the type “Application” and fill out some basic information.

3. After you add the item, go to the Settings page and click the “Registered Info” button. Note: While on the settings page, you may want to select the option for “Prevent this item from being accidentally deleted.”

4. After clicking the “Registered Info” button, the App ID you will need should be visible on the left. The final step will be to update the Redirect URIs for the application. Click the “Update” button on the right side of the screen.

5. A popup with the Registered Info should appear. Any application a developer builds that needs to use OAuth to sign into your ArcGIS organization must have its domain added to the approved Redirect URIs of an OAuth application. Add the appropriate domains in the textbox, then click “Add”. After your domains are all added, click the “Update” button at the bottom of the popup.

And there you have it! Five easy steps and you’re ready to use OAuth 2.0 in your ArcGIS Organization.
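Once registered, the App ID can also be exchanged for a token programmatically via the ArcGIS REST OAuth 2.0 token endpoint. Here is a hedged Python sketch of the client_credentials flow; the App ID and secret values are placeholders, and the request is only built, not sent:

```python
# Sketch: build a client_credentials token request for the ArcGIS REST
# OAuth 2.0 endpoint. The client_id/client_secret values are hypothetical.
from urllib.parse import urlencode

TOKEN_URL = "https://www.arcgis.com/sharing/rest/oauth2/token"

def build_token_request(client_id, client_secret):
    """Return (url, body) for a client_credentials token request."""
    params = {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        "f": "json",
    }
    return TOKEN_URL, urlencode(params)

url, body = build_token_request("myAppId", "myAppSecret")
# POST `body` to `url`; the JSON response contains an "access_token" (the
# wristband), which is appended to subsequent requests as the `token` parameter.
```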

You can relax, knowing your ArcGIS content is safe and only accessible by who you choose.


Securing your ArcGIS data is important. OAuth 2.0 can make it simple. If you need any assistance setting up OAuth for your ArcGIS Organization, or need some custom applications built while keeping your data secure, reach out to us. We’ll be glad to help!


Does the ArcGIS API for JavaScript work with Content Security Policy?  The short answer is yes, but which version you're using (4.x vs. 3.x) determines the approach to take.  Dojo allows you to configure CSP support:


// mapconfig.js
window.dojoConfig = {
     async: true,
     has: { "csp-restrictions": true }
};


So the following example works [note that blob support must be enabled]:


<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta http-equiv="content-security-policy"
               content="script-src 'self' blob:; object-src 'self'" />

     <title>Using ArcGIS API for JavaScript with CSP</title>
     <script src="./mapconfig.js"></script>
     <link rel="stylesheet" href="">
     <script src=""></script>
     <style>
          html, body, #map {
               padding: 0;
               margin: 0;
               height: 100%;
               width: 100%;
          }
     </style>
</head>
<body>
     <div id="map"></div>
     <script src="./mapinit412.js"></script>
</body>
</html>
// mapinit412.js
require([
     "esri/Map",
     "esri/views/MapView"
], function (Map, MapView) {

     var map = new Map({
          basemap: "topo-vector"
     });

     var view = new MapView({
          container: "map",
          map: map,
          center: [-118.71511, 34.09042],
          zoom: 11
     });
});


Note that CSP doesn't allow any inline JavaScript, so even the simplest blocks of code need to be in a separate file.


What about 3.x?  Aye, there's the rub.  Although Dojo supports CSP, the ArcGIS API 3.x does not: it contains code that CSP will reject.  Here's an example from VectorTileLayerImpl.js:


l = Function("return this")();


The only way to get 3.x to work with CSP is to include the dreaded 'unsafe-eval' in the policy string.  With that, the following example will work:


<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta http-equiv="content-security-policy"
               content="script-src 'self' 'unsafe-eval'; object-src 'self'" />

     <title>Using ArcGIS API for JavaScript with CSP</title>
     <script src="./mapconfig.js"></script>
     <link rel="stylesheet" href="">
     <script src=""></script>
     <style>
          html, body, #map {
               padding: 0;
               margin: 0;
               height: 100%;
               width: 100%;
          }
     </style>
</head>
<body>
     <div id="map"></div>
     <script src="./mapinit329.js"></script>
</body>
</html>
// mapinit329.js
require(["esri/map"], function (Map) {
     var map = new Map("map", {
          center: [-118, 34.5],
          zoom: 8,
          basemap: "topo"
     });
});

I received a request to provide all videos and other files available for an area of interest on the map.


Using ArcGIS Pro, I digitized a polygon to enclose the desired area. Then:

1. Use this polygon to select all pipeline features that intersect the area.
2. Export the selected pipes to an Excel file.
3. Copy only the user-defined unique ID field into a local text file, as list1.txt. Ensure there are no extra newlines or whitespace at the beginning or end of the file.
4. Move list1.txt to a new directory labeled 'stagingFiles'.
5. Using the command line, write the contents of the directory that contains the desired files to a local text file, as list2.txt:

   dir /b > list2.txt

6. Remove the value 'list2.txt' from the text file, as well as the names of any subdirectories.
7. If subdirectories exist, create another text file within the subdirectory, as list2_1.txt, then move it to the 'stagingFiles' directory; remove the value 'list2_1.txt' from that text file, as well as the names of any subdirectories. Repeat for the other subdirectories.
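The directory-listing steps can also be done in a single pass with Python's os.walk, with no per-subdirectory staging files; a minimal sketch:

```python
# Sketch: collect the file names that "dir /b > list2.txt" (plus the
# per-subdirectory repeats) would produce, in one pass.
import os

def list_files(root):
    """Return the names of all files under root, recursing into subdirectories."""
    names = []
    for dirpath, dirnames, filenames in os.walk(root):
        names.extend(filenames)
    return names
```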



Use this Python script, and follow the remaining instructions within it:


import re

with open(r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list1.txt', 'r') as f:
    generatorOfFileLines = [line.strip()for line in f]
    a = generatorOfFileLines
    for i in range(len(a)):
        pattern = re.compile(r'(.*)' + str(re.escape(a[i])) + r'(.*)')
        with open(r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list2.txt', 'r') as g:
            contents =
            matches = pattern.finditer(contents)
            for match in matches:
                results =
                print results
        i +=1

print "Copy and paste the results of the Regular Expression (above this printed statement) to a text file, as list3.txt, save it, and close it."
resultsFile = raw_input("paste path to text file here: ")
# \\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list3.txt

with open(r'{}'.format(resultsFile), 'r') as h:
    b = [line.strip()for line in h]
    c = list(set(b))
    for i in range(len(c)):
            print c[i]
print 'Now take the results above, and paste to the list4.txt file (move the text file to the intended directory to search for files)'
print 'This file is used in conjuction with the command line argument FOR /F "delims=" %N in (list4.txt) do COPY "%N" "C:\\targetFolder" to copy and paste files to an intended directory'
print 'If subdirectories exist, make a new list3.txt from the list2_# (by changing the file name in the path above), and perform the instructions in this script again.'
print 'find out what to do when a filename collides with another, like.........command overwrite yes no all.....'
print 'Separate the cmd line results to single out the files that did not copy successfully'
print 'If feasible, change the filename to its current name, and append "_1" to it.'
print 'once done, remove the list4.txt, and target folder (if applicable) from the intended directory.'
print "______________________________________________Operation is Complete"    
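The open questions printed above (what to do on a filename collision, singling out files that did not copy) can be folded into one script. Below is a Python 3 sketch, not a tested workflow: `copy_from_list` is an invented helper name, and the "_1" renaming follows the idea noted in the script's own reminders.

```python
import os
import shutil

def copy_from_list(list_path, target_dir):
    """Copy every file named in list_path (one full path per line) into target_dir.

    On a filename collision, append _1, _2, ... to the stem instead of
    overwriting, and return the list of destination paths actually written.
    """
    copied = []
    with open(list_path) as f:
        sources = [line.strip() for line in f if line.strip()]
    for src in sources:
        name = os.path.basename(src)
        stem, ext = os.path.splitext(name)
        dest = os.path.join(target_dir, name)
        n = 1
        while os.path.exists(dest):
            # Name clash: try report_1.txt, report_2.txt, ... until free
            dest = os.path.join(target_dir, '{}_{}{}'.format(stem, n, ext))
            n += 1
        shutil.copy2(src, dest)
        copied.append(dest)
    return copied
```

This replaces the FOR /F copy step entirely, so there is no cmd output to sift through afterwards.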

# Try the findall() method without groups, and it should work.
# Another use case for regex would be for validating user input in client apps to ensure what they input meets a criteria.

    # Didn't work

        #     subbedContents = pattern.sub(r'\0', contents)
        #     print subbedContents
        # i +=1

    # for dirpath, dirnames, filenames in os.walk(r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\test'):
    # \\GIS-WEBAPP\Hyperlinks\StormCCTV
        # for file in filenames:
        #     matches = pattern.finditer(re.escape(file))
        #     print matches
        #     for match in matches:
        #         print match

        # Didn't work for list4.txt:
        # trimmedResultsFile = r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list4.txt'
        # with open('{}.format(trimmedResultsFile)', 'w') as j:
        #     j.write(str(c[i]))

# fnmatch and os modules did not work

# To get the filenames of the resources in a directory:

# Hold the "Shift" key, right-click the folder and select "Open Command Window Here."
# This only works with folders, not libraries.

# Type "dir /b > dirlist.txt" without quotes and press "Enter."
# This creates a list containing file names only.

# Open Microsoft Excel and press "Ctrl-O" to bring up the Open dialog window.

# Open Txt file in Excel

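The `dir /b > dirlist.txt` step above can also be reproduced in Python, which sidesteps the folder-vs-library limitation of the Shift+right-click trick (a minimal sketch; `write_dir_listing` is an invented name):

```python
import os

def write_dir_listing(folder, out_path):
    """Equivalent of `dir /b > dirlist.txt`: bare file/folder names, one per line."""
    # Snapshot the listing before opening out_path, in case it lives in `folder`.
    names = sorted(os.listdir(folder))
    with open(out_path, 'w') as f:
        for name in names:
            f.write(name + '\n')
```

The resulting text file can then be opened in Excel exactly as described above.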






Sent the client the requested files.



















How Do I select multiple files in a folder from a list of file names
Select multiple files in same folder with variations of filenames python
Select multiple files in same folder with variations of filenames python regular expression
write contents of directory to text file.
Select variations of many file names at once python regex
regular expression tester

16 May 2018 #4


The simplest way to use a text file with a filename on each line (such as list.txt) to copy the files to a folder such as C:\Destination is by using this single command in a command prompt:

FOR /F "delims=" %N in (list.txt) do COPY "%N" "C:\Destination"

'for' loops through all the filenames in list.txt
"delims=" says use the whole of each line as the parameter %N
the quotes around %N in 'copy "%N"' allow for any filenames that contain spaces
C:\Destination specifies the folder you want to copy to (it must already exist, create it first if necessary)

If the text file contains just the file names, then the command has to be run in the folder that contains the files to be copied. To go to that folder, first use the 'change directory' command: CD <full path to the folder>
eg: CD C:\Source_folder

If the text file contains the full path and filename on each line, eg:
...then the CD step is not needed.

If the text file is in a different folder, give the full path to it in the FOR command, eg: (C:\Temp\list.txt)


Select multiple files in same folder from a list of file names
Tiffany McLeod Replied on April 25, 2009

You can use an Excel spreadsheet to automatically format the commands and then copy and paste them into a text document which you save as a *.bat file. I have a spreadsheet I've created for this and will share it with you; you can download it by clicking on the following link (hopefully).

Download Excel Spreadsheet

How to Use:

Open the Spreadsheet.

At the bottom of the screen, you will see that there are two worksheets in this file.
If your list of names includes the full file path (example: c:\weddings\sally\img1.jpg), choose the worksheet labeled "Full Path".
If your list only has the filenames (example: myimage.jpg), choose the worksheet labeled "Filename Only".

I'll explain the Filename Only worksheet:


You will NOT enter any Data into the first three columns: that's the output.
In column E (Current Folder Path) Type the full folder path where the pictures are currently located. Make sure that you include the final "\", as shown above. You will only need to type this once.

The next Column (F) is the File Name Column. Paste your list of file names here, one name per cell (the list you paste from should have only one name per line). My spreadsheet allows for well over 200 filenames before the formulas stop working (for more files, simply extend the formulas).

Type the full path of the folder you want to move the pictures to into column G, as shown.

Now, we'll look back to the First three columns, A,B, and C. Find the spot where the output in column B no longer has a filename after the folder path. Select all the output in the three columns, above the ending line. For example, in the sheet above, the ending line is Line 7 (we don't want to include that line), so the selected range would be A2:C6. For two hundred files, the range would be A2:C202.


Open notepad. Paste.

Save as a .bat file. (Choose Save As, select the folder you wish to save it in, type move.bat into the name line, and make sure that "All files" is selected from the file type drop-down list.) This .bat file is reusable: simply right-click and choose Edit to reuse it instead of making a new one each time.

Once the .bat file is saved, double-click on it to run it.

Check your destination folder, and make sure the files moved as desired.

If you want to copy the files instead of move them, simply type COPY into cell A2.

Use the Full Path worksheet in the same manner, except you don't need to enter the current folder path into cell E2.

Now for the disclaimer: Follow these instructions at your own risk. I take no responsibility for any damage caused to your data or system as a consequence of using my spreadsheet or following these instructions. Back up your data before using the .bat file.
You should test this process to make sure that you understand it, before using it for important files.

I freely admit that this is probably a bit clunky and inelegant, but it works and it is very versatile for generating large batches of DOS commands.

Best Wishes,
Tiffany McLeod aka BookwormDragon
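The spreadsheet above is essentially concatenating DOS commands. For anyone more comfortable in Python, the same .bat lines can be generated in a few lines (a sketch mirroring the "Filename Only" worksheet; `make_move_bat` is an invented name):

```python
def make_move_bat(filenames, source_folder, dest_folder):
    """Emit one MOVE command per file, quoting paths so spaces are safe,
    like the spreadsheet's output columns A-C."""
    return ['MOVE "{}{}" "{}"'.format(source_folder, name, dest_folder)
            for name in filenames]
```

Writing `'\n'.join(lines)` to move.bat gives the same reusable batch file the spreadsheet produces.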

Working With Files in Python
by Vuyisile Ndlovu Jan 21, 2019

5 Different ways to read a file line by line in Python


select/search multiple files, copy and paste to a new folder




python snippets


Article written by Amy Niessen with contributions from Ciara Rowland-Simms


On Wednesday, May 15th the Cardiff R&D Center co-hosted a Rust and C++ birthday party at Yolk Recruitment to celebrate Rust's 4th birthday! Despite short notice, we were able to get the word out in time for a nice mixture of full-time, freelance, and hobbyist programmers, as well as a few students, to join us. Quite a few people expressed interest in helping out with future events while demonstrating a lot of enthusiasm for a Rust/C++ community in Cardiff!


To begin: you can't have a party without cake, and we celebrated the birthday properly. Below you can see the cake and Ferris the crab, made from icing by Jack Kelly's partner, Sofia.


cake and Ferris the crab


We then began to introduce our speakers. We had Dan Morgan from DevOpsGroup, Ciara Rowland-Simms from Esri, Chris Light from Esri, and Jack Kelly from DevOpsGroup, with Chris doubling as MC for the event.


The first two talks were about learning new languages, specifically Rust and C++.


Dan had never done C++ until that week and spoke on the confusion you face when trying to find best practices and up-to-date learning materials online. His talk will be part of a series documenting his journey into C++, driven by advice from the audience about which resources to use next!


Dan and Rust


Ciara did a talk on learning Rust, having also never used the language. By contrast, there is a very coherent documentation story for Rust, as it is a very modern language. The learning experience was therefore comparatively painless, and she was able to cover install and setup, including debugging, along with discussing some cool and some controversial Rust language features (such as the heavy use of macros, the ability to shadow variables, and implicit returns).


Ciara and Rust


Chris’s talk was Modern C++: ACCU 2019 revelations and covered some of the cool new features of C++ 20. He also discussed C++ 17 and more broadly the modernization of the language. The talk provoked some really good discussion on some of the network capabilities that are lacking in the standard library and provided a great space for talking about where the language is moving.


Chris and C++


The final talk, from Jack, was a dive into Rust best practices, helpful tips, formatting and linting tools, and how easy CI/CD can be with Rust. He really highlighted why people are so excited about Rust and how easily it can be leveraged to hit the ground running, even with only limited experience in the language.




In the end, we had some specific language questions, which is always good. It seemed there were a lot of really good discussions taking place during the event and, of course, people already sharing excitement in the anticipation of future meetups.


Going forward, we also hope to bring in more speakers – which hopefully won’t be difficult given the enthusiasm from audience members at our first event! We really liked having talks which were a mixture of beginner accessible and provoking good discussion for more experienced developers. We struck that balance pretty well this time and hope to do so again in future meetups.


Overall, we were really pleased with how the event went and are really excited to see the beginning of a Cardiff C++/Rust community where we can all learn from each other and grow! We look forward to the next meetup and will be sure to announce it on the page. Be sure to follow it to stay in the loop on our next adventure!

Last month we hosted a unique GeoDev Webinar in which Manushi Majumdar shared her presentation on "Thinking Spatially and Statistically". Manushi introduced types and characteristics of spatial data and advanced GIS analysis techniques. She covered a few basic concepts of statistics, showed how they differ in a spatial context, and advanced towards Spatial Machine Learning with ArcGIS.


Here are the questions that were received during the webinar along with their respective answers:


Q: What is the difference between machine learning and statistics? For example, with regression, is there a difference? This always puzzles me!

A: Here is a resource to understand the difference:


Q: Are there any geoprocessing tools built into ArcGIS for running machine learning algorithms?

A: Yes, ArcGIS has support for several Machine Learning techniques. We would suggest looking at this blog post to learn more about ML support within ArcGIS Desktop.


Q: Can you provide the link to the notebook again?

A: Hub-Tutorials/GeoDev_ServiceRequests.ipynb at master · esridc/Hub-Tutorials · GitHub


Q: Are there any other good resources for finding examples of utilizing Machine Learning with GIS?

A: Here are just a few ArcGIS blogs demonstrating examples:


Q: Spatial Join: I see the tool has the capability to join two layers without common attributes. But can this be done on multiple layers in a single shot? The built-in tool only has the option to select two layers. What are the options?

A: Join works on a 1:1 principle: you can only join one layer to another. That said, you can use the concept of a table 'Relate' to join one table to many using a common attribute in those tables (this does not work spatially).


Q: Is it possible to integrate ArcGIS with machine learning software like Jupyter Notebook?

A: You can use ArcPy as well as the ArcGIS API for Python in Jupyter notebooks. 


Q: Is there a way to use machine learning to predict or project possible future incident locations without assigning a z-value?

A: Z-score (standard score) denotes the number of standard deviations from the mean a data point is. Simply put, it conveys the distribution of a point around the mean. Prediction or Classification does not need z-scores for input variables.
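As a quick illustration of the standard score described in that answer (plain Python, using the population standard deviation):

```python
from statistics import mean, pstdev

def z_scores(values):
    """Standard score: how many standard deviations each value lies from the mean."""
    mu = mean(values)
    sigma = pstdev(values)  # population standard deviation
    return [(v - mu) / sigma for v in values]
```

A value with a z-score of 2.0 sits two standard deviations above the mean of its sample.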


Q: Could you walk us through the hotspot analysis? How do you access these tools?

A: Read through this to learn more about HotSpot Analysis. It can be accessed within the Spatial Statistics (Mapping Clusters) toolbox in ArcGIS Desktop and under the Analyze Patterns category in ArcGIS Online.


Q: Can we do the machine learning analyses using 10.6 geoprocessing tools?

A: Yes, apart from the usual, ArcGIS Desktop 10.6 comes with two new tools Deep Learning Model To End and Export Training Data For Deep Learning. 


Q: I have a GIS online account. How can I access the data demonstration in ArcGIS online?

A: The data I used for my examples is publicly available. Once you add it to your ArcGIS Online account, you can use the Summarize Center and Dispersion tool there to generate spatial mean, median and standard dispersion for your data.


Q: How can I use ArcGIS for linear regression or logistical regression analysis?

A: Support for regression, both linear and logistic, is available in ArcGIS Desktop Spatial Statistics (Modeling Spatial Relationships) toolbox. Click here to learn more.


Q: Is machine learning part of programming, or is it remote sensing?

A: Machine Learning involves concepts of statistics as well as algorithms to solve problems based on patterns or inferences drawn from data. Remote sensing, on the other hand, involves studying the planet using remote instruments. Machine Learning can have applications in the field of Remote Sensing, for instance, to detect buildings, roads using satellite imagery data.


Q: Is the Jupyter environment embedded directly within Esri ML module?

A: ArcGIS Enterprise 10.7 comes with Hosted Notebooks, which let you perform spatial analysis and data science workflows in a notebook within your portal. Other than that, you can use ArcPy or the ArcGIS API for Python in an external Jupyter Notebook too.


Q: Which interpolation techniques suits best when you are dealing with underground water data?

A: While it depends on your sampling size and distance, Kriging might be a good Interpolation technique.


Q: Please suggest out-of-the-box tools provided by ArcGIS for machine learning algo.

A: Yes, ArcGIS has support for several Machine Learning techniques. I'd suggest looking at this blog post to learn more about ML support within ArcGIS Desktop.


For more information, Manushi shared her presentation: GeoDev Webinar - Thinking Spatially and Statistically


Also, for the full recording of the webinar, click here.

Well!  In my previous article, I presented a workaround for a bizarre MMPK bug that reappeared in version 100.5 of the ArcGIS Runtime SDK for .NET; but just when I thought I couldn't find another one even more unthinkably bizarre, up pops this: QueryRelatedFeaturesAsync will return a bad result when the FeatureLayer of a GeodatabaseFeatureTable is a sublayer of a GroupLayer.


Huh?  You can check out the attached Visual Studio project for confirmation, but in the meantime we have a conundrum.  It would appear that, just when the GroupLayer class is finally implemented, we have to chuck it right back out until a safer, more effective version is delivered.  I've updated an earlier article of mine to reflect that situation.  Nonetheless, I got to thinking about how Runtime support for querying M:N relationships in a mobile map package didn't even start to appear until 100.4, and what I would need to do in order to support them were I still stuck at 100.3. Or, what if QueryRelatedFeaturesAsync were to fail again in a future version?


Supporting one-to-one and one-to-many relationships is actually fairly simple, since the RelationshipInfo class gives the required information, when retrieved from both origin and destination tables.  But many-to-many relationships are entirely another can of worms, because some crucial information is inaccessible via Runtime, even though it's encoded in the geodatabase.


Contrary to the wording in the documentation for the RelationshipInfo class [and I quote: "A relationship consists of two and only two tables"], M:N relationships involve a third, intermediate table.  Querying M:N relationships requires knowledge to query that intermediate table, and that's precisely the information which is withheld from the Runtime developer.
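To make the role of that intermediate table concrete, here is a throwaway SQLite illustration (the table and column names are invented, not the real geodatabase schema). Resolving a many-to-many relationship takes one query against the link table and then a second query, with an IN clause, against the destination table; the link table's identity and key fields are exactly the information Runtime withholds.

```python
import sqlite3

# Invented schema: stations relate M:N to valves through a link table.
con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE station (oid INTEGER PRIMARY KEY);
    CREATE TABLE valve (oid INTEGER PRIMARY KEY);
    CREATE TABLE station_valve (station_oid INTEGER, valve_oid INTEGER);
    INSERT INTO station VALUES (1), (2);
    INSERT INTO valve VALUES (10), (11), (12);
    INSERT INTO station_valve VALUES (1, 10), (1, 11), (2, 12);
""")

# Step 1: query the intermediate (link) table for the origin key...
rows = con.execute(
    "SELECT valve_oid FROM station_valve WHERE station_oid = ?", (1,)).fetchall()
valve_ids = [r[0] for r in rows]

# Step 2: ...then query the destination table with an IN clause.
placeholders = ",".join("?" * len(valve_ids))
related = con.execute(
    "SELECT oid FROM valve WHERE oid IN (%s)" % placeholders, valve_ids).fetchall()
```

Without knowing that `station_valve` exists, and which of its columns holds each side's key, step 1 is impossible — which is precisely the problem described above.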


Let's take a look at how relationships are stored in a mobile map package.  In my previous article, I introduced you to the GDB_ServiceItems table.  The ItemInfo field in that table stores the JSON data used to hydrate the ArcGISFeatureLayerInfo class:


View of GDB_ServiceItems in SQLiteSpy


Here's the JSON that describes the RegulatorStation to GasValve relationship from the origin role:


     {
          "id": 4,
          "name": "Gas Valve",
          "relatedTableId": 10,
          "cardinality": "esriRelCardinalityManyToMany",
          "role": "esriRelRoleOrigin",
          "keyField": "OBJECTID",
          "composite": false,
          "relationshipTableId": 73,
          "keyFieldInRelationshipTable": "REGSTATIONOBJECTID"
     }


And here's the description for destination role:


     {
          "id": 4,
          "name": "Regulator Station",
          "relatedTableId": 13,
          "cardinality": "esriRelCardinalityManyToMany",
          "role": "esriRelRoleDestination",
          "keyField": "OBJECTID",
          "composite": false,
          "relationshipTableId": 73,
          "keyFieldInRelationshipTable": "GASVALVEOBJECTID"
     }


The two crucial items that are not included in the RelationshipInfo class are relationshipTableId and keyFieldInRelationshipTable.  But how to get at that information in your app?  Aye, there's the rub.  In short, you need to extract the geodatabase from the mobile map package and query the GDB_ServiceItems table directly.  That's where you need a library such as System.Data.SQLite, which is available via the NuGet Package Manager:


NuGet Package Manager


Given the necessary tools, the first step is to extract the geodatabase to a temporary location:


          public async Task Init(string sMMPKPath, Geodatabase gdb)
               string sGDBPath = gdb.Path;
               string sGDBName = Path.GetFileName(sGDBPath);
               string sTempDir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
               string sTempPath = Path.Combine(sTempDir, sGDBName);
               Directory.CreateDirectory(sTempDir);
               using (ZipArchive zip = ZipFile.OpenRead(sMMPKPath))
                    ZipArchiveEntry zipEntry = zip.GetEntry(sGDBPath);
                    zipEntry.ExtractToFile(sTempPath);


Next, query the desired information, taking the steps necessary to clean up afterwards:


               List<string> ItemInfos = new List<string>();
               string sConn = "Data Source=" + sTempPath + ";Read Only=True";
               string sSQL = "SELECT ItemInfo FROM GDB_ServiceItems";
               using (SQLiteConnection sqlConn = new SQLiteConnection(sConn))
                    sqlConn.Open();
                    using (SQLiteCommand sqlCmd = new SQLiteCommand(sSQL, sqlConn))
                         using (SQLiteDataReader sqlReader = sqlCmd.ExecuteReader())
                              while (sqlReader.Read())
                                   ItemInfos.Add(sqlReader.GetString(0));
               Directory.Delete(sTempDir, true);


Finally, combine the missing ingredients with the out-of-the-box information:


               _infos = new Dictionary<long, Dictionary<long, ExtendedRelationshipInfo>>();
               foreach (string sInfo in ItemInfos)

                    Dictionary<string, object> info = _js.DeserializeObject(sInfo) as Dictionary<string, object>;
                     if (!info.ContainsKey("relationships"))
                          continue;
                     object[] relationships = info["relationships"] as object[];
                     if (relationships.Length == 0)
                          continue;
                    long iTableID = Convert.ToInt64(info["id"]);

                    // Get basic table relationship infos

                    GeodatabaseFeatureTable gfTab = gdb.GeodatabaseFeatureTable(iTableID);
                    if (gfTab.LoadStatus != Esri.ArcGISRuntime.LoadStatus.Loaded)
                         await gfTab.LoadAsync();
                    Dictionary<long, RelationshipInfo> BasicInfos = new Dictionary<long, RelationshipInfo>();
                    foreach (RelationshipInfo relInfo in gfTab.LayerInfo.RelationshipInfos)
                         BasicInfos[relInfo.Id] = relInfo;

                    // Add extended data

                    Dictionary<long, ExtendedRelationshipInfo> ExtendedInfos = new Dictionary<long, ExtendedRelationshipInfo>();
                    foreach (object obj in relationships)
                         Dictionary<string, object> rel = obj as Dictionary<string, object>;
                         long iRelID = Convert.ToInt64(rel["id"]);
                         string sCard = rel["cardinality"].ToString();
                         long? iRelTableID = null;
                         string sKeyField = null;
                          if (sCard == "esriRelCardinalityManyToMany")
                          {
                               iRelTableID = Convert.ToInt64(rel["relationshipTableId"]);
                               sKeyField = rel["keyFieldInRelationshipTable"].ToString();
                          }
                          ExtendedRelationshipInfo erInfo = new ExtendedRelationshipInfo()
                          {
                               BasicInfo = BasicInfos[iRelID],
                               RelationshipTableId = iRelTableID,
                               KeyFieldInRelationshipTable = sKeyField
                          };
                          ExtendedInfos[iRelID] = erInfo;
                    _infos[iTableID] = ExtendedInfos;

               } // foreach


Here, then, is the code for querying related features:


public async Task<FeatureQueryResult> QueryRelated(ArcGISFeature feat, long iRelID)

               // Get relationship data

               if (!(feat.FeatureTable is GeodatabaseFeatureTable gfTabSource))
                    return null;
               long iTableID = gfTabSource.LayerInfo.ServiceLayerId;
               if (!_infos.ContainsKey(iTableID))
                    return null;
               Dictionary<long, ExtendedRelationshipInfo> ExtendedInfos = _infos[iTableID];
               if (!ExtendedInfos.ContainsKey(iRelID))
                    return null;
               ExtendedRelationshipInfo extInfoSource = ExtendedInfos[iRelID];
               RelationshipInfo infoSource = extInfoSource.BasicInfo;
               long iRelTableID = infoSource.RelatedTableId;
               if (!_infos.ContainsKey(iRelTableID))
                    return null;
               ExtendedInfos = _infos[iRelTableID];
               if (!ExtendedInfos.ContainsKey(iRelID))
                    return null;
               ExtendedRelationshipInfo extInfoTarget = ExtendedInfos[iRelID];
               RelationshipInfo infoTarget = extInfoTarget.BasicInfo;

               // Build query

               string sKeyValSource = feat.GetAttributeValue(infoSource.KeyField).ToString();
               Geodatabase gdb = gfTabSource.Geodatabase;
               GeodatabaseFeatureTable gfTabTarget = gdb.GeodatabaseFeatureTable(iRelTableID);
               string sKeyFieldTarget = infoTarget.KeyField;
               Field fieldKeyTarget = gfTabTarget.GetField(sKeyFieldTarget);
               StringBuilder sb = new StringBuilder();
               if (infoSource.Cardinality == RelationshipCardinality.ManyToMany)

                    // Gather key values from intermediate table

                    GeodatabaseFeatureTable gfTabRel = gdb.GeodatabaseFeatureTable(extInfoSource.RelationshipTableId.Value);
                    string sKeyFieldRelSource = extInfoSource.KeyFieldInRelationshipTable;
                    Field fieldRelSource = gfTabRel.GetField(sKeyFieldRelSource);
                    string sWhere = sKeyFieldRelSource + " = " + sKeyValSource;
                    if (fieldRelSource.FieldType == FieldType.Guid)
                         sWhere = sKeyFieldRelSource + " = '" + sKeyValSource + "'";
                    QueryParameters qpRel = new QueryParameters() { WhereClause = sWhere };
                    FeatureQueryResult resultRel = await gfTabRel.QueryFeaturesAsync(qpRel);
                    if (resultRel.Count() == 0)
                         return resultRel;
                     string sKeyFieldRelTarget = extInfoTarget.KeyFieldInRelationshipTable;
                     Field fieldRelTarget = gfTabRel.GetField(sKeyFieldRelTarget);
                     sb.Append(sKeyFieldTarget);
                     sb.Append(" IN ( ");
                    bool bFirst = true;
                    foreach (Feature featRel in resultRel)
                          if (bFirst)
                               bFirst = false;
                          else
                               sb.Append(", ");
                          string sKeyValTarget = featRel.GetAttributeValue(sKeyFieldRelTarget).ToString();
                          if (fieldRelTarget.FieldType == FieldType.Guid)
                               sb.Append("'" + sKeyValTarget + "'");
                          else
                               sb.Append(sKeyValTarget);
                    sb.Append(" ) ");

                else
                {
                     // 1:1 or 1:M relationship: simple equality on the destination key field
                     sb.Append(sKeyFieldTarget);
                     sb.Append(" = ");
                     if (fieldKeyTarget.FieldType == FieldType.Guid)
                          sb.Append("'" + sKeyValSource + "'");
                     else
                          sb.Append(sKeyValSource);
                }

               // Query related features

               QueryParameters qp = new QueryParameters() { WhereClause = sb.ToString() };
               return await gfTabTarget.QueryFeaturesAsync(qp);



Needless to say, this is a pretty extreme approach to take.  Nonetheless, you never know when this knowledge may come in useful.  




It occurred to me that since I routinely automate MMPK creation using Python, I could also create companion files containing the many-to-many relationships.  I've added a new attachment that contains both a Python script and a revised version of the RelationshipHelper class that takes advantage of it.  Now it's more feasible to support both group layers and related feature queries.
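As a sketch of what such a companion-file script might look like (`many_to_many_infos` is an invented name, but the GDB_ServiceItems table, the ItemInfo column, and the JSON keys are exactly those shown earlier in this article):

```python
import json
import sqlite3

def many_to_many_infos(gdb_path):
    """Collect the relationshipTableId / keyFieldInRelationshipTable data
    for every many-to-many relationship recorded in GDB_ServiceItems."""
    out = {}
    con = sqlite3.connect(gdb_path)
    try:
        for (item_info,) in con.execute("SELECT ItemInfo FROM GDB_ServiceItems"):
            info = json.loads(item_info)
            rels = [r for r in info.get("relationships", [])
                    if r.get("cardinality") == "esriRelCardinalityManyToMany"]
            if rels:
                out[info["id"]] = rels  # keyed by service layer id
    finally:
        con.close()
    return out
```

Dumping the returned dictionary to a JSON companion file next to the MMPK gives the app everything the Runtime API withholds, without shipping a SQLite library.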

In my previous article, I presented a workaround for preserving group layers in a mobile map opened using ArcGIS Runtime SDK for .NET 100.5.  Today's topic involves something a bit nastier.  It can be pretty frustrating when a bug that is fixed in an earlier software version reappears in a later one.  The lesson here is: Never discard your workaround code!


The bug in question involves certain multi-layer marker symbols that are not rendered properly when rotated.  For example, see this symbol as shown in the original ArcGIS Pro project:


Rotated symbol in ArcGIS Pro

Here's how it looks when exported to a mobile map package and opened using ArcGIS Runtime (see the attached Visual Studio example project):


Rotated symbol in ArcGIS Runtime, all jumbled up

Yikes!  This problem was identified at 100.1 and fixed in 100.2, but at 100.5 once more it rears its ugly head.  One workaround is to set ArcGISFeatureTable.UseAdvancedSymbology to false.  This causes marker symbols to be rendered as picture markers.  That's fine until you run into two limitations.  The first is when you set a reference scale and zoom in:


Zoomed into a bitmap

But even more challenging, what if you want to change symbol colors on the fly?  In theory, you can do that with a bitmap, but it's beyond my skill to deal with the dithering:


Failed attempt to change color of a dithered bitmap

There's another approach, but until Esri implements more fine-grained class properties and methods, manipulating symbols involves a lot of JSON hacking.  Before I go any further, let's crack open a mobile map package and see where drawing information is stored.  If you examine the mobile geodatabase using a tool such as SQLiteSpy, you will see a table called GDB_ServiceItems:


View of GDB_ServiceItems in SQLiteSpy


That's the raw JSON for the data retrieved by ArcGISFeatureTable.LayerInfo.DrawingInfo.  Fortunately, there's no need to hack into the table, because you can get the renderer for a feature layer, retrieve the symbol(s), and convert them to JSON.  Then you make whatever edits you want, and create a new symbol.


          public static Symbol UpdateSymbolJSON(MultilayerPointSymbol symOld, Color colorOld, Color colorNew)
               string sOldJSON = symOld.ToJson();
               Dictionary<string, object> dict = (Dictionary<string, object>)_js.DeserializeObject(sOldJSON);
               SymbolHelper.ProcessObjectColorJSON(dict, colorOld, colorNew);
               string sNewJSON = _js.Serialize(dict);
               Symbol symNew = Symbol.FromJson(sNewJSON);
               return symNew;

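The ProcessObjectColorJSON helper called above isn't shown in this article; the idea is a recursive walk over the deserialized JSON. Here is a Python sketch of the equivalent, assuming (as in Esri symbol JSON) that colors appear as four-element [r, g, b, a] arrays:

```python
def swap_color(node, color_old, color_new):
    """Recursively replace occurrences of color_old (an [r, g, b, a] list)
    with color_new anywhere in a deserialized symbol-JSON structure."""
    if isinstance(node, dict):
        for key, value in node.items():
            if value == color_old:
                node[key] = list(color_new)
            else:
                swap_color(value, color_old, color_new)
    elif isinstance(node, list):
        for i, value in enumerate(node):
            if value == color_old:
                node[i] = list(color_new)
            else:
                swap_color(value, color_old, color_new)
```

Because only exact matches of the old color are touched, layers drawn in other colors survive the round trip unchanged.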

So what's the workaround?  The nature of the bug seems to be an inability to process offsetX and offsetY correctly.  In fact, they seem to be reversed.  So let's see what happens when the offsets are reversed in the JSON:


Symbol with offsets reversed

Nope.  Not quite there.  What I finally ended up doing is to combine the offset layers into a single layer with no offsets.  Fortunately again, characters are already converted to polygons in the JSON, or I would be doing a lot more work.  First, I collect the offset layers and find the smallest interval (points per coordinate unit):


               bool[] Offset = new bool[layers.Length];
               List<OffsetLayer> OffsetLayers = new List<OffsetLayer>();
               double dInterval = double.MaxValue;
               for (int i = 0; i < layers.Length; i++)

                    Dictionary<string, object> lyr = layers[i] as Dictionary<string, object>;

                    // Check for X and/or Y offset

                    bool bOffset = false;
                    double dOffsetX = 0;
                    double dOffsetY = 0;
                     if (lyr.ContainsKey("offsetX"))
                     {
                          dOffsetX = Convert.ToDouble(lyr["offsetX"]);
                          lyr["offsetX"] = 0;
                          bOffset = true;
                     }
                     if (lyr.ContainsKey("offsetY"))
                     {
                          dOffsetY = Convert.ToDouble(lyr["offsetY"]);
                          lyr["offsetY"] = 0;
                          bOffset = true;
                     }
                     Offset[i] = bOffset;
                     if (!bOffset)
                          continue;

                    // Get offset layer data

                    Dictionary<string, object> frame = lyr["frame"] as Dictionary<string, object>;
                    object[] markerGraphics = lyr["markerGraphics"] as object[];
                    Dictionary<string, object> markerGraphic = markerGraphics[0] as Dictionary<string, object>;
                    Dictionary<string, object> geometry = markerGraphic["geometry"] as Dictionary<string, object>;
                    object[] rings = geometry["rings"] as object[];
                    int ymin = Convert.ToInt32(frame["ymin"]);
                    int ymax = Convert.ToInt32(frame["ymax"]);
                    double size = Convert.ToDouble(lyr["size"]);
                    double dInt = size / (ymax - ymin);
                    if (dInt < dInterval)
                         dInterval = dInt;
                     OffsetLayer layer = new OffsetLayer()
                     {
                          offsetX = dOffsetX,
                          offsetY = dOffsetY,
                          xmin = Convert.ToInt32(frame["xmin"]),
                          ymin = ymin,
                          xmax = Convert.ToInt32(frame["xmax"]),
                          ymax = ymax,
                          size = size,
                          rings = rings
                     };
                     OffsetLayers.Add(layer);

               } // for


Then I set up the combined frame and recalculate the ring coordinates:


               int iMinX = 0;
               int iMinY = 0;
               int iMaxX = 0;
               int iMaxY = 0;
               List<object[]> OffsetRings = new List<object[]>();
               foreach (OffsetLayer lyr in OffsetLayers)
               {
                    double dX, dY;
                    int iX, iY;

                    // Set up transformation

                    double dInt = lyr.size / (lyr.ymax - lyr.ymin);
                    double dOffsetX = lyr.offsetX / dInt;
                    double dOffsetY = lyr.offsetY / dInt;
                    double dScale = dInt / dInterval;
                    dX = (lyr.xmin + dOffsetX) * dScale;
                    iX = (int)dX;
                    if (iX < iMinX)
                         iMinX = iX;
                    dX = (lyr.xmax + dOffsetX) * dScale;
                    iX = (int)dX;
                    if (iX > iMaxX)
                         iMaxX = iX;
                    dY = (lyr.ymin + dOffsetY) * dScale;
                    iY = (int)dY;
                    if (iY < iMinY)
                         iMinY = iY;
                    dY = (lyr.ymax + dOffsetY) * dScale;
                    iY = (int)dY;
                    if (iY > iMaxY)
                         iMaxY = iY;

                    // Recalculate rings

                    foreach (object obj in lyr.rings)
                    {
                         object[] ring = obj as object[];
                         foreach (object o in ring)
                         {
                              object[] pt = o as object[];
                              pt[0] = (int)((Convert.ToInt32(pt[0]) + dOffsetX) * dScale);
                              pt[1] = (int)((Convert.ToInt32(pt[1]) + dOffsetY) * dScale);
                         }
                         OffsetRings.Add(ring); // collect recalculated ring for the merged layer
                    }
               } // foreach
               double dSize = (iMaxY - iMinY) * dInterval;


Finally, I assemble a new symbol layer list:


               List<object> NewLayers = new List<object>();
               bool bFirst = true;
               for (int i = 0; i < layers.Length; i++)
               {
                    if (!Offset[i])
                    {
                         NewLayers.Add(layers[i]); // keep non-offset layers as-is
                         continue;
                    }
                    if (!bFirst)
                         continue; // subsequent offset layers were merged into the first

                    // Update first offset layer

                    Dictionary<string, object> lyr = layers[i] as Dictionary<string, object>;
                    Dictionary<string, object> frame = lyr["frame"] as Dictionary<string, object>;
                    frame["xmin"] = iMinX;
                    frame["ymin"] = iMinY;
                    frame["xmax"] = iMaxX;
                    frame["ymax"] = iMaxY;
                    lyr["size"] = dSize;
                    if (lyr.ContainsKey("offsetX"))
                         lyr["offsetX"] = 0;
                    if (lyr.ContainsKey("offsetY"))
                         lyr["offsetY"] = 0;
                    object[] markerGraphics = lyr["markerGraphics"] as object[];
                    Dictionary<string, object> markerGraphic = markerGraphics[0] as Dictionary<string, object>;
                    Dictionary<string, object> geometry = markerGraphic["geometry"] as Dictionary<string, object>;
                    geometry["rings"] = OffsetRings.ToArray();
                    NewLayers.Add(lyr); // the merged layer replaces all offset layers
                    bFirst = false;
               } // for
               return NewLayers.ToArray();


And here are the results:


Fixed symbol

Colors changed


Much better. I can't guarantee that this code will work for every situation, but it seems to work fine for my own complex symbols. And remember: even if this bug is fixed at 100.6, hang onto this code in case you need it again in the future!

Article contributed to and authored by Satish Sankaran, Max Payson, and Amy Niessen


Last week, the FOSS4G community landed in San Diego for its 2019 North American conference. Esri participated in the event as a silver sponsor and, given its proximity to Esri’s home base in Redlands, many employees were able to attend. FOSS4G is an amazing event for developers and GIS geeks interested in emerging technologies, so we were excited to share our projects and to engage with thought leaders in the geospatial community.


The event kicked off with lightning talks and networking events, which dovetailed into dedicated presentations and workshops. While it had a developer focus, many of the presentations provided gentle introductions to hot topics – AI/ML, blockchain, microservices, containers, and serverless computing were all covered. Presenters discussed how these buzzwords can help scale storage, compute, and insight to solve increasingly complex challenges. Many presentations were also grounded by real-world projects, from disseminating 14 trillion USGS LIDAR points to achieving the UN's sustainable development goals.


Colleagues from Esri shared their work in presentations at the event: Atma Mani presented "Let's Take the Machines House Hunting," using Python and Jupyter Notebooks; Thomas Maurer presented "LERC - Fast Compression of Images and Tensors," highlighting low-level libraries like LERC for raster compression; and Tamrat Belayneh presented "I3S - An Open Standard to Bring 3D to Web, Desktop, and Mobile Platforms," introducing the OGC community standard I3S spec. We also appreciated hearing Howard Butler acknowledge our contributions to the GDAL Coordinate System barn-raising effort in his presentation. As an important vendor in the GIS space, we are happy to support fundamental initiatives like these, which help build core libraries used extensively by the community.


Atma Mani demonstrating the Python API to a user


While many attendees were familiar with Esri software, and some were even active users, the conversations at the Esri booth extended beyond the traditional ArcGIS workflows often discussed at Esri events. We enjoyed learning from others' diverse perspectives and expertise, and it was reassuring to see community validation of the steps we are taking in 3D, interoperable data science, and our Developer program. Esri continues to push forward on its Open Platform vision – a vision that includes support for standards, interoperability, open data, and open source. And we are constantly looking for better ways to engage with developers and support their work.


While large software businesses may have complex relationships with the open source world, Esri's role in the GIS realm has always been community focused. We hope to continue to grow the community of GIS users and developers, and FOSS initiatives are an important part of that growth.


New sticker design at FOSS4G

Screen shot of example app in action


Given that the ArcGIS Runtime SDK for .NET supports group layers at 100.5, and that they work perfectly fine in 2D maps, it's somewhat disappointing that mobile maps with group layers are still not hydrated properly. Esri can explain its reasons until it's blue in the face, but the fact is that the tools to do it exist right now. In this post I will show you a functional workaround.


Before I start showing any code, let's take a peek at the internal structure of a mobile map package, as produced by ArcGIS Pro 2.3.2:


MMPK top level

The .info file is a JSON file, and it's important because it provides the link between the map index and the map name:


"maps": [ "Mohave Electric MMPK" ]

The "Create Mobile Map Package" tool enforces unique map names, so the problem of duplicate map names should never arise.  My own practice is to have only one map per MMPK for operational layers only, no base layers.  This allows separation of packages based on frequency of updates:  facility data (frequent updates), landbase (infrequent updates), and basemaps (vector or raster tile packages).  A Runtime app can then open each package, grab its layers, and add them to the main map.
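A minimal sketch of that open-grab-add pattern, assuming ArcGIS Runtime SDK for .NET 100.x (`MobileMapPackage.OpenAsync`, `Map.LoadAsync`, and `Layer.Clone` are Runtime APIs; the package file names are hypothetical):

```csharp
using System.Threading.Tasks;
using Esri.ArcGISRuntime.Mapping;

public static class MapAssembler
{
     // Merge the operational layers of several single-map MMPKs into one Map.
     public static async Task<Map> BuildMapAsync()
     {
          Map map = new Map();
          // Hypothetical package names, ordered basemap-first
          string[] packages = { "basemap.mmpk", "landbase.mmpk", "facilities.mmpk" };
          foreach (string path in packages)
          {
               MobileMapPackage mmpk = await MobileMapPackage.OpenAsync(path);
               Map source = mmpk.Maps[0];   // one map per package by convention
               await source.LoadAsync();
               // Layers belong to their source map, so clone before re-parenting
               foreach (Layer lyr in source.OperationalLayers)
                    map.OperationalLayers.Add(lyr.Clone());
          }
          return map;
     }
}
```

Cloning sidesteps the "layer already owned by another map" error you would otherwise hit when re-parenting layers.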


Now let's look at the p14 subfolder:


MMPK p14 folder contents

The .mapx file is a JSON file containing all the information ArcGIS Pro needs to hydrate the map; it uses the CIM (Cartographic Information Model) specification. In an ideal world, that would be the file Runtime uses to open a map, but it isn't. Runtime uses the .mmap file to open maps: it's also a JSON file, but it doesn't use CIM. Although the mobile map specification is not fully aligned with CIM, it nonetheless stores group layer information:


            "id" : "b4a38ad5025b46d1ac809b3ed5258935",
            "title" : "Transformers",
            "visibility" : true,
            "layerType" : "GroupLayer",
            "layers" : [
                "id" : "2c4d3bafe82c4344a0bc2019b67097cc",
                "title" : "Capacitor Bank",
                "visibility" : true,
                "layerType" : "ArcGISFeatureLayer",
                "layerDefinition" : {
                  "minScale" : 4000


What it does not store, however, is any minScale or maxScale information for the group layer.  You can look that up in the .mapx file, as long as the group layers are uniquely named, but I don't bother.  The workaround is to design the mobile map without scale ranges on the group layers.
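If you did want those values, a sketch along these lines could pull them from the .mapx in the same archive, reusing the GetEntry helper from the attached project. The CIM property names used here (layerDefinitions, name, minScale) are assumptions; verify them against your own .mapx before relying on this:

```csharp
// Sketch only: look up a group layer's minScale from the .mapx inside the
// package. Assumes the .mapx has a top-level "layerDefinitions" array whose
// entries carry "name" and "minScale" (unverified assumption).
private static double? GetGroupMinScale(ZipArchive zip, string sMapxFile, string sGroupName)
{
     string sJSON = GetEntry(zip, sMapxFile);
     if (string.IsNullOrEmpty(sJSON))
          return null;
     var dict = js.DeserializeObject(sJSON) as Dictionary<string, object>;
     if (dict == null || !dict.ContainsKey("layerDefinitions"))
          return null;
     if (!(dict["layerDefinitions"] is object[] defs))
          return null;
     foreach (object o in defs)
     {
          if (!(o is Dictionary<string, object> def))
               continue;
          if (def.ContainsKey("name") && def["name"].ToString() == sGroupName
              && def.ContainsKey("minScale"))
               return Convert.ToDouble(def["minScale"]); // scale denominator
     }
     return null; // not found, or the group layer has no scale range
}
```

This only works reliably when group layer names are unique, which is exactly the caveat above.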


I've attached an example Visual Studio project which retrieves the .info and .mmap files to extract the operational layer definitions:


               // Open mmpk as zip archive

               string sMapName = null;
               object[] oplayers = null;
               using (ZipArchive zip = ZipFile.Open(sPath, ZipArchiveMode.Read))
               {
                    // Parse the info file to get map name

                    string sInfoFile = Path.GetFileNameWithoutExtension(sPath) + ".info";
                    sMapName = GetMapName(zip, sInfoFile, iMapIndex);

                    // Parse corresponding .mmap file and get operational layers

                    if (!string.IsNullOrEmpty(sMapName))
                    {
                         string sMapFile = "p14/" + sMapName + ".mmap";
                         oplayers = GetOperationalLayers(zip, sMapFile);
                    }
               }


The code uses the .NET JavaScriptSerializer object to parse the JSON:


          // js is a JavaScriptSerializer instance (System.Web.Script.Serialization)
          private static JavaScriptSerializer js = new JavaScriptSerializer();

          private static string GetMapName(ZipArchive zip, string sInfoFile, int iMapIndex)
          {
               string sJSON = GetEntry(zip, sInfoFile);
               if (string.IsNullOrEmpty(sJSON))
                    return null;
               Dictionary<string, object> dict = js.DeserializeObject(sJSON) as Dictionary<string, object>;
               if (!dict.ContainsKey("maps"))
                    return null;
               if (!(dict["maps"] is object[] maps))
                    return null;
               if (iMapIndex > maps.Length - 1)
                    return null;
               return maps[iMapIndex].ToString();
          }

          private static object[] GetOperationalLayers(ZipArchive zip, string sMapFile)
          {
               string sJSON = GetEntry(zip, sMapFile);
               if (string.IsNullOrEmpty(sJSON))
                    return null;
               Dictionary<string, object> dict = js.DeserializeObject(sJSON) as Dictionary<string, object>;
               if (!dict.ContainsKey("map"))
                    return null;
               if (!(dict["map"] is Dictionary<string, object> map))
                    return null;
               if (!map.ContainsKey("operationalLayers"))
                    return null;
               return map["operationalLayers"] as object[];
          }

          private static string GetEntry(ZipArchive zip, string sEntryName)
          {
               ZipArchiveEntry zipInfo = zip.GetEntry(sEntryName);
               if (zipInfo == null)
                    return null;
               string sJSON = null;
               using (StreamReader reader = new StreamReader(zipInfo.Open()))
                    sJSON = reader.ReadToEnd();
               return sJSON;
          }


Then it opens the map and restructures it to insert group layers in the appropriate places.  


          private static void AddLayers(Map MyMap, GroupLayer glParent, object[] sublayers)
          {
               foreach (object obj in sublayers)
               {
                    if (!(obj is Dictionary<string, object> layer))
                         continue;
                    if (!layer.ContainsKey("layerType"))
                         continue;
                    string sLayerType = layer["layerType"].ToString();

                    // Process layer

                    Layer lyr = null;
                    if (sLayerType == "GroupLayer")
                    {
                         // Create group layer
                         // **** LIMITATION: .mmap group layer entry does not store minScale or maxScale ****
                         // [MMPKs should be created with that limitation in mind]

                         GroupLayer glChild = new GroupLayer();
                         string sDisplayName = "";
                         if (layer.ContainsKey("title"))
                              sDisplayName = layer["title"].ToString();
                         glChild.Name = sDisplayName;
                         bool bVisibility = true;
                         if (layer.ContainsKey("visibility"))
                              bVisibility = (bool)layer["visibility"];
                         glChild.IsVisible = bVisibility;
                         if (!layer.ContainsKey("layers"))
                              continue;
                         if (!(layer["layers"] is object[] layers))
                              continue;
                         AddLayers(null, glChild, layers);
                         lyr = glChild;
                    }
                    else
                    {
                         // Get layer and add to parent

                         if (!layer.ContainsKey("id"))
                              continue;
                         string sID = layer["id"].ToString();
                         if (!OpLayers.ContainsKey(sID))
                              continue;
                         lyr = OpLayers[sID];
                    }

                    if (lyr == null)
                         continue;
                    if (glParent == null)
                         MyMap.OperationalLayers.Add(lyr);
                    else
                         glParent.Layers.Add(lyr);
               }
          }




As a bonus, the example code also builds an interactive TOC.




[P.S. -- I've heard rumors that ArcGIS Pro 2.4 will introduce potentially code-breaking changes to the mobile map specification.  Watch out for that.]

If you missed out on DevSummit this year, or are just looking to re-live it, the following are the Top 20 technical sessions you may want to check out.


First… what does "Top" really mean?


Well, some we included because they were among the most popular sessions of the week, and some we chose because the session room was really full, or flat-out overfull. But some of the sessions below also introduced new products and capabilities, and we wanted to take one more shot at making sure you know about them.

Ok, here we go, in no particular order:


0 – Real Software Engineering

It's not uncommon for some to assert that software development isn't, or shouldn't be considered, "engineering".

We were lucky to have Glenn Vanderburg, VP of Engineering, as a keynote speaker. Glenn gave an interesting talk exploring the concept of "engineering" and how it applies to the pursuit of building powerful software applications.


Glenn does a clever job exploring this idea, showing that while software engineering has aspects that are fairly unique among engineering fields, other aspects show not only that it can be considered engineering, but that when it comes to designing, testing, scaling, and deploying good software, and collectively learning from what we all do, it should be treated and practiced as an engineering discipline for us to get the most from technology.


1 - High-End 3D Visualization with CityEngine, Unity, and Unreal

GIS and game engines: an interesting combination, and the future is here today. Game-like interactive experiences, whether on the screen or in VR/AR, are the next big step in 3D GIS. In this session, you will get an introductory guide to Unity and Unreal, the leading game engines. You will learn to load your map data into a virtual world, one where you can walk through realistically rendered city models or explore interactive storytelling of your 3D project.


2 – ArcGIS API for JavaScript: What's New

This was THE most heavily attended session at DevSummit this year (that didn't have "Road Ahead" in the title). Web developers make up the largest part of the global developer community building with ArcGIS, and the developers and engineers at Esri responsible for growing this API continue adding many new capabilities each release.

For web developers, hitting up the annual What's New session is near the top of their lists of things to do. And now you can too.


3 - Using Insights for ArcGIS with Python and R

New this year, Insights has been enhanced so that developers and data scientists can now extend the capabilities with both Python and R for analysis and visualization.


4 - ArcGIS Runtime SDKs: Everything (or Anything) You Wanted to Know But Were Afraid to Ask

Live Q&A session. No slides, no demos, just all of the ArcGIS Runtime SDK developer leads. They all covered topics such as Metal, Vulkan, React Native, editing, snapping, Arcade, samples, styling 2D and 3D, and more. And here is the full list of questions that were answered by the panel of developers and engineers on the ArcGIS Runtime teams. 


5 – ArcGIS Enterprise: Architecting Your Deployment

"This is not a very GIS-y presentation, and I'm not doing any live coding...", says ArcGIS Enterprise product manager Philip Heede. Despite that caution, this session continues to be very popular in Palm Springs year after year. The title says it all. Optimizing your resources, storage, and performance starts with a solid architecture that fits the needs of your organization, apps, databases, and users.


6 - Extending Survey123 for ArcGIS

Sure, Survey123 has been one of the most popular end-user apps for a couple years now, but given this is DevSummit, this session looks into ways developers can now extend the app, to customize it further to make it more closely fit your workflows, such as writing JavaScript functions, creating Add-ins, using Webhooks, and rebranding the app.


7 - Using Webhooks in ArcGIS Enterprise

This. Demo. Theater. Was. OVERFULL. All chairs taken, standing room only, spilling out into the Showcase. This was easily the most popular demo theater of the week. If you got there too late, then today you're in luck. Here's the 30-min video... enjoy! Webhooks: They're Better Than Polling!


8 - Integrating Machine Learning and Deep Learning with ArcGIS

During the Plenary session, Rohit Singh invited everyone to follow him into "the exciting world of pavement cracks". :-) Well, this session here is a more complete version. This was the second most highly attended demo theater of the week. Also standing room only.


In this session, Rohit demonstrated how organizations can detect deep and complex spatiotemporal patterns in their data, and use that to predict geospatial events of interest – at scale, and in real-time.


And if you like this 30-min session, don’t miss the 4+ hours of machine learning "deep dive” here.  


9 - ArcGIS Data Store Management: Best Practices

This was a new session this year, but with almost 300 attendees in the room, its popularity is clear. We bet you will find this information useful as well. In this session, after a brief intro about what the Data Store is, how it works, and what it can do for you, the presenters dig into how to install and configure it, followed by techniques for backing up, restoring, moving, and upgrading the Data Store.


10 - Using Webpack and React with the ArcGIS API for JavaScript

Modern frameworks continue to make robust and powerful web app development better. In this session, you'll learn to integrate the ArcGIS API for JavaScript via the ArcGIS webpack plugin or esri-loader, and then use React to build a fast and responsive application that uses some of these modern tooling and techniques.


11 - Network Management with ArcGIS: Introduction to the Utility Network

We put this session in the wrong room--way too small. If you tried to get in, and couldn't, then this is what you missed. Next year, the update to this session will be much better placed.


12 - What's New in Collector for ArcGIS

One of the most popular apps over the past several years, the Esri Apps team gave a rundown--to a full house--on all the new capabilities of Collector, and what their plans are for the future, so that you can get the most from your field data collection. As they say "Accurate Data Collection Made Easy!"


13 - Interacting with Hosted Feature Layers through the ArcGIS REST API

Yet another overfull demo theater session. Although ArcGIS Online has a rich user interface for working with Hosted Feature Layers, there are some advanced features that developers can use for interacting with these web services using the ArcGIS REST API.


14 - Tools for the Modern Web Developer

When pulling together this session, Josh and Gavin from Esri Professional Services asked themselves: "What tools improved our workflows the most over the past year?" So in 30 minutes, they covered helpful techniques for using Postman, Visual Studio Code, and open source tools from Esri for working with Calcite Maps, React, Angular, and Ember.


15 - Building Native Cross-Platform Apps with AppStudio: Advanced Topics

DevSummit attendees really like sessions with the word "Advanced" in the title. In this talk, we dig into some new capabilities, such as leveraging other geolocation tech with your apps, using Bluetooth, Beacons, and high-accuracy GNSS receivers. Also demonstrated is the new extension for developing AppStudio apps with the Visual Studio Code development environment.


16 - Roadmap for BIM-GIS Interoperability

As soon as GIS introduced 3D capability, users started experimenting with putting one of the most widely created 3D data types in GIS context, Building Information Modeling (BIM) data. This session covers best practices and techniques for using BIM data in ArcGIS, along with a road map for where things are going.


17 - Continuously Delivered Scalable Geoservices on Kubernetes

Containerization is hot. It’s a sharply growing trend in the IT industry, improving the automation, scalability, and management of apps, services, data, users, and organizational workflows. In this session, developers from Esri's Washington DC R&D Center cut to the chase on how to use Kubernetes and Jenkins, and their useful abstractions, for automating and managing the creation and use of containers, so that you can continuously deliver scalable web services.


18 - ArcGIS Pro SDK for .NET: Understanding the CIM, a Guide for Developers

CIM == Cartographic Information Model. While the Pro SDK provides classes and methods for the most commonly used aspects of rendering layers and managing map layouts, the CIM opens that box up all the way. Access to the CIM gives ArcGIS Pro developers access to all of these much more advanced capabilities. This session digs right in, showing you techniques you'll find useful right away.


19 – Improving your Web App through UI/UX Best Practices

With so many great GIS tools around, why isn't map app design a slam dunk? This session teaches you that great UX is more than the sum of its parts and explains how to design more compelling User Interfaces. We'll cover fundamental building blocks of successful applications and introduce ways to improve the total experience.


Tied for 20 through 242: all of the other session videos, which you can find here:

Let us know what you think!

In March, Chris LeSueur, Erwin Soekianto, and James Tedrick presented the GeoDev Webinar, "Extending Survey123 Field App Using AppStudio for ArcGIS". Coming fresh out of DevSummit, they had a lot of new information to share and thought it would be great to do so in a webinar. 


There were a lot of great questions to come out of the webinar, which are shared below. You can also find a link to the recording of the webinar, along with a slide deck of their presentation after the Q&A section.



Q: When using Survey123, will we always be using QML?

A: If you are going to customize the code of the Survey123 app, then you will be writing in QML & JavaScript.


Q: A complaint I have had for the past two years is that the buttons when you close a survey are so small. I am usually in the field with sun in my eyes and huge fingers, so it is difficult to choose one option to save/discard/close. Is there anything that can be done about this?

A: Thank you for the feedback. This is something that could be customized using AppStudio.


Q: Does it need to be driven by ArcGIS Online or Portal? Or can we connect directly to ArcGIS Server Services?

A: Survey123 for ArcGIS is designed to work with forms that are stored in ArcGIS Online and ArcGIS Enterprise.


Q: How do I modify a Survey123 app for any time I create a new version of it? Do I put a version number or something like that?

A: With AppStudio, every time you make changes and upload your apps to your organization account, the version number will increase. You can always save previous versions for your archives.


Q: Does AppStudio honor smart form functionality of Survey123? Mine does not; am I doing something wrong? 

A: The Survey123 Template features the full smart form capabilities of Survey123.


Q: AppStudio puts image and geopoint questions first no matter what the order in the Survey123 form - how do we change this?

A: By default, the Quick Report template has this behavior, but Survey123 honors the question ordering in the form.


Q: Using the Survey123 template goes through the app gallery instead of opening the survey directly - how do we bypass this?

A: Yes, you could customize the code of Survey123 using AppStudio to directly open a survey when the app is opened.


Q: With the live "Compass" info, what if you have a high accuracy GPS connected via bluetooth? Will the app use the GPS or the device's compass?

A: That would depend on how the Compass was implemented. It could either always use the mobile device's compass or be configured to use the current location provider.


Q: Can I set up my own template for creating sites?

A: Yes, AppStudio lets you create your own App Template that you can reuse to build your own apps.


Q: Is it recommended to delete the forms not used in your custom app to improve performance?

A: The number of forms shouldn't have an impact on performance.


Q: How is AppStudio different from Survey123 Connect?

A: AppStudio is an environment for creating custom applications; Survey123 Connect is an environment for authoring and publishing forms.


Q: I have some faculty who want a form that will re-order a checklist based on counts of other reports nearby, similar to eBird's checklist app, but for non-avian observations. Would this be possible to create using AppStudio?

A: Yes, this would be possible, but it would definitely require some custom code in the Survey123 app and most likely a custom geoprocessing service that queries a feature.


Q: When are you expecting the Add-in capability to be available?

A: The Survey123 add-in capability will be added to the Early Adopter Community in the near future; we anticipate a formal beta release around the Esri User Conference (UC).


Q: Can I display previous data into a new Survey123 so, for example, previous data + new collected data = new sum?

A: Yes, this is possible with out of the box Survey123 using the Inbox feature.


Q: Can you show where you would add the printer info?

A: This gets into some advanced development that, unfortunately, we cannot fully address here. You can start by looking at some of the samples in AppStudio Desktop that work with hardware devices, such as the "GNSSInfo" and "BarcodeReader" samples.


Q: Is Survey123 now able to display custom maps on the web? Currently we are able to load our custom map on the surveyConnect but it doesn't show up on the browser.

A: Thank you for the feedback; this enhancement is something we are considering for a future release.


Q: With this AppStudio, can we customize the base maps to suit our need or change them to use our customized maps?

A: You can set the maps in the out of the box Survey123 by manually configuring map settings; please see


Q: How can we prepare maps by Collector apps?

A: Please see for configuring custom maps with Survey123.


Q: Can you customize time/location stamp with fonts and placement of text on the image?

A: Yes, you can customize the time-stamp on the image.


Q: Can you use this to add view only data to the basemap used in a geopoint question of a survey?

A: Yes, this is a possible customization of the Survey123 template.


Q: I am having trouble with white-labeling the Survey123 app. I keep getting an error for invalid redirect URI when trying to connect to our Enterprise Portal. I have tried adding the http and https urls to the redirect properties but cannot get it to work. Our portal url is What would be the correct format for the redirect URI?

A: Please see the documentation on the ArcGIS Developers site concerning mobile redirects:


Q: Can the new Survey123 work with sso IWA on Portal? 

A: Survey123 can work with IWA.


Q: Can AppStudio work with the report(beta) functions in

A: The report functionality is a REST-based service; it would be possible for an AppStudio app to trigger a feature report request.


Q: Since we have to upload the code who owns the IP of the code?

A: Esri maintains ownership of the original code, but you will own any modifications you make to the code. When you distribute the app, you must acknowledge Esri somewhere within the app.


Q: What kind of documentation for development and API is available outside of the blog shown?

A: AppStudio has extensive documentation with examples and an API reference. See:


Q: How do you measure the length of a pipe?

A: For doing things like measuring the length of a pipe, you may want to investigate the Spike GPS product; it can integrate with Survey123:


Q: What licensing conditions apply to apps created with AppStudio? For example, what restrictions would apply to a Survey123 app created using the template? 

A: The Survey123 template is licensed under the Apache 2 open source license; no additional encumbrances apply from it.


Q: Is there a reason to incorporate Collector along with Survey123?

A: There are several customers who use both Collector and Survey123 in their workflow, for example, using a Collector map for overview and asset inventorying while using Survey123 for asset inspection. These applications can also be linked to open the other at appropriate points; see


Q: How compatible is Survey123 with ArcGIS Pro?

A: Survey123 and ArcGIS Pro can both work with feature services, allowing both to operate on the same data.


Q: Can you share a customized app in ArcGIS Online using groups?

A: Yes, with AppStudio you can create apps and share them with other users in your ArcGIS Organization, which they can download and run in AppStudio Player. Think of it as similar to how you can share a web map with someone in your ArcGIS Organization, except this time you are sharing a native app and they are running in AppStudio Player instead of a browser.


Q: Can you collect data via app player?

A: Yes, you can run an app inside of AppStudio Player that collects data.


Q: What's the cost to further enhance Survey123 in AppStudio? 

A: Customizing the Survey123 template with AppStudio requires an AppStudio Standard license. The AppStudio Standard license is included with a paid ArcGIS Developer Subscription (Builder level or above). Review more information here:


Q: Will this solve the issue of not being able to capture images on existing feature service data? We do inspections of existing features, but Survey123 does not allow for capturing.

A: Yes, it would be possible to customize Survey123 to support image attachments while editing existing features.


Q: What would be the best way to auto-populate fields in Survey123 from feature layers in a Web App?

A: To populate data from a web application into Survey123, you would use URL parameters to load the data. See for more information.
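As an illustration, a web app can launch the field app with values pre-filled via the arcgis-survey123 custom URL scheme; the itemID and field names below are hypothetical placeholders:

```
arcgis-survey123://?itemID=36ff9e8c13e042a58cfce4ad87f55d19
    &field:inspector=jsmith
    &field:asset_id=1024
    &center=34.056,-117.195
```

The field: prefix maps each parameter onto a question name in the survey, and center positions the geopoint map.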


Q: Can you bring in Feature services from your own organizational ArcServer?

A: Survey123 works with ArcGIS Enterprise, and through it, can connect to a feature service on an organizational ArcGIS Server.


Q: Earlier you mentioned add-ons as a future feature. Can you provide more details on this?

A: The Survey123 add-in capability will be added to the Early Adopter Community in the near future; we anticipate a formal beta release around the Esri User Conference (UC).


Q: What would you say are the main differences between WebApp builder and AppStudio? What might be some examples of when you would want to use one over the other?  

A: Web AppBuilder gives you a widget-based experience for building web applications that run inside a browser. AppStudio is for developing apps that get installed as standalone applications and can be published to app stores or distributed as installation files.


Q: Is QML accessing the native APIs of iOS and Android?

A: When you write in QML, you are not accessing the native APIs directly, but you are accessing them through Qt/QML components. This allows you to write an app once in QML and have it work on multiple platforms. Also, with AppStudio, you get the AppStudio Framework, which is a set of QML components that give you cross-platform access to things like biometric authentication, barcode scanning, and more.


Q: Is Esri planning to make the photo branding as an option in a future release without needing to code it yourself?

A: Thank you for the feedback; this enhancement is something we are considering for a future release.


Q: Can you change the lat long to UTMs?

A: Survey123 can display UTM coordinates out-of-the-box via both the Map Settings and the pulldata() function; please refer to the "Coordinate Format" section of the Survey123 documentation.


Q: Do developers need a paid license? What is the entry level price for a developer?

A: The AppStudio Standard license is included with a paid ArcGIS Developer Subscription (Builder level or above). You can read more information here:


Q: Without customization, can Survey123 fetch data from a database at runtime and display in a drop-down menu?

A: Thank you for the feedback; this enhancement is something we are considering for a future release.


Q: What level of accuracy do you get from the photo coordinates?

A: Photos taken in Survey123 inherit the location accuracy of the GPS receiver connected to Survey123 at the time of capture.


Q: What is the cost of the Survey123 app?

A: Survey123 is included in the Fieldworker, Creator, and Pro user roles; it is also available for purchase as an added license on top of the Editor role. Please refer to the ArcGIS Online Pricing page for additional details.


Q: When a survey is resubmitted, is it possible to get an API trigger to be used in Microsoft Flow?

A: Support for an 'editing data' (resubmit) event is coming in the next version of Survey123.


Q: Is xForms the area where you could make fields required to be filled out before they are submitted?

A: Yes, required questions are part of the xForms/XLSForms form definition.


Q: Can you load previous surveys created in Connect into here and change those settings?

A: A custom version of the Survey123 app supports the full XLSForm standard shown in Survey123 Connect. Customizations you make may not be visible in Survey123 Connect, but they pass through to the custom application without problems.


Q: Is AppStudio considered to be a mobile native or hybrid application?

A: AppStudio apps are native mobile applications written in QML and JavaScript.


Q: Is it possible to combine the Augmented Reality app with survey options (take pic, record voice, and answer survey question, for example)?

A: Yes, this is possible with custom code. Take a look at the "AuGeo" template included with the "Enterprise Templates" in AppStudio Desktop. You could combine AuGeo with the functionality from the Quick Report template or Survey123.


Q: Would the image attachments work in ArcMap 10.1 from Survey123?

A: Survey123 stores image data as standard attachments readable by ArcMap.


Q: Can the Inbox be customized to show pictures from a submitted form? 

A: Yes, this could be a possible customization enhancement.


Q: Can the Inbox be customized to increase the limit beyond 1000?

A: Yes, this could be a possible customization enhancement.


Q: How can we record time in the correct time zone? It shows the right time zone when collecting, but writes in UTC.

A: ArcGIS Online stores database time in UTC, which may cause an issue when viewing times in software that is not time zone aware. A Survey123 form can be designed to store the device's offset from UTC, which could be used to calculate the local time.
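As a minimal sketch of that approach, assume the survey stores the device's UTC offset in minutes alongside each record (the field names and offset convention here are illustrative assumptions, not a documented schema):

```typescript
// Given a UTC timestamp stored by ArcGIS Online and the device's
// UTC offset (in minutes) captured in a separate survey field,
// reconstruct the local wall-clock time for display.
function toLocalTime(utcMillis: number, offsetMinutes: number): Date {
  // Shift the instant by the stored offset; the resulting Date's
  // UTC fields then read as the device's local wall-clock values.
  return new Date(utcMillis + offsetMinutes * 60_000);
}

const utc = Date.UTC(2019, 2, 15, 20, 30); // 2019-03-15 20:30 UTC
const local = toLocalTime(utc, -7 * 60);   // device was at UTC-7
console.log(local.toISOString());          // reads as local 13:30
```

The shifted value is only for display; the authoritative timestamp remains the UTC value stored in the database.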


Q: Can this (record time in correct time zone) be done in Survey123, or does it have to be customized in AppStudio?

A: Yes, this could be a possible customization by pre-adjusting the time, though technically the data would then be stored with the wrong time.


Q: If I want to submit data to a different database, does AppStudio use the webhooks with Microsoft Flow or is Flow no longer needed?

A: There are two different options described for writing data to a different database: 1) Use a webhook and webhook provider to create a copy of the data in another data system, or 2) use AppStudio to customize where the data is sent to. These are independent; the customized application would not need to use a webhook provider like Microsoft Flow.


Q: Is there an AppStudio template for Collector?


A: No, we do not have a data capture template that does everything Collector for ArcGIS can do, but take a look at the Quick Report template; it is a good example of editing feature layer data (points, lines, and polygons).


Q: What would the main customer base be for this service within Google Maps platform?

A: Survey123 and AppStudio are designed to work with Esri's Geospatial Cloud.


Q: Is it possible to read the bearing from the sensor on the device rather than manually inputting it? 

A: Yes, this could be a possible customization enhancement.


Q: Is it possible to work with geometry like line polygon?

A: Polygon and line capture support are now in beta testing in Survey123; please visit Survey123's Early Adopter Community for more information.


For additional questions that came up at the end of the webinar, please be sure to view the recording:


You will also see useful links and tips from our presenters at the end of both the recording and the presentation. 


So that was March's GeoDev Webinar! What would you be interested in seeing put together in a GeoDev webinar? We are always looking for great developer topics to incorporate, so we're all ears! Comment your suggestion below!

Granted, it's a lame Internet meme, but it got your attention, no? My argument rests on a single word: granularity. If you picked up on that, you need not read the rest of my post. For those of you who need one or two concrete cases, read on.


1) You may not need me to tell you that containment associations simply fail to replace Smallworld's internal worlds. Internal worlds allow an independent coordinate system; replicating that would, I suppose, require a way to trace between feature datasets, and at this point I need say no more.


2) That aside, here's a nifty little diagnostic tool -- stolen from Smallworld -- that I implemented for the old geometric network, and I challenge Esri to allow me to duplicate it for the new utility network. This is how it works in ArcMap: you click a button, and it displays the actual edge connectivity for junction features in the current map view.




As far as I can tell, this functionality is not possible on a utility network via the Pro/REST/Enterprise APIs. You can access the ancient code here:


And I'll leave it to you kids to prove me wrong. To be fair, I did hear, from certain folks at the DevSummit, rumors of a ForwardStar-ish implementation to come, but, hey, guess what: it's not here yet. Hence my original point.