Geo Developers

I maintain a number of automated map products in ArcMap that involve not just spatial queries and geometric operations, but also fine-grained manipulation of layers, including renderers and symbology. Let's face it: I never could get the arcpy.mapping module or early versions of ArcGIS Pro to cut the mustard. Later versions of the ArcGIS Pro SDK introduced far greater capability to manipulate map layers and layout elements. But then I asked myself: should users be running Pro at all to create those plots?

 

At Pro 2.4.3, I started taking a closer look at arcpy.mp, wondering if I could create a geoprocessing tool and publish it as a web tool for consumption by a custom Web AppBuilder widget in Portal. I am happy to say that an initial proof-of-concept experiment has been a success.

 

Before I go into that, I'd like to point out some of the features of arcpy.mp that convinced me it has finally reached the level of functionality I need:

 

  • Load and modify symbols
  • Change and manipulate renderers
  • Make layout elements visible or invisible
  • Make modifications at the CIM level

 

One thing arcpy.mp doesn't do yet is create new layout elements, but for my purposes I can recycle existing ones. A good approach is to keep a number of elements in a layout for various tasks, and make them visible or invisible on demand.

 

        # Show or hide legend
        legend = self.__layout.listElements("LEGEND_ELEMENT")[0]
        if self.__bOverview:
            if self.__bMainline:
                legend.visible = True
            else:
                legend.visible = False
        else:
            legend.visible = True

 

The ability to manipulate legend elements is still pretty limited, but I haven't run into any deal-killers yet. If you really hit a wall, one powerful thing you can now do is dive into the layout's CIM (Cartographic Information Model) and make changes to it directly. Here's an example of modifying a legend element in a layout via the CIM:

 

aprx = arcpy.mp.ArcGISProject("c:/apps/Maps/LeakSurvey/LeakSurvey.aprx")
layout = aprx.listLayouts("Leak Survey Report Maps Template")[0]
cim = layout.getDefinition("V2")
legend = None
for e in cim.elements:
    if type(e) == arcpy.cim.CIMLegend:
        legend = e
        break
legend.columns = 2
legend.makeColumnsSameWidth = True
layout.setDefinition(cim)

 

While the CIM spec is formally documented on GitHub, a simpler way to explore the CIM is to check out the ArcGIS Pro API Reference; all objects and properties in the ArcGIS.Core.CIM namespace should be mirrored in Python.
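
Since those CIM classes are exposed directly in Python, a quick way to see what you have to work with is to introspect a layout's CIM definition. A minimal sketch, reusing the project from the example above (the property listing is just Python's built-in dir()):

import arcpy

aprx = arcpy.mp.ArcGISProject("c:/apps/Maps/LeakSurvey/LeakSurvey.aprx")
cim = aprx.listLayouts()[0].getDefinition("V2")

# Print the CIM class name of each layout element, e.g. CIMLegend, CIMMapFrame
for e in cim.elements:
    print(type(e).__name__)

# List the public properties Python exposes on the first element
print([p for p in dir(cim.elements[0]) if not p.startswith("_")])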

 

Part One: Creating a Python Toolbox

 

LeakSurvey.pyt is in the sample code attached to this post. While my initial draft was focused on successfully generating a PDF file, when the time came to test the tool as a service, additional factors came into play:

 

  • Getting the service to publish successfully at all
  • Returning a usable link to the resulting PDF file
  • Providing a source for valid input parameters

 

Sharing a geoprocessing tool as a package or service is one of the least intuitive, most trippy experiences I've ever had with any Esri product.  The rationale seems to be that you are not publishing a tool, but a vignette. You can't simply put out the tool and say, here it is: you must publish a geoprocessing result. As part of that concept, any resolvable references will cause ArcGIS to attempt to bundle them, or to match them to a registered data store. This is a great way to get the publication process to crash, or lock the published service into Groundhog Day.

 

So, one key to successfully publishing a web tool is to provide a parameter that:

 

  1. Gives the tool a link to resolve data and aprx references, and
  2. When left blank, returns a placeholder result that you can use to publish the service.

 

LeakSurvey.pyt does just that. Here's the definition for the "Project Folder" parameter:

 

        param0 = arcpy.Parameter(
            displayName = "Project Folder",
            name = "project_folder",
            datatype = "GPString",
            parameterType = "Optional",
            direction = "Input")

 

When left blank, the tool simply returns "No results" without throwing an error. Otherwise, it points to a shared folder that contains the ArcGIS Pro project and some enterprise GDB connection files.
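
Here's a minimal sketch of how that guard might look in the tool's execute method. The parameter indices and the placeholder string are assumptions based on the definitions shown in this post; the attached tool may differ:

    def execute(self, parameters, messages):
        sFolder = parameters[0].valueAsText  # "Project Folder"
        if sFolder in (None, "", "#"):
            # Placeholder result: lets the tool complete so a geoprocessing
            # result can be created and published as a web tool
            parameters[7].value = "No results"
            return
        # Otherwise resolve the project and connection files from the share
        # (assumes arcpy and os are imported at module level)
        aprx = arcpy.mp.ArcGISProject(os.path.join(sFolder, "LeakSurvey.aprx"))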

 

Returning a usable link to an output file involves a bit of a trick.  Consider the definition of the "Result" parameter:

 

        param7 = arcpy.Parameter(
            displayName = "Result",
            name = "result",
            datatype = "GPString",
            parameterType = "Derived",
            direction = "Output")

 

The tool itself creates a path to the output file as follows:

 

        sOutName = self.__sSurveyType + "_" + self.__sSurveyName + "_" + self.__sMapsheet + "_"
        sOutName += str(uuid.uuid4())
        sOutName += ".pdf"
        sOutName = sOutName.replace(" ", "_")
        sOutput = os.path.join(arcpy.env.scratchFolder, sOutName)

 

If that value is sent to the "Result" parameter, what the user will see is the local file path on the server. In order for the service to return a usable URL, a return parameter needs to be defined as follows:

 

        param8 = arcpy.Parameter(
            displayName = "Output PDF",
            name = "output_pdf",
            datatype = "DEFile",
            parameterType = "Derived",
            direction = "Output")

 

Traditional tool validation code is somewhat funky when working with a web tool, and I dispense with it. Rather, the tool returns a list of valid values depending on the parameters provided, keeping in mind that I want this service to be consumed by a web app. For example, if you provide the tool with a survey type and leave the survey name blank, it will return a list of the surveys that exist. If you provide a survey type and name and leave the map sheets parameter blank, it will return a list of the map sheets for that survey:

 

        if self.__sSurveyName in ("", "#", None):
            # Return list of surveys for type
            return self.__GetSurveysForType()
        self.__bMainline = self.__sSurveyType in ("MAINLINE", "TRANSMISSION")
        self.__Message("Querying map sheets...")
        bResult = self.__GetMapsheetsForSurvey()
        if not bResult:
            return "No leak survey features."
        if self.__sMapsheets in (None, "#"):
            # Return list of map sheets for survey
            sResult = "MAPSHEETS|OVERVIEW"
            for sName in self.__MapSheetNames:
                sResult += "\t" + sName
            return sResult

 

So how's the performance? Not great compared to doing the same thing in ArcObjects, but there are things I can do to improve it. For example, because the tool must re-query the survey and its map sheets every time it runs, there is an option to specify multiple sheets, which are combined into one PDF and returned to the calling application. The tool also supports an "ALL" map sheets option, to bypass the need to return a list of map sheets for the survey.
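
For the multi-sheet case, arcpy.mp's PDFDocument makes the combining step straightforward. A rough sketch (selectedSheets and the per-sheet update logic are hypothetical; PDFDocumentCreate, appendPages, and saveAndClose are the actual arcpy.mp calls):

# Export each requested sheet, then combine the pages into one PDF
pdfDoc = arcpy.mp.PDFDocumentCreate(sOutput)
for sSheet in selectedSheets:
    # ... update the map extent/elements for this sheet (not shown) ...
    sPagePath = os.path.join(arcpy.env.scratchFolder, sSheet + ".pdf")
    layout.exportToPDF(sPagePath)
    pdfDoc.appendPages(sPagePath)
pdfDoc.saveAndClose()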

 

Nonetheless, arcpy can suffer in comparison to ArcObjects in various tasks [see this post for some revealing comparisons]. On the other hand, the advantages of using arcpy.mp can outweigh the disadvantages when it comes to automating map production.

 

After testing the tool, it's a simple matter to create an empty result and publish it to Portal:

 

 

For this example, I also enable messages to be returned:

 

 

Once in Portal, it's ready to use:

 

 

Part Two: Creating and Publishing a Custom Web AppBuilder Widget

 

As I've mentioned in another post, one reason I like developing in Visual Studio is that I can create and use project templates. I've attached my current Web AppBuilder custom widget template to this post.

 

 

I've also attached the code for the widget itself. Because the widget makes multiple calls to the web tool, it needs a way to sort through the returns. In this example, the tool prefixes a list of surveys with "SURVEYS|" and a list of map sheets with "MAPSHEETS|". When a PDF is successfully generated, the "Result" parameter contains "Success."

 

   private onJobComplete(evt: any): void {
      let info: JobInfo = evt.jobInfo;
      this._sJobId = info.jobId;
      this._gp.getResultData(info.jobId, "result");
   }

   private onGetResultDataComplete(evt: any): void {
      let val: ParameterValue = evt.result;
      let sName: string = val.paramName;
      if (sName === "output_pdf") {
         this.status("Done.");
         window.open(val.value.url);
         this._btnGenerate.disabled = false;
         return;
      }
      let sVal: string = val.value;
      if (this.processSurveyNames(sVal))
         return;
      if (this.processMapSheets(sVal))
         return;
      if (this.processPDF(sVal))
         return;
      this.status(sVal);
   }

   private processSurveyNames(sVal: string): boolean {
      if (sVal.indexOf("SURVEYS|") !== 0)
         return false;
   ...

   private processMapSheets(sVal: string): boolean {
      if (sVal.indexOf("MAPSHEETS|") !== 0)
         return false;
   ...

   private processPDF(sVal: string): boolean {
      if (sVal !== "Success.")
         return false;
   ...

 

The widget can be tested and debugged using Web AppBuilder for ArcGIS (Developer Edition):

 

 

Publishing widgets to Portal can be tricky: our production Portal sits in a DMZ, and https calls to another server behind the firewall will fail, so widgets must reside on the Portal server. And even though our "Q" Portal sits behind the firewall and can see other servers, it's on a different domain. Thus, if I choose to host "Q" widgets on a different server, I need to configure CORS.  Here's an example of web.config:

 

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <cors enabled="true" failUnlistedOrigins="true">
            <add origin="*" />
            <add origin="https://*.uns.com"
                 allowCredentials="true"
                 maxAge="120">

                <allowHeaders allowAllRequestedHeaders="true">
                    <add header="header1" />
                    <add header="header2" />
                </allowHeaders>
                <allowMethods>
                     <add method="DELETE" />
                </allowMethods>
                <exposeHeaders>
                    <add header="header1" />
                    <add header="header2" />
                </exposeHeaders>
            </add>
            <add origin="https://*.unisource.corp"
                 allowCredentials="true"
                 maxAge="120">

                <allowHeaders allowAllRequestedHeaders="true">
                    <add header="header1" />
                    <add header="header2" />
                </allowHeaders>
                <allowMethods>
                     <add method="DELETE" />
                </allowMethods>
                <exposeHeaders>
                    <add header="header1" />
                    <add header="header2" />
                </exposeHeaders>
            </add>
            <add origin="http://*" allowed="false" />
        </cors>
    </system.webServer>
</configuration>

 

The file sits in a virtual web folder called "Widgets" with any widget folders to publish placed under that. When publishing a widget, initially there may be a CORS error:

 

 

but reloading the page and trying again should work.

 

 

Once the widget is published to Portal, it can be added to a new or existing application, and it's ready to use:

 

 

 

Because generating plot files can be a lengthy process, it may not be useful for the widget to wait for completion. Were I to put this into production, I would probably modify the tool to send plot files to a shared folder (or even a document management service) and send an email notification when it completes or fails.
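
Were I to go that route, the completion hook might look something like this (a sketch only; the share path, SMTP host, and addresses are all placeholders):

import shutil
import smtplib
from email.message import EmailMessage

def notify_user(sOutput, sUserEmail):
    # Copy the finished PDF to a shared folder instead of the scratch workspace
    sDest = shutil.copy(sOutput, r"\\server\share\plots")
    # Send a simple completion notice
    msg = EmailMessage()
    msg["Subject"] = "Leak survey plot ready"
    msg["From"] = "gis@example.com"
    msg["To"] = sUserEmail
    msg.set_content("Your plot is available at: " + sDest)
    with smtplib.SMTP("mailhost.example.com") as smtp:
        smtp.send_message(msg)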

[This was to be my user presentation at the 2020 DevSummit, which was cancelled.]

 

Chrome extensions are a fun way to implement functionality that is not normally available to a web client app. Extensions can make cross-domain requests to gather data from a variety of sources, and at the same time can filter out unwanted content. The Chrome API provides a rich suite of tools for focused application development.

 

Obviously, any app that is implemented as a Chrome extension will only run in Chrome. Also, Chrome extensions must be distributed through the Chrome Web Store, but that's not necessarily a bad thing, as I will show later.

 


Chrome extensions can contain background scripts, content scripts, a UI for saved user options, and so on. The manifest file is what ties it all together: if you've developed custom widgets for Web AppBuilder, you should already be familiar with the concept. Here's an example of manifest.json:

 

{
     "name": "Simple Map Example",
     "version": "1.0",
     "description": "Build an Extension with TypeScript and the ArcGIS API for JavaScript 4.x!",
     "manifest_version": 2,
     "icons": { "128": "images/chrome32.png" },
     "browser_action": {
          "default_popup": "popup.html",
          "default_icon": { "128": "images/chrome32.png" }
     },
     "options_ui": {
          "page": "options.html",
          "open_in_tab": false
     },
     "permissions": [ "storage" ],
     "content_security_policy": "script-src 'self' https://js.arcgis.com blob:; object-src 'self'"

}

 

One thing that's worth pointing out is the "content_security_policy" entry. This will be different depending on whether you use JSAPI 3.x or 4.x. See this post for more information.

 

Let's use a Visual Studio 2017 project template (attached) to create a simple extension. Because the template uses TypeScript, there are some prerequisites; see this post for more information.

 

First, let's create a blank solution called DevSummitDemo:

 

 

Next, add a new project using the ArcGIS4xChromeExtensionTemplate:

 

 

Here is the structure of the resulting project:

 

 

Building the project compiles the TypeScript source into corresponding JS files.  Extensions can be tested and debugged using Chrome's "Load unpacked" function:

 

 

Note that Chrome DevTools will not load TypeScript source maps from within an extension. That's normally not an issue, since you can debug the JS files directly. There is a way to debug the TypeScript source, but it involves some extra work. First, set up IIS Express to serve the project folder:

 

 

Then, edit the JS files to point to the localhost URL:

 

 

Now, you can set a breakpoint in a TS file and it will be hit:

 

 

The disadvantage of this approach is that you must re-edit the JS files every time you recompile them.

 

The next demo involves functionality that is available in JSAPI 3.x but not yet in 4.x: the ability to grab an image resource and display it as a layer. Here is a web page that displays the latest weather radar imagery:

 

 

The latest image has a fixed URL, so nothing special needs to be done to reference it. Wouldn't it be cool, however, to display an animated loop of the 10 latest images? But there's a problem.

 

Let's add the LocalRadarLoop demo project code (attached) to the VS2017 solution and look at pageHelper.ts:

 

     export class myApp {
          public static readonly isExtension: boolean = false;
          public static readonly latestOnly: boolean = true;
     }

 

When isExtension is false and latestOnly is true, the app behaves like the web page shown previously.

Note also this section of extension-only code, which must be commented out for the app to run as a normal web page:

 

               // **** BEGIN Extension-only block ****
               /*
               if (myApp.isExtension) {
                    let sDefaultCode: string = defaultLocalCode;
                    chrome.storage.local.get({ localRadarCode: sDefaultCode },
                         (items: any) => {
                              let sCode: string = items.localRadarCode;
                              let sel: HTMLSelectElement = <HTMLSelectElement>document.getElementById("localRadarCode");
                              sel.value = sCode;
                              this.setRadar();
                         });
                    return;
               }
               */

               // **** END ****

 

Because the images in the latest set do not have fixed names, it is necessary to obtain a directory listing to find out what they are. If you set latestOnly to false and run the app, however, you will run into the dreaded CORS policy error:

 

 

This is where the power of Chrome extensions comes into play. Set isExtension to true, and uncomment the extension-only code (which enables a saved user option), and load the app as an extension. Now you get the desired animation loop!

 

Note the relevant line in manifest.json which enables the XMLHttpRequest to run without a CORS error:
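
(The screenshot hasn't survived here, but the entry in question is a host permission. A minimal sketch of what it would look like, assuming the images come from the NWS radar host:)

     "permissions": [
          "storage",
          "https://radar.weather.gov/*"
     ]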

 

 

Now, as I pointed out earlier, Chrome extensions are distributed through the Chrome Web Store:

 

 

There are some advantages to this. For example, updates are automatically distributed to users. You can also create an "invisible" store entry, or publish only to testers. I find that last feature useful for distributing an extension that I created for my personal use only. Other distribution options do exist, which you can read about at this link.

 

In conclusion, Chrome extensions enable pure client-side functionality that otherwise would not be possible without the aid of web services. Chrome Web Store provides a convenient way to distribute extensions and updates, with public and private options.

 

The Local Radar Loop extension is available free at Chrome Web Store.

Having used Microsoft Visual Studio since version 6.0, I prefer it as a one-stop shop for as many kinds of development as possible, including C++, VB, C#, Python, and HTML5/TypeScript projects. One feature of VS that I really like is the ability to create project templates. VS2015 included a project template for TypeScript, but it was ugly as sin. VS2017 dropped it but failed to provide a viable alternative; being lazy, I continued to use the same version available online:

This must stop! Sometimes, you just have to get your hands dirty, so I decided to create my own project template from scratch. Fortunately, the TypeScript documentation has sections on Integrating with Build Tools, and Compiler Options in MSBuild, which provided valuable assistance. Also, see the MSBuild documentation and How to: Create project templates for more information.

 

Prerequisites:

The TypeScript website has download links to install the latest version for a number of IDEs, including VS2017. In addition, since the TypeScript folks now prefer that you use npm to install typings, you should install Node.


Warning! If you are behind a corporate firewall, you may run into this error when you try to use npm to install typings:

   npm ERR! code UNABLE_TO_GET_ISSUER_CERT_LOCALLY

If you see that, try setting this configuration at the command prompt:

   npm config set strict-ssl false

 

Create a generic TypeScript project:

While, formally, the best approach would be to create a new project type, my lazy approach recycles the C# project type and redefines the build targets (but there is a disadvantage – see below). The first step is to create a blank solution in VS2017 named “TypeScriptProjectTemplates.” In Explorer or the Command Prompt, navigate to the solution folder and create a subfolder named “BasicTypeScriptTemplate.” In that folder, create a file named “BasicTypeScriptTemplate.csproj,” containing the following text:

 

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props')" />
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <OutputType>Library</OutputType>
    <StartupObject />
    <OutputPath>.\</OutputPath>
    <IntermediateOutputPath>vs\</IntermediateOutputPath>
  </PropertyGroup>
  <PropertyGroup>
    <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">12.0</VisualStudioVersion>
  </PropertyGroup>
  <PropertyGroup>
    <TypeScriptToolsVersion>Latest</TypeScriptToolsVersion>
    <TypeScriptModuleKind>amd</TypeScriptModuleKind>
    <TypeScriptNoImplicitAny>true</TypeScriptNoImplicitAny>
    <TypeScriptESModuleInterop>true</TypeScriptESModuleInterop>
    <TypeScriptJSXEmit>react</TypeScriptJSXEmit>
    <TypeScriptJSXFactory>tsx</TypeScriptJSXFactory>
    <TypeScriptTarget>es5</TypeScriptTarget>
    <TypeScriptExperimentalDecorators>true</TypeScriptExperimentalDecorators>
    <TypeScriptPreserveConstEnums>true</TypeScriptPreserveConstEnums>
    <TypeScriptSuppressImplicitAnyIndexErrors>true</TypeScriptSuppressImplicitAnyIndexErrors>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
    <TypeScriptRemoveComments>false</TypeScriptRemoveComments>
    <TypeScriptSourceMap>true</TypeScriptSourceMap>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Release'">
    <TypeScriptRemoveComments>true</TypeScriptRemoveComments>
    <TypeScriptSourceMap>false</TypeScriptSourceMap>
  </PropertyGroup>
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets')" />
  <Target Name="Build" DependsOnTargets="CompileTypeScript">
  </Target>
  <Target Name="Rebuild" DependsOnTargets="CompileTypeScript">
  </Target>
  <Target Name="Clean" Condition="Exists('$(TSDefaultOutputLog)')">
    <ItemGroup>
      <TSOutputLogsToDelete Include="$(TSDefaultOutputLog)" />
    </ItemGroup>
    <ReadLinesFromFile File="@(TSOutputLogsToDelete)">
      <Output TaskParameter="Lines" ItemName="TSCompilerOutput" />
    </ReadLinesFromFile>
    <Delete Files="@(TSCompilerOutput)" Condition=" '@(TSCompilerOutput)' != '' " />
    <Delete Files="@(TSOutputLogsToDelete)" />
    <!-- <RemoveDir Directories="$(IntermediateOutputPath)" /> -->
  </Target>
</Project>

 

In VS2017, add the existing project to the solution. Within the project, create an “app” subfolder, and add a new TypeScript file named “main.ts,” containing the following text:

class Student {
     fullName: string;
     constructor(public firstName: string, public middleInitial: string, public lastName: string) {
          this.fullName = firstName + " " + middleInitial + " " + lastName;
     }
}

interface Person {
     firstName: string;
     lastName: string;
}

function greeter(person: Person) {
     return "Hello, " + person.firstName + " " + person.lastName;
}

let user = new Student("Jane", "M.", "User");

document.body.textContent = greeter(user);

 

In the project folder, add an HTML Page file named “index.html,” containing the following text:

<!DOCTYPE html>
<html>
<head>
     <title>TypeScript Greeter</title>
</head>
<body>
     <script src="./app/main.js"></script>
</body>
</html>

 

At this point, the project should look like this in Solution Explorer:

 

Building or rebuilding the project will generate TypeScript compiler output, and a file named “Tsc.out” will be created in a subfolder named “vs”. The “Tsc.out” file defines the compiler output to delete when cleaning the project; cleaning the project will also delete that file. [Note that if you build and clean “Release” without cleaning “Debug” beforehand, the source map files will still remain.]

 

Export the project template:

At this point, if you export the project to a template, you have a generic TypeScript project template. However, it will be displayed under the “Visual C#” category. If you want it to appear under the “TypeScript” category, there are additional steps to take. First, unzip the template to a new folder and edit “MyTemplate.vstemplate:”

 

Change the “ProjectType” value from “CSharp” to “TypeScript”. Zip the contents of the folder to a new zip file, and the template is ready to use.
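
For reference, the element lives in the template's metadata section; a minimal excerpt of MyTemplate.vstemplate after the change (surrounding elements omitted):

<VSTemplate Version="3.0.0" Type="Project" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
  <TemplateData>
    <!-- other metadata unchanged -->
    <ProjectType>TypeScript</ProjectType>
  </TemplateData>
  <!-- TemplateContent unchanged -->
</VSTemplate>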

 

Building a JSAPI project template:

Now that we have a basic TypeScript project template, the next step is to use it to create a template for a simple JavaScript API project.


First, create a new project, called “ArcGIS4xTypeScriptTemplate,” using the template created above. Open a Command Prompt, navigate to the project folder, and enter the following commands:

   npm init --yes
   npm install --save @types/arcgis-js-api

Back in Solution Explorer, select the project and click the “Show All Files” button. Select the “node_modules” folder, “package.json,” and “package-lock.json,” right-click, and select “Include In Project.” Finally, replace the contents of “main.ts” and “index.html” with the text given at the JSAPI TypeScript walk-through. Your project should now look like this:

 

You may notice that the “import” statements in “main.ts” are marked as errors, even though the esModuleInterop flag is set in the project:

 

This appears to be a defect in the Visual Studio extension. The project will build without errors, and the resulting page will load correctly. If it annoys you, you can always revert to the older AMD style statements:

 

At this point, you’re ready to export the template.

 

On a final note:

The JavaScript API is updated frequently, which means that you may also want to keep your project templates up to date. Rather than updating the source project and repeating the export steps, you might want to consider keeping the unzipped template folders in a standard location, and updating those directly. Then, all you have to do is zip them to create the updated template.

Recently, I found myself painted into a corner. Some time ago, I'd created custom tile caches for use with Runtime .NET that had only one scale level defined. They worked just fine in 10.2.7, but while preparing to upgrade to 100.x, I discovered that they caused 100.6 to hang. The workaround was simple enough: define additional scale levels, even if they aren't populated. However, the task of modifying the caches for nearly 150 users proved so daunting that I decided to let the app itself make the modification. Updates to the app are automatically detected and downloaded, which is a simpler mechanism than deploying a script to everyone's machine. The check is not a perceptible performance hit, and later on, as part of another update, I can simply deactivate it. So here's the code:

 

 

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Xml.Linq;

namespace NavAddin
{

     /*
      * Runtime 100.6 BUG: Tile caches that have only one scale level defined will hang up on loading.
      * WORKAROUND: Define additional scale levels
      * [Assumes that 100 < scale < 24000]
     */


     public static class TileCacheHelper
     {

          public const string L0Scale = "24000";
          public const string L0Resolution = "20.833375000083333";
          public const string L2Scale = "100";
          public const string L2Resolution = "0.086805729167013887";

          public static bool CheckTileCache(string sPath)
          {

               // Check if tile cache (i.e. a folder)

               if (!Directory.Exists(sPath))
                    return true; // Not a tile cache

               // Check if one scale level defined

               string sConfigPath = Path.Combine(sPath, "conf.xml");
               StreamReader sr = new StreamReader(sConfigPath);
               XDocument xDoc = XDocument.Load(sr);
               sr.Close();
               XElement xRoot = xDoc.Root;
               XElement xTCInfo = xRoot.Element("TileCacheInfo");
               XElement xLODInfos = xTCInfo.Element("LODInfos");
               int iLevelCount = xLODInfos.Elements("LODInfo").Count();
               if (iLevelCount > 1)
                    return true; // Not a problem
               if (iLevelCount < 1)
                    return false; // This should never happen?

               // Check if scale level is between 100 (L2) and 24000 (L0)

               XElement xLODInfo, xLevelID, xScale, xResolution;

               xLODInfo = xLODInfos.Element("LODInfo");
               xScale = xLODInfo.Element("Scale");
               string sScale = xScale.Value;
               double dScale = Convert.ToDouble(sScale);
               double dL0Scale = Convert.ToDouble(L0Scale);
               double dL2Scale = Convert.ToDouble(L2Scale);
               if (dScale >= dL0Scale)
                    return false;
               if (dScale <= dL2Scale)
                    return false;

               // Redefine scale levels

               xLevelID = xLODInfo.Element("LevelID");
               xLevelID.Value = "1";
               XElement xLOD0 = new XElement(xLODInfo);
               xLevelID = xLOD0.Element("LevelID");
               xLevelID.Value = "0";
               xScale = xLOD0.Element("Scale");
               xScale.Value = L0Scale;
               xResolution = xLOD0.Element("Resolution");
               xResolution.Value = L0Resolution;
               xLODInfos.AddFirst(xLOD0);
               XElement xLOD2 = new XElement(xLODInfo);
               xLevelID = xLOD2.Element("LevelID");
               xLevelID.Value = "2";
               xScale = xLOD2.Element("Scale");
               xScale.Value = L2Scale;
               xResolution = xLOD2.Element("Resolution");
               xResolution.Value = L2Resolution;
               xLODInfos.Add(xLOD2);

               // Write config file

               StreamWriter sw = new StreamWriter(sConfigPath);
               xDoc.Save(sw);
               sw.Close();

               // Rename L00 folder to L01

               string sLayersPath = Path.Combine(sPath, "_alllayers");
               string sL00Path = Path.Combine(sLayersPath, "L00");
               string sL01Path = Path.Combine(sLayersPath, "L01");
               Directory.Move(sL00Path, sL01Path);

               return true;

          }



     }
}

At some point in the 100.x lifespan of ArcGIS Runtime SDK for .NET, the old tried-and-true method of treating a MapView as just another WPF Visual went sailing out the window.  Granted, the ExportImageAsync method should have been a simple workaround, but for one drawback: overlay items are not included!

 

Now I don't know about you, but I find the OverlayItemsControl to be a great way to add interactive text to a map.  You can have it respond to a mouse-over:

 

 

Bring up a context menu:

 

 

Modify properties:

 

 

And so on.  In the old days, when you created an image of the MapView, the overlays would just come right along:

 

          private RenderTargetBitmap GetMapImage(MapView mv)
          {

               // Save map transform

               System.Windows.Media.Transform t = mv.LayoutTransform;
               Rect r = System.Windows.Controls.Primitives.LayoutInformation.GetLayoutSlot(mv);
               mv.LayoutTransform = null;
               Size sz = new Size(mv.ActualWidth, mv.ActualHeight);
               mv.Measure(sz);
               mv.Arrange(new Rect(sz));

               // Output map

               RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
                    (int)sz.Width, (int)sz.Height, 96d, 96d,
                    System.Windows.Media.PixelFormats.Pbgra32);
               rtBitmap.Render(mv);

               // Restore map transform

               mv.Arrange(r);
               mv.LayoutTransform = t;

               return rtBitmap;

          }

 

Not so today!  Try that approach in 100.6 and you just get a black box.    

 

My workaround:

 

  1. Create a Canvas
  2. Create an Image for the Mapview and add it to the Canvas
  3. Create an Image for every overlay and add it to the Canvas
  4. Create a bitmap from the Canvas

 

Step 3 is trickier than you would think, however, because of two issues:  1) relating the anchor point to the overlay, and 2) taking any RenderTransform into account.

 

As far as I can tell, this is the rule for determining the relationship between the overlay and the anchor point:

 

HorizontalAlignment: Center or Stretch, anchor point is at the center; Left, anchor point is at the right; Right, anchor point is at the left.

VerticalAlignment: Center or Stretch, anchor point is at the center; Top, anchor point is at the bottom; Bottom, anchor point is at the top.

For a Canvas element, the anchor point is at 0,0 -- however, I have not found a good way to create an Image from a Canvas [if the actual width and height are unknown].

 

To create an Image from the element, any RenderTransform must be removed before generating the RenderTargetBitmap.  Then, the Transform must be reapplied to the Image.  Also, you need to preserve HorizontalAlignment and VerticalAlignment if you're creating a page layout using a copy of the MapView, so that the anchor point placement is correct.

 

So here it is, the code for my workaround:

 

using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

using Esri.ArcGISRuntime.Geometry;
using Esri.ArcGISRuntime.UI;
using Esri.ArcGISRuntime.UI.Controls;

namespace Workarounds
{

     public struct MapOverlayExport
     {
          public Image OverlayImage;
          public MapPoint Anchor;
          public MapPoint TopLeft;
     }

     public static class MapExportHelper
     {

          // Export bitmap from map with XAML graphics overlays

          public static async Task<ImageSource> GetMapImage(MapView mv)
          {

               RuntimeImage ri = await mv.ExportImageAsync();
               ImageSource src = await ri.ToImageSourceAsync();
               if (mv.Overlays.Items.Count == 0)
                    return src; // No XAML overlays

               // Create canvas

               double dWidth = mv.ActualWidth;
               double dHeight = mv.ActualHeight;
               Rect rMap = new Rect(0, 0, dWidth, dHeight);
               Size szMap = new Size(dWidth, dHeight);
               Canvas c = new Canvas();

               // Add map image

               Image imgMap = new Image()
               {
                    Height = dHeight,
                    Width = dWidth,
                    Source = src
               };
               imgMap.Measure(szMap);
               imgMap.Arrange(rMap);
               imgMap.UpdateLayout();
               Canvas.SetTop(imgMap, 0);
               Canvas.SetLeft(imgMap, 0);
               c.Children.Add(imgMap);

               // Add map overlays

               List<MapOverlayExport> Overlays = GetMapOverlays(mv);
               foreach (MapOverlayExport overlay in Overlays)
               {

                    // Get Image and location

                    Image img = overlay.OverlayImage;
                    MapPoint ptMap = overlay.TopLeft;
                    Point ptScreen = mv.LocationToScreen(ptMap);

                    // Create and place image of element

                    Canvas.SetTop(img, ptScreen.Y);
                    Canvas.SetLeft(img, ptScreen.X);
                    c.Children.Add(img);
                    img.UpdateLayout();

               }
               c.Measure(szMap);
               c.Arrange(rMap);
               c.UpdateLayout();

               // Create RenderTargetBitmap

               RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
                    (int)dWidth, (int)dHeight, 96d, 96d, PixelFormats.Pbgra32);
               rtBitmap.Render(c);
               return rtBitmap;

          }

          public static List<MapOverlayExport> GetMapOverlays(MapView mv)
          {

               List<MapOverlayExport> Overlays = new List<MapOverlayExport>();
               foreach (object obj in mv.Overlays.Items)
               {

                    // Get element and location

                    if (!(obj is FrameworkElement elem))
                    {
                         Debug.Print("MapExportHelper: Non-FrameworkElement encountered.");
                         continue;
                    }
                    double dW = elem.ActualWidth;
                    double dH = elem.ActualHeight;
                    if ((dH == 0) || (dW == 0))
                    {
                         Debug.Print("MapExportHelper: Unsupported FrameworkElement encountered.");
                         continue;
                    }

                    // Remove RenderTransform and RenderTransformOrigin

                    Transform tRender = elem.RenderTransform;
                    Point ptOrigin = elem.RenderTransformOrigin;
                    elem.RenderTransform = null;
                    elem.RenderTransformOrigin = new Point(0,0);
                    elem.Measure(new Size(dW, dH));
                    elem.Arrange(new Rect(0, 0, dW, dH));
                    elem.UpdateLayout();

                    // Create image of element

                    ImageSource src = null;
                    if (elem is Image imgSrc)
                         src = imgSrc.Source;
                    else
                    {
                         RenderTargetBitmap bmp = new RenderTargetBitmap(
                              (int)dW, (int)dH, 96d, 96d, PixelFormats.Pbgra32);
                         bmp.Render(elem);
                         src = bmp;
                    }
                    Image img = new Image()
                    {
                         Height = dH,
                         Width = dW,
                         Source = src,
                         HorizontalAlignment = elem.HorizontalAlignment,
                         VerticalAlignment = elem.VerticalAlignment,
                         RenderTransform = tRender,
                         RenderTransformOrigin = ptOrigin
                    };

                    // Restore RenderTransform and RenderTransformOrigin

                    elem.RenderTransform = tRender;
                    elem.RenderTransformOrigin = ptOrigin;

                    // Find top left location in map coordinates

                    MapPoint ptMap = MapView.GetViewOverlayAnchor(elem);
                    Point ptScreen = mv.LocationToScreen(ptMap);
                    double dY = 0;
                    double dX = 0;
                    switch (elem.VerticalAlignment)
                    {
                         case VerticalAlignment.Center:
                         case VerticalAlignment.Stretch:
                              dY = -dH / 2;
                              break;
                         case VerticalAlignment.Top:
                              dY = -dH;
                              break;
                    }
                    switch (elem.HorizontalAlignment)
                    {
                         case HorizontalAlignment.Center:
                         case HorizontalAlignment.Stretch:
                              dX = -dW / 2;
                              break;
                         case HorizontalAlignment.Left:
                              dX = -dW;
                              break;
                    }
                    Point ptTopLeftScreen = new Point(ptScreen.X + dX, ptScreen.Y + dY);
                    MapPoint ptTopLeftMap = mv.ScreenToLocation(ptTopLeftScreen);

                    // Add exported overlay to list

                    Overlays.Add(new MapOverlayExport()
                    {
                         OverlayImage = img,
                         Anchor = ptMap,
                         TopLeft = ptTopLeftMap
                    });

               }

               return Overlays;

          }

     }
}

 

P.S. -- If you want ExportImageAsync to include overlays, vote up this idea:  GeoView.ExportImageAsync should include overlays 

Originally posted by Courtney Kirkham, September 18, 2019 from the MapThis! Blog

While OAuth 2.0 is Esri’s recommended methodology for handling security and authentication for their ArcGIS platform, not everyone using it understands what it does or how to implement it. Here at GEO Jobe, we’ve had to explain it to more than a few of the people we’ve worked with. As such, we thought we’d lay out a quick guide to what OAuth is and how it works.

OAuth 2.0 handles security and authentication for the ArcGIS platform.

What is OAuth 2.0?

OAuth 2.0 is the protocol that ensures only users you give permission to can access your ArcGIS content. Esri chooses to use OAuth 2.0 for a number of reasons, including this list they’ve provided:

  • OAuth 2.0 meets the needs of both users and applications.
  • There are strong security practices around OAuth 2.0.
  • OAuth 2.0 is designed to function at Internet-scale across domains, networks, cloud services, and applications.
  • As a widely accepted standard OAuth 2.0 has many libraries and helpers for a variety of languages and platforms.

This is an important part of security for controlling who can access or edit content, as well as managing credit usage. By using OAuth 2.0 in your applications, you can make a map of company assets available to anyone in your company while still keeping it hidden from the public. A company working on building a new neighborhood could create a map to track the progress of the homes being built, while ensuring only supervisors can edit the status of the houses.

Perhaps the most important way OAuth 2.0 manages security is controlling access to premium content and services. Since interacting with these resources consumes credits, and credits cost real money, OAuth 2.0 is an important part of making sure that only the people you want accessing those resources are able to do so.
(Bonus: For additional control over security while reducing the overhead in your org, check out security.manager)

You’re not getting that data without valid credentials. Image Source

How does OAuth 2.0 work?

Here at GEO Jobe, we’ve found the best way to explain how OAuth 2.0 is with an analogy. Say your friend, Chris, got access to some exclusive event – a networking opportunity, a party, or something like that. There is a private guest list for the event, and the doormen are checking everyone. Your friend tells you all you need to do is tell the doorman you’re there with Chris, and the doorman will let you in.

When you get to the event and check in with the doorman, one of three things can happen. We’ve outlined them each below, and explained what they mean in the context of OAuth 2.0.

The Doorman Finds Your Friend; You Get a Wristband and Go In

This is what happens when OAuth 2.0 works. You're able to get in and see your friend. In the case of ArcGIS, this means you requested access to content that you have permission to see. After OAuth checks your credentials, it gives you a token (the wristband) that's added to all your requests for content after that. Then, you get whatever you need (that you have permission to view), and everything is good.

The Doorman Finds Your Friend and You Don’t Get In

This is when the doorman comes back and tells you they found Chris, but Chris says they don’t know you. While this may be an awkward social situation, in OAuth 2.0, it’s pretty simple. It means you tried to access content, and OAuth 2.0 doesn’t think you are supposed to be able to see it. This will often result in an “Invalid Redirect URI” error.

In terms of development, this happens because the request is coming from a URL the app doesn't recognize. To fix it, go to the app in your ArcGIS Organization that was used to register for OAuth 2.0. Then, in the Settings menu, view the "Registered Info". The domain sending the request will need to be included in the Redirect URIs.

The Doorman Can’t Find Your Friend

Maybe your friend left the party. Maybe the doorman thought the “Chris” they were looking for was a “Christopher” instead of a “Christine”. Regardless of the reason, the doorman can’t find your friend, and they’re not letting you into the party. When this happens, OAuth 2.0 will return an error stating that there is an “Invalid Client ID”. This is also easy for a developer to fix.

This situation occurs because there isn't an app in the ArcGIS Organization in question with an App ID that matches what OAuth 2.0 was told to look for. This can happen if the app was deleted from your ArcGIS Org, or if the code where the App ID was specified was altered. To fix it, check where the App ID is specified in the code for the OAuth 2.0 call. Also, check the application in the ArcGIS Org used to register for OAuth 2.0. If the application was deleted, you will need to create and register a new application, then use that App ID. If the application exists, check under the "Settings" menu and the "Registered Info" to find the App ID. This should match the value for the App ID in the code. If it doesn't, recopy the App ID from the application in ArcGIS, then paste the value into the code where the OAuth 2.0 information is initialized.

How to Implement an OAuth 2.0 Application

Setting up an OAuth 2.0 application in your ArcGIS Organization is fairly simple. In fact, it only takes five steps! It’s so easy, we’ve outlined the process below.

1. To start, sign into your ArcGIS Org and go to the Content menu. From there, click on “Add Item” and choose the option for “An Application”.

2. Next, you’ll select the type “Application” and fill out some basic information.

3. After you add the item, go to the Settings page and click the "Registered Info" button. Note: While on the settings page, you may want to select the option for "Prevent this item from being accidentally deleted."

4. After clicking the "Registered Info" button, the App ID you will need should be visible on the left. The final step will be to update the Redirect URIs for the application. Click the "Update" button on the right side of the screen.

5. A popup with the Registered Info should appear. Any application a developer builds that needs to authenticate via OAuth into your ArcGIS organization must have its domain added to the approved Redirect URIs of an OAuth application. Add the appropriate domains in the textbox, then click "Add". After your domains are all added, click the "Update" button at the bottom of the popup.

And there you have it! Five easy steps and you’re ready to use OAuth 2.0 in your ArcGIS Organization.
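
As a quick sanity check that the registration works, you can sign in with the new App ID from code. Here's a minimal sketch using the ArcGIS API for Python (the org URL and App ID are placeholders):

from arcgis.gis import GIS

# Opens a browser sign-in page; paste the returned code to complete the OAuth flow
gis = GIS("https://your-org.maps.arcgis.com", client_id="YOUR_APP_ID")
print("Signed in as: " + gis.users.me.username)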

You can relax, knowing your ArcGIS content is safe and only accessible by those you choose.

Conclusion

Securing your ArcGIS data is important. OAuth 2.0 can make it simple. If you need any assistance setting up OAuth for your ArcGIS Organization, or need some custom applications built while keeping your data secure, reach out to us at connect@geo-jobe.com. We’ll be glad to help!


Does the ArcGIS API for JavaScript work with Content Security Policy? The short answer is yes, but which version you're using (4.x vs. 3.x) determines the approach to take. Dojo allows you to configure CSP support:

 

// mapconfig.js
window.dojoConfig = {
     async: true,
     has: {"csp-restrictions": true}
}

 

So the following example works [note that blob support must be enabled]:

 

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta http-equiv="content-security-policy"
               content="script-src 'self' https://js.arcgis.com blob:; object-src 'self'" />

     <title>Using ArcGIS API for JavaScript with CSP</title>
     <script src="./mapconfig.js"></script>
     <link rel="stylesheet" href="https://js.arcgis.com/4.12/esri/css/main.css">
     <script src="https://js.arcgis.com/4.12/"></script>
     <style>
          html, body, #map {
               padding: 0;
               margin: 0;
               height: 100%;
               width: 100%
          }
     </style>

</head>
<body>
     <div id="map"></div>
     <script src="./mapinit412.js"></script>
</body>
</html>

// mapinit412.js
require([
     "esri/Map",
     "esri/views/MapView"
], function (Map, MapView) {

     var map = new Map({
          basemap: "topo-vector"
     });

     var view = new MapView({
          container: "map",
          map: map,
          center: [-118.71511, 34.09042],
          zoom: 11
     });
});

 

Note that CSP doesn't allow any inline JavaScript, so even the simplest blocks of code need to be in a separate file.

 

What about 3.x?  Aye, there's the rub.  Although Dojo supports CSP, the ArcGIS API 3.x does not: it contains code that CSP will reject.  Here's an example from VectorTileLayerImpl.js:

 

l = Function("return this")();

 

The only way to get 3.x to work with CSP is to include the dreaded 'unsafe-eval' in the policy string.  With that, the following example will work:

 

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta http-equiv="content-security-policy"
               content="script-src 'self' 'unsafe-eval' https://js.arcgis.com; object-src 'self'" />

     <title>Using ArcGIS API for JavaScript with CSP</title>
     <script src="./mapconfig.js"></script>
     <link rel="stylesheet" href="https://js.arcgis.com/3.29/esri/css/esri.css">
     <script src="https://js.arcgis.com/3.29/"></script>
     <style>
          html, body, #map {
               padding: 0;
               margin: 0;
               height: 100%;
               width: 100%
          }
     </style>

</head>
<body>
     <div id="map"></div>
     <script src="./mapinit329.js"></script>
</body>
</html>

// mapinit329.js
require(["esri/map"], function (Map) {
     var map = new Map("map", {
          center: [-118, 34.5],
          zoom: 8,
          basemap: "topo"
     });
});

I received a request to provide all videos and other files available for an area of interest on the map.

 

Using ArcGIS Pro, I digitized a polygon to enclose the desired area. Then I:

  • Used the polygon to select all pipeline features that intersect the area.
  • Exported the selected pipes to Excel.
  • Copied only the user-defined unique ID field into a local text file, as list1.txt, ensuring no extra newlines or whitespace at the beginning or end of the file.
  • Moved list1.txt to a new directory labeled 'stagingFiles'.

 

 

 

Using the command line, write the contents of the directory that contains the desired files to a local text file, as list2.txt:

dir /b > list2.txt

Then remove the name 'list2.txt' from the text file, as well as the names of any subdirectories. If subdirectories exist, create another text file within each subdirectory, as list2_1.txt, move it to the 'stagingFiles' directory, and likewise remove its own name and any subdirectory names from it; repeat for the remaining subdirectories.

 

 

Use this Python script, and follow the remaining instructions within it:

 

# (Python 2) Match each unique ID in list1.txt against the file listing in list2.txt
import re


with open(r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list1.txt', 'r') as f:
    fileLines = [line.strip() for line in f]  # the unique IDs, one per line
    a = fileLines
    for i in range(len(a)):
        pattern = re.compile(r'(.*)' + re.escape(a[i]) + r'(.*)')
        with open(r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list2.txt', 'r') as g:
            contents = g.read()
            matches = pattern.finditer(contents)
            for match in matches:
                print match.group(0)


print "Copy and paste the results of the regular expression (above this printed statement) to a text file, as list3.txt, save it, and close it."
resultsFile = raw_input("paste path to text file here: ")
# \\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list3.txt

with open(resultsFile, 'r') as h:
    b = [line.strip() for line in h]
    c = list(set(b))  # de-duplicate the matched filenames
    for name in c:
        print name

print 'Now take the results above, and paste them into list4.txt (move the text file to the directory you intend to search for files).'
print 'This file is used in conjunction with the command line argument FOR /F "delims=" %N in (list4.txt) do COPY "%N" "C:\\targetFolder" to copy files to an intended directory.'
print 'If subdirectories exist, make a new list3.txt from the list2_# file (by changing the file name in the path above), and perform the instructions in this script again.'
print 'Find out what to do when a filename collides with another (e.g. the COPY overwrite yes/no/all prompt).'
print 'Separate the cmd line results to single out the files that did not copy successfully.'
print 'If feasible, keep the current filename and append "_1" to it.'
print 'Once done, remove list4.txt and the target folder (if applicable) from the intended directory.'
print "______________________________________________Operation is Complete"





# Try the findall() method without groups, and it should work.
# Another use case for regex would be for validating user input in client apps to ensure what they input meets a criteria.






    # Didn't work

        #     subbedContents = pattern.sub(r'\0', contents)
        #     print subbedContents
        # i +=1


    # for dirpath, dirnames, filenames in os.walk(r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\test'):
    # \\GIS-WEBAPP\Hyperlinks\StormCCTV
        # for file in filenames:
        #     matches = pattern.finditer(re.escape(file))
        #     print matches
        #     for match in matches:
        #         print match


        # Didn't work for list4.txt:
        # trimmedResultsFile = r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list4.txt'
        # with open('{}.format(trimmedResultsFile)', 'w') as j:
        #     j.write(str(c[i]))


# fnmatch and os modules did not work





# To get the filenames of the resources in a directory:

# Hold the "Shift" key, right-click the folder and select "Open Command Window Here."
# This only works with folders, not libraries.

# Type "dir /b > dirlist.txt" without quotes and press "Enter."
# This creates a list containing file names only.

# Open Microsoft Excel and press "Ctrl-O" to bring up the Open dialog window.

# Open Txt file in Excel
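
For what it's worth, the whole match-and-copy workflow above could be consolidated into a single script that skips the intermediate list files and handles name collisions. A rough sketch (Python 3; all paths are placeholders):

import os
import shutil

ids = [line.strip() for line in open(r"C:\stagingFiles\list1.txt")]
src_root = r"C:\source"      # directory tree to search
dest = r"C:\targetFolder"    # destination folder
os.makedirs(dest, exist_ok=True)

for dirpath, dirnames, filenames in os.walk(src_root):
    for name in filenames:
        if any(uid in name for uid in ids):
            target = os.path.join(dest, name)
            base, ext = os.path.splitext(name)
            n = 1
            while os.path.exists(target):
                # On a name collision, append _1, _2, ... instead of overwriting
                target = os.path.join(dest, "%s_%d%s" % (base, n, ext))
                n += 1
            shutil.copy(os.path.join(dirpath, name), target)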

Sent the client the requested files.

Resources:

 

Google:
How Do I select multiple files in a folder from a list of file names
Select multiple files in same folder with variations of filenames python
Select multiple files in same folder with variations of filenames python regular expression
write contents of directory to text file.
Select variations of many file names at once python regex
regular expression tester

 

https://pymotw.com/2/glob/
https://docs.python.org/2/library/fnmatch.html

 

https://www.tenforums.com/general-support/110415-how-do-i-select-multiple-files-folder-list-file-names.html
How Do I select multiple files in a folder from a list of file names

Replied 16 May 2018:

The simplest way to use a text file with a filename on each line (such as list.txt) to copy the files to a folder such as C:\Destination is by using this single command in a command prompt:

FOR /F "delims=" %N in (list.txt) do COPY "%N" "C:\targetFolder"

'for' loops through all the filenames in list.txt
"delims=" says use the whole of each line as the parameter %N
the quotes around %N in 'copy "%N"' allows for any filenames that contain spaces
C:\Destination specifies the folder you want to copy to (it must already exist, create it first if necessary)

If the text file contains just the file names, then the command has to be run in the folder that contains the files to be copied. To go to that folder, first use the 'change directory' command: CD <full path to the folder>
eg: CD C:\Source_folder

If the text file contains the full path and filename on each line, eg:
C:\Users\Me\Pictures\SourceFiles\Filename.jpg
...then the CD step is not needed.

If the text file is in a different folder, give the full path to it in the FOR command, eg: (C:\Temp\list.txt)

https://answers.microsoft.com/en-us/windows/forum/windows_vista-files/select-multiple-files-in-same-folder-from-a-list/d6bba385-f87d-448a-ada8-76cec34d5a63?page=1
Select multiple files in same folder from a list of file names
Tiffany McLeod Replied on April 25, 2009

You can use an Excel spreadsheet to automatically format the code, then copy and paste it into a text document which you would save as a *.bat file. I have a spreadsheet I've created for this, and I'll share it with you; you can download it by clicking on the following link (hopefully).

Download Excel Spreadsheet

How to Use:

Open the Spreadsheet.

At the bottom of the screen, you will see that there are two worksheets in this file.
If your list of names includes the full file path (example: c:\weddings\sally\img1.jpg), choose the worksheet labeled "Full Path".
If your list only has the filenames (example: myimage.jpg), choose the worksheet labeled "Filename Only".

I'll explain the Filename Only worksheet:


You will NOT enter any Data into the first three columns: that's the output.
In column E (Current Folder Path) Type the full folder path where the pictures are currently located. Make sure that you include the final "\", as shown above. You will only need to type this once.

The next Column (F) is the File Name Column. Paste your list of file names here, one name per cell (the list you paste from should have only one name per line). My spreadsheet allows for well over 200 filenames before the formulas stop working (for more files, simply extend the formulas).

Type the full path of the folder you want to move the pictures to into column G, as shown.

Now, we'll look back to the First three columns, A,B, and C. Find the spot where the output in column B no longer has a filename after the folder path. Select all the output in the three columns, above the ending line. For example, in the sheet above, the ending line is Line 7 (we don't want to include that line), so the selected range would be A2:C6. For two hundred files, the range would be A2:C202.

Copy.

Open notepad. Paste.

Save as a .bat file. (Choose Save. Select the folder you wish to save it in. Type move.bat into the name line. Make sure that "All files" is selected from the file type drop-down list.) This .bat file is reusable. Simply right-click and choose Edit to reuse it instead of making a new one each time.

Once the .bat file is saved, double-click on it to run it.

Check your destination folder, and make sure the files moved as desired.

If you want to copy the files instead of move them, simply type COPY into cell A2.

Use the Full Path worksheet in the same manner, except you don't need to enter the current folder path into cell E2.

Now for the Disclaimer: Follow these instructions at your own risk. I take no responsibility for any damage caused to your data or system as a consequence of using my spreadsheet or following these instructions. Back up your data before using the .bat file.
You should test this process to make sure that you understand it, before using it for important files.

I freely admit that this is probably a bit clunky and inelegant, but it works and it is very versatile for generating large batches of DOS commands.

Best Wishes,
Tiffany McLeod aka BookwormDragon


https://realpython.com/working-with-files-in-python/#simple-filename-pattern-matching-using-fnmatch
Working With Files in Python
by Vuyisile Ndlovu Jan 21, 2019


https://thispointer.com/5-different-ways-to-read-a-file-line-by-line-in-python/
5 Different ways to read a file line by line in Python


https://www.sevenforums.com/general-discussion/201734-select-search-multiple-files-copy-paste-new-folder.html
select/search multiple files, copy and paste to a new folder


Article written by Amy Niessen with contributions from Ciara Rowland-Simms

 

On Wednesday, May 15th the Cardiff R&D Center co-hosted a Rust and C++ birthday party at Yolk Recruitment to celebrate Rust's 4th birthday! Despite short notice, we were able to get the word out in time for a nice mixture of full-time, freelance, and hobbyist programmers, as well as a few students, to join us. Quite a few people expressed interest in helping out with future events, and there was a lot of enthusiasm for a Rust/C++ community in Cardiff!

 

To begin: you can't have a party without cake, and to celebrate the birthday properly we had one featuring Ferris the crab, made from icing by Jack Kelly's partner, Sofia.

 

cake and Ferris the crab

 

We then began to introduce our speakers. We had Dan Morgan from DevOpsGroup, Ciara Rowland-Simms from Esri, Chris Light from Esri, and Jack Kelly from DevOpsGroup, with Chris doubling as MC for the event.

 

The first two talks were about learning new languages, specifically Rust and C++.

 

Dan had never done any C++ until that week, and he spoke on the confusion you face when trying to find best practices and up-to-date learning materials online. His talk will be part of a series documenting his journey into C++, driven by advice from the audience about which resources to use next!

 

Dan and Rust

 

Ciara gave a talk on learning Rust, having also never used the language. By contrast, there is a very coherent documentation story for Rust, as it is a very modern language. The learning experience was therefore comparatively painless, and she was able to cover install and setup, including debugging, along with some cool and some controversial Rust language features (such as the heavy use of macros, variable shadowing, and implicit returns).

 

Ciara and Rust

 

Chris’s talk, Modern C++: ACCU 2019 revelations, covered some of the cool new features of C++20. He also discussed C++17 and, more broadly, the modernization of the language. The talk provoked some really good discussion about the networking capabilities lacking in the standard library and provided a great space for talking about where the language is heading.

 

Chris and C++

 

The final talk, from Jack Kelly, was a dive into Rust best practices, helpful tips, formatting and linting tools, and how easy CI/CD can be with Rust. He really highlighted why people are so excited about Rust and how easily it can be leveraged to hit the ground running, even with limited experience in the language.

 

 

 

In the end, we had some specific language questions, which is always good. There were a lot of really good discussions taking place during the event and, of course, people were already sharing excitement in anticipation of future meetups.

  

Going forward, we also hope to bring in more speakers – which hopefully won’t be difficult given the enthusiasm from audience members at our first event! We really liked having talks which were a mixture of beginner accessible and provoking good discussion for more experienced developers. We struck that balance pretty well this time and hope to do so again in future meetups.

 

Overall, we were really pleased with how the event went and are really excited to see the beginning of a Cardiff C++/Rust community where we can all learn from each other and grow! We look forward to the next meetup and will be sure to announce it on the Meetup.com page. Be sure to follow it to stay in the loop on our next adventure!

Last month we hosted a unique GeoDev Webinar in which Manushi Majumdar shared her presentation on "Thinking Spatially and Statistically". Manushi introduced types and characteristics of spatial data and advanced GIS analysis techniques. She covered a few basic concepts of statistics, showed how they differ in a spatial context, and advanced towards Spatial Machine Learning with ArcGIS.

 

Here are the questions that were received during the webinar along with their respective answers:

 

Q: What is the difference between machine learning and statistics? For example, with regression, is there a difference? This always puzzles me!

A: Here is a resource to understand the difference: https://www.kdnuggets.com/2016/11/machine-learning-vs-statistics.html

 

Q: Are there any geoprocessing tools built into ArcGIS for running machine learning algorithms?

A: Yes, ArcGIS has support for several Machine Learning techniques. We would suggest looking at this blog post to learn more about ML support within ArcGIS Desktop. https://www.esri.com/arcgis-blog/products/arcgis-pro/analytics/machine-learning-in-arcgis/

 

Q: Can you provide the link to the notebook again?

A: Hub-Tutorials/GeoDev_ServiceRequests.ipynb at master · esridc/Hub-Tutorials · GitHub

 

Q: Are there any other good resources for finding examples of utilizing Machine Learning with GIS?

A: Here are just a few ArcGIS blogs demonstrating examples:
https://www.esri.com/arcgis-blog/products/product/analytics/density-based-clustering-exploring-fatal-car-accident-data-to-find-systemic-problems/


https://www.esri.com/arcgis-blog/products/arcgis-pro/analytics/using-forest-based-classification-and-regression-to-model-and-estimate-house-values/


https://www.esri.com/arcgis-blog/products/arcgis-enterprise/analytics/the-science-of-where-seagrasses-grow-arcgis-and-machine-learning/?rmedium=redirect&rsource=blogs.esri.com/esri/arcgis/2017/09/18/the-science-of-where-seagrasses-grow-arcgis-and-machine-learning

 

Q: Spatial Join: I see the tool has the capability to join two layers without common attributes. But can this be done on multiple layers in a single shot? The built-in tool only has the option to select two layers. What are the options?

A: Join works on a 1:1 principle; you can only join one layer to another. That said, you can use a table 'Relate' to join one table to many using a common attribute in those tables (this does not work spatially).

 

Q: Is it possible to integrate ArcGIS with machine learning software like Jupyter Notebook?

A: You can use ArcPy as well as the ArcGIS API for Python in Jupyter notebooks. 

 

Q: Is there a way to use machine learning to predict or project possible future incident locations without assigning a z-value?

A: Z-score (standard score) denotes the number of standard deviations from the mean a data point is. Simply put, it conveys the distribution of a point around the mean. Prediction or Classification does not need z-scores for input variables.
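For reference, the z-score of a value x is z = (x - μ) / σ, where μ is the mean and σ is the standard deviation of the distribution.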

 

Q: Could you walk us through the hotspot analysis? How do you access these tools?

A: Read through https://pro.arcgis.com/en/pro-app/tool-reference/spatial-statistics/h-how-hot-spot-analysis-getis-ord-gi-spatial-stati.htm to learn more about Hot Spot Analysis. It can be accessed within the Spatial Statistics (Mapping Clusters) toolbox in ArcGIS Desktop and under the Analyze Patterns category in ArcGIS Online.

 

Q: Can we do the machine learning analyses using 10.6 geoprocessing tools?

A: Yes; in addition to the usual tools, ArcGIS Desktop 10.6 comes with two new tools, Deep Learning Model To Ecd and Export Training Data For Deep Learning.

 

Q: I have a GIS online account. How can I access the data demonstration in ArcGIS online?

A: The data I used for my examples is publicly available. Once you add it to your ArcGIS Online account, you can use the Summarize Center and Dispersion tool there to generate spatial mean, median and standard dispersion for your data.

 

Q: How can I use ArcGIS for linear regression or logistical regression analysis?

A: Support for regression, both linear and logistic, is available in ArcGIS Desktop Spatial Statistics (Modeling Spatial Relationships) toolbox. Click here https://pro.arcgis.com/en/pro-app/tool-reference/spatial-statistics/an-overview-of-the-modeling-spatial-relationships-toolset.htm to learn more.

 

Q: Is machine learning part of programming, or is it remote sensing?

A: Machine Learning involves concepts of statistics as well as algorithms to solve problems based on patterns or inferences drawn from data. Remote sensing, on the other hand, involves studying the planet using remote instruments. Machine Learning can have applications in the field of Remote Sensing, for instance, to detect buildings and roads using satellite imagery.

 

Q: Is the Jupyter environment embedded directly within Esri ML module?

A: ArcGIS Enterprise 10.7 comes with Hosted Notebooks, which let you perform spatial analysis and data science workflows in a notebook within your portal. Other than that, you can use ArcPy or the ArcGIS API for Python in an external Jupyter Notebook too.

 

Q: Which interpolation technique is best suited when you are dealing with underground water data?

A: While it depends on your sampling size and distance, kriging might be a good interpolation technique.

 

Q: Please suggest out-of-the-box tools provided by ArcGIS for machine learning algorithms.

A: ArcGIS has support for several Machine Learning techniques. I'd suggest looking at this blog post to learn more about ML support within ArcGIS Desktop. https://www.esri.com/arcgis-blog/products/arcgis-pro/analytics/machine-learning-in-arcgis/

 

For more information, Manushi shared her presentation: GeoDev Webinar - Thinking Spatially and Statistically

 

Also, for the full recording of the webinar, click here.

Well!  In my previous article, I presented a workaround for a bizarre MMPK bug that reappeared in version 100.5 of the ArcGIS Runtime SDK for .NET; but just when I thought I couldn't find another one even more unthinkably bizarre, up pops this: QueryRelatedFeaturesAsync will return a bad result when the FeatureLayer of a GeodatabaseFeatureTable is a sublayer of a GroupLayer.

 

Huh?  You can check out the attached Visual Studio project for confirmation, but in the meantime we have a conundrum.  It would appear that, just when the GroupLayer class is finally implemented, we have to chuck it right back out until a safer, more effective version is delivered.  I've updated an earlier article of mine to reflect that situation.  Nonetheless, I got to thinking about how Runtime support for querying M:N relationships in a mobile map package didn't even start to appear until 100.4, and what I would need to do in order to support them were I still stuck at 100.3. Or, what if QueryRelatedFeaturesAsync were to fail again in a future version?

 

Supporting one-to-one and one-to-many relationships is actually fairly simple, since the RelationshipInfo class gives the required information when retrieved from both origin and destination tables.  But many-to-many relationships are another can of worms entirely, because some crucial information is inaccessible via Runtime, even though it's encoded in the geodatabase.

 

Contrary to the wording in the documentation for the RelationshipInfo class [and I quote: "A relationship consists of two and only two tables"], M:N relationships involve a third, intermediate table.  Querying M:N relationships requires knowing how to query that intermediate table, and that's precisely the information which is withheld from the Runtime developer.

 

Let's take a look at how relationships are stored in a mobile map package.  In my previous article, I introduced you to the GDB_ServiceItems table.  The ItemInfo field in that table stores the JSON data used to hydrate the ArcGISFeatureLayerInfo class:

 

View of GDB_ServiceItems in SQLiteSpy

 

Here's the JSON that describes the RegulatorStation to GasValve relationship from the origin role:

 

{
     "id": 4,
     "name": "Gas Valve",
     "relatedTableId": 10,
     "cardinality": "esriRelCardinalityManyToMany",
     "role": "esriRelRoleOrigin",
     "keyField": "OBJECTID",
     "composite": false,
     "relationshipTableId": 73,
     "keyFieldInRelationshipTable": "REGSTATIONOBJECTID"
}

 

And here's the description for the destination role:

 

{
     "id": 4,
     "name": "Regulator Station",
     "relatedTableId": 13,
     "cardinality": "esriRelCardinalityManyToMany",
     "role": "esriRelRoleDestination",
     "keyField": "OBJECTID",
     "composite": false,
     "relationshipTableId": 73,
     "keyFieldInRelationshipTable": "GASVALVEOBJECTID"
}

 

The two crucial items that are not included in the RelationshipInfo class are relationshipTableId and keyFieldInRelationshipTable.  But how to get at that information in your app?  Aye, there's the rub.  In short, you need to extract the geodatabase from the mobile map package and query the GDB_ServiceItems table directly.  That's where you need a library such as System.Data.SQLite, which is available via the NuGet Package Manager:

 

NuGet Package Manager

 

Given the necessary tools, the first step is to extract the geodatabase to a temporary location:

 

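          // NOTE: ZipFile/ZipArchive live in the System.IO.Compression namespace
          // (references: System.IO.Compression and System.IO.Compression.FileSystem);
          // Geodatabase comes from Esri.ArcGISRuntime.Data.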
          public async Task Init(string sMMPKPath, Geodatabase gdb)
          {
               string sGDBPath = gdb.Path;
               string sGDBName = Path.GetFileName(sGDBPath);
               string sTempDir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
               Directory.CreateDirectory(sTempDir);
               string sTempPath = Path.Combine(sTempDir, sGDBName);
               using (ZipArchive zip = ZipFile.OpenRead(sMMPKPath))
               {
                    ZipArchiveEntry zipEntry = zip.GetEntry(sGDBPath);
                    zipEntry.ExtractToFile(sTempPath);
               }

 

Next, query the desired information, taking the steps necessary to clean up afterwards:

 

               List<string> ItemInfos = new List<string>();
               string sConn = "Data Source=" + sTempPath + ";Read Only=True";
               string sSQL = "SELECT ItemInfo FROM GDB_ServiceItems";
               using (SQLiteConnection sqlConn = new SQLiteConnection(sConn))
               {
                    sqlConn.Open();
                    using (SQLiteCommand sqlCmd = new SQLiteCommand(sSQL, sqlConn))
                    {
                         using (SQLiteDataReader sqlReader = sqlCmd.ExecuteReader())
                         {
                              while (sqlReader.Read())
                                   ItemInfos.Add(sqlReader.GetString(0));
                              sqlReader.Close();
                         }
                    }
                    sqlConn.Close();
               }
               // Force SQLite to release its file handles before deleting the temp folder
               GC.Collect();
               GC.WaitForPendingFinalizers();
               Directory.Delete(sTempDir, true);

 

Finally, combine the missing ingredients with the out-of-the-box information:

 

               _infos = new Dictionary<long, Dictionary<long, ExtendedRelationshipInfo>>();
               foreach (string sInfo in ItemInfos)
               {

                    Dictionary<string, object> info = _js.DeserializeObject(sInfo) as Dictionary<string, object>;
                    if (!info.ContainsKey("relationships"))
                         continue;
                    object[] relationships = info["relationships"] as object[];
                    if (relationships.Length == 0)
                         continue;
                    long iTableID = Convert.ToInt64(info["id"]);

                    // Get basic table relationship infos

                    GeodatabaseFeatureTable gfTab = gdb.GeodatabaseFeatureTable(iTableID);
                    if (gfTab.LoadStatus != Esri.ArcGISRuntime.LoadStatus.Loaded)
                         await gfTab.LoadAsync();
                    Dictionary<long, RelationshipInfo> BasicInfos = new Dictionary<long, RelationshipInfo>();
                    foreach (RelationshipInfo relInfo in gfTab.LayerInfo.RelationshipInfos)
                         BasicInfos[relInfo.Id] = relInfo;

                    // Add extended data

                    Dictionary<long, ExtendedRelationshipInfo> ExtendedInfos = new Dictionary<long, ExtendedRelationshipInfo>();
                    foreach (object obj in relationships)
                    {
                         Dictionary<string, object> rel = obj as Dictionary<string, object>;
                         long iRelID = Convert.ToInt64(rel["id"]);
                         string sCard = rel["cardinality"].ToString();
                         long? iRelTableID = null;
                         string sKeyField = null;
                         if (sCard == "esriRelCardinalityManyToMany")
                         {
                              iRelTableID = Convert.ToInt64(rel["relationshipTableId"]);
                              sKeyField = rel["keyFieldInRelationshipTable"].ToString();
                         }
                         ExtendedRelationshipInfo erInfo = new ExtendedRelationshipInfo()
                         {
                              BasicInfo = BasicInfos[iRelID],
                              RelationshipTableId = iRelTableID,
                              KeyFieldInRelationshipTable = sKeyField
                         };
                         ExtendedInfos[iRelID] = erInfo;
                    }
                    _infos[iTableID] = ExtendedInfos;

               } // foreach

 

Here, then, is the code for querying related features:

 

public async Task<FeatureQueryResult> QueryRelated(ArcGISFeature feat, long iRelID)
          {

               // Get relationship data

               if (!(feat.FeatureTable is GeodatabaseFeatureTable gfTabSource))
                    return null;
               long iTableID = gfTabSource.LayerInfo.ServiceLayerId;
               if (!_infos.ContainsKey(iTableID))
                    return null;
               Dictionary<long, ExtendedRelationshipInfo> ExtendedInfos = _infos[iTableID];
               if (!ExtendedInfos.ContainsKey(iRelID))
                    return null;
               ExtendedRelationshipInfo extInfoSource = ExtendedInfos[iRelID];
               RelationshipInfo infoSource = extInfoSource.BasicInfo;
               long iRelTableID = infoSource.RelatedTableId;
               if (!_infos.ContainsKey(iRelTableID))
                    return null;
               ExtendedInfos = _infos[iRelTableID];
               if (!ExtendedInfos.ContainsKey(iRelID))
                    return null;
               ExtendedRelationshipInfo extInfoTarget = ExtendedInfos[iRelID];
               RelationshipInfo infoTarget = extInfoTarget.BasicInfo;

               // Build query

               string sKeyValSource = feat.GetAttributeValue(infoSource.KeyField).ToString();
               Geodatabase gdb = gfTabSource.Geodatabase;
               GeodatabaseFeatureTable gfTabTarget = gdb.GeodatabaseFeatureTable(iRelTableID);
               string sKeyFieldTarget = infoTarget.KeyField;
               Field fieldKeyTarget = gfTabTarget.GetField(sKeyFieldTarget);
               StringBuilder sb = new StringBuilder();
               sb.Append(sKeyFieldTarget);
               if (infoSource.Cardinality == RelationshipCardinality.ManyToMany)
               {

                    // Gather key values from intermediate table

                    GeodatabaseFeatureTable gfTabRel = gdb.GeodatabaseFeatureTable(extInfoSource.RelationshipTableId.Value);
                    string sKeyFieldRelSource = extInfoSource.KeyFieldInRelationshipTable;
                    Field fieldRelSource = gfTabRel.GetField(sKeyFieldRelSource);
                    string sWhere = sKeyFieldRelSource + " = " + sKeyValSource;
                    if (fieldRelSource.FieldType == FieldType.Guid)
                         sWhere = sKeyFieldRelSource + " = '" + sKeyValSource + "'";
                    QueryParameters qpRel = new QueryParameters() { WhereClause = sWhere };
                    FeatureQueryResult resultRel = await gfTabRel.QueryFeaturesAsync(qpRel);
                    if (resultRel.Count() == 0)
                         return resultRel;
                    string sKeyFieldRelTarget = extInfoTarget.KeyFieldInRelationshipTable;
                    Field fieldRelTarget = gfTabRel.GetField(sKeyFieldRelTarget);
                    sb.Append(" IN ( ");
                    bool bFirst = true;
                    foreach (Feature featRel in resultRel)
                    {
                         if (bFirst)
                              bFirst = false;
                         else
                              sb.Append(", ");
                         string sKeyValTarget = featRel.GetAttributeValue(sKeyFieldRelTarget).ToString();
                         if (fieldRelTarget.FieldType == FieldType.Guid)
                              sb.Append("'" + sKeyValTarget + "'");
                         else
                              sb.Append(sKeyValTarget);
                    }
                    sb.Append(" ) ");

               }
               else
               {
                    sb.Append(" = ");
                    if (fieldKeyTarget.FieldType == FieldType.Guid)
                         sb.Append("'" + sKeyValSource + "'");
                    else
                         sb.Append(sKeyValSource);
               }

               // Query related features

               QueryParameters qp = new QueryParameters() { WhereClause = sb.ToString() };
               return await gfTabTarget.QueryFeaturesAsync(qp);

          }

 
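For completeness, here's a hypothetical usage sketch (helper and featStation are placeholder names, assuming helper is an instance of the class containing QueryRelated and that relationship ID 4 matches the JSON shown earlier):

          // Hypothetical usage: featStation is an ArcGISFeature obtained elsewhere,
          // e.g. from an identify operation on the map view.
          FeatureQueryResult result = await helper.QueryRelated(featStation, 4);
          if (result != null)
          {
               foreach (Feature featValve in result)
                    System.Diagnostics.Debug.WriteLine(featValve.GetAttributeValue("OBJECTID"));
          }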

Needless to say, this is a pretty extreme approach to take.  Nonetheless, you never know when this knowledge may come in useful.  

 

UPDATE:

 

It occurred to me that since I routinely automate MMPK creation using Python, I could also create companion files containing the many-to-many relationships.  I've added a new attachment that contains both a Python script and a revised version of the RelationshipHelper class that takes advantage of it.  Now it's more feasible to support both group layers and related feature queries.

In my previous article, I presented a workaround for preserving group layers in a mobile map opened using ArcGIS Runtime SDK for .NET 100.5.  Today's topic involves something a bit nastier.  It can be pretty frustrating when a bug that is fixed in an earlier software version reappears in a later one.  The lesson here is: Never discard your workaround code!

 

The bug in question involves certain multi-layer marker symbols that are not rendered properly when rotated.  For example, see this symbol as shown in the original ArcGIS Pro project:

 

Rotated symbol in ArcGIS Pro

Here's how it looks when exported to a mobile map package and opened using ArcGIS Runtime (see the attached Visual Studio example project):

 

Rotated symbol in ArcGIS Runtime, all jumbled up

Yikes!  This problem was identified at 100.1 and fixed in 100.2, but at 100.5 once more it rears its ugly head.  One workaround is to set ArcGISFeatureTable.UseAdvancedSymbology to false.  This causes marker symbols to be rendered as picture markers.  That's fine until you run into two limitations.  The first is when you set a reference scale and zoom in:

 

Zoomed into a bitmap

But even more challenging, what if you want to change symbol colors on the fly?  In theory, you can do that with a bitmap, but it's beyond my skill to deal with the dithering:

 

Failed attempt to change color of a dithered bitmap

There's another approach, but until Esri implements more fine-grained class properties and methods, manipulating symbols involves a lot of JSON hacking.  Before I go any further, let's crack open a mobile map package and see where drawing information is stored.  If you examine the mobile geodatabase using a tool such as SQLiteSpy, you will see a table called GDB_ServiceItems:

 

View of GDB_ServiceItems in SQLiteSpy

 

That's the raw JSON for the data retrieved by ArcGISFeatureTable.LayerInfo.DrawingInfo.  Fortunately, there's no need to hack into the table, because you can get the renderer for a feature layer, retrieve the symbol(s), and convert them to JSON.  Then you make whatever edits you want, and create a new symbol.

 

          public static Symbol UpdateSymbolJSON(MultilayerPointSymbol symOld, Color colorOld, Color colorNew)
          {
               string sOldJSON = symOld.ToJson();
               Dictionary<string, object> dict = (Dictionary<string, object>)_js.DeserializeObject(sOldJSON);
               SymbolHelper.ProcessObjectColorJSON(dict, colorOld, colorNew);
               string sNewJSON = _js.Serialize(dict);
               Symbol symNew = Symbol.FromJson(sNewJSON);
               return symNew;
          }

 
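Here, _js is a System.Web.Script.Serialization.JavaScriptSerializer, and SymbolHelper.ProcessObjectColorJSON isn't shown in the excerpt.  A minimal sketch of that helper, assuming colors appear in the symbol JSON as four-element RGBA arrays (as in Esri's REST symbol JSON) and that Color is System.Drawing.Color, might look like this:

          // Minimal sketch (not the original implementation): recursively walk the
          // deserialized JSON and swap any four-element RGBA array matching colorOld.
          public static void ProcessObjectColorJSON(object node, Color colorOld, Color colorNew)
          {
               if (node is Dictionary<string, object> dict)
               {
                    foreach (string key in dict.Keys.ToList())   // ToList() requires System.Linq
                    {
                         if (dict[key] is object[] arr && IsColorMatch(arr, colorOld))
                              dict[key] = new object[] { (int)colorNew.R, (int)colorNew.G, (int)colorNew.B, (int)colorNew.A };
                         else
                              ProcessObjectColorJSON(dict[key], colorOld, colorNew);
                    }
               }
               else if (node is object[] items)
               {
                    foreach (object item in items)
                         ProcessObjectColorJSON(item, colorOld, colorNew);
               }
          }

          private static bool IsColorMatch(object[] arr, Color color)
          {
               if (arr.Length != 4)
                    return false;
               try
               {
                    return Convert.ToInt32(arr[0]) == color.R
                        && Convert.ToInt32(arr[1]) == color.G
                        && Convert.ToInt32(arr[2]) == color.B
                        && Convert.ToInt32(arr[3]) == color.A;
               }
               catch
               {
                    return false;   // non-numeric array entries: not a color
               }
          }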

So what's the workaround?  The nature of the bug seems to be an inability to process offsetX and offsetY correctly.  In fact, they seem to be reversed.  So let's see what happens when the offsets are reversed in the JSON:

 

Symbol with offsets reversed

Nope.  Not quite there.  What I finally ended up doing was combining the offset layers into a single layer with no offsets.  Fortunately again, characters are already converted to polygons in the JSON, or I would be doing a lot more work.  First, I collect the offset layers and find the smallest interval (points per coordinate unit):

 

               bool[] Offset = new bool[layers.Length];
               List<OffsetLayer> OffsetLayers = new List<OffsetLayer>();
               double dInterval = double.MaxValue;
               for (int i = 0; i < layers.Length; i++)
               {

                    Dictionary<string, object> lyr = layers[i] as Dictionary<string, object>;

                    // Check for X and/or Y offset

                    bool bOffset = false;
                    double dOffsetX = 0;
                    double dOffsetY = 0;
                    if (lyr.ContainsKey("offsetX"))
                    {
                         dOffsetX = Convert.ToDouble(lyr["offsetX"]);
                         lyr["offsetX"] = 0;
                         bOffset = true;
                    }
                    if (lyr.ContainsKey("offsetY"))
                    {
                         dOffsetY = Convert.ToDouble(lyr["offsetY"]);
                         lyr["offsetY"] = 0;
                         bOffset = true;
                    }
                    Offset[i] = bOffset;
                    if (!bOffset)
                         continue;

                    // Get offset layer data

                    Dictionary<string, object> frame = lyr["frame"] as Dictionary<string, object>;
                    object[] markerGraphics = lyr["markerGraphics"] as object[];
                    Dictionary<string, object> markerGraphic = markerGraphics[0] as Dictionary<string, object>;
                    Dictionary<string, object> geometry = markerGraphic["geometry"] as Dictionary<string, object>;
                    object[] rings = geometry["rings"] as object[];
                    int ymin = Convert.ToInt32(frame["ymin"]);
                    int ymax = Convert.ToInt32(frame["ymax"]);
                    double size = Convert.ToDouble(lyr["size"]);
                    double dInt = size / (ymax - ymin);
                    if (dInt < dInterval)
                         dInterval = dInt;
                    OffsetLayer layer = new OffsetLayer()
                    {
                         offsetX = dOffsetX,
                         offsetY = dOffsetY,
                         xmin = Convert.ToInt32(frame["xmin"]),
                         ymin = ymin,
                         xmax = Convert.ToInt32(frame["xmax"]),
                         ymax = ymax,
                         size = size,
                         rings = rings
                    };
                    OffsetLayers.Add(layer);

               } // for

 
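The OffsetLayer helper populated above isn't shown in the excerpt, but its shape can be inferred from how the loop uses it.  A minimal sketch:

          // Minimal sketch of the OffsetLayer helper (inferred from usage above)
          private class OffsetLayer
          {
               public double offsetX;   // offsetX from the symbol layer JSON (points)
               public double offsetY;   // offsetY from the symbol layer JSON (points)
               public int xmin;         // frame extent, in frame coordinate units
               public int ymin;
               public int xmax;
               public int ymax;
               public double size;      // symbol layer size (points)
               public object[] rings;   // polygon rings of the first marker graphic
          }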

Then I set up the combined frame and recalculate the ring coordinates:

 

               int iMinX = 0;
               int iMinY = 0;
               int iMaxX = 0;
               int iMaxY = 0;
               List<object[]> OffsetRings = new List<object[]>();
               foreach (OffsetLayer lyr in OffsetLayers)
               {

                    double dX, dY;
                    int iX, iY;

                    // Set up transformation

                    double dInt = lyr.size / (lyr.ymax - lyr.ymin);
                    double dOffsetX = lyr.offsetX / dInt;
                    double dOffsetY = lyr.offsetY / dInt;
                    double dScale = dInt / dInterval;
                    dX = (lyr.xmin + dOffsetX) * dScale;
                    iX = (int)dX;
                    if (iX < iMinX)
                         iMinX = iX;
                    dX = (lyr.xmax + dOffsetX) * dScale;
                    iX = (int)dX;
                    if (iX > iMaxX)
                         iMaxX = iX;
                    dY = (lyr.ymin + dOffsetY) * dScale;
                    iY = (int)dY;
                    if (iY < iMinY)
                         iMinY = iY;
                    dY = (lyr.ymax + dOffsetY) * dScale;
                    iY = (int)dY;
                    if (iY > iMaxY)
                         iMaxY = iY;

                    // Recalculate rings

                    foreach (object obj in lyr.rings)
                    {
                         object[] ring = obj as object[];
                         foreach (object o in ring)
                         {
                              object[] pt = o as object[];
                              pt[0] = (int)((Convert.ToInt32(pt[0]) + dOffsetX) * dScale);
                              pt[1] = (int)((Convert.ToInt32(pt[1]) + dOffsetY) * dScale);
                         }
                         OffsetRings.Add(ring);
                    }

               } // foreach
               double dSize = (iMaxY - iMinY) * dInterval;

 

Finally, I assemble a new symbol layer list:

 

               List<object> NewLayers = new List<object>();
               bool bFirst = true;
               for (int i = 0; i < layers.Length; i++)
               {

                    if (!Offset[i])
                    {
                         NewLayers.Add(layers[i]);
                         continue;
                    }
                    else if (!bFirst)
                         continue;

                    // Update first offset layer

                    Dictionary<string, object> lyr = layers[i] as Dictionary<string, object>;
                    Dictionary<string, object> frame = lyr["frame"] as Dictionary<string, object>;
                    frame["xmin"] = iMinX;
                    frame["ymin"] = iMinY;
                    frame["xmax"] = iMaxX;
                    frame["ymax"] = iMaxY;
                    lyr["size"] = dSize;
                    if (lyr.ContainsKey("offsetX"))
                         lyr["offsetX"] = 0;
                    if (lyr.ContainsKey("offsetY"))
                         lyr["offsetY"] = 0;
                    NewLayers.Add(lyr);
                    object[] markerGraphics = lyr["markerGraphics"] as object[];
                    Dictionary<string, object> markerGraphic = markerGraphics[0] as Dictionary<string, object>;
                    Dictionary<string, object> geometry = markerGraphic["geometry"] as Dictionary<string, object>;
                    geometry["rings"] = OffsetRings.ToArray();
                    bFirst = false;

               } // for
               return NewLayers.ToArray();

 

And here are the results:

 

Fixed symbol

Colors changed

 

Much better.  I can't guarantee that this code will work for every situation, but it seems to work fine for my own complex symbols.  And remember:  even if this bug is fixed at 100.6, hang onto this code, in case you need it again in the future!

Article contributed to and authored by Satish Sankaran, Max Payson, and Amy Niessen

 

Last week, the FOSS4G community landed in San Diego for its 2019 North American conference. Esri participated in the event as a silver sponsor and, given its proximity to Esri’s home base in Redlands, many employees were able to attend. FOSS4G is an amazing event for developers and GIS geeks interested in emerging technologies, so we were excited to share our projects and to engage with thought leaders in the geospatial community.

 

The event kicked off with lightning talks and networking events, which dovetailed into dedicated presentations and workshops. While it had a developer focus, many of the presentations provided gentle introductions to hot topics – AI/ML, blockchain, microservices, containers, and serverless computing were all covered. Presenters discussed how these buzzwords can help scale storage, compute, and insight to solve increasingly complex challenges. Many presentations were also grounded in real-world projects, from disseminating 14 trillion USGS LIDAR points to achieving the UN’s sustainable development goals.

 

Colleagues from Esri shared their work at the event: Atma Mani presented "Let's Take the Machines House Hunting" using Python and Jupyter Notebooks; Thomas Maurer presented "LERC - Fast Compression of Images and Tensors", highlighting low-level libraries like LERC for raster compression; and Tamrat Belayneh presented "I3S - An Open Standard to Bring 3D to Web, Desktop, and Mobile Platforms", introducing the OGC community standard I3S spec. We also appreciated hearing Howard Butler acknowledge our contributions to the GDAL Coordinate System barn-raising effort in his presentation. As an important vendor in the GIS space, we are happy to support fundamental initiatives like these that help build core libraries used extensively by the community.

 

Atma Mani demonstrating the Python API to a user

 

While many attendees were familiar with Esri software, and some were even active users, the conversations at the Esri booth extended beyond the traditional ArcGIS workflows often discussed at Esri events. We enjoyed learning from others’ diverse perspectives and expertise, and it was reassuring to see community validation of the steps we are taking in the areas of 3D, interoperable data science, and our Developer program. Esri continues to push forward on its Open Platform vision – a vision that includes support for standards, interoperability, open data, and open source. And we are constantly looking for better ways to engage with developers and support their work.

 

While large software businesses may share complex relationships with the open source world, Esri’s role in the GIS realm has always been community focused. We hope to continue to grow the community of GIS users and developers, and FOSS initiatives are an important subsystem contributing to this growth.

 

New sticker design at FOSS4G

Screen shot of example app in action

 

Given the fact that the ArcGIS Runtime SDK for .NET supports group layers at 100.5, and that they work perfectly fine in 2D maps, it's somewhat disappointing that mobile maps with group layers are still not hydrated properly.  Esri can explain their reasons until their faces turn blue, but the fact is that the tools exist right now to do it.  In this post I will show you a functional workaround.

 

Before I start showing any code, let's take a peek at the internal structure of a mobile map package, as produced by ArcGIS Pro 2.3.2:

 

MMPK top level

The .info file is a JSON file, and it's important because it provides the link between the map index and the map name:

 

"maps": [ "Mohave Electric MMPK" ]

The "Create Mobile Map Package" tool enforces unique map names, so the problem of duplicate map names should never arise.  My own practice is to have only one map per MMPK for operational layers only, no base layers.  This allows separation of packages based on frequency of updates:  facility data (frequent updates), landbase (infrequent updates), and basemaps (vector or raster tile packages).  A Runtime app can then open each package, grab its layers, and add them to the main map.

 

Now let's look at the p14 subfolder:

 

MMPK p14 folder contents

The .mapx file is a JSON file containing all the information needed for ArcGIS Pro to hydrate the map; it uses the CIM (Cartographic Information Model) specification.  In an ideal world, that would be the file that Runtime uses to open a map, but it's not.  Runtime uses the .mmap file to open maps:  it's also a JSON file, but it doesn't use CIM.  Although the mobile map specification is not fully aligned with CIM, nonetheless it does store group layer information:

 

          {
            "id" : "b4a38ad5025b46d1ac809b3ed5258935",
            "title" : "Transformers",
            "visibility" : true,
            "layerType" : "GroupLayer",
            "layers" : [
              {
                "id" : "2c4d3bafe82c4344a0bc2019b67097cc",
                "title" : "Capacitor Bank",
                "visibility" : true,
                "layerType" : "ArcGISFeatureLayer",
                "layerDefinition" : {
                  "minScale" : 4000
                },

 

What it does not store, however, is any minScale or maxScale information for the group layer.  You can look that up in the .mapx file, as long as the group layers are uniquely named, but I don't bother.  The workaround is to design the mobile map without scale ranges on the group layers.

 

I've attached an example Visual Studio project which retrieves the .info and .mmap files to extract the operational layer definitions:

 

               // Open mmpk as zip archive

               string sMapName = null;
               object[] oplayers = null;
               using (ZipArchive zip = ZipFile.Open(sPath, ZipArchiveMode.Read))
               {

                    // Parse the info file to get map name

                    string sInfoFile = Path.GetFileNameWithoutExtension(sPath) + ".info";
                    sMapName = GetMapName(zip, sInfoFile, iMapIndex);

                    // Parse corresponding .mmap file and get operational layers

                    if (!string.IsNullOrEmpty(sMapName))
                    {
                         string sMapFile = "p14/" + sMapName + ".mmap";
                         oplayers = GetOperationalLayers(zip, sMapFile);
                    }

               }

 

The code uses the .NET JavaScriptSerializer object to parse the JSON.
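The js field it references is a shared serializer instance (JavaScriptSerializer lives in the System.Web.Script.Serialization namespace, System.Web.Extensions assembly), declared along these lines:

          private static readonly JavaScriptSerializer js = new JavaScriptSerializer();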

 

          private static string GetMapName(ZipArchive zip, string sInfoFile, int iMapIndex)
          {

               string sJSON = GetEntry(zip, sInfoFile);
               if (string.IsNullOrEmpty(sJSON))
                    return null;
               Dictionary<string, object> dict = js.DeserializeObject(sJSON) as Dictionary<string, object>;
               if (!dict.ContainsKey("maps"))
                    return null;
               if (!(dict["maps"] is object[] maps))
                    return null;
               if (iMapIndex > maps.Count() - 1)
                    return null;
               return maps[iMapIndex].ToString();

          }

          private static object[] GetOperationalLayers(ZipArchive zip, string sMapFile)
          {

               string sJSON = GetEntry(zip, sMapFile);
               if (string.IsNullOrEmpty(sJSON))
                    return null;
               Dictionary<string, object> dict = js.DeserializeObject(sJSON) as Dictionary<string, object>;
               if (!dict.ContainsKey("map"))
                    return null;
               if (!(dict["map"] is Dictionary<string, object> map))
                    return null;
               if (!map.ContainsKey("operationalLayers"))
                    return null;
               return map["operationalLayers"] as object[];

          }

          private static string GetEntry(ZipArchive zip, string sEntryName)
          {
               ZipArchiveEntry zipInfo = zip.GetEntry(sEntryName);
               if (zipInfo == null)
                    return null;
               string sJSON = null;
               using (StreamReader reader = new StreamReader(zipInfo.Open()))
               {
                    sJSON = reader.ReadToEnd();
               }
               return sJSON;
          }

 

Then it opens the map and restructures it to insert group layers in the appropriate places.  

 

          private static void AddLayers(Map MyMap, GroupLayer glParent, object[] sublayers)
          {

               foreach (object obj in sublayers)
               {

                    if (!(obj is Dictionary<string, object> layer))
                         continue;
                    if (!layer.ContainsKey("layerType"))
                         continue;
                    string sLayerType = layer["layerType"].ToString();

                    // Process layer

                    Layer lyr = null;
                    if (sLayerType == "GroupLayer")
                    {

                         // Create group layer
                         // **** LIMITATION: .mmap group layer entry does not store minScale or maxScale ****
                         // [MMPKs should be created with that limitation in mind]

                         GroupLayer glChild = new GroupLayer();
                         string sDisplayName = "";
                         if (layer.ContainsKey("title"))
                              sDisplayName = layer["title"].ToString();
                         glChild.Name = sDisplayName;
                         bool bVisibility = true;
                         if (layer.ContainsKey("visibility"))
                              bVisibility = (bool)layer["visibility"];
                         glChild.IsVisible = bVisibility;
                         if (!layer.ContainsKey("layers"))
                              continue;
                         if (!(layer["layers"] is object[] layers))
                              continue;
                         AddLayers(null, glChild, layers);
                         lyr = glChild;

                    }
                    else
                    {

                         // Get layer and add to parent

                         if (!layer.ContainsKey("id"))
                              continue;
                         string sID = layer["id"].ToString();
                         if (!OpLayers.ContainsKey(sID))
                              continue;
                         lyr = OpLayers[sID];

                    }
                    if (lyr == null)
                         continue;
                    if (glParent == null)
                         MyMap.OperationalLayers.Add(lyr);
                    else
                         glParent.Layers.Add(lyr);

               }

          }

 
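AddLayers relies on an OpLayers dictionary mapping each layer id from the .mmap JSON to the corresponding Runtime layer.  A minimal sketch of populating it (BuildOpLayers is a hypothetical helper, assuming Layer.Id matches the "id" values in the JSON for layers loaded from the package):

          // Sketch: build a lookup of the package's Runtime layers by layer id.
          private static Dictionary<string, Layer> BuildOpLayers(Map mmpkMap)
          {
               Dictionary<string, Layer> opLayers = new Dictionary<string, Layer>();
               foreach (Layer lyr in mmpkMap.OperationalLayers)
                    opLayers[lyr.Id] = lyr;
               return opLayers;
          }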

As a bonus, the example code also builds an interactive TOC.

 

Enjoy!

 

[P.S. -- I've heard rumors that ArcGIS Pro 2.4 will introduce potentially code-breaking changes to the mobile map specification.  Watch out for that.]

If you missed out on DevSummit this year, or are just looking to re-live it, the following are the Top 20 technical sessions you may want to check out.

 

First… what does "Top” mean really?

 

Well, some we included because they were among the most popular sessions of the week, and some we chose because the session room was really full, or flat-out overfull. But some of the sessions below also introduced new products and capabilities, and we wanted to take one more shot at making sure you know about them.

Ok, here we go, in no particular order:

 

0 – Real Software Engineering

It's not uncommon for some to assert that software development isn't, or should not be considered, "engineering".

We were lucky to have Glenn Vanderburg, VP of Engineering at First.io, as a keynote speaker. Glenn gave an interesting talk exploring the concept of "engineering" and how it applies to the pursuit of building powerful software applications.

 

Glenn does a clever job exploring this idea, showing that while software engineering has aspects that are fairly unique compared to other engineering fields, in other respects it can indeed be considered engineering; and that when it comes to designing, testing, scaling, and deploying good software, and collectively learning from what we all do, it should be treated, and practiced, as an engineering discipline if we are to get the most from technology.

 

1 - High-End 3D Visualization with CityEngine, Unity, and Unreal

GIS and game engines: an interesting combination, and the future is here today. Game-like interactive experiences, whether on the screen or in VR/AR, are the next big step in 3D GIS. In this session, you will get an introductory guide to Unity and Unreal, the leading game engines. You will learn to load your map data into a virtual world, one where you can walk through realistically rendered city models or explore interactive storytelling for your 3D project.

 

2 – ArcGIS API for JavaScript: What's New

This was THE most heavily attended session at DevSummit this year (that didn't have "Road Ahead" in the title). Web developers make up the largest part of the global developer community building with ArcGIS, and the developers and engineers at Esri responsible for growing this API continue adding many new capabilities each release.

For web developers, hitting up the annual What's New session is near the top of their lists of things to do. And now you can too.

 

3 - Using Insights for ArcGIS with Python and R

New this year, Insights has been enhanced so that developers and data scientists can now extend the capabilities with both Python and R for analysis and visualization.

 

4 - ArcGIS Runtime SDKs: Everything (or Anything) You Wanted to Know But Were Afraid to Ask

Live Q&A session. No slides, no demos, just all of the ArcGIS Runtime SDK developer leads. They all covered topics such as Metal, Vulkan, React Native, editing, snapping, Arcade, samples, styling 2D and 3D, and more. And here is the full list of questions that were answered by the panel of developers and engineers on the ArcGIS Runtime teams. 

 

5 – ArcGIS Enterprise: Architecting Your Deployment

"This is not a very GIS-y presentation, and I'm not doing any live coding...", says ArcGIS Enterprise product manager Philip Heede. Despite that caution, this session continues to be very popular in Palm Springs year after year. The title says it all. Optimizing your resources, storage, and performance starts with a solid architecture that fits the needs of your organization, apps, databases, and users.

 

6 - Extending Survey123 for ArcGIS

Sure, Survey123 has been one of the most popular end-user apps for a couple of years now, but given this is DevSummit, this session looks into ways developers can extend the app to make it more closely fit your workflows, such as writing JavaScript functions, creating add-ins, using webhooks, and rebranding the app.

 

7 - Using Webhooks in ArcGIS Enterprise

This. Demo. Theater. Was. OVERFULL. All chairs taken, standing room only, spilling out into the Showcase. This was easily the most popular demo theater of the week. If you got there too late, then today you're in luck. Here's the 30-min video... enjoy! Webhooks: They're Better Than Polling!

 

8 - Integrating Machine Learning and Deep Learning with ArcGIS

During the Plenary session, Rohit Singh invited everyone to follow him into "the exciting world of pavement cracks". :-) Well, this session here is a more complete version. This was the second most highly attended demo theater of the week. Also standing room only.

 

In this session, Rohit demonstrated how organizations can detect deep and complex spatiotemporal patterns in their data, and use that to predict geospatial events of interest – at scale, and in real-time.

 

And if you like this 30-min session, don’t miss the 4+ hours of machine learning "deep dive” here.  

 

9 - ArcGIS Data Store Management: Best Practices

This was a new session this year, but with almost 300 attendees in the room, its popularity is clear. We bet you will find this information useful as well. In this session, after a brief intro about what the Data Store is, how it works, and what it can do for you, the presenters dig into how to install and configure it, followed by techniques for backing up, restoring, moving, and upgrading the Data Store.

 

10 - Using Webpack and React with the ArcGIS API for JavaScript

Modern frameworks continue to make robust and powerful web app development better. In this session, you'll learn to integrate the ArcGIS API for JavaScript via the ArcGIS webpack plugin or esri-loader, and then use React to build a fast and responsive application that uses some of these modern tooling and techniques.

 

11 - Network Management with ArcGIS: Introduction to the Utility Network

We put this session in the wrong room--way too small. If you tried to get in, and couldn't, then this is what you missed. Next year, the update to this session will be much better placed.

 

12 - What's New in Collector for ArcGIS

One of the most popular apps over the past several years, the Esri Apps team gave a rundown--to a full house--on all the new capabilities of Collector, and what their plans are for the future, so that you can get the most from your field data collection. As they say "Accurate Data Collection Made Easy!"

 

13 - Interacting with Hosted Feature Layers through the ArcGIS REST API

Yet another overfull demo theater session. Although ArcGIS Online has a rich user interface for working with Hosted Feature Layers, there are some advanced features that developers can use for interacting with these web services using the ArcGIS REST API.

 

14 - Tools for the Modern Web Developer

When pulling together this session, Josh and Gavin from Esri Professional Services asked themselves: "What tools improved our workflows the most over the past year?" So in 30 minutes, they covered helpful techniques for using Postman, Visual Studio Code, and open source tools from Esri for working with Calcite Maps, React, Angular, and Ember.

 

15 - Building Native Cross-Platform Apps with AppStudio: Advanced Topics

DevSummit attendees really like sessions with the word "Advanced" in the title. In this talk, we dig into some new capabilities, such as leveraging other geolocation tech with your apps, using Bluetooth, Beacons, and high-accuracy GNSS receivers. Also demonstrated is the new extension for developing AppStudio apps with the Visual Studio Code development environment.

 

16 - Roadmap for BIM-GIS Interoperability

As soon as GIS introduced 3D capability, users started experimenting with putting one of the most widely created 3D data types, Building Information Modeling (BIM) data, into a GIS context. This session covers best practices and techniques for using BIM data in ArcGIS, along with a road map for where things are going.

 

17 - Continuously Delivered Scalable Geoservices on Kubernetes

Containerization is hot. It’s a sharply growing trend in the IT industry, improving the automation, scalability, and management of apps, services, data, users, and organizational workflows. In this session, developers from Esri's Washington DC R&D Center cut to the chase on how to use Kubernetes and Jenkins, and their useful abstractions, for automating and managing the creation and use of containers, so that you can continuously deliver scalable web services.

 

18 - ArcGIS Pro SDK for .NET: Understanding the CIM, a Guide for Developers

CIM == Cartographic Information Model. While the Pro SDK provides classes and methods for the most commonly used aspects of rendering layers and managing map layouts, the CIM opens that box up all the way. Access to the CIM gives ArcGIS Pro developers access to all of these much more advanced capabilities. This session digs right in, showing you techniques you'll find useful right away.

 

19 – Improving your Web App through UI/UX Best Practices

With so many great GIS tools around, why isn't map app design a slam dunk? This session teaches you that great UX is more than the sum of its parts and explains how to design more compelling User Interfaces. We'll cover fundamental building blocks of successful applications and introduce ways to improve the total experience.

 

Tied for 20 through 242: all of the other videos, which you can find here:

https://www.youtube.com/playlist?list=PLaPDDLTCmy4Y0GMTl0O4V6LF3EmxIAivv

Let us know what you think!