Earlier this month, Christopher Zent from the ArcGIS Pro SDK team and Robert Burke, Esri Instructor, co-presented the GeoDev Webinar, "ArcGIS Pro SDK for .NET: Extensibility Patterns." Throughout the presentation, attendees could submit their questions. The questions below are the ones we were unable to get to during the webinar; for those we did address, the presentation recording and slides are linked below. Check out what you may have missed!

 

What are the most common extensibility patterns and customizations seen with the Pro SDK?

By far the most commonly used is the Pro add-in pattern. It is very similar in concept to the traditional ArcMap add-in pattern, which Desktop developers have used since ArcGIS Desktop 10.0. The Pro add-in provides the range of capabilities that most developers and their end users are looking for, whereas the other patterns are more specialized.

 

Are there any samples for CoreHost updating databases?

The CoreHost community samples demonstrate the concepts of accessing and reading geodatabases. Review the Geodatabase ProConcepts and Snippets documents for examples.

 

Is the SDK backward-compatible – can I write an add-in with the 2.5-2.6 SDK and expect it to run in Pro 2.3?

ArcGIS Pro add-ins are only forward compatible with releases of ArcGIS Pro. For an add-in to run with ArcGIS Pro 2.3, it would need to have been compiled with ArcGIS Pro SDK 2.0, 2.1, 2.2 or 2.3. An add-in compiled with ArcGIS Pro SDK 2.3 can be used with ArcGIS Pro 2.3 and higher 2.x releases. Earlier releases of the Pro SDK can be found in the Assets section under each release at this page.

 

How are the ArcGIS Pro API extension files installed?

The Pro API core and extension assembly files are always installed as part of ArcGIS Pro. There is no separate install required.  Developers only need to install the ArcGIS Pro SDK (.VSIX) files to access the APIs. More information can be found on the documentation site here.

 

Do you support NuGet packages for Pro SDK?

Yes.  You can find out more about downloading and using the ArcGIS Pro Extensions NuGet in this guide document.

 

For those experienced in ArcObjects SDK development, how quickly could I become productive in using the ArcGIS Pro SDK?

Many developers coming from ArcObjects development find the Pro SDK to be a highly productive and streamlined development experience. There are many online resources available for getting started, including a set of easy-to-follow ArcGIS Tutorials for the Pro SDK, and documentation on Migrating to ArcGIS Pro and getting started with the Pro SDK. We also recommend the instructor-led training course.

 

How is real-time data handled with the Pro SDK, and are there samples?

The Realtime Stream Layers API allows for management of stream layers in ArcGIS Pro, with documentation here.  There is also a sample available here.

 

Can you build custom cylindrical objects on the map connecting to real-time data?

Using the Geometry API it is possible to create multipatch features, and using the Realtime Stream Layers API you can connect to real-time data in stream layers in Pro.

 

Can you add BigQuery (Google Cloud) data with lat/long into Pro using a plugin datasource?

See the ProConcepts Plugin Datasources document for information on the architecture and requirements for source data.

 

What are the language options for developing with the ArcGIS Pro SDK?

The language options for development with the ArcGIS Pro SDK are C# and VB.NET.

 

To view the recording, visit this page: ArcGIS Pro SDK for .NET: Extensibility Patterns 

To view the slides for this presentation, click here.

 

Have a question? Post it below!

At the end of May, we hosted a GeoDev Webinar based on one of the latest blog posts from Kristian Ekenes, a Senior Product Engineer on the ArcGIS API for JavaScript team. He wrote a blog post on Mapping Large Datasets on the Web, and since we thought this would be a great topic to cover more in-depth, we decided to host the same topic as a webinar where attendees could ask questions. There were a lot of good questions that came in but were not addressed during the live Q&A portion of the webinar, so Kristian addresses them below.

 

Q: What parameters need to be set to enable dynamic tile service?

A: You don't need to do anything to enable dynamic feature tiles. You get them out of the box with hosted feature services in ArcGIS Online and with ArcGIS Enterprise feature services. See the attached matrix for more specific information on versioning.

 

Q: Can you show me where the quantization parameter is defined and for which type of services?

A: Quantization parameters are query parameters for the feature service. You can directly query data in quantized format using the JS API's Query object. See the doc here:  https://developers.arcgis.com/javascript/latest/api-reference/esri-tasks-support-Query.html#quantizationParameters. But you don't need to worry about that. The JS API takes care of querying the data in quantized form for you.
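
For reference, here's a minimal sketch of issuing such a query yourself with the 4.x API. The service URL and tolerance value are hypothetical, and the imports assume the @arcgis/core ES module build:

import FeatureLayer from "@arcgis/core/layers/FeatureLayer";
import Query from "@arcgis/core/rest/support/Query";

const layer = new FeatureLayer({
  url: "https://services.arcgis.com/.../FeatureServer/0" // hypothetical URL
});

layer.load().then(() => {
  const query = new Query({
    where: "1=1",
    returnGeometry: true,
    // Illustrative values; when the layer draws, the API computes these
    // per tile, so you never have to set them yourself.
    quantizationParameters: {
      mode: "view",
      originPosition: "upper-left",
      tolerance: 611.49, // map units per pixel at the target scale
      extent: layer.fullExtent
    }
  });
  return layer.queryFeatures(query);
}).then((featureSet) => {
  console.log(`Received ${featureSet.features.length} quantized features`);
});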

 

Q:  I have an ArcGIS Online license and I uploaded a 4GB data set as a CSV that created a table feature layer, that then is used to create a map. The thing is that I must update this data set every week. Is there some way, or an architecture that let me automate this? I already updated the data set with ArcGIS REST API, but it doesn't reflect on the maps or in the table feature layer.

A: Yes. You can automatically apply edits to this feature service using the ArcGIS Python API, though this isn't my area of expertise; I would reach out to someone on GeoNet in a Python discussion for a more specific answer. The thing to remember, though, is that once you update the data, the old feature tiles are automatically replaced with tiles containing the new/updated data the next time that data is queried. So you don't have to worry about it; the backend takes care of that for you!

 

Q: I am using one feature service to different maps. How can I filter data based on a map?

A: You will want to contact Esri Technical Support for this.

 

Q: When publishing a feature service to ArcGIS Online, will this eliminate the restriction on how many features you can render?

A: There isn't a limit to the number of features you can publish. The limitation you may encounter is with storage. The number of features you can render is dependent on the client loading the data, the network speed/latency, number of attributes required, etc.

 

Q: Will example code be available in GitHub?

A: Example code is here: https://github.com/ekenes/conferences/tree/master/ds-2020/large-data and here: https://github.com/ekenes/conferences/tree/master/ds-2020/plenary.

 

Q: Do we have to set the scale in the feature layer definition?

A: One way to avoid loading too many features unnecessarily is to progressively filter out data based on view scale, as sketched below. This isn't the only way, but it is one method that avoids loading the layer multiple times.
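
A minimal sketch of that approach, assuming a hypothetical service with a numeric DIAMETER field; the 1:500,000 threshold is illustrative:

import Map from "@arcgis/core/Map";
import MapView from "@arcgis/core/views/MapView";
import FeatureLayer from "@arcgis/core/layers/FeatureLayer";

const layer = new FeatureLayer({
  url: "https://services.arcgis.com/.../FeatureServer/0" // hypothetical URL
});

const view = new MapView({
  container: "viewDiv",
  map: new Map({ basemap: "gray-vector", layers: [layer] })
});

// Progressively filter out smaller features as the user zooms out.
view.watch("scale", (scale: number) => {
  layer.definitionExpression = scale > 500000 ? "DIAMETER >= 24" : "1=1";
});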

 

Q: I am not familiar with CDN cache. How can I optimize the performance by using this cache?

A: You don't need to do anything to take advantage of this. It just applies to public feature services. If you have a public feature service hosted on ArcGIS Online, then you automatically benefit from the CDN cache.

 

Q: Is there a dataset where we can get access to US population or zip code density? I could not seem to find any.

A: The Living Atlas of the World has zip code layers you can freely use as well as up to date population estimates through the ACS. I highly recommend searching there. https://livingatlas.arcgis.com/en/browse/#d=2&q=zip%20code

 

Q: Why are you cloning the renderer for Feature Layer?

A: I'm cloning the renderer so that when I reset it, the layer will detect the change and re-render the data. For performance reasons, we don't watch all renderer and symbol properties for changes. Therefore you must clone the renderer, make your modifications, then set it back on the layer. This is your way of deliberately telling the layer a change has been made and it needs to redraw the features.
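
A minimal sketch of the clone-modify-reset pattern, assuming a layer drawn with a SimpleRenderer; the field name and value range are illustrative:

import FeatureLayer from "@arcgis/core/layers/FeatureLayer";
import SimpleRenderer from "@arcgis/core/renderers/SimpleRenderer";
import SizeVariable from "@arcgis/core/renderers/visualVariables/SizeVariable";

declare const layer: FeatureLayer; // assumed from earlier setup

// Clone, modify, then reassign; editing layer.renderer in place would
// go unnoticed by the layer.
const renderer = (layer.renderer as SimpleRenderer).clone();
renderer.visualVariables = [
  new SizeVariable({
    field: "DEPTH_2020", // e.g. switch the size variable to another attribute
    minDataValue: 0,
    maxDataValue: 1000,
    minSize: 4,
    maxSize: 40
  })
];
layer.renderer = renderer; // reassigning is what triggers the redraw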

 

Q: Can you apply these visualization techniques in ArcMap or ArcGIS Pro?

A: Not all of these techniques can be applied in our desktop software. You can set the same renderer types…graduated symbols, color visual variables, etc. But you cannot update the renderer based on another attribute like time or depth. The time and depth UI sliders in Pro and ArcMap perform filters of the data. I'm rendering all features in the JS API and updating the renderer rather than performing a data filter. This allows me to avoid loading duplicate geometries just to show different data values. So no, you can't use all of these techniques in ArcGIS Pro/ArcMap. Also, you cannot set up the size range by scale in ArcGIS Pro.

 

Q: What about applying this visualization technique in ArcMap or ArcPro regarding line color thickness?

A: Regarding the scale-dependent line thickness in the pipes example: you can configure that in Pro, but it's a different approach than in the JS API. In Pro, you set a reference scale and a symbol size, and the lines render at that size when the map is at the reference scale. When you zoom in or out, the line width adjusts linearly based on the difference between the map scale and the reference scale. In the JS API, you can set more stops to control the size at each scale.

 

Q: Can the Arcade expressions and other things be leveraged in a context wherein everything is, by intent or design, cached on the client in memory or with the CDN -- specifically avoiding everything except the initial call to the originating ArcGIS service as a REST service?

A: Arcade can execute against client-side features and you can query your data client-side, thus avoiding another round trip to the server. You first have to ensure that you actually have all the data available on the client though.
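
A minimal sketch of a client-side query against the layer view, assuming `view` and `layer` from earlier setup; the field name is illustrative:

import MapView from "@arcgis/core/views/MapView";
import FeatureLayer from "@arcgis/core/layers/FeatureLayer";

declare const view: MapView;       // assumed from earlier setup
declare const layer: FeatureLayer; // assumed from earlier setup

async function countOnClient(): Promise<void> {
  // The layer view holds the features already downloaded for drawing.
  const layerView = await view.whenLayerView(layer);
  // This query never leaves the browser, but it only sees features
  // currently available on the client.
  const result = await layerView.queryFeatures({
    where: "WIND_SPEED > 100"
  });
  console.log(`${result.features.length} matching features on the client`);
}

countOnClient();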

 

Q: Could you provide the code for these examples please?

A: Example code is here: https://github.com/ekenes/conferences/tree/master/ds-2020/large-data and here: https://github.com/ekenes/conferences/tree/master/ds-2020/plenary

 

Q: What is the difference between filtering by scale in the API and setting a display scale in an MXD that is then published to a feature service?

A: Hopefully I get this right…display scale in ArcGIS Pro/ArcMap is similar to visibility scale in the JS API (layer.minScale/layer.maxScale). A visibility scale determines when a layer will be queried and displayed based on the map/view scale. Filtering by scale, on the other hand, still queries the data regardless of whether a visibility scale is set (if one is set, it is still honored); you're just being more deliberate about filtering out data, such as smaller or less meaningful features that aren't needed at that scale. So you still see data, just not all the features, in the approach where you filter based on scale.

 

Q: How do you normally explain clustering to the lay person?

A: Clustering is a method of reducing the number of features in view by aggregating features into clusters based on a predefined cluster radius. Larger cluster graphics indicate areas that have a higher density of features. Smaller cluster graphics indicate areas with fewer features.
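
For developers, enabling it is a single layer property. A minimal sketch with the 4.x API; the URL is hypothetical and the radius and sizes are illustrative:

import FeatureLayer from "@arcgis/core/layers/FeatureLayer";

const layer = new FeatureLayer({
  url: "https://services.arcgis.com/.../FeatureServer/0", // hypothetical URL
  featureReduction: {
    type: "cluster",
    clusterRadius: "80px",  // features within this radius are aggregated
    clusterMinSize: "16px", // symbol size for the sparsest clusters
    clusterMaxSize: "40px"  // symbol size for the densest clusters
  }
});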

 

Q: Is clustering available in Portal as well as ArcGIS Online?

A: Yes

 

Q: Could you provide the code and the links for these examples please?

A: Example code is here: https://github.com/ekenes/conferences/tree/master/ds-2020/large-data and here: https://github.com/ekenes/conferences/tree/master/ds-2020/plenary

 

Q: How do we deal with time-based data? For example, I am dealing with vehicle speed data arriving at 70k lines per minute, and I want to show an animation that will last for one hour.

A: I'm not sure I understand the case. Animations can be tricky…perhaps a question to post on the ArcGIS API for JavaScript GeoNet community with more specific details?

 

Q: If you need to update the layer, how do you update it?

A: Just apply edits or set the properties. 
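
As a minimal sketch of the apply-edits route, assuming an editable hosted FeatureLayer; the attribute names and values are illustrative:

import FeatureLayer from "@arcgis/core/layers/FeatureLayer";
import Graphic from "@arcgis/core/Graphic";

declare const layer: FeatureLayer; // assumed editable hosted layer

// Update an existing feature's attributes by object ID.
const updated = new Graphic({
  attributes: { OBJECTID: 1, STATUS: "CLOSED" }
});

layer.applyEdits({ updateFeatures: [updated] }).then((result) => {
  console.log(result.updateFeatureResults);
});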

 

Q: Do you offer tailor made training on web app development? I find the One Ocean app cool and would like to develop one for my region.

A: No. But I regularly contribute to the ArcGIS Blog, where I discuss details on some of these projects like One Ocean. You can read it here: https://www.esri.com/arcgis-blog/products/js-api-arcgis/mapping/mapping-large-datasets-on-the-web/. Other JS API blog posts can be searched on this page: https://www.esri.com/arcgis-blog/?s=#&products=js-api-arcgis.

 

Q: Thank you for the examples of using queries with a feature tile cache. Can you also use the Filter widget, or reporting tool widgets, with a feature tile cache?

A: Any time you filter your layer, the data is requested in tile format, which means it is automatically cached for you. So you don't have to worry about configuring it. As long as the JS API recognizes the query as a repeatable one, you leverage the feature tile cache.

 

Q: Can you use these techniques with rasters or grids?

A: Not at the moment. This only applies to vector data.

 

Q: Is there a GitHub link for the EugeneTrees - Cluster example?

A: Yes, you can find it here: Map Viewer 

 

Q: Is it best to limit the fields (attributes needed) in ArcMap or ArcGIS Pro before you publish or elsewhere (i.e. in the web map configuration)?

A: Not necessarily. You can also limit the fields using a hosted feature layer view, or in the outFields of the layer in the JS app, as sketched below. If you have a long list of fields, though, the query won't be cacheable (query strings must be less than 2048 characters), so in those situations it is best to limit fields, whether from Pro or a hosted layer view.
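
A minimal sketch of limiting fields on the client; the URL and field names are hypothetical:

import FeatureLayer from "@arcgis/core/layers/FeatureLayer";

const layer = new FeatureLayer({
  url: "https://services.arcgis.com/.../FeatureServer/0", // hypothetical URL
  // Request only what the visualization and popup actually use; this also
  // keeps the query string short enough to stay cacheable.
  outFields: ["NAME", "POP2020"]
});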

 

Q: Some of the features you presented (such as adjusting the size of a line by scale, or definition queries by scale) look really interesting, but I am developing web apps in web app builder. Can you use those tools in a GUI driven environment? Or would it have to be within the code?

A: When you style a layer using the new Map Viewer Beta, you take advantage of the scale-driven symbology by default, but the renderer must be authored in the viewer. That means resetting it there even if you have one saved to the layer; or you can simply load the layer in the new Map Viewer Beta and check the box. Read this blog for more information: https://www.esri.com/arcgis-blog/products/arcgis-online/mapping/auto-size-by-scale-now-available-in-map-viewer-beta/. Regarding the scale-driven definition queries, you have to do that in code. There is no GUI for it.

 

Q: Hello, how can I access the JavaScript backend code which you have reviewed?

A: Example code is here: https://github.com/ekenes/conferences/tree/master/ds-2020/large-data and here: https://github.com/ekenes/conferences/tree/master/ds-2020/plenary

 

Q: How much can you improve performance of a dataset by adjusting text lengths of an attribute table? Do longer lengths greatly reduce draw speed in a feature service?

A: Longer lengths will reduce speed. But it may only matter if you have a lot of features and/or a lot of fields you are loading. We're continually improving draw times though, so it may matter less and less. You should see a significant improvement here later this year.

 

Q: Is there any documentation on geometry thinning?

A: You can read more about it in this blog - https://www.esri.com/arcgis-blog/products/js-api-arcgis/mapping/mapping-large-datasets-on-the-web/#geometry-thinning - But you can also read about it in Pro documentation under Select Layer By Attribute. Fundamentally, what I mean by "geometry thinning" is filtering out unnecessary features based on their geometry...whether they are inside or outside an area of interest...or in this case...whether there are too many points in a grid (e.g. stacked on top of one another, or even grid resolution).

 

Q: How do I cluster the thousands of POIs from different categories for better performance?

A: To cluster by category, you need to set up different layers with definitionExpressions based on each category…I'm not sure you'll get better performance, though. You should get decent performance in clustering with a few thousand features (even into the hundreds of thousands). But if you have way more than that, then you'll need to enable clustering on your service, which isn't fully supported in the JS API yet. Though it's coming soon...

 

Q: When will snapping be available in 4.x?

A: It is planned, but there is no specific date set.

 

Q: If I set a scale range in an MXD for a feature layer, but I want to see all the data in the attribute table, I get slow results. Why?

A: This doesn't appear related to the JS API. I would contact Esri Technical Support.

 

Q: Would you happen to have any advice for improving dataset performance within ArcGIS Dashboards?

A: I would ask that question on GeoNet in the ArcGIS Dashboards discussion. You'll get someone who works on that product that will provide you with a better answer than I can give.

 

Q: How frequently is the data cached? Can we change the frequency?

A: You can change the frequency using the maxAge parameter in the layer's settings in ArcGIS Online. Go to the layer item. Click the "Settings" tab. Scroll down to "Cache control". There you can control how long clients will have to wait before seeing an updated cache. That applies to editable layers where the features/attributes may change. Once the tiles are cached, they will stay that way until an edit is made.

 

Q: Are there any plans to move Web AppBuilder for Developers to work with the 4.x API?

A: Yes. You'll need to contact the Web AppBuilder team though. You can reach them on GeoNet.

 

Q: How do I publish feature tile services instead of feature services? And it sounds like feature tile service is better than feature service. Should I use feature tile service all the time? What's the advantage of feature service that feature tile service doesn't have?

A: Feature services automatically query data as dynamic feature tiles. You don't have to do anything to take advantage of this functionality. It's all happening behind the scenes for you.

 

You can find a recording of this webinar on our GeoDev Webinar playlist on YouTube. If you would like to download the slides from the webinar, you can do so here: https://github.com/ekenes/conferences/raw/master/ds-2020/large-data/geodev-slides.pptx

 

We hope you enjoyed this installment of the GeoDev Webinars! You can find all of our GeoDev Webinars on go.esri.com/geodev. Until next time...

I maintain a number of automated map products in ArcMap which involve not just spatial queries and geometric operations, but also fine-grained manipulation of layers, including renderers and symbology. Let's face it: I never could get the arcpy.mapping module or early versions of ArcGIS Pro to cut the mustard. Later versions of the ArcGIS Pro SDK introduced far greater capability to manipulate map layers and layout elements. But then I asked myself: should users be running Pro at all to create those plots?

 

At Pro 2.4.3, I started taking a closer look at arcpy.mp, wondering if I could create a geoprocessing tool and publish it to a web tool for consumption by a custom Web AppBuilder widget in Portal. I am happy to say that an initial proof-of-concept experiment has been a success.

 

Before I go into that, I would like to point out some of the features of arcpy.mp that made me decide it has finally reached the level of functionality I need:

 

  • Load and modify symbols
  • Change and manipulate renderers
  • Make layout elements visible or invisible
  • Make modifications at the CIM level

 

One thing arcpy.mp doesn't do yet is create new layout elements, but for my purposes I can recycle existing ones. A good approach is to have a number of elements present for various tasks in a layout, and make them visible or invisible on demand for different situations.

 

        # Show or hide legend
        legend = self.__layout.listElements("LEGEND_ELEMENT")[0]
        if self.__bOverview:
            if self.__bMainline:
                legend.visible = True
            else:
                legend.visible = False
        else:
            legend.visible = True

 

The ability to manipulate legend elements is still pretty limited, but I haven't run into any deal-killers yet. If you really hit a wall, one powerful thing you can now do is dive into the layout's CIM (Cartographic Information Model) and make changes directly to that.  Here's an example of modifying a legend element in a layout via the CIM:

 

aprx = arcpy.mp.ArcGISProject("c:/apps/Maps/LeakSurvey/LeakSurvey.aprx")
layout = aprx.listLayouts("Leak Survey Report Maps Template")[0]
cim = layout.getDefinition("V2")
legend = None
for e in cim.elements:
    if type(e) == arcpy.cim.CIMLegend:
        legend = e
        break
legend.columns = 2
legend.makeColumnsSameWidth = True
layout.setDefinition(cim)

 

While the CIM spec is formally documented on GitHub, a simpler way to explore the CIM is to check out the ArcGIS Pro API Reference; all objects and properties in the ArcGIS.Core.CIM namespace should be mirrored in Python.

 

Part One: Creating a Python Toolbox

 

LeakSurvey.pyt is in the sample code attached to this post. While my initial draft was focused on successfully generating a PDF file, when the time came to test the tool as a service, additional factors came into play:

 

  • Getting the service to publish successfully at all
  • Returning a usable link to the resulting PDF file
  • Providing a source for valid input parameters

 

Sharing a geoprocessing tool as a package or service is one of the least intuitive, most trippy experiences I've ever had with any Esri product.  The rationale seems to be that you are not publishing a tool, but a vignette. You can't simply put out the tool and say, here it is: you must publish a geoprocessing result. As part of that concept, any resolvable references will cause ArcGIS to attempt to bundle them, or to match them to a registered data store. This is a great way to get the publication process to crash, or lock the published service into Groundhog Day.

 

So, one key to successfully publishing a web tool is to provide a parameter that:

 

  1. Gives the tool a link to resolve data and aprx references, and
  2. When left blank, returns a placeholder result that you can use to publish the service.

 

LeakSurvey.pyt does just that. Here's the definition for the "Project Folder" parameter:

 

        param0 = arcpy.Parameter(
            displayName = "Project Folder",
            name = "project_folder",
            datatype = "GPString",
            parameterType = "Optional",
            direction = "Input")

 

When left blank, the tool simply returns "No results" without throwing an error. Otherwise, it points to a shared folder that contains the ArcGIS Pro project and some enterprise GDB connection files.

 

Returning a usable link to an output file involves a bit of a trick.  Consider the definition of the "Result" parameter:

 

        param7 = arcpy.Parameter(
            displayName = "Result",
            name = "result",
            datatype = "GPString",
            parameterType = "Derived",
            direction = "Output")

 

The tool itself creates a path to the output file as follows:

 

        sOutName = self.__sSurveyType + "_" + self.__sSurveyName + "_" + self.__sMapsheet + "_"
        sOutName += str(uuid.uuid4())
        sOutName += ".pdf"
        sOutName = sOutName.replace(" ", "_")
        sOutput = os.path.join(arcpy.env.scratchFolder, sOutName)

 

If that value is sent to the "Result" parameter, what the user will see is the local file path on the server. In order for the service to return a usable URL, a return parameter needs to be defined as follows:

 

        param8 = arcpy.Parameter(
            displayName = "Output PDF",
            name = "output_pdf",
            datatype = "DEFile",
            parameterType = "Derived",
            direction = "Output")

 

Traditional tool validation code is somewhat funky when working with a web tool, and I dispense with it. Rather, the tool returns a list of valid values depending on the parameters provided, keeping in mind that I want this service to be consumed by a web app. For example, if you provide the tool with a survey type and leave the survey name blank, it will return a list of the surveys that exist. If you provide a survey type and name and leave the map sheets parameter blank, it will return a list of the map sheets for that survey:

 

        if self.__sSurveyName == "" or self.__sSurveyName == "#" or self.__sSurveyName == None:
            # Return list of surveys for type
            return self.__GetSurveysForType()
        self.__bMainline = self.__sSurveyType == "MAINLINE" or self.__sSurveyType == "TRANSMISSION"
        self.__Message("Querying map sheets...")
        bResult = self.__GetMapsheetsForSurvey()
        if not bResult:
            return "No leak survey features."
        if self.__sMapsheets == None or self.__sMapsheets == "#":
            # Return list of map sheets for survey
            sResult = "MAPSHEETS|OVERVIEW"
            for sName in self.__MapSheetNames:
                sResult += "\t" + sName
            return sResult

 

So how's the performance? Not incredibly great compared to doing the same thing in ArcObjects, but there are things I can do to improve script performance. For example, because the tool must re-query the survey and its map sheets every time it runs, there is an option to specify multiple sheets, which are combined into one PDF and returned to the calling application. The tool also supports an "ALL" map sheets option, in order to bypass the need to return a list of map sheets for the survey.

 

Nonetheless, arcpy can suffer in comparison to ArcObjects in various tasks [see this post for some revealing comparisons]. On the other hand, the advantages of using arcpy.mp can outweigh the disadvantages when it comes to automating map production.

 

After testing the tool, it's a simple matter to create an empty result and publish it to Portal:

 

 

For this example, I also enable messages to be returned:

 

 

Once in Portal, it's ready to use:

 

 

Part Two: Creating and Publishing a Custom Web AppBuilder Widget

 

As I've mentioned in another post, one reason I like developing in Visual Studio is that I can create and use project templates. I've attached my current Web AppBuilder custom widget template to this post.

 

 

I've also attached the code for the widget itself. Because the widget makes multiple calls to the web tool, it needs a way to sort through the returns. In this example, the tool prefixes "SURVEYS|" when returning a list of surveys, and "MAPSHEETS|" when returning a list of map sheets. When a PDF is successfully generated, the "Result" parameter contains "Success."

 

   private onJobComplete(evt: any): void {
      let info: JobInfo = evt.jobInfo;
      this._sJobId = info.jobId;
      this._gp.getResultData(info.jobId, "result");
   }

   private onGetResultDataComplete(evt: any): void {
      let val: ParameterValue = evt.result;
      let sName: string = val.paramName;
      if (sName === "output_pdf") {
         this.status("Done.");
         window.open(val.value.url);
         this._btnGenerate.disabled = false;
         return;
      }
      let sVal: string = val.value;
      if (this.processSurveyNames(sVal))
         return;
      if (this.processMapSheets(sVal))
         return;
      if (this.processPDF(sVal))
         return;
      this.status(sVal);
   }

   private processSurveyNames(sVal: string): boolean {
      if (sVal.indexOf("SURVEYS|") !== 0)
         return false;
   ...

   private processMapSheets(sVal: string): boolean {
      if (sVal.indexOf("MAPSHEETS|") !== 0)
         return false;
   ...

   private processPDF(sVal: string): boolean {
      if (sVal !== "Success.")
         return false;
   ...

 

The widget can be tested and debugged using Web AppBuilder for ArcGIS (Developer Edition):

 

 

Publishing widgets to Portal can be tricky: our production Portal sits in a DMZ, and https calls to another server behind the firewall will fail, so widgets must reside on the Portal server. And even though our "Q" Portal sits behind the firewall and can see other servers, it's on a different domain. Thus, if I choose to host "Q" widgets on a different server, I need to configure CORS.  Here's an example of web.config:

 

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <cors enabled="true" failUnlistedOrigins="true">
            <add origin="*" />
            <add origin="https://*.uns.com"
                 allowCredentials="true"
                 maxAge="120">

                <allowHeaders allowAllRequestedHeaders="true">
                    <add header="header1" />
                    <add header="header2" />
                </allowHeaders>
                <allowMethods>
                     <add method="DELETE" />
                </allowMethods>
                <exposeHeaders>
                    <add header="header1" />
                    <add header="header2" />
                </exposeHeaders>
            </add>
            <add origin="https://*.unisource.corp"
                 allowCredentials="true"
                 maxAge="120">

                <allowHeaders allowAllRequestedHeaders="true">
                    <add header="header1" />
                    <add header="header2" />
                </allowHeaders>
                <allowMethods>
                     <add method="DELETE" />
                </allowMethods>
                <exposeHeaders>
                    <add header="header1" />
                    <add header="header2" />
                </exposeHeaders>
            </add>
            <add origin="http://*" allowed="false" />
        </cors>
    </system.webServer>
</configuration>

 

The file sits in a virtual web folder called "Widgets" with any widget folders to publish placed under that. When publishing a widget, initially there may be a CORS error:

 

 

but reloading the page and trying again should work.

 

 

Once the widget is published to Portal, it can be added to a new or existing application, and it's ready to use:

 

 

 

Because generating plot files can be a lengthy process, it may not be useful for the widget to wait for completion. Were I to put this into production, I would probably modify the tool to send plot files to a shared folder (or even a document management service) and send an email notification when it completes or fails.

[This was to be my user presentation at the 2020 DevSummit, which was cancelled.]

 

Chrome extensions are a fun way to implement functionality that is not normally available to a web client app. Extensions can make cross-domain requests to gather data from a variety of sources, and at the same time can filter out unwanted content. The Chrome API provides a rich suite of tools for focused application development.

 

Obviously, any app that is implemented as a Chrome extension will only run in Chrome. Also, Chrome extensions must be distributed through Chrome Web Store, but that's not necessarily a bad thing, as I will show later.

 

Here are some online resources:

 

 

Chrome extensions can contain background scripts, content scripts, a UI for saved user options, and so on. The manifest file is what ties it all together: if you've developed custom widgets for Web AppBuilder, you should already be familiar with the concept. Here's an example of manifest.json:

 

{
     "name": "Simple Map Example",
     "version": "1.0",
     "description": "Build an Extension with TypeScript and the ArcGIS API for JavaScript 4.x!",
     "manifest_version": 2,
     "icons": { "128": "images/chrome32.png" },
     "browser_action": {
          "default_popup": "popup.html",
          "default_icon": { "128": "images/chrome32.png" }
     },
     "options_ui": {
          "page": "options.html",
          "open_in_tab": false
     },
     "permissions": [ "storage" ],
     "content_security_policy": "script-src 'self' https://js.arcgis.com blob:; object-src 'self'"

}

 

One thing that's worth pointing out is the "content_security_policy" entry. This will be different depending on whether you use JSAPI 3.x or 4.x. See this post for more information.

 

Let's use a Visual Studio 2017 project template (attached) to create a simple extension. Because the template uses TypeScript, there are some prerequisites; see this post for more information.

 

First, let's create a blank solution called DevSummitDemo:

 

 

Next, add a new project using the ArcGIS4xChromeExtensionTemplate:

 

 

Here is the structure of the resulting project:

 

 

Building the project compiles the TypeScript source into corresponding JS files.  Extensions can be tested and debugged using Chrome's "Load unpacked" function:

 

 

Note that Chrome DevTools will not load TypeScript source maps from within an extension. That's normally not an issue since you can debug the JS files directly. There is a way to debug the TypeScript source, but it involves some extra work. First, set up IIS Express to serve up the project folder:

 

 

Then, edit the JS files to point to the localhost url:

 

 

Now, you can set a breakpoint in a TS file and it will be hit:

 

 

The disadvantage of this approach is that you must re-edit the JS files every time you recompile them.

 

The next demo involves functionality that is available in JSAPI 3.x, but not yet at 4.x. Namely, the ability to grab an image resource and display it as a layer. Here is a web page that displays the latest weather radar imagery:

 

 

The latest image is a fixed url, so nothing special needs to be done to reference it. Wouldn't it be cool, however, to display an animated loop of the 10 latest images? But there's a problem.

 

Let's add the LocalRadarLoop demo project code (attached) to the VS2017 solution and look at pageHelper.ts:

 

     export class myApp {
          public static readonly isExtension: boolean = false;
          public static readonly latestOnly: boolean = true;
     }

 

When isExtension is false, and latestOnly is true, the app behaves like the web page previously shown.

Note also this section of extension-only code that must be commented out for the app to run as a normal web page:

 

               // **** BEGIN Extension-only block ****
               /*
               if (myApp.isExtension) {
                    let sDefaultCode: string = defaultLocalCode;
                    chrome.storage.local.get({ localRadarCode: sDefaultCode },
                         (items: any) => {
                              let sCode: string = items.localRadarCode;
                              let sel: HTMLSelectElement = <HTMLSelectElement>document.getElementById("localRadarCode");
                              sel.value = sCode;
                              this.setRadar();
                         });
                    return;
               }
               */

               // **** END ****

 

Because the latest set of radar images do not have fixed names, it is necessary to obtain a directory listing to find out what they are. If you set latestOnly to false and run the app, however, you will run into the dreaded CORS policy error:

 

 

This is where the power of Chrome extensions comes into play. Set isExtension to true, and uncomment the extension-only code (which enables a saved user option), and load the app as an extension. Now you get the desired animation loop!

 

Note the relevant line in manifest.json which enables the XMLHttpRequest to run without a CORS error:
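
In Manifest V2, that line is a cross-origin host permission added to the "permissions" array. As a sketch (the radar host pattern below is illustrative):

     "permissions": [ "storage", "https://radar.weather.gov/*" ],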

 

 

Now, as I pointed out earlier, Chrome extensions are distributed through Chrome Web Store:

 

 

There are some advantages to this. For example, updates are automatically distributed to users. You can also create an "invisible" store entry, or publish only to testers. I find that last feature useful for distributing an extension that I created for my personal use only. Other distribution options do exist, which you can read about at this link.

 

In conclusion, Chrome extensions enable pure client-side functionality that otherwise would not be possible without the aid of web services. Chrome Web Store provides a convenient way to distribute extensions and updates, with public and private options.

 

The Local Radar Loop extension is available free at Chrome Web Store.

Being a user of Microsoft Visual Studio since version 6.0, I prefer it as a one-stop shop for as many kinds of development as possible, including C++, VB, C#, Python, and HTML5/TypeScript projects. One feature of VS that I really like is the ability to create project templates. VS2015 included a project template for TypeScript, but it was ugly as sin. VS2017 dropped it, but failed to provide a viable alternative; being lazy, I continued to use the same version available online:

This must stop! Sometimes, you just have to get your hands dirty, so I decided to create my own project template from scratch. Fortunately, the TypeScript documentation has sections on Integrating with Build Tools, and Compiler Options in MSBuild, which provided valuable assistance. Also, see the MSBuild documentation and How to: Create project templates for more information.

 

Prerequisites:

The TypeScript website has download links to install the latest version for a number of IDEs, including VS2017. In addition, since the TypeScript folks now prefer you to use npm to install typings, you should install Node.


Warning! If you are behind a corporate firewall, you may run into this error when you try to use npm to install typings:

   npm ERR! code UNABLE_TO_GET_ISSUER_CERT_LOCALLY

If you see that, try setting this configuration at the command prompt:

   npm config set strict-ssl false

 

Create a generic TypeScript project:

While, formally, the best approach would be to create a new project type, my lazy approach recycles the C# project type and redefines the build targets (but there is a disadvantage – see below). The first step is to create a blank solution in VS2017 named “TypeScriptProjectTemplates.” In Explorer or the Command Prompt, navigate to the solution folder and create a subfolder named “BasicTypeScriptTemplate.” In that folder, create a file named “BasicTypeScriptTemplate.csproj,” containing the following text:

 

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props')" />
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <OutputType>Library</OutputType>
    <StartupObject />
    <OutputPath>.\</OutputPath>
    <IntermediateOutputPath>vs\</IntermediateOutputPath>
  </PropertyGroup>
  <PropertyGroup>
    <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">12.0</VisualStudioVersion>
  </PropertyGroup>
  <PropertyGroup>
    <TypeScriptToolsVersion>Latest</TypeScriptToolsVersion>
    <TypeScriptModuleKind>amd</TypeScriptModuleKind>
    <TypeScriptNoImplicitAny>true</TypeScriptNoImplicitAny>
    <TypeScriptESModuleInterop>true</TypeScriptESModuleInterop>
    <TypeScriptJSXEmit>react</TypeScriptJSXEmit>
    <TypeScriptJSXFactory>tsx</TypeScriptJSXFactory>
    <TypeScriptTarget>es5</TypeScriptTarget>
    <TypeScriptExperimentalDecorators>true</TypeScriptExperimentalDecorators>
    <TypeScriptPreserveConstEnums>true</TypeScriptPreserveConstEnums>
    <TypeScriptSuppressImplicitAnyIndexErrors>true</TypeScriptSuppressImplicitAnyIndexErrors>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
    <TypeScriptRemoveComments>false</TypeScriptRemoveComments>
    <TypeScriptSourceMap>true</TypeScriptSourceMap>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Release'">
    <TypeScriptRemoveComments>true</TypeScriptRemoveComments>
    <TypeScriptSourceMap>false</TypeScriptSourceMap>
  </PropertyGroup>
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets')" />
  <Target Name="Build" DependsOnTargets="CompileTypeScript">
  </Target>
  <Target Name="Rebuild" DependsOnTargets="CompileTypeScript">
  </Target>
  <Target Name="Clean" Condition="Exists('$(TSDefaultOutputLog)')">
    <ItemGroup>
      <TSOutputLogsToDelete Include="$(TSDefaultOutputLog)" />
    </ItemGroup>
    <ReadLinesFromFile File="@(TSOutputLogsToDelete)">
      <Output TaskParameter="Lines" ItemName="TSCompilerOutput" />
    </ReadLinesFromFile>
    <Delete Files="@(TSCompilerOutput)" Condition=" '@(TSCompilerOutput)' != '' " />
    <Delete Files="@(TSOutputLogsToDelete)" />
    <!-- <RemoveDir Directories="$(IntermediateOutputPath)" /> -->
  </Target>
</Project>

 

In VS2017, add the existing project to the solution. Within the project, create an “app” subfolder, and add a new TypeScript file named “main.ts,” containing the following text:

class Student {
     fullName: string;
     constructor(public firstName: string, public middleInitial: string, public lastName: string) {
          this.fullName = firstName + " " + middleInitial + " " + lastName;
     }
}

interface Person {
     firstName: string;
     lastName: string;
}

function greeter(person: Person) {
     return "Hello, " + person.firstName + " " + person.lastName;
}

let user = new Student("Jane", "M.", "User");

document.body.textContent = greeter(user);

 

In the project folder, add an HTML Page file named “index.html,” containing the following text:

<!DOCTYPE html>
<html>
<head>
     <title>TypeScript Greeter</title>
</head>
<body>
     <script src="./app/main.js"></script>
</body>
</html>

 

At this point, the project should look like this in Solution Explorer:

 

Building or rebuilding the project will generate TypeScript compiler output, and a file named “Tsc.out” will be created in a subfolder named “vs”. The “Tsc.out” file defines the compiler output to delete when cleaning the project; cleaning the project will also delete that file. [Note that if you build and clean “Release” without cleaning “Debug” beforehand, the source map files will still remain.]

 

Export the project template:

At this point, if you export the project to a template, you have a generic TypeScript project template. However, it will be displayed under the “Visual C#” category. If you want it to appear under the “TypeScript” category, there are additional steps to take. First, unzip the template to a new folder and edit “MyTemplate.vstemplate:”

 

Change the “ProjectType” value from “CSharp” to “TypeScript”. Zip the contents of the folder to a new zip file, and the template is ready to use.

 

Building a JSAPI project template:

Now that we have a basic TypeScript project template, the next step is to use it to create a template for a simple JavaScript API project.


First, create a new project, called “ArcGIS4xTypeScriptTemplate,” using the template created above. Open a Command Prompt, navigate to the project folder, and enter the following commands:

   npm init --yes
   npm install --save @types/arcgis-js-api

Back in Solution Explorer, select the project and click the “Show All Files” button. Select the “node_modules” folder, “package.json,” and “package-lock.json,” right-click, and select “Include In Project.” Finally, replace the contents of “main.ts” and “index.html” with the text given at the JSAPI TypeScript walk-through. Your project should now look like this:

 

You may notice that the “import” statements in “main.ts” are marked as errors, even though the esModuleInterop flag is set in the project:

 

This appears to be a defect in the Visual Studio extension. The project will build without errors, and the resulting page will load correctly. If it annoys you, you can always revert to the older AMD style statements:
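
For example, here is a minimal sketch of the two styles, assuming the standard @types/arcgis-js-api typings:

// ES module style from the walk-through (flagged in the editor, but it
// compiles and runs correctly):
//   import Map from "esri/Map";
//   import MapView from "esri/views/MapView";

// Equivalent older AMD-style statements that avoid the spurious errors:
import Map = require("esri/Map");
import MapView = require("esri/views/MapView");

const map = new Map({ basemap: "streets" });
const view = new MapView({ container: "viewDiv", map: map });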

 

At this point, you’re ready to export the template.

 

On a final note:

The JavaScript API is updated frequently, which means that you may also want to keep your project templates up to date. Rather than updating the source project and repeating the export steps, you might want to consider keeping the unzipped template folders in a standard location, and updating those directly. Then, all you have to do is zip them to create the updated template.

Recently, I found myself painted into a corner.  Some time ago, I'd created custom tile caches for use with Runtime .NET which had one scale level defined.  They worked just fine in 10.2.7, but on preparing to upgrade to 100.x, I discovered that they caused 100.6 to hang.  The workaround was simple enough, namely to define additional scale levels, even if they weren't populated.  However, the task of modifying the caches for nearly 150 users proved so daunting that I decided to let the app itself make the modification.  Updates to the app are automatically detected and downloaded, which provides a simpler mechanism than deploying a script to everyone's machine.  It's not a perceptible performance hit as it is, and later on, as part of another update, I can simply deactivate it.  So here's the code:

 

 

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Xml.Linq;

namespace NavAddin
{

     /*
       * Runtime 100.6 BUG: Tile caches that have only one scale level defined will hang up on loading.
      * WORKAROUND: Define additional scale levels
      * [Assumes that 100 < scale level < 24000]
     */


     public static class TileCacheHelper
     {

          public const string L0Scale = "24000";
          public const string L0Resolution = "20.833375000083333";
          public const string L2Scale = "100";
          public const string L2Resolution = "0.086805729167013887";

          public static bool CheckTileCache(string sPath)
          {

               // Check if tile cache (i.e. a folder)

               if (!Directory.Exists(sPath))
                    return true; // Not a tile cache

               // Check if one scale level defined

               string sConfigPath = Path.Combine(sPath, "conf.xml");
               StreamReader sr = new StreamReader(sConfigPath);
               XDocument xDoc = XDocument.Load(sr);
               sr.Close();
               XElement xRoot = xDoc.Root;
               XElement xTCInfo = xRoot.Element("TileCacheInfo");
               XElement xLODInfos = xTCInfo.Element("LODInfos");
               int iLevelCount = xLODInfos.Elements("LODInfo").Count();
               if (iLevelCount > 1)
                    return true; // Not a problem
               if (iLevelCount < 1)
                    return false; // This should never happen?

               // Check if scale level is between 100 (L2) and 24000 (L0)

               XElement xLODInfo, xLevelID, xScale, xResolution;

               xLODInfo = xLODInfos.Element("LODInfo");
               xScale = xLODInfo.Element("Scale");
               string sScale = xScale.Value;
               double dScale = Convert.ToDouble(sScale);
               double dL0Scale = Convert.ToDouble(L0Scale);
               double dL2Scale = Convert.ToDouble(L2Scale);
               if (dScale >= dL0Scale)
                    return false;
               if (dScale <= dL2Scale)
                    return false;

               // Redefine scale levels

               xLevelID = xLODInfo.Element("LevelID");
               xLevelID.Value = "1";
               XElement xLOD0 = new XElement(xLODInfo);
               xLevelID = xLOD0.Element("LevelID");
               xLevelID.Value = "0";
               xScale = xLOD0.Element("Scale");
               xScale.Value = L0Scale;
               xResolution = xLOD0.Element("Resolution");
               xResolution.Value = L0Resolution;
               xLODInfos.AddFirst(xLOD0);
               XElement xLOD2 = new XElement(xLODInfo);
               xLevelID = xLOD2.Element("LevelID");
               xLevelID.Value = "2";
               xScale = xLOD2.Element("Scale");
               xScale.Value = L2Scale;
               xResolution = xLOD2.Element("Resolution");
               xResolution.Value = L2Resolution;
               xLODInfos.Add(xLOD2);

               // Write config file

               StreamWriter sw = new StreamWriter(sConfigPath);
               xDoc.Save(sw);
               sw.Close();

               // Rename L00 folder to L01

               string sLayersPath = Path.Combine(sPath, "_alllayers");
               string sL00Path = Path.Combine(sLayersPath, "L00");
               string sL01Path = Path.Combine(sLayersPath, "L01");
               Directory.Move(sL00Path, sL01Path);

               return true;

          }



     }
}

At some point in the 100.x lifespan of ArcGIS Runtime SDK for .NET, the old tried-and-true method of treating a MapView as just another WPF Visual went sailing out the window.  Granted, the ExportImageAsync method should have been a simple workaround, but for one drawback: overlay items are not included!

 

Now I don't know about you, but I find the OverlayItemsControl to be a great way to add interactive text to a map.  You can have it respond to a mouse-over:

 

 

Bring up a context menu:

 

 

Modify properties:

 

 

And so on.  In the old days, when you created an image of the MapView, the overlays would just come right along:

 

          private RenderTargetBitmap GetMapImage(MapView mv)
          {

               // Save map transform

               System.Windows.Media.Transform t = mv.LayoutTransform;
               Rect r = System.Windows.Controls.Primitives.LayoutInformation.GetLayoutSlot(mv);
               mv.LayoutTransform = null;
               Size sz = new Size(mv.ActualWidth, mv.ActualHeight);
               mv.Measure(sz);
               mv.Arrange(new Rect(sz));

               // Output map

               RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
                    (int)sz.Width, (int)sz.Height, 96d, 96d,
                    System.Windows.Media.PixelFormats.Pbgra32);
               rtBitmap.Render(mv);

               // Restore map transform

               mv.Arrange(r);
               mv.LayoutTransform = t;

               return rtBitmap;

          }

 

Not so today!  Try that approach in 100.6 and you just get a black box.    

 

My workaround:

 

  1. Create a Canvas
  2. Create an Image for the Mapview and add it to the Canvas
  3. Create an Image for every overlay and add it to the Canvas
  4. Create a bitmap from the Canvas

 

Step 3 is trickier than you would think, however, because of two issues:  1) relating the anchor point to the overlay, and 2) taking any RenderTransform into account.

 

As far as I can tell, this is the rule for determining the relationship between the overlay and the anchor point:

 

HorizontalAlignment: Center or Stretch, anchor point is at the center; Left, anchor point is at the right; Right, anchor point is at the left.

VerticalAlignment: Center or Stretch, anchor point is at the center; Top, anchor point is at the bottom; Bottom, anchor point is at the top.

For a Canvas element, the anchor point is at 0,0 -- however, I have not found a good way to create an Image from a Canvas [if the actual width and height are unknown].

 

To create an Image from the element, any RenderTransform must be removed before generating the RenderTargetBitmap.  Then, the Transform must be reapplied to the Image.  Also, you need to preserve HorizontalAlignment and VerticalAlignment if you're creating a page layout using a copy of the MapView, so that the anchor point placement is correct.

 

So here it is, the code for my workaround:

 

using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

using Esri.ArcGISRuntime.Geometry;
using Esri.ArcGISRuntime.UI;
using Esri.ArcGISRuntime.UI.Controls;

namespace Workarounds
{

     public struct MapOverlayExport
     {
          public Image OverlayImage;
          public MapPoint Anchor;
          public MapPoint TopLeft;
     }

     public static class MapExportHelper
     {

          // Export bitmap from map with XAML graphics overlays

          public static async Task<ImageSource> GetMapImage(MapView mv)
          {

               RuntimeImage ri = await mv.ExportImageAsync();
               ImageSource src = await ri.ToImageSourceAsync();
               if (mv.Overlays.Items.Count == 0)
                    return src; // No XAML overlays

               // Create canvas

               double dWidth = mv.ActualWidth;
               double dHeight = mv.ActualHeight;
               Rect rMap = new Rect(0, 0, dWidth, dHeight);
               Size szMap = new Size(dWidth, dHeight);
               Canvas c = new Canvas();

               // Add map image

               Image imgMap = new Image()
               {
                    Height = dHeight,
                    Width = dWidth,
                    Source = src
               };
               imgMap.Measure(szMap);
               imgMap.Arrange(rMap);
               imgMap.UpdateLayout();
               Canvas.SetTop(imgMap, 0);
               Canvas.SetLeft(imgMap, 0);
               c.Children.Add(imgMap);

               // Add map overlays

               List<MapOverlayExport> Overlays = GetMapOverlays(mv);
               foreach (MapOverlayExport overlay in Overlays)
               {

                    // Get Image and location

                    Image img = overlay.OverlayImage;
                    MapPoint ptMap = overlay.TopLeft;
                    Point ptScreen = mv.LocationToScreen(ptMap);

                    // Create and place image of element

                    Canvas.SetTop(img, ptScreen.Y);
                    Canvas.SetLeft(img, ptScreen.X);
                    c.Children.Add(img);
                    img.UpdateLayout();

               }
               c.Measure(szMap);
               c.Arrange(rMap);
               c.UpdateLayout();

               // Create RenderTargetBitmap

               RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
                    (int)dWidth, (int)dHeight, 96d, 96d, PixelFormats.Pbgra32);
               rtBitmap.Render(c);
               return rtBitmap;

          }

          public static List<MapOverlayExport> GetMapOverlays(MapView mv)
          {

               List<MapOverlayExport> Overlays = new List<MapOverlayExport>();
               foreach (object obj in mv.Overlays.Items)
               {

                    // Get element and location

                    if (!(obj is FrameworkElement elem))
                    {
                         Debug.Print("MapExportHelper: Non-FrameworkElement encountered.");
                         continue;
                    }
                    double dW = elem.ActualWidth;
                    double dH = elem.ActualHeight;
                    if ((dH == 0) || (dW == 0))
                    {
                         Debug.Print("MapExportHelper: Unsupported FrameworkElement encountered.");
                         continue;
                    }

                    // Remove RenderTransform and RenderTransformOrigin

                    Transform tRender = elem.RenderTransform;
                    Point ptOrigin = elem.RenderTransformOrigin;
                    elem.RenderTransform = null;
                    elem.RenderTransformOrigin = new Point(0,0);
                    elem.Measure(new Size(dW, dH));
                    elem.Arrange(new Rect(0, 0, dW, dH));
                    elem.UpdateLayout();

                    // Create image of element

                    ImageSource src = null;
                    if (elem is Image imgSrc)
                         src = imgSrc.Source;
                    else
                    {
                         RenderTargetBitmap bmp = new RenderTargetBitmap(
                              (int)dW, (int)dH, 96d, 96d, PixelFormats.Pbgra32);
                         bmp.Render(elem);
                         src = bmp;
                    }
                    Image img = new Image()
                    {
                         Height = dH,
                         Width = dW,
                         Source = src,
                         HorizontalAlignment = elem.HorizontalAlignment,
                         VerticalAlignment = elem.VerticalAlignment,
                         RenderTransform = tRender,
                         RenderTransformOrigin = ptOrigin
                    };

                    // Restore RenderTransform and RenderTransformOrigin

                    elem.RenderTransform = tRender;
                    elem.RenderTransformOrigin = ptOrigin;

                    // Find top left location in map coordinates

                    MapPoint ptMap = MapView.GetViewOverlayAnchor(elem);
                    Point ptScreen = mv.LocationToScreen(ptMap);
                    double dY = 0;
                    double dX = 0;
                    switch (elem.VerticalAlignment)
                    {
                         case VerticalAlignment.Center:
                         case VerticalAlignment.Stretch:
                              dY = -dH / 2;
                              break;
                         case VerticalAlignment.Top:
                              dY = -dH;
                              break;
                    }
                    switch (elem.HorizontalAlignment)
                    {
                         case HorizontalAlignment.Center:
                         case HorizontalAlignment.Stretch:
                              dX = -dW / 2;
                              break;
                         case HorizontalAlignment.Left:
                              dX = -dW;
                              break;
                    }
                    Point ptTopLeftScreen = new Point(ptScreen.X + dX, ptScreen.Y + dY);
                    MapPoint ptTopLeftMap = mv.ScreenToLocation(ptTopLeftScreen);

                    // Add exported overlay to list

                    Overlays.Add(new MapOverlayExport()
                    {
                         OverlayImage = img,
                         Anchor = ptMap,
                         TopLeft = ptTopLeftMap
                    });

               }

               return Overlays;

          }

     }
}
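
 

If you need the composed image on disk rather than on screen, the returned bitmap can be written out with the standard WPF imaging types (a minimal sketch; SaveToPng is an illustrative helper name, not part of the sample):

 

          public static void SaveToPng(BitmapSource bmp, string sPath)
          {
               // Requires System.IO and System.Windows.Media.Imaging.
               PngBitmapEncoder encoder = new PngBitmapEncoder();
               encoder.Frames.Add(BitmapFrame.Create(bmp));
               using (FileStream fs = new FileStream(sPath, FileMode.Create))
                    encoder.Save(fs);
          }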

 

P.S. -- If you want ExportImageAsync to include overlays, vote up this idea:  GeoView.ExportImageAsync should include overlays 

Originally posted by Courtney Kirkham, September 18, 2019 from the MapThis! Blog

While OAuth 2.0 is Esri’s recommended methodology for handling security and authentication for their ArcGIS platform, not everyone using it understands what it does or how to implement it. Here at GEO Jobe, we’ve had to explain it to more than a few of the people we’ve worked with. As such, we thought we’d lay out a quick guide to what OAuth is and how it works.

OAuth 2.0 handles security and authentication for the ArcGIS platform. Image Source

What is OAuth 2.0?

OAuth 2.0 is the protocol that ensures only users you give permission to can access your ArcGIS content. Esri chooses to use OAuth 2.0 for a number of reasons, including this list they’ve provided:

  • OAuth 2.0 meets the needs of both users and applications.
  • There are strong security practices around OAuth 2.0.
  • OAuth 2.0 is designed to function at Internet-scale across domains, networks, cloud services, and applications.
  • As a widely accepted standard OAuth 2.0 has many libraries and helpers for a variety of languages and platforms.

This is an important part of security for controlling who can access or edit content, as well as managing credit usage. By using OAuth 2.0 in your applications, you can make a map of company assets available to anyone in your company while still keeping it hidden from the public. A company working on building a new neighborhood could create a map to track the progress of the homes being built, while ensuring only supervisors can edit the status of the houses.

Perhaps the most important way OAuth 2.0 manages security is controlling access to premium content and services. Since interacting with these resources consumes credits, and credits cost real money, OAuth 2.0 is an important part of making sure that only the people you want accessing those resources are able to do so.
(Bonus: For additional control over security while reducing the overhead in your org, check out security.manager)

You’re not getting that data without valid credentials. Image Source

How does OAuth 2.0 work?

Here at GEO Jobe, we’ve found the best way to explain how OAuth 2.0 works is with an analogy. Say your friend, Chris, got access to some exclusive event – a networking opportunity, a party, or something like that. There is a private guest list for the event, and the doormen are checking everyone. Your friend tells you all you need to do is tell the doorman you’re there with Chris, and the doorman will let you in.

When you get to the event and check in with the doorman, one of three things can happen. We’ve outlined them each below, and explained what they mean in the context of OAuth 2.0.

The Doorman Finds Your Friend; You Get a Wristband and Go In

This is what happens when OAuth 2.0 works. You’re able to get in and see your friend. In the case of ArcGIS, this means you requested access to content that you have permission to see. After OAuth 2.0 checks your credentials, it gives you a token (the wristband) that’s added to all your requests for content after that. Then, you get whatever you need (that you have permission to view), and everything is good.

The Doorman Finds Your Friend and You Don’t Get In

This is when the doorman comes back and tells you they found Chris, but Chris says they don’t know you. While this may be an awkward social situation, in OAuth 2.0, it’s pretty simple. It means you tried to access content, and OAuth 2.0 doesn’t think you are supposed to be able to see it. This will often result in an “Invalid Redirect URI” error.

In terms of development, this happens because the request is coming from a URL the app doesn’t recognize. To fix it, go to the app item in your ArcGIS Organization that was used to register for OAuth 2.0. Then, in the Settings menu, view the “Registered Info”. The domain sending the request will need to be included in the Redirect URIs.

The Doorman Can’t Find Your Friend

Maybe your friend left the party. Maybe the doorman thought the “Chris” they were looking for was a “Christopher” instead of a “Christine”. Regardless of the reason, the doorman can’t find your friend, and they’re not letting you into the party. When this happens, OAuth 2.0 will return an error stating that there is an “Invalid Client ID”. This is also easy for a developer to fix.

This situation occurs because there isn’t an app in the ArcGIS Organization in question with an App ID that matches what OAuth 2.0 was told to look for. This can happen if the app was deleted from your ArcGIS Org, or if the code where the App ID is specified was altered. To fix it, check where the App ID is specified in the code for the OAuth 2.0 call, and check the application in the ArcGIS Org used to register for OAuth 2.0. If the application was deleted, you will need to create and register a new application, then use that App ID. If the application exists, check under the “Settings” menu and the “Registered Info” to find the App ID. This should match the App ID value in the code. If it doesn’t, recopy the App ID from the application in the ArcGIS Organization, then paste the value into the code where the OAuth 2.0 information is initialized.

How to Implement an OAuth 2.0 Application

Setting up an OAuth 2.0 application in your ArcGIS Organization is fairly simple. In fact, it only takes five steps! It’s so easy, we’ve outlined the process below.

1. To start, sign into your ArcGIS Org and go to the Content menu. From there, click on “Add Item” and choose the option for “An Application”.

2. Next, you’ll select the type “Application” and fill out some basic information.

3. After you add the item, go to the Settings page and click the “Registered Info” button. Note: While on the settings page, you may want to select the option for “Prevent this item from being accidentally deleted.”

4. After clicking the “Registered Info” button, the App ID you will need should be visible on the left. The final step will be to update the Redirect URIs for the application. Click the “Update” button on the right side of the screen.

5. A popup with the Registered Info should appear. Any application a developer builds that needs to authenticate via OAuth into your ArcGIS Organization must have its domain added to the approved Redirect URIs of an OAuth application. Add the appropriate domains in the textbox, then click “Add”. After your domains are all added, click the “Update” button at the bottom of the popup.

And there you have it! Five easy steps and you’re ready to use OAuth 2.0 in your ArcGIS Organization.
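
 

On the app side, the registered App ID and one of the approved redirect URIs are what a client supplies when it starts the OAuth 2.0 flow. Here is a minimal, hypothetical sketch using the ArcGIS Runtime SDK for .NET (the App ID and redirect URI are illustrative placeholders; web apps would pass the same two values to their respective APIs):

 

using System;
using Esri.ArcGISRuntime.Security;

public static class OAuthSetup
{
     public static void Register()
     {
          ServerInfo portalInfo = new ServerInfo
          {
               ServerUri = new Uri("https://www.arcgis.com/sharing/rest"),
               TokenAuthenticationType = TokenAuthenticationType.OAuthAuthorizationCode,
               OAuthClientInfo = new OAuthClientInfo
               {
                    ClientId = "YOUR_APP_ID",              // App ID from "Registered Info"
                    RedirectUri = new Uri("my-app://auth") // must match an approved Redirect URI
               }
          };
          AuthenticationManager.Current.RegisterServer(portalInfo);
     }
}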

You can relax, knowing your ArcGIS content is safe and accessible only to those you choose. Image Source

Conclusion

Securing your ArcGIS data is important. OAuth 2.0 can make it simple. If you need any assistance setting up OAuth for your ArcGIS Organization, or need some custom applications built while keeping your data secure, reach out to us at connect@geo-jobe.com. We’ll be glad to help!

Liked this article? Here’s more cool stuff

Does the ArcGIS API for JavaScript work with Content Security Policy?  The short answer is yes, but which version you're using (4.x vs. 3.x) determines the approach to take.  Dojo allows you to configure CSP support:

 

// mapconfig.js
window.dojoConfig = {
     async: true,
     has: {"csp-restrictions": true}
}

 

So the following example works [note that blob support must be enabled]:

 

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta http-equiv="content-security-policy"
               content="script-src 'self' https://js.arcgis.com blob:; object-src 'self'" />

     <title>Using ArcGIS API for JavaScript with CSP</title>
     <script src="./mapconfig.js"></script>
     <link rel="stylesheet" href="https://js.arcgis.com/4.12/esri/css/main.css">
     <script src="https://js.arcgis.com/4.12/"></script>
     <style>
          html, body, #map {
               padding: 0;
               margin: 0;
               height: 100%;
               width: 100%
          }
     </style>

</head>
<body>
     <div id="map"></div>
     <script src="./mapinit412.js"></script>
</body>
</html>

 

// mapinit412.js
require([
     "esri/Map",
     "esri/views/MapView"
], function (Map, MapView) {

     var map = new Map({
          basemap: "topo-vector"
     });

     var view = new MapView({
          container: "map",
          map: map,
          center: [-118.71511, 34.09042],
          zoom: 11
     });
});

 

Note that CSP doesn't allow any inline JavaScript, so even the simplest blocks of code need to be in a separate file.

 

What about 3.x?  Aye, there's the rub.  Although Dojo supports CSP, the ArcGIS API 3.x does not: it contains code that CSP will reject.  Here's an example from VectorTileLayerImpl.js:

 

l = Function("return this")();

 

The only way to get 3.x to work with CSP is to include the dreaded 'unsafe-eval' in the policy string (CSP treats the Function constructor the same as eval).  With that, the following example will work:

 

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta http-equiv="content-security-policy"
               content="script-src 'self' 'unsafe-eval' https://js.arcgis.com; object-src 'self'" />

     <title>Using ArcGIS API for JavaScript with CSP</title>
     <script src="./mapconfig.js"></script>
     <link rel="stylesheet" href="https://js.arcgis.com/3.29/esri/css/esri.css">
     <script src="https://js.arcgis.com/3.29/"></script>
     <style>
          html, body, #map {
               padding: 0;
               margin: 0;
               height: 100%;
               width: 100%
          }
     </style>

</head>
<body>
     <div id="map"></div>
     <script src="./mapinit329.js"></script>
</body>
</html>

 

// mapinit329.js
require(["esri/map"], function (Map) {
     var map = new Map("map", {
          center: [-118, 34.5],
          zoom: 8,
          basemap: "topo"
     });
});

I received a request to provide all videos and other files available for an area of interest on the map.

 

Using ArcGIS Pro, I digitized a polygon to enclose the desired area, then:

Used this polygon to select all pipeline features that intersect the area.
Exported the selected pipes to an Excel file.
Copied only the user-defined unique ID field into a local text file, as list1.txt, ensuring no extra newlines or whitespace at the beginning or end of the file.

Moved list1.txt to a new directory labeled 'stagingFiles'.

 

Using the command line, write the contents of the directory that contains the desired files to a local text file, as list2.txt:
dir /b > list2.txt
Remove the 'list2.txt' entry from the resulting file, as well as the names of any subdirectories.
If subdirectories exist, create another text file within each subdirectory, as list2_1.txt, then move it to the 'stagingFiles' directory; remove the 'list2_1.txt' entry and any subdirectory names from it, and repeat for the other subdirectories.

 

Use this Python script, following the remaining instructions within it:

 

import re

# Read the list of unique pipe IDs (one per line).
with open(r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list1.txt', 'r') as f:
    idList = [line.strip() for line in f]

# Read the directory listing once.
with open(r'\\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list2.txt', 'r') as g:
    contents = g.read()

# Print every line of the listing that contains one of the IDs.
for pipeId in idList:
    pattern = re.compile(r'(.*)' + re.escape(pipeId) + r'(.*)')
    for match in pattern.finditer(contents):
        print match.group(0)


print "Copy and paste the results of the regular expression search (above this printed statement) to a text file, as list3.txt, save it, and close it."
resultsFile = raw_input("paste path to text file here: ")
# \\cityhall\data\GIS_MAPS\AndresCastillo\toDo\stormCCTVReportsVideos3336NFlaglerOutfallImprovementsTicket40365\stagingFiles\list3.txt

# Print the de-duplicated filenames.
with open(resultsFile, 'r') as h:
    names = [line.strip() for line in h]
    for name in set(names):
        print name

print 'Now take the results above, and paste them into list4.txt (move that text file to the intended directory to search for files).'
print 'list4.txt is used in conjunction with the command line argument FOR /F "delims=" %N in (list4.txt) do COPY "%N" "C:\\targetFolder" to copy files to an intended directory.'
print 'If subdirectories exist, make a new list3.txt from the list2_# file (by changing the file name in the path above), and perform the instructions in this script again.'
print 'Decide what to do when a filename collides with another (the COPY command prompts to overwrite: yes/no/all).'
print 'Separate the cmd line results to single out the files that did not copy successfully.'
print 'If feasible, rename any collided file by appending "_1" to its current name.'
print 'Once done, remove list4.txt, and the target folder (if applicable), from the intended directory.'
print "______________________________________________Operation is Complete"





# Try the findall() method without groups, and it should work.
# Another use case for regex would be validating user input in client
# apps to ensure it meets a given criteria.

# Approaches that didn't work: rewriting the contents with pattern.sub(),
# walking the target directory with os.walk() and matching escaped
# filenames, writing the de-duplicated results directly to list4.txt,
# and the fnmatch and os modules.





# To get the filenames of the resources in a directory:
# Hold the "Shift" key, right-click the folder and select "Open command
# window here" (this only works with folders, not libraries).
# Type "dir /b > dirlist.txt" without quotes and press "Enter" to create
# a list containing file names only.
# To review the list, open the text file in Microsoft Excel ("Ctrl-O"
# brings up the Open dialog).

 

Sent the client the requested files.

 

Resources:

 

Google:
How Do I select multiple files in a folder from a list of file names
Select multiple files in same folder with variations of filenames python
Select multiple files in same folder with variations of filenames python regular expression
write contents of directory to text file.
Select variations of many file names at once python regex
regular expression tester

 

https://pymotw.com/2/glob/
https://docs.python.org/2/library/fnmatch.html

 

https://www.tenforums.com/general-support/110415-how-do-i-select-multiple-files-folder-list-file-names.html
How Do I select multiple files in a folder from a list of file names


The simplest way to use a text file with a filename on each line (such as list.txt) to copy the files to a folder such as C:\Destination is by using this single command in a command prompt:

FOR /F "delims=" %N in (list.txt) do COPY "%N" "C:\targetFolder"

'for' loops through all the filenames in list.txt
"delims=" says use the whole of each line as the parameter %N
the quotes around %N in 'COPY "%N"' allow for any filenames that contain spaces
"C:\targetFolder" specifies the folder you want to copy to (it must already exist; create it first if necessary)

If the text file contains just the file names, then the command has to be run in the folder that contains the files to be copied. To go to that folder, first use the 'change directory' command: CD <full path to the folder>
eg: CD C:\Source_folder

If the text file contains the full path and filename on each line, eg:
C:\Users\Me\Pictures\SourceFiles\Filename.jpg
...then the CD step is not needed.

If the text file is in a different folder, give the full path to it in the FOR command, eg: (C:\Temp\list.txt)

 

https://answers.microsoft.com/en-us/windows/forum/windows_vista-files/select-multiple-files-in-same-folder-from-a-list/d6bba385-f87d-448a-ada8-76cec34d5a63?page=1
Select multiple files in same folder from a list of file names
Tiffany McLeod Replied on April 25, 2009

You can use an Excel spreadsheet to automatically format the code, then copy and paste it into a text document which you would save as a *.bat file. I have a spreadsheet I've created for this, and I'll share it with you; you can download it by clicking on the following link (hopefully).

Download Excel Spreadsheet

How to Use:

Open the Spreadsheet.

At the bottom of the screen, you will see that there are two worksheets in this file.
If your list of names includes the full file path (example: c:\weddings\sally\img1.jpg), choose the worksheet labeled "Full Path".
If your list only has the filenames (example: myimage.jpg), choose the worksheet labeled "Filename Only".

I'll explain the Filename Only worksheet:

Image

You will NOT enter any Data into the first three columns: that's the output.
In column E (Current Folder Path) Type the full folder path where the pictures are currently located. Make sure that you include the final "\", as shown above. You will only need to type this once.

The next Column (F) is the File Name Column. Paste your list of file names here, one name per cell (the list you paste from should have only one name per line). My spreadsheet allows for well over 200 filenames before the formulas stop working (for more files, simply extend the formulas).

Type the full path of the folder you want to move the pictures to into column G, as shown.

Now, we'll look back at the first three columns: A, B, and C. Find the spot where the output in column B no longer has a filename after the folder path. Select all the output in the three columns, above the ending line. For example, in the sheet above, the ending line is Line 7 (we don't want to include that line), so the selected range would be A2:C6. For two hundred files, the range would be A2:C202.

Copy.

Open notepad. Paste.

Save as a .bat file. (Choose Save. Select the folder you wish to save it in. Type move.bat into the name line. Make sure that "All files" is selected from the file type drop-down list.) This .bat file is reusable. Simply right-click and choose Edit to reuse it instead of making a new one each time.

Once the .bat file is saved, double-click on it to run it.

Check your destination folder, and make sure the files moved as desired.

If you want to copy the files instead of move them, simply type COPY into cell A2.

Use the Full Path worksheet in the same manner, except you don't need to enter the current folder path into cell E2.

Now for the disclaimer: Follow these instructions at your own risk. I take no responsibility for any damage caused to your data or system as a consequence of using my spreadsheet or following these instructions. Back up your data before using the .bat file.
You should test this process to make sure that you understand it, before using it for important files.

I freely admit that this is probably a bit clunky and inelegant, but it works and it is very versatile for generating large batches of DOS commands.

Best Wishes,
Tiffany McLeod aka BookwormDragon

 
https://realpython.com/working-with-files-in-python/#simple-filename-pattern-matching-using-fnmatch
Working With Files in Python
by Vuyisile Ndlovu, Jan 21, 2019

 

https://thispointer.com/5-different-ways-to-read-a-file-line-by-line-in-python/
5 Different ways to read a file line by line in Python

 

 

 

 


https://www.sevenforums.com/general-discussion/201734-select-search-multiple-files-copy-paste-new-folder.html
select/search multiple files, copy and paste to a new folder

 

 


Article written by Amy Niessen with contributions from Ciara Rowland-Simms

 

On Wednesday, May 15th, the Cardiff R&D Center co-hosted a Rust and C++ birthday party at Yolk Recruitment to celebrate Rust's 4th birthday! Despite short notice, we were able to get the word out in time for a nice mixture of full-time, freelance, and hobbyist programmers, as well as a few students, to join us. Quite a few people expressed interest in helping out with future events and demonstrated a lot of enthusiasm for a Rust/C++ community in Cardiff!

 

To begin, you can't have a party without cake, and to celebrate the birthday properly we had one, topped with a Ferris the crab made from icing by Jack Kelly's partner, Sofia.

 

cake and Ferris the crab

 

We then began to introduce our speakers. We had Dan Morgan from DevOpsGroup, Ciara Rowland-Simms from Esri, Chris Light from Esri, and Jack Kelly from DevOpsGroup, with Chris doubling as MC for the event.

 

The first two talks were about learning new languages, specifically Rust and C++.

 

Dan had never done C++ until that week and spoke on the confusion you face when trying to find best practices and up-to-date learning materials online. His talk will be part of a series documenting his journey into C++, driven by advice from the audience about what resources to use next!

 

Dan and Rust

 

Ciara did a talk on learning Rust, having also never used the language. By contrast, there is a very coherent documentation story for Rust, as it is a very modern language. The learning experience was therefore comparatively painless, and she was able to cover install and setup, including debugging, along with discussing some cool and some controversial Rust language features (such as the heavy use of macros, the ability to shadow variables, and implicit returns).

 

Ciara and Rust

 

Chris’s talk was Modern C++: ACCU 2019 revelations and covered some of the cool new features of C++20. He also discussed C++17 and, more broadly, the modernization of the language. The talk provoked some really good discussion about the network capabilities that are lacking in the standard library and provided a great space for talking about where the language is moving.

 

Chris and C++

 

The final talk, given by Jack, was a dive into Rust best practices, helpful tips, formatting and linting tools, and how easy CI/CD can be with Rust. He really highlighted why people are so excited about Rust and how easily it can be leveraged to hit the ground running, even with only limited experience in the language.

 

 

 

In the end, we had some specific language questions, which is always good. There were a lot of really good discussions taking place during the event and, of course, people were already sharing excitement in anticipation of future meetups.

  

Going forward, we also hope to bring in more speakers – which hopefully won’t be difficult given the enthusiasm from audience members at our first event! We really liked having talks that were a mixture of beginner-accessible and discussion-provoking for more experienced developers. We struck that balance pretty well this time and hope to do so again in future meetups.

 

Overall, we were really pleased with how the event went and are really excited to see the beginning of a Cardiff C++/Rust community where we can all learn from each other and grow! We look forward to the next meetup and will be sure to announce it on the Meetup.com page. Be sure to follow it to stay in the loop on our next adventure!

Last month we hosted a unique GeoDev Webinar in which Manushi Majumdar shared her presentation on "Thinking Spatially and Statistically". Manushi introduced types and characteristics of spatial data and advanced GIS analysis techniques. She covered a few basic concepts of statistics, showed how they differ in a spatial context, and advanced towards Spatial Machine Learning with ArcGIS.

 

Here are the questions that were received during the webinar along with their respective answers:

 

Q: What is the difference between machine learning and statistics? For example, with regression, is there a difference? This always puzzles me!

A: Here is a resource to understand the difference: https://www.kdnuggets.com/2016/11/machine-learning-vs-statistics.html

 

Q: Are there any geoprocessing tools built into ArcGIS for running machine learning algorithms?

A: Yes, ArcGIS has support for several machine learning techniques. We would suggest looking at this blog post to learn more about ML support within ArcGIS Desktop. https://www.esri.com/arcgis-blog/products/arcgis-pro/analytics/machine-learning-in-arcgis/

 

Q: Can you provide the link to the notebook again?

A: Hub-Tutorials/GeoDev_ServiceRequests.ipynb at master · esridc/Hub-Tutorials · GitHub

 

Q: Are there any other good resources for finding examples of utilizing Machine Learning with GIS?

A: Here are just a few ArcGIS blogs demonstrating examples:
https://www.esri.com/arcgis-blog/products/product/analytics/density-based-clustering-exploring-fatal-car-accident-data-to-find-systemic-problems/


https://www.esri.com/arcgis-blog/products/arcgis-pro/analytics/using-forest-based-classification-and-regression-to-model-and-estimate-house-values/


https://www.esri.com/arcgis-blog/products/arcgis-enterprise/analytics/the-science-of-where-seagrasses-grow-arcgis-and-machine-learning/?rmedium=redirect&rsource=blogs.esri.com/esri/arcgis/2017/09/18/the-science-of-where-seagrasses-grow-arcgis-and-machine-learning

 

Q: Spatial Join: I see the tool has the capability to join two layers without common attributes. But can this be done on multiple layers in a single shot? The built-in tool only has the option to select two layers. What are the options?

A: Join works on a 1:1 principle; you can only join one layer to another. That said, you can use the concept of a table 'Relate' to join one table to many using a common attribute in those tables (this does not work spatially).

 

Q: Is it possible to integrate ArcGIS with machine learning software like Jupyter Notebook?

A: You can use ArcPy as well as the ArcGIS API for Python in Jupyter notebooks. 

 

Q: Is there a way to use machine learning to predict or project possible future incident locations without assigning a z-value?

A: Z-score (standard score) denotes the number of standard deviations a data point lies from the mean. Simply put, it conveys where a point falls relative to the mean of the distribution. Prediction or classification does not need z-scores for input variables.

 

Q: Could you walk us through the hotspot analysis? How do you access these tools?

A: Read through this https://pro.arcgis.com/en/pro-app/tool-reference/spatial-statistics/h-how-hot-spot-analysis-getis-ord-gi-spatial-stati.htm to learn more about Hot Spot Analysis. It can be accessed within the Spatial Statistics (Mapping Clusters) toolbox in ArcGIS Desktop and under the Analyze Patterns category in ArcGIS Online.

 

Q: Can we do the machine learning analyses using 10.6 geoprocessing tools?

A: Yes. Apart from the usual tools, ArcGIS Desktop 10.6 comes with two new tools: Deep Learning Model To Ecd and Export Training Data For Deep Learning.

 

Q: I have a GIS online account. How can I access the data demonstration in ArcGIS online?

A: The data I used for my examples is publicly available. Once you add it to your ArcGIS Online account, you can use the Summarize Center and Dispersion tool there to generate spatial mean, median and standard dispersion for your data.

 

Q: How can I use ArcGIS for linear regression or logistical regression analysis?

A: Support for regression, both linear and logistic, is available in ArcGIS Desktop Spatial Statistics (Modeling Spatial Relationships) toolbox. Click here https://pro.arcgis.com/en/pro-app/tool-reference/spatial-statistics/an-overview-of-the-modeling-spatial-relationships-toolset.htm to learn more.

 

Q: Is machine learning part of programming, or is it remote sensing?

A: Machine Learning involves concepts of statistics as well as algorithms to solve problems based on patterns or inferences drawn from data. Remote sensing, on the other hand, involves studying the planet using remote instruments. Machine Learning can have applications in the field of Remote Sensing, for instance, to detect buildings and roads using satellite imagery data.

 

Q: Is the Jupyter environment embedded directly within Esri ML module?

A: ArcGIS Enterprise 10.7 comes with Hosted Notebooks, which let you perform spatial analysis and data science workflows in a notebook within your portal. Other than that, you can use ArcPy or the ArcGIS API for Python in an external Jupyter Notebook too.

 

Q: Which interpolation technique suits best when you are dealing with underground water data?

A: While it depends on your sampling size and distance, kriging might be a good interpolation technique.

 

Q: Please suggest out-of-the-box tools provided by ArcGIS for machine learning algorithms.

A: ArcGIS has support for several machine learning techniques. I'd suggest looking at this blog post to learn more about ML support within ArcGIS Desktop. https://www.esri.com/arcgis-blog/products/arcgis-pro/analytics/machine-learning-in-arcgis/

 

For more information, Manushi shared her presentation: GeoDev Webinar - Thinking Spatially and Statistically

 

Also, for the full recording of the webinar, click here.

Well!  In my previous article, I presented a workaround for a bizarre MMPK bug that reappeared in version 100.5 of the ArcGIS Runtime SDK for .NET; but just when I thought I couldn't find another one even more unthinkably bizarre, up pops this: QueryRelatedFeaturesAsync will return a bad result when the FeatureLayer of a GeodatabaseFeatureTable is a sublayer of a GroupLayer.

 

Huh?  You can check out the attached Visual Studio project for confirmation, but in the meantime we have a conundrum.  It would appear that, just when the GroupLayer class is finally implemented, we have to chuck it right back out until a safer, more effective version is delivered.  I've updated an earlier article of mine to reflect that situation.  Nonetheless, I got to thinking about how Runtime support for querying M:N relationships in a mobile map package didn't even start to appear until 100.4, and what I would need to do in order to support them were I still stuck at 100.3. Or, what if QueryRelatedFeaturesAsync were to fail again in a future version?

 

Supporting one-to-one and one-to-many relationships is actually fairly simple, since the RelationshipInfo class gives the required information, when retrieved from both origin and destination tables.  But many-to-many relationships are entirely another can of worms, because some crucial information is inaccessible via Runtime, even though it's encoded in the geodatabase.

 

Contrary to the wording in the documentation for the RelationshipInfo class [and I quote: "A relationship consists of two and only two tables"], M:N relationships involve a third, intermediate table.  Querying an M:N relationship means querying that intermediate table first: for example, to find the gas valves related to a given regulator station, you look up the station's OBJECTID in the intermediate table's REGSTATIONOBJECTID column, collect the matching GASVALVEOBJECTID values, and then query the valve table for those IDs.  And that's precisely the information which is withheld from the Runtime developer.

 

Let's take a look at how relationships are stored in a mobile map package.  In my previous article, I introduced you to the GDB_ServiceItems table.  The ItemInfo field in that table stores the JSON data used to hydrate the ArcGISFeatureLayerInfo class:

 

View of GDB_ServiceItems in SQLiteSpy

 

Here's the JSON that describes the RegulatorStation to GasValve relationship from the origin role:

 

{
     "id": 4,
     "name": "Gas Valve",
     "relatedTableId": 10,
     "cardinality": "esriRelCardinalityManyToMany",
     "role": "esriRelRoleOrigin",
     "keyField": "OBJECTID",
     "composite": false,
     "relationshipTableId": 73,
     "keyFieldInRelationshipTable": "REGSTATIONOBJECTID"
}

 

And here's the description for the destination role:

 

{
     "id": 4,
     "name": "Regulator Station",
     "relatedTableId": 13,
     "cardinality": "esriRelCardinalityManyToMany",
     "role": "esriRelRoleDestination",
     "keyField": "OBJECTID",
     "composite": false,
     "relationshipTableId": 73,
     "keyFieldInRelationshipTable": "GASVALVEOBJECTID"
}

 

The two crucial items that are not included in the RelationshipInfo class are relationshipTableId and keyFieldInRelationshipTable.  But how to get at that information in your app?  Aye, there's the rub.  In short, you need to extract the geodatabase from the mobile map package and query the GDB_ServiceItems table directly.  That's where you need a library such as System.Data.SQLite, which is available via the NuGet Package Manager:

 

NuGet Package Manager

 

Given the necessary tools, the first step is to extract the geodatabase to a temporary location:

 

          public async Task Init(string sMMPKPath, Geodatabase gdb)
          {
               string sGDBPath = gdb.Path;
               string sGDBName = Path.GetFileName(sGDBPath);
               string sTempDir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
               Directory.CreateDirectory(sTempDir);
               string sTempPath = Path.Combine(sTempDir, sGDBName);
               using (ZipArchive zip = ZipFile.OpenRead(sMMPKPath))
               {
                    ZipArchiveEntry zipEntry = zip.GetEntry(sGDBPath);
                    zipEntry.ExtractToFile(sTempPath);
               }

 

Next, query the desired information, taking the steps necessary to clean up afterwards:

 

               List<string> ItemInfos = new List<string>();
               string sConn = "Data Source=" + sTempPath + ";Read Only=True";
               string sSQL = "SELECT ItemInfo FROM GDB_ServiceItems";
               using (SQLiteConnection sqlConn = new SQLiteConnection(sConn))
               {
                    sqlConn.Open();
                    using (SQLiteCommand sqlCmd = new SQLiteCommand(sSQL, sqlConn))
                    {
                         using (SQLiteDataReader sqlReader = sqlCmd.ExecuteReader())
                         {
                              while (sqlReader.Read())
                                   ItemInfos.Add(sqlReader.GetString(0));
                              sqlReader.Close();
                         }
                    }
                    sqlConn.Close();
               }
               GC.Collect();
               GC.WaitForPendingFinalizers();
               Directory.Delete(sTempDir, true);
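
 

Here, ExtendedRelationshipInfo can be as simple as a container pairing the Runtime-supplied RelationshipInfo with the two values recovered from GDB_ServiceItems (a minimal sketch; the attached project defines the actual class):

 

          public class ExtendedRelationshipInfo
          {
               public RelationshipInfo BasicInfo { get; set; }
               public long? RelationshipTableId { get; set; }
               public string KeyFieldInRelationshipTable { get; set; }
          }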

 

Finally, combine the missing ingredients with the out-of-the-box information:

 

               _infos = new Dictionary<long, Dictionary<long, ExtendedRelationshipInfo>>();
               foreach (string sInfo in ItemInfos)
               {
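                     // NOTE: _js is assumed to be a JavaScriptSerializer
                     // instance defined elsewhere in the class.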

                    Dictionary<string, object> info = _js.DeserializeObject(sInfo) as Dictionary<string, object>;
                    if (!info.ContainsKey("relationships"))
                         continue;
                    object[] relationships = info["relationships"] as object[];
                    if (relationships.Length == 0)
                         continue;
                    long iTableID = Convert.ToInt64(info["id"]);

                    // Get basic table relationship infos

                    GeodatabaseFeatureTable gfTab = gdb.GeodatabaseFeatureTable(iTableID);
                    if (gfTab.LoadStatus != Esri.ArcGISRuntime.LoadStatus.Loaded)
                         await gfTab.LoadAsync();
                    Dictionary<long, RelationshipInfo> BasicInfos = new Dictionary<long, RelationshipInfo>();
                    foreach (RelationshipInfo relInfo in gfTab.LayerInfo.RelationshipInfos)
                         BasicInfos[relInfo.Id] = relInfo;

                    // Add extended data

                    Dictionary<long, ExtendedRelationshipInfo> ExtendedInfos = new Dictionary<long, ExtendedRelationshipInfo>();
                    foreach (object obj in relationships)
                    {
                         Dictionary<string, object> rel = obj as Dictionary<string, object>;
                         long iRelID = Convert.ToInt64(rel["id"]);
                         string sCard = rel["cardinality"].ToString();
                         long? iRelTableID = null;
                         string sKeyField = null;
                         if (sCard == "esriRelCardinalityManyToMany")
                         {
                              iRelTableID = Convert.ToInt64(rel["relationshipTableId"]);
                              sKeyField = rel["keyFieldInRelationshipTable"].ToString();
                         }
                         ExtendedRelationshipInfo erInfo = new ExtendedRelationshipInfo()
                         {
                              BasicInfo = BasicInfos[iRelID],
                              RelationshipTableId = iRelTableID,
                              KeyFieldInRelationshipTable = sKeyField
                         };
                         ExtendedInfos[iRelID] = erInfo;
                    }
                    _infos[iTableID] = ExtendedInfos;

               } // foreach

 

Here, then, is the code for querying related features:

 

          public async Task<FeatureQueryResult> QueryRelated(ArcGISFeature feat, long iRelID)
          {

               // Get relationship data

               if (!(feat.FeatureTable is GeodatabaseFeatureTable gfTabSource))
                    return null;
               long iTableID = gfTabSource.LayerInfo.ServiceLayerId;
               if (!_infos.ContainsKey(iTableID))
                    return null;
               Dictionary<long, ExtendedRelationshipInfo> ExtendedInfos = _infos[iTableID];
               if (!ExtendedInfos.ContainsKey(iRelID))
                    return null;
               ExtendedRelationshipInfo extInfoSource = ExtendedInfos[iRelID];
               RelationshipInfo infoSource = extInfoSource.BasicInfo;
               long iRelTableID = infoSource.RelatedTableId;
               if (!_infos.ContainsKey(iRelTableID))
                    return null;
               ExtendedInfos = _infos[iRelTableID];
               if (!ExtendedInfos.ContainsKey(iRelID))
                    return null;
               ExtendedRelationshipInfo extInfoTarget = ExtendedInfos[iRelID];
               RelationshipInfo infoTarget = extInfoTarget.BasicInfo;

               // Build query

               string sKeyValSource = feat.GetAttributeValue(infoSource.KeyField).ToString();
               Geodatabase gdb = gfTabSource.Geodatabase;
               GeodatabaseFeatureTable gfTabTarget = gdb.GeodatabaseFeatureTable(iRelTableID);
               string sKeyFieldTarget = infoTarget.KeyField;
               Field fieldKeyTarget = gfTabTarget.GetField(sKeyFieldTarget);
               StringBuilder sb = new StringBuilder();
               sb.Append(sKeyFieldTarget);
               if (infoSource.Cardinality == RelationshipCardinality.ManyToMany)
               {

                    // Gather key values from intermediate table

                    GeodatabaseFeatureTable gfTabRel = gdb.GeodatabaseFeatureTable(extInfoSource.RelationshipTableId.Value);
                    string sKeyFieldRelSource = extInfoSource.KeyFieldInRelationshipTable;
                    Field fieldRelSource = gfTabRel.GetField(sKeyFieldRelSource);
                    string sWhere = sKeyFieldRelSource + " = " + sKeyValSource;
                    if (fieldRelSource.FieldType == FieldType.Guid)
                         sWhere = sKeyFieldRelSource + " = '" + sKeyValSource + "'";
                    QueryParameters qpRel = new QueryParameters() { WhereClause = sWhere };
                    FeatureQueryResult resultRel = await gfTabRel.QueryFeaturesAsync(qpRel);
                    if (resultRel.Count() == 0)
                         return resultRel;
                    string sKeyFieldRelTarget = extInfoTarget.KeyFieldInRelationshipTable;
                    Field fieldRelTarget = gfTabRel.GetField(sKeyFieldRelTarget);
                    sb.Append(" IN ( ");
                    bool bFirst = true;
                    foreach (Feature featRel in resultRel)
                    {
                         if (bFirst)
                              bFirst = false;
                         else
                              sb.Append(", ");
                         string sKeyValTarget = featRel.GetAttributeValue(sKeyFieldRelTarget).ToString();
                         if (fieldRelTarget.FieldType == FieldType.Guid)
                              sb.Append("'" + sKeyValTarget + "'");
                         else
                              sb.Append(sKeyValTarget);
                    }
                    sb.Append(" ) ");

               }
               else
               {
                    sb.Append(" = ");
                    if (fieldKeyTarget.FieldType == FieldType.Guid)
                         sb.Append("'" + sKeyValSource + "'");
                    else
                         sb.Append(sKeyValSource);
               }

               // Query related features

               QueryParameters qp = new QueryParameters() { WhereClause = sb.ToString() };
               return await gfTabTarget.QueryFeaturesAsync(qp);

          }
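
 

As a usage sketch (variable names are hypothetical; relationship id 4 is the RegulatorStation-to-GasValve relationship from the JSON above), assuming the RelationshipHelper instance has been initialized with Init:

 

               FeatureQueryResult result = await relHelper.QueryRelated(featRegStation, 4);
               if (result != null)
                    foreach (Feature f in result)
                         Debug.Print(f.GetAttributeValue("OBJECTID").ToString());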

 

Needless to say, this is a pretty extreme approach to take.  Nonetheless, you never know when this knowledge may come in useful.  

 

UPDATE:

 

It occurred to me that since I routinely automate MMPK creation using Python, I could also create companion files containing the many-to-many relationships.  I've added a new attachment that contains both a Python script and a revised version of the RelationshipHelper class that takes advantage of it.  Now it's more feasible to support both group layers and related feature queries.

In my previous article, I presented a workaround for preserving group layers in a mobile map opened using ArcGIS Runtime SDK for .NET 100.5.  Today's topic involves something a bit nastier.  It can be pretty frustrating when a bug that is fixed in an earlier software version reappears in a later one.  The lesson here is: Never discard your workaround code!

 

The bug in question involves certain multi-layer marker symbols that are not rendered properly when rotated.  For example, see this symbol as shown in the original ArcGIS Pro project:

 

Rotated symbol in ArcGIS Pro

Here's how it looks when exported to a mobile map package and opened using ArcGIS Runtime (see the attached Visual Studio example project):

 

Rotated symbol in ArcGIS Runtime, all jumbled up

Yikes!  This problem was identified at 100.1 and fixed at 100.2, but at 100.5 it once more rears its ugly head.  One workaround is to set ArcGISFeatureTable.UseAdvancedSymbology to false.  This causes marker symbols to be rendered as picture markers.  That's fine until you run into two limitations.  The first appears when you set a reference scale and zoom in:

 

Zoomed into a bitmap

But even more challenging, what if you want to change symbol colors on the fly?  In theory, you can do that with a bitmap, but it's beyond my skill to deal with the dithering:

 

Failed attempt to change color of a dithered bitmap

There's another approach, but until Esri implements more fine-grained class properties and methods, manipulating symbols involves a lot of JSON hacking.  Before I go any further, let's crack open a mobile map package and see where drawing information is stored.  If you examine the mobile geodatabase using a tool such as SQLiteSpy, you will see a table called GDB_ServiceItems:

 

View of GDB_ServiceItems in SQLiteSpy

 

That's the raw JSON for the data retrieved by ArcGISFeatureTable.LayerInfo.DrawingInfo.  Fortunately, there's no need to hack into the table, because you can get the renderer for a feature layer, retrieve the symbol(s), and convert them to JSON.  Then you make whatever edits you want, and create a new symbol.

 

          public static Symbol UpdateSymbolJSON(MultilayerPointSymbol symOld, Color colorOld, Color colorNew)
          {
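               // NOTE: _js is assumed to be a JavaScriptSerializer instance
               // defined elsewhere in the helper class.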
               string sOldJSON = symOld.ToJson();
               Dictionary<string, object> dict = (Dictionary<string, object>)_js.DeserializeObject(sOldJSON);
               SymbolHelper.ProcessObjectColorJSON(dict, colorOld, colorNew);
               string sNewJSON = _js.Serialize(dict);
               Symbol symNew = Symbol.FromJson(sNewJSON);
               return symNew;
          }
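
 

As a hypothetical usage sketch (featureLayer, colorOld, and colorNew are illustrative names), the edited symbol can be pushed back through the layer's renderer:

 

               if (featureLayer.Renderer is SimpleRenderer sr &&
                    sr.Symbol is MultilayerPointSymbol symOld)
               {
                    Symbol symNew = SymbolHelper.UpdateSymbolJSON(symOld, colorOld, colorNew);
                    featureLayer.Renderer = new SimpleRenderer(symNew);
               }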

 

So what's the workaround?  The nature of the bug seems to be an inability to process offsetX and offsetY correctly.  In fact, they seem to be reversed.  So let's see what happens when the offsets are reversed in the JSON:

 

Symbol with offsets reversed

Nope.  Not quite there.  What I finally ended up doing was combining the offset layers into a single layer with no offsets.  Fortunately again, characters are already converted to polygons in the JSON, or I would be doing a lot more work.  First, I collect the offset layers and find the smallest interval (points per coordinate unit):

 

               bool[] Offset = new bool[layers.Length];
               List<OffsetLayer> OffsetLayers = new List<OffsetLayer>();
               double dInterval = double.MaxValue;
               for (int i = 0; i < layers.Length; i++)
               {

                    Dictionary<string, object> lyr = layers[i] as Dictionary<string, object>;

                    // Check for X and/or Y offset

                    bool bOffset = false;
                    double dOffsetX = 0;
                    double dOffsetY = 0;
                    if (lyr.ContainsKey("offsetX"))
                    {
                         dOffsetX = Convert.ToDouble(lyr["offsetX"]);
                         lyr["offsetX"] = 0;
                         bOffset = true;
                    }
                    if (lyr.ContainsKey("offsetY"))
                    {
                         dOffsetY = Convert.ToDouble(lyr["offsetY"]);
                         lyr["offsetY"] = 0;
                         bOffset = true;
                    }
                    Offset[i] = bOffset;
                    if (!bOffset)
                         continue;

                    // Get offset layer data

                    Dictionary<string, object> frame = lyr["frame"] as Dictionary<string, object>;
                    object[] markerGraphics = lyr["markerGraphics"] as object[];
                    Dictionary<string, object> markerGraphic = markerGraphics[0] as Dictionary<string, object>;
                    Dictionary<string, object> geometry = markerGraphic["geometry"] as Dictionary<string, object>;
                    object[] rings = geometry["rings"] as object[];
                    int ymin = Convert.ToInt32(frame["ymin"]);
                    int ymax = Convert.ToInt32(frame["ymax"]);
                    double size = Convert.ToDouble(lyr["size"]);
                    double dInt = size / (ymax - ymin);
                    if (dInt < dInterval)
                         dInterval = dInt;
                    OffsetLayer layer = new OffsetLayer()
                    {
                         offsetX = dOffsetX,
                         offsetY = dOffsetY,
                         xmin = Convert.ToInt32(frame["xmin"]),
                         ymin = ymin,
                         xmax = Convert.ToInt32(frame["xmax"]),
                         ymax = ymax,
                         size = size,
                         rings = rings
                    };
                    OffsetLayers.Add(layer);

               } // for
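
 

For reference, OffsetLayer here can be a simple container for an offset layer's frame, offsets, size, and ring geometry (a minimal sketch; the attached project defines the actual class):

 

               public class OffsetLayer
               {
                    public double offsetX;
                    public double offsetY;
                    public int xmin;
                    public int ymin;
                    public int xmax;
                    public int ymax;
                    public double size;
                    public object[] rings;
               }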

 

Then I set up the combined frame and recalculate the ring coordinates:

 

               int iMinX = 0;
               int iMinY = 0;
               int iMaxX = 0;
               int iMaxY = 0;
               List<object[]> OffsetRings = new List<object[]>();
               foreach (OffsetLayer lyr in OffsetLayers)
               {

                    double dX, dY;
                    int iX, iY;

                    // Set up transformation

                    double dInt = lyr.size / (lyr.ymax - lyr.ymin);
                    double dOffsetX = lyr.offsetX / dInt;
                    double dOffsetY = lyr.offsetY / dInt;
                    double dScale = dInt / dInterval;
                    dX = (lyr.xmin + dOffsetX) * dScale;
                    iX = (int)dX;
                    if (iX < iMinX)
                         iMinX = iX;
                    dX = (lyr.xmax + dOffsetX) * dScale;
                    iX = (int)dX;
                    if (iX > iMaxX)
                         iMaxX = iX;
                    dY = (lyr.ymin + dOffsetY) * dScale;
                    iY = (int)dY;
                    if (iY < iMinY)
                         iMinY = iY;
                    dY = (lyr.ymax + dOffsetY) * dScale;
                    iY = (int)dY;
                    if (iY > iMaxY)
                         iMaxY = iY;

                    // Recalculate rings

                    foreach (object obj in lyr.rings)
                    {
                         object[] ring = obj as object[];
                         foreach (object o in ring)
                         {
                              object[] pt = o as object[];
                              pt[0] = (int)((Convert.ToInt32(pt[0]) + dOffsetX) * dScale);
                              pt[1] = (int)((Convert.ToInt32(pt[1]) + dOffsetY) * dScale);
                         }
                         OffsetRings.Add(ring);
                    }

               } // foreach
               double dSize = (iMaxY - iMinY) * dInterval;

 

Finally, I assemble a new symbol layer list:

 

               List<object> NewLayers = new List<object>();
               bool bFirst = true;
               for (int i = 0; i < layers.Length; i++)
               {

                    if (!Offset[i])
                    {
                         NewLayers.Add(layers[i]);
                         continue;
                    }
                    else if (!bFirst)
                         continue;

                    // Update first offset layer

                    Dictionary<string, object> lyr = layers[i] as Dictionary<string, object>;
                    Dictionary<string, object> frame = lyr["frame"] as Dictionary<string, object>;
                    frame["xmin"] = iMinX;
                    frame["ymin"] = iMinY;
                    frame["xmax"] = iMaxX;
                    frame["ymax"] = iMaxY;
                    lyr["size"] = dSize;
                    if (lyr.ContainsKey("offsetX"))
                         lyr["offsetX"] = 0;
                    if (lyr.ContainsKey("offsetY"))
                         lyr["offsetY"] = 0;
                    NewLayers.Add(lyr);
                    object[] markerGraphics = lyr["markerGraphics"] as object[];
                    Dictionary<string, object> markerGraphic = markerGraphics[0] as Dictionary<string, object>;
                    Dictionary<string, object> geometry = markerGraphic["geometry"] as Dictionary<string, object>;
                    geometry["rings"] = OffsetRings.ToArray();
                    bFirst = false;

               } // for
               return NewLayers.ToArray();

 

And here are the results:

 

Fixed symbol

Colors changed

 

Much better.  I can't guarantee that this code will work for every situation, but it seems to work fine for my own complex symbols.  And remember:  even if this bug is fixed at 100.6, hang onto this code, in case you need it again in the future!

Article contributed to and authored by Satish Sankaran, Max Payson, and Amy Niessen

 

Last week, the FOSS4G community landed in San Diego for its 2019 North American conference. Esri participated in the event as a silver sponsor and, given its proximity to Esri’s home base in Redlands, many employees were able to attend. FOSS4G is an amazing event for developers and GIS geeks interested in emerging technologies, so we were excited to share our projects and to engage with thought leaders in the geospatial community.

 

The event kicked off with lightning talks and networking events, which dovetailed into devoted presentations and workshops. While it had a developer focus, many of the presentations provided gentle introductions to hot topics – AI/ML, blockchain, microservices, containers, and serverless computing were all covered. Presenters discussed how these buzzwords can help scale storage, compute, and insight to solve increasingly complex challenges. Many presentations were also grounded by real-world projects, from disseminating 14 trillion USGS LIDAR points to achieving the UN’s sustainable development goals.

 

Colleagues from Esri shared their work at the event: Atma Mani presented "Let's Take the Machines House Hunting" using Python and Jupyter Notebooks; Thomas Maurer presented "LERC - Fast Compression of Images and Tensors", highlighting low-level libraries like LERC for raster compression; and Tamrat Belayneh presented "I3S - An Open Standard to Bring 3D to Web, Desktop, and Mobile Platforms", introducing the OGC community standard I3S spec. We also appreciated hearing Howard Butler acknowledge our contributions to the GDAL Coordinate System barn-raising effort in his presentation. As an important vendor in the GIS space, we are happy to support fundamental initiatives like these that help build core libraries used extensively by the community.

 

Atma Mani demonstrating the Python API to a user

 

While many attendees were familiar with Esri software, and some were even active users, the conversations at the Esri booth extended beyond the traditional ArcGIS workflows often discussed at Esri events. We enjoyed learning from others’ diverse perspectives and expertise, and it was reassuring to see community validation of the steps we are taking in 3D, in interoperable data science, and with our Developer program. Esri continues to push forward on its Open Platform vision – a vision that includes support for standards, interoperability, open data and open source. And we are constantly looking for better ways to engage with developers and support their work.

 

While large software businesses may have complex relationships with the open source world, Esri’s role in the GIS realm has always been community focused. We hope to continue to grow the community of GIS users and developers, and FOSS initiatives are an important part of that growth.

 

New sticker design at FOSS4G