Developers Blog

Esri Regular Contributor

Earlier this month, Christopher Zent from the ArcGIS Pro SDK team and Robert Burke, Esri Instructor, co-presented the GeoDev Webinar "ArcGIS Pro SDK for .NET: Extensibility Patterns." Throughout the presentation, attendees could submit questions. The questions below are the ones we were unable to get to during the webinar; those we did address, along with the presentation recording and slides, are linked below. Check out what you may have missed!

 

What are the most common extensibility patterns and customizations seen with the Pro SDK?

By far the most commonly used is the Pro add-in pattern. This is very similar in concept to the traditional ArcMap add-in pattern which Desktop developers have used since ArcGIS Desktop 10.0. The Pro add-in provides the range of capabilities which most developers and their end users are looking for, whereas the other patterns are more specialized.

 

Are there any samples for CoreHost updating databases?

The CoreHost community samples demonstrate the concepts of accessing and reading geodatabases. Review the Geodatabase ProConcepts and Snippets documents for examples.

 

Is the SDK backward-compatible – can I write an add-in with the 2.5-2.6 SDK and expect it to run in Pro 2.3?

ArcGIS Pro add-ins are only forward compatible with releases of ArcGIS Pro. For an add-in to run with ArcGIS Pro 2.3, it would need to have been compiled with ArcGIS Pro SDK 2.0, 2.1, 2.2 or 2.3. An add-in compiled with ArcGIS Pro SDK 2.3 can be used with ArcGIS Pro 2.3 and higher 2.x releases. Earlier releases of the Pro SDK can be found in the Assets section under each release at this page.

 

How are the ArcGIS Pro API extension files installed?

The Pro API core and extension assembly files are always installed as part of ArcGIS Pro. There is no separate install required.  Developers only need to install the ArcGIS Pro SDK (.VSIX) files to access the APIs. More information can be found on the documentation site here.

 

Do you support NuGet packages for Pro SDK?

Yes.  You can find out more about downloading and using the ArcGIS Pro Extensions NuGet in this guide document.

 

For those experienced in ArcObjects SDK development, how quickly could I become productive in using the ArcGIS Pro SDK?

Many developers coming from ArcObjects development find the Pro SDK to be a highly productive and streamlined development experience. There are many online resources available for getting started, including a set of easy-to-follow ArcGIS Tutorials for the Pro SDK, and documentation on Migrating to ArcGIS Pro and getting started with the Pro SDK. We also recommend the instructor-led training course.

 

How is real-time data handled with the Pro SDK, and are there samples?

The Realtime Stream Layers API allows for management of stream layers in ArcGIS Pro, with documentation here.  There is also a sample available here.

 

Can you build custom cylindrical objects on the map connected to real-time data?

Using the Geometry API it is possible to create multipatch features, and using the Realtime Stream Layers API you can connect to real time data in stream layers in Pro.

 

Can you add BigQuery (Google Cloud) data with lat/long into Pro using a plugin datasource?

See the ProConcepts Plugin Datasources document for information on the architecture and requirements for source data.

 

What are the language options for developing with the ArcGIS Pro SDK?

The language options for development with the ArcGIS Pro SDK are C# and VB.NET.

 

To view the recording, visit this page: ArcGIS Pro SDK for .NET: Extensibility Patterns 

To view the slides for this presentation, click here.

Have questions? Post them below!

New Contributor

The Ghost blogging platform offers a lean and minimalist experience, and that's why we love it. But unfortunately, it can sometimes be too lean for our requirements.

Web performance has become more important and relevant than ever, especially since Google started including it as a parameter in its SEO rankings. We make sure to optimize our websites as much as possible to offer the best possible user experience. This article will walk you through the steps you can take to optimize a Ghost blog's performance while keeping it lean and efficient.

When we started working on the appfleet blog we began with a few simple things:

Ghost responsive images

The featured image in a blog post has lots of parameters, which is a good thing. For example, you can set multiple sizes in package.json and have Ghost automatically resize them for a responsive experience for users on mobile devices or smaller screens.

"config": {
"posts_per_page": 10,
"image_sizes": {
"xxs": {
"width": 30
},
"xs": {
"width": 100
},
"s": {
"width": 300
},
"m": {
"width": 600
},
"l": {
"width": 900
},
"xl": {
"width": 1200
}
}
}‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍

And then all you have to do is update the theme's code:

<img class="feature-image"
srcset="{{img_url feature_image size="s"}} 300w,
{{img_url feature_image size="m"}} 600w,
{{img_url feature_image size="l"}} 900w,
{{img_url feature_image size="xl"}} 1200w"
sizes="800px"
src="{{img_url feature_image size="l"}}"
alt="{{title}}"
/>‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍

Common HTML tags for performance

Next, we take a few simple steps to optimize asset download time. That includes adding preconnect and preload headers in default.hbs:

<link rel="preconnect" href="https://fonts.gstatic.com/" crossorigin="anonymous">
<link rel="preconnect" href="https://cdn.jsdelivr.net/" crossorigin="anonymous">
<link rel="preconnect" href="https://widget.appfleet.com/" crossorigin="anonymous">

<link rel="preload" as="style" href="https://fonts.googleapis.com/css?family=Red+Hat+Display:400,500,700&display=swap" />
<link rel="preload" as="style" href="https://cdn.jsdelivr.net/npm/@fortawesome/fontawesome-free@5.13.0/css/all.min.css" />‍‍‍‍‍‍

As we load many files from jsDelivr to improve our performance, we instruct the browser to establish a connection with the domain as soon as possible. Same goes for Google Fonts and the sidebar widget that was custom coded.

More often than not, users coming from Google or some other source to a specific blog post will navigate to the homepage to check what else we have written. For that reason, on blog posts we also added prefetch and prerender tags for the main blog page.

That way the browser will asynchronously download and cache it, making the next most probable action of the user almost instant:

<link rel="prefetch" href="https://appfleet.com/blog">
<link rel="prerender" href="https://appfleet.com/blog">‍‍

Now, these optimizations definitely helped, but we still had a big problem. Our posts often have many screenshots and images in them, which impacts the page load time.

To solve this problem we took two steps: lazy loading the images and using a CDN. The issue is that Ghost doesn't allow you to modify or filter the contents of a post; all you can do is output the HTML.

The easiest solution to this is to use a dynamic content CDN like Cloudflare. Such a CDN will proxy the whole site; it won't cache the HTML, but it will cache all static content like images. Cloudflare also has an option to lazy load all images by injecting its own JavaScript.

But we didn't want to use Cloudflare in this case. And didn't feel like injecting third-party JS to lazy load the images either. So what did we do?

Nginx to the rescue!

Our blog is hosted on a DigitalOcean droplet created using its marketplace apps. It's basically an Ubuntu VM that comes pre-installed with Node.js, NPM, Nginx and Ghost.

Note that even if you don't use DigitalOcean, it is still recommended to run Nginx in front of Ghost's Node.js app.

This makes the solution pretty simple: we use Nginx to rewrite the HTML, enabling a CDN and lazy loading the images at the same time, without any extra JS.

For the CDN, you can also use the free CDN Google offers to all AMP projects. Not many people are aware that you can use it as a regular CDN without actually implementing AMP.

All you have to do is use this URL in front of your images:

https://appfleet-com.cdn.ampproject.org/i/s/appfleet.com/

Replace the domains with your own and change your <img> tags, and you are done. All images are now served through Google's CDN.

The best part is that the images are not only served but optimized as well. Additionally, it will even serve a WebP version of the image when possible, further improving the performance of your site.

As for lazy loading, you can use the native functionality of modern browsers. By adding loading="lazy" to all images, you instruct the browser to automatically lazy load them as they become visible to the user.
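For illustration, here is roughly what a rewritten tag should end up looking like once both changes are applied (the image path below is a made-up example):

<img loading="lazy"
     src="https://appfleet-com.cdn.ampproject.org/i/s/appfleet.com/content/images/example.png"
     alt="Example screenshot">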

And now the code itself to achieve this:

server {
    listen 80;

    server_name NAME;

    location ^~ /blog/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host "appfleet.com";
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:2368;
        proxy_redirect off;

        # disable compression
        proxy_set_header Accept-Encoding "";
        # rewrite the html
        sub_filter_once off;
        sub_filter_types text/html;
        sub_filter '<img src="https://appfleet.com' '<img loading="lazy" src="https://appfleet-com.cdn.ampproject.org/i/s/appfleet.com';
    }
}

First we disable compression between Node.js and Nginx; otherwise Nginx can't modify the HTML, since it would arrive in compressed form.

Next we use the sub_filter directive to rewrite the HTML. Ghost uses absolute image URLs, so we match the beginning of the URL as well. In one line we have enabled both the CDN and lazy loading.

Reload the config and you are good to go. Check our blog to see this in real time.

Disclaimer - While the above configuration should definitely help optimize Ghost blogs like Appfleet, non-ghost blogs like Javelynn might need a different or a blended approach.

About the author - Sudip is a Solution Architect with more than 15 years of working experience, and is the founder of Javelynn and FlyBHP. He likes sharing his knowledge by regularly writing for Hackernoon, DZone, Appfleet and many more. And while he is not doing that, he must be fishing or playing chess.

Esri Regular Contributor

At the end of May, we hosted a GeoDev Webinar based on a recent blog post from Kristian Ekenes, a Senior Product Engineer on the ArcGIS API for JavaScript team. He wrote a blog post on Mapping Large Datasets on the Web, and since we thought this would be a great topic to cover more in-depth, we decided to host the same topic as a webinar where attendees could ask questions. There were a lot of good questions that came in but were not addressed during the live Q&A portion of the webinar, so Kristian addresses them below.

Q: What parameters need to be set to enable dynamic tile service?

A: You don't need to do anything to enable dynamic feature tiles. You get them out of the box with hosted feature services in ArcGIS Online and with Enterprise feature services. See the attached matrix for more specific information on versioning.

Q: Can you show me where the quantization parameter is defined and for which type of services?

A: Quantization parameters are query parameters for the feature service. You can directly query data in quantized format using the JS API's Query object. See the doc here:  https://developers.arcgis.com/javascript/latest/api-reference/esri-tasks-support-Query.html#quantizationParameters. But you don't need to worry about that. The JS API takes care of querying the data in quantized form for you.

Q: I have an ArcGIS Online license and I uploaded a 4 GB data set as a CSV, which created a table feature layer that is then used to create a map. The thing is that I must update this data set every week. Is there some way, or an architecture, that lets me automate this? I already updated the data set with the ArcGIS REST API, but the changes aren't reflected in the maps or in the table feature layer.

A: Yes. You can automatically apply edits to this feature service using the ArcGIS API for Python, though this isn't my area of expertise; I would reach out to someone on GeoNet in a Python discussion for a more specific answer. The thing to remember, though, is that once you update the data, the old feature tiles are automatically replaced with tiles containing the new/updated data as soon as a new query for that data is made. So you don't have to worry about that; the backend takes care of it for you!

Q: I am using one feature service in different maps. How can I filter data based on the map?

A: You will want to contact Esri Technical Support for this.

Q: When publishing a feature service to ArcGIS Online, will this eliminate the restriction on how many features you can render?

A: There isn't a limit to the number of features you can publish. The limitation you may encounter is with storage. The number of features you can render is dependent on the client loading the data, the network speed/latency, number of attributes required, etc.

Q: Will example code be available in GitHub?

A: Example code is here: https://github.com/ekenes/conferences/tree/master/ds-2020/large-data and here: https://github.com/ekenes/conferences/tree/master/ds-2020/plenary.

Q: Do we have to set the scale in the feature layer definition?

A: One way to avoid loading too many features unnecessarily is by progressively filtering out data based on view scale. This isn't the only way, but just one method without having to load the layer multiple times.
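This isn't code from the webinar, but a minimal sketch of the idea, assuming a FeatureLayer named parcelsLayer and a made-up area field, might look like this:

// Swap the layer's definitionExpression as the user zooms in or out
view.watch("scale", (scale) => {
  parcelsLayer.definitionExpression =
    scale > 500000
      ? "Shape_Area > 100000"   // zoomed out: only show large parcels
      : null;                   // zoomed in: no filter
});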

Q: I am not familiar with CDN cache. How can I optimize the performance by using this cache?

A: You don't need to do anything to take advantage of this. It just applies to public feature services. If you have a public feature service hosted on ArcGIS Online, then you automatically benefit from the CDN cache.

Q: Is there a dataset where we can get access to US population or zip code density? I could not seem to find any.

A: The Living Atlas of the World has zip code layers you can freely use as well as up to date population estimates through the ACS. I highly recommend searching there. https://livingatlas.arcgis.com/en/browse/#d=2&q=zip%20code

Q: Why are you cloning the renderer for Feature Layer?

A: I'm cloning the renderer so when I reset the renderer the layer will detect changes and re-render the data. We don't watch all renderer and symbol properties for changes for performance reasons. Therefore you must clone the renderer, make your modifications, then set it back on the layer. This is your way of deliberately telling the layer  a change has been made and it needs to redraw the features.
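The pattern described above looks roughly like this (a minimal sketch assuming a SimpleRenderer with a marker symbol; the property being changed is just an example):

const renderer = layer.renderer.clone();  // work on a copy
renderer.symbol.size = 12;                // hypothetical modification
layer.renderer = renderer;                // reassign so the layer detects the change and redraws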

Q: Can you apply these visualization techniques in ArcMap or ArcGIS Pro?

A: Not all of these techniques can be applied in our desktop software. You can set the same renderer types…graduated symbols, color visual variables, etc. But you cannot update the renderer based on another attribute like time or depth. The time and depth UI sliders in Pro and ArcMap perform filters of the data. I'm rendering all features in the JS API and updating the renderer rather than performing a data filter. This allows me to avoid loading duplicate geometries just to show different data values. So no, you can't use all of these techniques in ArcGIS Pro/ArcMap. Also, you cannot set up the size range by scale in ArcGIS Pro.

Q: What about applying this visualization technique in ArcMap or ArcGIS Pro with regard to line thickness?

A: Regarding the scale-dependent line thickness in the pipes example. You can configure that in Pro, but it's a different approach than in the JS API. In Pro, you set a reference scale and a size that will render the lines at that scale with a specific size. When you zoom in or out, the line width will adjust linearly based on the difference between the map scale and the reference scale. You can set more stops in the JS API to do it.

Q: Can Arcade expressions and other things be leveraged in a context wherein everything is, by intent or design, cached on the client in memory or with the CDN -- specifically avoiding everything except the initial call to the originating ArcGIS service as a REST service?

A: Arcade can execute against client-side features and you can query your data client-side, thus avoiding another round trip to the server. You first have to ensure that you actually have all the data available on the client though.

Q: Could you provide the code for these examples, please?

A: Example code is here: https://github.com/ekenes/conferences/tree/master/ds-2020/large-data and here: https://github.com/ekenes/conferences/tree/master/ds-2020/plenary

Q: What is the difference between the filter scale in the API and the display scale set in an MXD that is then published to a feature service?

A: Hopefully I get this right…display scale in ArcGIS Pro/ArcMap is similar to visibility scale in the JS API (layer.minScale/layer.maxScale). The visibility scale determines when a layer will be queried and displayed based on the map/view scale. Filtering by scale still queries the data regardless of whether there is a visibility scale (if there is one, it is still honored); you're just being more deliberate about filtering out data, such as smaller or less meaningful features that aren't needed at that scale. So you still see data, just not all the features, in the approach where you filter based on scale.

Q: How do you normally explain clustering to the lay person?

A: Clustering is a method of reducing the number of features in view by aggregating features into clusters based on a predefined cluster radius. Larger cluster graphics indicate areas that have a higher density of features. Smaller cluster graphics indicate areas with fewer features.

Q: Is clustering available in Portal as well as ArcGIS Online?

A: Yes

Q: Could you provide the code and the links for these examples, please?

A: Example code is here: https://github.com/ekenes/conferences/tree/master/ds-2020/large-data and here: https://github.com/ekenes/conferences/tree/master/ds-2020/plenary

Q: How do we deal with time-based data? For example, I am dealing with vehicle speed data that comes in at 70k lines per minute, and I want to show an animation that will last for one hour.

A: I'm not sure I understand the case. Animations can be tricky…perhaps a question to post on the ArcGIS API for JavaScript GeoNet community with more specific details?

Q: If you need to update the layer, how do you update it?

A: Just apply edits or set the properties. 

Q: Do you offer tailor made training on web app development? I find the One Ocean app cool and would like to develop one for my region.

A: No. But I regularly contribute to the ArcGIS Blog, where I discuss details of some of these projects, like One Ocean. You can read it here: https://www.esri.com/arcgis-blog/products/js-api-arcgis/mapping/mapping-large-datasets-on-the-web/ and other JS API blogs can be searched on this page: https://www.esri.com/arcgis-blog/?s=#&products=js-api-arcgis.

Q: Thank you for the examples of using queries with a Feature Tile Cache. Can you also use  Filter Widget, or reporting tool widgets with a Feature Tile Cache?

A: Any time you filter your layer, the data is requested in tile format, which means it is automatically cached for you. So you don't have to worry about configuring it. As long as the JS API recognizes the query as a repeatable one, you leverage the feature tile cache.

Q: Can you use these techniques with rasters or grids?

A: Not at the moment. This only applies to vector data.

Q: Is there a GitHub link for the EugeneTrees - Cluster example?

A: Yes, you can find it here: Map Viewer 

Q: Is it best to limit the fields (attributes needed) in ArcMap or ArcGIS Pro before you publish or elsewhere (i.e. in the web map configuration)?

A: Not necessarily. You can also limit the fields using a hosted feature layer view, or in the outFields of the layer in the JS app. If you have a long list of fields, though, the query won't be cacheable (query strings must be less than 2048 characters), so in those situations it is best to limit fields, whether from Pro or a hosted layer view.
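For example, limiting fields in the app itself is a one-line setting on the layer (the service URL and field names below are placeholders):

const layer = new FeatureLayer({
  url: "https://services.arcgis.com/xyz/arcgis/rest/services/Assets/FeatureServer/0",  // hypothetical service
  outFields: ["STATUS", "INSTALL_DATE"]  // request only the attributes the app needs
});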

Q: Some of the features you presented (such as adjusting the size of a line by scale, or definition queries by scale) look really interesting, but I am developing web apps in web app builder. Can you use those tools in a GUI driven environment? Or would it have to be within the code?

A: When you style a layer using the new Map Viewer Beta, you take advantage of the scale-driven symbology by default. But the renderer must be authored in the viewer; that means resetting it there even if you already have one saved to the layer, or you can simply load the layer in the new Map Viewer Beta and check the box. Read this blog for more information: https://www.esri.com/arcgis-blog/products/arcgis-online/mapping/auto-size-by-scale-now-available-in-map-viewer-beta/ Regarding the scale-driven definition queries, you have to do that in code; there is no GUI for it.

Q: Hello, how can I access the JavaScript backend code which you have reviewed?

A: Example code is here: https://github.com/ekenes/conferences/tree/master/ds-2020/large-data and here: https://github.com/ekenes/conferences/tree/master/ds-2020/plenary

Q: How much can you improve performance of a dataset by adjusting text lengths of an attribute table? Do longer lengths greatly reduce draw speed in a feature service?

A: Longer lengths will reduce speed. But it may only matter if you have a lot of features and/or a lot of fields you are loading. We're continually improving draw times though, so it may matter less and less. You should see a significant improvement here later this year.

Q: Is there any documentation on geometry thinning?

A: You can read more about it in this blog - https://www.esri.com/arcgis-blog/products/js-api-arcgis/mapping/mapping-large-datasets-on-the-web/#geometry-thinning - But you can also read about it in Pro documentation under Select Layer By Attribute. Fundamentally, what I mean by "geometry thinning" is filtering out unnecessary features based on their geometry...whether they are inside or outside an area of interest...or in this case...whether there are too many points in a grid (e.g. stacked on top of one another, or even grid resolution).

Q: How do I cluster the thousands of POIs from different categories for better performance?

A: To cluster by category, you need to set up different layers with definitionExpressions based on each category…I'm not sure you get better performance, though. You should get decent performance when clustering a few thousand features (even in the hundreds of thousands). But if you have far more than that, you'll need to enable clustering on your service, which isn't fully supported in the JS API yet. Though it's coming soon...
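A rough sketch of that per-category setup (the service URL, field name, and cluster radius are made up) might look like this:

// One clustered layer per category, each filtered with a definitionExpression
const categories = ["Restaurants", "Hotels", "Museums"];
const layers = categories.map((category) => new FeatureLayer({
  url: poiServiceUrl,  // hypothetical POI feature service
  definitionExpression: `CATEGORY = '${category}'`,
  featureReduction: { type: "cluster", clusterRadius: "80px" }
}));
map.addMany(layers);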

Q: When will snapping be available in 4.x?

A: It is planned, but there is no specific date set.

Q: If I set a scale range in an MXD for a feature layer, but I want to see all data in the attribute table, I receive slow results. Why?

A: This doesn't appear related to the JS API. I would contact Esri Technical Support.

Q: Would you happen to have any advice for improving dataset performance within ArcGIS Dashboards?

A: I would ask that question on GeoNet in the ArcGIS Dashboards discussion. You'll get someone who works on that product that will provide you with a better answer than I can give.

Q: How frequently is the data cached? Can we change the frequency?

A: You can change the frequency using the maxAge parameter in the layer's settings in ArcGIS Online. Go to the layer item. Click the "Settings" tab. Scroll down to "Cache control". There you can control how long clients will have to wait before seeing an updated cache. That applies to editable layers where the features/attributes may change. Once the tiles are cached, they will stay that way until an edit is made.

Q: Are there any plans to move Web AppBuilder for Developers to work with the 4x API?

A: Yes. You'll need to contact the Web AppBuilder team though. You can reach them on GeoNet.

Q: How do I publish feature tile services instead of feature services? And it sounds like feature tile service is better than feature service. Should I use feature tile service all the time? What's the advantage of feature service that feature tile service doesn't have?

A: Feature services automatically query data as dynamic feature tiles. You don't have to do anything to take advantage of this functionality. It's all happening behind the scenes for you.

You can find a recording of this webinar on our GeoDev Webinar playlist on YouTube. If you would like to download the slides from the webinar, you can do so here: https://github.com/ekenes/conferences/raw/master/ds-2020/large-data/geodev-slides.pptx

We hope you enjoyed this installment of the GeoDev Webinars! You can find all of our GeoDev Webinars on go.esri.com/geodev. Until next time...

Regular Contributor

I maintain a number of automated map products in ArcMap which involve not just spatial queries and geometric operations, but also fine-grained manipulation of layers, including renderers and symbology. Let's face it: I never could get the arcpy.mapping module or early versions of ArcGIS Pro to cut the mustard. Later versions of the ArcGIS Pro SDK introduced far greater capability to manipulate map layers and layout elements. But then I asked myself: should users be running Pro at all to create those plots?

At Pro 2.4.3, I started taking a closer look at arcpy.mp, wondering if I could create a geoprocessing tool and publish it to a web tool for consumption by a custom Web AppBuilder widget in Portal. I am happy to say that an initial proof-of-concept experiment has been a success.

Before I go into that, first I would like to point out some of the features of arcpy.mp that made me decide that it has finally reached the level of functionality that I need:

  • Load and modify symbols
  • Change and manipulate renderers
  • Make layout elements visible or invisible
  • Make modifications at the CIM level

One thing arcpy.mp doesn't do yet is create new layout elements, but for my purposes I can recycle existing ones. A good approach is to have a number of elements present for various tasks in a layout, and make them visible or invisible on demand for different situations.

# Show or hide legend
legend = self.__layout.listElements("LEGEND_ELEMENT")[0]
if self.__bOverview:
    if self.__bMainline:
        legend.visible = True
    else:
        legend.visible = False
else:
    legend.visible = True

The ability to manipulate legend elements is still pretty limited, but I haven't run into any deal-killers yet. If you really hit a wall, one powerful thing you can now do is dive into the layout's CIM (Cartographic Information Model) and make changes directly to that.  Here's an example of modifying a legend element in a layout via the CIM:

aprx = arcpy.mp.ArcGISProject("c:/apps/Maps/LeakSurvey/LeakSurvey.aprx")
layout = aprx.listLayouts("Leak Survey Report Maps Template")[0]
cim = layout.getDefinition("V2")
legend = None
for e in cim.elements:
    if type(e) == arcpy.cim.CIMLegend:
        legend = e
        break
legend.columns = 2
legend.makeColumnsSameWidth = True
layout.setDefinition(cim)

While the CIM spec is formally documented on GitHub, a simpler way to explore the CIM is to check out the ArcGIS Pro API Reference; all objects and properties in the ArcGIS.Core.CIM namespace should be mirrored in Python.

Part One: Creating a Python Toolbox

LeakSurvey.pyt is in the sample code attached to this post. While my initial draft was focused on successfully generating a PDF file, when the time came to test the tool as a service, additional factors came into play:

  • Getting the service to publish successfully at all
  • Returning a usable link to the resulting PDF file
  • Providing a source for valid input parameters

Sharing a geoprocessing tool as a package or service is one of the least intuitive, most trippy experiences I've ever had with any Esri product.  The rationale seems to be that you are not publishing a tool, but a vignette. You can't simply put out the tool and say, here it is: you must publish a geoprocessing result. As part of that concept, any resolvable references will cause ArcGIS to attempt to bundle them, or to match them to a registered data store. This is a great way to get the publication process to crash, or lock the published service into Groundhog Day.

So, one key to successfully publishing a web tool is to provide a parameter that:

  1. Gives the tool a link to resolve data and aprx references, and
  2. When left blank, returns a placeholder result that you can use to publish the service.

LeakSurvey.pyt does just that. Here's the definition for the "Project Folder" parameter:

param0 = arcpy.Parameter(
    displayName = "Project Folder",
    name = "project_folder",
    datatype = "GPString",
    parameterType = "Optional",
    direction = "Input")

When left blank, the tool simply returns "No results" without throwing an error. Otherwise, it points to a shared folder that contains the ArcGIS Pro project and some enterprise GDB connection files.
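In the tool's execute method, that check might look something like this (a sketch only; the parameter indexes are illustrative, not taken from LeakSurvey.pyt):

def execute(self, parameters, messages):
    sProjectFolder = parameters[0].valueAsText
    if sProjectFolder in (None, "", "#"):
        # Placeholder result, used when publishing the service
        parameters[7].value = "No results"
        return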

Returning a usable link to an output file involves a bit of a trick.  Consider the definition of the "Result" parameter:

param7 = arcpy.Parameter(
    displayName = "Result",
    name = "result",
    datatype = "GPString",
    parameterType = "Derived",
    direction = "Output")

The tool itself creates a path to the output file as follows:

sOutName = self.__sSurveyType + "_" + self.__sSurveyName + "_" + self.__sMapsheet + "_"
sOutName += str(uuid.uuid4())
sOutName += ".pdf"
sOutName = sOutName.replace(" ", "_")
sOutput = os.path.join(arcpy.env.scratchFolder, sOutName)

If that value is sent to the "Result" parameter, what the user will see is the local file path on the server. In order for the service to return a usable url, a return parameter needs to be defined as follows:

param8 = arcpy.Parameter(
    displayName = "Output PDF",
    name = "output_pdf",
    datatype = "DEFile",
    parameterType = "Derived",
    direction = "Output")
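At the end of a successful run, the tool can then hand the local path to that derived parameter, and the server converts it into a downloadable URL for the client (again, the index is illustrative):

# parameters[8] is the derived "Output PDF" (DEFile) parameter
parameters[8].value = sOutput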

Traditional tool validation code is somewhat funky when working with a web tool, and I dispense with it. Rather, the tool returns a list of valid values depending on the parameters provided, keeping in mind that I want this service to be consumed by a web app. For example, if you provide the tool with a survey type and leave the survey name blank, it will return a list of the surveys that exist. If you provide a survey type and name and leave the map sheets parameter blank, it will return a list of the map sheets for that survey:

if self.__sSurveyName == "" or self.__sSurveyName == "#" or self.__sSurveyName == None:
    # Return list of surveys for type
    return self.__GetSurveysForType()
self.__bMainline = self.__sSurveyType == "MAINLINE" or self.__sSurveyType == "TRANSMISSION"
self.__Message("Querying map sheets...")
bResult = self.__GetMapsheetsForSurvey()
if not bResult:
    return "No leak survey features."
if self.__sMapsheets == None or self.__sMapsheets == "#":
    # Return list of map sheets for survey
    sResult = "MAPSHEETS|OVERVIEW"
    for sName in self.__MapSheetNames:
        sResult += "\t" + sName
    return sResult

So how's the performance? Not incredibly great compared to doing the same thing in ArcObjects, but there are things I can do to improve script performance. For example, because the tool must re-query the survey and its map sheets every time it is run, there is an option to specify multiple sheets, which are combined into one PDF and returned to the calling application. The tool also supports an "ALL" map sheets option, in order to bypass the need to return a list of map sheets for the survey.

Nonetheless, arcpy can suffer in comparison to ArcObjects in various tasks [see this post for some revealing comparisons]. On the other hand, the advantages of using arcpy.mp can outweigh the disadvantages when it comes to automating map production.

After testing the tool, it's a simple matter to create an empty result and publish it to Portal:

For this example, I also enable messages to be returned:

Once in Portal, it's ready to use:

Part Two: Creating and Publishing a Custom Web AppBuilder Widget

As I've mentioned in another post, one reason I like developing in Visual Studio is that I can create and use project templates. I've attached my current Web AppBuilder custom widget template to this post.

I've also attached the code for the widget itself. Because the widget makes multiple calls to the web tool, it needs a way to sort through the returns. In this example, the tool prefixes "SURVEYS|" when returning a list of surveys, and "MAPSHEETS|" when returning a list of map sheets. When a PDF is successfully generated, the "Result" parameter contains "Success."

private onJobComplete(evt: any): void {
    let info: JobInfo = evt.jobInfo;
    this._sJobId = info.jobId;
    this._gp.getResultData(info.jobId, "result");
}

private onGetResultDataComplete(evt: any): void {
    let val: ParameterValue = evt.result;
    let sName: string = val.paramName;
    if (sName === "output_pdf") {
        this.status("Done.");
        window.open(val.value.url);
        this._btnGenerate.disabled = false;
        return;
    }
    let sVal: string = val.value;
    if (this.processSurveyNames(sVal))
        return;
    if (this.processMapSheets(sVal))
        return;
    if (this.processPDF(sVal))
        return;
    this.status(sVal);
}

private processSurveyNames(sVal: string): boolean {
    if (sVal.indexOf("SURVEYS|") !== 0)
        return false;
    ...

private processMapSheets(sVal: string): boolean {
    if (sVal.indexOf("MAPSHEETS|") !== 0)
        return false;
    ...

private processPDF(sVal: string): boolean {
    if (sVal !== "Success.")
        return false;
    ...

The widget can be tested and debugged using Web AppBuilder for ArcGIS (Developer Edition):

Publishing widgets to Portal can be tricky: our production Portal sits in a DMZ, and https calls to another server behind the firewall will fail, so widgets must reside on the Portal server. And even though our "Q" Portal sits behind the firewall and can see other servers, it's on a different domain. Thus, if I choose to host "Q" widgets on a different server, I need to configure CORS.  Here's an example of web.config:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<cors enabled="true" failUnlistedOrigins="true">
<add origin="*" />
<add origin="https://*.uns.com"
allowCredentials="true"
maxAge="120">

<allowHeaders allowAllRequestedHeaders="true">
<add header="header1" />
<add header="header2" />
</allowHeaders>
<allowMethods>
<add method="DELETE" />
</allowMethods>
<exposeHeaders>
<add header="header1" />
<add header="header2" />
</exposeHeaders>
</add>
<add origin="https://*.unisource.corp"
allowCredentials="true"
maxAge="120">

<allowHeaders allowAllRequestedHeaders="true">
<add header="header1" />
<add header="header2" />
</allowHeaders>
<allowMethods>
<add method="DELETE" />
</allowMethods>
<exposeHeaders>
<add header="header1" />
<add header="header2" />
</exposeHeaders>
</add>
<add origin="http://*" allowed="false" />
</cors>
</system.webServer>
</configuration>
‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍

The file sits in a virtual web folder called "Widgets" with any widget folders to publish placed under that. When publishing a widget, initially there may be a CORS error:

but reloading the page and trying again should work.

Once the widget is published to Portal, it can be added to a new or existing application, and it's ready to use:

Because generating plot files can be a lengthy process, it may not be useful for the widget to wait for completion. Were I to put this into production, I would probably modify the tool to send plot files to a shared folder (or even a document management service) and send an email notification when it completes or fails.

Regular Contributor

[This was to be my user presentation at the 2020 DevSummit, which was cancelled.]

Chrome extensions are a fun way to implement functionality that is not normally available to a web client app. Extensions can make cross-domain requests to gather data from a variety of sources, and at the same time can filter out unwanted content. The Chrome API provides a rich suite of tools for focused application development.

Obviously, any app that is implemented as a Chrome extension will only run in Chrome. Also, Chrome extensions must be distributed through Chrome Web Store, but that's not necessarily a bad thing, as I will show later.

Here are some online resources:

Chrome extensions can contain background scripts, content scripts, a UI for saved user options, and so on. The manifest file is what ties it all together: if you've developed custom widgets for Web AppBuilder, you should already be familiar with the concept. Here's an example of manifest.json:

{
  "name": "Simple Map Example",
  "version": "1.0",
  "description": "Build an Extension with TypeScript and the ArcGIS API for JavaScript 4.x!",
  "manifest_version": 2,
  "icons": { "128": "images/chrome32.png" },
  "browser_action": {
    "default_popup": "popup.html",
    "default_icon": { "128": "images/chrome32.png" }
  },
  "options_ui": {
    "page": "options.html",
    "open_in_tab": false
  },
  "permissions": [ "storage" ],
  "content_security_policy": "script-src 'self' https://js.arcgis.com blob:; object-src 'self'"
}

One thing that's worth pointing out is the "content_security_policy" entry. This will be different depending on whether you use JSAPI 3.x or 4.x. See this post for more information.

Let's use a Visual Studio 2017 project template (attached) to create a simple extension. Because the template uses TypeScript, there are some prerequisites; see this post for more information.

First, let's create a blank solution called DevSummitDemo:

Next, add a new project using the ArcGIS4xChromeExtensionTemplate:

Here is the structure of the resulting project:

Building the project compiles the TypeScript source into corresponding JS files.  Extensions can be tested and debugged using Chrome's "Load unpacked" function:

Note that Chrome DevTools will not load TypeScript source maps from within an extension. That's normally not an issue since you can debug the JS files directly. There is a way to debug the TypeScript source, but it involves some extra work. First, set up IIS express to serve up the project folder:

Then, edit the JS files to point to the localhost url:

Now, you can set a breakpoint in a TS file and it will be hit:

The disadvantage of this approach is that you must re-edit the JS files every time you recompile them.

The next demo involves functionality that is available in JSAPI 3.x, but not yet at 4.x. Namely, the ability to grab an image resource and display it as a layer. Here is a web page that displays the latest weather radar imagery:

The latest image is a fixed url, so nothing special needs to be done to reference it. Wouldn't it be cool, however, to display an animated loop of the 10 latest images? But there's a problem.

Let's add the LocalRadarLoop demo project code (attached) to the VS2017 solution and look at pageHelper.ts:

export class myApp {
    public static readonly isExtension: boolean = false;
    public static readonly latestOnly: boolean = true;
}

When isExtension is false, and latestOnly is true, the app behaves like the web page previously shown.

Note also this section of extension-only code that must be commented out for the app to run as a normal web page:

// **** BEGIN Extension-only block ****
/*
if (myApp.isExtension) {
    let sDefaultCode: string = defaultLocalCode;
    chrome.storage.local.get({ localRadarCode: sDefaultCode },
        (items: any) => {
            let sCode: string = items.localRadarCode;
            let sel: HTMLSelectElement = <HTMLSelectElement>document.getElementById("localRadarCode");
            sel.value = sCode;
            this.setRadar();
        });
    return;
}
*/
// **** END ****

Because the latest set of radar images do not have fixed names, it is necessary to obtain a directory listing to find out what they are. If you set latestOnly to false and run the app, however, you will run into the dreaded CORS policy error:

This is where the power of Chrome extensions comes into play. Set isExtension to true, and uncomment the extension-only code (which enables a saved user option), and load the app as an extension. Now you get the desired animation loop!

Note the relevant line in manifest.json which enables the XMLHttpRequest to run without a CORS error:
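The screenshot isn't reproduced here, but in a manifest v2 extension it is a host entry added to the "permissions" array, along these lines (the radar host below is illustrative, not the extension's actual entry):

"permissions": [ "storage", "https://radar.weather.gov/*" ]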

Now, as I pointed out earlier, Chrome extensions are distributed through Chrome Web Store:

There are some advantages to this. For example, updates are automatically distributed to users. You can also create an "invisible" store entry, or publish only to testers. I find that last feature useful for distributing an extension that I created for my personal use only. Other distribution options do exist, which you can read about at this link.

In conclusion, Chrome extensions enable pure client-side functionality that otherwise would not be possible without the aid of web services. Chrome Web Store provides a convenient way to distribute extensions and updates, with public and private options.

The Local Radar Loop extension is available free at Chrome Web Store.

Regular Contributor

Being a user of Microsoft Visual Studio since version 6.0, I prefer it as a one-stop shop for as many kinds of development as possible, including C++, VB, C#, Python, and HTML5/TypeScript projects. One feature of VS that I really like is the ability to create project templates. VS2015 included a project template for TypeScript, but it was ugly as sin. VS2017 dropped it, but failed to provide a viable alternative; being lazy, I continued to use the same version available online:

This must stop! Sometimes, you just have to get your hands dirty, so I decided to create my own project template from scratch. Fortunately, the TypeScript documentation has sections on Integrating with Build Tools, and Compiler Options in MSBuild, which provided valuable assistance. Also, see the MSBuild documentation and How to: Create project templates for more information.

Prerequisites:

The TypeScript website has download links to install the latest version for a number of IDEs, including VS2017. In addition, since the TypeScript folks now prefer you to use npm to install typings, you should install Node.


Warning! If you are behind a corporate firewall, you may run into this error when you try to use npm to install typings:

   npm ERR! code UNABLE_TO_GET_ISSUER_CERT_LOCALLY

If you see that, try setting this configuration at the command prompt:

   npm config set strict-ssl false

Create a generic TypeScript project:

While, formally, the best approach would be to create a new project type, my lazy approach recycles the C# project type and redefines the build targets (but there is a disadvantage – see below). The first step is to create a blank solution in VS2017 named “TypeScriptProjectTemplates.” In Explorer or the Command Prompt, navigate to the solution folder and create a subfolder named “BasicTypeScriptTemplate.” In that folder, create a file named “BasicTypeScriptTemplate.csproj,” containing the following text:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props')" />
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<OutputType>Library</OutputType>
<StartupObject />
<OutputPath>.\</OutputPath>
<IntermediateOutputPath>vs\</IntermediateOutputPath>
</PropertyGroup>
<PropertyGroup>
<VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">12.0</VisualStudioVersion>
</PropertyGroup>
<PropertyGroup>
<TypeScriptToolsVersion>Latest</TypeScriptToolsVersion>
<TypeScriptModuleKind>amd</TypeScriptModuleKind>
<TypeScriptNoImplicitAny>true</TypeScriptNoImplicitAny>
<TypeScriptESModuleInterop>true</TypeScriptESModuleInterop>
<TypeScriptJSXEmit>react</TypeScriptJSXEmit>
<TypeScriptJSXFactory>tsx</TypeScriptJSXFactory>
<TypeScriptTarget>es5</TypeScriptTarget>
<TypeScriptExperimentalDecorators>true</TypeScriptExperimentalDecorators>
<TypeScriptPreserveConstEnums>true</TypeScriptPreserveConstEnums>
<TypeScriptSuppressImplicitAnyIndexErrors>true</TypeScriptSuppressImplicitAnyIndexErrors>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'Debug'">
<TypeScriptRemoveComments>false</TypeScriptRemoveComments>
<TypeScriptSourceMap>true</TypeScriptSourceMap>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
<TypeScriptRemoveComments>true</TypeScriptRemoveComments>
<TypeScriptSourceMap>false</TypeScriptSourceMap>
</PropertyGroup>
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets')" />
<Target Name="Build" DependsOnTargets="CompileTypeScript">
</Target>
<Target Name="Rebuild" DependsOnTargets="CompileTypeScript">
</Target>
<Target Name="Clean" Condition="Exists('$(TSDefaultOutputLog)')">
<ItemGroup>
<TSOutputLogsToDelete Include="$(TSDefaultOutputLog)" />
</ItemGroup>
<ReadLinesFromFile File="@(TSOutputLogsToDelete)">
<Output TaskParameter="Lines" ItemName="TSCompilerOutput" />
</ReadLinesFromFile>
<Delete Files="@(TSCompilerOutput)" Condition=" '@(TSCompilerOutput)' != '' " />
<Delete Files="@(TSOutputLogsToDelete)" />
<!-- <RemoveDir Directories="$(IntermediateOutputPath)" /> -->
</Target>
</Project>
‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍

In VS2017, add the existing project to the solution. Within the project, create an “app” subfolder, and add a new TypeScript file named “main.ts,” containing the following text:

class Student {
    fullName: string;
    constructor(public firstName: string, public middleInitial: string, public lastName: string) {
        this.fullName = firstName + " " + middleInitial + " " + lastName;
    }
}

interface Person {
    firstName: string;
    lastName: string;
}

function greeter(person: Person) {
    return "Hello, " + person.firstName + " " + person.lastName;
}

let user = new Student("Jane", "M.", "User");

document.body.textContent = greeter(user);

In the project folder, add an HTML Page file named “index.html,” containing the following text:

<!DOCTYPE html>
<html>
<head>
    <title>TypeScript Greeter</title>
</head>
<body>
    <script src="./app/main.js"></script>
</body>
</html>

At this point, the project should look like this in Solution Explorer:

Building or rebuilding the project will generate TypeScript compiler output, and a file named “Tsc.out” will be created in a subfolder named “vs”. The “Tsc.out” file defines the compiler output to delete when cleaning the project; cleaning the project will also delete that file. [Note that if you build and clean “Release” without cleaning “Debug” beforehand, the source map files will still remain.]

Export the project template:

At this point, if you export the project to a template, you have a generic TypeScript project template. However, it will be displayed under the “Visual C#” category. If you want it to appear under the “TypeScript” category, there are additional steps to take. First, unzip the template to a new folder and edit “MyTemplate.vstemplate:”

Change the “ProjectType” value from “CSharp” to “TypeScript”. Zip the contents of the folder to a new zip file, and the template is ready to use.

Building a JSAPI project template:

Now that we have a basic TypeScript project template, the next step is to use it to create a template for a simple JavaScript API project.


First, create a new project, called “ArcGIS4xTypeScriptTemplate,” using the template created above. Open a Command Prompt, navigate to the project folder, and enter the following commands:

   npm init --yes
   npm install --save @types/arcgis-js-api

Back in Solution Explorer, select the project and click the “Show All Files” button. Select the “node_modules” folder, “package.json,” and “package-lock.json,” right-click, and select “Include In Project.” Finally, replace the contents of “main.ts” and “index.html” with the text given at the JSAPI TypeScript walk-through. Your project should now look like this:

You may notice that the “import” statements in “main.ts” are marked as errors, even though the esModuleInterop flag is set in the project:

This appears to be a defect in the Visual Studio extension. The project will build without errors, and the resulting page will load correctly. If it annoys you, you can always revert to the older AMD style statements:

At this point, you’re ready to export the template.

On a final note:

The JavaScript API is updated frequently, which means that you may also want to keep your project templates up to date. Rather than updating the source project and repeating the export steps, you might want to consider keeping the unzipped template folders in a standard location, and updating those directly. Then, all you have to do is zip them to create the updated template.

Regular Contributor

Recently, I found myself painted into a corner. Some time ago, I'd created custom tile caches for use with Runtime .NET which had only one scale level defined. They worked just fine in 10.2.7, but in preparing to upgrade to 100.x, I discovered that they caused 100.6 to hang. The workaround was simple enough: define additional scale levels, even if they aren't populated. However, the task of modifying the caches for nearly 150 users proved so daunting that I decided to let the app itself make the modification. Updates to the app are automatically detected and downloaded, which provides a simpler mechanism than deploying a script to everyone's machine. It's not a perceptible performance hit as it is, and later on, as part of another update, I can simply deactivate it. So here's the code:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Xml.Linq;

namespace NavAddin
{

    /*
     * Runtime 100.6 BUG: Tile caches that have only one scale level defined will hang up on loading.
     * WORKAROUND: Define additional scale levels
     * [Assumes that 100 < scale level < 24000]
     */

    public static class TileCacheHelper
    {

        public const string L0Scale = "24000";
        public const string L0Resolution = "20.833375000083333";
        public const string L2Scale = "100";
        public const string L2Resolution = "0.086805729167013887";

        public static bool CheckTileCache(string sPath)
        {

            // Check if tile cache (i.e. a folder)

            if (!Directory.Exists(sPath))
                return true; // Not a tile cache

            // Check if one scale level defined

            string sConfigPath = Path.Combine(sPath, "conf.xml");
            StreamReader sr = new StreamReader(sConfigPath);
            XDocument xDoc = XDocument.Load(sr);
            sr.Close();
            XElement xRoot = xDoc.Root;
            XElement xTCInfo = xRoot.Element("TileCacheInfo");
            XElement xLODInfos = xTCInfo.Element("LODInfos");
            int iLevelCount = xLODInfos.Elements("LODInfo").Count();
            if (iLevelCount > 1)
                return true; // Not a problem
            if (iLevelCount < 1)
                return false; // This should never happen?

            // Check if scale level is between 100 (L2) and 24000 (L0)

            XElement xLODInfo, xLevelID, xScale, xResolution;

            xLODInfo = xLODInfos.Element("LODInfo");
            xScale = xLODInfo.Element("Scale");
            string sScale = xScale.Value;
            double dScale = Convert.ToDouble(sScale);
            double dL0Scale = Convert.ToDouble(L0Scale);
            double dL2Scale = Convert.ToDouble(L2Scale);
            if (dScale >= dL0Scale)
                return false;
            if (dScale <= dL2Scale)
                return false;

            // Redefine scale levels

            xLevelID = xLODInfo.Element("LevelID");
            xLevelID.Value = "1";
            XElement xLOD0 = new XElement(xLODInfo);
            xLevelID = xLOD0.Element("LevelID");
            xLevelID.Value = "0";
            xScale = xLOD0.Element("Scale");
            xScale.Value = L0Scale;
            xResolution = xLOD0.Element("Resolution");
            xResolution.Value = L0Resolution;
            xLODInfos.AddFirst(xLOD0);
            XElement xLOD2 = new XElement(xLODInfo);
            xLevelID = xLOD2.Element("LevelID");
            xLevelID.Value = "2";
            xScale = xLOD2.Element("Scale");
            xScale.Value = L2Scale;
            xResolution = xLOD2.Element("Resolution");
            xResolution.Value = L2Resolution;
            xLODInfos.Add(xLOD2);

            // Write config file

            StreamWriter sw = new StreamWriter(sConfigPath);
            xDoc.Save(sw);
            sw.Close();

            // Rename L00 folder to L01

            string sLayersPath = Path.Combine(sPath, "_alllayers");
            string sL00Path = Path.Combine(sLayersPath, "L00");
            string sL01Path = Path.Combine(sLayersPath, "L01");
            Directory.Move(sL00Path, sL01Path);

            return true;

        }

    }
}

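A hypothetical call site, run before the cache is loaded into an ArcGISTiledLayer, might look like this sketch:

// Patch the cache in place if needed; log and skip it if the patch fails
if (!TileCacheHelper.CheckTileCache(sCachePath))
    Debug.Print("Unable to patch tile cache: " + sCachePath);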
Regular Contributor

At some point in the 100.x lifespan of ArcGIS Runtime SDK for .NET, the old tried-and-true method of treating a MapView as just another WPF Visual went sailing out the window.  Granted, the ExportImageAsync method should have been a simple workaround, but for one drawback: overlay items are not included!

Now I don't know about you, but I find the OverlayItemsControl to be a great way to add interactive text to a map.  You can have it respond to a mouse-over:

Bring up a context menu:

Modify properties:

And so on.  In the old days, when you created an image of the MapView, the overlays would just come right along:

private RenderTargetBitmap GetMapImage(MapView mv)
{

    // Save map transform

    System.Windows.Media.Transform t = mv.LayoutTransform;
    Rect r = System.Windows.Controls.Primitives.LayoutInformation.GetLayoutSlot(mv);
    mv.LayoutTransform = null;
    Size sz = new Size(mv.ActualWidth, mv.ActualHeight);
    mv.Measure(sz);
    mv.Arrange(new Rect(sz));

    // Output map

    RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
        (int)sz.Width, (int)sz.Height, 96d, 96d,
        System.Windows.Media.PixelFormats.Pbgra32);
    rtBitmap.Render(mv);

    // Restore map transform

    mv.Arrange(r);
    mv.LayoutTransform = t;

    return rtBitmap;

}

Not so today!  Try that approach in 100.6 and you just get a black box.    

My workaround:

  1. Create a Canvas
  2. Create an Image for the Mapview and add it to the Canvas
  3. Create an Image for every overlay and add it to the Canvas
  4. Create a bitmap from the Canvas

Step 3 is trickier than you would think, however, because of two issues:  1) relating the anchor point to the overlay, and 2) taking any RenderTransform into account.

As far as I can tell, this is the rule for determining the relationship between the overlay and the anchor point:

HorizontalAlignment: Center or Stretch, anchor point is at the center; Left, anchor point is at the right; Right, anchor point is at the left.

VerticalAlignment: Center or Stretch, anchor point is at the center; Top, anchor point is at the bottom; Bottom, anchor point is at the top.

For a Canvas element, the anchor point is at 0,0 -- however, I have not found a good way to create an Image from a Canvas [if the actual width and height are unknown].

To create an Image from the element, any RenderTransform must be removed before generating the RenderTargetBitmap.  Then, the Transform must be reapplied to the Image.  Also, you need to preserve HorizontalAlignment and VerticalAlignment if you're creating a page layout using a copy of the MapView, so that the anchor point placement is correct.

So here it is, the code for my workaround:

using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

using Esri.ArcGISRuntime.Geometry;
using Esri.ArcGISRuntime.UI;
using Esri.ArcGISRuntime.UI.Controls;

namespace Workarounds
{
    public struct MapOverlayExport
    {
        public Image OverlayImage;
        public MapPoint Anchor;
        public MapPoint TopLeft;
    }

    public static class MapExportHelper
    {
        // Export bitmap from map with XAML graphics overlays

        public static async Task<ImageSource> GetMapImage(MapView mv)
        {
            RuntimeImage ri = await mv.ExportImageAsync();
            ImageSource src = await ri.ToImageSourceAsync();
            if (mv.Overlays.Items.Count == 0)
                return src; // No XAML overlays

            // Create canvas
            double dWidth = mv.ActualWidth;
            double dHeight = mv.ActualHeight;
            Rect rMap = new Rect(0, 0, dWidth, dHeight);
            Size szMap = new Size(dWidth, dHeight);
            Canvas c = new Canvas();

            // Add map image
            Image imgMap = new Image()
            {
                Height = dHeight,
                Width = dWidth,
                Source = src
            };
            imgMap.Measure(szMap);
            imgMap.Arrange(rMap);
            imgMap.UpdateLayout();
            Canvas.SetTop(imgMap, 0);
            Canvas.SetLeft(imgMap, 0);
            c.Children.Add(imgMap);

            // Add map overlays
            List<MapOverlayExport> Overlays = GetMapOverlays(mv);
            foreach (MapOverlayExport overlay in Overlays)
            {
                // Get Image and location
                Image img = overlay.OverlayImage;
                MapPoint ptMap = overlay.TopLeft;
                Point ptScreen = mv.LocationToScreen(ptMap);

                // Create and place image of element
                Canvas.SetTop(img, ptScreen.Y);
                Canvas.SetLeft(img, ptScreen.X);
                c.Children.Add(img);
                img.UpdateLayout();
            }
            c.Measure(szMap);
            c.Arrange(rMap);
            c.UpdateLayout();

            // Create RenderTargetBitmap
            RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
                (int)dWidth, (int)dHeight, 96d, 96d, PixelFormats.Pbgra32);
            rtBitmap.Render(c);
            return rtBitmap;
        }

        public static List<MapOverlayExport> GetMapOverlays(MapView mv)
        {
            List<MapOverlayExport> Overlays = new List<MapOverlayExport>();
            foreach (object obj in mv.Overlays.Items)
            {
                // Get element and location
                if (!(obj is FrameworkElement elem))
                {
                    Debug.Print("MapExportHelper: Non-FrameworkElement encountered.");
                    continue;
                }
                double dW = elem.ActualWidth;
                double dH = elem.ActualHeight;
                if ((dH == 0) || (dW == 0))
                {
                    Debug.Print("MapExportHelper: Unsupported FrameworkElement encountered.");
                    continue;
                }

                // Remove RenderTransform and RenderTransformOrigin
                Transform tRender = elem.RenderTransform;
                Point ptOrigin = elem.RenderTransformOrigin;
                elem.RenderTransform = null;
                elem.RenderTransformOrigin = new Point(0, 0);
                elem.Measure(new Size(dW, dH));
                elem.Arrange(new Rect(0, 0, dW, dH));
                elem.UpdateLayout();

                // Create image of element
                ImageSource src = null;
                if (elem is Image imgSrc)
                    src = imgSrc.Source;
                else
                {
                    RenderTargetBitmap bmp = new RenderTargetBitmap(
                        (int)dW, (int)dH, 96d, 96d, PixelFormats.Pbgra32);
                    bmp.Render(elem);
                    src = bmp;
                }
                Image img = new Image()
                {
                    Height = dH,
                    Width = dW,
                    Source = src,
                    HorizontalAlignment = elem.HorizontalAlignment,
                    VerticalAlignment = elem.VerticalAlignment,
                    RenderTransform = tRender,
                    RenderTransformOrigin = ptOrigin
                };

                // Restore RenderTransform and RenderTransformOrigin
                elem.RenderTransform = tRender;
                elem.RenderTransformOrigin = ptOrigin;

                // Find top left location in map coordinates
                MapPoint ptMap = MapView.GetViewOverlayAnchor(elem);
                Point ptScreen = mv.LocationToScreen(ptMap);
                double dY = 0;
                double dX = 0;
                switch (elem.VerticalAlignment)
                {
                    case VerticalAlignment.Center:
                    case VerticalAlignment.Stretch:
                        dY = -dH / 2;
                        break;
                    case VerticalAlignment.Top:
                        dY = -dH;
                        break;
                }
                switch (elem.HorizontalAlignment)
                {
                    case HorizontalAlignment.Center:
                    case HorizontalAlignment.Stretch:
                        dX = -dW / 2;
                        break;
                    case HorizontalAlignment.Left:
                        dX = -dW;
                        break;
                }
                Point ptTopLeftScreen = new Point(ptScreen.X + dX, ptScreen.Y + dY);
                MapPoint ptTopLeftMap = mv.ScreenToLocation(ptTopLeftScreen);

                // Add exported overlay to list
                Overlays.Add(new MapOverlayExport()
                {
                    OverlayImage = img,
                    Anchor = ptMap,
                    TopLeft = ptTopLeftMap
                });
            }

            return Overlays;
        }
    }
}
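To round this out, here is a rough sketch of calling the helper and writing the result to a PNG. The class name, the output path, and the cast to BitmapSource are my assumptions, not part of the original workaround (both code paths in GetMapImage return a bitmap-based source in practice):

using System.IO;
using System.Threading.Tasks;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Esri.ArcGISRuntime.UI.Controls;

namespace Workarounds
{
    public static class MapExportUsage
    {
        // Usage sketch: export the MapView (overlays included) and write it to a PNG file.
        public static async Task SaveMapSnapshotAsync(MapView mv, string path = @"C:\temp\map_with_overlays.png")
        {
            ImageSource src = await MapExportHelper.GetMapImage(mv);

            var encoder = new PngBitmapEncoder();
            encoder.Frames.Add(BitmapFrame.Create((BitmapSource)src));
            using (FileStream stream = File.Create(path))
                encoder.Save(stream);
        }
    }
}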

P.S. -- If you want ExportImageAsync to include overlays, vote up this idea:   

Occasional Contributor

Originally posted by Courtney Kirkham, September 18, 2019 from the MapThis! Blog

While OAuth 2.0 is Esri’s recommended methodology for handling security and authentication for their ArcGIS platform, not everyone using it understands what it does or how to implement it. Here at GEO Jobe, we’ve had to explain it to more than a few of the people we’ve worked with. As such, we thought we’d lay out a quick guide to what OAuth is and how it works.

OAuth 2.0 handles security and authentication for the ArcGIS platform.

What is OAuth 2.0?

OAuth 2.0 is the protocol that ensures only users you give permission to can access your ArcGIS content. Esri chooses to use OAuth 2.0 for a number of reasons, including this list they’ve provided:

  • OAuth 2.0 meets the needs of both users and applications.
  • There are strong security practices around OAuth 2.0.
  • OAuth 2.0 is designed to function at Internet-scale across domains, networks, cloud services, and applications.
  • As a widely accepted standard, OAuth 2.0 has many libraries and helpers for a variety of languages and platforms.

This is an important part of security for controlling who can access or edit content, as well as managing credit usage. By using OAuth 2.0 in your applications, you can make a map of company assets available to anyone in your company while still keeping it hidden from the public. A company working on building a new neighborhood could create a map to track the progress of the homes being built, while ensuring only supervisors can edit the status of the houses.

Perhaps the most important way OAuth 2.0 manages security is controlling access to premium content and services. Since interacting with these resources consumes credits, and credits cost real money, OAuth 2.0 is an important part of making sure that only the people you want accessing those resources are able to do so.
(Bonus: For additional control over security while reducing the overhead in your org, check out security.manager)

You’re not getting that data without valid credentials.

How does OAuth 2.0 work?

Here at GEO Jobe, we’ve found the best way to explain how OAuth 2.0 works is with an analogy. Say your friend, Chris, got access to some exclusive event – a networking opportunity, a party, or something like that. There is a private guest list for the event, and the doormen are checking everyone. Your friend tells you all you need to do is tell the doorman you’re there with Chris, and the doorman will let you in.

When you get to the event and check in with the doorman, one of three things can happen. We’ve outlined them each below, and explained what they mean in the context of OAuth 2.0.

The Doorman Finds Your Friend; You Get a Wristband and Go In

This is what happens when OAuth 2.0 works. You’re able to get in and see your friend. In the case of ArcGIS, this means you requested access to content that you have permission to see. After OAuth 2.0 checks your credentials, it gives you a token (the wristband) that is added to all of your subsequent requests for content. Then, you get whatever you need (that you have permission to view), and everything is good.
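To make the wristband a little more concrete, here is a small sketch (mine, not the original article's) of requesting a secured portal item once a token is in hand; the org URL, item ID, and token value are placeholders:

using System.Net.Http;
using System.Threading.Tasks;

public static class ArcGisRequestSketch
{
    // The access token travels with every request for secured content.
    public static async Task<string> GetSecuredItemJsonAsync(string accessToken, string itemId)
    {
        using (var http = new HttpClient())
        {
            string url = "https://www.arcgis.com/sharing/rest/content/items/" + itemId
                       + "?f=json&token=" + accessToken;
            return await http.GetStringAsync(url);
        }
    }
}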

The Doorman Finds Your Friend and You Don’t Get In

This is when the doorman comes back and tells you they found Chris, but Chris says they don’t know you. While this may be an awkward social situation, in OAuth 2.0, it’s pretty simple. It means you tried to access content, and OAuth 2.0 doesn’t think you are supposed to be able to see it. This will often result in an “Invalid Redirect URI” error.

In terms of development, this happens because the request is coming from a URL the app doesn’t recognize. To fix it, go to the application item in your ArcGIS Organization that was used to register for OAuth 2.0. Then, in the Settings menu, view the “Registered Info”. The domain sending the request will need to be included in the Redirect URIs.

The Doorman Can’t Find Your Friend

Maybe your friend left the party. Maybe the doorman thought the “Chris” they were looking for was a “Christopher” instead of a “Christine”. Regardless of the reason, the doorman can’t find your friend, and they’re not letting you into the party. When this happens, OAuth 2.0 will return an error stating that there is an “Invalid Client ID”. This is also easy for a developer to fix.

This situation occurs because there isn’t an app in the ArcGIS Organization in question with an App ID that matches what OAuth 2.0 was told to look for. This can happen if the app was deleted from your ArcGIS Org, or if the code where the App ID is specified was altered. To fix it, check where the App ID is specified in the code for the OAuth 2.0 call, and check the application in the ArcGIS Org that was used to register for OAuth 2.0. If the application was deleted, you will need to create and register a new application, then use that App ID. If the application exists, check under the “Settings” menu and the “Registered Info” to find the App ID; it should match the App ID value in the code. If it doesn’t, recopy the App ID from the application in the ArcGIS Org and paste it into the code where the OAuth 2.0 information is initialized.
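As a rough illustration of where these values live on the developer's side, here is a sketch using the ArcGIS Runtime SDK for .NET security classes. The class and property names follow the 100.x API as I understand it, and the App ID, redirect URI, and portal URL are placeholders that must match your app's Registered Info:

using System;
using Esri.ArcGISRuntime.Security;

public static class OAuthConfigSketch
{
    // Sketch only: the ClientId must match the registered App ID, and the RedirectUri
    // must appear in the app's approved Redirect URIs, or you will see the errors above.
    public static void ConfigureOAuth()
    {
        var serverInfo = new ServerInfo
        {
            ServerUri = new Uri("https://www.arcgis.com/sharing/rest"),
            TokenAuthenticationType = TokenAuthenticationType.OAuthAuthorizationCode,
            OAuthClientInfo = new OAuthClientInfo
            {
                ClientId = "YOUR_APP_ID",              // "Invalid Client ID" if this doesn't match
                RedirectUri = new Uri("my-app://auth") // "Invalid Redirect URI" if this isn't registered
            }
        };
        AuthenticationManager.Current.RegisterServer(serverInfo);
    }
}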

How to Implement an OAuth 2.0 Application

Setting up an OAuth 2.0 application in your ArcGIS Organization is fairly simple. In fact, it only takes five steps! It’s so easy, we’ve outlined the process below.

1. To start, sign into your ArcGIS Org and go to the Content menu. From there, click on “Add Item” and choose the option for “An Application”.

2. Next, you’ll select the type “Application” and fill out some basic information.

3. After you add the item, go to the Settings page and click the “Registered Info” button. Note: While on the Settings page, you may want to select the option for “Prevent this item from being accidentally deleted.”

4. After clicking the “Registered Info” button, the App ID you will need should be visible on the left. The final step will be to update the Redirect URIs for the application. Click the “Update” button on the right side of the screen.

5. A popup with the Registered Info should appear. Any application a developer builds that needs to use OAuth 2.0 with your ArcGIS organization must have its domain added to the approved Redirect URIs of the OAuth application. Add the appropriate domains in the textbox, then click “Add”. After your domains are all added, click the “Update” button at the bottom of the popup.

And there you have it! Five easy steps and you’re ready to use OAuth 2.0 in your ArcGIS Organization.

You can relax, knowing your ArcGIS content is safe and accessible only to the people you choose.

Conclusion

Securing your ArcGIS data is important. OAuth 2.0 can make it simple. If you need any assistance setting up OAuth for your ArcGIS Organization, or need some custom applications built while keeping your data secure, reach out to us at connect@geo-jobe.com. We’ll be glad to help!

