Developers Blog

AmyNiessen
Esri Regular Contributor

Earlier this month, Christopher Zent from the ArcGIS Pro SDK team and Robert Burke, Esri Instructor, co-presented the GeoDev Webinar, "ArcGIS Pro SDK for .NET: Extensibility Patterns." Throughout the presentation, attendees could submit questions. The questions below are the ones we were unable to get to during the webinar; those we did address, along with the presentation recording and slides, are listed below as well. Check out what you may have missed!

SudipSenGupta
New Contributor

The Ghost blogging platform offers a lean and minimalist experience. And that's why we love it. But unfortunately, sometimes it can be too lean for our requirements.

Web performance has become more important and relevant than ever, especially since Google started including it as a parameter in its SEO rankings. We make sure to optimize our websites as much as possible, offering the best possible user experience. This article will walk you through the steps you can take to optimize a Ghost Blog's performance while keeping it lean and resourceful.

AmyNiessen
Esri Regular Contributor

At the end of May, we hosted a GeoDev Webinar on one of our latest blog posts, which came from Kristian Ekenes, a Senior Product Engineer on the ArcGIS API for JavaScript team. He wrote a blog post on Mapping Large Datasets on the Web, and since we thought this would be a great topic to cover more in-depth, we decided to host the same topic as a webinar where attendees could ask questions. There were a lot of good questions that came in but were not addressed during the live Q&A portion of the webinar, so Kristian addresses them below.

MarkCederholm
Regular Contributor

I maintain a number of automated map products in ArcMap which involve not just spatial queries and geometric operations, but also fine-grained manipulation of layers, including renderers and symbology. Let's face it: I never could get the arcpy.mapping module or early versions of ArcGIS Pro to cut the mustard. Later versions of the ArcGIS Pro SDK introduced far greater capability to manipulate map layers and layout elements. But then I asked myself: should users be running Pro at all to create those plots?

At Pro 2.4.3, I started taking a closer look at arcpy.mp, wondering if I could create a geoprocessing tool and publish it as a web tool for consumption by a custom Web AppBuilder widget in Portal. I am happy to say that an initial proof-of-concept experiment has been a success.

Before I go into that, first I would like to point out some of the features of arcpy.mp that made me decide that it has finally reached the level of functionality that I need:

  • Load and modify symbols
  • Change and manipulate renderers
  • Make layout elements visible or invisible
  • Make modifications at the CIM level

One thing arcpy.mp doesn't do yet is create new layout elements, but for my purposes I can recycle existing ones. A good approach is to have a number of elements present for various tasks in a layout, and make them visible or invisible on demand for different situations.

# Show or hide legend
legend = self.__layout.listElements("LEGEND_ELEMENT")[0]
if self.__bOverview:
    if self.__bMainline:
        legend.visible = True
    else:
        legend.visible = False
else:
    legend.visible = True

The ability to manipulate legend elements is still pretty limited, but I haven't run into any deal-killers yet. If you really hit a wall, one powerful thing you can now do is dive into the layout's CIM (Cartographic Information Model) and make changes directly to that.  Here's an example of modifying a legend element in a layout via the CIM:

aprx = arcpy.mp.ArcGISProject("c:/apps/Maps/LeakSurvey/LeakSurvey.aprx")
layout = aprx.listLayouts("Leak Survey Report Maps Template")[0]
cim = layout.getDefinition("V2")
legend = None
for e in cim.elements:
    if type(e) == arcpy.cim.CIMLegend:
        legend = e
        break
legend.columns = 2
legend.makeColumnsSameWidth = True
layout.setDefinition(cim)

While the CIM spec is formally documented on GitHub, a simpler way to explore the CIM is to check out the ArcGIS Pro API Reference; all objects and properties in the ArcGIS.Core.CIM namespace should be mirrored in Python.

Part One: Creating a Python Toolbox

LeakSurvey.pyt is in the sample code attached to this post. While my initial draft was focused on successfully generating a PDF file, when the time came to test the tool as a service, additional factors came into play:

  • Getting the service to publish successfully at all
  • Returning a usable link to the resulting PDF file
  • Providing a source for valid input parameters

Sharing a geoprocessing tool as a package or service is one of the least intuitive, most trippy experiences I've ever had with any Esri product.  The rationale seems to be that you are not publishing a tool, but a vignette. You can't simply put out the tool and say, here it is: you must publish a geoprocessing result. As part of that concept, any resolvable references will cause ArcGIS to attempt to bundle them, or to match them to a registered data store. This is a great way to get the publication process to crash, or lock the published service into Groundhog Day.

So, one key to successfully publishing a web tool is to provide a parameter that:

  1. Gives the tool a link to resolve data and aprx references, and
  2. When left blank, returns a placeholder result that you can use to publish the service.

LeakSurvey.pyt does just that. Here's the definition for the "Project Folder" parameter:

param0 = arcpy.Parameter(
    displayName = "Project Folder",
    name = "project_folder",
    datatype = "GPString",
    parameterType = "Optional",
    direction = "Input")

When left blank, the tool simply returns "No results" without throwing an error. Otherwise, it points to a shared folder that contains the ArcGIS Pro project and some enterprise GDB connection files.
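
As a rough sketch of how that plays out in the tool's execute method (illustrative only; the helper logic in the attached LeakSurvey.pyt is more involved, and this assumes arcpy, os, and the parameter order shown in this post):

# Sketch: short-circuit with a placeholder result when no project folder
# is supplied, so an empty run can be used to publish the service.
def execute(self, parameters, messages):
    sProjectFolder = parameters[0].valueAsText
    if sProjectFolder in (None, "", "#"):
        parameters[7].value = "No results"   # the derived "Result" parameter
        return
    # A real run resolves the project and connection files from the shared folder
    aprx = arcpy.mp.ArcGISProject(os.path.join(sProjectFolder, "LeakSurvey.aprx"))
    # ... query the survey, generate the PDF, set the output parameters ...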

Returning a usable link to an output file involves a bit of a trick.  Consider the definition of the "Result" parameter:

param7 = arcpy.Parameter(
    displayName = "Result",
    name = "result",
    datatype = "GPString",
    parameterType = "Derived",
    direction = "Output")

The tool itself creates a path to the output file as follows:

sOutName = self.__sSurveyType + "_" + self.__sSurveyName + "_" + self.__sMapsheet + "_"
sOutName += str(uuid.uuid4())
sOutName += ".pdf"
sOutName = sOutName.replace(" ", "_")
sOutput = os.path.join(arcpy.env.scratchFolder, sOutName)

If that value is sent to the "Result" parameter, what the user will see is the local file path on the server. In order for the service to return a usable URL, a return parameter needs to be defined as follows:

param8 = arcpy.Parameter(
    displayName = "Output PDF",
    name = "output_pdf",
    datatype = "DEFile",
    parameterType = "Derived",
    direction = "Output")

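In the execute logic, both derived outputs then receive the same information. A sketch of the idea, using the sOutput path built above (the attached toolbox may wire this up slightly differently):

# Sketch: after exporting the PDF to sOutput in the scratch folder,
# report success in "result" and hand the file path to "output_pdf".
# ArcGIS Server exposes the DEFile output as a downloadable URL in the
# job's output directory, which is what the client actually receives.
parameters[7].value = "Success."   # the "Result" string parameter
parameters[8].value = sOutput      # the "Output PDF" DEFile parameter
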
Traditional tool validation code is somewhat funky when working with a web tool, and I dispense with it. Rather, the tool returns a list of valid values depending on the parameters provided, keeping in mind that I want this service to be consumed by a web app. For example, if you provide the tool with a survey type and leave the survey name blank, it will return a list of the surveys that exist. If you provide a survey type and name and leave the map sheets parameter blank, it will return a list of the map sheets for that survey:

if self.__sSurveyName == "" or self.__sSurveyName == "#" or self.__sSurveyName == None:
    # Return list of surveys for type
    return self.__GetSurveysForType()
self.__bMainline = self.__sSurveyType == "MAINLINE" or self.__sSurveyType == "TRANSMISSION"
self.__Message("Querying map sheets...")
bResult = self.__GetMapsheetsForSurvey()
if not bResult:
    return "No leak survey features."
if self.__sMapsheets == None or self.__sMapsheets == "#":
    # Return list of map sheets for survey
    sResult = "MAPSHEETS|OVERVIEW"
    for sName in self.__MapSheetNames:
        sResult += "\t" + sName
    return sResult

So how's the performance? Not great compared to doing the same thing in ArcObjects, but there are things I can do to improve it. For example, because the tool must re-query the survey and its map sheets every time it runs, there is an option to specify multiple sheets, which are combined into one PDF and returned to the calling application. The tool also supports an "ALL" map sheets option, in order to bypass the need to return a list of map sheets for the survey.

Nonetheless, arcpy can suffer in comparison to ArcObjects in various tasks [see this post for some revealing comparisons]. On the other hand, the advantages of using arcpy.mp can outweigh the disadvantages when it comes to automating map production.

After testing the tool, it's a simple matter to create an empty result and publish it to Portal:

For this example, I also enable messages to be returned:

Once in Portal, it's ready to use:

Part Two: Creating and Publishing a Custom Web AppBuilder Widget

As I've mentioned in another post, one reason I like developing in Visual Studio is that I can create and use project templates. I've attached my current Web AppBuilder custom widget template to this post.

I've also attached the code for the widget itself. Because the widget makes multiple calls to the web tool, it needs a way to sort through the returns. In this example, the tool prefixes "SURVEYS|" when returning a list of surveys, and "MAPSHEETS|" when returning a list of map sheets. When a PDF is successfully generated, the "Result" parameter contains "Success."

private onJobComplete(evt: any): void {
    let info: JobInfo = evt.jobInfo;
    this._sJobId = info.jobId;
    this._gp.getResultData(info.jobId, "result");
}

private onGetResultDataComplete(evt: any): void {
    let val: ParameterValue = evt.result;
    let sName: string = val.paramName;
    if (sName === "output_pdf") {
        this.status("Done.");
        window.open(val.value.url);
        this._btnGenerate.disabled = false;
        return;
    }
    let sVal: string = val.value;
    if (this.processSurveyNames(sVal))
        return;
    if (this.processMapSheets(sVal))
        return;
    if (this.processPDF(sVal))
        return;
    this.status(sVal);
}

private processSurveyNames(sVal: string): boolean {
    if (sVal.indexOf("SURVEYS|") !== 0)
        return false;
    ...

private processMapSheets(sVal: string): boolean {
    if (sVal.indexOf("MAPSHEETS|") !== 0)
        return false;
    ...

private processPDF(sVal: string): boolean {
    if (sVal !== "Success.")
        return false;
    ...

The widget can be tested and debugged using Web AppBuilder for ArcGIS (Developer Edition):

Publishing widgets to Portal can be tricky: our production Portal sits in a DMZ, and HTTPS calls to another server behind the firewall will fail, so widgets must reside on the Portal server. And even though our "Q" Portal sits behind the firewall and can see other servers, it's on a different domain. Thus, if I choose to host "Q" widgets on a different server, I need to configure CORS. Here's an example of web.config:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <cors enabled="true" failUnlistedOrigins="true">
      <add origin="*" />
      <add origin="https://*.uns.com"
           allowCredentials="true"
           maxAge="120">
        <allowHeaders allowAllRequestedHeaders="true">
          <add header="header1" />
          <add header="header2" />
        </allowHeaders>
        <allowMethods>
          <add method="DELETE" />
        </allowMethods>
        <exposeHeaders>
          <add header="header1" />
          <add header="header2" />
        </exposeHeaders>
      </add>
      <add origin="https://*.unisource.corp"
           allowCredentials="true"
           maxAge="120">
        <allowHeaders allowAllRequestedHeaders="true">
          <add header="header1" />
          <add header="header2" />
        </allowHeaders>
        <allowMethods>
          <add method="DELETE" />
        </allowMethods>
        <exposeHeaders>
          <add header="header1" />
          <add header="header2" />
        </exposeHeaders>
      </add>
      <add origin="http://*" allowed="false" />
    </cors>
  </system.webServer>
</configuration>

The file sits in a virtual web folder called "Widgets" with any widget folders to publish placed under that. When publishing a widget, initially there may be a CORS error:

but reloading the page and trying again should work.

Once the widget is published to Portal, it can be added to a new or existing application, and it's ready to use:

Because generating plot files can be a lengthy process, it may not be useful for the widget to wait for completion. Were I to put this into production, I would probably modify the tool to send plot files to a shared folder (or even a document management service) and send an email notification when it completes or fails.

MarkCederholm
Regular Contributor

[This was to be my user presentation at the 2020 DevSummit, which was cancelled.]

Chrome extensions are a fun way to implement functionality that is not normally available to a web client app. Extensions can make cross-domain requests to gather data from a variety of sources, and at the same time can filter out unwanted content. The Chrome API provides a rich suite of tools for focused application development.

Obviously, any app that is implemented as a Chrome extension will only run in Chrome. Also, Chrome extensions must be distributed through Chrome Web Store, but that's not necessarily a bad thing, as I will show later.

Here are some online resources:

Chrome extensions can contain background scripts, content scripts, a UI for saved user options, and so on. The manifest file is what ties it all together: if you've developed custom widgets for Web AppBuilder, you should already be familiar with the concept. Here's an example of manifest.json:

{
  "name": "Simple Map Example",
  "version": "1.0",
  "description": "Build an Extension with TypeScript and the ArcGIS API for JavaScript 4.x!",
  "manifest_version": 2,
  "icons": { "128": "images/chrome32.png" },
  "browser_action": {
    "default_popup": "popup.html",
    "default_icon": { "128": "images/chrome32.png" }
  },
  "options_ui": {
    "page": "options.html",
    "open_in_tab": false
  },
  "permissions": [ "storage" ],
  "content_security_policy": "script-src 'self' https://js.arcgis.com blob:; object-src 'self'"
}

One thing that's worth pointing out is the "content_security_policy" entry. This will be different depending on whether you use JSAPI 3.x or 4.x. See this post for more information.

Let's use a Visual Studio 2017 project template (attached) to create a simple extension. Because the template uses TypeScript, there are some prerequisites; see this post for more information.

First, let's create a blank solution called DevSummitDemo:

Next, add a new project using the ArcGIS4xChromeExtensionTemplate:

Here is the structure of the resulting project:

Building the project compiles the TypeScript source into corresponding JS files.  Extensions can be tested and debugged using Chrome's "Load unpacked" function:

Note that Chrome DevTools will not load TypeScript source maps from within an extension. That's normally not an issue since you can debug the JS files directly. There is a way to debug the TypeScript source, but it involves some extra work. First, set up IIS express to serve up the project folder:

Then, edit the JS files to point to the localhost url:

Now, you can set a breakpoint in a TS file and it will be hit:

The disadvantage of this approach is that you must re-edit the JS files every time you recompile them.

The next demo involves functionality that is available in JSAPI 3.x, but not yet in 4.x: namely, the ability to grab an image resource and display it as a layer. Here is a web page that displays the latest weather radar imagery:

The latest image is a fixed url, so nothing special needs to be done to reference it. Wouldn't it be cool, however, to display an animated loop of the 10 latest images? But there's a problem.

Let's add the LocalRadarLoop demo project code (attached) to the VS2017 solution and look at pageHelper.ts:

export class myApp {
    public static readonly isExtension: boolean = false;
    public static readonly latestOnly: boolean = true;
}

When isExtension is false, and latestOnly is true, the app behaves like the web page previously shown.

Note also this section of extension-only code that must be commented out for the app to run as a normal web page:

// **** BEGIN Extension-only block ****
/*
if (myApp.isExtension) {
    let sDefaultCode: string = defaultLocalCode;
    chrome.storage.local.get({ localRadarCode: sDefaultCode },
        (items: any) => {
            let sCode: string = items.localRadarCode;
            let sel: HTMLSelectElement = <HTMLSelectElement>document.getElementById("localRadarCode");
            sel.value = sCode;
            this.setRadar();
        });
    return;
}
*/
// **** END ****

Because the latest set of radar images does not have fixed names, it is necessary to obtain a directory listing to find out what they are. If you set latestOnly to false and run the app, however, you will run into the dreaded CORS policy error:

This is where the power of Chrome extensions comes into play. Set isExtension to true, and uncomment the extension-only code (which enables a saved user option), and load the app as an extension. Now you get the desired animation loop!

The relevant entry in manifest.json is the one that allows the XMLHttpRequest to run without a CORS error: the host serving the radar images has to be listed under "permissions" so Chrome grants the extension cross-origin access to it.
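
A hypothetical version of that entry (the host shown is a placeholder for wherever the radar images actually live):

"permissions": [
    "storage",
    "https://radar.example.gov/*"
]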

Now, as I pointed out earlier, Chrome extensions are distributed through Chrome Web Store:

There are some advantages to this. For example, updates are automatically distributed to users. You can also create an "invisible" store entry, or publish only to testers. I find that last feature useful for distributing an extension that I created for my personal use only. Other distribution options do exist, which you can read about at this link.

In conclusion, Chrome extensions enable pure client-side functionality that otherwise would not be possible without the aid of web services. Chrome Web Store provides a convenient way to distribute extensions and updates, with public and private options.

The Local Radar Loop extension is available free at Chrome Web Store.

MarkCederholm
Regular Contributor

Being a user of Microsoft Visual Studio since version 6.0, I prefer it as a one-stop shop for as many kinds of development as possible, including C++, VB, C#, Python, and HTML5/TypeScript projects. One feature of VS that I really like is the ability to create project templates. VS2015 included a project template for TypeScript, but it was ugly as sin. VS2017 dropped it, but failed to provide a viable alternative; being lazy, I continued to use the same version available online:

This must stop! Sometimes, you just have to get your hands dirty, so I decided to create my own project template from scratch. Fortunately, the TypeScript documentation has sections on Integrating with Build Tools, and Compiler Options in MSBuild, which provided valuable assistance. Also, see the MSBuild documentation and How to: Create project templates for more information.

Prerequisites:

The TypeScript website has download links to install the latest version for a number of IDEs, including VS2017. In addition, since the TypeScript folks now prefer you to use npm to install typings, you should also install Node.


Warning! If you are behind a corporate firewall, you may run into this error when you try to use npm to install typings:

   npm ERR! code UNABLE_TO_GET_ISSUER_CERT_LOCALLY

If you see that, try setting this configuration at the command prompt:

   npm config set strict-ssl false

Create a generic TypeScript project:

While, formally, the best approach would be to create a new project type, my lazy approach recycles the C# project type and redefines the build targets (but there is a disadvantage – see below). The first step is to create a blank solution in VS2017 named “TypeScriptProjectTemplates.” In Explorer or the Command Prompt, navigate to the solution folder and create a subfolder named “BasicTypeScriptTemplate.” In that folder, create a file named “BasicTypeScriptTemplate.csproj,” containing the following text:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props')" />
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <OutputType>Library</OutputType>
    <StartupObject />
    <OutputPath>.\</OutputPath>
    <IntermediateOutputPath>vs\</IntermediateOutputPath>
  </PropertyGroup>
  <PropertyGroup>
    <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">12.0</VisualStudioVersion>
  </PropertyGroup>
  <PropertyGroup>
    <TypeScriptToolsVersion>Latest</TypeScriptToolsVersion>
    <TypeScriptModuleKind>amd</TypeScriptModuleKind>
    <TypeScriptNoImplicitAny>true</TypeScriptNoImplicitAny>
    <TypeScriptESModuleInterop>true</TypeScriptESModuleInterop>
    <TypeScriptJSXEmit>react</TypeScriptJSXEmit>
    <TypeScriptJSXFactory>tsx</TypeScriptJSXFactory>
    <TypeScriptTarget>es5</TypeScriptTarget>
    <TypeScriptExperimentalDecorators>true</TypeScriptExperimentalDecorators>
    <TypeScriptPreserveConstEnums>true</TypeScriptPreserveConstEnums>
    <TypeScriptSuppressImplicitAnyIndexErrors>true</TypeScriptSuppressImplicitAnyIndexErrors>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
    <TypeScriptRemoveComments>false</TypeScriptRemoveComments>
    <TypeScriptSourceMap>true</TypeScriptSourceMap>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'Release'">
    <TypeScriptRemoveComments>true</TypeScriptRemoveComments>
    <TypeScriptSourceMap>false</TypeScriptSourceMap>
  </PropertyGroup>
  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets')" />
  <Target Name="Build" DependsOnTargets="CompileTypeScript">
  </Target>
  <Target Name="Rebuild" DependsOnTargets="CompileTypeScript">
  </Target>
  <Target Name="Clean" Condition="Exists('$(TSDefaultOutputLog)')">
    <ItemGroup>
      <TSOutputLogsToDelete Include="$(TSDefaultOutputLog)" />
    </ItemGroup>
    <ReadLinesFromFile File="@(TSOutputLogsToDelete)">
      <Output TaskParameter="Lines" ItemName="TSCompilerOutput" />
    </ReadLinesFromFile>
    <Delete Files="@(TSCompilerOutput)" Condition=" '@(TSCompilerOutput)' != '' " />
    <Delete Files="@(TSOutputLogsToDelete)" />
    <!-- <RemoveDir Directories="$(IntermediateOutputPath)" /> -->
  </Target>
</Project>

In VS2017, add the existing project to the solution. Within the project, create an “app” subfolder, and add a new TypeScript file named “main.ts,” containing the following text:

class Student {
    fullName: string;
    constructor(public firstName: string, public middleInitial: string, public lastName: string) {
        this.fullName = firstName + " " + middleInitial + " " + lastName;
    }
}

interface Person {
    firstName: string;
    lastName: string;
}

function greeter(person: Person) {
    return "Hello, " + person.firstName + " " + person.lastName;
}

let user = new Student("Jane", "M.", "User");

document.body.textContent = greeter(user);

In the project folder, add an HTML Page file named “index.html,” containing the following text:

<!DOCTYPE html>
<html>
<head>
    <title>TypeScript Greeter</title>
</head>
<body>
    <script src="./app/main.js"></script>
</body>
</html>

At this point, the project should look like this in Solution Explorer:

Building or rebuilding the project will generate TypeScript compiler output, and a file named “Tsc.out” will be created in a subfolder named “vs”. The “Tsc.out” file defines the compiler output to delete when cleaning the project; cleaning the project will also delete that file. [Note that if you build and clean “Release” without cleaning “Debug” beforehand, the source map files will still remain.]

Export the project template:

At this point, if you export the project to a template, you have a generic TypeScript project template. However, it will be displayed under the “Visual C#” category. If you want it to appear under the “TypeScript” category, there are additional steps to take. First, unzip the template to a new folder and edit “MyTemplate.vstemplate:”

Change the “ProjectType” value from “CSharp” to “TypeScript”. Zip the contents of the folder to a new zip file, and the template is ready to use.
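
The edited TemplateData section of "MyTemplate.vstemplate" should end up looking something like this (other elements omitted):

<TemplateData>
  <!-- ...Name, Description, Icon, and so on... -->
  <ProjectType>TypeScript</ProjectType>
</TemplateData>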

Building a JSAPI project template:

Now that we have a basic TypeScript project template, the next step is to use it to create a template for a simple JavaScript API project.


First, create a new project, called “ArcGIS4xTypeScriptTemplate,” using the template created above. Open a Command Prompt, navigate to the project folder, and enter the following commands:

   npm init --yes
   npm install --save @types/arcgis-js-api

Back in Solution Explorer, select the project and click the “Show All Files” button. Select the “node_modules” folder, “package.json,” and “package-lock.json,” right-click, and select “Include In Project.” Finally, replace the contents of “main.ts” and “index.html” with the text given at the JSAPI TypeScript walk-through. Your project should now look like this:

You may notice that the “import” statements in “main.ts” are marked as errors, even though the esModuleInterop flag is set in the project:

This appears to be a defect in the Visual Studio extension. The project will build without errors, and the resulting page will load correctly. If it annoys you, you can always revert to the older AMD style statements:
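
For reference, the two styles look roughly like this for the JSAPI typings (module paths as in the 4.x TypeScript walk-through; use one style or the other, not both):

// ES module style (relies on the esModuleInterop flag):
import EsriMap from "esri/Map";
import MapView from "esri/views/MapView";

// Older AMD-style equivalents:
import EsriMap = require("esri/Map");
import MapView = require("esri/views/MapView");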

At this point, you’re ready to export the template.

On a final note:

The JavaScript API is updated frequently, which means that you may also want to keep your project templates up to date. Rather than updating the source project and repeating the export steps, you might want to consider keeping the unzipped template folders in a standard location, and updating those directly. Then, all you have to do is zip them to create the updated template.

MarkCederholm
Regular Contributor

Recently, I found myself painted into a corner. Some time ago, I'd created custom tile caches for use with Runtime .NET, which had one scale level defined. They worked just fine in 10.2.7, but on preparing to upgrade to 100.x, I discovered that they caused 100.6 to hang up. The workaround was simple enough, namely to define additional scale levels, even if they weren't populated. However, the task of modifying the caches for nearly 150 users proved so daunting that I decided to let the app itself make the modification. Updates to the app are automatically detected and downloaded, which provides a simpler mechanism than deploying a script to everyone's machine. The check isn't a perceptible performance hit as it is, and later on, as part of another update, I can simply deactivate it. So here's the code:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Xml.Linq;

namespace NavAddin
{

    /*
     * Runtime 100.6 BUG: Tile caches that have only one scale level defined will hang up on loading.
     * WORKAROUND: Define additional scale levels
     * [Assumes that 100 < scale level < 24000]
     */

    public static class TileCacheHelper
    {

        public const string L0Scale = "24000";
        public const string L0Resolution = "20.833375000083333";
        public const string L2Scale = "100";
        public const string L2Resolution = "0.086805729167013887";

        public static bool CheckTileCache(string sPath)
        {

            // Check if tile cache (i.e. a folder)

            if (!Directory.Exists(sPath))
                return true; // Not a tile cache

            // Check if one scale level defined

            string sConfigPath = Path.Combine(sPath, "conf.xml");
            StreamReader sr = new StreamReader(sConfigPath);
            XDocument xDoc = XDocument.Load(sr);
            sr.Close();
            XElement xRoot = xDoc.Root;
            XElement xTCInfo = xRoot.Element("TileCacheInfo");
            XElement xLODInfos = xTCInfo.Element("LODInfos");
            int iLevelCount = xLODInfos.Elements("LODInfo").Count();
            if (iLevelCount > 1)
                return true; // Not a problem
            if (iLevelCount < 1)
                return false; // This should never happen?

            // Check if scale level is between 100 (L2) and 24000 (L0)

            XElement xLODInfo, xLevelID, xScale, xResolution;

            xLODInfo = xLODInfos.Element("LODInfo");
            xScale = xLODInfo.Element("Scale");
            string sScale = xScale.Value;
            double dScale = Convert.ToDouble(sScale);
            double dL0Scale = Convert.ToDouble(L0Scale);
            double dL2Scale = Convert.ToDouble(L2Scale);
            if (dScale >= dL0Scale)
                return false;
            if (dScale <= dL2Scale)
                return false;

            // Redefine scale levels

            xLevelID = xLODInfo.Element("LevelID");
            xLevelID.Value = "1";
            XElement xLOD0 = new XElement(xLODInfo);
            xLevelID = xLOD0.Element("LevelID");
            xLevelID.Value = "0";
            xScale = xLOD0.Element("Scale");
            xScale.Value = L0Scale;
            xResolution = xLOD0.Element("Resolution");
            xResolution.Value = L0Resolution;
            xLODInfos.AddFirst(xLOD0);
            XElement xLOD2 = new XElement(xLODInfo);
            xLevelID = xLOD2.Element("LevelID");
            xLevelID.Value = "2";
            xScale = xLOD2.Element("Scale");
            xScale.Value = L2Scale;
            xResolution = xLOD2.Element("Resolution");
            xResolution.Value = L2Resolution;
            xLODInfos.Add(xLOD2);

            // Write config file

            StreamWriter sw = new StreamWriter(sConfigPath);
            xDoc.Save(sw);
            sw.Close();

            // Rename L00 folder to L01

            string sLayersPath = Path.Combine(sPath, "_alllayers");
            string sL00Path = Path.Combine(sLayersPath, "L00");
            string sL01Path = Path.Combine(sLayersPath, "L01");
            Directory.Move(sL00Path, sL01Path);

            return true;

        }

    }
}

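Calling the helper is then just a guard before the cache is loaded. A minimal sketch (the cache path and map variable names are placeholders):

// Sketch: patch the cache on disk, then load it as usual.
if (TileCacheHelper.CheckTileCache(sCachePath))
{
    TileCache cache = new TileCache(sCachePath);
    ArcGISTiledLayer layer = new ArcGISTiledLayer(cache);
    myMap.OperationalLayers.Add(layer);
}
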
MarkCederholm
Regular Contributor

At some point in the 100.x lifespan of ArcGIS Runtime SDK for .NET, the old tried-and-true method of treating a MapView as just another WPF Visual went sailing out the window.  Granted, the ExportImageAsync method should have been a simple workaround, but for one drawback: overlay items are not included!

Now I don't know about you, but I find the OverlayItemsControl to be a great way to add interactive text to a map.  You can have it respond to a mouse-over:

Bring up a context menu:

Modify properties:

And so on.  In the old days, when you created an image of the MapView, the overlays would just come right along:

private RenderTargetBitmap GetMapImage(MapView mv)
{

    // Save map transform

    System.Windows.Media.Transform t = mv.LayoutTransform;
    Rect r = System.Windows.Controls.Primitives.LayoutInformation.GetLayoutSlot(mv);
    mv.LayoutTransform = null;
    Size sz = new Size(mv.ActualWidth, mv.ActualHeight);
    mv.Measure(sz);
    mv.Arrange(new Rect(sz));

    // Output map

    RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
        (int)sz.Width, (int)sz.Height, 96d, 96d,
        System.Windows.Media.PixelFormats.Pbgra32);
    rtBitmap.Render(mv);

    // Restore map transform

    mv.Arrange(r);
    mv.LayoutTransform = t;

    return rtBitmap;

}

Not so today!  Try that approach in 100.6 and you just get a black box.    

My workaround:

  1. Create a Canvas
  2. Create an Image for the Mapview and add it to the Canvas
  3. Create an Image for every overlay and add it to the Canvas
  4. Create a bitmap from the Canvas

Step 3 is trickier than you would think, however, because of two issues:  1) relating the anchor point to the overlay, and 2) taking any RenderTransform into account.

As far as I can tell, this is the rule for determining the relationship between the overlay and the anchor point:

HorizontalAlignment: Center or Stretch, anchor point is at the center; Left, anchor point is at the right; Right, anchor point is at the left.

VerticalAlignment: Center or Stretch, anchor point is at the center; Top, anchor point is at the bottom; Bottom, anchor point is at the top.

For a Canvas element, the anchor point is at 0,0 -- however, I have not found a good way to create an Image from a Canvas [if the actual width and height are unknown].

To create an Image from the element, any RenderTransform must be removed before generating the RenderTargetBitmap.  Then, the Transform must be reapplied to the Image.  Also, you need to preserve HorizontalAlignment and VerticalAlignment if you're creating a page layout using a copy of the MapView, so that the anchor point placement is correct.

So here it is, the code for my workaround:

using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

using Esri.ArcGISRuntime.Geometry;
using Esri.ArcGISRuntime.UI;
using Esri.ArcGISRuntime.UI.Controls;

namespace Workarounds
{

    public struct MapOverlayExport
    {
        public Image OverlayImage;
        public MapPoint Anchor;
        public MapPoint TopLeft;
    }

    public static class MapExportHelper
    {

        // Export bitmap from map with XAML graphics overlays

        public static async Task<ImageSource> GetMapImage(MapView mv)
        {

            RuntimeImage ri = await mv.ExportImageAsync();
            ImageSource src = await ri.ToImageSourceAsync();
            if (mv.Overlays.Items.Count == 0)
                return src; // No XAML overlays

            // Create canvas

            double dWidth = mv.ActualWidth;
            double dHeight = mv.ActualHeight;
            Rect rMap = new Rect(0, 0, dWidth, dHeight);
            Size szMap = new Size(dWidth, dHeight);
            Canvas c = new Canvas();

            // Add map image

            Image imgMap = new Image()
            {
                Height = dHeight,
                Width = dWidth,
                Source = src
            };
            imgMap.Measure(szMap);
            imgMap.Arrange(rMap);
            imgMap.UpdateLayout();
            Canvas.SetTop(imgMap, 0);
            Canvas.SetLeft(imgMap, 0);
            c.Children.Add(imgMap);

            // Add map overlays

            List<MapOverlayExport> Overlays = GetMapOverlays(mv);
            foreach (MapOverlayExport overlay in Overlays)
            {

                // Get Image and location

                Image img = overlay.OverlayImage;
                MapPoint ptMap = overlay.TopLeft;
                Point ptScreen = mv.LocationToScreen(ptMap);

                // Create and place image of element

                Canvas.SetTop(img, ptScreen.Y);
                Canvas.SetLeft(img, ptScreen.X);
                c.Children.Add(img);
                img.UpdateLayout();

            }
            c.Measure(szMap);
            c.Arrange(rMap);
            c.UpdateLayout();

            // Create RenderTargetBitmap

            RenderTargetBitmap rtBitmap = new RenderTargetBitmap(
                (int)dWidth, (int)dHeight, 96d, 96d, PixelFormats.Pbgra32);
            rtBitmap.Render(c);
            return rtBitmap;

        }

        public static List<MapOverlayExport> GetMapOverlays(MapView mv)
        {

            List<MapOverlayExport> Overlays = new List<MapOverlayExport>();
            foreach (object obj in mv.Overlays.Items)
            {

                // Get element and location

                if (!(obj is FrameworkElement elem))
                {
                    Debug.Print("MapExportHelper: Non-FrameworkElement encountered.");
                    continue;
                }
                double dW = elem.ActualWidth;
                double dH = elem.ActualHeight;
                if ((dH == 0) || (dW == 0))
                {
                    Debug.Print("MapExportHelper: Unsupported FrameworkElement encountered.");
                    continue;
                }

                // Remove RenderTransform and RenderTransformOrigin

                Transform tRender = elem.RenderTransform;
                Point ptOrigin = elem.RenderTransformOrigin;
                elem.RenderTransform = null;
                elem.RenderTransformOrigin = new Point(0, 0);
                elem.Measure(new Size(dW, dH));
                elem.Arrange(new Rect(0, 0, dW, dH));
                elem.UpdateLayout();

                // Create image of element

                ImageSource src = null;
                if (elem is Image imgSrc)
                    src = imgSrc.Source;
                else
                {
                    RenderTargetBitmap bmp = new RenderTargetBitmap(
                        (int)dW, (int)dH, 96d, 96d, PixelFormats.Pbgra32);
                    bmp.Render(elem);
                    src = bmp;
                }
                Image img = new Image()
                {
                    Height = dH,
                    Width = dW,
                    Source = src,
                    HorizontalAlignment = elem.HorizontalAlignment,
                    VerticalAlignment = elem.VerticalAlignment,
                    RenderTransform = tRender,
                    RenderTransformOrigin = ptOrigin
                };

                // Restore RenderTransform and RenderTransformOrigin

                elem.RenderTransform = tRender;
                elem.RenderTransformOrigin = ptOrigin;

                // Find top left location in map coordinates

                MapPoint ptMap = MapView.GetViewOverlayAnchor(elem);
                Point ptScreen = mv.LocationToScreen(ptMap);
                double dY = 0;
                double dX = 0;
                switch (elem.VerticalAlignment)
                {
                    case VerticalAlignment.Center:
                    case VerticalAlignment.Stretch:
                        dY = -dH / 2;
                        break;
                    case VerticalAlignment.Top:
                        dY = -dH;
                        break;
                }
                switch (elem.HorizontalAlignment)
                {
                    case HorizontalAlignment.Center:
                    case HorizontalAlignment.Stretch:
                        dX = -dW / 2;
                        break;
                    case HorizontalAlignment.Left:
                        dX = -dW;
                        break;
                }
                Point ptTopLeftScreen = new Point(ptScreen.X + dX, ptScreen.Y + dY);
                MapPoint ptTopLeftMap = mv.ScreenToLocation(ptTopLeftScreen);

                // Add exported overlay to list

                Overlays.Add(new MapOverlayExport()
                {
                    OverlayImage = img,
                    Anchor = ptMap,
                    TopLeft = ptTopLeftMap
                });

            }

            return Overlays;

        }

    }
}

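For completeness, here's roughly how the helper gets used to write the composited image to disk (a sketch; the MapView name and output path are placeholders):

// Sketch: export the MapView plus its XAML overlays and save a PNG.
ImageSource src = await MapExportHelper.GetMapImage(MyMapView);
var encoder = new PngBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create((BitmapSource)src));
using (var fs = System.IO.File.Create(@"C:\Temp\MapWithOverlays.png"))
    encoder.Save(fs);
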
P.S. -- If you want ExportImageAsync to include overlays, vote up this idea:   

BlakeBilbo
Occasional Contributor

Originally posted by Courtney Kirkham, September 18, 2019, on the MapThis! Blog

While OAuth 2.0 is Esri’s recommended methodology for handling security and authentication for their ArcGIS platform, not everyone using it understands what it does or how to implement it. Here at GEO Jobe, we’ve had to explain it to more than a few of the people we’ve worked with. As such, we thought we’d lay out a quick guide to what OAuth is and how it works.

OAuth 2.0 handles security and authentication for the ArcGIS platform.

What is OAuth 2.0?

OAuth 2.0 is the protocol that ensures only users you give permission to can access your ArcGIS content. Esri chooses to use OAuth 2.0 for a number of reasons, including this list they’ve provided:

  • OAuth 2.0 meets the needs of both users and applications.
  • There are strong security practices around OAuth 2.0.
  • OAuth 2.0 is designed to function at Internet-scale across domains, networks, cloud services, and applications.
  • As a widely accepted standard OAuth 2.0 has many libraries and helpers for a variety of languages and platforms.

This is an important part of security for controlling who can access or edit content, as well as managing credit usage. By using OAuth 2.0 in your applications, you can make a map of company assets available to anyone in your company while still keeping it hidden from the public. A company working on building a new neighborhood could create a map to track the progress of the homes being built, while ensuring only supervisors can edit the status of the houses.

Perhaps the most important way OAuth 2.0 manages security is controlling access to premium content and services. Since interacting with these resources consumes credits, and credits cost real money, OAuth 2.0 is an important part of making sure that only the people you want accessing those resources are able to do so.
(Bonus: For additional control over security while reducing the overhead in your org, check out security.manager)

You're not getting that data without valid credentials.

How does OAuth 2.0 work?

Here at GEO Jobe, we've found the best way to explain how OAuth 2.0 works is with an analogy. Say your friend, Chris, got access to some exclusive event – a networking opportunity, a party, or something like that. There is a private guest list for the event, and the doormen are checking everyone. Your friend tells you all you need to do is tell the doorman you're there with Chris, and the doorman will let you in.

When you get to the event and check in with the doorman, one of three things can happen. We’ve outlined them each below, and explained what they mean in the context of OAuth 2.0.

The Doorman Finds Your Friend; You Get a Wristband and Go In

This is what happens when OAuth 2.0 works. You're able to get in and see your friend. In the case of ArcGIS, this means you requested access to content that you have permission to see. After OAuth checks your credentials, it gives you a token (the wristband) that's added to all your requests for content after that. Then, you get whatever you need (that you have permission to view), and everything is good.

The Doorman Finds Your Friend and You Don’t Get In

This is when the doorman comes back and tells you they found Chris, but Chris says they don’t know you. While this may be an awkward social situation, in OAuth 2.0, it’s pretty simple. It means you tried to access content, and OAuth 2.0 doesn’t think you are supposed to be able to see it. This will often result in an “Invalid Redirect URI” error.

In terms of development, this happens because the request is coming from a URL the app doesn't recognize. To fix it, go to the application item in your ArcGIS organization that was used to register for OAuth 2.0. Then, in the Settings menu, view the "Registered Info". The domain sending the request will need to be included in the Redirect URIs.

The Doorman Can’t Find Your Friend

Maybe your friend left the party. Maybe the doorman thought the “Chris” they were looking for was a “Christopher” instead of a “Christine”. Regardless of the reason, the doorman can’t find your friend, and they’re not letting you into the party. When this happens, OAuth 2.0 will return an error stating that there is an “Invalid Client ID”. This is also easy for a developer to fix.

This situation occurs because there isn't an app in the ArcGIS organization in question with an App ID that matches what OAuth 2.0 was told to look for. This can happen if the app was deleted from your ArcGIS organization, or if the code where the App ID was specified was altered. In order to fix it, check where the App ID is specified in the code for the OAuth 2.0 call. Also, check the application in the ArcGIS organization used to register for OAuth 2.0. If the application was deleted, you will need to create and register a new application, then use that App ID. If the application exists, check under the "Settings" menu and the "Registered Info" to find the App ID. This should match the value for the App ID in the code. If it doesn't, recopy the App ID from the application in your ArcGIS organization, then paste the value into the code where the OAuth 2.0 information is initialized.
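
For developers, that App ID typically lives in one place in the client code. With the ArcGIS API for JavaScript, for example, the OAuth registration looks roughly like this (the App ID and portal URL shown are placeholders):

import OAuthInfo from "esri/identity/OAuthInfo";
import esriId from "esri/identity/IdentityManager";

// The appId must match the App ID of the registered application item.
const info = new OAuthInfo({
    appId: "YOUR_APP_ID",
    portalUrl: "https://www.arcgis.com",
    popup: false
});
esriId.registerOAuthInfos([info]);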

How to Implement an OAuth 2.0 Application

Setting up an OAuth 2.0 application in your ArcGIS Organization is fairly simple. In fact, it only takes five steps! It’s so easy, we’ve outlined the process below.

1. To start, sign into your ArcGIS Org and go to the Content menu. From there, click on “Add Item” and choose the option for “An Application”.

2. Next, you’ll select the type “Application” and fill out some basic information.

3. After you add the item, go to the Settings page and click the "Registered Info" button. Note: While on the settings page, you may want to select the option for "Prevent this item from being accidentally deleted."

4. After clicking the "Registered Info" button, the App ID you will need should be visible on the left. The final step will be to update the Redirect URIs for the application. Click the "Update" button on the right side of the screen.

5. A popup with the Registered Info should appear. Any application a developer builds that needs to use OAuth to sign in to your ArcGIS organization will need to have its domain added to the approved Redirect URIs of an OAuth application. Add the appropriate domains in the textbox, then click "Add". After your domains are all added, click the "Update" button at the bottom of the popup.

And there you have it! Five easy steps and you’re ready to use OAuth 2.0 in your ArcGIS Organization.

You can relax, knowing your ArcGIS content is safe and only accessible by who you choose.

Conclusion

Securing your ArcGIS data is important. OAuth 2.0 can make it simple. If you need any assistance setting up OAuth for your ArcGIS Organization, or need some custom applications built while keeping your data secure, reach out to us at connect@geo-jobe.com. We’ll be glad to help!


MattZand
New Contributor

Bootstrap 4 is the latest version of Bootstrap at the time of writing. This version makes Bootstrap more powerful and easier to customize and use. One of the amazing features of Bootstrap 4 is cards.

Cards are a new component in Bootstrap 4 that replaces wells, panels, and thumbnails. A card is a flexible and extensible content container. It includes options for headers and footers, a wide variety of content, contextual background colors, and powerful display options.

To create a card, we need to use the .card class. By default, card width is 100% of its parent container. Therefore, here we are using a grid system to control width. Also, by default, cards use dark text and assume a light background. Add .card-inverse for white text and specify the background-color and border-color values to go with it. Bootstrap 4 also provides a few classes that add a background color and border color to cards.

Here are 3 examples where cards are used:

Ex1:

<div class="card-block">
    <h4 class="card-title">Card title</h4>
    <h6 class="card-subtitle">Support card subtitle</h6>
</div>

Ex2:

<div class="card-block">
    <p class="card-text">
        Lorem ipsum dolor sit amet, at suscipit sodales eget ante ultricies mauris. Etiam dolor felis morbi nibh, mollit porttitor tempor, dignissim magna pellentesque dictumst bibendum dictum integer.
    </p>
</div>

Ex 3:

<div class="card-block">
    <a href="#" class="card-link">Card link</a>
    <a href="#" class="card-link">Another link</a>
</div>
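
Putting these pieces together, a complete card wraps a .card-block inside a .card, with a grid column (a hypothetical col-sm-4 here) controlling the width:

<div class="col-sm-4">
    <div class="card">
        <div class="card-block">
            <h4 class="card-title">Card title</h4>
            <p class="card-text">Some quick example text for the card.</p>
            <a href="#" class="card-link">Card link</a>
        </div>
    </div>
</div>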

Here is a link for more examples of Bootstrap 4 Cards
