BLOG
@MatthewKing great work! You're right, I posted the wrong section. Your code sample looks exactly like it should. I tried fixing the code samples in the blog, but the system seems to want to collapse multi-line code into a single line no matter what I do. This comments section will certainly help to guide other readers until I figure out how to fix it. Thanks for reaching out!
10-04-2021 08:08 AM

BLOG
Introduction

Date/Time fields add a whole new dimension to analysis and visualization of spatial data. They specify an exact point in time, similar to how coordinates specify an exact point in space. When working with spatial data, generating relative content is something we naturally do all the time. For example, we might calculate an area (such as a buffer) that is anchored on an input point. Or we might measure a distance from a point to another object. The outcomes of those analyses are relative; that is, they are relative representations based on an exact starting point. By themselves, the exact coordinates might not mean much, but we can derive something of meaning and value from them.

Time works the same way. When talking about your birthday party, you could say "my birthday party on February 15th, 2020, 6:00 PM"…but it's easier for the person you're talking to if you just say "my birthday party last night". Or when your friend asks "When will the burgers be done?", you could say "February 26th, 5:30 PM", when "in about 30 minutes" is going to be far more effective in getting your point across. These expressions are relative to a precise point in time: right now. In ArcGIS, dates and times are specified precisely, but you can incorporate relative representations of those dates and times in your maps to make them easier to understand. In this post, we're going to look at manipulating symbols and popups to reflect relative time.

Incorporating Relative Time in Symbology

When viewing a live feed, we might want to symbolize the data in such a way that, as events age, they become less and less visually significant until they fade out altogether. For example, take a look at the map below. We pulled in MODIS Hotspots from the Living Atlas, applied a filter to show just the last 24 hours, and then used Arcade to assign each hotspot to a "recentness" category. As these hotspots age, they'll diminish and ultimately disappear from view. With Arcade, you have the flexibility to work with your date fields in any number of ways, customize thresholds, and even calculate your own new, derived dates. Read on to learn how to build this exact map for yourself.

First, let's set up the map. Open a new web map, click Add >> Browse Living Atlas Layers, and search "MODIS". The first result should be a "Satellite (MODIS) Thermal Hotspots and Fire Activity" feature layer by Esri. Go ahead and add that layer to your map. In the Content pane, click the filter button for the MODIS layer and set a filter for "Acquisition Date in the last 1 day(s)". Apply the filter. This will limit the hotspots to those recorded in the last 24 hours. Now click Basemap at the top and select the Dark Gray Canvas basemap.

Now it's time to use Arcade to build our own relative time symbology for the hotspots. For the MODIS layer, click the Change Style button. Expand "Choose an attribute to show", scroll to the bottom of the field list, and click "New Expression". This will launch the Arcade Expression builder. Paste the following code into the Expression body.

// Bucket a hotspot's age in hours into one of six "recentness" categories
function determineCategory(value){
    if (value <= 4){
        return "1"
    }
    else if (value > 4 && value <= 8){
        return "2"
    }
    else if (value > 8 && value <= 12){
        return "3"
    }
    else if (value > 12 && value <= 16){
        return "4"
    }
    else if (value > 16 && value <= 20){
        return "5"
    }
    else {
        return "6"
    }
}
var hours = DateDiff(Now(), $feature["ACQ_DATE"], "hours")
determineCategory(hours)

You can modify the thresholds for each category here if desired. Just keep in mind that the filter we applied to the layer only displays those hotspots that were recorded in the last 24 hours. If you happen to be working with a different layer, you'll need to replace $feature["ACQ_DATE"] so it points to a date field in your layer. Click "Test" to make sure it's working. If it's successful, you should see a Result value of 1-6 (1 being most recent, 6 being oldest). Click OK to apply the script.

The "Unique symbols" style will automatically be selected. Click Options to modify how the categories display. Reorder the symbols 1-6 by dragging the entries to the appropriate positions. Now that everything is in the right order, go ahead and change the labels for each category so that they reflect the time range. I used "In the last 4 hours" for the first one and a schema of "X to Y hours ago" for the last five.

Alright, now we're ready to create the symbols. Click on the first symbol to open the Symbol Changer. Click Shape, open the dropdown, and select "Firefly". The ramp I used was the light green to red ramp. Select the most intense symbol on the ramp (the farthest to the right) and set the size to 36. Repeat this process for the other categories, working your way left on the ramp and decreasing the size each time by 6 (i.e. 36, 30, 24, 18, 12, and 6). Once all your symbols are configured how you like, go ahead and play around with the Transparency slider in the Style pane to dial in the look you're going for. When it's all said and done, your Style pane should look something like this. Click OK to lock it all in.

And, voila! You should now have a map of MODIS hotspots that age over time based on the current time. Great work! Also, don't forget to save your map!

Incorporating Relative Time in Popups

You can also set up relative time expressions in your popups and transform a precise date and time into something a little more human-friendly. For example, go back to the example map provided toward the top of this post. If you click a hotspot, the popup shows the exact date and time as well as a relative expression like "2 hours and 5 minutes ago". That second part is being calculated by Arcade on the fly as well.

To set this up in your own map, open the "Configure Popup" pane for the MODIS Hotspot layer. Click "ADD" under the Attribute Expressions section. This will launch another Arcade Expression builder. Paste the following code into the Expression block:

var myDateField = $feature["ACQ_DATE"];
function getTimeDelta(alertDate){
    // Elapsed time since the alert, in three different units
    var minutes = DateDiff(Now(), alertDate, "minutes")
    var hours = DateDiff(Now(), alertDate, "hours")
    var days = DateDiff(Now(), alertDate, "days")
    // Under two hours old: just report minutes
    if (minutes <= 120){
        return Round(minutes) + " minutes ago"
    }
    // Two to 48 hours old: report whole hours plus leftover minutes
    if (hours <= 48){
        var tFloor = Floor(hours)
        var deltaMin = DateDiff(Now(), DateAdd(alertDate, tFloor, "hours"), "minutes")
        return tFloor + " hours and " + Round(deltaMin) + " minutes ago"
    }
    // Older than 48 hours: report whole days plus leftover hours
    var dFloor = Floor(days)
    var deltaHour = DateDiff(Now(), DateAdd(alertDate, dFloor, "days"), "hours")
    return dFloor + " days and " + Round(deltaHour) + " hours ago"
}
return getTimeDelta(myDateField)

This expression looks at the time delta between the current time and the date in the field, and uses some conditional logic to build a human-friendly expression of that delta. The expression is also built to work with days, but since we've set up our hotspots to show only the last 24 hours, we won't see that case here. You can use this expression as a template for other layers you want to work with. In that case, the only thing you'll have to do is change the $feature["ACQ_DATE"] bit to point to a date field in your layer. Click Test to verify it works, followed by an exuberant click of the "OK" button.

From here, configure the popups however you like, making sure that the expression you created is visible. The expression acts like a field, so you can see it as part of a default popup style or configure it as part of a custom attribute display.

Arcade gives you the flexibility to customize how things are conveyed to your audience without requiring you to modify the schema of your data or run constant field calculations. Temporal representation is just one use case. Hopefully this gives you a start to exploring what's possible. Thanks for reading!
03-10-2020 08:15 AM

POST
Hello, I'm not sure I understand the question, but I'll do my best. Are you essentially wanting to just extract polygons within 100+ shapefiles that have an attribute value greater than 75? Assuming the shapefiles all have the same schema, you could use the Merge tool to combine all of the files into a single feature class. Then you could run the Feature Class to Feature Class tool, setting up the Expression parameter to copy only features that match the criteria you want to use. Would this work for your scenario?
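If it's helpful, here's a rough arcpy sketch of that workflow (the paths and the "VALUE" field name are placeholders for your own data):

import arcpy
import glob
import os

folder = r"C:\data\shapefiles"  # hypothetical folder holding the 100+ shapefiles

# Merge every shapefile in the folder into a single feature class
shapefiles = glob.glob(os.path.join(folder, "*.shp"))
merged = arcpy.Merge_management(shapefiles, r"C:\data\output.gdb\merged")

# Copy out only the polygons whose attribute exceeds 75
# ("VALUE" stands in for whatever your field is actually called)
arcpy.FeatureClassToFeatureClass_conversion(
    merged, r"C:\data\output.gdb", "extracted", where_clause="VALUE > 75")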
02-27-2020 01:10 PM

POST
Hi there. I haven't personally seen this before, but I have some questions about what you're seeing. Do the same images not populate every time, or does it vary each time you load the survey? Are you seeing this happen through the web view of the survey, the mobile app, or both? Are the images you're referencing on the internet or on an internal server?
02-27-2020 12:56 PM

BLOG
I recently came across a scenario where I had to evaluate many fields across a table and, for each record, determine which field had the highest value. At first, I thought this would be easy. It's just a field calculation with some if statements, right? The problem was that I had to evaluate over 50 fields. This means I had to explicitly account for every field I wanted to evaluate. My field expression was long, tedious to build, and completely unusable for any other data that I'd want to do this for. At the same time, I realized that being able to evaluate values across a table is a useful idea that is applicable to all sorts of data.

Sometimes in a collection of data there is a group of fields that represent a series. For example, for each branch or retail location, I might have annual revenue for each year in different fields (e.g. Rev2015, Rev2016, Rev2017, Rev2018, Rev2019, etc.). What if I want to simply evaluate, for each branch, which year had the highest (or lowest) revenue? Or what if I want to determine, for each branch, what the top three years for revenue are? Another scenario is a situation where I may have GeoEnriched many fields to a table. For example, I may have appended dozens of different consumer spending variables. What if I want to determine, for each trade area, which spending variable had the highest (or lowest) values? Or what if I want to rank the top three consumer spending values for each record?

Some of these questions can be answered through custom Field Calculator expressions, but it's a process that quickly becomes very tedious if you're evaluating more than a few fields, and the expressions you create will be very much tied to the data structure. For example, I can't take the expression I built for comparing revenue years and use it for comparing consumer spending variables without carefully modifying the script. And if you need to rank fields and write the results to multiple fields? Field Calculator, at least by itself, won't help you there because it can only write to a single field at a time.

The solution: Python script tools! Now don't worry, I'm not going to walk you through the process of building a Python toolbox, give you a few snippets, and wish you luck. What I'm here to do is share work that I've already done in building a flexible tool that tackles this challenge. In fact, you can download the tool today and test it out yourself…no coding, no configuration needed! Once you've downloaded the toolbox, browse to it in ArcGIS Pro, expand the toolbox, and you'll see two different tools at your disposal.

Evaluate Extremes Across Table - Evaluates each record in a table across a defined set of columns to find the highest or lowest values. These results are written to three new fields in either the input dataset or a new output file. The three new fields describe, for each row, the highest or lowest value in the evaluation, the name of the field that contained that value, and whether there was a tie between the winning field and another field. Here's what the interface looks like:

As an example, I geoenriched 15 consumer spending indexes to drivetime-based trade areas and ran this tool to figure out which index was highest for each trade area. We can also reverse the analysis and identify which spending categories indexed the lowest for each trade area. Another useful feature highlighted here is Tie Detection. If two or more fields tied for the most extreme value, the tool lets you know through a dedicated field.
Rank Values Across Table - Evaluates each record in a table across a defined set of columns to rank the n top or bottom values. These results are written to a number of new fields in either the input dataset or a new output file. The number of new output fields depends on the number of ranks specified. Each rank creates two new fields: for each row, the name of the field containing the ranked value, and the ranked value itself. For example, if 3 ranks are desired, 6 new fields will be created. Here's what the interface looks like:

Working off the same example data we used in the first tool, we can use this ranking tool to show us the top 3 spending categories that indexed highest for each trade area. Conversely, we can also figure out the bottom 3 categories on the same set of data.

All of the documentation for each tool is in the GitHub repo itself, so I won't get into it here. However, it's worth reading through before you start working with the tools. I've built a few niceties into them such as tie detection, alias handling, ranking controls, and support for evaluating multiple field types at the same time. That's right, you don't have to do a bunch of field type conversions to make sure they all match up…that's all handled by the tool as long as the fields are of a numeric type (short, long, float, or double).

Hopefully this will be useful for someone struggling with the same challenge I had. Keep in mind, it's not perfect, and I guarantee that there are bugs in there. If you run into any of them, or have ideas on how to make the tool work better, please submit an issue on GitHub so I can continue to improve on it. Happy calculating!
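As a postscript, if you're curious what the core of the evaluation looks like under the hood, the gist can be expressed in a few lines of arcpy. This is a simplified sketch of the idea, not the toolbox's actual code; the table path and field names are placeholders, and it assumes no null values:

import arcpy

table = r"C:\data\analysis.gdb\trade_areas"      # hypothetical input table
eval_fields = ["Rev2015", "Rev2016", "Rev2017"]  # fields to compare per record

# Fields to hold the winning value and the name of the field it came from
arcpy.AddField_management(table, "MAX_VALUE", "DOUBLE")
arcpy.AddField_management(table, "MAX_FIELD", "TEXT")

with arcpy.da.UpdateCursor(table, eval_fields + ["MAX_VALUE", "MAX_FIELD"]) as cursor:
    for row in cursor:
        values = row[:len(eval_fields)]
        winner = max(values)                     # swap in min() for lowest values
        row[-2] = winner
        row[-1] = eval_fields[values.index(winner)]  # name of the winning field
        cursor.updateRow(row)

The downloadable tools layer tie detection, alias handling, ranking, and mixed numeric types on top of this basic pattern.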
02-25-2020 01:30 PM

POST
Can you reproduce this in the Python window in Pro? In other words, if you attempt to delete a layer that exists within the map using the Python window, does it work?
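For reference, a test in the Python window might look something like this (the layer name is just a placeholder):

import arcpy

aprx = arcpy.mp.ArcGISProject("CURRENT")
m = aprx.listMaps()[0]               # first map in the project
lyr = m.listLayers("MyLayer")[0]     # "MyLayer" stands in for your layer's name
m.removeLayer(lyr)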
11-13-2015 06:18 AM

POST
Hi Allen, though you can use Python to do this, based on what I'm reading, it might not be necessary. You should be able to create this scenario in ModelBuilder. You would add the Buffer tool to your model, and then create a new "Feature Set" variable in the model. The Feature Set variable type is an interactive, user-input type. Then set this new variable as the "Input Features" parameter. So we've established this "Feature Set" parameter as our input geography parameter for the Buffer tool.

Now we'll want to tell the Feature Set parameter what kind of geography to allow. Right-click on the Feature Set parameter > Properties, and import the schema and symbology from another layer or feature class. This is handy: it allows you to control what the symbol looks like as the user drops it, and it also allows you to control what kind of geography they input: points, lines, or polygons.

More than likely, I imagine you'll want to allow the user to adjust the size of the buffer rather than hardcoding it. You can make that a variable in this model by right-clicking the Buffer tool > Make Variable > From Parameter > Distance. Now we want to make sure that both of these new variables are flagged for requiring user input: right-click both of them and make them model parameters. For the sake of argument, go ahead and set the output of the Buffer tool to something like in_memory\out_buffer. Now save this model in a toolbox. Once it's saved, double-click on it to launch it like a tool. You'll now have a tool that allows you to manually drop an input point, specify a buffer size, and it'll create that buffer.

From here, you would publish this as a geoprocessing service and hook it up in Web AppBuilder with the Geoprocessing widget. You'll be able to experience the same interactivity there.
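For what it's worth, if you ever do decide to go the Python route, a script tool equivalent is only a few lines. This is a rough sketch; the parameter indexes depend on how you order the tool's parameters:

import arcpy

# Parameter 0: a Feature Set for interactive input; Parameter 1: a linear unit
in_features = arcpy.GetParameter(0)
distance = arcpy.GetParameterAsText(1)   # e.g. "500 Meters"

out_buffer = r"in_memory\out_buffer"
arcpy.Buffer_analysis(in_features, out_buffer, distance)

# Hand the result back to the tool as a derived output parameter
arcpy.SetParameterAsText(2, out_buffer)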
11-12-2015 10:13 AM

POST
What I'm noticing in your screenshot is that the iterate portion is shaded behind the shapes. This usually means that this portion of the model has been run. If you re-validate, does this shading disappear or remain?
12-20-2013 09:05 AM

POST
I agree with Richard. If it's not related to the issue outlined in the KB, you might check the URL being used by your connection file and make sure it's not using a public-facing URL. If it is, and you lose connection to the internet, it will fail. In a situation where you are still connected to a network, but that network has no access to the internet, the connection should use internal URLs so that it doesn't attempt to leave your local network to reach the ArcGIS Server. In the event that the machine is completely disconnected from a local network and the internet, you'll probably need to use localhost in the connection URL unless you resolve the machine name to 127.0.0.1 (local loopback) in the hosts file (C:\Windows\System32\drivers\etc). Either way, Fiddler would be helpful in showing you what paths are being used to access your server.
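For example, a hosts file entry like this keeps requests for the server name on the local machine (the server name here is hypothetical):

# In C:\Windows\System32\drivers\etc\hosts
127.0.0.1    gisserver.mydomain.com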
12-18-2013 10:23 AM

POST
As a start, I would look into using the Iterate Feature Selection iterator in your model. This will let you iterate through each polygon, and run the same series of processes based on each polygon and any unique selections/objects that are created in that process.
12-18-2013 10:01 AM

POST
You need to revalidate the model after you've run it to reset the iterator. Click the checkmark in the menu in Modelbuilder and then rerun it. You should find that it iterates the values again. This is only an issue when you run the model from Modelbuilder. If you double-click it and run it as a tool, you won't see this as long as you validate the model prior to saving it.
12-18-2013 09:48 AM

POST
I understand. I'll be happy to test this if you can share a file geodatabase with a copy of the data. I'll publish it to AGOL and see if I get the same problem. Are there a lot of features in these services? Keep in mind that the service can only return 1000 features at a time, even in Geoprocessing tasks, so that may be related to why some features don't seem to draw or get processed.
12-18-2013 08:22 AM

POST
Something else you might look into is to use the Map Server Cache Tiling Scheme to Polygons tool in conjunction with the Tiled Labels to Annotation tool. This method essentially subdivides a data frame extent using the same scales as an existing map service cache tiling scheme and creates tiles over a large area, or "supertiles". Since the supertile extent is larger than the actual tiles defined in the scheme, tiles used as input into the Tiled Labels to Annotation tool can convert labels to annotation over a larger area at a time. This process minimizes annotation duplication across tiles. It also allows you to see how the map will label based on the zoom level you are at, so if you are using standard labels, you'll know if they will get cut off and be able to adjust accordingly before waiting for a cache to be created to see the result.
12-18-2013 04:33 AM

POST
Are these hosted feature services shared publicly? If so, can you share the URLs with us so we can test? Do you have your language and regional settings set to anything other than English in your AGOL organization settings? Do you get the same problem when running the analysis from a different browser?
12-18-2013 04:22 AM

POST
Based on your description, it sounds like the Iterate Feature Selection tool would work best for your model.
12-18-2013 04:10 AM