POST
Thanks for the update Jean-Sebastien Lauzon. I've been testing and was able to reproduce this issue; the team is looking into what's going on now. In the meantime, if you create a new layer (the default option) instead of a view result, you should see the correct sum results. Thank you for reporting this! Sarah Ambrose Product Engineer, Esri
04-14-2020 11:53 AM | 0 | 2 | 848

POST
Hi Mike Cuenca, Thanks for the datasets. I've tried it out and I understand what's going on. Currently, the Join Features tool only joins features that have a match; target features that don't have a match aren't included in the result. If you are familiar with ArcGIS Pro or ArcMap, there is a setting to "Keep all Target Features" - this isn't currently an option in Join Features in ArcGIS Online.

So, for an example with your data:
- You have a point layer with 2737 features
- A polygon layer with 3220 features

It looks like there is only ever one point per polygon, but some polygons do not have a point. For example, here are the points (I made the points very big and purple) with the green polygons: You can see that some polygons don't contain a point. When I run Join Features, I only see polygons returned where there are points. This image shows the original points and the Join Features result (blue polygons):

I have submitted an enhancement request to the team, and we'll work to add the option to keep all target features in a future release. In the meantime, I was able to complete the workflow you intended (or I assume you were working towards) by chaining a few tools together:

1. Run Join Features with target layer: polygons, join layer: points, and the spatial relationship "completely contains". In this case I did a one-to-one join.
2. Overlay Layers - Erase. I ran Erase on the original polygon layer and the Join Features result. This returns the original polygons that did not have points.
3. Merge Layers: Merge the Join Features result (polygons where points existed) with the Erase result (polygons where points didn't exist). I removed the Area in Square <Units> field since I didn't need it and it should be recalculated when I run the tool. All other fields automatically matched.

That should result in a layer with all polygons and their joined point values. Field values will be null for the polygons that don't have any intersecting points. Then, when you set the symbology, make sure to check "Draw features with no value" so your map shows the joined polygons but notes there was no point data for those counties.

The other way - though you might end up with some odd field names - is to run Aggregate Points instead of Join Features. In that tool you can specify "Keep areas with no points". Since this tool summarizes point values, you'll need to calculate statistics for the single point value. For example, the "sum of Deaths" will result in the single death value (so if the value was 5 for a point, the polygon will say sum_deaths = 5). It will keep the value you are interested in, and just end up with an unexpected column name. Either workflow should work for this case though. Please let me know if you have any questions! - Sarah
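The difference between the default Join Features behavior and a "keep all target features" join can be sketched in plain Python. This is a toy illustration with made-up attribute dicts and a shared key standing in for the spatial match - not Esri API code:

```python
# Minimal sketch: joining point attributes onto polygons.
# "polygon_id" stands in for the spatial relationship; the real
# Join Features tool matches geometries, not keys.

polygons = [{"id": 1, "name": "A"}, {"id": 2, "name": "B"}, {"id": 3, "name": "C"}]
points = [{"polygon_id": 1, "deaths": 5}, {"polygon_id": 3, "deaths": 2}]

matches = {p["polygon_id"]: p for p in points}

# Default Join Features: only target features with a match are returned.
inner = [{**poly, "deaths": matches[poly["id"]]["deaths"]}
         for poly in polygons if poly["id"] in matches]

# "Keep all Target Features": every polygon is kept; unmatched ones get null.
keep_all = [{**poly,
             "deaths": matches[poly["id"]]["deaths"] if poly["id"] in matches else None}
            for poly in polygons]

print(len(inner))     # 2 - only polygons that contain a point
print(len(keep_all))  # 3 - all polygons, the unmatched one with deaths=None
```

The Erase-then-Merge workflow above reconstructs `keep_all` from `inner` plus the polygons that `inner` dropped.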
04-14-2020 07:31 AM | 0 | 1 | 893

POST
Hi Mike Cuenca, Are you able to share the layers and parameters (a screenshot works) that you are using? If so, can you please share the URLs so I can try to reproduce this? Has this tool worked as expected before, or is this the first time you're running into this? Thanks, Sarah Ambrose Product Engineer, Esri
04-10-2020 10:18 AM | 0 | 0 | 893

POST
This is not yet supported at 10.7.1 or 10.8. When support is added it will be listed in the What's New GeoAnalytics topic here: What's new in ArcGIS GeoAnalytics Server—GeoAnalytics (Windows) Deployment Guide | Documentation for ArcGIS Enterprise
04-03-2020 04:29 AM | 0 | 0 | 752

POST
Hey - The bin size is the area you are aggregating your raw points into. From the documentation: "The distance interval that represents the bin size and units into which the point_layer will be aggregated. The distance interval must be a linear unit." So from the image in the topic, the image on the right would have 36 bins.

The neighborhood size is used to compare a bin to its neighboring bins: "The spatial extent of the analysis neighborhood. This value determines which features are analyzed together to assess local clustering. The neighborhood size must be greater than the bin size." This is similar to how the distance band parameter in the Hot Spot Analysis (Getis-Ord Gi*) tool works. So, with a bit of an adaptation of the Hot Spot Analysis (Getis-Ord Gi*) doc, this should help explain it: each feature (bin!) will be analyzed within the context of neighboring features (bins). Neighboring features inside the specified neighborhood size will receive a weight of one and exert influence on computations for the target feature. Neighboring features outside the critical distance will receive a weight of zero and have no influence on a target feature's computations.

Let me know if that helps clarify it, and I'll modify the GeoAnalytics tool doc to be more clear for the next release. Thanks, Sarah Ambrose GeoAnalytics Team
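That fixed-distance, binary weighting can be sketched in a few lines of Python. This is a toy illustration using bin centroids on a grid, not the actual GeoAnalytics implementation:

```python
import math

# Toy bins on a 1 km grid, identified by centroid coordinates (in km).
bins = [(0, 0), (1, 0), (2, 0), (0, 1), (3, 3)]

def neighbor_weights(target, bins, neighborhood_size):
    """Binary weights: 1 for bins whose centroid falls within the
    neighborhood size of the target bin, 0 otherwise (target excluded)."""
    weights = {}
    for b in bins:
        if b == target:
            continue
        dist = math.dist(target, b)  # Euclidean distance between centroids
        weights[b] = 1 if dist <= neighborhood_size else 0
    return weights

# With a 1.5 km neighborhood, only the directly adjacent bins
# influence the statistic computed for bin (0, 0).
w = neighbor_weights((0, 0), bins, neighborhood_size=1.5)
print(w)  # (1, 0) and (0, 1) get weight 1; (2, 0) and (3, 3) get weight 0
```

A larger neighborhood size simply pulls more bins into the weight-1 set, which is why it must exceed the bin size to include any neighbors at all.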
04-02-2020 06:41 PM | 2 | 3 | 1263

POST
Thanks Zakaria Douiri, GeoAnalytics does not currently support integrated authentication with Hadoop. If this is something you are interested in seeing, please don't hesitate to create an ENH through support. Sarah Ambrose Product Engineer, GeoAnalytics
07-18-2019 12:06 PM | 0 | 0 | 804

POST
Hi Giacomo Joggerst, Lauren Bennett wrote a great blog post on Machine Learning and what is meant by the term. I think this might sort out some of your questions. Hope that's helpful, Sarah Ambrose Product Engineer, GeoAnalytics
07-16-2019 08:58 AM | 1 | 0 | 266

POST
Hi Zakaria Douiri, No, there isn't a way to update the existing layer. Depending on what type of results you are looking for, you could use the Append tool to append a new result to the existing one. Another potential workflow is using the ArcGIS API for Python to automate deletion of the existing layer before rerunning the analysis - you won't be able to maintain your feature service URL or portal item ID, but you could update references with the new ones using the ArcGIS API for Python. We are looking into ways to support updating an existing layer in the future. For this workflow, is it important that the portal item stays the same, the feature service URL, or both? Thanks, Sarah Product Engineer, GeoAnalytics
05-22-2019 11:51 AM | 0 | 1 | 641

POST
Hi Zakaria Douiri, Can you please clarify your question? Are you trying to run a tool with a big data file share as input, and running into issues? Thank you for updating the question. Since your question is different from the original, can you please ask it as a separate question so it gets its own thread. Thanks, Sarah
05-22-2019 11:20 AM | 0 | 1 | 1616

POST
Hi Javier Distefano, To clarify: at 10.6.1 we added support for Kerberos with HDFS. We do not currently support Kerberos with Hive. - Sarah
05-22-2019 08:40 AM | 0 | 0 | 1868

POST
Hi Zakaria Douiri, Can you please turn on debug logging on your GeoAnalytics Server and rerun the tool? The debug messages will be more informative about the issue. Please also double-check that your keytab is up to date and available to all machines in the GeoAnalytics Server site. Thanks, Sarah Product Engineer, GeoAnalytics Server
05-09-2019 12:28 PM | 0 | 1 | 1868

POST
Hi Mody Buchbinder, You'll want to turn on debug logging to see the cause of the invalid input; that will help figure out whether there is an issue with your query. The format of the input should look like this: {"url":"https://machine.domain.com/gax_web_adaptor/rest/services/DataStoreCatalogs/bigDataFileShares_pyTest/BigDataCatalogServer/californiapoints", "filter":"fieldname > 10"} Thanks, Sarah
04-25-2019 05:55 AM | 0 | 0 | 404

POST
Hi Mody Buchbinder, Currently there isn't a way to set a filter on big data file shares when running tools through ArcGIS Pro (ENH-000118129) or your portal Map Viewer (ENH-000116581). But, filters are supported at the GeoAnalytics level by supplying them through REST - so you can definitely apply filters to your analysis that way. As you mentioned, Copy to Data Store in the Pro UI also exposes the filter parameter (and Find Similar Locations). For example: {"url" : "https://myportal.domain.com/server/rest/services/Hosted/hurricaneTrack/FeatureServer/0", "filter": "Month = 'September'"} Sarah Ambrose GeoAnalytics Product Engineer
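Since the REST input shown above is just a JSON object with `url` and `filter` keys, it can be built with a standard JSON library rather than string concatenation, which avoids quoting mistakes in the where clause. A minimal sketch - the URL and where clause are the placeholders from the example, not live services:

```python
import json

def build_gax_input(layer_url, where=None):
    """Build the JSON string a GeoAnalytics REST endpoint accepts as a
    layer input, optionally carrying a filter (SQL where clause)."""
    payload = {"url": layer_url}
    if where:
        payload["filter"] = where
    return json.dumps(payload)

input_param = build_gax_input(
    "https://myportal.domain.com/server/rest/services/Hosted/hurricaneTrack/FeatureServer/0",
    where="Month = 'September'",
)
print(input_param)
```

`json.dumps` handles the embedded single quotes in `Month = 'September'` correctly, so the resulting string can be pasted straight into the REST parameter.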
04-10-2019 06:39 AM | 0 | 2 | 404

POST
Thanks Josh Hevenor, As mentioned in the other thread, the Gen 2 Data Lake isn't supported. If you moved this over to a blob store with a similar structure, you would register the blob container 'lakefilsystem' and not include a folder. The 4 datasets - AIS, flight_delays, NYCTaxi and reference - would be recognized and registered in your big data file share. The NYCTaxi dataset would include every folder below it as a single dataset. If you instead wanted a separate dataset for each year in NYCTaxi (2017, 2018, 2019, etc.), you would set the folder as "NYCTaxi", and every subfolder at the next "level" (what you see when you browse into NYCTaxi) would then be represented as an individual dataset. - Sarah
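The registration behavior described above - each immediate subfolder of the registered location becoming one dataset - can be illustrated with a small, self-contained sketch. This uses plain Python on a local temp directory as a stand-in for the blob container; it is not the actual big data file share registration code:

```python
import os
import tempfile

def recognized_datasets(registered_path):
    """Each immediate subfolder of the registered location becomes one
    dataset in the big data file share; everything nested below a
    dataset folder is read as part of that single dataset."""
    return sorted(
        name for name in os.listdir(registered_path)
        if os.path.isdir(os.path.join(registered_path, name))
    )

# Mimic the container layout from the thread.
root = tempfile.mkdtemp()
for sub in ["AIS", "flight_delays", "NYCTaxi/2018/01", "NYCTaxi/2019/01", "reference"]:
    os.makedirs(os.path.join(root, sub))

# Registering the container root: 4 datasets, with the year folders
# folded into the single NYCTaxi dataset.
print(recognized_datasets(root))
# Registering the NYCTaxi folder instead: one dataset per year.
print(recognized_datasets(os.path.join(root, "NYCTaxi")))
```

Moving the registered path one level down is exactly the "set the folder as NYCTaxi" step described above: the same files, but a different level of the hierarchy is treated as the dataset boundary.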
04-04-2019 12:10 PM | 1 | 1 | 1063
Title | Kudos | Posted
---|---|---
 | 1 | 07-18-2023 04:44 AM
 | 1 | 06-01-2021 07:01 AM
 | 1 | 07-29-2020 03:36 PM
 | 1 | 04-13-2015 10:32 AM
 | 2 | 05-29-2018 01:19 PM
Online Status | Offline
Date Last Visited | 05-14-2024 10:19 PM