POST
It sounds like you want to check whether a map is open as a view so you can work with it if it is, open it if it is not, and then add layers to a specific map. Is that correct? There's a post here indicating that there isn't a way to tell whether a map is open in a view, and an ESRI employee suggests submitting an idea to expose that property of a map. However, even without knowing whether the view is open, you can explicitly get a reference to each map you want to interact with through the aprx object, which might meet your need. Note that listMaps returns a list, so check whether anything matched:

aprx = arcpy.mp.ArcGISProject("CURRENT")

maps = aprx.listMaps("my first map")
if not maps:
    print("Could not find map: my first map")
    # end or raise error here
map1 = maps[0]

maps = aprx.listMaps("my second map")
if not maps:
    print("Could not find map: my second map")
    # end or raise error here
map2 = maps[0]
Each map object could then be used to add layers as you wish.
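For example (a minimal sketch; the data paths below are placeholders), a layer could be added to each map with addDataFromPath once the references above are in hand:

map1.addDataFromPath(r"C:\data\roads.shp")    # hypothetical shapefile path
map2.addDataFromPath(r"C:\data\parcels.shp")  # hypothetical shapefile path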
Posted 01-05-2023 09:14 AM

POST
Yes, I would recommend running these tools through a Python script. You can also call OGR2OGR through Python using the subprocess module, as described here.
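For example, a minimal sketch of that subprocess approach, assuming ogr2ogr is installed and on the PATH (file names are placeholders):

import subprocess

# Shell out to ogr2ogr to convert a shapefile to GeoJSON.
subprocess.run(
    ["ogr2ogr", "-f", "GeoJSON", "output.geojson", "input.shp"],
    check=True,  # raise if ogr2ogr exits with an error
)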
Posted 01-04-2023 08:26 AM

POST
Do you have ESRI Desktop software? If so, you can use the Feature Class to Feature Class tool to convert the data from the feature service to a shapefile first. Then you can use OGR2OGR to convert the shapefile to GeoJSON, which you should then be able to load into your Postgres database.
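If you end up scripting that first step, a minimal sketch with arcpy might look like this (the service URL and output locations are placeholders):

import arcpy

# Export the feature service layer to a shapefile.
arcpy.conversion.FeatureClassToFeatureClass(
    "https://services.arcgis.com/<org>/arcgis/rest/services/MyService/FeatureServer/0",  # placeholder URL
    r"C:\data\export",      # placeholder output folder
    "exported_features",    # output shapefile name
)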
Posted 01-03-2023 02:56 PM

POST
Yes, there is. The ArcGIS Online Assistant has tools to do this, as well as to view and modify JSON. Hope that helps.
Posted 01-03-2023 02:48 PM

POST
You might also use the Divide a Polyline functionality to chop your line into 1 m chunks and then generate points at each midpoint along the line. Or you could use the Generate Points Along Lines tool, which would be faster. You could then run Extract Values to Points against the DEM, and the attribute table of your point layer would be your table.
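As a rough sketch of the Generate Points Along Lines approach (layer and raster names are placeholders; Extract Values to Points requires the Spatial Analyst extension):

import arcpy
from arcpy.sa import ExtractValuesToPoints

arcpy.CheckOutExtension("Spatial")

# Create a point every 1 meter along the line, then attach DEM values to each point.
arcpy.management.GeneratePointsAlongLines("my_line", "line_points_1m", "DISTANCE", "1 Meters")
ExtractValuesToPoints("line_points_1m", "my_dem", "line_points_elev")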
Posted 01-03-2023 08:59 AM

POST
I don't think it's saying that STATE_ID is the column that's missing. The error message you posted says "no such column: CatTotal." So is the CatTotal column missing, or did its name change, by chance?
Posted 01-03-2023 08:53 AM

IDEA
@JohannesLindner, good to hear that it's of value to someone else. I am able to do quite a bit with SQL; however, I think this would enable similar things on the web that we cannot currently do. It might also be good to move away from doing some of the things I've been doing in SQL, considering that ESRI now stores complex geometries such as arcs and multipatches in a separate column called gdb_geomattr_data, which makes SQL query layers a bit more cumbersome since the SHAPE column may only contain a simplified version of the actual feature. So having Arcade layers in Pro and on the web would be beneficial.
Posted 01-03-2023 08:44 AM

IDEA
In the desktop software, we have the Make Query Layer tool, which allows us to perform all sorts of dynamic joining and filtering of geodatabase data to visualize data from multiple feature classes or tables without having to duplicate the data storage. I would find it incredibly handy to have the same capability within web maps using Arcade FeatureSets. If we can define a FeatureSet in popup and label expressions and use it to dynamically access data from multiple layers, could we take it one more step and give a map creator the option to "Add Arcade Layer"? I'm thinking there would need to be requirements that only one spatial field be included in the output and that all the geometries in the final spatial field have consistent coordinate systems, etc. But I can see great benefit in being able to add a layer to a map that, for example, combines features from multiple point feature service layers that all share a common status needing attention, so they could be shown as a single layer in a web map. I can think of many other scenarios where this would be useful, and I would say there's justification for it for all the same reasons the Make Query Layer tool exists.
Posted 12-14-2022 08:49 AM

POST
@davidchamberlain2, there are some other things I would check in your situation:

- Projections: Do all of the layers in the map have the same projection? If not, rendering many features during an export can take much longer than normal. Using ESRI basemaps in Web Mercator underneath local data in a local projection is a common cause of this, especially when you change the map to the local projection; drawing the basemaps in anything other than their native Web Mercator really slows things down.
- Complex label classes: When there are many label classes on layers with tens of thousands of features, I've seen this cause really slow draw times.
- Joins and Relates: If any of your layers have joined or related data that use unindexed join fields, this can definitely cause slow export times (see the sketch below for adding an index).
- SDE database tuning: Rebuild indexes and statistics (SQL Server) or the equivalent in your DBMS.

Checking all of these is bound to give you some improvement.
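For the join-field point above, a minimal sketch of adding an attribute index with arcpy (the table and field names are placeholders):

import arcpy

# Index the field used in the join so the join doesn't have to scan the table.
arcpy.management.AddIndex(
    r"C:\data\mygdb.gdb\parcels",  # placeholder table
    ["PARCEL_ID"],                 # placeholder join field
    "idx_parcel_id",
    "NON_UNIQUE",
    "ASCENDING",
)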
Posted 11-29-2022 07:33 AM

POST
I guess I read the original question as "Can these two buffers be saved in the same layer?" rather than "Can these two buffers be saved in the point layer?" In that case, you can create a model with two Buffer tools chained together, one set to buffer at 500 meters and one at 1,000 meters. Set the input to the first buffer as a model parameter, run the second buffer off of the result of the first buffer, then add the Merge tool to combine the two buffers into one layer and set the output feature class of the Merge as a model parameter as well. Then follow the instructions in the documentation for how to publish a geoprocessing service. Hope that helps.
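For illustration only, a rough script equivalent of the model described above (feature class names are placeholders):

import arcpy

# Buffer the input at 500 m, chain the second buffer off that result, then merge both.
in_features = "input_points"  # exposed as a model parameter in the model version
buf_500 = arcpy.analysis.Buffer(in_features, "buffer_500m", "500 Meters")
buf_1000 = arcpy.analysis.Buffer(buf_500, "buffer_1000m", "1000 Meters")
arcpy.management.Merge([buf_500, buf_1000], "buffers_merged")  # exposed as the output parameter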
Posted 11-29-2022 07:05 AM

POST
@ZacharyUhlmann1, I have not received any additional info. My initial concern was that ESRI indicates the software will generally project on the fly for data that does not match the basemap, but cautions: "Although it is possible to edit data that is in a different coordinate system from the data frame, when high levels of accuracy are critical, it is better to project the data to a common coordinate system before editing." So I was trying to determine how much error could be introduced by not heeding that caution. I have since discovered a simpler way to eliminate the project-on-the-fly step: I can use ArcGIS Pro to generate a vector tile package with very simple reference data in the projection I'd like to collect in, share it to my ArcGIS Online organization, and use that as the basemap. This way, I'm only doing one projection/transformation, from the coordinate system of my NTRIP provider to the coordinate system of the map, and that method is known. I still don't know which on-the-fly transformation method ESRI uses under the hood when you use one of ESRI's basemaps. I'd love to just use theirs, since they are wonderful, and then I could simply report the projection/transformation methodology. For now, though, I'll have to use this method.
Posted 08-11-2022 12:30 PM

POST
I'm trying to understand what's going on under the hood with projections and transformations when I use Field Maps for high-accuracy data collection. I've read through this article in detail, with the lure of being able to walk away from post-processing workflows by collecting high-accuracy data using external Bluetooth GNSS receivers and mobile devices. I've noticed that I can publish a hosted FeatureLayer in a variety of coordinate systems and enable it for editing so that I can add it to a map used for data collection in Field Maps. However, when setting up a location profile in the settings, the documentation indicates that the map coordinate system is determined by the basemap. That leads to my first question: if my layer is in a projection other than that of the basemap, and my NTRIP provider is in yet another coordinate system, is my data being projected/transformed twice?

Specific example: say I have a hosted FeatureLayer that was published from Pro in GCS NAD 1983 2011 (EPSG:6318) and then added to a map with a basemap in WGS 1984 Web Mercator Auxiliary Sphere (EPSG:3857). Then I set up a location profile in Field Maps using EPSG:6318 as the GNSS coordinate system (the one used by my NTRIP correction service, which I don't have the ability to change), EPSG:3857 as the map coordinate system (because that's what the ESRI basemaps are in), and ~WGS_1984_(ITRF08)_To_NAD_1983_2011 as the horizontal datum transformation. In my understanding, the following steps would occur when I collect a point:

1. The corrected location is received from the receiver in EPSG:6318.
2. Field Maps uses the transformation supplied in the location profile to project the point to the basemap's coordinate system (EPSG:3857).
3. Field Maps sends the point to the FeatureLayer service.
4. ArcGIS Online receives the point in EPSG:3857 and detects that it does not match the coordinate system of the FeatureLayer.

This is the point at which I'm unsure of what's happening, which brings me to my second question: does ArcGIS Online use the top transformation from the list to project the point back to the coordinate system of the FeatureLayer (EPSG:6318), or does it use some other default? Without knowing what's going on under the hood, I can't truly report my accuracy or the method by which my data was projected. Thanks in advance for supplementing the available documentation so I can make this switch to more efficient data collection without losing some of the important details.
Posted 07-20-2022 10:46 AM

POST
Hi @Anonymous User, I don't know why I didn't pick up on this before, but of course you want this to happen regularly. The solution provided here works whenever you go in and calculate the field, so calculating a field this way is a "one-and-done" operation; it does not set up an automatic default value on the field. As new records are added or information changes on a record, you'd have to calculate the field again to reflect the changes. I don't think you'll want to do that by hand regularly, but you could set up a minimal Python script to run once or twice a day that calculates the field for you: https://pro.arcgis.com/en/pro-app/2.8/tool-reference/data-management/calculate-field.htm.
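A minimal sketch of such a scheduled script (the feature class path, field name, and expression are placeholders; swap in the expression from the earlier answer):

import arcpy

# Recalculate the field; schedule this script to run once or twice a day.
arcpy.management.CalculateField(
    r"C:\data\mygdb.gdb\my_feature_class",  # placeholder feature class
    "my_field",                             # placeholder field name
    "!SOURCE_FIELD!",                       # placeholder expression
    "PYTHON3",
)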
Posted 02-18-2022 07:12 AM

POST
Yep, don't go off of mine ;). That was T-SQL for SQL Server. I didn't realize that window only supports a subset of SQL.
Posted 01-14-2022 10:31 AM

POST
Oh, OK. You might be able to pull the feature class into Pro and perform a field calculation on it using the previous Python, but I think @jcarlson had the right idea then. The only catch is that if any of the values are NULL, you would get a NULL result, so you could use this to treat empty strings as NULL and add 1 only when a value is present:

IIF(NULLIF(complaint1, '') IS NULL, 0, 1) +
IIF(NULLIF(complaint2, '') IS NULL, 0, 1) +
IIF(NULLIF(complaint3, '') IS NULL, 0, 1) +
IIF(NULLIF(complaint4, '') IS NULL, 0, 1) +
IIF(NULLIF(complaint5, '') IS NULL, 0, 1) +
IIF(NULLIF(complaint6, '') IS NULL, 0, 1) +
IIF(NULLIF(complaint7, '') IS NULL, 0, 1) +
IIF(NULLIF(complaint8, '') IS NULL, 0, 1) +
IIF(NULLIF(complaint9, '') IS NULL, 0, 1) +
IIF(NULLIF(complaint10, '') IS NULL, 0, 1)
Posted 01-14-2022 10:16 AM