DOC
@DaltonBloom you may need to unzip (extract) the files once downloaded and rename the extracted directory to something other than 'Update_Aliases.tbx'. The toolbox also didn't appear for me, but once I renamed the directory it was in, it appeared and would add to ArcGIS Pro. @Jordan I also experienced the 'Error: iterator should return strings, not bytes'. Inspecting the toolbox script at the identified line, it appears the CSV is opened in binary mode. If you change with open(in_csv, 'rb') as csvfile: to with open(in_csv) as csvfile: it may work; it started working for me after that change. I am updating aliases in a PostgreSQL geodatabase on a table with over 1 million rows, and the tool is a bit slow at setting them; I could possibly have renamed them manually quicker, but using the tool obviously frees me up to do other things. Setting the aliases for 92 attributes took 57 minutes (on my Windows Surface Book 2). I tried again with an empty table and it wasn't any quicker. Thanks to @MatthewLeonard for this solution. Having explored FME (long-winded) and the Esri user interface options (non-existent), this script is the best way I've come across to batch update many attribute aliases in one go, which is incredibly helpful.
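To illustrate the fix above: on Python 3, csv.reader requires a file opened in text mode, and opening with 'rb' is exactly what produces "iterator should return strings, not bytes". A minimal self-contained sketch (the file name and field/alias values here are illustrative, not from the real toolbox):

```python
import csv
import os
import tempfile

# Write a tiny illustrative CSV of field names and aliases
# (the real toolbox reads a user-supplied in_csv).
in_csv = os.path.join(tempfile.mkdtemp(), "aliases.csv")
with open(in_csv, "w", newline="") as f:
    f.write("field_name,alias\npop_2020,Population (2020)\n")

# Python 3's csv.reader needs a text-mode file object; opening with
# 'rb' yields bytes and raises "iterator should return strings,
# not bytes". So open in text mode:
with open(in_csv, newline="") as csvfile:  # not open(in_csv, 'rb')
    rows = list(csv.reader(csvfile))

print(rows)  # [['field_name', 'alias'], ['pop_2020', 'Population (2020)']]
```

The 'rb' form was the Python 2 idiom; under Python 3 (which ArcGIS Pro uses) text mode with newline="" is the documented way to feed csv.reader.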
a month ago

POST
Glad I found this thread. Our organization seems to have encountered the same issue as the OP and others have posted: we've discovered that the Join Field tool in arcpy corrupts the input feature class.

For background, we had a script on a server that ran nightly via Task Scheduler. It would take a polygon feature class and join it to records from a standalone table to populate a feature class on our SDE and a map service (note we are running 10.4.1 with SQL Server as the RDBMS). The script had been in place for at least a couple of years, running smoothly; however, sometime in early November of this year it began failing during the join step. After reviewing the error log we had set up, a colleague and I attempted to troubleshoot. We spent a good amount of time trying to figure out why it was failing but came up dry. It was driving us crazy because nothing had changed on the code end; it just stopped working. Looking at the intermediate output data (we had a scratch GDB set up to see what was going on), we encountered an error on the staging join feature class. We couldn't open it in ArcMap/Pro or view its properties, nor could arcpy delete it: it had become corrupt. Personally, I suspected security updates installed on the server where the script resides, but I don't really have a way to prove that beyond matching the timestamp of when the script started failing against when the updates were installed.

We needed a workaround, so we modified the code to use Add Join and the in_memory workspace in place of Join Field. While this did add some lines of code to our script, overall it has been good: we went from a task that took an hour and 15 minutes to one that completes in roughly 10 minutes. So at least something positive came out of the issues we were having.

Overall, it was just highly annoying when you expect a tool to work, and it does for an extended period of time, but then fails suddenly without any rhyme or reason.
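The speedup described above comes from doing the join in one pass over in-memory data instead of per-row field joins. A pure-Python sketch of that idea (field and key names are illustrative, not from the actual script; in arcpy the same shape is typically achieved with an arcpy.da.SearchCursor over the standalone table and an arcpy.da.UpdateCursor over the feature class, or with Add Join on in-memory layers):

```python
# Illustrative stand-ins for the standalone table and the polygon
# feature class; "parcel_id" plays the role of the join field.
standalone_table = [
    {"parcel_id": "A1", "owner": "Smith"},
    {"parcel_id": "B2", "owner": "Jones"},
]
polygons = [
    {"parcel_id": "A1", "owner": None},
    {"parcel_id": "B2", "owner": None},
    {"parcel_id": "C3", "owner": None},  # no match in the table
]

# Step 1: one pass over the join table builds a dict keyed on the
# join field -- this is the "in-memory" part of the workaround.
lookup = {row["parcel_id"]: row["owner"] for row in standalone_table}

# Step 2: one pass over the target features copies the joined value.
for feat in polygons:
    feat["owner"] = lookup.get(feat["parcel_id"])  # None when unmatched

print(polygons)
```

Two linear passes replace what Join Field effectively does as repeated per-row work against the data store, which is why the nightly task dropped from 75 minutes to about 10.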
12-07-2020 08:51 AM

IDEA
Yes, this would be very helpful! In my case, I'm trying to delete duplicate records based on identical values in two fields, but I want to specify keeping the one with the lowest value in a third field. I believe I have found a roundabout way to do this: first, use the Sort tool to produce a new copy of the dataset, sorted by the first two fields and then by the third. When I then run Delete Identical, the first duplicate record encountered will be the one with the lowest value in the third field, so that is the one that remains. But I want to be sure this works. Ideally, I would be able to specify this behavior as an additional parameter in the Delete Identical tool.
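The sort-then-dedupe workaround above can be sketched in plain Python to check the logic (field names "a", "b", "c" are illustrative; "keep the first record seen" mimics Delete Identical retaining the first duplicate it encounters):

```python
# Records duplicated on the pair (a, b); c is the tiebreaker field.
records = [
    {"a": 1, "b": "x", "c": 9},
    {"a": 1, "b": "x", "c": 3},  # same (a, b) pair, lower c -> keep this one
    {"a": 2, "b": "y", "c": 5},
]

# Step 1 (the Sort tool): sort by the two identifying fields,
# then by the tiebreaker, ascending.
records.sort(key=lambda r: (r["a"], r["b"], r["c"]))

# Step 2 (Delete Identical): keep only the first record seen for each
# (a, b) pair -- after the sort, that is the one with the lowest c.
seen = set()
kept = []
for r in records:
    key = (r["a"], r["b"])
    if key not in seen:
        seen.add(key)
        kept.append(r)

print(kept)  # [{'a': 1, 'b': 'x', 'c': 3}, {'a': 2, 'b': 'y', 'c': 5}]
```

The approach is sound as long as the dedupe step genuinely retains the first row in storage order, which is the assumption the original post wants confirmed for Delete Identical.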
07-22-2020 11:03 AM

POST
Esri's documentation doesn't address the question, likely because the behavior can vary by back-end data source or DBMS. Most, if not all, DBMSs guarantee only the result-set order stated by an ORDER BY clause. Any ordering outside of an ORDER BY clause may change between executions. That doesn't mean the ordering between executions will change, just that it may change if the fields/columns aren't part of an ORDER BY clause. The underlying reason is that a query optimizer is free to choose a different execution plan every time it executes a query, and fields/columns that aren't part of an ORDER BY clause may be processed or retrieved in a different order under different execution plans.
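A small demonstration of the practical consequence, using SQLite as a stand-in DBMS (table and column names are made up): only the columns named in ORDER BY have a guaranteed order, so to make row order fully repeatable you must order by enough columns to break every tie.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parcels (zone TEXT, id INTEGER)")
conn.executemany(
    "INSERT INTO parcels VALUES (?, ?)",
    [("B", 2), ("A", 3), ("B", 1), ("A", 4)],
)

# "ORDER BY zone" alone guarantees only the zone order; the order of
# ids *within* a zone is up to the execution plan and may differ
# between runs or engines. Adding id as a tiebreaker pins it down.
rows = conn.execute(
    "SELECT zone, id FROM parcels ORDER BY zone, id"
).fetchall()

print(rows)  # [('A', 3), ('A', 4), ('B', 1), ('B', 2)]
```

The same caution applies whether the back end is SQLite, SQL Server, Oracle, or PostgreSQL: any row order you rely on should be spelled out in the ORDER BY clause.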
07-22-2020 12:35 PM

POST
When working with definition queries, ArcGIS clients simply pass the conditions through to the data store tier, e.g., file geodatabase, SQL Server, Oracle, PostgreSQL, etc. As far as I know, Esri does not place a limit, per se, on the string containing a definition query, so the answer really depends on which back-end data store you are working with in ArcGIS. Oracle is commonly believed to limit IN clauses to 1,000 items, and Microsoft uses the open-ended language "many thousands of values." I don't think Esri has formally stated the limit for file geodatabases, but it is quite large: thousands, if not tens of thousands. Beyond whatever limit you may or may not hit in the future, large IN clauses are not very efficient in terms of execution plans. If you are working with thousands of values in an IN clause, there is likely a much better-performing approach to selecting/filtering the records.
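One common way around a back end's IN-list limit, sketched here with SQLite (table name, chunk size, and values are illustrative): split the value list into chunks below the limit, issue one parameterized query per chunk, and combine the results.

```python
import sqlite3

CHUNK = 3  # tiny on purpose; real code might use ~900 to stay under Oracle's 1,000

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (oid INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO features VALUES (?)",
                 [(i,) for i in range(10)])

wanted = [1, 2, 5, 7, 8, 9, 42]  # 42 doesn't exist in the table
found = []
for i in range(0, len(wanted), CHUNK):
    chunk = wanted[i:i + CHUNK]
    # Build exactly as many '?' placeholders as this chunk has values,
    # so the IN list never exceeds the chunk size.
    placeholders = ",".join("?" * len(chunk))
    sql = f"SELECT oid FROM features WHERE oid IN ({placeholders})"
    found.extend(oid for (oid,) in conn.execute(sql, chunk))

print(sorted(found))  # [1, 2, 5, 7, 8, 9]
```

For very large key sets, loading the keys into a temporary table and joining against it usually gives the optimizer a much better plan than any IN clause, chunked or not.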
01-03-2020 03:35 PM

POST
OK, I solved this. I previously had the Label Buffer for the states layer set to 30%. I don't know why this would matter in this case, since there aren't any other labels on this map that would be near each other. But for some reason, after I changed the Label Buffer to 0%, the labels shifted towards the center of the visible portion of their respective polygons on each page, pretty much how I wanted.
11-22-2019 07:54 AM

POST
This was just submitted. You can add your vote if this is something that is impacting you in Pro.
12-13-2019 02:15 PM

POST
Thanks! I haven't tried implementing this yet, but it looks like a good idea.
03-08-2018 08:39 AM

POST
Bringing in the Network Analyst place, since this question goes into that area as well. You could try using the Vehicle Routing Problem solver and set a MaxTotalDistance on the route (Vehicle routing problem analysis—Help | ArcGIS Desktop). That will put a cap on the analysis, but not a minimum. You could also try generating a service area where the break value is the fixed length you want, generate lines with it, and make sure you check the Accumulation tab to accumulate values on each line. Then you can look at the accumulation on each line to pick out a route with the right length (Service area analysis—Help | ArcGIS Desktop). Maybe use a route from the facility to one of the end points to help. Hope one of those suggestions helps!
05-08-2017 03:02 PM