POST
Not sure what the issue is - maybe just monkey with the values. For example, in the interactive window, try inserting some values like updateRows.updateRow(("monkey", 1, datetime.date(2017,2,5))) and see if they take. I know that with the da cursors, dates are converted on the fly to Python datetime objects. If you interrogate the values in the dictionary, are they all of the anticipated data type and order?
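If it helps, here is a minimal, arcpy-free sketch of that kind of type interrogation (the PIN keys, values, and field order below are made up for illustration):

```python
import datetime

# Hypothetical lookup dictionary as it might come back from a da.SearchCursor,
# keyed on the join field with the remaining field values in a tuple
joinTblDict = {
    "1001": ("monkey", 1, datetime.date(2017, 2, 5)),
    "1002": ("badger", 2, datetime.date(2017, 3, 9)),
}

# Expected type for each position in the value tuple
expected = (str, int, datetime.date)

for key, values in sorted(joinTblDict.items()):
    ok = all(isinstance(v, t) for v, t in zip(values, expected))
    print(key, [type(v).__name__ for v in values], "OK" if ok else "MISMATCH")
```

If any row prints MISMATCH, the update cursor is almost certainly being fed a value of the wrong type or in the wrong position.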
Posted 02-05-2018 03:39 PM

POST
I assume the fields are added to the mainTbl and the dictionary gets populated with the correct values, right? I don't see anything wrong in the update cursor, but... Are the "PIN_ID" and "PIN_Num" field types the same ("123" vs 123 vs 123.0)?
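One quick way to guard against that kind of mismatch, sketched here with a made-up helper name: coerce both sides of the join to a common key representation before comparing.

```python
def normalize_key(value):
    # Make "123", 123, and 123.0 all hash to the same dictionary key
    if isinstance(value, float) and value.is_integer():
        value = int(value)
    return str(value).strip()

# The three representations now collide, as desired for a join key
print(normalize_key("123"), normalize_key(123), normalize_key(123.0))
```

You would apply the same function when building the dictionary and when looking up each row in the update cursor.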
Posted 02-05-2018 02:18 PM

POST
Here's a general example of the dictionary/cursor method. Could certainly be made more elegant depending on what you are trying to do....

import arcpy

mainTbl = r"C:\temp\blah.shp"
joinTbl = r"C:\temp\lookup.dbf"
fieldsToJoinList = ["FIELD1", "FIELD2", "FIELD3"]
joinField = "COMMON_ID"

arcpy.AddField_management(mainTbl, "FIELD1", "BLAH")
arcpy.AddField_management(mainTbl, "FIELD2", "BLAH")
arcpy.AddField_management(mainTbl, "FIELD3", "BLAH")

# Key on COMMON_ID; the value tuple holds the three join field values (indexes 0-2)
joinTblDict = {r[0]: r[1:] for r in arcpy.da.SearchCursor(joinTbl, [joinField] + fieldsToJoinList)}

updateRows = arcpy.da.UpdateCursor(mainTbl, [joinField] + fieldsToJoinList)
for updateRow in updateRows:
    if updateRow[0] in joinTblDict:
        updateRow[1] = joinTblDict[updateRow[0]][0]
        updateRow[2] = joinTblDict[updateRow[0]][1]
        updateRow[3] = joinTblDict[updateRow[0]][2]
        updateRows.updateRow(updateRow)
del updateRow, updateRows
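For anyone reading along without ArcGIS handy, the join logic itself (minus the cursors) boils down to this pure-Python pattern; the IDs and field values below are made up stand-ins for the two tables:

```python
# joinTbl rows: COMMON_ID followed by FIELD1..FIELD3
join_rows = [
    ("A1", 10, "oak", 3.5),
    ("B2", 20, "fir", 7.1),
]

# mainTbl rows: COMMON_ID plus three empty slots to fill
main_rows = [
    ["A1", None, None, None],
    ["C3", None, None, None],  # no match in the lookup; left untouched
]

# Same dictionary comprehension as above: key on the ID, keep the rest
join_dict = {r[0]: r[1:] for r in join_rows}

for row in main_rows:
    if row[0] in join_dict:
        row[1:] = join_dict[row[0]]  # splice the looked-up values in

print(main_rows)  # -> [['A1', 10, 'oak', 3.5], ['C3', None, None, None]]
```

The dictionary lookup is O(1) per row, which is why this beats a repeated table join for large datasets.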
Posted 02-05-2018 09:49 AM

POST
Use a dictionary and update cursor (which will be much faster than the classic join/calc or join/export)
Posted 02-02-2018 05:03 PM

IDEA
FYI: Here is an ugly copy/paste of some documentation for a Python AddIn that we (WA State Dept. of Natural Resources, Forest Resources Division) developed internally for our Timber Sales Program staff (see below; the pictures got lost, but you get the point). The Python AddIn basically assists in the automation of standard high-quality map products that are used by our customer base (private companies that bid on the timber resources we put up for auction).

Prior to the Python AddIn: Because we lacked the .NET/ArcObjects skills internally in the business group (and our IT Division refused to assist us at the time), we had to contract this work to a rather expensive and difficult-to-work-with private consulting firm, which built an ArcObjects .NET toolbar that did basically the same thing. The old ArcObjects tool was extremely buggy and very difficult to maintain. Now, the new Python AddIn tool is easily maintained and upgraded by very competent internal GIS Analyst staff who are in direct communication with our timber sales business staff. Talk about streamlining and silo busting - the development of our "SUMA" Python AddIn tool was a smashing success, and we couldn't have done it without ArcGIS support for Python AddIns.

Honestly, I (and many other users) find it maddening that ESRI drops support for certain functionality on a seemingly regular basis without much care as to how it impacts their user base. I recognize that porting Python AddIn support to Pro is a bit of work, but really, it is entirely doable! C'mon, ESRI, your customers need you!

State Uplands Mapping Application (SUMA) Toolbar 2.1.0 Quickstart Guide
Forest Resources, Forest Informatics
12/20/2016

Contents
- Scope
- General steps to create timber sale map documents with the new tool
- Log in to the Citrix Environment
- Create New Project
- Edit Data
- Using the Dynamic Legend
- Navigate Map Products
- Define Pages for Output
  - Page Selector
  - Import Pages
- View Pages
- Zoom To
- Reorder/Delete Pages
- Create Final Output

Scope

The State Uplands Mapping Application (SUMA) Toolbar is an ArcGIS Desktop addin developed by the DNR Forest Resources Division, Forest Informatics Section. It provides tools, functions, and standardized map products to assist DNR staff in duties associated with creating Timber Sale map packets. This version of the addin was developed using the Python programming language and the Python AddIn functionality of ArcGIS 10.4.1. In this version of SUMA, our goal is to leverage out-of-the-box functionality of ArcMap, automate processes when possible, and rely on users' skills working with the base software to more efficiently create SUMA map products.

General steps to create timber sale map documents with the new tool

1. Log in to Citrix
2. Add the SUMA toolbar (if not checked on by default)
3. Create a new project
4. Edit data (and dynamic legend)
5. Navigate map products
6. Define map pages for output
7. View page definitions
8. Reorder or delete pages
9. Create final output

1 Log in to the Citrix Environment

- Log in to Citrix: http://citrixstorefront/Citrix/DNRXAOLY101-IWeb/
- Start Basic ArcMap 10.4.1

2 Create New Project

The new version of SUMA will create all map products in a directory that you specify. It copies data from sources on the network and creates a local file geodatabase to hold user data. The local geodatabase also stores a grid feature that is used to hold map page definitions (page extents). At the time of project creation, all map products are copied into the user-specified location.

- Select 'SUMA Project > New Project'
- Select the Region where your timber sale is located. (This value is used to filter Timber Sales.)
- Select the timber sale name from the drop-down menu. (This value is used to populate the agreement number.)
- The agreement number (if it exists) is displayed in the next field. It is for display purposes only.
- In the 'Output Folder' field, enter the path to the network location where your region's timber sale map projects are saved. A folder will be created in this directory with the naming convention 'suma_<sale_name>', where the sale name is stripped of special characters, converted to lowercase, and spaces are replaced with underscores. All map products and a file geodatabase will be created in this location.
- From the 'Open map' drop-down menu, select the mxd that you would like to open when the tool completes.
- Optional: Browse to a shapefile or feature class that defines your timber sale. This will be used in place of spatial data in P&T.
- Optional: Provide a custom distance used when clipping corporate roads for use in your Haul Route feature class. This will be useful if your sale is greater than 1 mile away from the major road that will be used to access the sale area. You will need to provide a distance value and unit of measure. If this is left blank, a default of 1 mile will be used.

Note: When the tool completes, it will attempt to open the map document that you specified in your current ArcMap session. If you select 'Cancel', the tool will not open the new SUMA map product that you specified in the tool dialog.

3 Edit Data

SUMA 2.1 relies on the built-in functionality of ArcMap whenever possible. This is the case for editing, where you will use Feature Templates that have been defined with existing symbology and default attribute values to add new features.

- Start an editing session within your map document. When you do this, the 'Create Features' window should be active.
- Within the 'Create Features' window, feature templates are available to add new features to the feature classes that are present in the table of contents.
- Save your edits and stop the edit session.

4 Using the Dynamic Legend

The new SUMA map products use a dynamic legend. Therefore, only features that are visible in the data frame are present in the legend.
As you pan to new extents, the legend should update to reflect the symbology of the features that are in the new extent. This may be undesirable if you do not want all of the symbolized values to be shown in the legend.

You will notice that there are two Unit Tags layers and two Stream Labels layers in the Table of Contents of some of your map products. For example, there is a 'Unit Tags Legend' and a 'Unit Tags' layer in the Timber Sale Map. The 'Unit Tags Legend' layer is the layer that determines which values are symbolized in the legend. You can remove symbols in this layer, which will remove them from the legend, without affecting the symbology in your data frame.

Example 1: You edit your 'Unit Tags' layer by adding a record for 'Right of Way Tags' (single line) and 'Right of Way Tags' (double line). You only want one of these values to be visible, so you manually edit the 'Unit Tags Legend' layer to show the symbols visible in the map.

Example 2: You want all of the values from your 'Unit Tags' layer to show up in the legend, and you don't want to update the 'Unit Tags Legend' layer for each new data frame extent. You remove the 'Unit Tags Legend' from the legend and add the 'Unit Tags' layer. If you select Items > Unit Tags > Only show classes that are visible in the current map extent, the software will update the legend dynamically. There are no 'Unit Tags' features visible in the data frame, so no values are visible in the legend.

Note: if a record with a value previously absent from the legend is added via an edit session, that new symbol will not be visible in the legend until the edits are saved and the edit session is closed.

5 Navigate Map Products

Navigation between map products is accomplished with the built-in 'Open' button. The default folder is the folder where the current map document is saved. You no longer open SUMA map products using a project file; you can open the .mxd just as you would any other.
6 Define Pages for Output

This version of SUMA uses extent rectangles, with attributes that identify the associated map product, to generate pdf documents and define bookmarks. In general, when a user defines a map page, he or she is capturing the extent of the data frame; a polygon is created using those coordinates and inserted into the Grid_Index feature class in the Source_Data.gdb. There are two tools that can be used to generate these extent rectangles.

The first is the 'Page Selector' tool. This button will capture the current extent of the data frame and create an extent polygon. The page number is incremented automatically. This tool is only available in the Page Layout View.

The second way to define map pages is to use the 'Import Pages' tool. This tool will copy the extent rectangles that have been defined for one map product and apply them to another map product. It does not change any map product, or copy any data except the appropriate Grid_Index features.

a) Page Selector

- Zoom to the extent that you want to record for final map production. Note: There are scale requirements for some map products. The tool will present a message box and will not work if the scale is outside the acceptable range.
- Select Tools > Page Selector
- When prompted, select 'Yes' to record the current extent into the Grid_Index feature class. During the process of exporting map pages to pdf format, the current map product will be zoomed to this extent, and a page will be created. If you select 'No' or 'Cancel', the current extent is not recorded.
- If you are working in a map product that shares a scale range requirement with another, you will be asked to reuse the extent for the other map products. If you select 'Yes', the current extent will be recorded in the Grid_Index feature for later use in the 'Create Final Output' tool for the other map product.
For example, if you are working in your Timber Sales Map and you select 'Yes' to reuse the current extent, that extent will be recorded for the FPA Map in addition to the Timber Sales Map. If you select 'No' or 'Cancel', the current extent will be recorded for the current map product.

b) Import Pages

- Select Tools > Import Pages
- Enter the source map product that has the extents you wish to copy from. (The requirement is that you must define pages for this product prior to using this tool.)
- Enter the map product that will use the previously defined map extents. (This will copy the extents and associate the new index features with the map product that you select.)

7 View Pages

The user will be able to view the extent rectangles that have been recorded for the current map product with the 'View Pages' tool. The intent is to allow users to preview the extents that have been recorded, to ensure that the entirety of the sale will be included in the production of final map documents. This is different than having the ability to preview the final output, but should approximate that functionality.

The extents, and the associated page numbers, will be displayed in the data frame. A dialog box will also be present. By design, you will not be able to interact with the map document while viewing the pages that have been captured. Once you click 'OK', your map will be restored to the extent that it was zoomed to prior to using the 'View Pages' tool.

- Open the map product you wish to view page extents for
- Select Tools > View Pages

8 Zoom To

Another method to view the map page extents that have been recorded for the current map product is to use the 'Zoom To' drop-down. The pages will be listed by page number (e.g. "page 1, page 2, etc."), and the map will be zoomed to the extent associated with the page number after the user selects that page from the menu. You will only be able to view pages for the current map product; it will not allow you to navigate to other map products.
This tool relies on the extent polygons that are stored in the Grid_Index feature class within the Source_Data.gdb.

- Select the 'Zoom To' drop-down menu
- Click on the page that you wish to zoom to

9 Reorder/Delete Pages

This tool allows you to reorder and delete pages. Pages are grouped by map product, and the pages are numbered in the order they were added using the 'Page Selector' tool. You have the option to reorder/delete pages from all map products, or from the current map product only. The page numbers are used to populate the 'Bookmarks' drop-down menu, which allows users to navigate to the page views they have recorded. Additionally, the 'Create Final Output' tool uses the page numbers to determine the order in which map pages are converted to pdf documents.

The user can reorder map pages by moving a map page name up or down relative to other pages of the same product. The page numbers for each product are determined by the position of each map page in relation to other pages of the same product; the tool automatically renumbers the map pages during execution. The user can also delete page definitions with this tool by removing the page name from the list. Once a page is deleted, there is no way to undo the operation, although you can easily recreate a page by using the Page Selector tool.

- Select Tools > Page Manager
- Use the drop-down menu to filter the map pages you wish to reorder. The options are 'All Maps' (reorder all pages for all map products) and 'This Map' (reorder pages for the current product only).
- In the following example, Timber Sales Map Page 2 will become Timber Sales Map Page 1, and Timber Sales Map Page 1 will become Timber Sales Map Page 2. FPA Map Page 1 will remain FPA Map Page 1. Note: the order, or page numbering, is applied by map product.
- If you wish to delete a map page (extent polygon) so it is not used as a bookmark or in final map page creation, highlight the map page name and select the delete button.
In this example, FPA Map pg. 1 was selected and removed from the list. Select 'OK'; the pages that were removed from the list will be deleted, and the remaining pages will be renumbered to reflect the order in which they appear in the list (page numbers increase from top to bottom).

10 Create Final Output

This tool uses the grid index feature to export map documents to pdf format. A date-stamped folder is created in the folder the current map document was opened from, using the naming convention 'output_YYYYMMDD_HHMMSS', and optionally, another folder named 'pages' will be created within the date-stamped folder. This is done every time the user runs the tool; previous output is not overwritten.

By default, one pdf is created per map product. For example, if there are three pages defined for the Timber Sales Map and two pages defined for the Driving Map, then two pdf documents will be created: one Timber Sales pdf and one Driving Map pdf. Optionally, the user can check the box labeled "Single Page Output (optional)", and one pdf will be created for each page, regardless of the map product. These will be stored in the 'pages' subfolder. In the previous example, if the user had checked the optional box, then three Timber Sales Map pdfs and two Driving Map pdfs would be created in addition to the other pdf documents. Upon completion, the output folder is opened and the user can open the pdfs that were created.

- Select Tools > Create Final Output
- In the 'Pages' list, check the box next to each page that you would like included in the final pdf documents. (Optionally, you can select the 'Select All' button to include all pages.)
- Check the 'Single Page Output (optional)' box if you wish to have a single pdf for each page. This can be useful if you will be using the pdfs with a mobile GIS application.
- Select 'OK', and your pdf documents will be created. A Windows Explorer window will be opened to the output folder that holds the final pdf documents.
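Not from the guide itself, but for the curious: the naming and extent conventions described above can be sketched roughly like this in Python. The exact character-stripping rules and ring ordering SUMA uses may differ; these helpers are illustrative only.

```python
import datetime
import re

def suma_folder_name(sale_name):
    # 'suma_<sale_name>': strip special characters, lowercase, spaces -> underscores
    cleaned = re.sub(r"[^A-Za-z0-9 ]", "", sale_name).lower()
    return "suma_" + re.sub(r" +", "_", cleaned.strip())

def output_folder_name(now=None):
    # Date-stamped output folder: 'output_YYYYMMDD_HHMMSS'
    now = now or datetime.datetime.now()
    return now.strftime("output_%Y%m%d_%H%M%S")

def extent_to_ring(xmin, ymin, xmax, ymax):
    # A captured data frame extent as a closed polygon ring for Grid_Index
    return [(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin), (xmin, ymin)]

print(suma_folder_name("Big Cedar #4 Sale"))  # -> suma_big_cedar_4_sale
print(output_folder_name(datetime.datetime(2016, 12, 20, 9, 30, 0)))  # -> output_20161220_093000
```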
Posted 10-24-2017 10:37 AM

IDEA
Python is an important and necessary "easy" GIS automation language... much akin to AML and Avenue, and how those languages kindled a strong, vibrant, and skilled "GIS Analyst" caste. Continued support of Python (and Python AddIns) in ArcGIS Pro is a confirmation by ESRI of continued support for this vital "GIS Analyst" role. These are the people who wear many hats in an organization and do all the geoprocessing, cartography, analysis, and automation, but don't necessarily want to be full-blown .NET programmers. Many of us got into GIS for the coolness aspect, but many of us are also not enamored with the idea of doing full-time programming work. Prior to the active support of Python (ArcGIS v9.0), ESRI had, in my opinion, inadvertently divided the traditional "GIS Analyst" role into two main groups:

1. Programmers (people that you had to go to in order to automate anything)
2. Cartographers and data editors (non-programmers)

I am pleading with ESRI to actively support the classic GIS Analysts of the world by continuing strong and active support of Python - and specifically, to continue their support for Python AddIns in ArcGIS Pro.
Posted 10-24-2017 09:55 AM

POST
Very nice! Maybe the 'Dice' tool would provide a more "optimal" way to break the polygon geometry up?
Posted 10-03-2017 01:49 PM

POST
Like bixb0012 said, anything that you can do to get all the base datasets on the local machine (and off a network connection) would certainly be a big boost. Might there be z values in your point layer? If so, I bet dropping the z values would speed the read performance a bit. Hands down, all these web services are way faster with local data that is stripped down to the bare minimum. A casual armchair observation: .shp format generally remains faster to read/write than FGDB... I hate to say it, but most of ESRI's spatial search (select by location, etc.) and overlay algorithms (union, etc.) are in fact "the best in the biz". Not all, though...
Posted 10-03-2017 10:52 AM

POST
Depending on what you are going to do with the results, 'Generate Near Table' might be a faster solution, or even 'Intersect'. Both of these of course write an output dataset, so.... Another approach might be to break the geometry of your polygon into several simpler geometries - for example, a large rectangle and then several smaller triangles. As I recall, the ESRI algorithm uses this search hierarchy:

1. Bounding rectangle
2. Convex hull
3. Full geometry
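To illustrate why that hierarchy helps, here's a rough pure-Python sketch (not ESRI's actual implementation): a cheap bounding-rectangle check rejects most far-away points before the expensive full-geometry test (standard ray casting) ever runs. The convex-hull stage is omitted for brevity.

```python
def bbox(poly):
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    return min(xs), min(ys), max(xs), max(ys)

def point_in_polygon(pt, poly):
    # Full-geometry test: standard ray casting
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where this edge crosses the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def contains(pt, poly):
    # Stage 1: cheap bounding-rectangle rejection
    xmin, ymin, xmax, ymax = bbox(poly)
    if not (xmin <= pt[0] <= xmax and ymin <= pt[1] <= ymax):
        return False
    # Stage 2 (convex hull) omitted; Stage 3: full geometry
    return point_in_polygon(pt, poly)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(contains((0.5, 0.5), square))  # -> True
print(contains((5.0, 5.0), square))  # -> False
```

Simplifying a complex polygon into a rectangle plus triangles means most of the pieces have tight bounding boxes, so stage 1 does more of the work.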
Posted 10-02-2017 04:10 PM

POST
Dan - Did you ever get any results from my script/data?
Posted 05-17-2017 03:38 PM

POST
Yeah, I am starting to think that the benefits of NVMe disks might only shine through when doing really big multithreaded processing. Possibly these single-threaded tests don't have enough data throughput to reveal disk read speed as a bottleneck?
Posted 05-09-2017 04:08 PM

POST
Hi Dan, Your code didn't work for me right out of the gate, so I rewrote it like this:

import numpy as np
from scipy.spatial.distance import cdist
import timeit

def test1():
    return cdist(a, b)

def test2():
    np.save(driveLetter + ":/temp/d.npy", d)

def test3():
    np.load(driveLetter + ":/temp/d.npy")

n = 50000000  # 50 million destinations, 1 origin
a = np.random.RandomState(1).randint(0, 10, size=(n, 2))
b = np.random.RandomState(2).randint(0, 10, size=(1, 2))
d = cdist(a, b)  # for saving

for driveLetter in ["c", "d", "e"]:
    print("On the " + driveLetter + ":\ drive...")
    print("-Test #1 (proc speed)")
    for i in range(3):
        print("--" + str(timeit.timeit(test1, number=1)))
    print("-Test #2 (write speed)")
    for i in range(3):
        print("--" + str(timeit.timeit(test2, number=1)))
    print("-Test #3 (read speed)")
    for i in range(3):
        print("--" + str(timeit.timeit(test3, number=1)))

In my case:
C:\ is an NVMe drive
D:\ is actually 4 NVMe drives in RAID0
E:\ is a 7.2k SATA HDD

My results are (in seconds):

On the c:\ drive...
-Test #1 (proc speed)
--0.883544537654
--0.878839311374
--0.902597402597
-Test #2 (write speed)
--0.448425204932
--0.427565927223
--0.427990160524
-Test #3 (read speed)
--0.262958274602
--0.261044435017
--0.267392538968

On the d:\ drive...
-Test #1 (proc speed)
--0.883161701312
--0.968546179848
--0.957553405499
-Test #2 (write speed)
--0.41787253842
--0.361432745337
--0.361836109096
-Test #3 (read speed)
--0.249162823478
--0.251594980362
--0.257517482517

On the e:\ drive...
-Test #1 (proc speed)
--0.863139258002
--0.902649405389
--0.906740519754
-Test #2 (write speed)
--2.14880975189
--2.11361173074
--2.09666840009
-Test #3 (read speed)
--0.247759090225
--0.251948051948
--0.251255935845

My equipment is an HP z840 workstation with dual socket Xeon 2687v4 processors w/ lots of RAM. The C:\ drive is one of these: http://www8.hp.com/us/en/workstations/z-turbo-drive.html and the D:\ drive is one of these: http://www8.hp.com/us/en/workstations/z-turbo-drive-g3.html in a RAID0.
At any rate, I am not super impressed, and most likely there is something off with the system config... Not sure why my FTP link doesn't work for you... I reposted the test data as a zip file (~25 MB) to make it easier. Try just pasting ftp://ww4.dnr.wa.gov/frc/for_dan/ into Windows Explorer. Thanks, Chris
Posted 05-09-2017 01:50 PM

POST
Thanks for the input, Dan. So yes, I do indeed have some data and a script! It should take 3-4 minutes to run. The script will need a bit of path updating, BTW. Maybe just uncomment that whole thing at the end which writes a dbf file to a network location. ftp://ww4.dnr.wa.gov/frc/for_dan/ If you could post your results (the txt log file would be fine), I'd love to see the numbers from a different NVMe machine. One thought I had is that it could be our corporate virus scan software, which our IT Dept. loves to crank up to the "max slowness" setting.
Posted 05-08-2017 06:00 PM

POST
Anyone running a machine with these newfangled NVMe disks? We purchased a newer workstation with some of these, and while the "industry benchmark" tests we have run (CrystalMark) indicate the disks are indeed super fast and operating as expected, the real-world tests that we have run in ArcGIS and Python have not shown a significant performance increase over old-school SATA HDD platter disks. Anyone know why this might be the case? My only assumption is that disk speed is not the bottleneck... If it helps, here's an excerpt of some of the tests we are running. Note that not all of these involve disk i/o, and there are some others (not shown) that are just simple unions and dissolves and stuff:

#Process: Export raster to pnts (about 2 million points)
rasterPntsFC = os.path.join(fgdbPath, "raster_pnts")
time1 = time.clock()
arcpy.RasterToPoint_conversion(conRst, rasterPntsFC, "VALUE")
time2 = time.clock()
benchmarkDict["RASTER_TO_POINTS"] = time2 - time1
logMessage("RASTER_TO_POINTS = " + str(time2 - time1))

#Process: Build a large dictionary independent of disk
time1 = time.clock()
randomDict = {}
sum = 0
i = 0
for x in range(1, 2001):
    for y in range(1, 2001):
        i = i + 1
        randomDict[x, y] = [random.randint(1, 1000)]
        sum = sum + randomDict[x, y][0]
        randomDict[x, y].append(sum / float(i))
time2 = time.clock()
del randomDict
benchmarkDict["BIG_DICTIONARY"] = time2 - time1

#Make another dictionary, but one sourced from disk
time1 = time.clock()
pntDict = {r[0]: r[1] for r in arcpy.da.SearchCursor(rasterPntsFC, ["OID@", "grid_code"])}
time2 = time.clock()
benchmarkDict["POINTS_TO_DICT"] = time2 - time1

#Sort something pretty big
time1 = time.clock()
sortList = sorted(pntDict.items(), reverse=True)
sortList.sort()
time2 = time.clock()
benchmarkDict["SORT_DICT_ITEMS"] = time2 - time1

#Write a big txt file
time1 = time.clock()
testTxtFile = os.path.join(benchmarkFolderPath, "test_text_file.txt")
f = open(testTxtFile, 'a')
for i in range(10000000):
    f.write(str(i))
f.close()
time2 = time.clock()
benchmarkDict["WRITE_BIG_TXT_FILE"] = time2 - time1
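A side note for anyone rerunning these benchmarks on Python 3: time.clock() was deprecated in 3.3 and removed in 3.8. A drop-in sketch (helper name made up) that keeps the same time1/time2 bookkeeping but uses a monotonic timer:

```python
import time

def timed(label, func, benchmark_dict):
    # Same pattern as the benchmark script above, with time.perf_counter()
    t1 = time.perf_counter()
    result = func()
    benchmark_dict[label] = time.perf_counter() - t1
    return result

benchmarkDict = {}
timed("SLEEP_10MS", lambda: time.sleep(0.01), benchmarkDict)
print(benchmarkDict)
```

perf_counter() is also higher resolution than the old wall-clock calls, which matters when individual steps finish in fractions of a second.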
Posted 05-08-2017 05:17 PM