POST
It is true that sqlite is still not supported for interactive editing as at Pro 2.3, and it is not supported for editing in ArcMap either. But don't let that put you off using sqlite and the spatial extensions combined with Esri tools. It is still very useful for lots of purposes: it can fill holes in ArcTools, and for some operations it is 100 times faster.

The way I use it is to keep the spatial tables (i.e. feature classes) and aspatial tables in sqlite (with the spatial extension DLL built into Arc*) and run Python scripts with SQL commands. You can do any (non-interactive) editing with that method! After a few seconds of processing I copy the spatial tables back into a file geodatabase if I need to use other tools that do not support sqlite. I can see the feature classes and tables in ArcCatalog, and they can usually be a read-only source of data.

The great benefit of exporting data as a sqlite database is that I can send it to a user without a GIS installation and they can easily read it with an ODBC driver. It is a single-file database, excellent for exchange. There are no practical size limits; some of my databases are 6 GB. Because they have good indexing they are way faster than Access ever was. Instead of an awful CSV file you get a proper database table with a full schema, metadata, and the flexibility of a relational database. Need the equivalent of Access? There are several open source equivalents available; I purchased SQLite Expert (http://www.sqliteexpert.com) to test out my SQL expressions.

What sort of tools? Consider a many-to-one relate between owners and parcels. You would like a point layer of owners 'geocoded' using the parcel centroid. The obvious solution is to add a spatial column to Owners and populate it with parcel centroids through a relate to the parcels. This typically takes 2 minutes for 5 million records. It is not easy with ArcTools: MakeQueryLayer is not available for file geodatabases, and MakeQueryTable is slow and does not scale well enough to complete. The only workaround I have found is to create a new empty Owner feature class, copy all the data across with a cursor, then create a dictionary of geometry objects from the parcels and update the Owner feature class. That takes some advanced Python skills, so I may as well write a SQL expression in spatialite (sqlite + mod_spatialite):

sql = """UPDATE {0}
         SET shape = (SELECT shape
                      FROM parcel.Parcel_Label B
                      WHERE B.par_id = {0}.first_par_id)""".format(new_title)

time 0:00:17.682000, count 2149376. This was as expected, because it was a port from ARC/INFO and AML which took a similar time.
Posted 05-06-2019 10:24 AM

POST
Spreadsheets and even shapefiles should only be regarded as data sources. Immediately translate them into a file geodatabase and permanently join them to the spatial datasets. There are huge benefits in having all the data in a flat denormalised table for analysis. You may have to normalise data for efficient maintenance, but for analysis it has to be flat. You can then index the geometry field efficiently for spatial queries, and the attributes can all be indexed as well. Index everything: there is no harm on an extracted database, since you are not using it as a transactional database. Don't worry about repeated columns and other relational db caveats. You can still do subsets equivalent to WHERE clauses and build on them, because a GIS has persistence available similar to a view, called a layer, that does not need to copy the subset. You don't say how large your database is, but if you have spreadsheets (65,000 rows max?) and shapefiles (2 GB max) then I would consider that your datasets are "small". Most single operations should complete in a few minutes, so a complete workflow should be able to run in a reasonable time. When testing out the workflow, use subsets to make it run even faster.
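A hedged sketch of that translate, join and index pattern with arcpy; the spreadsheet, dataset and field names are invented for illustration:

import arcpy

gdb = r"C:\data\analysis.gdb"

# Translate the spreadsheet into the geodatabase, then join it permanently.
arcpy.conversion.TableToTable(r"C:\data\owners.xlsx\Sheet1$", gdb, "owners")
arcpy.management.JoinField(gdb + r"\parcels", "par_id", gdb + r"\owners", "par_id")

# Index every attribute you will query; harmless on a read-only analysis extract.
for field in ("par_id", "owner_name", "suburb"):
    arcpy.management.AddIndex(gdb + r"\parcels", field, "idx_" + field)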
Posted 05-06-2019 09:35 AM

IDEA
Use sqlite instead. It is open source, 64-bit, supports spatial operations, has no practical size limit, and is supported by all vendors, including Esri! It is much faster than Access or file geodatabases, has full SQL support, and is already built into ArcGIS. Sqlite is used in all the mobile apps because it is built into the Android and Apple operating systems. There are ODBC drivers for all databases, and it is built into Python. Have a look at the toolbox or help and you will find it tucked away.
Posted 05-05-2019 01:32 PM

POST
You do not say whether the 'database' is spatially enabled. If it is, then you can run SQL queries that include spatial operators. But if it is a non-spatial database, then you will need to relate the table to a feature class that it joins to, and you can then run ArcTools for the spatial operations. Actually it is best if you just replicate the tables (or a subset) in the geodatabase, for speed and repetition, because you will need to run the process many times until you get it correct. You might use ModelBuilder to create some workflows, because inevitably there will be many steps. Or you can script the steps in Python. Python is your friend here, because there are efficiencies possible: you can hold tables in memory and index them using dictionaries before running the tools. Since the tools do not optimise the query (as SQL does) you will need to be smart to get efficient workflows. Indexes, tolerances, search limits, selection sets and the right tools for the process all take experience that will take more than a week to collect. The tools often do not work as efficiently as SQL queries, so do the work in the database first if possible.
Posted 05-05-2019 01:22 PM

POST
The easiest way is to use a split() function with the split character. Switch to the Python parser first. Because the pieces are strings, convert them to floats as well. For the left side, calculate latitude with:

float(!field!.split('/')[0])

and for the right side, calculate longitude with:

float(!field!.split('/')[1])
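The same calculation can be scripted with the Calculate Field tool; a minimal sketch, assuming a table with a field called field holding 'lat/long' strings (names invented). In ArcMap the expression type would be PYTHON_9.3 rather than PYTHON3:

import arcpy

table = r"C:\data\analysis.gdb\sites"   # hypothetical table
arcpy.management.CalculateField(table, "latitude",
                                "float(!field!.split('/')[0])", "PYTHON3")
arcpy.management.CalculateField(table, "longitude",
                                "float(!field!.split('/')[1])", "PYTHON3")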
Posted 05-09-2018 04:11 PM

POST
Can you have another go at formatting the code? It is unreadable. My strategy for the JoinField tool is: don't, especially if it is a large table. You will have trouble with joins across databases, and if the tables are in the same database, use a database join instead. What I do instead is extract the data I need into a dictionary of tuples of the required fields, using a SearchCursor in a dictionary comprehension. Then I add some fields to the target (or a copy) and populate them using an UpdateCursor. This is always faster and more reliable, and it works across databases. It does not need an index, because a dictionary has a hash table under the hood. To allow for missing values, use the get() function with a default of None.
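A minimal sketch of that pattern; the feature class, table and field names are invented for illustration, and the target fields are assumed to exist already (AddField):

import arcpy

source = r"C:\data\other.gdb\owners"      # table to 'join' from
target = r"C:\data\analysis.gdb\parcels"  # feature class to update

# Dictionary comprehension over a SearchCursor: key -> tuple of wanted fields.
lookup = {key: (name, addr) for key, name, addr in
          arcpy.da.SearchCursor(source, ["par_id", "owner_name", "owner_addr"])}

with arcpy.da.UpdateCursor(target, ["par_id", "owner_name", "owner_addr"]) as cur:
    for row in cur:
        # get() supplies the default for keys missing from the source table
        row[1], row[2] = lookup.get(row[0], (None, None))
        cur.updateRow(row)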
Posted 02-27-2018 01:06 AM

IDEA
The default sqlite3.dll installed with Python does not have extension loading compiled in at all, not just disabled as the Python docs suggest. Therefore you cannot add spatial extensions to access geopackage spatial feature classes from Python in ArcTools. If the sqlite3.dll from sqlite.org (which is enabled) were installed in .../Python/DLLs, we could then load the mod_spatialite.dll extension and supplement ArcTools, just as is done with the Python numpy module for Grid tools. This could be done for ArcMap (32- and 64-bit) and ArcGIS Pro (64-bit) installations. By the way, for those of you who cannot wait: you can do this yourself now, for your own installation, if you have admin rights and are prepared to hack.
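For illustration, once an extension-enabled DLL is in place, loading spatialite from Python looks something like this sketch (the geopackage path is an assumption):

import sqlite3

conn = sqlite3.connect(r"C:\data\parcels.gpkg")  # hypothetical geopackage
conn.enable_load_extension(True)       # raises on a build without extension support
conn.load_extension("mod_spatialite")  # mod_spatialite.dll must be on the PATH
print(conn.execute("SELECT spatialite_version()").fetchone()[0])
conn.close()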
Posted 02-23-2018 06:40 PM

POST
Perfect! Just what I was looking for. Something that actually works and answers the problem. There are a lot of false leads in this thread that I had already tried.
Posted 02-15-2018 03:00 AM

POST
You need to share more of your workflow for us to comment. My initial thought is: why do you need to run many parallel processes? Why not do them all at once? It feels like you are 'reinventing GIS'. By this I mean that the spatial tools are designed to run on whole datasets, so running a tool for each feature, or even for a group, is very inefficient, if that is what you are doing; I can't tell. Partitioning is a good strategy if the data sometimes overloads the tool, and in theory you could run the partitions in parallel, but I find that partitioning works so well that just running the tool in a loop over a few partitions (not thousands) is good enough. My goal is for each run of the tool to finish in a few minutes, so that the total time is still reasonable. I personally do not use ModelBuilder because I do not have enough control of intermediate results; they are always written out to a scratch geodatabase. In Python you can hold selections as views, use SQL queries, store sets in Python dictionaries (which are hashed arrays), use spatialite (which is much faster for some operations using SQL), and generally avoid some of the elegant but unscalable standard tools. For example, avoid any processing with a joined table. I think a change of approach can make your process run in the time it takes to have a cup of coffee.
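To make the loop idea concrete, a sketch of running a tool over a handful of partitions; the partition field, its values and the choice of Buffer are stand-ins:

import arcpy

parcels = r"C:\data\analysis.gdb\parcels"   # hypothetical input
for region in ("NORTH", "SOUTH", "EAST", "WEST"):
    lyr = arcpy.management.MakeFeatureLayer(
        parcels, "part_" + region, "REGION = '{}'".format(region))
    arcpy.analysis.Buffer(lyr, r"C:\data\analysis.gdb\buf_" + region, "100 Meters")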
Posted 09-15-2017 06:03 PM

IDEA
I would like to see a little NLP used to extract an address from other clutter, to improve the match rate. At the moment a person's name in the address, such as is put on mail labels, confounds the very strict requirements of an address string. There are open source packages that do this very well, which I have experimented with. They need a body of examples to be trained on, but that isn't very hard to do once. Here is an example: GitHub - datamade/usaddress: a python library for parsing unstructured address strings into address components
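For a taste of what that library does (the sample string is invented):

import usaddress  # pip install usaddress

messy = "ATTN John Smith 123 Main St Suite 100 Springfield IL 62701"
tagged, addr_type = usaddress.tag(messy)  # may raise RepeatedLabelError on odd input
print(addr_type)                          # e.g. 'Street Address'
for label, value in tagged.items():
    print(label, value)                   # Recipient, AddressNumber, StreetName, ...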
Posted 09-12-2017 04:25 PM

POST
The SIDE field is required. It would help if there were a sample reference dataset, or at least some documentation of the allowable flags for these codes. I doubt that it will appear, so we users will just have to reverse engineer the options and document them here and on Stack Exchange. The SIDE field is REQUIRED in the locator builder, so something has to be populated, and the codes are not intuitive. I had a fruitless search through the help for a list of codes. I then searched the configuration file and can guess that Parity can be L, R, or M, perhaps for Left, Right, Mixed. So what would SIDE use? Maybe the same, but D (dual) might be an option. Perhaps a blank is allowed, but never nulls; I suppose that is because someone might still be using a shapefile, where nulls are not supported. The default output of a locator is still set to shapefile.
Posted 06-14-2017 05:15 PM

IDEA
I would like to suppress some fields, reorder them, and change widths (which can be done), and then save the format for subsequent restarts. Having to do it again every time is a pain.
Posted 05-18-2017 04:09 PM

POST
If you are going to use Python to calculate fields with lots of logic, then it is much easier to use an UpdateCursor. The Field Calculator wraps a cursor around your expression anyway, so it is the same thing; I regard the Field Calculator as a prop to use in ModelBuilder only. The benefits of using a cursor are many: the logic is easier to write and understand, it is easier to test the result and debug, you can handle exceptions and unexpected input, and the syntax is simpler. If you are using a join, then use a Python dictionary instead for speed.

with arcpy.da.UpdateCursor(feature_class, ['A1', 'B1', 'A2', 'B2', 'C']) as cur:
    for row in cur:
        if row[0] == row[1] and row[2] == row[3]:
            row[4] = "A AND B IDENTICAL"
        else:
            row[4] = "OK"
        cur.updateRow(row)
Posted 05-18-2017 03:38 PM

POST
Have people noticed that these statistics do not appear until metadata is built for the geodatabase, and that it needs to be refreshed to get feature counts and dates? That is the clue to how to get these details programmatically. You can't read the embedded metadata easily, because it is stored as a BLOB, but you can export it to XML and then read it with Python. All the tools are there to export metadata, and the standard Python module xml.etree can extract what you need. No need for ArcObjects at all:

1. arcpy.conversion.SynchronizeMetadata(...)
2. arcpy.conversion.ExportMetadata(...)
3. import xml.etree.ElementTree as ET
   tree = ET.parse(xmlfile)
   [get required elements]
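Putting the three steps together, a sketch for an ArcMap-era install; the translator path and the XML element path are assumptions and vary by version and metadata standard:

import arcpy
import xml.etree.ElementTree as ET

fc = r"C:\data\analysis.gdb\parcels"   # hypothetical feature class
xmlfile = r"C:\temp\parcels.xml"
translator = arcpy.GetInstallInfo()["InstallDir"] + r"Metadata\Translator\ARCGIS2FGDC.xml"

arcpy.conversion.SynchronizeMetadata(fc, "ALWAYS")        # refresh counts and dates
arcpy.conversion.ExportMetadata(fc, translator, xmlfile)  # BLOB -> XML on disk

tree = ET.parse(xmlfile)
# The element path depends on the export standard; FGDC puts the title here.
print(tree.find("idinfo/citation/citeinfo/title").text)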
Posted 05-08-2017 01:30 PM