POST
The server copy of the MXD seems to be a little-known fact that I recently had to explain to someone. It would be crazy if the link between map service and MXD were otherwise lost. What I've used is something like the below; I can run this locally if the server directory is shared:

import glob
for item in glob.glob('//myserver/arcgisserver/directories/arcgissystem/arcgisinput/**/*.mxd', recursive=True):
    print(item)
02-05-2019 07:06 AM | 1 | 0 | 2186

POST
Hey all, I'm caught in ESRI's library-name-change-and-circularly-referencing-documentation hell. I have a number of MXDs that were the basis for a number of ArcGIS map services in a 10.5.x standalone ArcGIS Server environment. I would like to publish these in a new federated ArcGIS Server environment (ArcGIS Enterprise). How do I script that? Seems like that should be easy. The older documentation suggests the following steps, which ring familiar... (CreateMapSDDraft—Help | ArcGIS Desktop)

arcpy.mapping.MapDocument( )
arcpy.mapping.CreateMapSDDraft( )
arcpy.StageService_server( )
arcpy.UploadServiceDefinition_server( )

But it turns out I don't have the 'arcpy.mapping' module anymore:

AttributeError: module 'arcpy' has no attribute 'mapping'

Where did it go? It's been replaced with arcpy.mp. Huge semantic improvement!! But it's not like that's just a new name:

AttributeError: module 'arcpy.mp' has no attribute 'CreateMapSDDraft'

Well, publishing from MXD is no longer cool anyway. ESRI apparently wants you to live in the APRX world. So I "converted" an MXD to APRX to try another approach. You'd think you could now use the following (CreateSharingDraft—Sharing module | ArcGIS Desktop):

CreateSharingDraft - function: "The CreateSharingDraft function creates a MapServiceDraft from a Map in an ArcGIS Pro project that can be configured and shared to ArcGIS Server."

service_draft = arcpy.sharing.CreateSharingDraft( )
service_draft.exportToSDDraft( )
arcpy.StageService_server( )
arcpy.UploadServiceDefinition_server( )

No!

AttributeError: module 'arcpy.sharing' has no attribute 'CreateSharingDraft'

What the heck??? So I tried the Map object's method instead:

sharing_draft = m.getWebLayerSharingDraft( )
sharing_draft.exportToSDDraft( )
arcpy.StageService_server( )
arcpy.UploadServiceDefinition_server( )

Using these steps, the code fails at:

sharing_draft = m.getWebLayerSharingDraft("FEDERATED_SERVER", "MAP_IMAGE", service)
sharing_draft.exportToSDDraft(sddraft_output_filename)
ValueError: MAP_IMAGE

Is it possible that I've missed the one page in the documentation that makes sense? Please let me know. All I want to do is iterate over some directories with MXD files and publish them as map services.

UPDATE: I had a hunch that with multiple ESRI-associated Python installs, one might still offer the old arcpy.mapping. It turns out ArcMap's old Python 2.7 install still has it. I didn't think of that earlier because I'm working almost exclusively in Jupyter Notebook. Whether this is of much help, I don't know; I can't imagine going back and forth between releases.
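For whenever the draft/stage/upload calls cooperate, the "iterate over some directories with MXD files" part is plain Python. A minimal sketch of one piece of that loop; the helper name and the character-replacement rule are my own, not an Esri API:

```python
import os

def service_name_from_mxd(mxd_path):
    """Derive a service name from an MXD filename: drop the extension and
    replace anything non-alphanumeric, since service names are restrictive."""
    name = os.path.splitext(os.path.basename(mxd_path))[0]
    return ''.join(c if c.isalnum() else '_' for c in name)

# Each derived name could then feed the draft/stage/upload sequence
# for the matching MXD (or its converted APRX map).
```

Walking the directory tree would pair this with glob.glob('**/*.mxd', recursive=True), as in the earlier post about the server input directories.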
02-04-2019 03:28 PM | 0 | 7 | 5463

POST
Hey folks, lend me a hand. I don't need a code review or help troubleshooting, but I could use some guidance for coming up with a solution.

Background: I have a large dataset of simple POINT features. These are locations (~40K) of objects (~2K) that are stationary for a number of days (20-50), i.e., they have the same XY during that range of dates. Then they move to a new location (XY). I have keys for each location that are associated with those XY and dates, and then IDs and other attributes for the objects that move from location to location.

Objective: What I'd love to put together is an animated map that lets you observe the "path" an object takes over time, moving from Loc 1 to Loc 2, 3, 4... This could be either for a single object that you select (dropdown?) or a group of objects. Hope this makes sense?

So for starters, I've normalized things, taking a rather wide attribute table and creating a sort of location history table: a simple point feature class with only Location Key, Date, and XY. I can join this to other tables where I have offloaded the rest of the attributes.

I'm thinking this is probably a case for building a web map using the JSAPI. Can anyone point me to any similar examples or throw out some suggestions? Right now the data sits in an SDE feature class, but I can certainly publish it as a feature service. The closest thing I have found for the web, and what I would try to adapt, is this: Animate color visual variable | ArcGIS API for JavaScript 4.10. I've also briefly looked at Time Animation for ArcMap here: Creating a time animation—Help | ArcGIS for Desktop. I stopped because it was insanely slow. Might need some indexes? Looks like Pro (which I've barely used so far) offers some tools, but I didn't see anything like "Publish to Server"; instead it lets you create a movie: Animation basics—Animation | ArcGIS Desktop.

Any feedback is welcome.

(Categorizing this as a discussion because of the lack of specificity in what I'm asking. My apologies if that's not standard etiquette.)
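Since the normalization into a location history table is the part that's already nailed down, here is one way the "stay" collapsing could be sketched in plain Python. The tuple layout and names are illustrative, not tied to the actual schema:

```python
from itertools import groupby

def location_history(rows):
    """Collapse daily point records into one row per 'stay'.

    rows: (object_id, date, x, y) tuples, sorted by object_id then date.
    Returns (object_id, x, y, first_date, last_date) per stay -- one
    segment endpoint per move, ready to drive a time-based animation.
    """
    stays = []
    for (obj, x, y), grp in groupby(rows, key=lambda r: (r[0], r[2], r[3])):
        grp = list(grp)
        stays.append((obj, x, y, grp[0][1], grp[-1][1]))
    return stays
```

Joined back to the object attribute tables, each stay becomes one time-enabled feature for whatever animation front end ends up being used.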
02-01-2019 10:38 AM | 0 | 0 | 996

POST
That's interesting. Probably way before my time. I just think it would be a handy tool to have: one that just turns XY (old) into XY (new) without creating a feature class. I liked Dan's suggestion of creating a tool. Maybe a geoprocessing web tool? POST XY (old) and get XY (new) returned.
01-30-2019 02:42 PM | 0 | 0 | 2091

POST
It hadn't even occurred to me to try using arcpy cursors with anything other than feature classes. I spent some time away from GIS and was using pyodbc successfully for other things, so the familiar module came in handy. I'll have to try cursors next time.
01-30-2019 02:39 PM | 0 | 4 | 2091

POST
Okay, I realize I may draw scorn from the community for asking the way I'm asking, but... is there an easy way (easier than what I'm doing) to read lat/lon values stored in a non-spatial SQL table, reproject them, and write the new values back to the same or another table? Since this is for a non-GIS audience, I don't have any use for creating an intermediate feature class. I found arcpy.AddXY_management, but I believe it, like most other scenarios I could come up with, requires a feature class. Basically, I'm curious if there is another way to do this without creating a feature class. This is what I tried:

import pyodbc

sql_conn = pyodbc.connect('''bla bla bla''')
query = '''SELECT [LATITUDE],[LONGITUDE]
           FROM MyTable'''
cursor = sql_conn.cursor()
cursor.execute(query)
ptDataRows = []
for row in cursor.fetchall():
    # Turn each cursor row into a list
    ptDataRows.append([x for x in row])

I was bringing in additional attributes from my source table, which I'm leaving out in this example; a longer list would be the only difference. Next I'm working with the lat/lon columns to create point geometries:

import arcpy

src_sr = arcpy.SpatialReference(4267)   # NAD 1927
dest_sr = arcpy.SpatialReference(4326)  # WGS 1984
for i, row in enumerate(ptDataRows):
    # Point takes (X, Y), i.e., (longitude, latitude)
    pt = arcpy.Point(row[1], row[0])
    ptGeom = arcpy.PointGeometry(pt, src_sr)
    newpoint = ptGeom.projectAs(dest_sr, 'NAD_1927_To_WGS_1984_4')
    newcoord = [round(newpoint.centroid.X, 7), round(newpoint.centroid.Y, 7)]
    row.extend(newcoord)  # extend() modifies row in place (and returns None)

Now I have a new list of lists, each containing the data for a new table row that I can write back to a target table.

I also looked at arcpy.Project_management. But besides the hilarious name that will make any PMP laugh, I couldn't get it to work; I think it requires a feature class again. Since this works as is, I have to mention that I also need to append additional spatial attributes such as UTM zones. That's why I'm wondering if this is the "best" approach.
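For the write-back step, an executemany() batch keeps it set-based rather than row-by-row. A sketch with sqlite3 standing in for the pyodbc/SQL Server connection; the table and column names here are invented for illustration:

```python
import sqlite3

# In-memory stand-in for the real SQL Server table.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("""CREATE TABLE MyTable (
    ID INTEGER PRIMARY KEY,
    LATITUDE REAL, LONGITUDE REAL,
    LAT_WGS84 REAL, LON_WGS84 REAL)""")
cur.execute("INSERT INTO MyTable (ID, LATITUDE, LONGITUDE) "
            "VALUES (1, 29.75, -95.36)")

# (new_lat, new_lon, id) tuples -- stand-ins for the projectAs() output.
updates = [(29.7501234, -95.3598765, 1)]
cur.executemany(
    "UPDATE MyTable SET LAT_WGS84 = ?, LON_WGS84 = ? WHERE ID = ?", updates)
conn.commit()
```

With pyodbc the pattern is the same executemany() call against the live connection; only the parameter placeholder style and connect string differ.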
01-29-2019 03:44 PM | 0 | 10 | 2606

POST
I was reading the following as a convenience feature: "If you have a small amount of data in a shapefile, you can make it available for others to view through a web browser by adding it as a .zip file containing the .shp, .shx, .dbf, and .prj files to a map you create with Map Viewer." (Shapefiles—Portal for ArcGIS | ArcGIS Enterprise) But it looks like the following really doesn't work:

county_properties = {'title': 'Active Counties',
                     'tags': 'counties, rigs, portal upload test',
                     'type': 'Shapefile'}
counties_shp = mygis.content.add(county_properties, data='c:/temp/counties.shp')

Error while analyzing Shapefile 'counties.shp' Invalid Shapefile

Then, however, I can do this:

import zipfile
import glob, os

shpDir = 'c:/temp'
nametrunc = 'counties*'
files = glob.glob(os.path.join(shpDir, nametrunc))
with zipfile.ZipFile(os.path.join(shpDir, 'counties.zip'), 'w') as myzip:
    for file in files:
        myzip.write(file)
counties_shp = mygis.content.add(county_properties, data='c:/temp/counties.zip')

I haven't had to upload a bunch of SHP files to Portal, but it sure would be easier if you didn't have to zip them up.
01-25-2019 03:09 PM | 0 | 1 | 2228

POST
Thanks, Joshua, for being a better sleuth than myself. That explains it.
01-25-2019 01:21 PM | 0 | 0 | 3836

POST
I'll definitely read up on that some more. When I first noticed that flag, I wondered how it really applied to me, since my source feature classes all share the same schema for sure; no checking needed. But the documentation suggested otherwise. Either way, the lesson learned is likely to use the InsertCursor.
01-24-2019 03:21 PM | 0 | 0 | 5693

POST
But I tried both "TEST" and "NO_TEST". Both took about the same time, 7+ hours.
01-24-2019 02:30 PM | 0 | 2 | 5693

POST
What's the quickest way to determine what version of ArcSDE is installed and running? I've found a few suggestions for how to get this info, but some additional clarification would be helpful.

Method 1: Check the SDE_version table (not to be confused with the SDE_versions table) using your RDBMS client of choice (SQL Server/SSMS for me)... (Way to find version of SDE)

SELECT *
FROM [MyDB].[dbo].[SDE_version]

MAJOR  MINOR  BUGFIX  DESCRIPTION          RELEASE  SDESVR_REL_LOW
10     5      1       10.5.1 Geodatabase   105010   93001

Okay, I was expecting to get a version somewhere near or around 10.5, so that seems to make sense. But what's with SDESVR_REL_LOW? Apparently, 93001 is "the lowest release number of server allowed to run on this instance". Is this a reference to ArcGIS Server? (System tables of a geodatabase in SQL Server—Help | ArcGIS Desktop)

Method 2: When I try to get the information using arcpy, I get the output below - no mention of 10.5.1, but 3,0,0 instead? I understand 'Current Release' to indicate that I'm not current, since I'm not on 10.6.

import arcpy
connfile = r'<path_to_connectionfile>'
print("Release: ", arcpy.Describe(connfile).release)
print("Current Release: ", arcpy.Describe(connfile).currentRelease)

>>> Release: 3,0,0
>>> Current Release: False

Finally, when I try to get the information inside ArcCatalog (right-click the database connection in the Catalog tree and click Properties > General tab...) (FAQ: How do you determine the version of an enterprise (ArcSDE) geodatabase in Catalog? | Esri Australia Technical Blog), I also get 10.5.1.

So, long story short, can someone explain to me the significance of 93001 (9.3.0.0.1?) or 3,0,0?
01-24-2019 02:18 PM | 0 | 5 | 4704

POST
So I was working on a little project merging a large number of datasets into one. Irritatingly, each partial dataset resided in a feature class of the same name, scattered across several hundred file geodatabases, also of the same name but with a publication date added to the filename, and each stuffed into a ZIP archive. My process:

- Iterate over a list of archives
- Unzip each GDB, while logging the occasional bad ZIP file
- With all GDBs in the same directory, iterate over them
- Access the feature class
- Copy all features into an in-memory "temp" feature class
- Add a date field and populate it with the publication date of the GDB

So much for background.

But which arcpy method to use for writing all these partial datasets to an SDE feature class? All that was needed was to append all features from source to target with a matching list of fields (identical schema). So I tried:

arcpy.Append_management(['in_memory\\temp'], target_fc, "TEST")

This seemed to work fine for a smaller test dataset. With approx. 1200 features per feature class, the process took about 6 sec per feature class. That's slower than 1200 inserts in SQL, but I could live with it. So I added a logger from the Python standard library logging module to get some processing-time info and pointed my script at the directory with 800+ GDBs, an 80 GB directory with all the database overhead. And it ran... and ran... and ran...

The 1st run:
Start 01/22/2019 04:14:14 PM
Finish 01/22/2019 11:41:36 PM
Total: 7h:27m:22s

I was not prepared for that kind of lengthy ordeal and started researching. I stumbled across this thread (Loop processing gradually slow down with arcpy script), and the mention of arcpy.SetLogHistory caught my attention. I had no idea that arcpy logs to an ESRI history file even when you're running code outside of any application. So I set the flag to False and ran my script again:

arcpy.SetLogHistory(False)

The 2nd run:
Start 01/23/2019 05:03:23 PM
Finish 01/24/2019 12:49:21 AM
Total: 7h:45m:58s

Even worse! Here is my code:

arcpy.CopyFeatures_management(fc_source, 'in_memory\\temp')
arcpy.AddField_management('in_memory\\temp', "date_reported", 'DATE')
# this is the date portion of the GDB name
datestring = gdb[-12:-4]
updatedate = datetime.strptime(datestring, '%Y%m%d').strftime('%m/%d/%Y')
fields = "date_reported"
with arcpy.da.UpdateCursor('in_memory\\temp', fields) as cursor:
    for row in cursor:
        row[0] = updatedate
        cursor.updateRow(row)
try:
    arcpy.Append_management(['in_memory\\temp'], SDE_target_fc, "TEST")
    logger.info("Successfully wrote data for {0} to SDE".format(datestring))
except:
    logger.info("Unable to write data to SDE...")
arcpy.Delete_management('in_memory\\temp')

This was driving me crazy, so I pulled the timestamps from my log and plotted the diffs for each feature class using matplotlib/pandas for both runs. X is a count of the iterations over feature classes; Y is the time in seconds each INSERT (or the portion of the code doing the append) took.

Two questions come to mind: 1) Why does each INSERT take longer than the one before? 2) What the heck happens when I get to feature class #500 or so, where that steady slowing trend goes berserk?

Just for the sake of completeness, I also ran the script using "NO_TEST" - with the same result:

arcpy.Append_management(['in_memory\\temp'], SDE_target_fc, "NO_TEST")

Since arcpy.Append clearly wasn't performing, I followed the advice from this thread (Using arcpy.da.InsertCursor to insert entire row that is fetched from search cursor?) and replaced Append with:

dsc = arcpy.Describe(target_fc)
fields = dsc.fields
fieldnames = [field.name for field in fields]
...
with arcpy.da.SearchCursor('in_memory\\temp', fieldnames) as sCur:
    with arcpy.da.InsertCursor(target_fc, fieldnames) as iCur:
        for row in sCur:
            iCur.insertRow(row)

The 3rd run:
Start 01/24/2019 01:40:31 PM
Finish 01/24/2019 02:19:09 PM
Total: 38m:38s

Now we're talking. So much faster! In fact, 2318 seconds for 850 INSERTs comes out to about 3 sec per transaction, and when you plot it, that's the kind of behavior I'd expect: no matter how many INSERTs you do, they always take about 2-4 seconds.

So my question: what in the world is Append_management doing? Clearly it's the wrong tool for what I was doing, despite its name. Or is it not?
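The log-diff plotting above boils down to differencing consecutive timestamps. A minimal stdlib version of that step (the format string matches timestamps like the ones in the runs above; the helper name is my own):

```python
from datetime import datetime

def per_iteration_seconds(timestamps, fmt='%m/%d/%Y %I:%M:%S %p'):
    """Turn the ordered timestamps logged after each append into the
    number of seconds each iteration took (diff of consecutive entries)."""
    times = [datetime.strptime(t, fmt) for t in timestamps]
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]
```

Feeding the resulting list straight to matplotlib (or a pandas Series) reproduces the per-iteration plots for each run.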
01-24-2019 01:57 PM | 4 | 5 | 6933

POST
Dan, thanks for chiming in again... I've been a little sidetracked from looking at this. I hear you: if you have a toolkit that does the trick, why learn another one that accomplishes the same thing? Especially in a world of wrappers around wrappers. But I kinda feel that way about pandas... I know it better than numpy and wanted to figure out how to do this with it. I did take a look at your blog just now and will be sure to return to it soon. Thanks.
01-08-2019 02:56 PM | 0 | 0 | 5402

POST
Thanks, Dan, and happy new year! I tried your above example and it works for me just fine. I had to use the full path to the SDE or GDB table; I don't understand why using only the table name and env.workspace doesn't work. But I can create a new table with the correct field names and data types from the structured array.

However, I am still not getting anywhere with my own example. The whole purpose of this exercise was to be able to work with a dataframe in pandas and get the pandas data types converted to numpy, so that the arcgis function can create a table from it. By now, I suspect this method may be the culprit - a scipy.org search returns no results for it at all:

np.rec.fromrecords

In your case, the array you created "manually" looked like this:

array([(A1, B1, C1), (A2, B2, C3), ...], dtype=[('FieldA', 'DatatypeA'), ('FieldB', 'DatatypeB'), ...])

When I use the above method on df.values and convert back into an ndarray, I get:

x = np.array(np.rec.fromrecords(df.values))
x
>> array([(A1, B1, C1), (A2, B2, C3), ...], dtype=(numpy.record, [('FieldA', 'DatatypeA'), ('FieldB', 'DatatypeB'), ...]))

So although np.array() is supposed to turn the record array back into a regular ndarray, there is still that "numpy.record" in the dtype. The more I think about it, this probably isn't a question for GeoNet any longer but rather a numpy question. But if you have any suggestions, let me know.
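For what it's worth, one way to shed the numpy.record wrapper is to re-cast using the plain field description from dtype.descr. A sketch under that assumption, with invented field names and values:

```python
import numpy as np

# Build a record array the same way as above, then cast it back to a
# plain structured ndarray via the un-wrapped field description.
rec = np.rec.fromrecords([(1, 2.5), (2, 3.5)], names='FieldA,FieldB')
plain = np.asarray(rec).astype(rec.dtype.descr)

print(plain.dtype)  # plain structured dtype, no numpy.record wrapper
```

Whether the consuming arcgis function actually chokes on the record wrapper, or on something else in the pandas-derived dtypes, is a separate question.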
01-03-2019 10:59 AM | 0 | 2 | 5402

POST
Thanks, Dan. I tried that and got the exact same error. Could it be that I have to actually create the table before loading data? Ha! I'll try that next.
12-27-2018 01:46 PM | 0 | 4 | 5402