POST
Thanks for that. It should be just as straightforward as you wrote, but the thing about my SQL query is that the results have more than 20 columns. If I create a table first in the GDB, how can I read the column names and types from my result and use them to create the fields dynamically? And in the part where we loop through the results of the query, with 20 columns I don't want to write "cursor.insertRow((result[0], result[1], ... result[19]))". Thanks
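A minimal sketch of the dynamic-field idea, assuming pypyodbc; the DSN, query, geodatabase path, table name, and type map below are hypothetical, and the map is not exhaustive:

import datetime
import arcpy
import pypyodbc

conn = pypyodbc.connect('DSN=MySqlServer')           # hypothetical DSN
cur = conn.cursor()
cur.execute('SELECT * FROM dbo.SomeJoinView')        # hypothetical query

# cursor.description holds (name, type_code, ...) per result column;
# pypyodbc reports the type_code as a Python type such as str or int.
type_map = {str: 'TEXT', int: 'LONG', float: 'DOUBLE',
            datetime.datetime: 'DATE'}

gdb = r'C:\Temp\Scratch.gdb'                         # hypothetical GDB
table = arcpy.CreateTable_management(gdb, 'QueryResults').getOutput(0)
field_names = []
for desc in cur.description:
    name, type_code = desc[0], desc[1]
    field_names.append(name)
    arcpy.AddField_management(table, name, type_map.get(type_code, 'TEXT'))

# Each fetched row is already a sequence, so there is no need to spell
# out result[0] ... result[19]; pass the whole row through as a tuple.
with arcpy.da.InsertCursor(table, field_names) as icur:
    for result in cur.fetchall():
        icur.insertRow(tuple(result))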
Posted 07-04-2016 03:45 PM

POST
Hi guys, I'm using pypyodbc to connect to SQL Server and perform some joins between several tables. How can I directly convert the results into a stand-alone table within a geodatabase? Regards, Thanos
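For the record, one compact route is a minimal sketch like this: fetch the rows, build a NumPy structured array, and let arcpy write the geodatabase table in one call. The DSN, query, dtypes, and output path are hypothetical, and arcpy.da.NumPyArrayToTable requires that the output table not already exist:

import numpy
import arcpy
import pypyodbc

conn = pypyodbc.connect('DSN=MySqlServer')                    # hypothetical DSN
cur = conn.cursor()
cur.execute('SELECT id, name, value FROM dbo.SomeJoinView')   # hypothetical join

# Structured dtype: one (name, type) pair per result column.
rows = [tuple(r) for r in cur.fetchall()]
arr = numpy.array(rows, dtype=[('id', numpy.int32),
                               ('name', '|S50'),
                               ('value', numpy.float64)])
arcpy.da.NumPyArrayToTable(arr, r'C:\Temp\Scratch.gdb\JoinedResults')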
Posted 07-04-2016 02:30 PM

POST
I tried right-clicking the table, but when I go to Manage, everything is greyed out. This SQL database is actually managed by someone else and has tables that are used in another business process. The table I'm trying to update will be an input to a reporting process using a reporting tool. I should probably ask our DBA for some input as well. But since I can use an InsertCursor, should it be possible for me to use an UpdateCursor to delete rows? Or is that very inefficient?
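For reference, a minimal sketch of deleting rows with a da cursor; the connection file, table name, and where clause are hypothetical, and deleteRow() removes the row the cursor is currently positioned on:

import arcpy

table = r'C:\Connections\prod.sde\dbo.ReportInput'   # hypothetical table
where = "STATUS = 'PROCESSED'"                       # hypothetical clause
with arcpy.da.UpdateCursor(table, ['OID@'], where_clause=where) as cur:
    for row in cur:
        cur.deleteRow()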
Posted 02-15-2016 02:47 PM

POST
Hi Vince, if it's a SQL Server table, how do you register it with ArcSDE, since it's non-spatial? Thanks
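For reference, the Register With Geodatabase geoprocessing tool accepts non-spatial tables as well; a minimal sketch, with a hypothetical connection path:

import arcpy

arcpy.RegisterWithGeodatabase_management(
    r'C:\Connections\prod.sde\dbo.MyTable')   # hypothetical table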
Posted 02-15-2016 02:09 PM

POST
Hi guys, need some input or advice on this. I have a SQL Server table that I periodically need to truncate (completely wipe the data) or delete several rows from. Then I also need to insert new rows based on the rows in a feature class. I don't have a problem updating it, I just use arcpy.da.InsertCursor, but maybe this is not the best? For deleting a row or totally wiping out the table, should I be using ArcSDESQLExecute? I tried Data Management > Table > Delete Rows, but it seems to apply only to Esri tables and features. I actually tried ArcSDESQLExecute and passed a SQL statement like "DELETE * FROM TABLE", but I keep hitting this "Sream"-type error. I need to set up my Python code to run this kind of admin task on the SQL table. Hope you guys can give some tips. Thanks
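A minimal sketch of the ArcSDESQLExecute route, with hypothetical connection file and table names. Note that T-SQL DELETE takes no asterisk, which may be part of the problem with the statement quoted above:

import arcpy

conn = arcpy.ArcSDESQLExecute(r'C:\Connections\prod.sde')   # hypothetical
# Delete selected rows (plain DELETE FROM, no *):
conn.execute("DELETE FROM dbo.MyTable WHERE LOAD_DATE < '2016-01-01'")
# Or wipe the table completely (requires appropriate privileges):
conn.execute('TRUNCATE TABLE dbo.MyTable')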
Posted 02-15-2016 01:09 PM

POST
Hi guys, I want to use PortalPy to automate some admin tasks in our Portal (v10.3). Is there sample code out there I can use to key in a proxy address? It seems that if I just use the code in the samples, I get errors like "request was refused". Thanks
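A minimal sketch, assuming the portalpy.Portal constructor in your copy accepts proxy_host/proxy_port keywords (worth verifying in portalpy.py); the URL, credentials, and proxy address are hypothetical:

import portalpy

portal = portalpy.Portal('https://myportal.example.com/arcgis',
                         'admin', 'secret',
                         proxy_host='proxy.example.com',   # hypothetical proxy
                         proxy_port=8080)
# ...then run the admin calls from the samples against this portal object.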
Posted 02-02-2016 11:42 AM

POST
Yup, sorry too, Richard, if I didn't fully consider your requirements. Luke has touched on it already. I highly recommend switching to the arcpy.da (data access) module that Luke S mentioned in his reply to you. It's more flexible than the way you're using the update cursor now, and it's pretty much what I used in the code I showed you. So, updating my code to cover your options of entering a value or using values from a field:

import arcpy
import os

def updateViaField(featureclass, fields, wc):
    # Copy the value field (fields[1]) into the target field (fields[0]).
    with arcpy.da.UpdateCursor(featureclass, fields, where_clause=wc) as cur:
        for row in cur:
            row[0] = row[1]
            cur.updateRow(row)

def updateViaValue(featureclass, fields, wc, updateValue):
    # Write a constant value into the target field (fields[0]).
    with arcpy.da.UpdateCursor(featureclass, fields, where_clause=wc) as cur:
        for row in cur:
            row[0] = updateValue
            cur.updateRow(row)

if __name__ == '__main__':
    '''Inputs'''
    workspace = r'C:/DuluthGIS/SchemaUpdate/SDESchemaTest.gdb/Gas'
    feature = 'gasDistributionMain'
    featureclass = os.path.join(workspace, feature)
    updateProcess = 'Field'
    updateField = 'OPERATINGPRESSURE'
    wc = '"MAOP" >= 0'
    updateValue = 123

    '''Pass the condition'''
    if updateProcess == 'Field':
        valueField = 'MAOP'
        fields = [updateField, valueField]
        updateViaField(featureclass, fields, wc)
    else:
        fields = [updateField]
        updateViaValue(featureclass, fields, wc, updateValue)
Posted 01-08-2016 11:52 AM

POST
I think this is just what you want to do:

import arcpy
import os

def updateField(feature, workspace):
    # Copy MAOP into OPERATINGPRESSURE for every row where MAOP >= 0.
    with arcpy.da.UpdateCursor(os.path.join(workspace, feature),
                               ['OPERATINGPRESSURE', 'MAOP'],
                               where_clause='"MAOP" >= 0') as cur:
        for row in cur:
            row[0] = row[1]
            cur.updateRow(row)

if __name__ == '__main__':
    workspace = r'C:/DuluthGIS/SchemaUpdate/SDESchemaTest.gdb/Gas'
    feature = 'gasDistributionMain'
    updateField(feature, workspace)
Posted 01-07-2016 01:57 PM

POST
Thanks heaps for all the feedback, guys. Since I don't want to create query layers in an MXD, I could probably just use arcpy.MakeQueryLayer_management and work with the records in_memory, I guess. But I just want to validate whether the "GeometryCollection" type can be recognized? In my case this type is just composed of multiple polygons, so pretty much like a "MultiPolygon".
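A minimal sketch of that route, with hypothetical connection file, query, and layer name. One caveat: the tool's shape_type parameter only offers POINT, MULTIPOINT, POLYLINE, and POLYGON, so a GEOMETRYCOLLECTION column would likely need to be cast to one of those in the query itself:

import arcpy

lyr = arcpy.MakeQueryLayer_management(
    r'C:\Connections\prod.sde',                   # hypothetical connection
    'parcels_qry',
    'SELECT OBJECTID, Shape FROM dbo.Parcels',    # hypothetical query
    'OBJECTID',
    'POLYGON')
arcpy.CopyFeatures_management(lyr, 'in_memory/parcels')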
Posted 12-08-2015 02:59 PM

POST
Sorry for the lack of detail earlier. Here is the traceback report:

Traceback (most recent call last):
  File "Path\ExtractDataToCSV.py", line 37, in <module>
    extract = in_connection.execute(sql)
  File "C:\Program Files (x86)\ArcGIS\Desktop10.3\ArcPy\arcpy\arcobjects\arcobjects.py", line 27, in execute
    return convertArcObjectToPythonObject(self._arc_object.Execute(*gp_fixargs(args)))
AttributeError: ArcSDESQLExecute: SreamBindOutputColumn ArcSDE Error -65 \uda10

So this happens when I include the geometry column that has multiple geometry types. Without it, the table loads fine. I can load the geometry column, but I have to do something like this: "cast(GeoLocation as varchar(8000)) as GeoLocation". That loads it as a string, but I still have to work out a process to convert the strings back into geometries. Wondering if there is a simpler or better way of reading SQL tables like this. Thanks Dan
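Taking the cast workaround one step further, a minimal sketch that pulls the geometry out as WKT in SQL and rebuilds arcpy geometries client-side; the connection file, table, column names, and SRID are hypothetical, .STAsText() assumes a SQL Server geometry column, and arcpy.FromWKT may still reject GEOMETRYCOLLECTION values:

import arcpy

conn = arcpy.ArcSDESQLExecute(r'C:\Connections\prod.sde')     # hypothetical
rows = conn.execute('SELECT Id, GeoLocation.STAsText() FROM dbo.Assets')
sr = arcpy.SpatialReference(4326)                             # hypothetical SRID
geoms = [(row[0], arcpy.FromWKT(row[1], sr)) for row in rows]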
Posted 12-06-2015 02:13 PM

POST
Hi guys, I've got this SQL table I'm working with. It has a geometry column, but it contains multiple geometry types: [u'POLYGON', u'LINESTRING', u'GEOMETRYCOLLECTION', u'POINT', u'MULTILINESTRING', u'MULTIPOLYGON']. I'm using arcpy's ArcSDESQLExecute and I seem to be hitting an error loading the column as is. I tried converting it to varchar, but I don't feel that's necessary. Any thoughts, guys? Regards, Chris P
Posted 12-06-2015 12:29 PM

POST
A colleague mentioned the GP service was emitting UTC datetime format, so I'll just create a function for converting the UTC datetime input to local time.
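A minimal sketch of such a converter using only the standard library; it derives the local offset from the epoch instead of hard-coding it, so it also survives DST changes:

import datetime
import time

def utc_to_local(utc_dt):
    # mktime interprets the tuple as local time; comparing the local and
    # UTC readings of the same epoch value yields the local offset.
    epoch = time.mktime(utc_dt.timetuple())
    offset = (datetime.datetime.fromtimestamp(epoch)
              - datetime.datetime.utcfromtimestamp(epoch))
    return utc_dt + offset

# e.g. utc_to_local(datetime.datetime(2015, 10, 27, 16, 30, 0))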
Posted 10-27-2015 08:27 PM

POST
Hi guys, I'm hitting an error with my GP service widget because my script expects a 24-hour time format value, but when I check the messages in the job directory, the 24-hour input (16:30:00) was converted to 12-hour format (4:30:00 am). And it's not even right, because it should be PM. Regards, Chris P
Posted 10-26-2015 08:41 PM

POST
Thanks, Vince, for all the feedback. I'm not really that familiar with server architecture or that experienced in publishing geoprocessing services, so most of the server jargon is pretty new to me, sorry. We think the extracted records, which would probably range from 10 to 50k records per transaction, do not need to be kept and could just stay in the scratch GDB until they get cleaned up somewhere in the process. But due to the current environment setup, the database and table I'm connecting to and extracting those records from sometimes has another process or service running against it, driven by jobs from other (non-GIS) teams. So connecting to it and querying it during the day can fluctuate from 5 to 15 minutes alone. I could alternatively write the feature-converted records to an SDE database so they persist, and separate the second tool to fetch the data from there and run the analysis.
Posted 10-13-2015 01:32 PM

POST
Thanks a lot for the response, Vince. I could easily merge my two tools to become one service when published. I figured that would remove my concern about re-using scratch workspaces, which as you pointed out is not an ideal direction to take. However, the one concern I'd have if my two tools act as one service is that if the second process (2nd tool) fails, redoing it means going through the first process again. You see, my first tool's function is to connect to a DB, extract records, and write them as features in the scratch GDB; the second tool then does analysis on these features. So if they are a single service and something fails in the second process, you'd have to re-extract the records again. Looking forward to your thoughts on this. Regards, Thanos
Posted 10-12-2015 09:02 PM