POST
|
Using 7-Zip, WinRAR, etc., right-click the .sd to get some options. You may be able to modify the JSON without decompressing by replacing the .json file inside the archive with the one you have modified.
06-16-2015
11:01 AM
|
0
|
0
|
506
|
POST
|
Try to stage the .sd file locally. Unzip it and edit the templates found within <sd file>/servicedescriptions/featureserver/featureserver.json. Then recompress the .sd (you may need to change the extension back to .sd) and try to publish through ArcMap, ArcPy, or by uploading the .sd to your organization.
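If it helps, here is a minimal Python sketch of swapping that member programmatically, assuming the .sd is zip-readable (some .sd files are 7z-compressed, in which case a tool like 7-Zip is needed instead of the zipfile module):

```python
import json
import shutil
import zipfile

def replace_member(archive_path, member, new_data):
    """Rewrite archive_path with `member` replaced by JSON-encoded new_data."""
    tmp = archive_path + ".tmp"
    with zipfile.ZipFile(archive_path) as src, \
         zipfile.ZipFile(tmp, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename == member:
                # Swap in the edited JSON for this one member
                dst.writestr(item.filename, json.dumps(new_data))
            else:
                # Copy every other member through unchanged
                dst.writestr(item, src.read(item.filename))
    shutil.move(tmp, archive_path)
```

With this you could load featureserver.json, edit the templates, and write the archive back without ever extracting it to disk.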
06-16-2015
10:49 AM
|
0
|
2
|
506
|
POST
|
Randy Burton, I have found out some more while testing.
1. I've extracted featureserver.json from an .sd file and confirmed the template order is wrong after staging is complete. Although I haven't checked, one may be able to sort the templates in the JSON, recompress the .sd file, and publish.
2. All "contentType" values are set to "" for esriPMS picture marker symbols every time I programmatically update the service definition via REST. It occurs regardless of whether changes are made or not.
3. When managing templates through the map viewer, I tried to pick a new image symbol for a template. After saving, the "contentType" was set to "image/png".
4. While using the style and template management panels in the map viewer, I found that changing the symbol size while keeping the image did not update "contentType" to "image/png".
5. The map viewer doesn't care about "contentType", but Collector for ArcGIS offline does.
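A minimal sketch of the sort idea from point 1, assuming the layer JSON exposes a top-level "templates" list whose entries carry a "name" key (the structure here is an assumption based on the extracted file):

```python
def sort_templates(layer):
    """Return a copy of the layer JSON with templates sorted by name."""
    layer = dict(layer)  # shallow copy so the input is left untouched
    layer["templates"] = sorted(
        layer.get("templates", []),
        key=lambda t: t.get("name", ""),
    )
    return layer
```

The sorted copy could then be written back into featureserver.json before recompressing the .sd.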
06-16-2015
10:42 AM
|
0
|
0
|
506
|
POST
|
The hydrant example from the blog post Related Tables – Exploring New Ways to use Collector for ArcGIS | ArcGIS Blog nests a 1:M relationship under another 1:M relationship. It is laid out like so: Layer --> Table --> Table. I'm trying to use a service with all layers instead of tables. Has anyone successfully used Collector offline with a Layer --> Layer --> Layer relationship based on GlobalIDs? I can download web maps in Collector for ArcGIS 10.3 and 10.3.1 and create features for the first two layers. However, Collector always crashes when attempting to create a feature for the deepest layer. My service contains lines related to many points, with each point related to many points. Any working example with just 3 layers would help point me in the right direction. Thanks, Davin
06-15-2015
09:32 PM
|
0
|
0
|
2331
|
POST
|
Check out Feature type sort order in drop-down list for some info.
06-15-2015
11:13 AM
|
0
|
0
|
385
|
POST
|
My feature classes don't have subtypes. The feature templates and symbology are based on a text field with a coded value domain. The symbology is matched to a style file with names matching the domain. I'm curious whether your feature class has a similar setup?
06-11-2015
12:02 PM
|
0
|
7
|
1850
|
POST
|
I've suffered this known issue for 2 years now without a solution: Wrong Feature Template Order. Here is another post with the same issue: Order of Feature Templates Not Honored in ArcGIS Online. For me, the template order is already incorrect when the service definition is staged for publishing. I too have resorted to updating the service definition's JSON through REST. Maybe this will be helpful for sorting: https://jsfiddle.net/Rx6jQ/15/ . I too have noticed that "contentType" is blank after altering the service definition. I'm not sure whether this impacts Collector, though.
06-11-2015
11:48 AM
|
2
|
3
|
1852
|
POST
|
Has this release occurred? I'm still getting the issue with a related table in the web map.
05-28-2015
08:00 AM
|
0
|
1
|
814
|
POST
|
I have what appears to be a simple requirement for my Collector for ArcGIS users. Simply put, I need field1 to be calculated as field2 x field3 upon feature insert or update. Is this possible to do outside of Collector for ArcGIS? Scenario:
1. User submits a new point and fills in field2 and field3.
2. User views the feature popup and sees field1 has been populated.
Thank you.
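One hedged workaround sketch, assuming a scheduled script outside Collector that finds features missing field1 and writes field2 * field3 back through the feature layer's REST Calculate operation; build_calc_request and the field names are hypothetical:

```python
def build_calc_request(oid, field2, field3):
    """Build the where clause and calcExpression for updating one feature.

    The returned dict matches the shape of the REST Calculate operation's
    parameters: calcExpression is an array of field/value objects.
    """
    return {
        "where": "OBJECTID = {}".format(oid),
        "calcExpression": [{"field": "field1", "value": field2 * field3}],
    }
```

A scheduled task could query for `field1 IS NULL`, then post one of these requests per feature (or batch them with a broader where clause).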
04-16-2015
11:38 AM
|
0
|
0
|
2693
|
POST
|
"Indicates whether the data is editable via the widget" from the reference. I'd interpret this to mean values can be edited within the table, but I don't see any difference between ReadOnly;true or false.
03-28-2015
10:45 PM
|
1
|
2
|
3966
|
POST
|
The REST API shows the following example for query: "where" : "NAME LIKE 'A%'". I'm trying to label only features that have values. I've tried to update the definition of the service and of the specific feature layer with "where" : "NAME is not null", but the labels disappear completely. I've also tried the example Esri provides: "where" : "NAME LIKE 'A%'". However, something happens in the background to encode the value: "where":"NAME LIKE \u0027A%\u0027". How do I set labelingInfo with where:"Field is not null"?
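For reference, a small sketch showing that serializing an "is not null" label class with json.dumps introduces no \u0027 escaping, since the clause contains no quote characters; the label class structure here is illustrative, not the full drawingInfo payload:

```python
import json

# Illustrative label class: label only features whose NAME has a value.
label_class = {
    "labelExpression": "[NAME]",
    "where": "NAME is not null",
}

# Serialize the way an updateDefinition payload would carry it.
body = json.dumps({"labelingInfo": [label_class]})
```

The \u0027 escaping seen with the LIKE example only affects the single quotes around 'A%'; a null check never needs quoting, so if the labels still vanish the problem is likely elsewhere in the definition update.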
03-26-2015
07:35 AM
|
0
|
0
|
3149
|
POST
|
The interface tells you to name the replica, but ignores the input. The random strings used for the names make it incredibly difficult to script automation. Here is what I get with syncModel=none: FileGDB: C:\Downloads\cce93b7ebc4c47d18f656ae7ea613ce8\447be11bdf3c4a64b1c229caada76d6a.gdb SQLITE: C:\Downloads\a021300b5c8f445eac500e01fe713e07.geodatabase JSON: C:\Downloads\748a4e8ab624424ebeee38f14bbd1d95.json The zip name is even different from the gdb! The purpose of the replica name eludes me. What value does requiring a replica name provide? Additionally, what is the point of generating a random name for a file that has no sync model, when you cannot search through the replicafiles directory?
03-11-2015
08:15 PM
|
0
|
0
|
4079
|
POST
|
I figured it out. I was trying to update 2 different fields at once, and the field calculator understandably only takes one. It's crazy how much time I spent looking up information on Unicode issues only to find it was a misuse of the parameter. Can't thank you enough, Richard. I should be able to fumble through the rest of the conditionals.
03-03-2015
02:36 PM
|
0
|
0
|
432
|
POST
|
The last try at line 230 is where the REST call succeeds, but the layer's field values stay null. A few lines prior, updateRow[1] gets an integer from the dictionary CustomerIDDict: # Set Null tag to first available integer updateRow[1] = templist[0] Then updateRow[1] gets passed into: print fl.calculate(where="OBJECTID={} AND CustomerID={}".format(updateRow[3], updateRow[0]), calcExpression={"field" : "TAG", "value" : "{}".format(updateRow[1]),"field" : "PropertyID_WO", "value" : "{}".format(updateRow[2])}) I'm guessing it turns into calcExpression={"field" : "TAG", "value" : "4",...}, but I'm not certain. Also, at line 97 I added a decode and a TypeError went away. I can't tell whether this decode causes issues in the dictionary later on, though: # Build a composite key value from 2 fields: CustomerID;TAG keyValue = '{};{}'.format(searchRow[0], searchRow[1]).decode('utf-8')
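A small illustration of what may be happening with that calcExpression, using illustrative values: a Python dict literal cannot hold two entries under the same key, so the second "field"/"value" pair silently overwrites the first before anything is sent. The REST Calculate operation takes calcExpression as an array of field/value objects, which keeps both fields:

```python
# A dict literal with a repeated key keeps only the last value, so this
# collapses to just the PropertyID_WO pair - TAG never reaches the server.
expr = {"field": "TAG", "value": "4",
        "field": "PropertyID_WO", "value": "17"}

# calcExpression as an array of objects preserves both field updates.
calc_expression = [
    {"field": "TAG", "value": "4"},
    {"field": "PropertyID_WO", "value": "17"},
]
```

That would explain why PropertyID_WO succeeds while TAG stays blank.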
03-03-2015
10:09 AM
|
0
|
2
|
432
|
POST
|
I've made some headway. My script successfully inserts new rows into SQL Server with pyodbc. However, I've got an encoding issue when updating the hosted layer. The SQL table columns are varchar, and I assume the hosted layer's are nvarchar. The call to update the hosted layer succeeds, but the text column doesn't actually update; integer fields do. Here is my code. I've kept the actual column names, so the SQL Server table fields are: SiteID-->CustomerID, FeatureID-->PropertyID, and EasyID-->Location. The hosted layer fields are: SiteID-->CustomerID, FeatureID-->PropertyID_WO, and EasyID-->TAG. I've matched the sequence to prevent name issues. I don't know how to go further without figuring out how to update TAG with Location. # -*- coding: utf-8 -*-
# ---------------------------------------------------------------------------
#
import sys, os, datetime, arcpy, pyodbc, time, uuid
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60.
    return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# End hms_string
def timedif(end_datetime, start_datetime):
    seconds_elapsed = (end_datetime - start_datetime).total_seconds()
    return hms_string(seconds_elapsed)
# End timedif
start_time = datetime.datetime.now()
print "# Script start: {}".format(start_time)
from arcpy import env, da
from arcrest.security import AGOLTokenSecurityHandler
from arcrest.agol import FeatureService
from arcrest.agol import FeatureLayer
from arcrest.common.filters import LayerDefinitionFilter
from datetime import timedelta
print "# Imports loaded."
# Set Geoprocessing environments
Scratch_gdb = r'C:\Python\scratch.gdb'
arcpy.env.scratchWorkspace = Scratch_gdb
arcpy.env.workspace = Scratch_gdb
Local_Table = r'C:\Users\dkshokes\AppData\Roaming\ESRI\Desktop10.3\ArcCatalog\RTC03.sde\Rainbow.dbo.Property'
Output_Table_1 = "T1"
Expression_1 = r'"CustomerID" = 166602 AND "Location" is not Null'
Online_Trees = r'http://services.arcgis.com/###/arcgis/rest/services/###/FeatureServer/0'
username = "####"
password = "####"
proxy_port = None
proxy_url = None
agolSH = AGOLTokenSecurityHandler(username=username, password=password)
Output_Table_2 = "T2"
Expression_2 = '"CustomerID" = 166602'
fields_1 = ['CustomerID', 'Location', 'PropertyID', 'OID@', 'Description', 'CBH', 'PropertyQuantity', 'PropertyItemID', 'DateEntered', 'LastChangeDate', 'Special_Note']
fields_2 = ['CustomerID', 'TAG', 'PropertyID_WO', 'OID@', 'COMMON', 'DBH', 'QTY', 'PropertyItemID', 'CreationDate', 'EditDate', 'Notes', 'Creator', 'Editor', 'PruneYear', 'ProposalStatus', 'SoilYear']
print "\n# Online variables loaded. "
# Local variables:
T1 = Scratch_gdb + "\\" + Output_Table_1
print "# T1 Table: {}".format(T1)
print "# T1 Fields: {}".format(fields_1)
T2 = Scratch_gdb + "\\" + Output_Table_2
print "# T2 Table: {}".format(T2)
print "# T2 Fields: {}".format(fields_2)
FieldMap_1 = r"PropertyID \"PropertyID\" true true false 50 Long 0 10 ,First,#," + Local_Table + ",PropertyID,-1,-1;CustomerID \"CustomerID\" true true false 4 Long 0 10 ,First,#," + Local_Table + ",CustomerID,-1,-1;Location \"Location\" true true false 7 Text 0 0 ,First,#," + Local_Table + ",Location,-1,-1;PropertyItemID \"PropertyItemID\" true true false 4 Long 0 10 ,First,#," + Local_Table + ",PropertyItemID,-1,-1;Description \"Description\" true true false 255 Text 0 0 ,First,#," + Local_Table + ",Description,-1,-1;PropertyQuantity \"PropertyQuantity\" true true false 4 Long 0 10 ,First,#," + Local_Table + ",PropertyQuantity,-1,-1;CBH \"CBH\" true true false 8 Double 1 18 ,First,#," + Local_Table + ",CBH,-1,-1;Special_Note \"Special_Note\" true true false 255 Text 0 0 ,First,#," + Local_Table + ",Special_Note,-1,-1;Status \"Status\" true true false 10 Text 0 0 ,First,#," + Local_Table + ",Status,-1,-1;DateEntered \"DateEntered\" true true false 36 Date 0 0 ,First,#," + Local_Table + ",DateEntered,-1,-1;PropertyGroup \"PropertyGroup\" true true false 2 Short 0 1 ,First,#," + Local_Table + ",PropertyGroup,-1,-1;Removed \"Removed\" true true false 2 Short 0 1 ,First,#," + Local_Table + ",Removed,-1,-1;LastChangeDate \"LastChangeDate\" true true false 36 Date 0 0 ,First,#," + Local_Table + ",LastChangeDate,-1,-1"
if arcpy.Exists(T1):
    arcpy.Delete_management(T1)
    print "Previous T1 Found so it was deleted."
if arcpy.Exists(T2):
    arcpy.Delete_management(T2)
    print "Previous T2 Found so it was deleted."
#print "\n# Local variables loaded...Making temporary business table."
arcpy.TableToTable_conversion(Local_Table, Scratch_gdb, Output_Table_1, Expression_1, FieldMap_1, "")
#print "# Successfully made temporary business table."
#print "\n# Querying Feature Layer and Making Temporary Feature Class."
fl = FeatureLayer(Online_Trees, securityHandler=agolSH, initialize=False,proxy_url=proxy_url, proxy_port=proxy_port)
print fl.query(where=Expression_2, out_fields='*', returnGeometry=True, returnIDsOnly=False, returnCountOnly=False, returnFeatureClass=True, out_fc=Output_Table_2)
print "\n# Ready to compare." + " Time elapsed: {}".format(timedif(datetime.datetime.now(), start_time))
print "\n# Intialize T1 as a concatenated dictionary"
T1Dict = {}
# Initialize a list to hold any concatenated key duplicates found
T1KeyDups = []
# Open a search cursor and iterate rows
with arcpy.da.SearchCursor(T1, fields_1) as searchRows:
    for searchRow in searchRows:
        # Build a composite key value from 2 fields: CustomerID;TAG
        keyValue = '{};{}'.format(searchRow[0], searchRow[1]).decode('utf-8')
        if not keyValue in T1Dict and searchRow[1] is not None:
            # Key not in dictionary. Add Key pointing to a list of a list of field values
            T1Dict[keyValue] = [list(searchRow[0:])]
            #print "T1Dict keyValue: {}".format(keyValue)
        elif searchRow[1] is not None:
            # Key in dictionary. Append a list of field values to the list the Key points to
            T1KeyDups.append(keyValue)
            T1Dict[keyValue].append(list(searchRow[0:]))
            #print T1Dict[keyValue][0:]
del searchRows, searchRow
##print "\n# Sample of how to access the keys, record count, and record values of the dictionary"
##for keyValue in T1Dict.keys():
## for i in range(0, len(T1Dict[keyValue])):
## print "T1Dict Key CustomerID;TAG is {} Record:{} of {} PropertyID_WO:{} ObjectID:{}.".format(keyValue, i+1, len(T1Dict[keyValue]), T1Dict[keyValue][0], T1Dict[keyValue][1])
if len(T1KeyDups) > 0:
    print "T1KeyDups keys found! {}".format(T1KeyDups)
    # fix T1 here
else:
    print ("No Duplicates!")
print "\n# T1Dict Complete:" + " Time elapsed: {}".format(timedif(datetime.datetime.now(), start_time))
print "\n# Get the list of TAGs associated with each CustomerID in a dictionary for T1"
T1CustomerIDDict = {}
with arcpy.da.SearchCursor(T1, fields_1) as searchRows:
    for searchRow in searchRows:
        keyValue = searchRow[0]
        if not keyValue in T1CustomerIDDict:
            # Key not in dictionary. Add Key pointing to a list of a list of field values
            T1CustomerIDDict[keyValue] = [searchRow[0:]]
        else:
            # Append a list of field values to the list the Key points to
            T1CustomerIDDict[keyValue].append(searchRow[0:])
            #print T1CustomerIDDict[keyValue]
del searchRows, searchRow
print "\n# T1CustomerIDDict Complete"
print "\n# Get the list of TAGs associated with each CustomerID in a dictionary for T2"
T2CustomerIDDict = {}
with arcpy.da.SearchCursor(T2, fields_2) as searchRows:
    for searchRow in searchRows:
        keyValue = searchRow[0]
        if not keyValue in T2CustomerIDDict:
            T2CustomerIDDict[keyValue] = [searchRow[0:]]
        else:
            T2CustomerIDDict[keyValue].append(searchRow[0:])
del searchRows, searchRow
print "\n# T2CustomerIDDict Complete"
print "\n# Creating CustomerIDDict"
CustomerIDDict = {}
for keyValue in T2CustomerIDDict.keys():
    intList = []
    for TAG in T2CustomerIDDict[keyValue]:
        try:
            if TAG[1]:
                TAG = int(TAG[1])
                intList.append(TAG)
        except:
            print "Error with T2.TAG"
    if keyValue in T1CustomerIDDict:
        for TAG in T1CustomerIDDict[keyValue]:
            try:
                if TAG[1]:
                    TAG = int(TAG[1])
                    intList.append(TAG)
            except:
                print "Error with T1.TAG"
    #print "\nnumeric T2.TAGs: {}".format(set(intList))
    #print "\nnumeric T1.TAGs: {}".format(set(intList))
    # remove already used numbers out of the numbers from 1 to 9999
    # and get a sorted list for use with an update cursor.
    CustomerIDDict[keyValue] = sorted(set(range(1, 200)) - set(intList))
print "\n# CustomerIDDict Complete"
print "\n# Inserting and Updating" + " Time elapsed: {}".format(timedif(datetime.datetime.now(), start_time))
### This cursor sucessfully inserts into sql server a new row with an unused TAG and attributes from the hosted feature.
### It then updates the hosted layer fields propertyID_WO and TAG.
## propertyID_WO succeeds but TAG is blank
#
print "# Working with pyodbc sql stuff"
cnxn_SQL = pyodbc.connect(Trusted_Connection='yes', Driver = '{SQL Server Native Client 10.0}', Server = '###', database = '###')
cursor_SQL = cnxn_SQL.cursor()
with arcpy.da.UpdateCursor(T2, fields_2) as updateRows:
    for updateRow in updateRows:
        # if T2.Row has TAG and CustomerID but no PropertyID compare and look for match in T1
        if (updateRow[2] == None) and not (updateRow[1] == None):
            print "1st if TAG {} PropertID {}".format(updateRow[1], updateRow[2])
            for valu in T1CustomerIDDict[updateRow[0]]:
                if valu[1] == updateRow[1]:
                    try:
                        updateRow[2] = valu[2]
                        print "# Updating PropertyID_WO for TAG:{} hosted layer with {}".format(updateRow[1], updateRow[2])
                        fl.calculate(where="OBJECTID={} AND CustomerID={} AND TAG={}".format(updateRow[3], updateRow[0], updateRow[1]), calcExpression={"field" : "PropertyID_WO", "value" : "{}".format(updateRow[2])})
                        print "# Success"
                    except:
                        print "# Failed Hosted Update of PropertyID"
        if (updateRow[1] == None) and (updateRow[2] == None):
            print "3rd if TAG {} PropertID {}".format(updateRow[1], updateRow[2])
            newPropID = None
            print CustomerIDDict[updateRow[0]][0]
            # Set templist to show available integer TAGs for 1 CustomerID at a time.
            templist = CustomerIDDict[updateRow[0]]
            # Set Null tag to first available integer
            print updateRow[1:3]
            updateRow[1] = templist[0]
            # Insert to local sql server table, update online from sql results, then Remove used tag.
            print updateRow[1:3]
            try:
                # Get ItemID required by insert trigger
                propitemid = cursor_SQL.execute("SELECT [PropertyItemID] from [TREESPECIESRELATIONSHIPTABLE] Where [COMMON__GIS_name_] = ?", updateRow[4]).fetchone()[0]
                # Insert New trees into T1
                cursor_SQL.execute("IF NOT EXISTS (SELECT 1 FROM Property WHERE CustomerID = ? and Location = ?) INSERT INTO Property(CustomerID, Location, Description, CBH, PropertyQuantity, PropertyItemID, DateEntered, LastChangeDate, Special_Note) values (?,?,?,?,?,?,?,?,?)", updateRow[0], updateRow[0], updateRow[0], updateRow[1], updateRow[4], updateRow[5], updateRow[6], propitemid, updateRow[8], updateRow[9], updateRow[10])
                cnxn_SQL.commit()
                # Grab the primary key for updating hosted layer
                newPropID = cursor_SQL.execute("Select PropertyID From Property where CustomerID=? AND Location=?", updateRow[0], updateRow[1]).fetchone()[0]
                print "Finished Insert for New PropertyID {}".format(newPropID)
                cnxn_SQL.commit()
                updateRow[2] = newPropID
                templist.remove(updateRow[1])
                print updateRow[1:3]
            except:
                print "# Failed SQL Server Insert"
            try:
                print "# Updating TAG and PropertyID_WO for hosted layer"
                print fl.calculate(where="OBJECTID={} AND CustomerID={}".format(updateRow[3], updateRow[0]), calcExpression={"field" : "TAG", "value" : "{}".format(updateRow[1]), "field" : "PropertyID_WO", "value" : "{}".format(updateRow[2])})
            except:
                print "# Failed Hosted Update of TAG and PropertyID"
        updateRows.updateRow(updateRow)
del updateRows, updateRow
cnxn_SQL.close()
#
##
###
###
print "Finished!" + " Time elapsed: {}".format(timedif(datetime.datetime.now(), start_time))
03-03-2015
09:10 AM
|
0
|
4
|
432
|