POST
The last try, at line 230, is where the REST call succeeds but the layer's field values stay null. A few lines prior, updateRow[1] gets an integer from the dictionary CustomerIDDict:

# Set Null tag to first available integer
updateRow[1] = templist[0]

Then updateRow[1] gets put into:

print fl.calculate(where="OBJECTID={} AND CustomerID={}".format(updateRow[3], updateRow[0]), calcExpression={"field" : "TAG", "value" : "{}".format(updateRow[1]), "field" : "PropertyID_WO", "value" : "{}".format(updateRow[2])})

I'm guessing it turns into calcExpression={"field" : "TAG", "value" : "4", ....}, but I'm not certain.

Also, at line 97 I added a decode and a TypeError went away. I can't tell if this decode causes issues in the dictionary later on, though:

# Build a composite key value from 2 fields: CustomerID;TAG
keyValue = '{};{}'.format(searchRow[0], searchRow[1]).decode('utf-8')
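(Editor's sketch, not from the original thread.) A Python-level detail that may explain the null TAG: a dict literal that repeats the "field" and "value" keys keeps only the last pair, so the TAG pair above may never reach the REST call at all. The minimal sketch below demonstrates that behavior and one possible workaround of sending a separate calculate per field; fl and updateRow are assumed to be the objects from the posted script, and the per-field call simply mirrors the single-field call that already works for PropertyID_WO.

# Duplicate keys in a dict literal: only the last "field"/"value" pair survives.
expr = {"field": "TAG", "value": "4", "field": "PropertyID_WO", "value": "123"}
print expr   # -> {'field': 'PropertyID_WO', 'value': '123'} (key order may vary); no TAG pair left

# Possible workaround sketch (assumes fl and updateRow exist as in the posted script):
# issue one calculate call per field instead of one dict with repeated keys.
def calc_fields_separately(fl, updateRow):
    where_clause = "OBJECTID={} AND CustomerID={}".format(updateRow[3], updateRow[0])
    for field, value in [("TAG", updateRow[1]), ("PropertyID_WO", updateRow[2])]:
        print fl.calculate(where=where_clause,
                           calcExpression={"field": field, "value": "{}".format(value)})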
Posted 03-03-2015 10:09 AM
POST
I've made some headway. My script is successfully inserting new rows into SQL Server with pyodbc. However, I've got an encoding issue when updating the hosted layer. The SQL table columns are varchar, and I assume the hosted layer's are nvarchar. The call to update the hosted layer succeeds, but the text column doesn't actually update; integer fields do. Here is my code. I've kept the actual column names, so the SQL Server table fields are SiteID-->CustomerID, FeatureID-->PropertyID, and EasyID-->Location, and the hosted layer fields are SiteID-->CustomerID, FeatureID-->PropertyID_WO, and EasyID-->TAG. I've matched the field order to prevent name issues. I don't know how to go further without figuring out how to update TAG with Location.
# -*- coding: utf-8 -*-
# ---------------------------------------------------------------------------
#
import sys, os, datetime, arcpy, pyodbc, time, uuid
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60.
    return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# End hms_string
def timedif(end_datetime, start_datetime):
    seconds_elapsed = (end_datetime - start_datetime).total_seconds()
    return hms_string(seconds_elapsed)
# End timedif
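# Example (added note, not in the original script): hms_string(3725.5)
# returns "1:02:05.50", i.e. elapsed seconds formatted as h:mm:ss.ss.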
start_time = datetime.datetime.now()
print "# Script start: {}".format(start_time)
from arcpy import env, da
from arcrest.security import AGOLTokenSecurityHandler
from arcrest.agol import FeatureService
from arcrest.agol import FeatureLayer
from arcrest.common.filters import LayerDefinitionFilter
from datetime import timedelta
print "# Imports loaded."
# Set Geoprocessing environments
Scratch_gdb = r'C:\Python\scratch.gdb'
arcpy.env.scratchWorkspace = Scratch_gdb
arcpy.env.workspace = Scratch_gdb
Local_Table = r'C:\Users\dkshokes\AppData\Roaming\ESRI\Desktop10.3\ArcCatalog\RTC03.sde\Rainbow.dbo.Property'
Output_Table_1 = "T1"
Expression_1 = r'"CustomerID" = 166602 AND "Location" is not Null'
Online_Trees = r'http://services.arcgis.com/###/arcgis/rest/services/###/FeatureServer/0'
username = "####"
password = "####"
proxy_port = None
proxy_url = None
agolSH = AGOLTokenSecurityHandler(username=username, password=password)
Output_Table_2 = "T2"
Expression_2 = '"CustomerID" = 166602'
fields_1 = ['CustomerID', 'Location', 'PropertyID', 'OID@', 'Description', 'CBH', 'PropertyQuantity', 'PropertyItemID', 'DateEntered', 'LastChangeDate', 'Special_Note']
fields_2 = ['CustomerID', 'TAG', 'PropertyID_WO', 'OID@', 'COMMON', 'DBH', 'QTY', 'PropertyItemID', 'CreationDate', 'EditDate', 'Notes', 'Creator', 'Editor', 'PruneYear', 'ProposalStatus', 'SoilYear']
print "\n# Online variables loaded. "
# Local variables:
T1 = Scratch_gdb + "\\" + Output_Table_1
print "# T1 Table: {}".format(T1)
print "# T1 Fields: {}".format(fields_1)
T2 = Scratch_gdb + "\\" + Output_Table_2
print "# T2 Table: {}".format(T2)
print "# T2 Fields: {}".format(fields_2)
FieldMap_1 = r"PropertyID \"PropertyID\" true true false 50 Long 0 10 ,First,#," + Local_Table + ",PropertyID,-1,-1;CustomerID \"CustomerID\" true true false 4 Long 0 10 ,First,#," + Local_Table + ",CustomerID,-1,-1;Location \"Location\" true true false 7 Text 0 0 ,First,#," + Local_Table + ",Location,-1,-1;PropertyItemID \"PropertyItemID\" true true false 4 Long 0 10 ,First,#," + Local_Table + ",PropertyItemID,-1,-1;Description \"Description\" true true false 255 Text 0 0 ,First,#," + Local_Table + ",Description,-1,-1;PropertyQuantity \"PropertyQuantity\" true true false 4 Long 0 10 ,First,#," + Local_Table + ",PropertyQuantity,-1,-1;CBH \"CBH\" true true false 8 Double 1 18 ,First,#," + Local_Table + ",CBH,-1,-1;Special_Note \"Special_Note\" true true false 255 Text 0 0 ,First,#," + Local_Table + ",Special_Note,-1,-1;Status \"Status\" true true false 10 Text 0 0 ,First,#," + Local_Table + ",Status,-1,-1;DateEntered \"DateEntered\" true true false 36 Date 0 0 ,First,#," + Local_Table + ",DateEntered,-1,-1;PropertyGroup \"PropertyGroup\" true true false 2 Short 0 1 ,First,#," + Local_Table + ",PropertyGroup,-1,-1;Removed \"Removed\" true true false 2 Short 0 1 ,First,#," + Local_Table + ",Removed,-1,-1;LastChangeDate \"LastChangeDate\" true true false 36 Date 0 0 ,First,#," + Local_Table + ",LastChangeDate,-1,-1"
if arcpy.Exists(T1):
    arcpy.Delete_management(T1)
    print "Previous T1 Found so it was deleted."
if arcpy.Exists(T2):
    arcpy.Delete_management(T2)
    print "Previous T2 Found so it was deleted."
#print "\n# Local variables loaded...Making temporary business table."
arcpy.TableToTable_conversion(Local_Table, Scratch_gdb, Output_Table_1, Expression_1, FieldMap_1, "")
#print "# Successfully made temporary business table."
#print "\n# Querying Feature Layer and Making Temporary Feature Class."
fl = FeatureLayer(Online_Trees, securityHandler=agolSH, initialize=False,proxy_url=proxy_url, proxy_port=proxy_port)
print fl.query(where=Expression_2, out_fields='*', returnGeometry=True, returnIDsOnly=False, returnCountOnly=False, returnFeatureClass=True, out_fc=Output_Table_2)
print "\n# Ready to compare." + " Time elapsed: {}".format(timedif(datetime.datetime.now(), start_time))
print "\n# Intialize T1 as a concatenated dictionary"
T1Dict = {}
# Initialize a list to hold any concatenated key duplicates found
T1KeyDups = []
# Open a search cursor and iterate rows
with arcpy.da.SearchCursor(T1, fields_1) as searchRows:
    for searchRow in searchRows:
        # Build a composite key value from 2 fields: CustomerID;TAG
        keyValue = '{};{}'.format(searchRow[0], searchRow[1]).decode('utf-8')
        if not keyValue in T1Dict and searchRow[1] is not None:
            # Key not in dictionary. Add Key pointing to a list of a list of field values
            T1Dict[keyValue] = [list(searchRow[0:])]
            #print "T1Dict keyValue: {}".format(keyValue)
        elif searchRow[1] is not None:
            # Key in dictionary. Append a list of field values to the list the Key points to
            T1KeyDups.append(keyValue)
            T1Dict[keyValue].append(list(searchRow[0:]))
            #print T1Dict[keyValue][0:]
del searchRows, searchRow
##print "\n# Sample of how to access the keys, record count, and record values of the dictionary"
##for keyValue in T1Dict.keys():
## for i in range(0, len(T1Dict[keyValue])):
## print "T1Dict Key CustomerID;TAG is {} Record:{} of {} PropertyID_WO:{} ObjectID:{}.".format(keyValue, i+1, len(T1Dict[keyValue]), T1Dict[keyValue][0], T1Dict[keyValue][1])
if len(T1KeyDups) > 0:
    print "T1KeyDups keys found! {}".format(T1KeyDups)
    # fix T1 here
else:
    print ("No Duplicates!")
print "\n# T1Dict Complete:" + " Time elapsed: {}".format(timedif(datetime.datetime.now(), start_time))
print "\n# Get the list of TAGs associated with each CustomerID in a dictionary for T1"
T1CustomerIDDict = {}
with arcpy.da.SearchCursor(T1, fields_1) as searchRows:
    for searchRow in searchRows:
        keyValue = searchRow[0]
        if not keyValue in T1CustomerIDDict:
            # Key not in dictionary. Add Key pointing to a list of a list of field values
            T1CustomerIDDict[keyValue] = [searchRow[0:]]
        else:
            # Append a list of field values to the list the Key points to
            T1CustomerIDDict[keyValue].append(searchRow[0:])
        #print T1CustomerIDDict[keyValue]
del searchRows, searchRow
print "\n# T1CustomerIDDict Complete"
print "\n# Get the list of TAGs associated with each CustomerID in a dictionary for T2"
T2CustomerIDDict = {}
with arcpy.da.SearchCursor(T2, fields_2) as searchRows:
    for searchRow in searchRows:
        keyValue = searchRow[0]
        if not keyValue in T2CustomerIDDict:
            T2CustomerIDDict[keyValue] = [searchRow[0:]]
        else:
            T2CustomerIDDict[keyValue].append(searchRow[0:])
del searchRows, searchRow
print "\n# T2CustomerIDDict Complete"
print "\n# Creating CustomerIDDict"
CustomerIDDict = {}
for keyValue in T2CustomerIDDict.keys():
    intList = []
    for TAG in T2CustomerIDDict[keyValue]:
        try:
            if TAG[1]:
                TAG = int(TAG[1])
                intList.append(TAG)
        except:
            print "Error with T2.TAG"
    if keyValue in T1CustomerIDDict:
        for TAG in T1CustomerIDDict[keyValue]:
            try:
                if TAG[1]:
                    TAG = int(TAG[1])
                    intList.append(TAG)
            except:
                print "Error with T1.TAG"
    #print "\nnumeric T2.TAGs: {}".format(set(intList))
    #print "\nnumeric T1.TAGs: {}".format(set(intList))
    # remove already used numbers out of the numbers from 1 to 9999
    # and get a sorted list for use with an update cursor.
    CustomerIDDict[keyValue] = sorted(set(range(1, 200)) - set(intList))
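# Added illustration (not in the original script): if a customer's used TAGs
# are {1, 2, 4}, the available list above begins [3, 5, 6, 7, ...] up to 199.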
print "\n# CustomerIDDict Complete"
print "\n# Inserting and Updating" + " Time elapsed: {}".format(timedif(datetime.datetime.now(), start_time))
### This cursor successfully inserts into sql server a new row with an unused TAG and attributes from the hosted feature.
### It then updates the hosted layer fields propertyID_WO and TAG.
## propertyID_WO succeeds but TAG is blank
#
print "# Working with pyodbc sql stuff"
cnxn_SQL = pyodbc.connect(Trusted_Connection='yes', Driver = '{SQL Server Native Client 10.0}', Server = '###', database = '###')
cursor_SQL = cnxn_SQL.cursor()
with arcpy.da.UpdateCursor(T2, fields_2) as updateRows:
    for updateRow in updateRows:
        # if T2.Row has TAG and CustomerID but no PropertyID compare and look for match in T1
        if (updateRow[2] == None) and not (updateRow[1] == None):
            print "1st if TAG {} PropertID {}".format(updateRow[1],updateRow[2])
            for valu in T1CustomerIDDict[updateRow[0]]:
                if valu[1] == updateRow[1]:
                    try:
                        updateRow[2] = valu[2]
                        print "# Updating PropertyID_WO for TAG:{} hosted layer with {}".format(updateRow[1],updateRow[2])
                        fl.calculate(where="OBJECTID={} AND CustomerID={} AND TAG={}".format(updateRow[3], updateRow[0], updateRow[1]), calcExpression={"field" : "PropertyID_WO", "value" : "{}".format(updateRow[2])})
                        print "# Success"
                    except:
                        print "# Failed Hosted Update of PropertyID"
        if (updateRow[1] == None) and (updateRow[2] == None):
            print "3rd if TAG {} PropertID {}".format(updateRow[1],updateRow[2])
            newPropID = None
            print CustomerIDDict[updateRow[0]][0]
            # Set templist to show available integer TAGs for 1 CustomerID at a time.
            templist = CustomerIDDict[updateRow[0]]
            # Set Null tag to first available integer
            print updateRow[1:3]
            updateRow[1] = templist[0]
            # Insert to local sql server table, update online from sql results, then Remove used tag.
            print updateRow[1:3]
            try:
                # Get ItemID required by insert trigger
                propitemid = cursor_SQL.execute("SELECT [PropertyItemID] from [TREESPECIESRELATIONSHIPTABLE] Where [COMMON__GIS_name_] = ?",updateRow[4]).fetchone()[0]
                # Insert New trees into T1
                cursor_SQL.execute("IF NOT EXISTS (SELECT 1 FROM Property WHERE CustomerID = ? and Location = ?) INSERT INTO Property(CustomerID, Location, Description, CBH, PropertyQuantity, PropertyItemID, DateEntered, LastChangeDate, Special_Note) values (?,?,?,?,?,?,?,?,?)", updateRow[0], updateRow[0], updateRow[0],updateRow[1],updateRow[4],updateRow[5],updateRow[6],propitemid,updateRow[8],updateRow[9],updateRow[10])
                cnxn_SQL.commit()
                # Grab the primary key for updating hosted layer
                newPropID = cursor_SQL.execute("Select PropertyID From Property where CustomerID=? AND Location=?", updateRow[0], updateRow[1]).fetchone()[0]
                print "Finished Insert for New PropertyID {}".format(newPropID)
                cnxn_SQL.commit()
                updateRow[2] = newPropID
                templist.remove(updateRow[1])
                print updateRow[1:3]
            except:
                print "# Failed SQL Server Insert"
            try:
                print "# Updating TAG and PropertyID_WO for hosted layer"
                print fl.calculate(where="OBJECTID={} AND CustomerID={}".format(updateRow[3], updateRow[0]), calcExpression={"field" : "TAG", "value" : "{}".format(updateRow[1]),"field" : "PropertyID_WO", "value" : "{}".format(updateRow[2])})
            except:
                print "# Failed Hosted Update of TAG and PropertyID"
        updateRows.updateRow(updateRow)
del updateRows, updateRow
cnxn_SQL.close()
#
##
###
###
print "Finished!" + " Time elapsed: {}".format(timedif(datetime.datetime.now(), start_time))
Posted 03-03-2015 09:10 AM
POST
Wow, thank you for these excellent resources! I will try to digest this over the weekend, but it looks very promising. Sorry that I didn't provide all of the aspects of this dilemma; I tried to simplify it enough that Python masters like yourself wouldn't have to read a novel. As you ascertained, this is complex.

The business table (T1) sits on a SQL Server 2005 instance, unsupported since 10.2. EasyID is a 7-character string used for labeling at a site, and it is the syncing bane of my existence. Users are allowed to assign "C5-C8" so that one FeatureID represents numerous real-world things, while C5, C6, C7, and C8 could also exist in T1 individually. It is a data quality mess. To combat this, the groupings will not make it to T2; instead C5, C6, C7, and/or C8 will be digitized by the T2 user, if so desired. Luckily, users are unable to delete rows and can't view or change the FeatureID or SiteID.

The other table (T2) and a copy of the Sites table are layers in a hosted service for collecting new site features. I update the Sites with a scheduled script, and thanks to Collector for ArcGIS 10.3 honoring relationships, all features maintain their SiteID. In short, users are limited to picking an EasyID during feature creation in either table, and they could possibly update EasyID on T2. On the off chance they do change EasyID on a feature that exists in both T1 and T2, there will be a script that joins by FeatureID and updates the one with the earlier edit date. I could go on and on, but I don't want to spoil your weekend. Thanks again, Richard Fairhurst!
Posted 02-27-2015 04:02 PM
POST
Richard, I haven't gotten far enough to get any errors. I don't know how to build/filter the dictionary for T2.OID, T2.SiteID, and the string EasyID, then gather only the integer EasyIDs and return the first missing integer starting from 1. Without it I can't test updating Null EasyIDs in new T2 features. Thank you for posting that link to Stack Exchange! It looks very similar to the components of my scenario. I will update after testing the code.
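(Editor's illustration, not from the original reply.) For the "gather only integer EasyIDs" piece, one simple approach is to try int() on each value and skip anything that fails; the helper name and sample values below are made up for illustration.

def integer_easy_ids(easy_id_values):
    # Keep only values that are whole numbers; skip '4A', '6-7', 'S', None, etc.
    out = []
    for value in easy_id_values:
        try:
            out.append(int(value))
        except (TypeError, ValueError):
            pass
    return out

print integer_easy_ids(['1', '3', '4A', '6-7', 'S', '55', None])   # [1, 3, 55]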
Posted 02-27-2015 07:40 AM
POST
This script will run at night. The T1 features that don't have a SiteID in T2 are basically ignored until the T2 user adds the first feature with that SiteID. The user may have manually matched the SiteID and EasyID before the script runs. Additionally, T2 users adding the first occurrences of features in either table will skip adding EasyID and rely on unique ones being generated. One other note: once a feature has all 3 IDs it will never be changed by the users. Perhaps if I pre-filter T1 to T2 SiteIDs, the script would be simpler. I believe you have identified step 1 as redundant since it occurs later and max(EasyID) is gained across both tables.
Posted 02-26-2015 08:21 AM
POST
Good points, Xander Bakker. I have included example info and fixed the mention. Thanks.
Posted 02-26-2015 07:09 AM
POST
I'm having a little trouble setting up this script. Here are the bullet points:

- Two tables have the same three fields: [SiteID], [EasyID], [FeatureID]. For simplicity, let's pretend these are the only fields (besides OID).
- [FeatureID] is a unique integer for each feature created in Table 1.
- [SiteID] is a non-unique integer and groups features by location.
- [EasyID] is a string (typically a number) unique within the same [SiteID], but not unique in the field.
- Table 1 - The master table. New features have all three ID fields populated.
- Table 2 - New features always have [SiteID], sometimes [EasyID], and never a [FeatureID].
- Table 2 - Features with [SiteID] and [EasyID], and no [FeatureID], may have a matching [SiteID] and [EasyID] in Table 1. If so, update [FeatureID] in Table 2.
- Table 2 - Features missing [EasyID] need to be assigned the next number for that [SiteID] in either Table 1 or 2. If the current EasyIDs for a site are '1', '3', '4A', '6-7', 'S', and '55', new features would be '2', '4', '5', '6', '7', '8', etc.
- Table 2 - New features are given a [FeatureID] when inserted into Table 1. The features are then updated in Table 2.
- Table 1 - Finally, any features with a [SiteID] in Table 2, but whose [FeatureID] and [EasyID] are not in Table 2, are inserted into Table 2.

At the end of the script both tables should match. I've looked at using ModelBuilder with no luck. With python dictionaries and search cursors I'm getting stuck trying to join against [SiteID] and [EasyID] at the same time. I also don't know how to return the dictionaries with just integer EasyIDs and loop through updating with the next smallest integer. Here is what I've got so far. Much of it stems from what I read in Richard Fairhurst's post Turbo Charging Data Manipulation with Python Cursors and Dictionaries.

import arcpy
#Tables
T1 = r"C:\Python\Scratch.gdb\Table1"
T2 = r"C:\Python\Scratch.gdb\Table2"
fields = ["FeatureID", "SiteID", "EasyID"]
# Get FeatureID dictionaries for each Table
T1Dict = {r[0]:(r[0:]) for r in arcpy.da.SearchCursor(T1, fields)}
T2Dict = {r[0]:(r[0:]) for r in arcpy.da.SearchCursor(T2, fields)}
# Get SiteID+EasyID dictionaries for each Table
T1ConcatDict = {str(r[1]) + "," + str(r[2]):(r[0]) for r in arcpy.da.SearchCursor(T1, fields)}
T2ConcatDict = {str(r[1]) + "," + str(r[2]):(r[0]) for r in arcpy.da.SearchCursor(T2, fields)}
#First, If T2.FeatureID is Null but T2.EasyID and T2.SiteID are in T1, Update T2.FeatureID
with arcpy.da.UpdateCursor(T2, fields) as updateRows:
    for updateRow in updateRows:
        # store the Join value by combining 2 field values of the row being updated in a keyValue variable
        keyValue = str(updateRow[1]) + "," + str(updateRow[2])
        # verify that the keyValue is in the Dictionary
        if keyValue in T1ConcatDict and updateRow[0] is None and updateRow[1] is not None:
            # transfer the value stored under the keyValue from the dictionary to the updated field: FeatureID.
            updateRow[0] = T1ConcatDict[keyValue]
            updateRows.updateRow(updateRow)
#Rebuild Dictionary if it is needed again
T2ConcatDict = {str(r[1]) + "," + str(r[2]):(r[0]) for r in arcpy.da.SearchCursor(T2, fields)}
T2Dict = {r[0]:(r[0:]) for r in arcpy.da.SearchCursor(T2, fields)}
'''
#Get Max(EasyID) within SiteID
NumberList = []
for value in T1Dict[2]:
try:
NumberList.append(int(value))
except ValueError:
continue
T1EasyNumberDict = [s[2] for s in T1Dict[2] if s.isdigit()]
T1MaxEasyDict = max(T1EasyNumberDict)
'''
#Second, If T2.FeatureID and T2.EasyID are Null, Update T2.EasyID with next smallest number (as string) in either T1 or T2 for the specific SiteID
with arcpy.da.UpdateCursor(T2, fields) as updateRows:
    for updateRow in updateRows:
        # store the Join value by combining 2 field values of the row being updated in a keyValue variable
        keyValue = str(updateRow[1]) + "," + str(updateRow[2])
        # verify that the keyValue is in the Dictionary
        if keyValue in T1Dict and updateRow[0] is None and updateRow[1] is None:
            # transfer the value stored under the keyValue from the dictionary to the updated field.
            # Perhaps retrieve max int occurs here?
            updateRow[2] = max(T1Dict[keyValue][2], T2Dict[keyValue][2])
            updateRows.updateRow(updateRow)
#Third, Insert into T1 if T2.SiteID is not null and T2.EasyID is not null
#Forth, Update T2.FeatureID with T1.FeatureID from previous insert where T2.SiteID=T1.SiteID and T2.EasyID=T1.EasyID
#Lastly, Insert any T1 features into T2 where T1.EasyID not in (Select EasyID from T2 where T2.SiteID = T1.SiteID) and T1.FeatureID not in (Select FeatureID from T2)
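(Editor's sketch, not part of the original draft.) For the second step, one way to get the next smallest unused integers for a single SiteID is a set difference against a capped range. The function name, the 9999 cap, and the sample values are placeholders for illustration, not part of the script above.

def next_easy_ids(used_easyid_strings, how_many, cap=9999):
    # Collect the EasyIDs that are whole numbers; '4A', '6-7', 'S', None are skipped.
    used = set()
    for value in used_easyid_strings:
        try:
            used.add(int(value))
        except (TypeError, ValueError):
            pass
    # Unused integers from 1 up to the cap, smallest first.
    available = sorted(set(range(1, cap + 1)) - used)
    return available[:how_many]

# EasyIDs '1','3','4A','6-7','S','55' in use for a site -> next four are [2, 4, 5, 6]
print next_easy_ids(['1', '3', '4A', '6-7', 'S', '55'], 4)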
Thank you for any advice.

Edit: Here is a sample of sites with 2 explanatory columns.

| T1.FeatureID | T2.FeatureID | T1.SiteID | T2.SiteID | T1.EasyID | T2.EasyID | Status | Result |
|---|---|---|---|---|---|---|---|
| 358589 | 358589 | 136238 | 136238 | 1 | 1 | Existing T1 and T2 Feature | No Change |
| 358590 | 358590 | 136238 | 136238 | 2 | 2 | Existing T1 and T2 Feature | No Change |
| 358594 | 358594 | 136238 | 136238 | 4 | 4 | Existing T1 and T2 Feature | No Change |
| 652538 | 652538 | 136238 | 136238 | 5 | 5 | Existing T1 and T2 Feature | No Change |
| 486028 | 486028 | 136238 | 136238 | 8 | 8 | Existing T1 and T2 Feature | No Change |
| 486029 | 486029 | 136238 | 136238 | 9 | 9 | Existing T1 and T2 Feature | No Change |
| 525300 | 525300 | 136238 | 136238 | 34 | 34 | Existing T1 and T2 Feature | No Change |
| 574802 | 574802 | 136238 | 136238 | 998 | 998 | Existing T1 and T2 Feature | No Change |
| 670911 | | 136238 | | 300 | | New T1 Feature (Step 6) | Inserted Into T2 |
| 493840 | | 136238 | | 9996 | | New T1 Feature (Step 6) | Inserted Into T2 |
| 493839 | | 136238 | | 9997 | | New T1 Feature (Step 6) | Inserted Into T2 |
| 493831 | | 136238 | | 9999 | | New T1 Feature (Step 6) | Inserted Into T2 |
| 696019 | | 136238 | | 105-106 | | New T1 Feature (Step 6) | Inserted Into T2 |
| 696037 | | 136238 | | 9999N | | New T1 Feature (Step 6) | Inserted Into T2 |
| 696014 | | 136238 | | Area1 | | New T1 Feature (Step 6) | Inserted Into T2 |
| 670910 | | 136238 | | N | | New T1 Feature (Step 6) | Inserted Into T2 |
| 580636 | | 136238 | | N30c | | New T1 Feature (Step 6) | Inserted Into T2 |
| 360401 | | 136401 | | AC | | Existing T1 Feature (Skip) | Not Inserted since no T2.SiteID match |
| 360402 | | 136401 | | SP | | Existing T1 Feature (Skip) | Not Inserted since no T2.SiteID match |
| 360510 | | 136427 | | Area 1 | | Existing T1 Feature (Skip) | Not Inserted since no T2.SiteID match |
| 362653 | | 136635 | | 15 | | Existing T1 Feature (Skip) | Not Inserted since no T2.SiteID match |
| 362943 | 362943 | 136698 | 136698 | 1 | 1 | Existing T1 and T2 Feature | No Change |
| 362944 | 362944 | 136698 | 136698 | 2 | 2 | Existing T1 and T2 Feature | No Change |
| 362945 | 362945 | 136698 | 136698 | 3 | 3 | Existing T1 and T2 Feature | No Change |
| 362946 | 362946 | 136698 | 136698 | 4 | 4 | Existing T1 and T2 Feature | No Change |
| 362947 | 362947 | 136698 | 136698 | 5 | 5 | Existing T1 and T2 Feature | No Change |
| 362950 | 362950 | 136698 | 136698 | 11C | 11C | Existing T1 and T2 Feature | No Change |
| 362948 | 362948 | 136698 | 136698 | 8 | | New T2 Feature, Exists in T1 (Step 5) | Update T2.EasyID |
| 362949 | 362949 | 136698 | 136698 | 9 | | New T2 Feature, Exists in T1 (Step 5) | Update T2.EasyID |
| 362951 | | 136698 | 136698 | 15 | 15 | New T2 Feature, Exists in T1 (Step 1) | Update T2.FeatureID |
| 362954 | | 136698 | 136698 | 16 | 16 | New T2 Feature, Exists in T1 (Step 1) | Update T2.FeatureID |
| 362955 | | 136698 | 136698 | 17 | 17 | New T2 Feature, Exists in T1 (Step 1) | Update T2.FeatureID |
| 362956 | | 136698 | 136698 | 18 | 18 | New T2 Feature, Exists in T1 (Step 1) | Update T2.FeatureID |
| 362957 | | 136698 | 136698 | 19 | 19 | New T2 Feature, Exists in T1 (Step 1) | Update T2.FeatureID |
| | | | 136698 | | 20 | New T2 Feature (Step 3,4) | Inserted into T1, Update T2.FeatureID |
| | | | 136698 | | 21 | New T2 Feature (Step 3,4) | Inserted into T1, Update T2.FeatureID |
| | | | 136698 | | 22 | New T2 Feature (Step 3,4) | Inserted into T1, Update T2.FeatureID |
| | | | 136698 | | 25 | New T2 Feature (Step 3,4) | Inserted into T1, Update T2.FeatureID |
| | | | 136698 | | | New T2 Feature (Step 2,3,4) | Get next lowest integer for T2.EasyID -> 6 |
| | | | 136698 | | | New T2 Feature (Step 2,3,4) | Get next lowest integer for T2.EasyID -> 7 |
| | | | 136698 | | | New T2 Feature (Step 2,3,4) | Get next lowest integer for T2.EasyID -> 10 |
| | | | 136698 | | | New T2 Feature (Step 2,3,4) | Get next lowest integer for T2.EasyID -> 11 |
Posted 02-26-2015 12:57 AM
POST
Give the layer a filter in the web map feeding the Geoform. It can be anything like ObjectID = 0 so that no features show after form submission. If you want users to see their dot after submitting, you can try filtering by CreationDate = CURRENT_DATE(), but they will see any other submissions from the same time frame.
Posted 02-21-2015 06:50 PM
POST
It looks like JavaScript table editing is creeping into the realm of possibility with FeatureTables. It's in beta now, but readOnly: false will be great. I played around with the dgrid and it's still a little sketchy with large tables. Someday.
Posted 02-21-2015 02:21 PM
POST
I have been referring to the Resource Center for a long time and it is still difficult to find the right information. Road blocks constantly occur when I'm trying to update the definition of a hosted layer and the syntax needed isn't in the examples. Let's take a look at my latest scenario: Common data types - Labeling Objects. Nowhere under any of these objects does it say whether or not they are supported when administering hosted services. Since "labelingInfo" is a property that exists in hosted services, why wouldn't the labeling object be supported? This is the example that is provided in the resources:

{
"labelPlacement": "esriServerPointLabelPlacementAboveRight",
"labelExpression": "[NAME]",
"useCodedValues": false,
"symbol": {
"type": "esriTS",
"color": [38,115,0,255],
"backgroundColor": null,
"borderLineColor": null,
"verticalAlignment": "bottom",
"horizontalAlignment": "left",
"rightToLeft": false,
"angle": 0,
"xoffset": 0,
"yoffset": 0,
"kerning": true,
"font": {
"family": "Arial",
"size": 11,
"style": "normal",
"weight": "bold",
"decoration": "none"
}
},
"minScale": 0,
"maxScale": 0,
"where" : "NAME LIKE 'A%'" //label only those feature where name begins with A
} This is my labelinginfo: {
"labelPlacement" : "esriServerPointLabelPlacementBelowCenter",
"labelExpression" : "[TAG]",
"labelExpressionInfo" : { "value" : "{TAG}"},
"useCodedValues" : false,
"symbol" : {
"type" : "esriTS",
"color" : [224, 215, 255, 255],
"backgroundColor" : null,
"borderLineColor" : null,
"verticalAlignment" : "top",
"horizontalAlignment" : "center",
"rightToLeft" : false,
"angle" : 0,
"xoffset" : 0,
"yoffset" : 0,
"font" : {
"family" : "Arial",
"size" : 9.75,
"style" : "normal",
"weight" : "bold",
"decoration" : "none"
}
},
"minScale" : 36000,
"maxScale" : 2000
}

They are similar, but why is there no documentation for "labelExpressionInfo" and how to modify it? Why do "backgroundColor" and "borderLineColor" remain null after I've set them to [38,115,0,255], yet UpdateDefinition succeeds without any issues? Why is "where" in the example, but won't stick when I try to add "TAG LIKE 'A%'"? I would love to get rid of the null labels that fill up the screen.

Earlier this week, I contacted Esri support with a question on REST functionality. The receptionist said I wasn't covered since I didn't have ArcGIS for Server. I was baffled. Desktop, EDN, and AGOL annual fees aren't enough? The link at the top of my services directory page points to the same REST API Resource Center, so why is it so difficult to identify which objects and their properties can and can't be altered?
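(Editor's sketch, not from the original post.) For reference, below is roughly how the labelingInfo above could be pushed to a hosted layer's admin updateDefinition operation with plain urllib2. The admin URL pattern, the nesting of labelingInfo under drawingInfo, and the parameter name updateDefinition are assumptions based on how hosted feature services are commonly administered, so verify them against your own service before relying on this; the <orgid>, <servicename>, and <token> placeholders mirror the "###" masking used elsewhere in these posts.

import json
import urllib
import urllib2

# Assumed admin endpoint pattern for a hosted layer; verify for your org and service.
admin_url = ("https://services.arcgis.com/<orgid>/arcgis/rest/admin/services/"
             "<servicename>/FeatureServer/0/updateDefinition")

labeling_info = [{
    "labelPlacement": "esriServerPointLabelPlacementBelowCenter",
    "labelExpressionInfo": {"value": "{TAG}"},
    "useCodedValues": False,
    "minScale": 36000,
    "maxScale": 2000
}]

params = {
    "f": "json",
    "token": "<token>",   # e.g. obtained from AGOLTokenSecurityHandler
    # Assumption: labelingInfo is submitted nested under drawingInfo.
    "updateDefinition": json.dumps({"drawingInfo": {"labelingInfo": labeling_info}})
}
response = urllib2.urlopen(admin_url, urllib.urlencode(params))
print response.read()     # inspect the JSON response for success or an error message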
Posted 02-20-2015 03:05 PM
POST
I would imagine that this is difficult to reproduce because the feature service was originally published in 2013. I cannot put out a new version until field staff complete their work this week. As a side effect, the 1400 photo attachments are currently eating 90 credits a day in feature storage fees. Chomp chomp.
Posted 02-19-2015 03:23 PM
POST
Well, I solved it, and this is indeed a bug. The feature service in question was republished and was given a new field for GlobalIDs. So it has a string field named "GlobalID" and an esriFieldTypeGlobalID field named "GlobalID_2". Copying features also copies the value that should be unique! This is why the first feature that is copied has the photo attachments of future features. Perhaps other sync issues are related to this failure to generate unique IDs. Be careful what your users copy!

Edit: I read something interesting about copying with relationships: http://resources.arcgis.com/en/help/main/10.2/index.html#/Relationship_class_properties/004t00000004000000/

"When you split a feature, the original feature is maintained (with updated geometry), and a new feature is created. If you have a relationship based on the original ObjectID, only one of the two features created in the split will maintain the relationship. However, if you used another field as the key, when you split the feature, the ID value of the original feature would be copied to the two new features. As a result, the records in the related table would now be related to both new features—ideal if the relationship class is set up as many-to-many. If you won't be splitting features and are sure that all objects will remain in their original class, you can use the ObjectID as their IDs. If you can't guarantee this, it's best to set up and use your own ID field instead of relying on the ObjectID field."
Posted 02-10-2015 03:50 PM
POST
Thanks Robert, I had seen a tool on github a while back and didn't know if someone had made a widget similar to it. Ideally editing would be standard functionality built in like Flex Viewer.
Posted 02-05-2015 02:25 PM
POST
Is there a widget with these functionalities?

- Editing individual cells like Excel
- Field calculating selected features like in ArcMap

Thanks,
Davin
Posted 02-05-2015 01:57 PM
POST
I'm still working with support, but I'm posting in case anyone else has sync problems that could be due to this issue. The bug occurs when I use continuous stream or copy features that have photo attachments; in other words, "like the last one" functionality. My features are displaying the same attached photos, and sync fails. If I delete one, they both lose all photos and sync still fails. If I delete all features, an empty sync succeeds.

Here are images of the 1st and 5th test features I took. For each feature I copied the previous one and attached a photo via the camera, but the 1st feature shows the attachments collected on the later features. Here is what happens on the 1st sync attempt. During the second attempt, the progress bar quickly moves to 99% but hangs indefinitely.

Details:
- 10.3 version of Collector on 2 different Samsung Tab 3 7" tablets and a Samsung S5
- Reproduced in 2 different web maps with the same hosted layers
- ArcMap's Copy Runtime Geodatabase to File Geodatabase tool produces features with duplicate attachments
- Could not reproduce in different hosted layers in a separate web map; the attachments were not carried over when copying a feature
- Initial download for offline mode has no issues

I'll update when this gets resolved.
Posted 02-05-2015 11:18 AM