POST
@ShanaBritt Resurrecting a (semi-)dormant thread here... I have exactly the same problem as the OP. I'm using ArcGIS Pro 2.7.1 with a local, file-based locator. If I apply any query (custom or predefined) to the results, the problem occurs, and it does not matter whether Auto Apply is checked or not. Any insight? Many thanks in advance!
Posted 02-10-2021 09:05 AM | Kudos: 0 | Replies: 0 | Views: 1564

POST
I've run into the same problem, which only appeared after moving to Desktop 10.5.1. I agree that the workaround currently suggested by Esri is less than satisfactory. I've found that the following seems to work.

Create your selection-based layer as you would normally, but before trying to add labels or change the symbology, open the Python window in ArcMap and enter this code:

print([r[0] for r in arcpy.da.SearchCursor("SelectionLayer", "OID@")])

or, perhaps even better:

print(", ".join(str(r[0]) for r in arcpy.da.SearchCursor("SelectionLayer", "OID@")))  # this way there are no enclosing brackets on the output

where "SelectionLayer" is the name of your selection-based layer in the ArcMap Table of Contents. This will print, in the Python window, a comma-separated list of the ObjectIDs of all the features in your selection-based layer.

Copy that list, then open the Layer Properties for your selection layer and go to the Definition Query tab. Write the definition query as:

OBJECTID IN (>>paste the list of OIDs here<<)

Just make sure you have the correct field delimiters around the name of the OBJECTID field (i.e., none for an enterprise geodatabase, "" for a file geodatabase or shapefile, [] for a personal geodatabase).

Essentially, this 'converts' the selection-based layer into just another layer with a definition query based on the OIDs of your selected features (which I suspect is what's supposed to happen behind the scenes anyway, but which seems to have gone wrong in 10.5.1, and even earlier for some people). You should then be able to modify the layer (symbology, labels, whatever) without it reverting to show all of the underlying feature class.

Anyway, I've found this works for me, and it avoids having to export copies of the selected features. I'd be curious to hear whether it works for other folks.

Cheers, Andrew
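If you'd rather script those steps than copy and paste by hand, here is a minimal sketch to run from the ArcMap Python window; "SelectionLayer" and the OBJECTID field name are placeholders for your own layer and field:

import arcpy

layer_name = "SelectionLayer"  # hypothetical name of the selection-based layer

# Build the comma-separated OID list from the selection-based layer
oids = ", ".join(str(r[0]) for r in arcpy.da.SearchCursor(layer_name, "OID@"))

# Apply it as a definition query to that layer in the current map document
mxd = arcpy.mapping.MapDocument("CURRENT")
lyr = arcpy.mapping.ListLayers(mxd, layer_name)[0]

# AddFieldDelimiters picks the correct delimiters ("", [], or none) for the data source
oid_field = arcpy.AddFieldDelimiters(lyr.dataSource, "OBJECTID")
lyr.definitionQuery = "{0} IN ({1})".format(oid_field, oids)
arcpy.RefreshActiveView()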
Posted 03-22-2018 06:14 AM | Kudos: 2 | Replies: 0 | Views: 1274

POST
OK, I figured out the problem: I needed to standardise the values in each of the input fields to z-scores (subtract the mean and divide by the standard deviation). My new code is as follows:

# Get and print the starting time for running the script
import datetime
start_time = datetime.datetime.now()
print("Script started at " + str(start_time))

import arcpy
import numpy as np

def zScore(inTable, inFld, outFld):
    """Calculate z-scores for values in one field and write them to another field"""
    fldArr = arcpy.da.TableToNumPyArray(inTable, inFld)
    fldMean = fldArr[inFld].mean()
    fldStdDev = fldArr[inFld].std()
    with arcpy.da.UpdateCursor(inTable, [inFld, outFld]) as cur:
        for x, y in cur:
            y = (float(x) - fldMean) / fldStdDev
            cur.updateRow([x, y])
    del fldArr
    del fldMean, fldStdDev

arcpy.env.workspace = r"E:\Pr6653_GIS_Data_Maps\SKATER_ClusterOutputs_1021FGDB.gdb"
inFC = "Clstrs_DstNcl_All_DSHC_RC_K9"
# inFC = "Clstrs_DstABCDE_DSHC_QC_K36"
inFields = ["DstNcl_All_SCL01INV", "DSInterp_SCL01INV", "HCInterp_SCL01INV"]
# inFields = ["DstNcl_A_SCL01INV", "DstNcl_B_SCL01INV", "DstNcl_C_SCL01INV", "DstNcl_D_SCL01INV", "DstNcl_E_SCL01INV", "DSInterp_SCL01INV", "HCInterp_SCL01INV"]
arrFields = []

# Copy input dataset into memory
print("Copying source features into memory")
arcpy.CopyFeatures_management(inFC, "in_memory\\inMemSrcCopy")

# Add and populate fields for z-scores
print("Adding and populating z-score fields for in-memory FC")
for fld in inFields:
    arcpy.AddField_management("in_memory\\inMemSrcCopy", fld + "_Z", "DOUBLE")
    zScore("in_memory\\inMemSrcCopy", fld, fld + "_Z")
    arrFields.append(fld + "_Z")

clstField = "SS_GROUP"
arrFields.append(clstField)

# Convert the attribute table of the in-memory input FC to a NumPy array
baseArr = arcpy.da.TableToNumPyArray("in_memory\\inMemSrcCopy", arrFields)
n = len(baseArr)  # This is the number of features in the input FC

# Get values based on the clustering field
clstArr = np.unique(baseArr[clstField])
nc = len(clstArr)  # This is the number of groups
nc_min = clstArr.min()  # This is the minimum value for cluster ID
nc_max = clstArr.max()  # This is the maximum value for cluster ID

SST = 0.0  # This is the total sum of squared differences
print("Calculating the total sum of squared differences")
for group in range(nc_min, nc_max + 1):  # For each group...
    grpArr = baseArr[baseArr[clstField] == group]  # Create a new array just for that group
    for fld in inFields:  # For each field/variable...
        fld = fld + "_Z"
        varMean = baseArr[fld].mean()  # Get the overall mean for the variable
        varSS = ((grpArr[fld] - varMean)**2.0).sum()  # Sum of squared differences from the variable mean for each value in the group
        SST += varSS  # Add that sum to the total sum of squared differences

SSE = 0.0  # This is the sum of squared differences within each group
print("Calculating the sum of squared differences within each group")
for group in range(nc_min, nc_max + 1):  # For each group...
    grpArr = baseArr[baseArr[clstField] == group]  # Create a new array just for that group
    for fld in inFields:  # For each field/variable...
        fld = fld + "_Z"
        grpMean = grpArr[fld].mean()  # Get the mean for the variable in that group
        grpSS = ((grpArr[fld] - grpMean)**2.0).sum()  # Sum of squared differences from the group mean for each value in the group
        SSE += grpSS  # Add that sum to the within-group sum of squared differences

# Calculate the pseudo-F statistic
R2 = (SST - SSE) / SST
fStat = (R2 / (nc - 1)) / ((1 - R2) / (n - nc))

# Print the results
print("For input " + inFC + ":")
print("\tNumber of features = " + str(n))
print("\tNumber of clusters = " + str(nc))
print("\tSST = " + str(SST))
print("\tSSE = " + str(SSE))
print("\tR2 = " + str(R2))
print("\tCH Index = " + str(fStat))

# Clean up
del baseArr, grpArr, clstArr
del n, nc, nc_max, nc_min
del SST, SSE, R2, fStat
del inFields, arrFields
arcpy.Delete_management("in_memory")
print("Script complete. Elapsed time: " + str(datetime.datetime.now() - start_time))
# End of script

This revised code produces results identical to those reported by Esri's tool. Problem solved.
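As an aside: if scikit-learn happens to be available, its calinski_harabasz_score gives an independent cross-check of the hand-rolled figure. A minimal sketch, reusing baseArr, inFields, and clstField from the script above (run it before the clean-up lines delete them):

import numpy as np
from sklearn.metrics import calinski_harabasz_score

# Stack the z-scored columns into an (n, n_variables) matrix and grab the group labels
X = np.column_stack([baseArr[fld + "_Z"] for fld in inFields])
labels = baseArr[clstField]

# calinski_harabasz_score computes the same pseudo-F: ((SST - SSE)/(k - 1)) / (SSE/(n - k))
print("CH Index (scikit-learn): " + str(calinski_harabasz_score(X, labels)))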
Posted 11-06-2015 02:40 AM | Kudos: 0 | Replies: 0 | Views: 1282

POST
Hi Darren,

This was my understanding as well, but since Trillium seemed to have found something that worked, I figured case closed.

Cheers, Andrew
Posted 11-06-2015 01:34 AM | Kudos: 0 | Replies: 11 | Views: 1627

POST
Hi Trillium,

Perhaps the 'Define Projection' tool (ArcGIS Help (10.2, 10.2.1, and 10.2.2)) is what you're after? If I understand you correctly, you want to walk over all the feature classes in a workspace and change the spatial reference of each one to ETRS_1989_UTM_Zone_32N. 'Define Projection' should enable you to do that; a rough sketch of the loop follows below.

Hope this helps, Andrew
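A minimal sketch of that loop (the workspace path is a placeholder, and I'm assuming WKID 25832 corresponds to ETRS_1989_UTM_Zone_32N):

import arcpy

# Workspace holding the feature classes to fix (hypothetical path)
arcpy.env.workspace = r"C:\data\MyData.gdb"

# The spatial reference to assign; 25832 is the WKID for ETRS 1989 UTM Zone 32N
sr = arcpy.SpatialReference(25832)

# Walk over every feature class in the workspace and (re)define its projection.
# Note: Define Projection only corrects the metadata; it does not reproject coordinates.
for fc in arcpy.ListFeatureClasses():
    arcpy.DefineProjection_management(fc, sr)
    print("Defined projection for " + fc)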
Posted 11-05-2015 05:20 AM | Kudos: 1 | Replies: 14 | Views: 1627

POST
Hi Dan,

I suspect the issue probably is that the way I've calculated SSE and SST is not correct (or at least not correct for a multivariate situation), but I can't figure out how or why. The issue won't be with projected vs. unprojected input data. To be clear, the dataset I'm using as input to my script is one that was output from the Grouping Analysis tool, and I was definitely using projected data as input to Grouping Analysis.

When I use a different dataset as input to my script (again, a dataset output from the Grouping Analysis tool), where I created the clusters using only one input field, the results are as follows:

CH Index from Esri tool: 6349.09734505356
CH Index from my script (not accounting for using samples): 6349.09734505
CH Index from my script (accounting for using samples): 6348.90412258

So when I have a univariate input, my script (not accounting for using samples) replicates exactly the results from Esri's tool (well, to 8 decimal places, which is good enough for me), while accounting for using samples gives a result that differs from Esri's tool.

What I'm trying to do is check that I can replicate the CH Index value that the Grouping Analysis tool produces using my own script, so that I can be sure I've got the calculations correct. As far as I can tell, my calculations are correct when I'm calculating SSE and SST for a single input variable, but not when I calculate them for more than one input variable. That's where I'm stumped.
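For reference, the 'grouping effectiveness' statistic being compared here is the Calinski-Harabasz pseudo-F, restated from the calculation in the scripts elsewhere on this page:

$$R^{2} = \frac{SST - SSE}{SST}, \qquad CH = \frac{R^{2}/(n_{c} - 1)}{(1 - R^{2})/(n - n_{c})} = \frac{(SST - SSE)/(n_{c} - 1)}{SSE/(n - n_{c})}$$

where n is the number of features, n_c is the number of groups, SST is the total sum of squared differences, and SSE is the within-group sum of squared differences.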
Posted 11-04-2015 04:56 AM | Kudos: 0 | Replies: 1 | Views: 1282

POST
Assuming I understand you correctly, I've accounted for the use of samples, and the result is now slightly better, but only very slightly:

CH Index from Esri tool: 1911.75679228486
CH Index from my script (not accounting for using samples): 2448.47996865
CH Index from my script (accounting for using samples): 2448.40544257

Not much difference between my two results, and both are a long way off from what Esri's tool reports.
Posted 11-04-2015 04:10 AM | Kudos: 0 | Replies: 3 | Views: 1282

POST
Hi Dan,

Many thanks for the quick reply. OK, for the dataset referenced in my code above, the CH Index as reported by Esri's Grouping Analysis is 1911.75679228486. The CH Index value reported by my script is 2448.47996865.

For a different dataset (same input fields as in my code, but with 31 clusters), the results are as follows:

CH Index from Esri tool: 1069.1115254454
CH Index from my script: 1114.27024452

I just tried addressing the possible integer division problem you mention, so I changed line 23 to

n = len(baseArr) * 1.0

and line 27 to

nc = len(clstArr) * 1.0

and it made no difference to my results.

I also tried changing the reducing term to n - 1, changing line 47 in my code to

fStat = (R2 / (nc - 1.0)) / ((1.0 - R2) / ((n - 1.0) - nc))

This also has a negligible effect: the CH Index for the dataset referenced in the original code I posted is now 2448.40544257.

So, still stumped.
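As an aside on the integer-division point: the Python 2.7 that ships with ArcGIS 10.x truncates / between two integers, which is what the * 1.0 change guards against. A quick illustration, with made-up values:

n = 7
nc = 2
print(n / nc)        # 3 -- both operands are ints, so / truncates in Python 2
print(n * 1.0 / nc)  # 3.5 -- the "* 1.0" trick promotes the division to float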
Posted 11-04-2015 03:49 AM | Kudos: 0 | Replies: 5 | Views: 1282

POST
Dear All,

I have been using the Grouping Analysis tool to generate a range of clusters from my data. I'm using ArcGIS v10.2.1.

I wrote a Python script to run Grouping Analysis a large number of times (with different clustering algorithms and spatial constraints, and with the number of clusters (k) ranging from 3 to 250) and report the 'grouping effectiveness' statistic (the Calinski-Harabasz pseudo-F statistic, or CH Index for short: see ArcGIS Help (10.2, 10.2.1, and 10.2.2)) so I can try to identify the 'best' number(s) of clusters for my data. This has all been working fine.

To compare these results with some from additional clustering methods, I have also used another application (REDCAP: Regionalization with Constrained Clustering and Partitioning | Spatial Data Mining and Visual Analytics Lab) on my data. Fantastic though the software is, REDCAP does not report the CH Index to help evaluate the 'best' number of clusters in the data. I would like to compare my results from ArcGIS's Grouping Analysis with those from REDCAP using the same 'grouping effectiveness' statistic, so I have written another Python script to calculate the CH Index 'by hand', as it were, and this is where I've run into a problem.

I wrote my script to replicate the approach set out in the Help file for 'How Grouping Analysis works' (ArcGIS Help (10.2, 10.2.1, and 10.2.2)) and in the Python script the tool calls (Partition.py). I tried out my script on some of my output feature classes from Grouping Analysis, to check that it was producing the same results.

Here (finally) is the problem: when the Grouping Analysis was based on a single variable, I get exactly the same CH Index result from my script as from Esri's own tool; when the Grouping Analysis was based on more than one variable, I get a different (higher) CH Index result from my script. I am stumped as to why this is happening.

This is my code:

# Get and print the starting time for running the script
import datetime
start_time = datetime.datetime.now()
print("Script started at " + str(start_time))

import arcpy
import numpy as np

arcpy.env.workspace = r"E:\Pr6653_GIS_Data_Maps\SKATER_ClusterOutputs_1021FGDB.gdb"

# The input feature class
inFC = "Clstrs_DstNcl_All_DSHC_RC_K9"
# The fields used to generate the clusters using Grouping Analysis
inFields = ["DstNcl_All_SCL01INV", "DSInterp_SCL01INV", "HCInterp_SCL01INV"]
# The field holding the values identifying the clusters
clstField = "SS_GROUP"

# Create a new list from inFields and add the cluster field, to be used when
# creating the structured NumPy array from the input FC table
arrFields = list(inFields)
arrFields.append(clstField)

# Convert the attribute table of the input FC to a NumPy array
baseArr = arcpy.da.TableToNumPyArray(inFC, arrFields)
n = len(baseArr)  # This is the number of features in the input FC

# Get values based on the clustering field
clstArr = np.unique(baseArr[clstField])
nc = len(clstArr)  # This is the number of groups
nc_min = clstArr.min()  # This is the minimum value for cluster ID
nc_max = clstArr.max()  # This is the maximum value for cluster ID

SST = 0.0  # This is the total sum of squared differences
for fld in inFields:  # For each field/variable...
    varMean = baseArr[fld].mean()  # Get the overall mean for the variable
    varSS = ((baseArr[fld] - varMean)**2.0).sum()  # Sum of squared differences from the variable mean for each value
    SST += varSS  # Add that sum to the total sum of squared differences

SSE = 0.0  # This is the sum of squared differences within each group
for group in range(nc_min, nc_max + 1):  # For each group...
    grpArr = baseArr[baseArr[clstField] == group]  # Create a new array just for that group
    for fld in inFields:  # For each field/variable...
        grpMean = grpArr[fld].mean()  # Get the mean for the variable in that group
        grpSS = ((grpArr[fld] - grpMean)**2.0).sum()  # Sum of squared differences from the group mean for each value in the group
        SSE += grpSS  # Add that sum to the within-group sum of squared differences

# Calculate the pseudo-F statistic
R2 = (SST - SSE) / SST
fStat = (R2 / (nc - 1)) / ((1 - R2) / (n - nc))

# Print the results
print("For input " + inFC + ":")
print("Number of features = " + str(n))
print("Number of clusters = " + str(nc))
print("SST = " + str(SST))
print("SSE = " + str(SSE))
print("R2 = " + str(R2))
print("CH Index = " + str(fStat))

# Clean up
del baseArr, grpArr, clstArr
del n, nc, nc_max, nc_min
del SST, SSE, R2, fStat
del inFields, arrFields
print("Script complete. Elapsed time: " + str(datetime.datetime.now() - start_time))
# End of script

If it would help, I'd be happy to post some example data as well. If anyone has any ideas, I'd be very grateful. Like I said, this code seems to work fine in a univariate situation, but not in a multivariate situation. I should also point out that I don't have a strong background in either statistics or programming, so I may just be missing something blindingly obvious!

Many thanks in advance,
Andrew

Message was edited by: Andrew Lowerre
Just cleaning up some copy-and-paste issues in the code I posted. Should now all be correct!
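A self-contained toy illustration of the eventual diagnosis (the z-score fix posted further up this page): with variables on very different scales, the raw multivariate SST/SSE yield a different CH Index than standardised inputs do. The data and names here are invented purely for demonstration:

import numpy as np

rng = np.random.RandomState(0)  # invented toy data
X = np.column_stack([rng.randn(90), rng.randn(90) * 1000.0])  # two variables on very different scales
labels = np.repeat([0, 1, 2], 30)  # three arbitrary groups

def ch_index(X, labels):
    """CH pseudo-F using the same SST/SSE logic as the scripts in this thread."""
    n, k = len(X), len(np.unique(labels))
    SST = ((X - X.mean(axis=0)) ** 2).sum()
    SSE = sum(((X[labels == g] - X[labels == g].mean(axis=0)) ** 2).sum()
              for g in np.unique(labels))
    R2 = (SST - SSE) / SST
    return (R2 / (k - 1)) / ((1 - R2) / (n - k))

Xz = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score each column
print(ch_index(X, labels))   # dominated by the large-scale column
print(ch_index(Xz, labels))  # scale-free, matching the standardised approach that solved the problem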
Posted 11-04-2015 03:08 AM | Kudos: 0 | Replies: 7 | Views: 3572

POST
Hi,

Running the same code as set out in the last post on a set of my own polygons (some of which are highly convoluted), I did not get identical results. The coordinates returned by the 'SHAPE@XY' and 'SHAPE@TRUECENTROID' tokens were, without exception, identical. But for about 3% (29 of 1,038) of my polygons, the coordinates returned by getting centroid.X and centroid.Y from the geometry returned by the 'SHAPE@' token did not equal those returned by the other two tokens.

Like the original poster, I need points inside my polygons rather than the 'true' centroids. I've found that using the labelPoint property of the Polygon geometry object gets me what I need (see the sketch below).

I agree with J A that the documentation is less than clear, and that the results of using the different tokens are not what I would have expected. The devil is, as always, in the details...

Cheers
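A minimal sketch of that comparison ("myPolygons" is a placeholder feature class name):

import arcpy

fc = "myPolygons"  # hypothetical polygon feature class
fields = ["OID@", "SHAPE@XY", "SHAPE@TRUECENTROID", "SHAPE@"]

with arcpy.da.SearchCursor(fc, fields) as cur:
    for oid, xy, true_cent, geom in cur:
        cent = (geom.centroid.X, geom.centroid.Y)  # centroid property of the full geometry object
        if cent != xy:
            print("OID {}: SHAPE@ centroid {} differs from SHAPE@XY {}".format(oid, cent, xy))
        lbl = geom.labelPoint  # a point guaranteed to fall inside the polygon
        print("OID {}: label point = ({}, {})".format(oid, lbl.X, lbl.Y))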
Posted 07-29-2014 07:31 AM | Kudos: 0 | Replies: 1 | Views: 2156