POST
Hi Cody,

When I export a single feature twice (to different feature classes) and compare both feature geometries, the result is:

shp1 != shp2
shp1.WKT == shp2.WKT
shp1.JSON == shp2.JSON
shp1.__dict__ != shp2.__dict__
not shp1 is shp2

Even if I read the same feature from the same feature class twice, the two geometries will not test equal. Although one may assume the geometry is the same, we are comparing objects, and these objects occupy different locations in memory and are therefore considered different. Only if both names point to the same instance of an object will they test equal.

Kind regards, Xander
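As a small illustration, a minimal sketch (the polyline coordinates are made-up; equals is arcpy's spatial comparison, which tests geometric equality rather than object identity):

import arcpy

# two separately constructed, geometrically identical polylines
shp1 = arcpy.Polyline(arcpy.Array([arcpy.Point(0, 0), arcpy.Point(1, 1)]))
shp2 = arcpy.Polyline(arcpy.Array([arcpy.Point(0, 0), arcpy.Point(1, 1)]))

print shp1 is shp2           # False: two different instances in memory
print shp1.WKT == shp2.WKT   # True: identical coordinate text
print shp1.equals(shp2)      # True: spatially equal geometries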
Posted 03-03-2014, 05:00 AM

POST
Maybe it's the maximum number of points that can be used in the profile. You will notice that this maximum has been exceeded when your rectangle (area) is no longer visible.

Kind regards, Xander
Posted 03-03-2014, 01:22 AM

POST
Hi Jelmer,

The error "000968: The symbol layer does not match the input layer" specifically states: the symbol layer must be the same type as the input layer: feature layer, raster layer, network analysis layer, or TIN layer. For feature layers, the feature type and geometry type must also match. For network analysis layers, the solver type must match. For example, you cannot apply symbology from a service area layer to a route layer.

An ArcGIS 10.2 layer file can be read by ArcGIS 10.1. You can see this if you choose to save a layer file in 10.2 and look at the "Save as type" list: all versions are offered except 10.1, which means the 10.1 and 10.2 formats are the same.

Check whether the data types of the layers being used match.

Kind regards, Xander
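For illustration, a minimal sketch of that check, assuming both inputs are feature layers so that Describe exposes shapeType (the paths are hypothetical examples):

import arcpy

in_layer = r'C:\Project\layers\input.lyr'             # hypothetical
symbology_layer = r'C:\Project\layers\symbology.lyr'  # hypothetical

# error 000968 is raised when the layer types differ, so compare first
if arcpy.Describe(in_layer).shapeType == arcpy.Describe(symbology_layer).shapeType:
    arcpy.ApplySymbologyFromLayer_management(in_layer, symbology_layer)
else:
    print "Geometry types differ; the symbology cannot be applied"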
Posted 03-03-2014, 12:48 AM

POST
There is something I don't quite understand yet: you are looping through the parts of a geometry, but for each part you compare the geometry as JSON...

What you should realize is that the JSON will not contain any information on Z and M values. Since you are working on pipes, chances are that your geometry has Z and/or M values. So the question is: is your data Z or M aware? And if so, are there any changes in Z and/or M?

See below the WKT and JSON output of a Polyline ZM feature (with M undefined).

geometry WKT:

MULTILINESTRING ZM ((91742.483000000226 427736.20099999971 1.8999999999996362 NAN, 91743.589999999676 427733.00599999959 1.8999999999996362 NAN, 91744.996000000538 427730.98500000004 1.8999999999996362 NAN, 91746.835999999574 427729.51300000038 1.5100000000020373 NAN, 91756.677000000025 427722.87599999964 1.3600000000042201 NAN, 91759.524000000223 427716.44400000037 1.3600000000042201 NAN, 91785.322999999873 427225.90999999986 -1.1399999999994179 NAN, 91785.823000000179 427205.91399999958 -1.0899999999983265 NAN, 91785.973000000376 427195.91200000024 -1.0400000000008731 NAN, 91783.874999999971 427190.82400000002 -1.0400000000008731 NAN, 91782.969999999477 427189.89800000004 -1.0400000000008731 NAN, 91764.613999999478 427166.16899999959 -0.40999999999985448 NAN, 91749.388999999865 427146.43199999968 -0.40999999999985448 NAN))

geometry JSON (no Z or M values):

{"paths":[[[91742.483000000226,427736.20099999971],[91743.589999999676,427733.00599999959],[91744.996000000538,427730.98500000004],[91746.835999999574,427729.51300000038],[91756.677000000025,427722.87599999964],[91759.524000000223,427716.44400000037],[91785.322999999873,427225.90999999986],[91785.823000000179,427205.91399999958],[91785.973000000376,427195.91200000024],[91783.874999999971,427190.82400000002],[91782.969999999477,427189.89800000004],[91764.613999999478,427166.16899999959],[91749.388999999865,427146.43199999968]]],"spatialReference":{"wkid":28992}}

What you could do is use the geometry's WKT for string comparison, but be aware that the WKT does not hold any spatial reference information. You should also look at the attributes to check for changes. It might be wise to have a look at the Feature Compare (Data Management) tool.

Kind regards, Xander
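As a postscript, a minimal sketch of the Feature Compare tool mentioned above (the paths are hypothetical examples; the sort field is assumed to be OBJECTID):

import arcpy

base_features = r'C:\Project\pipes_old.gdb\pipes'  # hypothetical
test_features = r'C:\Project\pipes_new.gdb\pipes'  # hypothetical

# compare geometry and attributes; report all differences instead of
# stopping at the first one
arcpy.FeatureCompare_management(base_features, test_features,
                                'OBJECTID', 'ALL',
                                continue_compare='CONTINUE_COMPARE')
print arcpy.GetMessages()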
Posted 03-02-2014, 10:52 PM

POST
You could use this:

def average_word_length(text):
    ''' Return the average length of all words in text. Do not
    include surrounding punctuation in words.
    text is a non-empty string containing at least one word.'''
    words = text.split()
    # sum the word lengths once; no surrounding loop is needed
    average = sum(float(len(word)) for word in words) / float(len(words))
    return average

Sounds a lot like this post: http://forums.arcgis.com/threads/103785-Hey-I-was-wondering-about-this-function-problem-for-a-while ... on comparing signatures ...

Kind regards, Xander
Posted 03-02-2014, 09:50 PM

POST
I don't think this has much to do with ArcGIS... You can find more on this topic on Stack Overflow: http://stackoverflow.com/questions/19887466/program-done-but-not-running-correctly

To make the code work, you should apply some changes:

import os
def main():
    mystery_filename = r'C:\Project\_Forums\compareSignatures\in\infile.txt'
    folder = r'C:\Project\_Forums\compareSignatures\signatures'
    # folder = r'C:\Project\_Forums\compareSignatures\comp'
    infile = open(mystery_filename, 'r')
    text = infile.read()
    infile.close()
    mystery_signature = [mystery_filename]
    mystery_signature.append(average_word_length(text))
    mystery_signature.append(type_token_ratio(text))
    mystery_signature.append(hapax_legomana_ratio(text))
    mystery_signature.append(average_sentence_length(text))
    mystery_signature.append(avg_sentence_complexity(text))
    weights = [0, 11, 33, 50, 0.4, 4]
    print "mystery_signature={0}".format(mystery_signature)
    # every file in this directory must be a linguistic signature
    files = os.listdir(folder)
##    # create some signature files
##    for this_file in files:
##        compfilename = os.path.join(folder, this_file)
##        compfile = open(compfilename, 'r')
##        text = compfile.read()
##        compfile.close()
##        signature = [compfilename]
##        signature.append(average_word_length(text))
##        signature.append(type_token_ratio(text))
##        signature.append(hapax_legomana_ratio(text))
##        signature.append(average_sentence_length(text))
##        signature.append(avg_sentence_complexity(text))
##
##        print ""
##        for row in signature:
##            print row
    this_file = files[0]
    signature = read_signature(os.path.join(folder, this_file))
    best_score = compare_signatures(mystery_signature, signature, weights)
    best_author = signature[0]
    for this_file in files[1:]:
        signature = read_signature(os.path.join(folder, this_file))
        score = compare_signatures(mystery_signature, signature, weights)
        if score < best_score:
            best_score = score
            best_author = signature[0]
    print "best author match: {0} with score {1}".format(best_author, best_score)
def clean_up(s):
    ''' Return a version of string s in which all letters have been
    converted to lowercase and punctuation characters have been stripped
    from both ends. Inner punctuation is left untouched. '''
    punctuation = '''!"',;:.-?)([]<>*#\n\t\r'''
    result = s.lower().strip(punctuation)
    return result
def average_word_length(text):
    ''' Return the average length of all words in text. Do not
    include surrounding punctuation in words.
    text is a non-empty string containing at least one word.'''
    words = text.split()
    # sum the word lengths once; no surrounding loop is needed
    average = sum(float(len(word)) for word in words) / float(len(words))
    return average
def type_token_ratio(text):
    ''' Return the type token ratio (TTR) for this text.
    TTR is the number of different words divided by the total number of words.
    text is a non-empty string; at least one line contains a word. '''
    uniquewords = {}
    words = 0
    for line in text.splitlines():
        line = line.strip().split()
        for word in line:
            words += 1
            if word in uniquewords:
                uniquewords[word] += 1
            else:
                uniquewords[word] = 1
    TTR = float(len(uniquewords)) / float(words)
    return TTR
def hapax_legomana_ratio(text):
    ''' Return the hapax legomana ratio for this text.
    This ratio is the number of words that occur exactly once divided
    by the total number of words.
    text is a non-empty string; at least one line contains a word.'''
    uniquewords = dict()
    words = 0
    for line in text.splitlines():
        line = line.strip().split()
        for word in line:
            words += 1
            word = word.replace(',', '').strip()
            if word in uniquewords:
                # repeated words drop to 0 or below, so only words
                # seen exactly once keep a count of 1
                uniquewords[word] -= 1
            else:
                uniquewords[word] = 1
    unique_count = 0
    for each in uniquewords:
        if uniquewords[each] == 1:
            unique_count += 1
    HLR = float(unique_count) / float(words)
    return HLR
def split_on_separators(original, separators):
    ''' Return a list of non-empty, non-blank strings from the original string
    determined by splitting the string on any of the separators.
    separators is a string of single-character separators.'''
    result = []
    newstring = ''
    for char in original:
        if char in separators:
            if newstring:  # skip empty segments between adjacent separators
                result.append(newstring)
            newstring = ''
        else:
            newstring += char
    if newstring:  # keep the trailing segment after the last separator
        result.append(newstring)
    return result
def average_sentence_length(text):
    ''' Return the average number of words per sentence in text.
    text is guaranteed to have at least one sentence.
    Terminating punctuation is defined as !?.
    A sentence is a non-empty string of non-terminating punctuation
    surrounded by terminating punctuation or beginning or end of file. '''
    words = len(text.split())
    sentences = len(split_on_separators(text, '?!.'))
    ASL = float(words) / float(sentences)
    return ASL
def avg_sentence_complexity(text):
    '''Return the average number of phrases per sentence.
    Terminating punctuation is defined as !?.
    A sentence is a non-empty string of non-terminating punctuation
    surrounded by terminating punctuation or beginning or end of file.
    Phrases are substrings of sentences separated by one or more of
    the following delimiters: ,;: '''
    sentences = len(split_on_separators(text, '?!.'))
    phrases = len(split_on_separators(text, ',;:'))
    ASC = float(phrases) / float(sentences)
    return ASC
def compare_signatures(sig1, sig2, weight):
    '''Return a non-negative real number indicating the similarity of two
    linguistic signatures. The smaller the number, the more similar the
    signatures. Zero indicates identical signatures.
    sig1 and sig2 are 6-element lists with the following elements:
    0 : author name (a string)
    1 : average word length (float)
    2 : TTR (float)
    3 : hapax legomana ratio (float)
    4 : average sentence length (float)
    5 : average sentence complexity (float)
    weight is a list of multiplicative weights to apply to each
    linguistic feature. weight[0] is ignored.
    '''
    result = 0
    i = 1
    while i <= 5:
        # compare the i-th feature of both signatures, not the lists themselves
        result += abs(sig1[i] - sig2[i]) * weight[i]
        i += 1
    return result
def read_signature(filename):
    '''Read a linguistic signature from filename and return it as
    a list of features. '''
    cmpfile = open(filename, 'r')
    # the first feature is a string (the author name), so it doesn't need
    # casting to float; strip the trailing newline
    result = [cmpfile.readline().strip()]
    # all remaining features are real numbers
    for line in cmpfile:
        result.append(float(line.strip()))
    cmpfile.close()
    return result
if __name__ == '__main__':
    main()

Kind regards, Xander
Posted 03-02-2014, 09:46 PM

POST
Hi Mark,

This sounds a lot like it has to do with the coordinate systems. Can you specify the following:

- the coordinate system of your streams dataset
- the coordinate system of your data frame

If both are geographic, can you provide me with a projected coordinate system that suits your data (for the length calculation)? The straight-line geometry probably has to be created with the spatial reference of the source, and if that is geographic, it needs to be projected to obtain the proper length. This can all be done in the field calculator script.

Just to be sure, can you also provide an example:

- start coordinate
- end coordinate
- distance measured manually (the correct distance)
- distance calculated by the script

Kind regards, Xander
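A minimal sketch of what I mean (the coordinates and the UTM zone are made-up examples):

import arcpy

sr_geo = arcpy.SpatialReference(4326)    # WGS 1984, geographic
sr_proj = arcpy.SpatialReference(32633)  # WGS 1984 UTM zone 33N, projected

# straight line created in the geographic spatial reference of the source
line = arcpy.Polyline(arcpy.Array([arcpy.Point(15.0, 50.0),
                                   arcpy.Point(15.1, 50.1)]), sr_geo)

print line.length                     # length in decimal degrees: not useful
print line.projectAs(sr_proj).length  # length in meters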
Posted 03-02-2014, 08:29 PM

POST
Hi Clive,

If your data is in raster format, you can correct the individual rasters (A, B, C and D) to have a value of 0 where there is NoData. Then simply sum the new rasters and perform a zonal statistics (SUM) on the result, using your polygons as the zone layer.

If the value grids are polygons (vector layers), you could merge the layers to create a feature class with regions (overlapping polygons). When you intersect your zones (also polygons) with the regions, it will intersect the regions with each other too and create smaller, stacked polygons. The stacked polygons can be used to dissolve the features and apply a sum while dissolving. For this you will need a new key field, which you could create from the zone ID and the Shape_Area. See the image below with the intersect and dissolve results:

[attachment: intersect and dissolve results]

Kind regards, Xander
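For the raster branch, a minimal sketch (the paths and the zone field are hypothetical examples):

import arcpy
from arcpy.sa import Con, IsNull, Raster, ZonalStatisticsAsTable

arcpy.CheckOutExtension('Spatial')

rasters = [Raster(r'C:\data\rasterA'), Raster(r'C:\data\rasterB'),
           Raster(r'C:\data\rasterC'), Raster(r'C:\data\rasterD')]

# replace NoData with 0 in each raster, then sum the corrected rasters
total = Con(IsNull(rasters[0]), 0, rasters[0])
for ras in rasters[1:]:
    total += Con(IsNull(ras), 0, ras)

# zonal SUM per polygon zone
ZonalStatisticsAsTable(r'C:\data\zones.shp', 'ZONE_ID', total,
                       r'C:\data\zonal_sum.dbf', 'DATA', 'SUM')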
Posted 02-28-2014, 02:23 AM

POST
Hi Danny,

Suppose I use the following code (the first two lines are the assumed setup for the map document and data frame):

import arcpy

mxd = arcpy.mapping.MapDocument("CURRENT")
df = arcpy.mapping.ListDataFrames(mxd)[0]
for bkmk in arcpy.mapping.ListBookmarks(mxd, "*", df):
    print bkmk.name

I get this result:

Total
Bookmark1
Bookmark2
Bookmark3

If you can use an extra line of code, you could do it like this:

for bkmk in arcpy.mapping.ListBookmarks(mxd, "*", df):
    if bkmk.name in ["Bookmark1", "Bookmark3"]:
        print bkmk.name

which results in:

Bookmark1
Bookmark3

If it has to be on one line, you could use a list comprehension:

for bkmk in [b for b in arcpy.mapping.ListBookmarks(mxd, "*", df) if b.name in ["Bookmark1", "Bookmark3"]]:
    print bkmk.name

Kind regards, Xander
Posted 02-28-2014, 01:43 AM

POST
Hi Mark,

I see Duncan has already provided you with a good way to calculate the intra-nodal distance. I was wondering, however: since your goal is to "determine the average stream flow direction within a polygon layer" and you don't want to use your DEM (aspect) for this purpose, you could do something else. What if you created a field for each cardinal flow direction and filled each field with the sum of the segment lengths (between consecutive coordinate pairs) of each stream flowing in that cardinal direction? My guess is that this would give a better understanding of the flow direction once you summarize it within a polygon.

Kind regards, Xander
Posted 02-27-2014, 08:50 PM

POST
Hi Tony,

I can't find where this information is stored. Maybe it is some implementation of the logging module, although most of the functionality goes through ArcObjects... I guess it's not so hard to implement an alternative, but if you want to keep using the messaging of arcpy, it might be wise to contact Esri Support and report the bug. Maybe in a future release they will fix the bug, or change the help text 😉

Kind regards, Xander
Posted 02-25-2014, 06:01 AM

POST
Hi Mark,

The error refers to something that went wrong with the field calculation. Since the code does not account for any type of error (a polyline of zero length, or a polyline that has the same start and end point), there can be many reasons for the error to occur.

The angle is arithmetic (not geographic): 0° is East and the angle increases counter-clockwise.

< 22.5: "E"
< 67.5: "NE"
< 112.5: "N"
< 157.5: "NW"
< 202.5: "W"
< 247.5: "SW"
< 292.5: "S"
< 337.5: "SE"
< 360.0: "E"

Kind regards, Xander
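For illustration, a minimal sketch of that classification (the function name and the use of atan2 are my own; the thresholds are the ones listed above):

import math

def cardinal(dx, dy):
    # arithmetic angle: 0 = East, increasing counter-clockwise
    ang = math.degrees(math.atan2(dy, dx)) % 360
    labels = ['E', 'NE', 'N', 'NW', 'W', 'SW', 'S', 'SE']
    # shift by half a sector (22.5 degrees) so each 45-degree bin maps to a label
    return labels[int(((ang + 22.5) % 360) // 45)]

print cardinal(1, 0)  # E
print cardinal(1, 1)  # NE
print cardinal(0, 1)  # N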
Posted 02-25-2014, 05:51 AM

POST
What then does the documentation refer to? You are completely right. It does not seem to do what is described in the help. Even the slightly changed sample script does not return what one would expect based on the description:

import arcpy

fc = r'C:\Project\_Forums\ZonStatPol\test.gdb\Intersect'
feature_count = int(arcpy.GetCount_management(fc).getOutput(0))
if feature_count == 0:
    arcpy.AddError("{0} has no features.".format(fc))
else:
    arcpy.AddMessage("{0} has {1} features.".format(fc, feature_count))
print arcpy.GetMessages()

This results in:

Executing: GetCount C:\Project\_Forums\ZonStatPol\test.gdb\Intersect
Start Time: Tue Feb 25 15:46:45 2014
Row Count = 10
Succeeded at Tue Feb 25 15:46:45 2014 (Elapsed Time: 0,19 seconds)

... which is the output of the geoprocessing tool GetCount_management, not the message added with AddMessage. Sad, but true (or I simply don't get it).

Kind regards, Xander
Posted 02-25-2014, 04:56 AM

POST
'Challenge' 1:

import arcpy

# use either a file geodatabase or a folder (with shapefiles) as workspace
arcpy.env.workspace = r'C:\Project\999999\myFGDB.gdb'
# arcpy.env.workspace = r'C:\Project\999999\mySHPfolder'
fclasses = arcpy.ListFeatureClasses()
for fclass in fclasses:
    fctype = arcpy.Describe(fclass).shapeType
    print "{0} is a {1} feature class".format(fclass, fctype)
'Challenge' 2:

import arcpy, os

arcpy.env.workspace = r'C:\Project\999999\mySHPfolder'
fclasses = arcpy.ListFeatureClasses()
out_ws_all = r'C:\Project\999999\test_all.gdb'
out_ws_pol = r'C:\Project\999999\test_pol.gdb'
for fclass in fclasses:
    fctype = arcpy.Describe(fclass).shapeType
    newname = fclass
    if newname[-4:].upper() == ".SHP":
        newname = newname[:-4]  # trim the .shp extension from the name
    # use the trimmed name for the geodatabase outputs
    out_fc = os.path.join(out_ws_all, newname)
    arcpy.CopyFeatures_management(fclass, out_fc)
    if fctype == 'Polygon':
        out_fc = os.path.join(out_ws_pol, newname)
        arcpy.CopyFeatures_management(fclass, out_fc)

Kind regards, Xander
Posted 02-24-2014, 11:45 PM

POST
Hi Jollong,

In case of going for an intersect, you should start by converting the raster to polygons. Make sure that the "Simplify polygons" option is switched off. Intersect this polygon feature class with your zones (circles). The resulting polygon feature class will have areas containing information for each (part of a) pixel within each zone (circle). Since you know the area of the circles, you can calculate for each feature (combination of zone and pixel) the fraction of the circle area. If the area of the circles varies, you can join it from the zones to the intersect result. See the image below:

[attachment: table with ZoneArea and percentage per zone/pixel combination]

The ZoneArea is obtained by joining the original area of the zones, and the percentage is calculated by dividing the Shape_Area by the ZoneArea (* 100). Please note that some combinations of zones and pixels occur twice in the table. This is caused by the overlap between the zones (circles). You should aggregate the results to obtain a more useful result.

Kind regards, Xander
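A minimal sketch of the first steps (the paths are hypothetical examples):

import arcpy

raster = r'C:\data\values.tif'  # hypothetical
zones = r'C:\data\circles.shp'  # hypothetical
out_gdb = r'C:\data\work.gdb'   # hypothetical

# convert the raster to polygons without simplifying the cell edges
pix_pol = arcpy.RasterToPolygon_conversion(raster, out_gdb + r'\pix_pol',
                                           'NO_SIMPLIFY', 'VALUE')

# intersect the pixel polygons with the zone circles
arcpy.Intersect_analysis([pix_pol, zones], out_gdb + r'\pix_zone')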
Posted 02-24-2014, 11:21 PM