You can actually include a DISTINCT statement as a "sql prefix" in the sql_clause parameter of arcpy.da.SearchCursor(). I've used it before and it works quite well.

sqlprefix = "DISTINCT MyFieldName"
sqlpostfix = "ORDER BY MyFieldName"
myField_Distinct = tuple(
    row[0] for row in arcpy.da.SearchCursor(
        "MyGeodatabaseTableName",
        ["MyFieldName"],
        sql_clause=(sqlprefix, sqlpostfix)
    )
)

You could do that for each field and write it out however you like. But you did say you didn't want to use arcpy, so... maybe go with Dan's NumPy example.
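The same prefix/postfix idea can be tried outside of arcpy. Here is a minimal sketch using Python's built-in sqlite3 module (the table and field names are made up) to show what the DISTINCT prefix and ORDER BY postfix do to the cursor output:

```python
import sqlite3

# Hypothetical in-memory table standing in for a geodatabase table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyGeodatabaseTableName (MyFieldName TEXT)")
conn.executemany(
    "INSERT INTO MyGeodatabaseTableName VALUES (?)",
    [("B",), ("A",), ("B",), ("C",), ("A",)],
)

# Same idea as the arcpy sql_clause (prefix, postfix) pair:
# DISTINCT drops duplicates, ORDER BY sorts ascending.
myField_Distinct = tuple(
    row[0] for row in conn.execute(
        "SELECT DISTINCT MyFieldName FROM MyGeodatabaseTableName "
        "ORDER BY MyFieldName"
    )
)
print(myField_Distinct)  # -> ('A', 'B', 'C')
```

As in the arcpy version, the generator expression drains the cursor straight into a tuple, so no cursor stays open afterward.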
Posted 05-22-2015 08:48 AM
Is this what you're looking for? It might help to see an example.

Input table:

PrimaryKey  SomeAttr  OtherSomething  NewVal
123         ABC       LOW             6.123
456         ABC       MED             15.456
789         ABC       HIGH            24.789
321         CBA       LOW             6.321
654         CBA       MED             15.654

Output CSV:

PrimaryKey               SomeAttr  OtherSomething  NewVal
123, 456, 789, 321, 654  ABC, CBA  LOW, MED, HIGH  6.123, 6.321, 15.456, 15.654, 24.789
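If it helps, here is a rough pure-Python sketch of that pivot with the sample data hard-coded. It dedupes each column in first-seen order via dict.fromkeys, so a numeric column would need an extra sort to match the ordering shown above:

```python
# Made-up rows matching the example input table.
rows = [
    {"PrimaryKey": "123", "SomeAttr": "ABC", "OtherSomething": "LOW",  "NewVal": "6.123"},
    {"PrimaryKey": "456", "SomeAttr": "ABC", "OtherSomething": "MED",  "NewVal": "15.456"},
    {"PrimaryKey": "789", "SomeAttr": "ABC", "OtherSomething": "HIGH", "NewVal": "24.789"},
    {"PrimaryKey": "321", "SomeAttr": "CBA", "OtherSomething": "LOW",  "NewVal": "6.321"},
    {"PrimaryKey": "654", "SomeAttr": "CBA", "OtherSomething": "MED",  "NewVal": "15.654"},
]
fields = ["PrimaryKey", "SomeAttr", "OtherSomething", "NewVal"]

# Each output cell is the column's distinct values, first-seen order, joined with ", ".
out_row = {f: ", ".join(dict.fromkeys(r[f] for r in rows)) for f in fields}

print(out_row["PrimaryKey"])      # -> 123, 456, 789, 321, 654
print(out_row["OtherSomething"])  # -> LOW, MED, HIGH
```

Writing out_row to a one-line CSV with csv.DictWriter would then give the output shown.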
Posted 05-21-2015 04:34 PM
# Identify observation groups
sqlprefix = "DISTINCT UniqueID"
sqlpostfix = "ORDER BY UniqueID"
observations = tuple(
    uid[0] for uid in arcpy.da.SearchCursor(
        table_in,
        ["UniqueID"],
        sql_clause=(sqlprefix, sqlpostfix)
    )
)

This section uses the SQL prefix and postfix options in the sql_clause of an arcpy.da.SearchCursor() to bring back only distinct (no duplicate) values in the UniqueID field and order them ascending. That search cursor sits inside a generator expression, which is coding shorthand for creating a fully populated iterable. Here it makes a tuple, but you can do the same with lists and dictionaries. In this case, I'm putting all of the values returned by the search cursor (which, remember, are distinct) into a tuple so I don't have to keep two search cursors open on the same table.

fields_in = [
    "ItemID",
    "Begin_Station",
    "End_Station",
    "TestDate"
]
row_tpl = namedtuple('row_tpl', 'iID, begin, end, tDate')
fields_out = [
    "UniqueID",
    "Begin",
    "End",
    "SegmentItems",
    "MaxDate"
]
with arcpy.da.InsertCursor(table_out, fields_out) as i_cursor:
    for obsv in observations:

Here I define the field names for the input and output tables as lists, which are used by the arcpy.da cursors, and create the named tuple type with its value names. At the end, I open an insert cursor on the output table, which is used to insert rows as they are processed. Segments and items have to be processed for one UniqueID (or "observation") at a time, so the last line iterates the distinct observations and runs the code inside for each one.

        # Read table into named tuples as (item, begin, end, date)
        where_clause = "UniqueID = {}".format(obsv)
        print("Processing {}".format(where_clause))
        with arcpy.da.SearchCursor(table_in, fields_in, where_clause) as s_cursor:
            table_in_rows = list(map(row_tpl._make, s_cursor))

Here is where I populate the named tuples with all of the rows returned from the input table. I start by defining the where clause for the search cursor so it only brings back the rows for the current observation (UniqueID). Now that I think about it, this probably could have used a generator expression as well.

        # Identify segments
        allsegments = [row.begin for row in table_in_rows] + [row.end for row in table_in_rows]
        segments = tuple(sorted(set(allsegments)))  ## creates only unique segments
        del allsegments

This section first combines all begin and end segment values into a single list; you can see the concatenation of the two lists with a simple +. However, this list contains duplicates and can be in any order. The duplicates are eliminated by set() and the values are ordered ascending by sorted(). The distinct, sorted result is then converted to a tuple to preserve the order (which, in hindsight, might not be necessary; I did it as a precaution but never tested without it). I also delete the allsegments variable to free up memory and because the data is no longer needed; it was just a scratch pad for creating the distinct segments tuple.

        # Identify items and date in each segment
        for i in range(len(segments) - 1):
            begin = segments[i]
            end = segments[i + 1]
            seg_itemsdict = {
                row.iID: row.tDate for row in table_in_rows
                if row.begin <= begin and row.end >= end
            }

Most everything up until now has just been gathering and manipulating the data. This is where the real "logic" is: it identifies all the ItemID values whose extent covers a segment range. It also records the date for each item, which is compared to find the max in the next step. It's a little hard to describe in text, so let me know if you want something specific clarified here.

            # Write segment items to output table
            itemstext = str(seg_itemsdict.keys()).strip('[]')
            itemsdates = [i for i in seg_itemsdict.values() if i is not None]
            ## Do not attempt to find max date if there
            ## are no dates for the items in the segment.
            if len(itemsdates) > 0:
                itemsdates_max = max(itemsdates)
            else:
                itemsdates_max = None
            row = (obsv, begin, end, itemstext, itemsdates_max)
            i_cursor.insertRow(row)

Finally, this is where the data is actually written to the output table. It starts by formatting the list of ItemIDs as plain text so it looks nice in the table. Then another comprehension puts all of the non-null date values for the items in the segment into a list. That list is checked for length to see whether there are any dates; if there are, the maximum is found, and that is what gets written to the output table. The last line calls the insert cursor to write the row.
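To see the core logic without arcpy, here is a self-contained sketch of the segment loop using made-up rows for a single observation (the IDs, stations, and dates are invented; real TestDate values would be datetimes, for which max() works the same way):

```python
from collections import namedtuple

# Made-up sample rows standing in for one observation's search-cursor output.
row_tpl = namedtuple('row_tpl', 'iID, begin, end, tDate')
table_in_rows = [
    row_tpl(101, 0.0, 25.0, '2015-01-05'),
    row_tpl(102, 10.0, 40.0, '2015-03-10'),
    row_tpl(103, 30.0, 40.0, None),
]

# Distinct, ascending break points from all begin/end values.
allsegments = [r.begin for r in table_in_rows] + [r.end for r in table_in_rows]
segments = tuple(sorted(set(allsegments)))  # (0.0, 10.0, 25.0, 30.0, 40.0)

out_rows = []
for i in range(len(segments) - 1):
    begin, end = segments[i], segments[i + 1]
    # Items whose extent spans this entire segment, with their dates.
    seg_itemsdict = {r.iID: r.tDate for r in table_in_rows
                     if r.begin <= begin and r.end >= end}
    # Max date, ignoring items with no date at all.
    itemsdates = [d for d in seg_itemsdict.values() if d is not None]
    itemsdates_max = max(itemsdates) if itemsdates else None
    out_rows.append((begin, end, sorted(seg_itemsdict), itemsdates_max))

for r in out_rows:
    print(r)
```

Item 101 covers only the first two segments, item 102 covers the middle ones, and item 103 contributes to the last segment but never supplies a date.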
Posted 05-21-2015 10:56 AM
I found the problem was in creating the itemsdict dictionary. A dictionary has to have unique keys, so the duplicate values in your ItemID field were getting overwritten when put into the dictionary. To solve the problem, I used named tuples to store the input data rows instead of a dictionary. Please test again and let me know how it goes.

def main():
    import arcpy
    import os
    from collections import namedtuple

    # Local variables
    sourcegdb = r"N:\TechTemp\BlakeT\Work\TEMP.gdb"
    table_in = os.path.join(sourcegdb, "SegmentSequence_Input_20150516")
    table_out_name = "SegmentSequence_Output_20150516"

    # Create output table
    arcpy.CreateTable_management(sourcegdb, table_out_name)
    table_out = os.path.join(sourcegdb, table_out_name)
    arcpy.AddField_management(table_out, "UniqueID", "LONG")
    arcpy.AddField_management(table_out, "Begin", "DOUBLE")
    arcpy.AddField_management(table_out, "End", "DOUBLE")
    arcpy.AddField_management(table_out, "SegmentItems", "TEXT", "", "", 255)
    arcpy.AddField_management(table_out, "MaxDate", "DATE")

    # Identify observation groups
    sqlprefix = "DISTINCT UniqueID"
    sqlpostfix = "ORDER BY UniqueID"
    observations = tuple(
        uid[0] for uid in arcpy.da.SearchCursor(
            table_in,
            ["UniqueID"],
            sql_clause=(sqlprefix, sqlpostfix)
        )
    )
    fields_in = [
        "ItemID",
        "Begin_Station",
        "End_Station",
        "TestDate"
    ]
    row_tpl = namedtuple('row_tpl', 'iID, begin, end, tDate')
    fields_out = [
        "UniqueID",
        "Begin",
        "End",
        "SegmentItems",
        "MaxDate"
    ]
    with arcpy.da.InsertCursor(table_out, fields_out) as i_cursor:
        for obsv in observations:
            # Read table into named tuples as (item, begin, end, date)
            where_clause = "UniqueID = {}".format(obsv)
            print("Processing {}".format(where_clause))
            with arcpy.da.SearchCursor(table_in, fields_in, where_clause) as s_cursor:
                table_in_rows = list(map(row_tpl._make, s_cursor))
            # Identify segments
            allsegments = [row.begin for row in table_in_rows] + [row.end for row in table_in_rows]
            segments = tuple(sorted(set(allsegments)))  ## creates only unique segments
            del allsegments
            # Identify items and date in each segment
            for i in range(len(segments) - 1):
                begin = segments[i]
                end = segments[i + 1]
                seg_itemsdict = {
                    row.iID: row.tDate for row in table_in_rows
                    if row.begin <= begin and row.end >= end
                }
                # Write segment items to output table
                itemstext = str(seg_itemsdict.keys()).strip('[]')
                itemsdates = [i for i in seg_itemsdict.values() if i is not None]
                ## Do not attempt to find max date if there
                ## are no dates for the items in the segment.
                if len(itemsdates) > 0:
                    itemsdates_max = max(itemsdates)
                else:
                    itemsdates_max = None
                row = (obsv, begin, end, itemstext, itemsdates_max)
                i_cursor.insertRow(row)
    print("Done. Output table at {}".format(table_out))

if __name__ == '__main__':
    main()
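The dictionary-key pitfall described above is easy to reproduce without arcpy; a small sketch with made-up rows:

```python
from collections import namedtuple

# Rows with a duplicated ItemID in the first field.
rows = [(7, 0.0, 10.0), (7, 10.0, 20.0), (8, 20.0, 30.0)]

# As a dict keyed on ItemID, the second row silently overwrites the first.
as_dict = {r[0]: (r[1], r[2]) for r in rows}
print(len(as_dict))    # -> 2 (one row lost)

# As named tuples, every row survives, duplicates included.
Row = namedtuple('Row', 'iID begin end')
as_tuples = [Row._make(r) for r in rows]
print(len(as_tuples))  # -> 3
```

This is why switching from itemsdict to a list of named tuples fixes the missing-duplicate problem.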
Posted 05-21-2015 09:57 AM
Not sure about the visible scale for labels in ArcGIS Online, but here's an Esri blog post on adding a halo to the label text.

Suppose you want to create halos for your web map labels. First, start with your annotation feature class in ArcMap. You then use the Feature Outline Masks tool (in System Toolboxes > Cartography Tools > Masking Tools), selecting your annotation feature class as your input layer. When you used this tool to create the original labels, you specified a Margin of zero and set the Mask Kind to EXACT. Since you now want to create a halo, set the Margin to greater than zero (in map units) so that feature outlines are created as buffered polygons around the annotation. For the labels in figure 1, a margin of 50 km was used and the Mask Kind set to EXACT (figure 2).

The resulting polygon feature class can then be exported to a shapefile and added to your web map in ArcGIS.com in the same way as the original labels. The effect you see in figure 2 was achieved by placing the halo shapefile underneath the labels shapefile in the web map layers and applying 50% transparency.
Posted 05-14-2015 02:14 PM
Looks like it can't find SegmentSequence_output12 in HT_Overlap_Inscope.gdb. Did you verify that the feature class is there and that it doesn't have a lock?
Posted 05-14-2015 01:52 PM
The best time to start coding is when you have a very specific task. That's how I learned. Get enough very specific tasks to solve with coding and you'll eventually be able to start combining them and seeing the bigger picture of how it all fits together.
Posted 05-13-2015 09:21 AM
I don't get the duplicates when I run the script, so I can't modify the code unless I know what the problem is. Maybe this would be a good time for you to jump into the Python pool and try modifying the code yourself. Use my comments (and the other code posted by Xander) and look up what some of the functions are doing. Empower yourself to solve your own problems and develop new solutions. I don't mean for that to sound harsh, I'm just trying to help.
Posted 05-12-2015 11:33 AM
The pipe material and resistance would be a good candidate for subtypes. You can add multiple subtypes to a single feature class, but I think you can only specify one field from which to assign the subtypes. I think you would have two fields in your feature class (like PipeMaterial and PipeResistance) and four domains in your geodatabase, like:

domPipeMaterial: PVC, PEAD, HD
subtypePipeResistance_PVC: RDE 21, RDE 26, RDE 41
subtypePipeResistance_PEAD: PN 6, PN 8, PN 10
subtypePipeResistance_HD: C25, C30, C40

Then you would assign the domPipeMaterial domain to the PipeMaterial field and specify it as the Subtype field. Then create your subtypes for each material's resistance and assign the appropriate subtype domain to the PipeResistance field for each one. If you wanted other subtypes, I think they would still have to be based on the same subtype field (PipeMaterial, in your case).

ArcGIS Help 10.2 - Creating subtypes
Posted 05-11-2015 03:54 PM
Check out "Re: Some users unable to get attachments". Timothy Hales said: "Make sure you open the discussion up and are not viewing it in your Inbox or Activity Stream."
Posted 05-11-2015 02:52 PM
I noticed in the attribute table screenshot you posted earlier that the ItemIDs are coming through with a decimal. I originally assumed they were integer (LONG) but if they have a decimal it has to be DOUBLE. Please list all your fields and their data type so we are on the same page.
Posted 05-11-2015 01:01 PM
Subtypes would only be used to validate attributes, not geometry. I believe you would have to develop connectivity rules in your geometric network to accomplish what you mentioned. Using subtypes would only support the connectivity rules in the attribute table. ArcGIS Help 10.2 - About geometric network connectivity rules
Posted 05-11-2015 12:57 PM
Here's the code that is in the attachment. I posted this issue in the GeoNet Help place in hopes of getting an answer on the attachment issue: Some users unable to get attachments.

def main():
    import arcpy
    import os

    # Local variables
    sourcegdb = r"N:\TechTemp\BlakeT\Work\TEMP.gdb"
    table_in = os.path.join(sourcegdb, "SegmentSequence_input")

    # Create output table
    arcpy.CreateTable_management(sourcegdb, "SegmentSequence_output")
    table_out = os.path.join(sourcegdb, "SegmentSequence_output")
    arcpy.AddField_management(table_out, "UniqueID", "LONG")
    arcpy.AddField_management(table_out, "Begin", "DOUBLE")
    arcpy.AddField_management(table_out, "End", "DOUBLE")
    arcpy.AddField_management(table_out, "SegmentItems", "TEXT", "", "", 255)
    arcpy.AddField_management(table_out, "MaxDate", "DATE")

    # Identify observation groups
    sqlprefix = "DISTINCT UniqueID"
    sqlpostfix = "ORDER BY UniqueID"
    observations = tuple(
        uid[0] for uid in arcpy.da.SearchCursor(
            table_in,
            ["UniqueID"],
            sql_clause=(sqlprefix, sqlpostfix)
        )
    )
    fields_in = [
        "ItemID",
        "Begin_Station_Num_m",
        "End_Station_Num_m",
        "TestDate"
    ]
    fields_out = [
        "UniqueID",
        "Begin",
        "End",
        "SegmentItems",
        "MaxDate"
    ]
    with arcpy.da.InsertCursor(table_out, fields_out) as i_cursor:
        for obsv in observations:
            # Read table into dictionary with rows as item: (begin, end, date)
            where_clause = "UniqueID = {}".format(obsv)
            itemsdict = {
                r[0]: (r[1], r[2], r[3])
                for r in arcpy.da.SearchCursor(
                    table_in,
                    fields_in,
                    where_clause
                )
            }
            # Identify segments
            allsegments = [s[0] for s in itemsdict.values()] + [s[1] for s in itemsdict.values()]
            segments = tuple(sorted(set(allsegments)))  ## creates only unique segments
            del allsegments
            # Identify items and date in each segment
            for i in range(len(segments) - 1):
                begin = segments[i]
                end = segments[i + 1]
                seg_itemsdict = {
                    k: v[2]
                    for k, v in itemsdict.items()
                    if v[0] <= begin and v[1] >= end
                }
                # Write segment items to output table
                itemstext = str(seg_itemsdict.keys()).strip('[]')
                itemsdates = [i for i in seg_itemsdict.values() if i is not None]
                ## Do not attempt to find max date if there
                ## are no dates for the items in the segment.
                if len(itemsdates) > 0:
                    itemsdates_max = max(itemsdates)
                else:
                    itemsdates_max = None
                row = (obsv, begin, end, itemstext, itemsdates_max)
                i_cursor.insertRow(row)

if __name__ == '__main__':
    main()
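One portability note on the str(seg_itemsdict.keys()).strip('[]') line: it relies on Python 2's list-style repr of dict keys, while in Python 3 it would produce "dict_keys([...])". A join works the same in both (sketch with made-up values):

```python
# Hypothetical segment items: ItemID -> test date (or None).
seg_itemsdict = {123: '2015-01-01', 456: None}

# Version-independent formatting of the keys as plain text.
itemstext = ", ".join(str(k) for k in seg_itemsdict)
print(itemstext)  # -> 123, 456
```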
Posted 05-11-2015 12:37 PM