POST
So I have another layer of complexity that has been added to this with a new JSON spec that I can't figure out: an additional array, 'nestedTableInsert', containing an array for keys and an array for values, but only for a specific table, i.e. when the table k has a specific name. I have created the structure, but I don't know how to make the creation of this additional 'nestedTableInsert' object conditional on k == 'table_x', for example. I'm curious whether this can be done with nested OrderedDict() structures, or if I am going to have to go back to the drawing board with my approach here.

constJSON = OrderedDict([
    ('transactionType', 'zzz'),
    ('transactionSubType', None),
    ('username', 'xxx'),
    ('jobId', 100000000),
    ('EntityType', 'PLACE'),
    ('tableInserts', [OrderedDict([
        ('table', k),
        ('inserts', [OrderedDict([
            ('fields', list(d.keys())),
            ('values', [d[f] for f in d.keys()]),
            ('nestedTableInsert', [OrderedDict([
                ('table', k2),
                ('inserts', [OrderedDict([
                    ('fields', list(d2.keys())),
                    ('values', [d2[f2] for f2 in d2.keys()]),
                ]) for d2 in v2]),
            ]) for k2, v2 in prvDictNorm.iteritems()]),
        ]) for d in v]),
    ]) for k, v in datDictNorm.iteritems()]),
])

So, the code below accomplishes what I am trying to do above. I just need to figure out how to ensure that the nestedTableInsert contains the data for the name array it is being nested into. Currently it just creates the same data in the nested array for each of the multiple place_nm inserts arrays created. Hmmm...

#Encapsulate distilled lists of dictionaries into outer dict structure.
#Creates a dict{ table: list[ dict{ key: value } ] } structure(s) containing
#normalized data elements distilled in previous step(s).
#main distilled & normalized data elements
mainDictNorm = {
    'place': placeNorm,
    'prv_feat_li': prv_feat_liNorm,
    'place_type': place_TypeNorm,
    'place_nm': place_nmNorm,
    'place_geom_p': place_geom_pNorm,
}
#distilled & normalized provider data elements
nestDictNorm = {
    'prv_feat_nm_li': prv_feat_nm_liNorm
}
#construct tableInsert nested structure to go into final JSON string
tableInsert = []
i = 0
for k, v in mainDictNorm.iteritems():
    tableInsert.append({"table": k, "inserts": []})
    if v:
        for d in v:
            flds = list(d.keys())
            if k != "place_nm":
                tableInsert[i]["inserts"].append({
                    "fields": flds,
                    "values": [d[f] for f in flds]})
            else:
                #use a separate index for the nested structure so the
                #outer tableInsert index is not clobbered
                j = 0
                nestedTableInsert = []
                for k2, v2 in nestDictNorm.iteritems():
                    nestedTableInsert.append({"table": k2, "inserts": []})
                    if v2:
                        for d2 in v2:
                            flds2 = list(d2.keys())
                            nestedTableInsert[j]["inserts"].append({
                                "fields": flds2,
                                "values": [d2[f2] for f2 in flds2]})
                    j += 1
                tableInsert[i]["inserts"].append({
                    "fields": flds,
                    "values": [d[f] for f in flds],
                    "nestedTableInsert": nestedTableInsert})
    i += 1
#construct sourceLinktables nested structure to go into final JSON string
sourceLinkTables = []
sourceLink = []
sourceLinkTables.append({"table": "place"})
sourceLink.append({"sourceLibId": 99, "tables": sourceLinkTables})
#construct final JSON string
JSONConstructor=OrderedDict({
'transactionType': 'entityInsert',
'transactionSubType': None,
'username': 'XXXX',
'jobId': 999000000,
'HGEntityType': 'PLACE',
'tableInserts': tableInsert,
'hasProviderFeatureLinkInfo': 'true',
'sourceLinks': sourceLink
})
myJSON = json.dumps(JSONConstructor)
print(myJSON)
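A minimal sketch of one way to make the nesting both conditional and row-specific. This is an assumption on my part, not the poster's code: it supposes the nested rows share a key with their parent row (here 'lu_stat_id' is used as that hypothetical link field), and the sample data is invented. It uses Python 3's .items(); the thread's code is Python 2 and would use .iteritems().

```python
from collections import OrderedDict

# Hypothetical sample data standing in for mainDictNorm / nestDictNorm;
# 'lu_stat_id' is an assumed shared key linking nested rows to a parent row.
mainDictNorm = {
    'place_nm': [{'lu_stat_id': 1, 'place_nm': 'National Monument'},
                 {'lu_stat_id': 2, 'place_nm': 'Dulles Airport'}],
    'place':    [{'lu_stat_id': 1, 'lu_use_desig_id': 1}],
}
nestDictNorm = {
    'prv_feat_nm_li': [{'lu_stat_id': 1, 'prv_nm': 'NM Provider'},
                       {'lu_stat_id': 2, 'prv_nm': 'Airport Provider'}],
}

def make_insert(d):
    """Split one row dict into paired fields/values arrays."""
    flds = list(d.keys())
    return OrderedDict([('fields', flds),
                        ('values', [d[f] for f in flds])])

tableInsert = []
for k, v in mainDictNorm.items():          # .iteritems() under Python 2
    inserts = []
    for d in v:
        ins = make_insert(d)
        if k == 'place_nm':                # nest only for this table...
            ins['nestedTableInsert'] = [OrderedDict([
                ('table', k2),
                # ...and keep only the nested rows that share this
                # parent row's key, so each insert gets its own data
                ('inserts', [make_insert(d2) for d2 in v2
                             if d2.get('lu_stat_id') == d.get('lu_stat_id')]),
            ]) for k2, v2 in nestDictNorm.items()]
        inserts.append(ins)
    tableInsert.append(OrderedDict([('table', k), ('inserts', inserts)]))
```

The helper function keeps the fields/values splitting in one place, and the filter inside the nested comprehension is what stops every parent insert from receiving the same nested data.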
04-22-2019 11:58 AM

POST
Thank you! A slight mod to the for loops in the nested structure, so that each corresponds to its respective OrderedDict() layer, and it works beautifully:

constJSON = OrderedDict([
    ('metafield1', 'somedata'),
    ('metafield2', None),
    ('metafield3', 'somedata'),
    ('metafield4', 100000000),
    ('EntityType', 'PLACE'),
    ('tableInserts', [OrderedDict([
        ('table', k),
        ('inserts', [OrderedDict([
            ('fields', list(d.keys())),
            ('values', [d[f] for f in d.keys()]),
        ]) for d in v]),
    ]) for k, v in dictNorm.iteritems()]),
])
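The comprehension above can be exercised end-to-end with stand-in data. This is a sketch with invented sample values (the real dictNorm holds the distilled cursor output), written with Python 3's .items() in place of the original's .iteritems():

```python
import json
from collections import OrderedDict

# Minimal stand-in for dictNorm: one table, one distilled row.
dictNorm = {'place': [{'lu_stat_id': 1, 'lu_use_desig_id': 1}]}

constJSON = OrderedDict([
    ('EntityType', 'PLACE'),
    ('tableInserts', [OrderedDict([
        ('table', k),
        ('inserts', [OrderedDict([
            ('fields', list(d.keys())),
            ('values', [d[f] for f in d.keys()]),
        ]) for d in v]),
    ]) for k, v in dictNorm.items()]),   # .iteritems() under Python 2
])

print(json.dumps(constJSON))
```

Because each inner OrderedDict reads keys() twice from the same dict, the fields and values arrays are guaranteed to line up pairwise.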
04-02-2019 01:17 PM

POST
Now I am trying to wrap what we have created so far in the outer JSON object {} that contains metadata about the data in the string. I originally thought this was going to be straightforward, but I am stumped. I created a triple-nested OrderedDict that mimics the dictionary-in-list-in-dictionary structure I am pulling my data from, and it produces the full JSON string structure below, but I am unable to integrate or recreate the triple nested for loop (TNFL) logic above that grabs the key/value pairs from the innermost dict structure and unpacks them into their respective "fields": and "values": arrays. Is this even possible (appending inside an OrderedDict structure), or should I adjust fire and try to create this outer JSON object layer in the TNFL defined above?

constJSON = OrderedDict([
    ('metafield1', 'somedata'),
    ('metafield2', None),
    ('metafield3', 'somedata'),
    ('metafield4', 100000000),
    ('EntityType', 'PLACE'),
    ('tableInserts', [OrderedDict([
        ('table', k),
        ('inserts', [OrderedDict([
            ('fields', []),
            ('values', []),
        ])]),
    ]) for k, v in dictNorm.iteritems()]),
])

Output:

{
  "metafield1": "somedata",
  "metafield2": null,
  "metafield3": "somedata",
  "metafield4": 100000000,
  "EntityType": "PLACE",
  "tableInserts": [
    {"table": "place_nm", "inserts": [{"fields": [], "values": []}]},
    {"table": "place", "inserts": [{"fields": [], "values": []}]},
    {"table": "place_geom_p", "inserts": [{"fields": [], "values": []}]},
    {"table": "place_type", "inserts": [{"fields": [], "values": []}]}
  ]
}

What I want to do, but can't (iterate and append inside the named list "inner" inside the OrderedDict; this is not valid Python, since statements cannot appear inside a call):

constJSON = OrderedDict([
    ('metafield1', 'somedata'),
    ('metafield2', None),
    ('metafield3', 'somedata'),
    ('metafield4', 100000000),
    ('EntityType', 'PLACE'),
    ('tableInserts', [OrderedDict(
        inner = []
        i = 0
        for k, v in dictNorm.iteritems():
            inner.append({"table": k, "inserts": []})
            if v:
                for d in v:
                    flds = list(d.keys())
                    inner[i]["inserts"].append({
                        "fields": flds,
                        "values": [d[f] for f in flds]
                    })
            i += 1
    )])
])
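One idiomatic way around the "statements inside a call" limitation, sketched with invented stand-in data: run the loop first, then slot its finished result into the OrderedDict. Written with Python 3's .items() rather than the original's .iteritems().

```python
from collections import OrderedDict

# Stand-in for dictNorm with one table and one distilled row.
dictNorm = {'place_type': [{'lu_stat_id': 1, 'lu_hg_place_type_id': 1}]}

# Statements can't live inside an OrderedDict(...) call, but the loop
# can run beforehand and its result be assigned to the key afterwards.
tableInserts = []
for k, v in dictNorm.items():            # .iteritems() under Python 2
    entry = {"table": k, "inserts": []}
    for d in v:
        flds = list(d.keys())
        entry["inserts"].append({"fields": flds,
                                 "values": [d[f] for f in flds]})
    tableInserts.append(entry)

constJSON = OrderedDict([
    ('metafield1', 'somedata'),
    ('EntityType', 'PLACE'),
    ('tableInserts', tableInserts),      # precomputed list slots in here
])
```

Appending to a local `entry` dict also removes the need for the manual index counter `i`, since each table's entry is built completely before being appended.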
04-02-2019 10:57 AM

POST
Yes indeed. I couldn't quite figure out how to construct that innermost for loop in the TNFL. I see you picked up on some clues to the larger JSON string being constructed. I just modified it to fit the larger string structure and to iterate through multiple lists of dictionaries, each containing distilled information from a specific table.

constJSON = []
i = 0
for k, v in dictNorm.iteritems():
    constJSON.append({"table": k, "inserts": []})
    if v:
        for d in v:
            flds = list(d.keys())
            constJSON[i]["inserts"].append({
                "fields": flds,
                "values": [d[f] for f in flds]
            })
    i += 1

Output:

[
  {"table": "place_nm", "inserts": [
    {"fields": ["lu_stat_id", "lu_use_desig_id", "place_nm"], "values": [1, 1, "National Monument"]},
    {"fields": ["lu_stat_id", "lu_use_desig_id", "place_nm"], "values": [2, 1, "Dulles Airport"]}
  ]},
  {"table": "place", "inserts": [
    {"fields": ["lu_stat_id", "lu_use_desig_id"], "values": [1, 1]}
  ]},
  {"table": "place_geom_p", "inserts": [
    {"fields": ["latitude", "longitude"], "values": [5.55555, 55.55555]},
    {"fields": ["latitude", "longitude"], "values": [4.444444, 44.44444]}
  ]},
  {"table": "place_type", "inserts": [
    {"fields": ["lu_stat_id", "lu_hg_place_type_id", "lu_use_desig_id", "lu_use_rank_num"], "values": [1, 1, 1, 1]}
  ]}
]
03-27-2019 04:15 PM

POST
Yes, it was my own "freehand" JSON, provided as an example to show the key/value pairs in a single array as they exist in their original dictionary structure, not separated out into "fields": and "values": arrays. I've corrected the invalid JSON.
03-27-2019 10:22 AM

POST
Greetings. Using Python and ArcPy search cursors, I've extracted list(s) of dictionaries containing normalized key/value pairs originating from specific tables but residing in a denormalized database layer. In the same script, I am now creating a JSON string with an object containing field and value pair arrays [] that are to contain the keys and values (key/value pairs) I've already distilled into lists of dictionary structures. There is an 'inserts' object created for each source table the data is distilled from, so I end up with a structure having field and value arrays contained within objects, contained within 'inserts' arrays, contained within objects, contained within 'tableInserts' arrays, contained within the JSON object, as follows:

{ "tableInserts": [ { "inserts": [ { "fields": [], "values": [] } ] } ] }

I've got my Python script to kick out the structure I am after, but I am having difficulty separating the key/value pairs contained in my lists of dictionary structures into their respective field and value target JSON arrays.

dictNorm = {
    'place': placeNorm,
    'place_type': place_TypeNorm,
    'place_nm': nameNorm,
    'place_geom_p': geomNorm,
}
constJSON = []
i = 0
for key, val in dictNorm.iteritems():
    constJSON.append({"table": key, "inserts": []})
    for p in val:
        constJSON[i]["inserts"].append({"fields": p})
    i += 1

Portion of the produced JSON string structure:

{
  "table": "place_nm",
  "inserts": [
    {"fields": [{"lu_stat_id": 1, "lu_use_desig_id": 1, "place_nm": "<name>"}]}
  ]
},

Whereas I need it to produce this (specifically, breaking the key/value pairs parsed from the dictionaries into "fields": and "values": arrays):

{
  "table": "place_nm",
  "inserts": [
    {"fields": ["lu_stat_id", "lu_use_desig_id", "place_nm"], "values": [1, 1, "<name>"]}
  ]
},

The challenge is that the dictNorm dictionary structure references lists of dictionaries for the individual key/value pairs I want to separate out, so I need to unpack key/value pairs existing in dictionaries wrapped in lists that are inside of a dictionary. I'm thinking I need another for loop iterator below line 13 to parse through these sub-lists of dictionaries called in the dictNorm structure. If you were to print out dictNorm, a specific dictionary contained within would look like:

'place_nm': [
    {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': u'National Monument'},
    {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': u"Joe's Bar"},
    {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': u"Pike's Peak"},
    {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': u'Niagara Falls'},
    {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': u'Dulles Airport'},
    {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': u'Bay Bridge'},
    {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': u'Bagram Airfield'},
    {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': u'McDonals'},
]

Any assistance would be appreciated, as I know I have a lot going on here before getting to the JSON construction piece.
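The missing piece is exactly the extra loop level the post guesses at: one more for loop over each list, where each element p is itself a dict whose keys and values can be split into the two arrays. A sketch with two invented sample rows, using Python 3's .items() in place of the original's .iteritems():

```python
# Stand-in for dictNorm: one table key referencing a list of row dicts.
dictNorm = {
    'place_nm': [
        {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': 'National Monument'},
        {'lu_stat_id': 1, 'lu_use_desig_id': 1, 'place_nm': "Joe's Bar"},
    ],
}

constJSON = []
for key, val in dictNorm.items():        # .iteritems() under Python 2
    entry = {"table": key, "inserts": []}
    for p in val:                        # the extra loop level: p is one row dict
        entry["inserts"].append({
            "fields": list(p.keys()),
            "values": list(p.values()),  # keys()/values() order matches per dict
        })
    constJSON.append(entry)
```

Each row dict becomes one inserts element with parallel fields/values arrays, instead of the whole dict landing inside "fields".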
03-26-2019 02:09 PM

POST
Ran this by the team, and they stated that the objectid we will be using is not registered with the geodatabase.
02-28-2019 11:09 AM

POST
This did the trick. My apologies for only having latitude explicitly specified as one of the field parameters; it should have been:

(tableRegistry, ['location_id', 'latitude', 'longitude'], whereExp)

And here is the final version. It returns exactly what I wanted without having to do a second pass with a separate search cursor to pull in the attributes. For context, there are two object IDs (non-normalized) that match on a conflated location_id I am returning (the object IDs below are anonymized), but I only want one location_id (normalized) and the associated set of lat/long coords returned:

objectIDList = (55555, 44444) #small test
#gets distinct location_ids
def getUniqueLocIDs():
    tableRegistry = os.path.join(conn_hg, "plc.plc.conflated_places_lv1")
    whereExp = "objectid IN {0}".format(objectIDList)
    uniqueLocIDs = set("{0}, {1}, {2}".format(row[0], row[1], row[2])
                       for row in arcpy.da.SearchCursor(
                           tableRegistry,
                           ['location_id', 'latitude', 'longitude'],
                           whereExp))
    return list(uniqueLocIDs)

print(getUniqueLocIDs())
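One variation worth considering on the snippet above: keeping the rows as tuples in the set, rather than joining the columns into formatted strings, preserves the numeric types for later use. arcpy is not available here, so a plain list of invented row tuples stands in for the SearchCursor in this sketch:

```python
# Simulated cursor rows: (location_id, latitude, longitude). In the real
# script these would come from arcpy.da.SearchCursor(tableRegistry,
# ['location_id', 'latitude', 'longitude'], whereExp).
rows = [(10, 5.55555, 55.55555),
        (10, 5.55555, 55.55555),   # duplicate row from a second objectid
        (20, 4.44444, 44.44444)]

# A set of tuples deduplicates while keeping lat/long as floats,
# unlike flattening each row to a "id, lat, lon" string.
uniqueLocIDs = sorted(set(rows))
```

Cursor rows are already tuples, so `set(cursor)`-style collection works directly and avoids re-parsing the strings if the coordinates are needed downstream.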
02-28-2019 09:07 AM

POST
The end state is to retrieve location_id, lat, and long by passing in a list of objectIDs, but there exists a 1:M relationship between location_ids and object IDs (one location_id can match to multiple objectIDs) in this non-normalized view I am querying. My result needs to be normalized, so while I am matching on object ID to resolve distinct location_id keys, I don't necessarily need objectID in my result set. This leads me to believe that maybe I need to retrieve the normalized key structure as a first pass to establish the normalized relational framework (which the block of code I provided does do when just passing object_id in as the field param), and then query for my attributes using that returned key or keys in a second pass with another search cursor. But before I get to that point, I just want to figure out why only one column of the fields returns, regardless of whether I provide an explicit list, use an asterisk (all), or pass in 'fields' as the param. It just returns the first column of whatever is passed in, regardless of which method I use.
02-28-2019 08:49 AM

POST
Greetings, I am running into a small problem. The SearchCursor documentation states that search cursors can handle multiple field_names, yet the script below only returns the first field name explicitly stated in the field_names parameter of the search cursor. If I use an asterisk, it returns the first field name in the list. I need it to return the location_id, lat, and long attributes from the view I am querying. Is it possible for a SearchCursor to return more than one column/field?

#Explicitly defined object IDs known to be conflated in conflation data model
objectIDList = (55555, 44444) #small test
#gets distinct location_ids
def getUniqueLocIDs():
    tableRegistry = os.path.join(conn, "myView")
    fields = ['objectid', 'location_id', 'latitude', 'longitude']
    whereExp = "objectid IN {0}".format(objectIDList)
    uniqueLocIDs = list(set("{0}".format(row[0])
                            for row in arcpy.da.SearchCursor(
                                tableRegistry, ['location_id', 'latitude'], whereExp)))
    return list(uniqueLocIDs)

print(getUniqueLocIDs())
02-27-2019 02:56 PM

POST
I was close. I had tried this:

whereExp = "OBJECTID IN ".format(objectIDList)

Thank you!
02-25-2019 09:24 AM

POST
This is actually more of an exercise on my part, for the time being, to get familiar with ArcPy search cursors for extracting data from a relational database. Just assume, for the purpose of this exercise, that objectid is an appropriate entity key for matching against relational attributes, and that I have a list of keys I want to pass into the where clause when I call for data from the database. I think I will try to construct a SQL where clause as a parameter of the search cursor and format my list as a tuple: WHERE key IN <comma-separated string>. Something like the example below:

objectIDList = (55555, 44444, 33333, 22222)
# Set overwrite to true, for in memory table view
arcpy.env.overwriteOutput = True
tableRegistry = os.path.join(conn, "myView")
fields = ['objectid', 'place_id']
##Use SearchCursor with list comprehension to return a
##unique set of place_id values from the place_id field
whereExp = "objectid IN ({0})".format(",".join(str(v) for v in objectIDList))
values = [row[0] for row in arcpy.da.SearchCursor(tableRegistry, 'place_id', whereExp)]
uniqueValues = set(values)
print(uniqueValues)
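The where-clause construction on its own is plain string work and can be checked without arcpy. A sketch of that piece, noting that objectid is numeric, so the joined values are left unquoted:

```python
objectIDList = (55555, 44444, 33333, 22222)

# Build "objectid IN (...)" for a numeric key: str() keeps the values
# bare (no quotes), and join produces the comma-separated body.
whereExp = "objectid IN ({0})".format(",".join(str(v) for v in objectIDList))
```

For a text key, each value would instead need quoting, e.g. "'{0}'".format(v) inside the join; mixing the two up is a common source of where_clause errors.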
02-24-2019 08:22 PM

POST
Greetings, I'm new to GIS and new to Python. I'm trying to return a distinct array (or list, or dictionary) of place_ids where objectid matches a predefined list (which will eventually be provided by the user). There is a 1:M relationship between objectids and place_ids, and the script currently only returns objectids, when what I need is distinct place_ids. I'm essentially trying to extract normalized key elements (and eventually attributes) from a table that is not normalized. Note, I could bring in all rows for objectid and place_id from my view and then compare or join on my objectIDFilter list, but that would be millions of rows. How can I apply the filter as I'm calling data from my tableRegistry, to limit the data I am bringing in to just those tuples that match against my list? To demonstrate using SQL the end state I am after and how I am thinking through the logic in my head, it would be written as (assuming my list was a table or view, which it's not):

SELECT DISTINCT
    v.place_id
FROM
    mView v
INNER JOIN objectIDFilter f
    ON v.objectid = f.objectid

This is wrong, but it's all I've got...

objectIDFilter = (12345,54321,55555,44444,33333,22222,...)
# Set overwrite to true, for in memory table view
arcpy.env.overwriteOutput = True
tableRegistry = os.path.join(conn, "myView")
fields = ['objectid', 'place_id']
uniquePlaceIDs = {}
with arcpy.da.SearchCursor(tableRegistry, fields) as cursor:
    occurances = [row[0] for row in cursor]
for i in set(objectIDFilter):
    print(occurances)
    #icount = occurances.count(i)
    #print("{} has {} records".format(i, icount))

Thanks in advance for any assistance that can be provided.
02-22-2019 11:02 AM