POST
Hi, I have to admit I'm at a loss trying to use this toolset, even though the output is incredible. There are some videos in the community from 2015 about importing AIXM data using ArcMap, and there's a 3-part YouTube series (Aviation Charting in ArcGIS Pro - YouTube) that demonstrates what the tool does. The data is pre-prepared and the annotations already seem to be created for the demo, though, which makes the whole thing look more like magic. Are there videos or tutorials anywhere that demonstrate using this tool from start to finish? Or at least walkthroughs that show AIXM data being imported and then stylized? Is going through the data dictionary the only way to know which pre-defined table the data goes into? I have been through Get started with ArcGIS Aviation Charting—ArcGIS Pro | Documentation. I can complete setting up the aviation geodatabase, get the template, and draw an AOI, but from there I don't know what else to do. I attempted to use the APT_AIXM file from FAA NASR Data to load just airports and apply the charting style; based on the data dictionary (AIS schema data dictionary—ArcGIS Pro | Documentation), that should load into the ADHP table, but the table is empty on import. Thanks for reading through, and thanks for any assistance.
08-21-2024 10:19 AM
POST
I just pasted that in and it worked out of the box. Did you also try adding media -> chart via the map?
07-15-2024 08:15 AM
POST
You may want to remove the password in def Portal_push(). Without a response message it's hard to tell anything. After the update line, add:

resp = agoLayer.edit_attributes(updates=[unit_dict])
print(resp)

I was going to guess the update you're sending is too large and needs to be split into multiple updates. [EDIT]: Eh, maybe not... you're only updating one feature at a time. Maybe it's a bad data type, so the service rejects the update.
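For reference, here's a minimal, standalone sketch of how you might inspect that response once you have it. The dict shape below mimics the ArcGIS REST applyEdits result format (updateResults / success / error), but treat the exact keys as an assumption to verify against your actual resp:

```python
# Hypothetical sketch: inspecting an edit response for per-feature failures.
# The dict shape mimics an ArcGIS REST applyEdits result; verify the keys
# against the real response from edit_attributes.
def summarize_edit_response(resp):
    """Return (succeeded, messages) for an applyEdits-style response dict."""
    messages = []
    ok = True
    for result in resp.get("updateResults", []):
        if not result.get("success", False):
            ok = False
            err = result.get("error", {})
            messages.append(f"OBJECTID {result.get('objectId')}: "
                            f"{err.get('code')} {err.get('description')}")
    return ok, messages

sample = {"updateResults": [
    {"objectId": 7, "success": False,
     "error": {"code": 1000, "description": "Invalid field type"}}]}
ok, msgs = summarize_edit_response(sample)
print(ok, msgs)
```

If success comes back False with an error code, that usually narrows it down to one offending field much faster than guessing.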
05-30-2024 05:28 AM
POST
It seems possible, albeit somewhat involved: Part 2 - Working with Geometries | ArcGIS API for Python. About halfway down is a "Creating geometries interactively using the map widget" section. It seems like you'll have to capture what the user draws inside the widget, copy the geometry, then append a new feature with that geometry to your layer. To fill out the metadata, you could import ipywidgets and add a few fields the user fills out: https://ipywidgets.readthedocs.io/en/stable/
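If it helps, a hedged sketch of that last step: packaging a widget-drawn geometry plus form values into the adds payload that FeatureLayer.edit_features() expects. The geometry and the field names ("name", "category") are made up for illustration:

```python
# Hedged sketch: wrap a geometry captured from the map widget plus a few
# ipywidgets form values as one new feature for edit_features(adds=...).
# The field names and the captured geometry below are invented.
def build_add_payload(drawn_geometry, form_values):
    """Pair a drawn geometry with user-entered attributes as a one-feature list."""
    return [{"attributes": dict(form_values), "geometry": drawn_geometry}]

captured = {"rings": [[[0, 0], [0, 10], [10, 10], [10, 0], [0, 0]]],
            "spatialReference": {"wkid": 4326}}
adds = build_add_payload(captured, {"name": "Test area", "category": "draft"})

# Then, assuming `layer` is your FeatureLayer:
# resp = layer.edit_features(adds=adds)
```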
05-28-2024 09:01 AM
POST
I actually might? No idea if this will be of any use; I haven't looked at it since 2019, but I'll dump it in here and you can pick through it and see if anything jumps out at you as helpful. I'll try to pull out only the relevant bits. The notebook below was, at one point, more or less an email alert system that used Python pickling to store data based on a geospatial evaluation. The notebook ran on a schedule, and any changes detected between the stored/pickled dataframe and the new dataframe were emailed out. I eventually abandoned the whole thing because it wasn't "great" and jumped over to GeoEvent Server instead. I imagine your workflow could be something like: your notebook (if you're using a notebook) is set to run every 15 minutes; it "pickles" your FeatureLayer on the first run; every subsequent run compares the latest FeatureLayer pull with the pickled one; changes are emailed and the pickle is overwritten with the latest. Your use case is probably a lot simpler than the one I have below.

from arcgis.gis import GIS
import os
import pickle
import logging
import pandas as pd
import smtplib
import csv
from arcgis.features import GeoAccessor, GeoSeriesAccessor
from IPython.core.display import display, HTML

log = logging.getLogger(__name__)  # `log` is used in send_email_smtp() below
# Pandas Columns to Drop Post Join
droppable_columns = ['index_right', 'objectid']
# Historic File Checker
h_pickle = False
# Global Run Flag. Used in each block.
global_run_flag = True
# Who is getting these emails
to_email_list = ['test1@test.com', 'test2@test.com']
def send_email_smtp(recipients, message, subject="Test Notification from AGOL Notebook - Tsunami"):
    # Sends the `message` string to all of the emails in the `recipients` list using the configured SMTP email server.
try:
# Set up server and credential variables
smtp_server_url = "relay_server"
smtp_server_port = 25
sender = "alert_sample@test.com"
# username = "your_username"
# password = secrets["smtp_email_password"]
# Instantiate our server, configure the necessary security
server = smtplib.SMTP(smtp_server_url, smtp_server_port)
server.ehlo()
# server.starttls() # Needed if TLS is required w/ SMTP server
# server.login(username, password)
except Exception as e:
log.warning("Error setting up SMTP server, couldn't send " +
f"message to {recipients}")
raise e
# For each recipient, construct the message and attempt to send
did_succeed = True
for recipient in recipients:
try:
message_body = '\r\n'.join(['To: {}'.format(recipient),
'From: {}'.format(sender),
'MIME-Version: 1.0',
'Content-type: text/html',
'Subject: {}'.format(subject),
'',
'{}'.format(message)])
message_body = message_body.encode("utf-8")
server.sendmail(sender, [recipient], message_body)
print(f"SMTP server returned success for sending email "\
f"to {recipient}")
except Exception as e:
log.warning(f"Failed sending message to {recipient}")
log.warning(e)
did_succeed = False
# Cleanup and return
server.quit()
return did_succeed
def create_empty_spatial() :
default_df = pd.DataFrame(columns=['ASSOC_CITY','ASSOC_ST','ActivePerimeter','CENTER','FACID','FACTYPE','GEO_CAT','GeoTime','LATITUDE','LG_NAME','LONGITUDE','OBJECTID','SHAPE','alertcode'])
return default_df
def create_spatial_evalulation_df(geospatialspatial_ww_area_frame) :
evaluation_df = tsunami_point_sdf.spatial.join(geospatialspatial_ww_area_frame, how='left')
evaluation_df = evaluation_df.drop(droppable_columns, axis = 1)
evaluation_df = evaluation_df.dropna(subset=['alertcode'])
evaluation_df['ActivePerimeter'] = 'Y'
return evaluation_df
def pickle_eval() :
default_df = create_empty_spatial()
pickle_check = os.listdir('.')
warning_pickle_file = 'tsunami_intersection_warning_previous.pkl' not in pickle_check
    watch_pickle_file = 'tsunami_intersection_watch_previous.pkl' not in pickle_check
advisory_pickle_file = 'tsunami_intersection_advisory_previous.pkl' not in pickle_check
if warning_pickle_file == True :
default_df.to_pickle("tsunami_intersection_warning_previous.pkl")
if watch_pickle_file == True :
default_df.to_pickle("tsunami_intersection_watch_previous.pkl")
if advisory_pickle_file == True :
default_df.to_pickle("tsunami_intersection_advisory_previous.pkl")
return True
def df_size_eval(frame):
s = frame.size
return s
def display_frames(framelist) :
for f in framelist :
display (f)
# Misc Notes
def Cleanup(*args) :
for filenames in args :
        os.remove(filenames)

The evaluation stuff, if I've pulled it out correctly, is below.

# Get our information layers and create dataframes. Split by warning type
tsunami_point_sdf = pd.DataFrame.spatial.from_layer(gis.content.get(Facility_Equipment_Points).layers[0])
tsunami_layer_sdf = select_features.sdf
# Query Watch/Warning/Advisory Layers and pull polygons based on alertCode
tsunami_layer_advisory_df = tsunami_layer_sdf.query('alertcode == "TSY"')
tsunami_layer_watch_df = tsunami_layer_sdf.query('alertcode == "TSA"')
tsunami_layer_warning_df = tsunami_layer_sdf.query('alertcode == "TSW"')
# Display Data
display_frames([tsunami_layer_advisory_df, tsunami_layer_watch_df, tsunami_layer_warning_df])
# If we have Data, create a df for evaluation
if df_size_eval(tsunami_layer_advisory_df) == 0 :
tsunami_intersection_advisory_df = create_empty_spatial()
else :
tsunami_intersection_advisory_df = create_spatial_evalulation_df(tsunami_layer_advisory_df)
if df_size_eval(tsunami_layer_watch_df) == 0 :
tsunami_intersection_watch_df = create_empty_spatial()
else :
tsunami_intersection_watch_df = create_spatial_evalulation_df(tsunami_layer_watch_df)
if df_size_eval(tsunami_layer_warning_df) == 0 :
tsunami_intersection_warning_df = create_empty_spatial()
else :
tsunami_intersection_warning_df = create_spatial_evalulation_df(tsunami_layer_warning_df)
### This is the pickle evaluation part ###
# Load our old data. We need to check to see if the files exist. If they don't, we create empty files based on the current advisories format. Basically, push an empty DF with the right columns to the directory.
if global_run_flag is True :
if h_pickle is False:
h_pickle = pickle_eval()
# Read our previous (or new)
previous_tsunami_intersection_advisory_df = pd.read_pickle("tsunami_intersection_advisory_previous.pkl")
previous_tsunami_intersection_watch_df = pd.read_pickle("tsunami_intersection_watch_previous.pkl")
previous_tsunami_intersection_warning_df = pd.read_pickle("tsunami_intersection_warning_previous.pkl")
display_frames([previous_tsunami_intersection_advisory_df, previous_tsunami_intersection_watch_df, previous_tsunami_intersection_warning_df])
# Determine if facilities is no longer under a watch/warning. Only retain records where the previous remains in new
if global_run_flag is True :
display (previous_tsunami_intersection_advisory_df['FACID'].isin(tsunami_intersection_advisory_df['FACID']))
display (previous_tsunami_intersection_watch_df['FACID'].isin(tsunami_intersection_watch_df['FACID']))
display (previous_tsunami_intersection_warning_df.FACID.isin(tsunami_intersection_warning_df.FACID))
previous_tsunami_intersection_advisory_df = previous_tsunami_intersection_advisory_df[previous_tsunami_intersection_advisory_df.FACID.isin(tsunami_intersection_advisory_df.FACID)]
previous_tsunami_intersection_watch_df = previous_tsunami_intersection_watch_df[previous_tsunami_intersection_watch_df.FACID.isin(tsunami_intersection_watch_df.FACID)]
previous_tsunami_intersection_warning_df = previous_tsunami_intersection_warning_df[previous_tsunami_intersection_warning_df.FACID.isin(tsunami_intersection_warning_df.FACID)]
# Now we do the opposite. Which facilities are newly detected as under things.
if global_run_flag is True :
display (tsunami_intersection_advisory_df['FACID'].isin(previous_tsunami_intersection_advisory_df['FACID']))
display (tsunami_intersection_watch_df['FACID'].isin(previous_tsunami_intersection_watch_df['FACID']))
    display (tsunami_intersection_warning_df['FACID'].isin(previous_tsunami_intersection_warning_df.FACID))
email_tsunami_intersection_advisory_df = tsunami_intersection_advisory_df[~tsunami_intersection_advisory_df.FACID.isin(previous_tsunami_intersection_advisory_df.FACID)]
email_tsunami_intersection_watch_df = tsunami_intersection_watch_df[~tsunami_intersection_watch_df.FACID.isin(previous_tsunami_intersection_watch_df.FACID)]
email_tsunami_intersection_warning_df = tsunami_intersection_warning_df[~tsunami_intersection_warning_df.FACID.isin(previous_tsunami_intersection_warning_df.FACID)]
display_frames([email_tsunami_intersection_advisory_df, email_tsunami_intersection_watch_df, email_tsunami_intersection_warning_df])
# We now want to concatenate our updated previous dataframe and our newest dataframe. Our previous dataframe only has facilities still under an advisory, while our email_tsunami_intersection_advisory_df has only new ones.
if global_run_flag is True :
tsunami_intersection_advisory_df = pd.concat([previous_tsunami_intersection_advisory_df, email_tsunami_intersection_advisory_df])
tsunami_intersection_watch_df = pd.concat([previous_tsunami_intersection_watch_df, email_tsunami_intersection_watch_df])
tsunami_intersection_warning_df = pd.concat([previous_tsunami_intersection_warning_df, email_tsunami_intersection_warning_df])
# Now we just need to pickle the current dataframes we've generated for each alert code and set them to our previous.
if global_run_flag is True :
tsunami_intersection_advisory_df.to_pickle("tsunami_intersection_advisory_previous.pkl")
tsunami_intersection_watch_df.to_pickle("tsunami_intersection_watch_previous.pkl")
tsunami_intersection_warning_df.to_pickle("tsunami_intersection_warning_previous.pkl")
if global_run_flag is True :
if email_tsunami_intersection_advisory_df.size > 0 or email_tsunami_intersection_watch_df.size > 0 or email_tsunami_intersection_warning_df.size > 0 :
send_email = True
print (send_email)
else :
send_email = False
print (send_email)
# Now we have a list of facilities already split out by warning type. We want to build this into one html email.
if global_run_flag is True :
# This sends only the additions/newest
tsy_notification = build_notification_table("Tsunami Advisory (TSY)", email_tsunami_intersection_advisory_df)
tsa_notification = build_notification_table("Tsunami Watch (TSA)", email_tsunami_intersection_watch_df)
tsw_notification = build_notification_table("Tsunami Warning (TSW)", email_tsunami_intersection_warning_df)
# This sends all data including new ones
# tsw_notification = build_notification_table("Tsunami Warning (TSW)", tsunami_intersection_warning_df)
# tsa_notification = build_notification_table("Tsunami Watch (TSA)", tsunami_intersection_watch_df)
# tsy_notification = build_notification_table("Tsunami Advisory (TSY) TESTING USES FWW", tsunami_intersection_advisory_df)
    my_message = f"""
<div style="width: 100%; font-family:Calibri, Arial, sans-serif">
<p style="font-size: 16px">A Tsunami Advisory, Watch, or Warning has been issued with potential impact to the following facilities & equipment:</p>
</div>
<div style="font-family:Calibri, Arial, sans-serif">
<p style="font-size: 12px; font-weight: bold">
{tsw_notification}
</p>
</div>
<br>
<div style="font-family:Calibri, Arial, sans-serif">
<p style="font-size: 12px; font-weight: bold">
{tsa_notification}
</p>
</div>
<div style="font-family:Calibri, Arial, sans-serif">
<p style="font-size: 12px; font-weight: bold">
{tsy_notification}
</p>
</div>
<p style="font-family:Calibri, Arial, sans-serif; font-size: 10.5px">Questions? Need removed? Email john.r.evans@faa.gov</p>
"""
if global_run_flag is True :
if send_email is True :
send_email_smtp(recipients = to_email_list, message = my_message)
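To boil the pattern down, here's a tiny stdlib-only sketch of the pickle/compare/overwrite loop using plain dicts of {FACID: record} instead of DataFrames (the file name and record shape are invented):

```python
import os
import pickle

# Minimal sketch of the pickle-and-diff alert pattern described above.
# State file name and record shape are made up for illustration.
STATE_FILE = "previous_alerts.pkl"

def diff_against_previous(current, state_file=STATE_FILE):
    """Return newly detected FACIDs, then overwrite the stored state."""
    if os.path.exists(state_file):
        with open(state_file, "rb") as f:
            previous = pickle.load(f)
    else:
        previous = {}  # first run: everything counts as new
    new_ids = sorted(set(current) - set(previous))
    with open(state_file, "wb") as f:
        pickle.dump(current, f)
    return new_ids

# First run pickles everything; later runs report only the additions.
print(diff_against_previous({"SEA": 1, "PDX": 2}))
print(diff_against_previous({"SEA": 1, "PDX": 2, "ANC": 3}))
```

Each run overwrites the pickle, so the "previous" state is always one interval behind, which is exactly what a 15-minute scheduled notebook needs.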
05-24-2024 07:09 AM
POST
Poking around for a few minutes, I don't see a register_listener operation in the docs. Do you have a link to it anywhere? https://developers.arcgis.com/python/api-reference/arcgis.features.managers.html#featurelayermanager If I do a quick dir() check, the operation doesn't exist:

dir(managers.FeatureLayerManager)
['__class__',
'__delattr__',
'__dict__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__getattribute__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__le__',
'__lt__',
'__module__',
'__ne__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__weakref__',
'_check_status',
'_get_status',
'_hydrate',
'_invoke',
'_refresh',
'_refresh_callback',
'_token',
'add_to_definition',
'contingent_values',
'delete_from_definition',
'field_groups',
'fromitem',
'properties',
'refresh',
'truncate',
'update_definition']
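As an aside, that dir() check generalizes to any class. A quick sketch on a stand-in class (a fake, since FeatureLayerManager itself needs the arcgis package installed):

```python
# Sketch of the dir() check above, using a stand-in class instead of
# arcgis.features.managers.FeatureLayerManager (which needs arcgis installed).
class Demo:
    def truncate(self): ...
    def refresh(self): ...

def has_operation(cls, name):
    """True if `name` is a public attribute/operation on the class."""
    return name in [a for a in dir(cls) if not a.startswith("_")]

print(has_operation(Demo, "truncate"))           # True
print(has_operation(Demo, "register_listener"))  # False
```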
05-24-2024 05:52 AM
POST
If you plop both spatial data frames on a map widget does the data look correct? Hard to do much without seeing the actual data.
05-13-2024 06:35 AM
POST
Awesome. Yeah, after a quick glance it's probably something like changing

df["date_str"] = pd.to_datetime(df['fluorometry_date']).dt.strftime('%m/%d/%Y')

to

df["date_str"] = pd.to_datetime(df['fluorometry_date']).dt.strftime('%m/%dd/%YYYY')
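One caveat worth flagging here: %d and %Y are the actual strftime directives, so doubled letters like %dd or %YYYY aren't expanded; the extra characters come through as literals. A quick way to see what any candidate format string actually produces:

```python
from datetime import datetime

# strftime directives are single-letter: %d (day) and %Y (4-digit year).
# Doubled letters are not directives, so the extras pass through literally.
d = datetime(2024, 5, 7)
print(d.strftime("%m/%d/%Y"))      # 05/07/2024
print(d.strftime("%m/%dd/%YYYY"))  # 05/07d/2024YYY
```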
05-07-2024 11:55 AM
POST
Sure seems that way; you're awfully close. The quickest way to check would be to comment out the date and see if success: true comes back and your updates show:

# cyanopond_feature.attributes['Sample_Date'] = fluorsample_feature.attributes['fluorometry_date']

From there you can figure out what's going on; your pandas column is likely not formatted the way AGOL wants it to be. https://doc.arcgis.com/en/arcgis-online/manage-data/work-with-date-fields.htm This link has a table of what your timestamp should look like. You'll probably have to mess with your ['Sample_Date'] column a little to get it right.
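Another angle that can sidestep string formatting entirely (an assumption worth checking against the linked doc): hosted feature services generally store date fields as epoch milliseconds in UTC, so sending an integer timestamp often works where date strings get rejected:

```python
from datetime import datetime, timezone

# Assumption to verify: AGOL date fields accept epoch milliseconds (UTC).
def to_epoch_ms(dt):
    """Convert an aware (or naive-as-UTC) datetime to epoch milliseconds."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

print(to_epoch_ms(datetime(2024, 5, 7, tzinfo=timezone.utc)))  # 1715040000000
```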
05-07-2024 11:19 AM
POST
Should be able to. I'm not super familiar with Field Maps, but if I've got a good read on what you're asking for, I think it looks something like:

var poly1 = FeatureSetByName($map, "Plantation Areas")
var intersects = Intersects(Geometry($feature), poly1)
var distinct_planting = []
var display_output = ""
for(var f in intersects) {
Push(distinct_planting, f['plantation_year_field'])
}
distinct_planting = Distinct(distinct_planting)
for (var pyear in distinct_planting) {
display_output = display_output + distinct_planting[pyear] + ' '
}
display_output = Trim(display_output)
return {
type : 'text',
text : display_output
}
05-07-2024 10:03 AM
POST
What response are you getting when you run the update? Can you add in:

resp = cyanoponds_lyr.edit_features(updates=[cyanopond_feature])
print(resp)

It's sending out an update and getting a response back successfully, but the response is probably saying "hey, I can't find this record, so nothing was updated" or something; it's not throwing an actual exception. I've wrestled with this before, and how I fixed it was to create a template feature based on the fields I'm updating (in my example I'm only making things Active or Inactive), update the attributes I want to send along with the OBJECTID, and then fire them off. row is just the row of data in my dataframe.

import copy

features_to_update = []
def create_update_feature(row, objid) :
# Establish Feature Template
template_feature = {"attributes": {"OBJECTID": '', "active": ''}}
# Copy Template to a new feature var
update_feature = copy.deepcopy(template_feature)
# assign the updated values
update_feature['attributes']["OBJECTID"] = int(objid)
update_feature['attributes']["active"]=row["active"]
# Return the update feature
return update_feature
update_feature = create_update_feature(row, update_objid)
features_to_update.append(update_feature)

Then you would do something like:

cyanoponds_lyr.edit_features(updates=features_to_update)

Note that features_to_update is already a list, so don't wrap it in another set of brackets. This helped me out a lot: https://developers.arcgis.com/python/guide/editing-features/
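To round that out, a self-contained sketch of driving the same template-feature idea across a whole set of rows, with batching in case the update list gets large (the batch size is arbitrary, and the field names mirror the example above):

```python
import copy

# Template-feature pattern: only the fields being edited, plus OBJECTID.
TEMPLATE = {"attributes": {"OBJECTID": "", "active": ""}}

def build_updates(rows):
    """rows: iterable of (objectid, active_value) pairs -> list of features."""
    features = []
    for objid, active in rows:
        feat = copy.deepcopy(TEMPLATE)  # deepcopy so the template stays clean
        feat["attributes"]["OBJECTID"] = int(objid)
        feat["attributes"]["active"] = active
        features.append(feat)
    return features

def chunks(seq, size=250):
    """Yield successive batches so no single edit call gets too big."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

updates = build_updates([(1, "Y"), (2, "N"), (3, "Y")])
# Then, per batch, assuming `cyanoponds_lyr` is your FeatureLayer:
# for batch in chunks(updates):
#     resp = cyanoponds_lyr.edit_features(updates=batch)
```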
05-07-2024 09:13 AM
POST
Is the water rights data a single feature service or multiple feature services? It kind of sounds like, instead of labeling the BLM Survey layer, you build a label expression that spits out one label from multiple columns in your water rights FS. This is some lazy pseudo-code, but something like:

var txt = ""
if ($feature.RIGHTS1 != "") { txt = txt + $feature.RIGHTS1 + TextFormatting.NewLine }
if ($feature.RIGHTS2 != "") { txt = txt + $feature.RIGHTS2 + TextFormatting.NewLine }
if ($feature.RIGHTS3 != "") { txt = txt + $feature.RIGHTS3 + TextFormatting.NewLine }
return {
type : 'text',
text : txt
}
03-25-2024 10:39 AM
POST
There's a more in-depth discussion here, with example code that seems (?) like it's what you're looking to do: arcade-expressions/dashboard_data/SplitCategories(PieChart).md at master · Esri/arcade-expressions · GitHub
03-25-2024 10:09 AM
POST
Get rid of your first 3 variables and just use PoleFeatures. f is just a variable that holds the data from each feature within PoleFeatures. ${f.PoleID} should just be ${f.ID}, sorry. It's hard to work with data I can't see. If you put a Console(f) above the if statement, it will print out each feature within PoleFeatures as it iterates. You're sure the XYjoin field will match between the two data sets, yeah?
02-21-2024 03:23 PM
POST
Knowing me, I overcomplicated it. What about something simpler, like:

var pole_features = FeatureSetByName($map, "Pole", ['XYjoin','PoleID'])
for (var f in pole_features) {
if (f.XYjoin == $feature.XYjoin) {
return {
type : 'text',
text : `Pole ID is : ${f.PoleID}`
}
}
}
02-21-2024 05:06 AM