POST
If Azure Functions is anything like AWS Lambda, my guess is you are hitting some kind of size limit with the newer version. This used to happen to me all the time with Lambda functions, and I eventually opted to just write custom code to keep things light. Per this issue, the size limit for zip deployments appears to be 2 GB: Is there a way to increase the file size limit of zip deployment for azure web app? Currently the limit based on the documentation is 2048MB (https://docs.microsoft.com/en-us/azure/app-service/deploy-zip). · Issue #49629 · MicrosoftDocs/azure-docs · GitHub. Do you know if you are over the limit?
yesterday

POST
To me, that sounds like you're heading into webhook / custom-solution territory. Not impossible to do, but if you are really wondering when and where (in a geographic sense) data was collected, this will require some work. At least in my case, /app/fieldmaps/maps seems to return the maps I own that could support Field Maps. I, however, am not using any of the results for field collection, so I don't think you can use this as an approximation. In fact, that endpoint actually just returns a search result that uses this query string:

orgid:<orgId> AND NOT typekeywords:FieldMapsDisabled AND type:"Web Map" (owner:<username>) AND NOT typekeywords:"Workforce Project" AND NOT typekeywords:"Workforce Dispatcher" AND NOT typekeywords:"Workforce Worker"
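If it helps, you can rebuild that same query string yourself and pass it to an ordinary content search. A sketch only: the helper name is mine, and the org id and username below are placeholders:

```python
def field_maps_search_query(org_id, username):
    """Reconstruct the query string the /app/fieldmaps/maps endpoint uses."""
    return (
        f"orgid:{org_id} AND NOT typekeywords:FieldMapsDisabled "
        f'AND type:"Web Map" (owner:{username}) '
        f'AND NOT typekeywords:"Workforce Project" '
        f'AND NOT typekeywords:"Workforce Dispatcher" '
        f'AND NOT typekeywords:"Workforce Worker"'
    )

query = field_maps_search_query("AbC123", "jdoe")  # placeholder org id / username
# gis.content.search(query, max_items=100)  # requires a live GIS connection
```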
yesterday

POST
I think this might be a case where it would be preferable to tag/categorize your production collection maps in a meaningful way so as to distinguish them from Web Maps with other uses (for example, field maps used only for testing). This would make for better, more reliable results with no false positives.
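As a sketch of what that buys you: once production maps carry an agreed tag, filtering becomes a one-liner. The tag name and the dict-shaped items below are hypothetical stand-ins for real portal items:

```python
PRODUCTION_TAG = "production-collection"  # hypothetical tag convention

def production_maps(items, tag=PRODUCTION_TAG):
    """Keep only the maps carrying the agreed production tag."""
    return [m for m in items if tag in m.get("tags", [])]

items = [
    {"title": "Field survey", "tags": ["production-collection"]},
    {"title": "Scratch map", "tags": ["test"]},
]
# production_maps(items) keeps only the "Field survey" entry
```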
Tuesday

POST
Any chance that layer has GlobalIDs? If so, you can try setting the "use_global_ids" parameter of edit_features to True. My only other thought is that there may be a problem with the auth.
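For reference, a sketch of what an edit payload keyed by GlobalID looks like. The GUID, field names, and values below are made up, and the live call is commented out since it needs a real layer:

```python
def update_with_global_id(global_id, attributes):
    """Build one update feature matched by GlobalID rather than ObjectID."""
    feature = {"attributes": dict(attributes)}
    feature["attributes"]["GlobalID"] = global_id
    return feature

updates = [update_with_global_id(
    "9f0e1d2c-3b4a-5968-7786-a5b4c3d2e1f0",  # made-up GUID
    {"STATUS": "Complete"},                  # made-up field/value
)]
# fl.edit_features(updates=updates, use_global_ids=True)  # needs a live layer
```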
POST
Hey, have you tried using the proxy parameter? I've heard mixed reviews, but hopefully your proxy is more forgiving:

# Usage Example 9: Using a Proxy
proxy = {
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:1080',
}
gis = GIS(proxy=proxy)
a week ago

POST
I think one way you might be able to do this is:

for webmap in webmaps_contents:
    data = webmap.get_data()
    authoring_app = data["authoringApp"]
    if authoring_app == "ArcGISMapViewer":
        # Created with the new Map Viewer
        pass
    else:
        # Classic webmap (authoring_app is "WebMapViewer")
        pass

At least, based on my observations the authoringApp value is "ArcGISMapViewer" for the new Map Viewer and "WebMapViewer" for the classic Map Viewer.
a week ago

POST
Oh, I'm sorry, I think I misread what you were saying. I see you are reading CSV files. To convert the process, you could load the CSV files into a spatially enabled dataframe (assuming there is coordinate information of some sort) and then use sdf.spatial.to_featureset() - this would be the thing you use for your adds. Here's a quick example:

import pandas as pd
from arcgis.gis import GIS
from arcgis.features import GeoAccessor, FeatureLayer

url = "https://services3.arcgis.com/rtycvyukgbyuklbnhjkb/arcgis/rest/services/test/FeatureServer/0"
csv_path = r"C:\Users\you\Documents\xy.csv"

gis = GIS("https://www.arcgis.com", "user", "pass")
fl = FeatureLayer(url, gis)

df = pd.read_csv(csv_path)
sdf = GeoAccessor.from_xy(df=df, x_column="LONGITUDE", y_column="LATITUDE", sr=4326)
fs = sdf.spatial.to_featureset()

fl.manager.truncate()
fl.edit_features(adds=fs)
2 weeks ago

POST
So the schema is not changing? If not, I would just simplify this to a truncate/append operation. I suspect the problem is the JSON, but without an example it's hard to say. There are tons of truncate/append examples if you search the board.
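For what a truncate/load round trip looks like in outline: the row-to-feature conversion below is pure and runs as-is, while the two service calls need a live layer, so they are commented. The field names are assumptions:

```python
def rows_to_adds(rows, x_field="LONGITUDE", y_field="LATITUDE"):
    """Convert plain dict rows into the adds payload edit_features expects."""
    adds = []
    for row in rows:
        attrs = {k: v for k, v in row.items() if k not in (x_field, y_field)}
        adds.append({
            "attributes": attrs,
            "geometry": {
                "x": row[x_field],
                "y": row[y_field],
                "spatialReference": {"wkid": 4326},
            },
        })
    return adds

adds = rows_to_adds([{"LONGITUDE": -118.24, "LATITUDE": 34.05, "NAME": "Site A"}])
# fl.manager.truncate()        # wipe existing rows (schema untouched)
# fl.edit_features(adds=adds)  # load the fresh data
```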
2 weeks ago

POST
You can accomplish this, but it requires a bit of setup and an understanding of how things work. Let me give you an example. Suppose I want to join Feature Layer A to Table B. I know the field I want to join on is named "Activity," and I want to perform a Left Join. Begin with the necessary imports and initial setup:

from arcgis.gis import GIS
from arcgis.features import FeatureLayer, Table, FeatureLayerCollection
gis = GIS("https://www.arcgis.com", username="hari", password="seldon")
fl_url = "https://services3.arcgis.com/xxxxxxxxxxx/arcgis/rest/services/pntLayer/FeatureServer/0"
tbl_url = "https://services3.arcgis.com/xxxxxxxxxxx/arcgis/rest/services/TableToJoin/FeatureServer/0"
fl = FeatureLayer(fl_url, gis)
tbl = Table(tbl_url, gis)

Next, you'll want to add an index on the join field to both services:

index_to_add = {"indexes": [
{
"name": "Activity_Index",
"fields": "Activity",
"isUnique": False,
"isAscending": True,
"description": "Activity_Index"
}
]}
fl.manager.add_to_definition(index_to_add)
tbl.manager.add_to_definition(index_to_add)

Create a blank view service and initialize a FeatureLayerCollection from the result for later use:

view_service = gis.content.create_service(name="joined_view", is_view=True)
view_flc = FeatureLayerCollection.fromitem(view_service)

Now, you need to think about which fields you want from the source feature layer and the source table:

sourceFeatureLayerFields = [
{
"name": "Activity",
"alias": "Activity",
"source": "Activity"
},
{
"name": "Description",
"alias": "Description",
"source": "Description"
},
{
"name": "StartDate",
"alias": "StartDate",
"source": "StartDate"
},
{
"name": "EndDate",
"alias": "EndDate",
"source": "EndDate"
}
]
sourceTableFields = [
{
"name": "Note",
"alias": "Note",
"source": "Note"
}
]

This is where the magic happens. We use some of the info above to create a definition we can use to end up with our desired joined view:

field_to_join_on = "Activity"
view_lyr_name = "sampleJoinedView"
definition_to_add = {
"layers": [
{
"name": view_lyr_name,
"displayField": "",
"description": "AttributeJoin",
"adminLayerInfo": {
"viewLayerDefinition": {
"table": {
"name": "sampleJoinedView",
"sourceServiceName": fl.properties.name,
"sourceLayerId": 0,
"sourceLayerFields": sourceFeatureLayerFields,
"relatedTables": [
{
"name": "testjoin",
"sourceServiceName": tbl.properties.name,
"sourceLayerId": 0,
"sourceLayerFields": sourceTableFields,
"type": "LEFT",
"parentKeyFields": [
field_to_join_on
],
"keyFields": [
field_to_join_on
],
"topFilter": {
"groupByFields": field_to_join_on,
"orderByFields": "OBJECTID ASC",
"topCount": 1
}
}
],
"materialized": False
}
},
"geometryField": {
"name": f"{view_lyr_name}.Shape"
}
}
}
]
}
view_flc.manager.add_to_definition(definition_to_add)

Put it all together and we have:

from arcgis.gis import GIS
from arcgis.features import FeatureLayer, Table, FeatureLayerCollection
gis = GIS("https://www.arcgis.com", username="hari", password="seldon")
fl_url = "https://services3.arcgis.com/xxxxxxxxxxx/arcgis/rest/services/pntLayer/FeatureServer/0"
tbl_url = "https://services3.arcgis.com/xxxxxxxxxxx/arcgis/rest/services/TableToJoin/FeatureServer/0"
fl = FeatureLayer(fl_url, gis)
tbl = Table(tbl_url, gis)
index_to_add = {"indexes":[
{
"name": "Activity_Index",
"fields": "Activity",
"isUnique": False,
"isAscending": True,
"description": "Activity_Index"
}
]}
fl.manager.add_to_definition(index_to_add)
tbl.manager.add_to_definition(index_to_add)
view_service = gis.content.create_service(name="joined_view", is_view=True)
view_flc = FeatureLayerCollection.fromitem(view_service)
sourceFeatureLayerFields = [
{
"name": "Activity",
"alias": "Activity",
"source": "Activity"
},
{
"name": "Description",
"alias": "Description",
"source": "Description"
},
{
"name": "StartDate",
"alias": "StartDate",
"source": "StartDate"
},
{
"name": "EndDate",
"alias": "EndDate",
"source": "EndDate"
}
]
sourceTableFields = [
{
"name": "Note",
"alias": "Note",
"source": "Note"
}
]
field_to_join_on = "Activity"
view_lyr_name = "sampleJoinedView"
definition_to_add = {
"layers": [
{
"name": view_lyr_name,
"displayField": "",
"description": "AttributeJoin",
"adminLayerInfo": {
"viewLayerDefinition": {
"table": {
"name": "sampleJoinedView",
"sourceServiceName": fl.properties.name,
"sourceLayerId": 0,
"sourceLayerFields": sourceFeatureLayerFields,
"relatedTables": [
{
"name": "testjoin",
"sourceServiceName": tbl.properties.name,
"sourceLayerId": 0,
"sourceLayerFields": sourceTableFields,
"type": "LEFT",
"parentKeyFields": [
field_to_join_on
],
"keyFields": [
field_to_join_on
],
"topFilter": {
"groupByFields": field_to_join_on,
"orderByFields": "OBJECTID ASC",
"topCount": 1
}
}
],
"materialized": False
}
},
"geometryField": {
"name": f"{view_lyr_name}.Shape"
}
}
}
]
}
view_flc.manager.add_to_definition(definition_to_add)

There are other options you can play around with in the view layer definition, but this should give you a good starting point. To get a better idea of what your definition needs to look like, you could use the UI to create the exact view you want and capture the network traffic right after you click the button to create the joined view. The call you need to pay most attention to is "addToDefinition". Hope this helps you and a lot of other people.
3 weeks ago

POST
Hey, my initial guess from the KeyError is that there's a missing column header. I noticed you are exporting to CSV without taking out the index column. Can you try changing this line

trendstable.to_csv(outputCSVFile, sep='\t', encoding='utf-8')

to

trendstable.to_csv(outputCSVFile, sep='\t', encoding='utf-8', index=False)
3 weeks ago

POST
Sorry, I had to reread what you did and said earlier. I am curious what happens if you split the df into chunks. There are a number of ways to do this, but one fairly simple way might be:

n = 10
df_list = [df[i:i+n] for i in range(0, len(df), n)]  # break up into groups of ten
for i, chunk in enumerate(df_list):
    start_idx = i * n
    print(f"Records {start_idx} through {start_idx + len(chunk) - 1}")
    chunk.spatial.to_featureset()

I have seen that method fail because of illegal values, but I have never seen it behave in the manner you describe.
3 weeks ago

POST
The way to do this is to use Create Replica with returnAttachments set to false and dataFormat set to "filegdb" - in the Python API, this corresponds to the SyncManager's create method: arcgis.features.managers module | ArcGIS API for Python. Of course, in order to do this you will need to have the sync capability enabled (which I assume is the case if you are doing offline field collection). You can find sample code that shows how to create a replica here: Sync overview | ArcGIS API for Python
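The REST-level options behind that call look roughly like this. A sketch only: the replica name, layer ids, and output path are placeholders, and the SyncManager call in the comment is my best reading of the method linked above, so check it against the docs before relying on it:

```python
def replica_params(layers, data_format="filegdb", return_attachments=False):
    """Assemble the key createReplica options for a one-way file gdb extract."""
    return {
        "replicaName": "offline_extract",                # placeholder name
        "layers": ",".join(str(lyr) for lyr in layers),  # e.g. "0,1"
        "returnAttachments": return_attachments,
        "dataFormat": data_format,
        "syncModel": "none",  # plain extract, no sync registration
    }

params = replica_params([0, 1])
# Rough SyncManager equivalent (needs a live, sync-enabled service):
# flc.replicas.create(replica_name="offline_extract", layers="0,1",
#                     return_attachments=False, data_format="filegdb",
#                     out_path=r"C:\temp")
```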
02-23-2024 07:25 AM

POST
Hi @JillianStanford , It looks like that property isn't there by default. You can try hydrating each item object ahead of time, as this seems to set the required size property. So, you would do something like this first:

for item in items:
    item._hydrate()
02-20-2024 03:40 PM

POST
I'm not sure why this doesn't work for you - I can reproduce your original issue, and the fix works on my end. You can also try:

df = pd.DataFrame([{"id": item.id, "size": item.size, "type": item.type} for item in items])

This also works for me. My version of the Python API is 2.3.0; the version of pandas is 2.0.2.
02-15-2024 01:27 PM

POST
items is a list of objects - sometimes this will work, sometimes not. The dictionary representation of the data is more reliable. There are many ways to fix this, but the most straightforward is probably:

df = pd.DataFrame([vars(item) for item in items], columns=["id", "size", "type"])

vars() is just a handy function that returns the __dict__ attribute of each item object.
02-15-2024 12:42 PM