POST
Hi, It seems like your problem is related to create_engine. pyodbc is used as the default DBAPI for the mssql dialect, but you may also use pymssql. You can learn more about this here: https://docs.sqlalchemy.org/en/20/core/engines.html I would explain to your team that this is just how things work: you need some kind of DBAPI, and here you have two options.
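For reference, a minimal sketch of the two connection styles (assuming a SQL Server database; the server name, database name, driver, and credentials below are placeholders):

from sqlalchemy import create_engine

# Option 1: pyodbc, the default DBAPI for the mssql dialect
engine = create_engine(
    "mssql+pyodbc://user:password@myserver/mydb?driver=ODBC+Driver+17+for+SQL+Server"
)

# Option 2: pymssql
engine = create_engine("mssql+pymssql://user:password@myserver/mydb")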
06-26-2023 03:09 PM | 0 | 0 | 15723
POST
That doesn't sound expected. Does the same thing happen if you try to add a record manually via the UI/Pro? Maybe you're missing editing privileges? I doubt the process is taking so long that it's exceeding the token expiration.
06-14-2023 03:32 PM | 0 | 0 | 1353
POST
Hi @TommyTaylorDev , Since you said the data is being used in an app with filters, I'm assuming the schema doesn't change all that often, if at all? If that's the case, you could do what a lot of people do and just truncate/append new records. This is a good option if the new data coming in is just an updated version of what's already published, and I'd say it causes less disruption to your end users than an overwrite. There are a few different examples of how to do this if you just search "truncate append" (a rough sketch is below). Here's a script you can start with: Overwrite ArcGIS Online Feature Service using Trun... - Esri Community
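A minimal sketch of the truncate/append pattern, assuming a hosted feature layer whose schema matches the refreshed data; the item ID, file path, and credentials are placeholders:

import pandas as pd
from arcgis import GIS
from arcgis.features import GeoAccessor

gis = GIS("https://www.arcgis.com", "username", "password")

# Hypothetical hosted feature layer item
item = gis.content.get("your_item_id")
fl = item.layers[0]

# Remove the existing records but keep the schema, item ID, and service URL intact
fl.manager.truncate()

# Load the refreshed data and append it as new features
sdf = pd.DataFrame.spatial.from_featureclass(r"C:\data\updated_data.shp")  # placeholder path
fl.edit_features(adds=sdf.spatial.to_featureset())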
06-13-2023 11:09 AM | 0 | 2 | 1375
POST
Hi @CarlSunderman , I read the from_parquet documentation and I believe it may need to be updated. It states: "if no geometry columns are read, this will raise a ValueError - you should use the pandas read_parquet method instead."

My reading of this is that you should use the standard pandas.read_parquet method, completely separate from GeoAccessor. This makes sense, because if you have no geometry column(s) in your data, then you are working with non-spatial data.

Of course, later on the user is told to do this: df = pd.DataFrame.spatial.read_parquet("data.parquet") This seems to contradict the previous statement. If you have spatial data, then you should be using pd.DataFrame.spatial.from_parquet. So, to read non-spatial parquet data, use: pd.read_parquet(your_parquet_file_path)

Hope this helps clear up some confusion. I think submitting feedback on the documentation is in order.
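To summarize the distinction as code (a sketch only; "data.parquet" is a placeholder file name):

import pandas as pd
from arcgis.features import GeoAccessor  # registers the .spatial accessor

# Non-spatial parquet: plain pandas
df = pd.read_parquet("data.parquet")

# Parquet with geometry column(s): the GeoAccessor method
sedf = pd.DataFrame.spatial.from_parquet("data.parquet")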
05-16-2023 01:39 PM | 0 | 0 | 1238
POST
I've run into the Lambda deployment size limitation in the past, so I understand your predicament. What functionality do you need from the Python API? On at least one occasion, it was easier to just write a custom class for my Lambdas that handled the essentials. I was doing simple stuff like querying a service to generate a report.
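For what it's worth, a rough sketch of that kind of lightweight helper, using only requests against the REST API instead of the full Python API; the portal URL, credentials, and layer URL are placeholders, and this assumes a built-in/ArcGIS Online account for generateToken:

import requests

class LightweightGIS:
    """Just enough to authenticate and query a feature layer from a Lambda."""

    def __init__(self, username, password, portal="https://www.arcgis.com"):
        # generateToken is the standard ArcGIS REST endpoint for built-in accounts
        resp = requests.post(
            f"{portal}/sharing/rest/generateToken",
            data={"username": username, "password": password, "referer": portal, "f": "json"},
        )
        self.token = resp.json()["token"]

    def query(self, layer_url, where="1=1", out_fields="*"):
        # Standard feature service query operation
        resp = requests.post(
            f"{layer_url}/query",
            data={"where": where, "outFields": out_fields, "f": "json", "token": self.token},
        )
        return resp.json().get("features", [])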
05-11-2023 01:45 PM | 0 | 4 | 1661
POST
Yep, I used to do this all the time. As long as nothing being edited is a non-hosted (ArcGIS Server) service, you will end up with copies and any edits to the data will be completely separate from the source.
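If the workflow here is copying hosted content between organizations, a sketch of what that might look like with clone_items (this is an assumption about the thread topic; the org URLs, credentials, and item ID are placeholders, and clone_items is only one way to do it):

from arcgis import GIS

source = GIS("https://source-org.maps.arcgis.com", "username", "password")
target = GIS("https://target-org.maps.arcgis.com", "username", "password")

# Hypothetical hosted feature layer item in the source org
item = source.content.get("your_item_id")

# copy_data=True creates an independent copy of the underlying data in the target org
cloned = target.content.clone_items([item], copy_data=True)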
03-21-2023 07:10 AM | 2 | 0 | 964
POST
Hey, Not sure if this will help you, but here's a simple example that works for me to update an existing point:

from arcgis import GIS
from arcgis.geometry import Point
from arcgis.features import FeatureLayer

gis = GIS("https://www.arcgis.com", "username", "password")
url = "https://server.com/arcgis/rest/services/ex/FeatureServer/0"
fl = FeatureLayer(url, gis)

# Query the layer as a spatially enabled DataFrame and isolate the feature to update
sdf = fl.query(as_df=True)
example_df = sdf.loc[sdf.OBJECTID == 1, ["OBJECTID", "SHAPE"]].copy().reset_index(drop=True)

# Assign the new geometry, then push the change back to the service
example_df.at[0, "SHAPE"] = Point({"x": -118.15, "y": 33.80, "spatialReference": {"wkid": 4326}})
update_fs = example_df.spatial.to_featureset()
fl.edit_features(updates=update_fs)
03-15-2023 02:35 PM | 0 | 1 | 2850
POST
Hi, Using the sample you provided, I was able to parse the JSON correctly into an SEDF like so:

import pandas as pd
from arcgis.features import GeoAccessor, GeoSeriesAccessor

# "json" here is the parsed dict from the sample you posted
parsed_json = [
    {
        **feature["properties"],
        "geom": {
            "rings": feature["geometry"]["coordinates"],
            "spatialReference": {"wkid": 4326}
        }
    }
    for feature in json["features"]
]

df = pd.DataFrame.from_dict(parsed_json)
sedf = pd.DataFrame.spatial.from_df(df, geometry_column="geom")

Hope this helps!
02-27-2023 02:58 PM | 0 | 0 | 2674
POST
@SupriyaK @WilliamKyngesburye Here's what I would do next: go to the Contents page where you're assigning Categories to your items, open your browser's developer tools and switch to the Network tab, clear any existing traffic, make a nominal change to one of your items (add/remove a category), and see what the traffic reports back after you click Save. You'll be looking for an updateItems request - check what its request body is. It should look like this:

items: [{"abcdefghijklmnop123456789":{"categories":["/Categories/test"]}}]

There's probably some encoding to account for if your category has special characters in it.
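If you end up wanting to script that same call once you've confirmed the body, a rough sketch with requests against the updateItems endpoint; the org URL, username, token, item ID, and category below are placeholders:

import json
import requests

org = "https://www.arcgis.com"
username = "your_username"
token = "your_token"  # e.g. from generateToken or an existing session
item_id = "abcdefghijklmnop123456789"

payload = {
    "items": json.dumps([{item_id: {"categories": ["/Categories/test"]}}]),
    "f": "json",
    "token": token,
}
resp = requests.post(f"{org}/sharing/rest/content/users/{username}/updateItems", data=payload)
print(resp.json())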
01-06-2023 10:19 AM | 1 | 0 | 2431
POST
Hey, you were pretty close! categories takes a string or a list, so from your example it would be either:

item_search = gis.content.search(query="", categories=["/Categories/category_1", "/Categories/category_2"])

OR

item_search = gis.content.search(query="", categories="/Categories/test")

So, the key is just to make sure you're writing "/Categories/{your_category_name}". Hope this helps!
12-23-2022 06:11 AM | 1 | 0 | 2455
POST
Do you mind providing an example of what you want to end up with? I want to make sure I'm understanding your needs. This was my understanding of what you're asking for:

data = {'owner': {'email': 'email@address'}, 'users': [{'username': 'FrankT', 'memberType': 'member'}, {'username': 'Chris', 'memberType': 'admin'}, {'username': 'Test', 'memberType': 'admin'}]}

# Pair the owner's email with each admin user
email = data['owner']['email']
users = data['users']
admins = [{'email': email, **user} for user in users if user['memberType'] == 'admin']
12-07-2022 02:23 PM | 2 | 2 | 794
POST
I've used the Python API in a Lambda before and ran into the bloat issue with dependencies that you hint at. In my case, I was able to get it under the size limit, but not without some work. My recommendation would be to take a moment to consider what functionality you actually need from the API and any other 3rd-party libraries. If all you're using the API for is 2-3 things, it might make sense just to code those parts yourself and keep things lean. As far as setup goes, I'd recommend trying out the Serverless Framework - it makes things much easier and leaves you with a template you can use for future Lambdas.
11-14-2022 06:08 AM | 0 | 0 | 1313
POST
Hi, There's actually a user_type parameter, so it's as easy as this:

gis.users.search(user_type='creatorUT')

Hope this helps!
10-18-2022 06:50 AM | 0 | 1 | 1141
POST
I've gotten this to work like so: Edit the VS Code settings.json file, located here on Windows:

C:\Users\[YOUR_USER]\AppData\Roaming\Code\User\settings.json

From there, update the line with "python.pythonPath" to:

"python.pythonPath": "C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\Scripts\\propy.bat"

If for some reason that does not work, I have gotten it to work by pointing to the desired cloned environment directly, like this:

"python.pythonPath": "C:\\Users\\[YOUR_USER]\\AppData\\Local\\ESRI\\conda\\envs\\[NAME_OF_ENV]\\python.exe"

Hope this helps!
10-11-2022 10:47 AM | 0 | 0 | 19655
POST
I have done this, but on the Feature Service side (for which I made use of the ArcGIS API for Python). In my case, I was re-attaching files on a Hosted Feature Service, so that was the most feasible option. You've probably already seen this, but to do it locally you would use this function: Add Attachments (Data Management)—ArcGIS Pro | Documentation So it seems like you'd need a match table - I'm guessing you could programmatically create a CSV with summary information on the directory containing the images, create a TableView from that, and that would basically get you most of the way there.
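A rough sketch of the feature-service-side approach, assuming each image file is named after the OBJECTID it belongs to; the layer URL, credentials, and folder path are placeholders:

import os
from arcgis import GIS
from arcgis.features import FeatureLayer

gis = GIS("https://www.arcgis.com", "username", "password")
fl = FeatureLayer("https://server.com/arcgis/rest/services/ex/FeatureServer/0", gis)

image_dir = r"C:\data\photos"
for file_name in os.listdir(image_dir):
    # e.g. "17.jpg" attaches to OBJECTID 17 - adjust the matching logic to your naming scheme
    oid = int(os.path.splitext(file_name)[0])
    fl.attachments.add(oid, os.path.join(image_dir, file_name))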
09-23-2022 09:13 AM | 1 | 0 | 1539