POST
Interesting. @Anonymous User, which version of Chrome are you on that resolved the issue? I still experience it in Chrome Version 91.0.4472.106 (Official Build) (64-bit).
Posted 06-30-2021 09:36 AM · Kudos 0 · Replies 0 · Views 1334
POST
Update: This does work for me in the Chrome-based MS Edge - v91.0.864.59 (Official build) (64-bit).
Posted 06-30-2021 09:30 AM · Kudos 0 · Replies 0 · Views 1341
POST
Same. Enterprise 10.8.1. Anyone have an update? @KellyGerrow or @jill_es?
Posted 06-30-2021 09:16 AM · Kudos 0 · Replies 1 · Views 1341
POST
Still struggling a bit with the FeatureLayerManager class and when to use add_to_definition, update_definition, or delete_from_definition. This time, I am trying to enable editing and editor tracking on a hosted feature service. I started by opening the Chrome developer tools Network monitor. Then, on the Feature Layer settings page, I manually checked the options 'Enable editing', 'Keep track of created and updated features', and 'Keep track of who created and last updated features', and hit Save. I saw the console send an updateDefinition request, so I copied the form data and popped it into the update_definition method in my Python notebook:

```python
# Convert to a Spatially Enabled DataFrame
sdf = pd.DataFrame.spatial.from_xy(df, "lon_y", "lat_x", sr=4326)

print('Publishing layer...')
item = sdf.spatial.to_featurelayer('myFeatureLayer', tags=["test"], folder="My Folder")

print("Deleting interim fGDB...")
interimGDB = gis.content.get(gis.content.search(query="title:myFeatureLayer", item_type="File Geodatabase")[0].id).delete()

featureLayer = gis.content.get(item.id).layers[0]

print("Enabling Change Tracking...")
resp = featureLayer.manager.update_definition({
    "hasStaticData": False,
    "capabilities": "Query, Editing, Create, Update, Delete, ChangeTracking",
    "editorTrackingInfo": {
        "enableEditorTracking": True,
        "enableOwnershipAccessControl": False,
        "allowOthersToUpdate": True,
        "allowOthersToDelete": True,
        "allowOthersToQuery": True,
        "allowAnonymousToUpdate": True,
        "allowAnonymousToDelete": True
    }
})
print(resp)
print(featureLayer.properties)
```

Reviewing the resulting properties/definition, however, I noticed that nothing in the definition was actually updated, other than 'hasStaticData', which did update. 'ChangeTracking' is not listed as a capability, and the editorTrackingInfo object is not present in the updated definition.
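As a quick sanity check, one can diff the requested definition against the layer's refreshed properties to see exactly which keys the service silently ignored. This is only a sketch, not Esri's documented workflow, and `missing_updates` is a hypothetical helper name:

```python
def missing_updates(requested: dict, actual: dict) -> list:
    """Return the keys from the requested definition whose values are
    absent or different in the service's actual (refreshed) definition."""
    return [key for key in requested if actual.get(key) != requested[key]]

# Usage against a live layer (assumes `featureLayer` from the snippet above):
# actual = dict(featureLayer.manager.properties)
# print(missing_updates(requested_definition, actual))
```

Printing that list right after the update_definition call makes it obvious whether 'capabilities' and 'editorTrackingInfo' were dropped by the service rather than by the client.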
Posted 06-23-2021 07:28 AM · Kudos 0 · Replies 2 · Views 2179
IDEA
The Spatially Enabled DataFrame has a .to_featurelayer() method that allows users to easily publish a feature service from a Spatially Enabled DataFrame. It's a great, long-desired capability with really only one shortcoming that I can find: .to_featurelayer() has the unfortunate behavior of sanitizing column names, even if the column names are valid to begin with. The result is that it changes column names from their original case to snake_case. For example, 'storeName' gets changed to 'store_name' when you use .to_featurelayer() to publish the data frame as a feature layer, even though there is absolutely nothing invalid about a feature layer with a field named 'storeName'. Interestingly, the .to_featureclass() method exposes a sanitize_columns parameter, which defaults to True, meaning you can set it to False to avoid this behavior; for whatever reason, this parameter wasn't exposed on .to_featurelayer(). Please expose a 'sanitize_columns' parameter on the SEDF's .to_featurelayer() method so that we can disable this feature if we so desire. Like the SEDF's .to_featureclass() method, the 'sanitize_columns' parameter can default to True to avoid an unexpected change in behavior for users who might already be relying on it.
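Until such a parameter exists, one workaround sketch is to record a mapping from sanitized names back to the originals before publishing, so fields can be renamed back afterwards. This assumes the camelCase-to-snake_case behavior described above; both helper names here are hypothetical:

```python
import re

def snake_case(name: str) -> str:
    """Approximate the sanitizer's camelCase -> snake_case conversion."""
    return re.sub(r'(?<=[a-z0-9])([A-Z])', r'_\1', name).lower()

def restore_map(original_names):
    """Map each sanitized field name back to its original spelling."""
    return {snake_case(n): n for n in original_names}

print(restore_map(["storeName", "city"]))
# → {'store_name': 'storeName', 'city': 'city'}
```

With that mapping in hand, the published layer's fields can be renamed back via the feature layer manager after the fact.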
Posted 06-15-2021 07:54 AM · Kudos 5 · Replies 3 · Views 1727
POST
Have been tinkering with this for the better part of a day and can't figure it out. I'm calling an API to get a JSON response containing store locations, which include the coordinates, then pushing that into a Feature Layer. Esri's implementation of the Spatially Enabled DataFrame's .to_featurelayer has the unfortunate behavior of sanitizing column names, even if the column names are valid to begin with. The result is that it's changing my column names from their original case to snake_case. For example, 'storeName' gets changed to 'store_name' when I use .to_featurelayer to publish the data frame as a feature layer, even though there is absolutely nothing invalid about a feature layer with a field named 'storeName'. Unlike the .to_featureclass method, where sanitize_columns is exposed as a parameter and defaults to True (meaning you can set it to False to avoid this behavior), sanitize_columns defaults to True in .to_featurelayer and is not exposed as a parameter, so there is no way to avoid it. As a result, I'm trying to go back and update the column names using the update_definition method on the feature layer manager, but I keep getting one of two errors. The first approach I took:

```python
from arcgis.features import FeatureLayer

featureLayer = gis.content.get("c952e9e257bd4fc887be2934291548cb")
lyr = featureLayer.layers[0]
lyr.properties
originalDefinition = lyr.properties
# Get the fields array
originalFields = originalDefinition["fields"]
newFields = originalFields.copy()
for f in newFields:
    if "_" in f["name"]:
        newFieldName = f["name"].split("_")[0].lower() + f["name"].split("_")[1].title()
        f["name"] = newFieldName
#newFields
print('"fields": ' + json.dumps(newFields))
lyr.manager.update_definition('"fields": ' + json.dumps(newFields))
```

Fails with:
Exception Traceback (most recent call last)
<ipython-input-28-34cef8ca738c> in <module>
13 #newFields
14 print('"fields": ' + json.dumps(newFields))
---> 15 lyr.manager.update_definition('"fields": ' + json.dumps(newFields))
/opt/conda/lib/python3.7/site-packages/arcgis/features/managers.py in update_definition(self, json_dict)
2002 u_url = self._url + "/updateDefinition"
2003
-> 2004 res = self._con.post(u_url, params)
2005 self.refresh()
2006 return res
/opt/conda/lib/python3.7/site-packages/arcgis/gis/_impl/_con/_connection.py in post(self, path, params, files, **kwargs)
718 file_name=file_name,
719 try_json=try_json,
--> 720 force_bytes=kwargs.pop('force_bytes', False))
721 #----------------------------------------------------------------------
722 def put(self, url, params=None, files=None, **kwargs):
/opt/conda/lib/python3.7/site-packages/arcgis/gis/_impl/_con/_connection.py in _handle_response(self, resp, file_name, out_path, try_json, force_bytes)
512 return data
513 errorcode = data['error']['code'] if 'code' in data['error'] else 0
--> 514 self._handle_json_error(data['error'], errorcode)
515 return data
516 else:
/opt/conda/lib/python3.7/site-packages/arcgis/gis/_impl/_con/_connection.py in _handle_json_error(self, error, errorcode)
534
535 errormessage = errormessage + "\n(Error Code: " + str(errorcode) +")"
--> 536 raise Exception(errormessage)
537 #----------------------------------------------------------------------
538 def post(self,
Exception: Unable to update feature service layer definition.
Object reference not set to an instance of an object.
(Error Code: 400)

Kinda makes sense. Maybe I need to instantiate a FeatureLayer object on the actual layer... Let's try that:

```python
featureLayer = gis.content.get("c952e9e257bd4fc887be2934291548cb")
lyr = FeatureLayer(featureLayer.layers[0])
originalDefinition = lyr.properties
# Get the fields array
originalFields = originalDefinition["fields"]
newFields = originalFields.copy()
for f in newFields:
    if "_" in f["name"]:
        newFieldName = f["name"].split("_")[0].lower() + f["name"].split("_")[1].title()
        f["name"] = newFieldName
newFields
print('"fields": ' + json.dumps(newFields))
#lyr.manager.update_definition('"fields": ' + json.dumps(newFields))
```

But it fails with:
Exception Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/arcgis/gis/__init__.py in _hydrate(self)
11481 if isinstance(self._con, Connection):
> 11482 self._lazy_token = self._con.generate_portal_server_token(serverUrl=self._url)
11483 else:
/opt/conda/lib/python3.7/site-packages/arcgis/gis/_impl/_con/_connection.py in generate_portal_server_token(self, serverUrl, expiration)
1313 resp = self.post(path=self._token_url, postdata=postdata,
-> 1314 ssl=True, add_token=False)
1315 if isinstance(resp, dict) and resp:
/opt/conda/lib/python3.7/site-packages/arcgis/gis/_impl/_con/_connection.py in post(self, path, params, files, **kwargs)
719 try_json=try_json,
--> 720 force_bytes=kwargs.pop('force_bytes', False))
721 #----------------------------------------------------------------------
/opt/conda/lib/python3.7/site-packages/arcgis/gis/_impl/_con/_connection.py in _handle_response(self, resp, file_name, out_path, try_json, force_bytes)
513 errorcode = data['error']['code'] if 'code' in data['error'] else 0
--> 514 self._handle_json_error(data['error'], errorcode)
515 return data
/opt/conda/lib/python3.7/site-packages/arcgis/gis/_impl/_con/_connection.py in _handle_json_error(self, error, errorcode)
535 errormessage = errormessage + "\n(Error Code: " + str(errorcode) +")"
--> 536 raise Exception(errormessage)
537 #----------------------------------------------------------------------
Exception: Unable to generate token.
'username' must be specified.
'password' must be specified.
'referer' must be specified.
(Error Code: 400)
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-27-77bf06c45026> in <module>
2 featureLayer =gis.content.get("c952e9e257bd4fc887be2934291548cb")
3 lyr = FeatureLayer(featureLayer.layers[0], gis=gis)
----> 4 lyr.properties
5 originalDefinition = lyr.properties
6 # Get the fields array
/opt/conda/lib/python3.7/site-packages/arcgis/gis/__init__.py in properties(self)
11460 return self._lazy_properties
11461 else:
> 11462 self._hydrate()
11463 return self._lazy_properties
11464
/opt/conda/lib/python3.7/site-packages/arcgis/gis/__init__.py in _hydrate(self)
11507 # try as a public server
11508 self._lazy_token = None
> 11509 self._refresh()
11510
11511 except HTTPError as httperror:
/opt/conda/lib/python3.7/site-packages/arcgis/gis/__init__.py in _refresh(self)
11450 dictdata = self._con.get(self.url, params)
11451 else:
> 11452 raise e
11453
11454 self._lazy_properties = PropertyMap(dictdata)
/opt/conda/lib/python3.7/site-packages/arcgis/gis/__init__.py in _refresh(self)
11443 else:
11444 try:
> 11445 dictdata = self._con.post(self.url, params, token=self._lazy_token)
11446 except Exception as e:
11447 if hasattr(e, 'msg') and e.msg == "Method Not Allowed":
/opt/conda/lib/python3.7/site-packages/arcgis/gis/_impl/_con/_connection.py in post(self, path, params, files, **kwargs)
619 try_json = kwargs.pop("try_json", True)
620 add_token = kwargs.pop('add_token', True)
--> 621 if url.find('://') == -1:
622 url = self._baseurl + url
623 if kwargs.pop("ssl", False) or self._all_ssl:
AttributeError: 'FeatureLayer' object has no attribute 'find'

I don't understand why I would need to authenticate with credentials manually here, or even how I would. I have a connection to a GIS object, and even passing that in as the gis parameter when instantiating the FeatureLayer object doesn't change the result. I also tried using FeatureCollection but got the same result with both approaches. Help or insight appreciated.
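For what it's worth, here is a hedged guess at both failures, not a confirmed fix: `featureLayer.layers[0]` is already a FeatureLayer, so wrapping it in `FeatureLayer(...)` passes an object where the constructor expects a URL string (which is why the connection code later calls `.find()` on it as if it were a string), and `update_definition` expects a JSON-serializable dict rather than a pre-serialized string like `'"fields": ...'`. A sketch under those assumptions, with a camelCase helper that also handles names containing more than one underscore (the original split-based version only handled two parts):

```python
def to_camel(name: str) -> str:
    """snake_case -> camelCase, handling any number of underscores."""
    head, *rest = name.split("_")
    return head.lower() + "".join(part.title() for part in rest)

def build_field_update(fields):
    """update_definition expects a dict, not a pre-serialized JSON string."""
    return {"fields": fields}

# Usage against a live layer (item id taken from the post above):
# item = gis.content.get("c952e9e257bd4fc887be2934291548cb")
# lyr = item.layers[0]                      # already a FeatureLayer
# fields = [dict(f) for f in lyr.properties["fields"]]
# for f in fields:
#     if "_" in f["name"]:
#         f["name"] = to_camel(f["name"])
# lyr.manager.update_definition(build_field_update(fields))
```

Whether the service accepts a field rename via updateDefinition is a separate question, but at least the two client-side errors shown above should go away.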
Posted 06-13-2021 10:39 AM · Kudos 0 · Replies 1 · Views 2528
POST
Looking for a widget in JSAPI 4.x that would allow me to pass a layer object and have a list view displayed as a sidebar in my app, kind of ArcGIS Workforce style, like the below. I came across LayerList/LayerListViewModel in my search, but that displays a list of all of the layers in the map, allowing the user to reorder those layers, toggle visibility, and such. Similar to what I am looking for, but I just want to be able to pass a layer object and list the features within that layer in a ListViewModel.
Posted 04-12-2021 08:53 AM · Kudos 0 · Replies 2 · Views 1187
POST
We have an ArcGIS Pro add-in that we deploy to all end-user systems via the AddinFolder registry key. As explained by @Wolf in this post, the process a Pro add-in follows when using this mechanism for deployment is: each time Pro starts up, it checks the well-known addinFolders locations for esriAddinX files and unzips the add-in content (in the 'esriaddinx' file) under the logged-in user's profile folder: ...\AppData\Local\ESRI\ArcGISPro\AssemblyCache. When the add-in is instantiated by Pro, that folder location (...\AppData\Local...) is used to load the DLLs required to run the add-in. If you delete the AssemblyCache folder completely, step 1 above repeats. What's not stated is that this also works for distributing updates. Somewhere behind the scenes there is probably some checksum wizardry going on to see whether the files in the assembly cache match the files in the well-known addinFolder, to determine whether the remote add-in needs to be unzipped again. Also, if you delete an add-in file from the remote folder, it gets removed from the local assembly cache. All of this is great and works pretty smoothly, but we'd like to not be required to use a publicly accessible shared drive. We already keep all of the source for our add-ins in a Git repo, with releases for each new update of the add-in. Is there any way to set up a Pro add-in to download the esriAddinX file from a Git repo, and always check for new releases from that repo?
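I'm not aware of a built-in mechanism for this, but one sketch, assuming the releases live on GitHub with the .esriAddinX file attached as a release asset, is a small startup or scheduled script that queries the GitHub releases API and drops the newest asset into the well-known add-in folder. The repo path and folder below are hypothetical:

```python
import json
import urllib.request
from pathlib import Path

def pick_addin_asset(release: dict):
    """Return the download URL of the first .esriAddinX asset in a
    GitHub release payload, or None if there isn't one."""
    for asset in release.get("assets", []):
        if asset.get("name", "").lower().endswith(".esriaddinx"):
            return asset.get("browser_download_url")
    return None

def sync_latest(repo: str, addin_folder: str):
    """Download the latest release's add-in into the well-known folder,
    where Pro's normal AssemblyCache refresh will pick it up."""
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    with urllib.request.urlopen(url) as resp:
        release = json.load(resp)
    download = pick_addin_asset(release)
    if download:
        target = Path(addin_folder) / download.rsplit("/", 1)[-1]
        urllib.request.urlretrieve(download, target)

# sync_latest("my-org/my-pro-addin", r"\\server\share\ProAddins")  # hypothetical
```

Since Pro already re-unzips when the file in the addinFolder changes, replacing the file there is enough; no extra cache invalidation should be needed.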
Posted 03-12-2021 08:41 AM · Kudos 1 · Replies 1 · Views 857
IDEA
It occurred to me this morning that if this could be supported, there could also be an ability to associate specific Pro core and extension licenses with membership in specific Active Directory groups.
Posted 01-25-2021 07:58 AM · Kudos 0 · Replies 0 · Views 1666
IDEA
@Anonymous User bit of a bad take. The privilege to manage another user's content is an administrative privilege, and it allows them to manage not just the one content item you intend, but any other user's content. It's asking for trouble, and it's a solution that shouldn't be offered up lightly. Using a shared account is only a slightly better approach, but is still playing incredibly fast and loose with security. An even bigger problem with both this suggestion and the previous one is that when you grant any administrative privilege to a user, you end up exposing every single service in the organization to that user, because I guess Portal now considers them a semi-administrator. It's kind of bananas. Shared accounts are almost never a good idea, regardless of data sensitivity or classification. To maintain security, one should use a centralized group that several users can contribute curated content to. Shared update groups are by far the best suggestion here, but they have their limitations. If an item could have multiple owners, decommissioning a user account being removed from the organization would be far easier, as you wouldn't run into as many issues with having to find a new owner for the departing user's content before removing them. It is a real headache for org admins. Props to @DamianSmith for having the foresight to see that coming if/when he leaves his org, and for bringing it up now to save his org admin some grief.
Posted 01-05-2021 03:22 PM · Kudos 0 · Replies 0 · Views 1333
IDEA
What Esri doesn't seem to understand is that there is zero reason to license by named user at all for Enterprise/Server. Just license by individual core; that is it. Scaling the platform to maintain performance and reliability will automatically increase licensing costs proportionally. Who cares if I have 10,000 named users if I only ever have 100 concurrent connections? That's what matters and what should drive cost: actual usage. The Named User licensing scheme, coupled with the inability to license fewer than 4 cores, causes 98% of the licensing complexities and inefficiencies that lead to customer frustration.
Posted 01-05-2021 02:57 PM · Kudos 0 · Replies 0 · Views 1186
IDEA
@Anonymous User I don't have any issue with an "Expiration" configuration setting that an admin could set to 'Session', indicating that the user acceptance is presented again with every new session and logged each time. That said, whether you need this message presented with every session depends entirely on the message being presented. Let's step outside the box of highly regulated industries for a moment and think about those that are not, as we all must do when thinking about how Esri technology should be implemented, given that it is used in every industry imaginable. What if you just wanted to use the Access Notice capability as a way to present the Terms of Service and require acceptance? Do you really want to present the ToS with every session? Probably not; that would get annoying very quickly. What if you just wanted to use the Access Notice capability to present a code of conduct? Or maybe a privacy notice? What if you just wanted users to acknowledge that there is a maintenance window coming on MM/DD/YYYY from HH:MM:SS to HH:MM:SS, during which the platform will be unavailable or maybe in read-only mode? Could you use a banner as a more passive way to present this information? Sure! But what if you want logging behind the acceptance? That would be a new thing for a banner to do, but not for a modal dialog. Ultimately, I don't think we're on different wavelengths. I'm just asking for logging of acceptance and a configurable Expiration setting.
Posted 11-20-2020 10:33 AM · Kudos 0 · Replies 0 · Views 1150
IDEA
ArcGIS Enterprise 10.8.1 introduced the Access Notice, which is a fantastic capability for ensuring users are informed of really important items. While the feature is very useful, it would be more useful with a couple of additional configuration options: logging of user acceptance, and expiration.

Logging: For heavily regulated industries, this capability could be used to require acceptance or acknowledgement of risks, data sensitivity, limitations, and all sorts of other things. Half of the point of doing that is to ensure the user is informed and understands the issue. The other half is giving the administrator an audit trail demonstrating that the user accepted or acknowledged, and it is this second half that is missing. It would be nice, when configuring the Access Notice, to be able to have it keep a log, perhaps in a hosted table service, providing an auditable trail of which users have accepted/acknowledged and when. It's the simplest feature service with editor tracking enabled. Obviously this wouldn't work for Portals with anonymous access enabled, and I think that's fine, as someone without an account is unlikely to need to be audited in the first place. It's typically the authenticated users that you need to track in this way.

Expiration: Currently, the acceptance just writes to a cookie, which is configured to expire with the browser session. That means that if a user visits your Portal 5 times in the same day using 5 different browser sessions, they will have to acknowledge/accept 5 different times. That doesn't seem logical; we really only need them to acknowledge/accept once every n days. So I'd offer that there needs to be a configuration option allowing us to set the expiration to a value that meets our organizational requirements.

These two combined would make for a much more useful Access Notice capability.
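To illustrate the logging half, a sketch only: assuming a hosted table already exists with 'username' and 'accepted_at' fields (the schema and item id here are hypothetical), an acceptance event could be appended as one row via the ArcGIS API for Python:

```python
from datetime import datetime, timezone

def acceptance_record(username: str) -> dict:
    """Build a feature dict representing one Access Notice acceptance."""
    return {
        "attributes": {
            "username": username,
            "accepted_at": datetime.now(timezone.utc).isoformat(),
        }
    }

# Appending to the hosted log table (hypothetical item id and schema):
# table = gis.content.get("<acceptance-log-item-id>").tables[0]
# table.edit_features(adds=[acceptance_record("jdoe")])
```

With editor tracking enabled on the table, the service itself would also stamp who wrote each row and when, giving the auditable trail described above.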
Posted 11-16-2020 11:19 AM · Kudos 4 · Replies 3 · Views 1189
Title | Kudos | Posted
---|---|---
 | 1 | 08-20-2021 09:29 AM
 | 2 | 02-19-2020 11:08 AM
 | 1 | 10-18-2021 10:38 AM
 | 1 | 10-18-2021 10:17 AM
 | 3 | 10-18-2021 10:51 AM
Online Status: Offline · Date Last Visited: 05-19-2023 08:06 PM