POST
The AttachmentsCreated feature service webhook fires reliably, but I discovered that Editor Tracking has to be enabled on the SDE geodatabase attachment table in order for the attachments:{adds:[]} JSON object returned from /extractChanges to be populated with the attachment info. But ONLY in an online workflow. If the Field Maps app is used offline, then the feature (with attachment) is synced and the webhook fires, but attachments:{adds:[]} is empty. Anyone else experience this? We're using ArcGIS Enterprise 11.4.
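For anyone comparing notes, a minimal sketch of the check I run against /extractChanges is below (Python, using requests). The service URL, token, and server gens are placeholders, and the real operation is asynchronous (it returns a statusUrl to poll), which I've omitted for brevity:

import requests

# Placeholder values; substitute your own feature service URL and token.
SERVICE_URL = "https://example.com/server/rest/services/MyService/FeatureServer"

params = {
    "token": "validToken",
    "dataFormat": "json",
    "layers": "[0]",
    "returnAttachments": "true",
    "returnAttachmentsDataByUrl": "true",
    "serverGens": "1742307850054,1742307850055",
    "f": "json",
}

# NOTE: extractChanges actually responds with a statusUrl to poll until the
# result JSON is ready; that polling step is omitted from this sketch.
response = requests.post(f"{SERVICE_URL}/extractChanges", data=params)
response.raise_for_status()
result = response.json()

# With Editor Tracking enabled on the attachment table (online workflow),
# the per-layer "attachments" adds are populated; after an offline Field
# Maps sync they come back empty for us.
for edit in result.get("edits", []):
    print(edit.get("attachments", {}))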
Posted 03-22-2025 07:25 AM

POST
Why can't I retrieve any attachments when using an attribute rule with an Insert trigger on a layer that has attachments enabled? This is an Enterprise Geodatabase that's published to Enterprise 11.4 as a referenced feature service. Both methods below fail to retrieve the attachment on the photo_point layer.

Count(Attachments($feature))
// always reports 0

var sql = Concatenate(["(rel_globalid = '", $feature.globalid, "')"]);
var photoTableFeatSet = Filter(FeatureSetByName($datastore, 'dbname.dbo.photo_point__ATTACH', ['rel_globalid','att_name'], false), sql);
Count(photoTableFeatSet)
// always reports 0

I even set up a feature service webhook on the photo_point layer, listening for 'AttachmentsCreated', which successfully fires a Power Automate flow. But inside the flow, calling /extractChanges with the query params below:

{
  "token": "validToken",
  "dataFormat": "json",
  "layers": "[5]",
  "returnAttachments": "true",
  "returnAttachmentsDataByUrl": "true",
  "f": "json",
  "serverGens": "1742307850054,1742307850055"
}

always returns this "attachments" value in the changes:

"attachments": {
  "adds": [],
  "updates": [],
  "deleteIds": []
}

What is happening? All I want to do is retrieve the attachments on a feature that is inserted into a layer, whether by attribute rule or feature service webhook; I don't care which!
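In case it helps anyone hitting the same wall, one workaround I've been sketching from the flow is calling the layer's queryAttachments endpoint directly with the GlobalID from the webhook payload, instead of relying on /extractChanges. A rough Python sketch; the URL, token, and GlobalID are placeholders, and the response shape is as I understand it from the REST API docs:

import requests

# Layer 5 is the photo_point layer in my service; URL/token are placeholders.
LAYER_URL = "https://example.com/server/rest/services/MyService/FeatureServer/5"

params = {
    "token": "validToken",
    # Hypothetical GlobalID of the inserted parent feature.
    "globalIds": "{A1B2C3D4-0000-0000-0000-000000000000}",
    "returnUrl": "true",  # ask for download URLs rather than raw bytes
    "f": "json",
}

response = requests.get(f"{LAYER_URL}/queryAttachments", params=params)
response.raise_for_status()

# Each attachment group corresponds to one parent feature.
for group in response.json().get("attachmentGroups", []):
    for info in group.get("attachmentInfos", []):
        print(info.get("name"), info.get("url"))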
Posted 03-18-2025 11:06 AM

POST
My only advice is to migrate both the FsLogix profile containers AND the GIS file share to Azure NetApp Files and stop fussing with Azure Files (all storage/transaction tiers). We have 1 Azure NetApp Files 3 TB capacity pool (Premium tier) that's used for the /profile and /gis shares, and we've had zero issues since my thread reply above. We opted for manual throughput so that we could favor the /gis volume and assign it 132 MiB/s. The profiles are a bit slow to sign out due to the throughput bottleneck, but we just advised users to sign out of AVD, then minimize the AVD session and go about their business on the local PC; FsLogix eventually completes all communication with the profile container in NetApp and signs out without any issues. We started using this VM SKU: Standard NC16as T4 v3, and we're able to comfortably fit 4 pooled (multi-session) users on the same VM. I actually think we could increase that to 5, maybe 6... and all of them are using Pro. It's pretty amazing.
Posted 02-18-2025 11:55 AM

POST
Thanks David. Yes, Count(FeatureSetByName()) returns a different number depending on whether the feature arrives via replica sync vs. being inserted with Pro. VERY confusing, and not documented at all (at least from what I could find). This Arcade script is already bloated at 500 lines; it could be trimmed down if I took some more time, but it's complex. I really see no alternative to firing a script from an Insert trigger. Maybe a feature service webhook would be more reliable. Better yet, that webhook calls a Power Automate flow that invokes an Azure Python runbook...
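To make that last idea concrete, a rough sketch of such a runbook is below. Everything in it is hypothetical (the payload shape, the field names); it only illustrates the webhook to Power Automate to runbook chain:

import json
import sys

def main(webhook_body: str) -> None:
    # Assumes the flow forwards the feature service webhook body verbatim
    # as the runbook parameter; the field names below are guesses.
    payload = json.loads(webhook_body)
    for event in payload.get("events", []):
        layer_id = event.get("layerId")
        print(f"layer {layer_id}: run post-processing here")
        # ...this is where the heavy logic from the 500-line Arcade
        # rule could live, calling /extractChanges as needed...

if __name__ == "__main__":
    # Azure Automation passes runbook parameters on the command line.
    main(sys.argv[1] if len(sys.argv) > 1 else "{}")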
Posted 10-24-2024 12:29 PM

POST
We have a fairly complex Arcade script that executes on an Enterprise Geodatabase feature class Insert via a sync from an offline area in the Field Maps app. If we query the feature class using Filter(FeatureSetByName()), the returned feature set does not contain the $feature that triggered the attribute rule.

var calcFeatId = -99;
// @calcFeatId substitutes the Arcade variable into the filter expression
var sqlexpression = "feat_id = @calcFeatId";
var feat_set = Filter(FeatureSetByName($datastore, 'mydb.dbo.pt_layer', ['globalid'], false), sqlexpression);

return {
    'attributes': { // values to calculate on $feature
        'feat_id': calcFeatId
    }
};

Is this expected behavior? Further, what if that same attribute rule modifies attribute values of $feature? Would the above query include $feature in the filtered FeatureSet?
Posted 10-24-2024 11:18 AM

POST
We've had the same PostgreSQL (15.6) database and feature service published for a few years; it's been working great and is heavily utilized by staff using the Field Maps app (both iOS and Android). After upgrading to 11.3 we're getting failed syncs and consistent error logs in Server Manager. Field-collected edits are successfully synced to the SDE geodatabase, but users are unable to sync in the download direction (from the SDE geodatabase to the Field Maps device). A lot of edits/manipulation happens on the upload sync via attribute rules, and the field staff are UNABLE to then sync those edits back to their devices, causing field chaos. I have a case open with tech support; just seeing if any other users are experiencing this behavior at 11.3.
Posted 07-26-2024 07:15 AM

POST
We use a Microsoft Power Automate cloud flow as our webhook receiver. After updating to Enterprise 11.3, we're trying to use signature verification to authenticate the webhook trigger. However, since our Microsoft services are deployed in US Government (GCC High), we're unable to use the Encodian connector (Encodian - Connectors | Microsoft Learn) and its Utility - Create HMAC action. I'm wondering how others are generating these HMACs using the secret key specified in the Server webhook? Or, any pointers on how to do this in Power Automate cloud flows without using Encodian?
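For anyone else stuck on this, the computation itself is just an HMAC-SHA256 of the raw request body with the webhook's secret key, hex-encoded, compared to the signature header on the incoming request (check the webhook docs for the exact header name). A minimal Python sketch with made-up values:

import hashlib
import hmac

# Made-up values; use the secret configured on the Server webhook and the
# exact raw request body the trigger received.
SECRET = b"my-webhook-secret"
raw_body = b'{"events": [{"eventType": "AttachmentsCreated"}]}'

expected = hmac.new(SECRET, raw_body, hashlib.sha256).hexdigest()

def verify(signature_header: str) -> bool:
    # compare_digest avoids leaking timing information.
    return hmac.compare_digest(expected, signature_header.lower())

print(expected)

Since GCC High rules out the Encodian connector, one option might be a small Azure Function or Automation runbook that performs this computation and returns the verdict to the flow.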
Posted 07-10-2024 07:48 AM

POST
Fortunately, this was just cached browser data causing the issue! I initially cleared the Edge cache with "Last 7 days" selected, but I tried "All time" when clearing it this time, and it's working great now! Phew... didn't want to deal with this on a Friday evening.
Posted 06-21-2024 03:34 PM

POST
After updating all Enterprise components from 11.1 to 11.3, I'm seeing this in the Portal logs:

Error before stopping configuration observer. C:\arcgisportal\content\items\portal\portal-config.properties (The system cannot find the path specified)

What's odd is that our content directory is E:\arcgisportal\content, and the portal-config.properties file is missing from both C: and E:. When the upgrade was initially progressing through steps 1-7, it failed because of a disk space issue. After that was increased, Retry upgrade worked without issue. The Map Viewer is missing all the layers, and the details on the left-hand pane also. Otherwise, everything seems to be working fine. License.json imported from My ESRI with no other issues; all Enterprise components successfully updated. Help, please!
Posted 06-21-2024 02:04 PM

POST
We have the below 2 immediate calculation rules specified on the same layer; both are executed by an Insert trigger. I wasn't able to find anything definitive in the Attribute Rules docs about evaluation order. Does rule "update_trans_table" wait for rule "update_last_known" to fully finish before executing? i.e., can we assume that rules are executed in order, one at a time? Thanks!
Posted 05-23-2024 08:27 AM

POST
@DevinBartley2 wrote: Appreciate the extra provided detail and comments, this saves me an immense amount of time and lets me focus on the heart of the issue. Your comment makes me wonder how the performance would have been if you stuck with Azure Files but had the AVD hosts Entra ID joined and Kerberos auth enabled on Azure Files.

The performance of Azure Files with Entra ID joined AVD hosts would still not match the NetApp volume (Premium performance tier). How do I know this? After we deployed Entra ID joined AVD session hosts, we continued to use Azure Files for our FsLogix profile containers (user_name.vhdx). To do so, we had to create a new Azure Files premium storage account for the "profiles" file share, because you can only configure 1 authentication method per storage account, and the old storage account was configured for AD DS authentication. AD DS auth isn't supported for Entra ID joined AVD session hosts, so the new storage account was configured with Kerberos authentication.

We tried to migrate the old user_name.vhdx profile containers to the new storage account, but were unsuccessful because the SID of the user_name changed (AD DS identity vs. synced Entra ID identity). Something internal to how FsLogix looks up the location of the user's profile container prohibited it from working; per the MS docs, migrating a profile container after a user's SID has changed is generally unsupported.

Everything seemed to work great, UNTIL the session host failed to retrieve an updated Kerberos ticket from Microsoft (Azure Files) to authenticate to the "profiles" share that was hosting the user_name.vhdx profile container. The Windows command klist would show that the "profiles" share Kerberos ticket expired 1 hour after it was issued. If that ticket wasn't renewed by Windows before that expiration date/time, it was the equivalent of pulling a hard drive out of a running Windows machine: complete chaos ensues for the user's session. I have no clue why Windows wasn't able to retrieve updated Kerberos tickets from Azure Files. It would work fine for all users for hours, then all of a sudden it wouldn't. There was no rhyme or reason to when it would or wouldn't work. All I can think of is that Azure vnet sputtering or packet loss was the cause. Someone on StackOverflow said they couldn't think of a worse use case for Azure Files than profile container hosting... that's all we had to hear, and we pulled the plug on Azure Files, especially given our past GIS experience with it.

Now, remember the Windows command klist. While we were troubleshooting the above profile container issue with Azure Files, we noticed that the Kerberos ticket for the NetApp "GISfiles" volume (stores FGDBs, etc.) was being issued by our AD machine and had a lifespan of 10 hours. So, why not create another NetApp volume, "profiles", and simply move the profile containers there? That's exactly what we did, and all of our problems are now solved. Something tells me that the unreliable Kerberos ticket renewal process with Azure Files would cause problems for GIS data stored there as well; I can almost guarantee it. Pro would throw the "General Function Failure" error when that ticket expired, if a new one wasn't issued in time.

Last week I overheard an analyst ask, "has anyone worked on their local machine with Pro lately?" Someone responded, "no, the AVD machines are 10x faster." A simple conversation, but very satisfying to hear; we've been put through the paces getting this technology to work.

When paired with a high-performing NVIDIA GPU, our Pro machines are BLAZING fast. The next step will be to purchase another 1 TB of capacity for our NetApp capacity pool, bringing the total to 3 TB split between the 2 volumes (GIS and Profiles); then we get 3 * 64 MiB/s = 192 MiB/s of throughput to play with. NetApp lets you manually specify how much throughput to allocate to each volume; it's highly configurable and super simple. I would highly recommend NetApp, it's amazing.
Posted 05-14-2024 10:29 AM

POST
I had a different thread I was updating: Azure Files premium file share - SLOW File Geodata... - Esri Community. Yes, absolutely upgrade to NetApp Files, you will not be disappointed. But also note that we found performance still lacking until our AVD session hosts were Entra ID joined and Kerberos authentication to the NetApp volume was configured. Traditional AD-joined AVD session hosts still had issues with NetApp performance: the machines would seemingly drop the NetApp volume connection, which caused Pro to display "General Function Failure" when accessing the attribute table of an FGDB layer stored on the NetApp volume.
Posted 05-14-2024 07:42 AM

POST
We deployed a new Azure Virtual Desktop host pool that only contained Entra ID joined session host VMs, with the same NVIDIA GPUs, and upgraded to Pro 3.2. Since the VMs were no longer AD joined, we had to migrate all the AD Group Policy Objects to Intune, which really wasn't that difficult given the GPO import tools in Intune. FsLogix profile containers were switched away from Azure Files Premium and are being stored in an Azure NetApp volume, just like our GIS file-based data. Entra ID joined session hosts can only authenticate to SMB shares using Kerberos authentication, not NTLM (on-prem AD). So Kerberos auth and ACLs (permissions) were configured per the Azure docs for both the NetApp \profile and \gis volumes.

As for the results: ALL PROBLEMS ARE SOLVED! An added bonus is that all MS Office apps, including the Remote Desktop client, are SSO using Entra ID (cloud) credentials. The VMs, Pro, and FGDB access are 100% improved. I wish I had a better explanation, but our problems with FGDB performance in AVD were 100% caused by authentication issues with traditional on-prem AD (which was a VM in Azure, not really on-prem). With the sputtering/performance issues resolved, I have no doubt that better performance could be achieved by increasing the NetApp capacity pool quota (i.e. the monthly A$ure invoice). Our current capacity pool quota is 2 TB, which comes with 128 MiB/s of throughput, divided however we want between the \profile and \gis volumes. Another 1 TB would add +64 MiB/s of throughput. Azure NetApp is quite amazing, so simple.
Posted 05-02-2024 04:44 PM

POST
@JonahLay Yes, that redirect URI is what I have. We can close this thread; the problem was a Conditional Access policy in Intune scoped to demo_user requiring the "All users terms of use" grant. This is odd, because demo_user has already accepted our all-users Terms of Use policy. So, it seems like the ESRI auth connection doesn't support that CA grant control. After excluding demo_user from that CA policy, everything works as expected both with and without MFA. This is great progress ESRI, thanks!
Posted 04-23-2024 09:46 AM

POST
Our Azure tenant is deployed in Azure Government. I followed these steps: Connect to authentication providers from ArcGIS Pro—ArcGIS Pro | Documentation. When I attempt to sign in to the connection in Pro, I get an error. I assigned demo_user permission to access the ArcGIS Pro Azure enterprise app. I also edited our Conditional Access policies to exclude demo_user from any policies requiring MFA. Even with MFA enabled, I complete the MS Authenticator prompt and still see the error in Pro. The Azure enterprise app sign-in log shows successful login attempts both with and without MFA, no issues. Anyone have any ideas?

Here's my concern: per Redirect URI (reply URL) restrictions - Microsoft identity platform | Microsoft Learn, redirect URIs must begin with the scheme https. From the first link, step #1C, when you register Pro as an Azure app: For Redirect URI, choose Mobile and desktop applications as the platform and enter the URI: arcgis-pro://auth. Could this error be caused by the authorization server (Microsoft) not allowing demo_user to be redirected back to Pro, because the arcgis-pro:// scheme doesn't match the https:// scheme that MS requires?
Posted 04-22-2024 04:18 PM