POST
Thanks David. Yes, Count(FeatureSetByName) returns a different number when the insert arrives via replica sync vs. inserting a feature directly in Pro. VERY confusing and not documented at all (at least from what I could find). This Arcade script is already bloated at 500 lines; it could probably be trimmed down with some more time, but it's complex. I really see no alternative to firing a script from an insert trigger. Maybe a feature service webhook would be more reliable. Better yet, that webhook could call a Power Automate flow, which invokes an Azure Python runbook...
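For what it's worth, the runbook end of that chain could stay pretty small. A rough, untested sketch of what I'm picturing (the layer URL, token handling, and the feat_id value the flow would pass along are all placeholders, not anything we've built):

# Untested sketch of the Azure Python runbook end of the chain:
# feature service webhook -> Power Automate flow -> this runbook.
# LAYER_URL and TOKEN are placeholders; the flow would pass feat_id in.
import requests

LAYER_URL = "https://myserver.example.com/arcgis/rest/services/MyService/FeatureServer/0"
TOKEN = "<token generated or forwarded by the flow>"

def count_matching_features(feat_id):
    # Query the layer's REST endpoint after the edit has been committed,
    # instead of relying on FeatureSetByName inside the attribute rule.
    params = {
        "where": f"feat_id = {feat_id}",
        "returnCountOnly": "true",
        "f": "json",
        "token": TOKEN,
    }
    r = requests.get(f"{LAYER_URL}/query", params=params, timeout=30)
    r.raise_for_status()
    return r.json().get("count", 0)

if __name__ == "__main__":
    print(count_matching_features(-99))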
Posted 3 weeks ago

POST
We have a fairly complex Arcade script that executes on an Enterprise geodatabase feature class insert, via a sync from an offline area in the Field Maps app. If we query the feature class using Filter(FeatureSetByName()), the returned feat_set does not contain the $feature that triggered the attribute rule.

var calcFeatId = -99;
var sqlexpression = "feat_id = @calcFeatId";
var feat_set = Filter(FeatureSetByName($datastore, 'mydb.dbo.pt_layer', ['globalid'], false), sqlexpression);

return {
    'attributes': { // values to calculate on $feature
        'feat_id': calcFeatId
    }
};

Is this expected behavior? Further, if that same attribute rule modifies attribute values of $feature, would the above query include $feature in the filtered FeatureSet?
Posted 3 weeks ago

POST
We've had the same PostgreSQL (15.6) database and feature service published for a few years; it's been working great and is heavily utilized by staff using the Field Maps app (both iOS and Android). After upgrading to 11.3 we're getting failed syncs and consistent error logs in Server Manager. Field-collected edits are successfully synced to the SDE geodatabase, but users are unable to sync in the download direction (from the SDE geodatabase to the Field Maps device). A lot of edits/manipulation happens via attribute rules when the upload sync runs, and the field staff are then UNABLE to sync those edits back down to their devices, causing field chaos. I have a case open with tech support; just seeing if any other users are experiencing this behavior at 11.3.
Posted 07-26-2024 07:15 AM

POST
We use a Microsoft Power Automate cloud flow as our webhook receiver. After updating to Enterprise 11.3, we're trying to use signature verification to authenticate the webhook trigger. However, since our Microsoft services are deployed in US Government (GCC High), we're unable to use the Encodian connectors (Encodian - Connectors | Microsoft Learn) and the Utility - Create HMAC action. I'm wondering how others are generating these HMACs using the secret key specified in the Server webhook? Or, any pointers on how to do this in Power Automate cloud flows without using Encodian?
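For comparison, outside of Power Automate this is roughly the calculation I'd expect to have to reproduce (a minimal Python sketch, e.g. in an Azure Function the flow could call; which header carries the signature, whether the digest is hex or Base64, and exactly which bytes get signed are assumptions to verify against the Enterprise webhook docs and an actual request):

# Minimal HMAC-SHA256 sketch (e.g., in an Azure Function the flow calls).
# ASSUMPTIONS to verify against the actual webhook request from Enterprise:
# which header carries the signature, whether the digest is hex or Base64,
# and that the raw request body is exactly the bytes that were signed.
import hmac
import hashlib

SECRET = b"the secret key configured on the Server webhook"  # placeholder

def verify_signature(raw_body: bytes, received_signature: str) -> bool:
    computed = hmac.new(SECRET, raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing signatures
    return hmac.compare_digest(computed, received_signature.lower())

# Example usage with a dummy payload:
if __name__ == "__main__":
    body = b'{"events": []}'
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    print(verify_signature(body, sig))  # True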
Posted 07-10-2024 07:48 AM

POST
Fortunately, this was just cached browser data causing the issue! I initially cleared the Edge cache with "Last 7 days" selected, but this time I tried "All time", and it's working great now! Phew... didn't want to deal with this on a Friday evening.
Posted 06-21-2024 03:34 PM

POST
After updating all Enterprise components from 11.1 to 11.3, I'm seeing this in the Portal logs:

Error before stopping configuration observer. C:\arcgisportal\content\items\portal\portal-config.properties (The system cannot find the path specified)

What's odd is that our content directory is E:\arcgisportal\content, and the portal-config.properties file is missing from both C: and E:. When the upgrade was progressing through steps 1-7, it initially failed because of a disk space issue; after space was increased, Retry upgrade worked without issue. The Map Viewer is missing all the layers, and the details in the left-hand pane as well. Otherwise, everything seems to be working fine. License.json imported from My Esri with no other issues; all Enterprise components successfully updated. Help, please!
Posted 06-21-2024 02:04 PM

POST
We have the below two immediate calculation rules specified on the same layer; both are executed by an insert trigger. I wasn't able to find anything definitive in the attribute rules docs about evaluation order. Does rule "update_trans_table" wait for rule "update_last_known" to fully finish before executing? i.e., can we assume that rules are executed in order, one at a time? Thanks!
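For context, here's a rough way to dump the configured rules and check what order they're stored with (a Python sketch using the Export Attribute Rules GP tool; the feature class path and output CSV below are placeholders, and I haven't confirmed which exported column carries the evaluation order, so the sketch just prints the columns):

# Rough sketch: export the layer's attribute rules to a CSV and inspect them,
# to see how "update_last_known" and "update_trans_table" are stored.
# Both paths are placeholders, not real ones.
import csv
import arcpy

fc = r"C:\data\connection.sde\mydb.dbo.pt_layer"   # placeholder feature class path
out_csv = r"C:\temp\pt_layer_attribute_rules.csv"  # placeholder output path

arcpy.management.ExportAttributeRules(fc, out_csv)

with open(out_csv, newline="") as f:
    reader = csv.DictReader(f)
    print(reader.fieldnames)   # shows which rule properties (name, triggers, order, ...) are exported
    for row in reader:
        print(row)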
Posted 05-23-2024 08:27 AM

POST
@DevinBartley2 wrote: Appreciate the extra provided detail and comments, this saves me an immense amount of time and lets me focus on the heart of the issue. Your comment makes me wonder how the performance would have been if you stuck with Azure Files but had the AVD hosts Entra ID joined and Kerberos auth enabled on Azure Files.

The performance of Azure Files with Entra ID-joined AVD hosts would still not match the NetApp volume (Premium performance tier). How do I know this? After we deployed Entra ID-joined AVD session hosts, we continued to use Azure Files for our FSLogix profile containers (user_name.vhdx). To do so, we had to create a new Azure Files premium storage account for the "profiles" file share, because you can only configure one authentication method per storage account and the old storage account was configured for AD DS authentication. AD DS auth isn't supported for Entra ID-joined AVD session hosts, so the new storage account was configured with Kerberos authentication. We tried to migrate the old user_name.vhdx profile containers to the new storage account, but were unsuccessful because the user's SID changed (AD DS identity vs. synced Entra ID identity). Something internal to how FSLogix looks up the location of the user's profile container prohibited it from working; per the MS docs, migrating a profile container after a user's SID has changed is generally unsupported.

Everything seemed to work great, UNTIL the session host failed to retrieve an updated Kerberos ticket from Microsoft (Azure Files) to authenticate to the "profiles" share hosting the user_name.vhdx profile container. The Windows command klist would show that the Kerberos ticket for the "profiles" share expired one hour after it was issued. If that ticket wasn't renewed by Windows before the expiration date/time, it was the equivalent of pulling a hard drive out of a running Windows machine: complete chaos ensues for the user's session. I have no clue why Windows wasn't able to retrieve updated Kerberos tickets from Azure Files. It would work fine for all users for hours, then all of a sudden it wouldn't; there was no rhyme or reason as to when it would or wouldn't work. All I can think of is that Azure vnet sputtering or packet loss was the cause. Someone on Stack Overflow said they couldn't think of a worse use case for Azure Files than profile container hosting... that's all we had to hear, and we pulled the plug on Azure Files, especially given our past GIS experience with it.

Now, remember the Windows command klist. While we were troubleshooting the above profile container issue with Azure Files, we noticed that the Kerberos ticket for the NetApp "GISfiles" volume (which stores FGDBs, etc.) was being issued by our AD machine and had a lifespan of 10 hours. So, why not create another NetApp volume, "profiles", and simply move the profile containers there? That's exactly what we did, and all of our problems are now solved. Something tells me the unreliable Kerberos ticket renewal with Azure Files would cause problems for GIS data stored there as well; I can almost guarantee it. Pro would throw the "General Function Failure" error when that ticket expired, if a new one wasn't issued in time.

I overheard an analyst last week ask, "has anyone worked on their local machine with Pro lately?" Someone responded, "no, the AVD machines are 10x faster." A simple conversation, but very satisfying to hear; we've been put through the paces getting this technology to work.

When paired with a high-performing NVIDIA GPU, our Pro machines are BLAZING fast. The next step will be to purchase another 1 TB of capacity in our NetApp capacity pool, bringing the total to 3 TB split between two volumes (GIS and Profiles), which gives us 3 * 64 MiB/s = 192 MiB/s of throughput to play with. NetApp lets you manually specify how much throughput to allocate to each volume; it's highly configurable and super simple. I would highly recommend NetApp, it's amazing.
Posted 05-14-2024 10:29 AM

POST
I had a different thread I was updating: Azure Files premium file share - SLOW File Geodata... - Esri Community. Yes, absolutely upgrade to NetApp Files, you will not be disappointed. But also note that we found performance still lacking until our AVD session hosts were Entra ID joined and Kerberos authentication to the NetApp volume was configured. Traditional AD-joined AVD session hosts still had issues with NetApp performance: the machines would seemingly drop the NetApp volume connection, which caused Pro to display "General Function Failure" when accessing the attribute table of an FGDB layer stored on the NetApp volume.
Posted 05-14-2024 07:42 AM

POST
We deployed a new Azure Virtual Desktop host pool that only contains Entra ID-joined session host VMs, with the same NVIDIA GPUs, and upgraded to Pro 3.2. Since the VMs were no longer AD joined, we had to migrate all the AD Group Policy Objects to Intune, which really wasn't that difficult given the GPO import tools in Intune. FSLogix profile containers were switched away from Azure Files Premium and are now stored in an Azure NetApp Files volume, just like our GIS file-based data. Entra ID-joined session hosts can only authenticate to SMB shares using Kerberos authentication, not NTLM (on-prem AD), so Kerberos auth and ACLs (permissions) were configured per the Azure docs for both the NetApp \profile and \gis volumes.

As for the results: ALL PROBLEMS ARE SOLVED! An added bonus is that all MS Office apps, including the Remote Desktop client, are SSO using Entra ID (cloud) credentials. The VMs, Pro, and FGDB access are 100% improved. I wish I had a better explanation, but our problems with FGDB performance in AVD were 100% caused by authentication issues with traditional on-prem AD (which was a VM in Azure, not really on-prem). With the sputtering/performance issues resolved, I have no doubt that better performance could be achieved by increasing the NetApp capacity pool quota (i.e., a bigger monthly A$ure invoice). Our current capacity pool quota is 2 TB, which comes with 128 MiB/s of throughput, divided however we want between the \profile and \gis volumes. Another 1 TB would add 64 MiB/s of throughput. Azure NetApp Files is quite amazing, so simple.
Posted 05-02-2024 04:44 PM

POST
@JonahLay Yes, that redirect URI is what I have. We can close this thread; the problem was a CA policy in Intune scoped to demo_user requiring "All users terms of use". This is odd, because demo_user has already accepted our all-users Terms of Use policy. So it seems like the ESRI auth connection doesn't support that CA grant control. After excluding demo_user from that CA policy, everything works as expected both with and without MFA. This is great progress ESRI, thanks!
Posted 04-23-2024 09:46 AM

POST
Our Azure tenant is deployed in Azure Government. I followed these steps: Connect to authentication providers from ArcGIS Pro—ArcGIS Pro | Documentation. When I attempt to sign in to the connection in Pro, I get an error. I assigned demo_user permission to access the ArcGIS Pro Azure enterprise app. I also edited our conditional access policies to exclude demo_user from any policies requiring MFA. Even with MFA, I complete the MS Authenticator prompt and still see the error in Pro. The Azure enterprise app sign-in log shows successful login attempts both with and without MFA, no issues. Anyone have any ideas?

Here's my concern: Redirect URI (reply URL) restrictions - Microsoft identity platform | Microsoft Learn says redirect URIs must begin with the scheme https. From the first link, step #1C, when you register Pro as an Azure app: for Redirect URI, choose Mobile and desktop applications as the platform and enter the URI arcgis-pro://auth. Could this error be caused by the authorization server (Microsoft) not allowing demo_user to be redirected back to Pro because the arcgis-pro:// scheme doesn't match the https:// scheme that MS requires?
Posted 04-22-2024 04:18 PM

POST
We deployed a new Azure Virtual Desktop host pool that contains NVIDIA GPUs and Pro 3.2.1. These general function failures STILL happen, only when accessing FGDBs stored in Azure NetApp Files. Anyone experience this? ESRI, any advice here?
Posted 02-06-2024 11:45 AM

POST
We have a Python script wired to a toolbox that suddenly started throwing the below error at 3.2.1. Pro 3.1.3 doesn't have any issues opening this tool, but 3.2.1 will not even open the tool for users to input parameters. Anyone experience this? How do I diagnose it?

initializeParameters Error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x91 in position 8: invalid start byte

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\david\QAP_Toolbox.tbx#CIPAutoQAPRound2.InitializeParameters.py", line 29, in <module>
  File "C:\Users\david\QAP_Toolbox.tbx#CIPAutoQAPRound2.InitializeParameters.py", line 8, in __init__
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\__init__.py", line 1603, in GetParameterInfo
    return ParameterArray(gp.getParameterInfo(tool_name))
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\geoprocessing\_base.py", line 457, in getParameterInfo
    self._gp.GetParameterInfo(*gp_fixargs(args, True)))
SystemError: <built-in method getparameterinfo of geoprocessing object object at 0x000001D11824C150> returned a result with an error set
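In case it helps anyone else: byte 0x91 is the Windows-1252 left curly quote, which is not a valid UTF-8 start byte, so my guess is a smart quote crept into the embedded InitializeParameters script or a parameter label and 3.2.1 is stricter about decoding it as UTF-8. A tiny illustrative Python sketch (the byte string and file name are made up, not the real toolbox content):

# Illustrative only: 0x91/0x92 are the Windows-1252 curly quotes, which are
# invalid as UTF-8 start bytes and reproduce the error in the traceback above.
import pathlib

raw = b"Label \x91Round 2\x92"   # made-up bytes, not the real toolbox content

try:
    raw.decode("utf-8")
except UnicodeDecodeError as err:
    print(err)                    # 'utf-8' codec can't decode byte 0x91 in position 6 ...

print(raw.decode("cp1252"))       # decodes fine as Windows-1252: Label 'Round 2' with curly quotes

# One possible fix for a standalone copy of the script: re-save it as UTF-8.
p = pathlib.Path("InitializeParameters_export.py")   # hypothetical exported copy of the tool's script
if p.exists():
    p.write_text(p.read_text(encoding="cp1252"), encoding="utf-8")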
Posted 01-18-2024 07:57 AM

POST
I can't believe this isn't documented, or discussed more..? We abandoned Azure Files and deployed an Azure NetApp Files 2 TB SMB volume, Premium tier. Throughput at that tier and volume size is 128 MiB/s, equivalent to an on-prem 1000 Mbps network. And the performance shows; I'd say it's faster than our 10K RPM SSD/SATA disks on a 1000 Mbps network. This is our identical architecture: https://learn.microsoft.com/en-us/azure/architecture/example-scenario/data/esri-arcgis-azure-virtual-desktop. The only issue we have is random "general function failures" in Pro 3.1.2. It happens when the project is using FGDBs; with SDE GDBs you never get that failure. I'd say that error was happening 1-4 times per hour on Azure Files, and now once per day on NetApp Files. It seems to happen when you open an attribute table. There's no way for Pro to recover; nothing draws, etc., until you close Pro and reopen... back to normal. So close to a REALLY nice, highly scalable virtual platform.
Posted 11-10-2023 05:45 PM