POST
I am trying to determine the rules of engagement when using an ESRI REST service. I issue the following request:

rest/services/Utilities/Geometry/GeometryServer/simplify?sr=4283&geometries=%7b%22geometryType%22%3a%22esriGeometryPolygon%22%2c%22geometries%22%3a%5b%7b%22rings%22%3a%5b%5b%5b123.83226587344406%2c-28.061656435295191%5d%2c%5b123.83226587344406%2c-28.008807755308091%5d%2c%5b123.88049895044182%2c-28.008807755308091%5d%2c%5b123.88049895044182%2c-28.061656435295191%5d%2c%5b123.83226587344406%2c-28.061656435295191%5d%5d%5d%7d%5d%7d&f=json

It normally returns a status code of 200 and a JSON string containing a polygon. Every now and again (somewhere between 1 in 1,000 and 1 in 1,000,000) it instead returns the following:

HTTP/1.1 200 OK
Strict-Transport-Security: max-age=31536000;includeSubDomains
Vary: Accept-Encoding
Cache-Control: private,no-store,no-cache
Content-Type: text/html; charset=utf-8
Date: Sun, 30 Jun 2019 11:00:21 GMT
Server: Microsoft-IIS/10.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Set-Cookie: ZNPCQ003-32383000=6f2aa75d; Path=/; Domain=.dmp.wa.gov.au; HttpOnly
Via: 1.1 gissdi.dmp.wa.gov.au (Access Gateway-ag-683BD5D4662193FD-9926260)
Content-Length: 7293

ArcGIS Web Adaptor
Could not access any server machines. Please contact your system administrator.

Leaving aside the obvious error that has occurred somewhere between the ArcGIS Web Adaptor and the server machine (although any clues would be appreciated!):

- Is it valid to return a status code of 200 and not honour f=json?
- Is it OK to return a status code of 200 and override the requested format with 'Content-Type: text/html'?

The answers to these questions go to the heart of how code should check the response from a REST call. My expectation was that a code of 200 says the request has been accepted and processed, and that the result of that processing (including possible errors) is contained in a JSON construct.
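Whatever the formal answer, client code clearly cannot assume that 200 means JSON here. The kind of defensive parse I have in mind looks like this (a sketch only — the function name and the use of RuntimeError are mine, not part of any ESRI client library):

```python
import json

def parse_geometry_response(status_code, content_type, body):
    """Defensively interpret a Geometry Service response.

    A 200 status alone is not enough: an intermediary such as the Web
    Adaptor can return 200 with an HTML error page, so the Content-Type
    must be checked before the body is parsed as JSON.
    """
    if status_code != 200:
        raise RuntimeError(f"HTTP error {status_code}")
    if "json" not in content_type.lower():
        # The 200 + text/html case is exactly the Web Adaptor failure above
        raise RuntimeError(f"Expected JSON, got {content_type}: {body[:120]!r}")
    result = json.loads(body)
    if "error" in result:
        # ArcGIS REST also reports application errors inside the JSON payload
        raise RuntimeError(f"ArcGIS error: {result['error']}")
    return result
```

So the caller has three independent checks: status code, declared content type, and an `error` member inside an otherwise well-formed JSON body.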
Posted 06-30-2019 09:38 PM · Kudos: 0 · Replies: 0 · Views: 422

POST
Further clarification: in the above description, when I talk about 'live/open transactions' I am referring to SQL transactions, not ESRI transactions/lock tables etc.
Posted 09-21-2018 08:41 PM · Kudos: 0 · Replies: 0 · Views: 631

POST
I have progressed this investigation further and I am now in a position where I can recreate the problem (or at least a manifestation of the problem) 100% of the time. In ArcCatalog (with a SQL Server 2016 SP2 backend):

1. Display the properties of a layer (right-click > Properties).
2. Check in SQL Server whether there are any live transactions (there should be none).
3. Establish an ESRI lock on the table being displayed (I went into ArcMap and displayed the layer).
4. Repeat the display of properties (while ArcMap is still using the layer).
5. Check in SQL Server again. In our case the ArcCatalog session will now be holding an open transaction.

The transaction remains open until you close the Properties window. Additionally, every second time you display the properties of a locked layer, the transaction stays open even after the Properties window is closed; in that case, just clicking on the whitespace within ArcCatalog closes the transaction.

We are raising an issue with ESRI, but any confirmation that you too can recreate the problem (or not) would be appreciated.
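For steps 2 and 5 ("check with SQL"), the DMV query below lists every session holding an open transaction together with its start time. The Python wrapper is just one convenient way to run it (pyodbc is an assumption — any SQL Server driver would do, and the connection string is a placeholder for your environment):

```python
# T-SQL to find sessions with an open transaction, oldest first.
OPEN_TRAN_QUERY = """
SELECT s.session_id, s.host_name, s.program_name,
       t.transaction_begin_time
FROM sys.dm_tran_session_transactions st
JOIN sys.dm_tran_active_transactions t
  ON t.transaction_id = st.transaction_id
JOIN sys.dm_exec_sessions s
  ON s.session_id = st.session_id
ORDER BY t.transaction_begin_time
"""

def open_transactions(conn_str):
    import pyodbc  # third-party driver, imported lazily
    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute(OPEN_TRAN_QUERY).fetchall()
```

Run before and after step 3 and the extra row held by the ArcCatalog session stands out immediately.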
Posted 08-20-2018 02:20 PM · Kudos: 0 · Replies: 1 · Views: 631

POST
On rare occasions ArcCatalog leaves a transaction running with no SQL activity occurring. I am looking for anyone else who may have noticed this issue and, if I am really lucky, has a solution. It appears to be associated (not yet 100% confirmed) with moving between database connections in the ArcCatalog tree view. The environment is ArcGIS 10.5.1 running on a SQL Server backend.

As discussed in http://desktop.arcgis.com/en/arcmap/latest/manage-data/geodatabases/concurrency-and-locking.htm: 'Beginning with ArcGIS 10.4, geodatabases in SQL Server must have the SQL Server database options READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION set to ON, and ArcGIS uses the READ COMMITTED isolation level for transactions.'

From a SQL perspective this causes the version history of a transaction to be kept in TEMPDB until the transaction ends. If a transaction remains open, the space in TEMPDB is not reclaimed. In addition, TEMPDB version information for any other transactions across the whole SQL instance will be retained until the original long-running transaction completes (this is not 100% true, but it is sufficient detail to explain the problem). Accordingly, it becomes important that ArcGIS transactions do not run for extended periods of time (like hours).

We are seeing the situation where, on rare occasions, a user of ArcCatalog has an active transaction running for hours when, from his perspective, all he is doing is looking at the tree view of database connections. When we go to the client PC we see an ArcCatalog screen like…

If we then right-click on 'SDIDIV_SQLP2_PRD2.sde' and then release the right-click, the problem is resolved. In a SQL trace (started just before the right-click) we see a series of SQL queries, followed by a COMMIT of a transaction with a start time of 01/08/2018 10:51 and an end time of 01/08/2018 14:37 (nearly 4 hours!).

My conclusion is that ArcCatalog has 'forgotten' to commit some processing at 10:51 (probably when the user moved between database connections). My DeLorean is in for a service at the moment, so I cannot show you the trace around 10:51. The simplest option (from my perspective) is to ask ESRI to identify their coding error and fix it but, without further documentation, I think that would be a little unrealistic. A second option is to get the user to recreate the problem and perform a trace while he is doing so (this has so far been unsuccessful). Given that we cannot identify the root cause, the next option is to mitigate the situation:

- Can we set a transaction timeout?
- Can we set an idle timeout for a transaction?

The problem seems to be isolated to one user. His usage profile is such that he often switches between database connections, whereas a 'normal' user chooses a database connection and sticks with it for extended periods of time. Any thoughts welcome.
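On the two mitigation questions: as far as I know SQL Server (2016) has no built-in per-transaction or idle-transaction timeout, so the pragmatic fallback is a monitor that flags (or kills) sessions whose transaction has been open too long. A minimal sketch of the filtering logic (my own helper, fed by whatever DMV query you use to fetch session id and transaction begin time):

```python
from datetime import datetime, timedelta

def long_running(rows, now, threshold=timedelta(hours=1)):
    """Return the session_ids whose transaction has been open >= threshold.

    rows: iterable of (session_id, transaction_begin_time) tuples.
    """
    return [sid for sid, begin in rows if now - begin >= threshold]

rows = [
    (51, datetime(2018, 8, 1, 10, 51)),  # the nearly-4-hour ArcCatalog transaction
    (64, datetime(2018, 8, 1, 14, 30)),  # a recent, healthy transaction
]
print(long_running(rows, now=datetime(2018, 8, 1, 14, 37)))  # → [51]
```

Whether you then alert an operator or KILL the session is a policy decision; killing an ArcCatalog session mid-edit has its own risks.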
Posted 08-01-2018 07:22 PM · Kudos: 1 · Replies: 3 · Views: 987

POST
I have previously published a generalised question on this subject but received only one reply. Following is a more specific question with an actual example. We are running 10.5 on SQL Server 2016.

The post https://community.esri.com/thread/112345 describes a situation where an apparently valid FGDB is copied to an EGDB and errors are detected. In particular, Vince Angelo states: 'The ArcSDE API enforces Clementini geometry integrity rules. It is not possible to construct an SE_SHAPE object which does not conform to those rules. File geodatabase and shapefile permit non-conformant rings.'

Our situation is the reverse. Take the following polygon:

POLYGON ((120.002794839062 -32.8470945869896, 120.002794613047 -32.836274112266, 119.992113708534 -32.8362742049204, 119.99211391052 -32.8470946837942, 120.002796135038 -32.8470945869779, 120.002794839062 -32.8470945869896))

1. Import it into an EGDB layer specified with ESRI defaults: it loads without error (using either GEOMETRY or SDEBINARY storage).
2. Import the same polygon into an FGDB with ESRI defaults (the same as the EGDB defaults) and run Check Geometry: it shows as 'self-intersecting'.

How can a polygon be valid in an EGDB and yet self-intersecting in an FGDB? I accept that the polygon in point 2 is 'wrong', but why did the EGDB load not fail? (Or, alternatively, why did the FGDB load detect self-intersection?)

My conclusion from this is that the ONLY way to change data reliably is to do so via an FGDB and then save to the EGDB. This is manageable at take-up time, but it will have implications going forward, in that processing taking place directly on the EGDB may appear to have worked, but when the data is sent to an FGDB for distribution it shows up as 'self-intersecting'. For clarity I have included additional information in the attachment. Please tell me my conclusion is wrong!
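Part of the puzzle is that 'self-intersecting' depends on exactly which crossing/touching cases a checker rejects and at what resolution the coordinates are compared. To make that concrete, here is a minimal pure-Python proper-crossing test for a single ring (my own sketch — it detects only strict segment crossings, not the full Clementini or ESRI rule sets, and it ignores coordinate snapping/tolerance, which is precisely where the two storage formats may diverge):

```python
def _orient(a, b, c):
    # Signed area of triangle abc: >0 left turn, <0 right turn, 0 collinear
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p1, p2, p3, p4):
    # Strict ("proper") crossing of segments p1-p2 and p3-p4
    d1, d2 = _orient(p3, p4, p1), _orient(p3, p4, p2)
    d3, d4 = _orient(p1, p2, p3), _orient(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def ring_self_intersects(ring):
    """ring: list of (x, y) with first point repeated as the last point."""
    pts = ring[:-1]
    n = len(pts)
    edges = [(pts[i], pts[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # first and last edges share a vertex, skip
            if _segments_cross(*edges[i], *edges[j]):
                return True
    return False
```

On a clean 'bowtie' ring this reports True and on a square it reports False; on the polygon above, where the last two distinct vertices differ by about 1e-6 of a degree, the answer hinges on floating-point behaviour and on whether near-touches count — which is the crux of the EGDB/FGDB disagreement.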
Posted 04-12-2018 07:01 PM · Kudos: 0 · Replies: 0 · Views: 414

POST
Joshua, I am not working in feet/metres but in degrees, using the following coordinate system:

GCS_GDA_1994
WKID: 4283 Authority: EPSG
Angular Unit: Degree (0.0174532925199433)
Prime Meridian: Greenwich (0.0)
Datum: D_GDA_1994
  Spheroid: GRS_1980
    Semimajor Axis: 6378137.0
    Semiminor Axis: 6356752.314140356
    Inverse Flattening: 298.25722210

If I am using ArcCatalog in an FGDB, I right-click > New > Feature Class, specify a name, hit Next, select 'Geocentric Datum of Australia 1994', then Next, and it prompts me for a tolerance with a default of 0.000000008983153 (if I hit 'Reset to Default' it is unchanged). I then hit Next, leave the storage configuration as default, hit Next and Finish.
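That default value does not look arbitrary: it appears to be the familiar 0.001 m XY tolerance converted to an angle on the GRS_1980 semimajor axis (my inference from the numbers, not documented behaviour I can cite). A quick check:

```python
import math

semimajor_axis_m = 6378137.0  # GRS_1980, from the coordinate system above
tolerance_m = 0.001           # the usual 1 mm default for projected data

# Arc of 1 mm at the equator, expressed in degrees
tolerance_deg = math.degrees(tolerance_m / semimajor_axis_m)
print(f"{tolerance_deg:.15f}")  # → 0.000000008983153
```

That reproduces the ArcCatalog default to every displayed digit, so the tolerance is degrees-of-arc equivalent to 1 mm on the spheroid rather than a number chosen independently.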
Posted 04-01-2018 04:16 PM · Kudos: 0 · Replies: 0 · Views: 545

POST
I have progressed further down the path, and I am documenting my findings here to perhaps help some other poor soul travelling the same road. I am not an ESRI expert; my findings are based purely on what I see. I had previously said we were dealing with polygons; to be more precise, we are dealing with polygons and multipolygons.

If you have non-ESRI data and want it in an Enterprise geodatabase (and nowhere else), then using FME with an input of SQL data, passing through an FME GeometryValidator, is a reasonable way to go. If, however, you then take the resultant Enterprise layer and move it to a File geodatabase, you may well find additional 'self-intersections'.

On the other hand, if you want to be able to use the data in both a File geodatabase and an Enterprise geodatabase, the easiest path is to use FME to take the non-ESRI data and output it directly to a File geodatabase (do not use the FME GeometryValidator, as it produces many false positives). Once it is in the File geodatabase you can use ESRI Repair Geometry to fix the self-intersections (hopefully). The resultant File geodatabase layer can then be copied to an Enterprise geodatabase with confidence (an added bonus is that it can be exported to a shapefile as well).

My overall conclusion (if someone more knowledgeable can confirm) is that the File geodatabase is more closely aligned with a shapefile than with an Enterprise geodatabase. Perhaps a File geodatabase stores its data at the same accuracy as that used for a shapefile?
Posted 03-31-2018 07:19 PM · Kudos: 0 · Replies: 0 · Views: 545

POST
Some time ago I raised a discussion titled 'ESRI and Microsoft SQL disagreeing on what is a valid shape'. The response to that gave me a far greater understanding of resolution/tolerance and the differences between the ESRI and Microsoft SQL worlds. This discussion is now looking at the ESRI Enterprise geodatabase and the ESRI File geodatabase having differing points of view on what is valid. We are running 10.5.1 using SQL Server 2016. The steps I took were:

1. Ran FME (2016) to read a SQL table containing 600K polygons and write to an ESRI Enterprise geodatabase with default resolution/tolerance (0.000000001/0.000000008983153). My understanding is that ESRI will not allow invalid (e.g. self-intersecting) polygons into the Enterprise geodatabase; this is confirmed by ArcCatalog managing to display the results.
2. Using ArcCatalog, exported to a File geodatabase, with the target also having default resolution/tolerance (0.000000001/0.000000008983153).
3. Ran the toolbox Data Management Tools > Features > Check Geometry against the File geodatabase. This identified 315 'self-intersections'.

This appears to me to be an illogical situation. How can the Enterprise geodatabase load an apparently 'self-intersecting' polygon without detecting an error? (Or is the logic in Check Geometry different to that in the Enterprise input processing?) My next step is to try loading from SQL to the File geodatabase (using FME) and to identify and resolve the 'self-intersections', but I thought it worth raising the issue with a wider community before I waste time going down a wrong path.
Posted 03-13-2018 07:28 PM · Kudos: 0 · Replies: 3 · Views: 872

POST
We are using ESRI 10.5 in an Enterprise environment on SQL Server 2016. We are seeing gigantic memory creep in the ArcSOC process related to System.CachingTools.GPServerSync. Within 15 minutes of starting it has hit 2.8 GB, and after 3 hours it has reached 17 GB (at which point our server turns up its toes). Thread and handle counts are not increasing. Our current mitigation is to recycle the service after 4 hours (I think I will be bringing this down to 1 hour). Before I open an official case with ESRI, I was wondering if anyone else has seen this behaviour and can tell me possible causes.

datetime          MemUsage(K)  HandleCount  ThreadCount         CPUTime
13/12/2017 12:15    2,831,548         1141           44   2,402,187,500
13/12/2017 12:30    4,685,008         1159           46   6,837,187,500
13/12/2017 12:45    6,801,564         1050           43  11,008,906,250
13/12/2017 13:00    8,299,000         1049           43  15,670,312,500
13/12/2017 13:15    9,691,264         1048           43  20,743,750,000
13/12/2017 13:30   10,694,804         1046           43  25,436,562,500
13/12/2017 13:45   11,255,444         1049           43  30,856,718,750
13/12/2017 14:00   12,145,916         1048           43  36,877,500,000
13/12/2017 14:15   13,335,792         1049           43  42,753,437,500
13/12/2017 14:30   14,612,320         1049           43  48,973,125,000
13/12/2017 14:45   15,484,756         1048           43  55,534,531,250
13/12/2017 15:00   16,742,188         1048           43  62,003,281,250
13/12/2017 15:15   16,830,284         1009           41  66,780,781,250
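A rough growth rate from the first and last samples in the table above (values are in KB, and the timestamps are 3 hours apart) shows why the 4-hour recycle barely holds the line, and what an hourly recycle would cap the process at:

```python
# First (12:15) and last (15:15) MemUsage samples from the table, in KB
first_kb, last_kb = 2_831_548, 16_830_284
hours = 3.0

growth_gb_per_hour = (last_kb - first_kb) / 1024 / 1024 / hours
print(round(growth_gb_per_hour, 2))  # → 4.45
```

So the process leaks roughly 4.5 GB per hour on top of its ~2.8 GB baseline; an hourly recycle would keep it near 7 GB instead of 17 GB, at the cost of more frequent restart churn.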
Posted 12-13-2017 03:11 PM · Kudos: 0 · Replies: 0 · Views: 293

POST
We have a mixed ArcGIS environment (10.2.1 and 10.5) connecting to a 10.2.1 Enterprise geodatabase on SQL Server 2016. https://support.esri.com/en/technical-article/000013039 states (in part): 'Using ArcGIS 10.4 with SQL Server databases or 10.3.1 or earlier release geodatabases, requires manually setting READ_COMMITTED_SNAPSHOT to ON in the database.'

Does this imply that you MUST set these values, or that you CAN set these values? This is important to us because the ESRI code in the map services in our 10.2 ArcGIS environment starts a transaction at startup and retains it until shutdown (24 hours later). With READ_COMMITTED_SNAPSHOT turned on, we will have to make TempDB big enough to retain 24 hours of database updates (or recycle the map services on a more regular basis). On the other side, if the answer is 'CAN set these values', then in our 10.5 ArcGIS Server we have lost whatever protection the long-running open transaction provided in 10.2, and we have not gained any protection offered by the READ_COMMITTED_SNAPSHOT change. Can anyone confirm one way or the other?

EDIT: I have been advised by ESRI support that it is mandatory. I also found another link, http://desktop.arcgis.com/en/arcmap/latest/manage-data/geodatabases/concurrency-and-locking.htm, that was more definitive: 'Beginning with ArcGIS 10.4, geodatabases in SQL Server must have the SQL Server database options READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION set to ON. When you edit a geodatabase in SQL Server in a nonversioned edit session, ArcGIS uses the READ COMMITTED isolation level for transactions.'
Posted 11-29-2017 06:27 PM · Kudos: 0 · Replies: 1 · Views: 2099

POST
If you are using ArcEditor/ArcMap at 10.5.1 and you cannot see some layers, read on.

We run multiple SQL Server 2016 instances and multiple databases within instances; our Enterprise geodatabase resides within this environment. We do not define individual users to the instance or the database; all access is defined via AD groups:

- Users are members of AD groups.
- The AD groups are defined to the instance.
- The AD groups are granted membership of SQL roles within the instance and the database.
- We use ArcCatalog to grant the SQL roles access to individual layers.

When we started migrating to ArcCatalog 10.5.1 (from 10.2.1) we noticed that some users could no longer see certain layers within ArcCatalog. We tracked it down to these users belonging to groups that were granted 'VIEW PERMISSIONS' at the instance level. Removing 'VIEW PERMISSIONS' allowed them to see the correct results in ArcCatalog. So our 'fix' was to remove 'VIEW PERMISSIONS'. However, the 'proper' fix would be to identify why granting additional access (which is what granting a permission does in a normal SQL environment) resulted in a reduction of access in ArcCatalog.

Has anyone else seen this issue? If so, what action did you take?
Posted 10-04-2017 04:46 PM · Kudos: 0 · Replies: 1 · Views: 421

POST
We are using ArcGIS Enterprise 10.2.1. I need to determine appropriate MAX_INSTANCE values for our fleet (100+) of map services. My method was:

- Set MAX_INSTANCE to 1 for all map services.
- Regularly extract the statistics provided via the REST admin service http://xxx/gis/admin/services/AppsShared/Basemap.MapServer/statistics.
- Use the total busy time to calculate the 10-minute utilization of individual map services.
- Give map services with a utilization greater than 30% a higher MAX_INSTANCE value until the utilization figure dropped below 30%.

This process worked well until we introduced map services that source their data from pre-cached tiles. For some unknown reason these map services are not accurately reported in the REST statistics (I think their startup is recorded as an invocation, but after that nothing). The ArcGIS log does not have CODE=100004 records for these map services either. My questions are:

- Is this lack of statistics by design (perhaps performance considerations)?
- My understanding is that a map service instance is still required to extract the required data for the tile. Is this correct?
- If the previous statement is correct, how do I determine an appropriate MAX_INSTANCE value?

I am experimenting with analysis of the IIS records to determine utilisation, but I would have preferred to get the data directly from the REST statistics.
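The utilization arithmetic behind the method above can be sketched as follows (a sketch of my calculation, not an ESRI API — it assumes the statistics endpoint's busy time is total busy seconds summed across a service's instances over the sampling window):

```python
def utilization(busy_seconds, window_seconds=600, max_instances=1):
    """Fraction of available instance-seconds a service spent busy."""
    return busy_seconds / (window_seconds * max_instances)

# A service busy for 240 s of a 600 s window on a single instance is 40%
# utilized, so under the 30% rule it would be given another instance;
# with 2 instances the same load drops to 20%.
print(f"{utilization(240):.0%}")                    # → 40%
print(f"{utilization(240, max_instances=2):.0%}")   # → 20%
```

The 30% threshold is a headroom choice: it leaves capacity for request bursts within the window rather than sizing to average load.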
Posted 05-15-2017 12:42 AM · Kudos: 0 · Replies: 0 · Views: 469

POST
I have a similar problem, so I thought I would take Vince's advice and see if I could isolate the baddies using ascinfo. I dumped my SQL geometry into a flat file:

2 POLYGON ((120.258097207178 -28.8259932764729, 120.266262438367 -28.8253653184617, 120.271354650676 -28.8315606240709, 120.271588304722 -28.8377220084302, 120.26499880165 -28.8385718603031, 120.258097207178 -28.8259932764729))
3 POLYGON ((120.271354650676 -28.8315606240709, 120.277827541831 -28.8300055765609, 120.282974908246 -28.8436300321902, 120.276499614822 -28.8451858541119, 120.271588304722 -28.8377220084302, 120.271354650676 -28.8315606240709))
etc.

I then set up a CTL file in various combinations, but I could not get the syntax correct. The current sticking point is how to specify that the input is polygons. My attempts so far:

..\jc.ctl: Class 'GeoArea' not found (line 10)
..\jc.ctl: Class 'GeoPolygon' not found (line 11)
..\jc.ctl: Class 'GeoPoly' not found (line 11)
..\jc.ctl: Class 'Geopoly' not found (line 11)
..\jc.ctl: Class 'GEOPOLygon' not found (line 11)
..\jc.ctl: Class 'GEOPOL' not found (line 11)

My current CTL file is:

COORDREF_XY -210,-120,1000000
COORDSYS GCS_WGS_1984
EFLAGS "np"
SKIP 1
COLUMNS
OBJECTID String - 11 N
$WKT String - 32767 N
Shape GEOPOL($WKT) - 1 Y
END

Can someone push me along a little bit? Thanks
Posted 11-07-2016 08:00 PM · Kudos: 0 · Replies: 1 · Views: 820

POST
We have 32 GB of memory so, based on what you are saying, the javaw.exe process may consume up to 8 GB of memory as long as the system is not under memory pressure. Thanks for that information (plus the pointer to the update).
Posted 10-25-2016 02:04 AM · Kudos: 0 · Replies: 0 · Views: 201

POST
We are running 10.2.2 and I thought I would share our javaw memory issues.

As you can see [chart], the javaw.exe memory usage expanded from late on 20/10 through to 24/10. This server is the second in our cluster. The only process running over the weekend was the ArcGIS cache creation process, creating 20 tiles per second. Our first ArcGIS server was running the same process, creating the same number of tiles. Its memory profile was.. [chart]

The cache file creation process began at lunchtime Thursday with 3 cache creation processes on each server; this was increased to 8 on each server at 16:00 and decreased back to 3 on Monday the 24th at 8 AM. Changing the number of processes involves stopping the cache creation process and then starting it again. As can be seen in the first chart, this stop/start had no impact on the javaw memory hogging.

The javaw process that is consuming the vast majority of the memory has a command line of:

"D:\Program Files\ArcGIS\Server\framework/runtime/jre\bin\javaw" -Dnop -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dsun.locale.formatasdefault=true -Djava.endorsed.dirs="D:\Program Files\ArcGIS\Server\framework\runtime\tomcat\endorsed" -classpath "D:\Program Files\ArcGIS\Server\framework\runtime\tomcat\bin\bootstrap.jar;D:\Program Files\ArcGIS\Server\framework\runtime\tomcat\bin\tomcat-juli.jar" -Dcatalina.base="D:\Program Files\ArcGIS\Server\framework\runtime\tomcat" -Dcatalina.home="D:\Program Files\ArcGIS\Server\framework\runtime\tomcat" -Dcatalina.log.level="OFF" org.apache.catalina.startup.Bootstrap start

Handle count on server 2 over the period... [chart] And thread count on server 2 over the period.. [chart]

We have another week of running the cache process (although I think server 2 may need a reboot before then). Does anyone have ideas on how to better track what the memory is being used for?
Posted 10-23-2016 11:59 PM · Kudos: 0 · Replies: 2 · Views: 674