POST
Dear readers, thank you for taking the time to help me out. We have set up a test ArcGIS Monitor inside our firewall; ArcGIS Monitor is very quick to point out infrastructure issues. Map and feature services were published from a SQL Server layer/spatial table with attributes (upwards of 5 million records) to 'trigger' messages in the ArcGIS Monitor software. The following error was present on the ArcGIS Server machine hosting the feature and map services:

SEVERE Apr 22, 2019, 11:15:41 AM Error: Underlying DBMS error [[Microsoft][SQL Server Native Client 11.0][SQL Server]The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.] [MyDBName.dbo.VW_LayerName_SHAPE_P_***]. cpDelMe/myServiceName.MapServer

When we consumed these services in ArcGIS Pro, it started behaving inconsistently when the feature service was selected for visibility and panning; panning and zooming produced the error messages above. ArcGIS Monitor did not record any service errors, but it pointed out that more than 85% of the processor resources were in use and only 1 GB was available at the server's disposal. There were many instances of the above error whenever a server request went to the database.

How do we get these messages to show up in ArcGIS Monitor? We could not locate the config documents to pipe this information to ArcGIS Monitor. Thanks for your help. regards, Ravi Kaushika.
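Until the Monitor configuration question is answered, one stopgap (a sketch, not an Esri-supported path — the log directory is an assumption; adjust it to your site's arcgisserver logs location) is to scan the server's log files for this DBMS error and alert on any hits:

```python
import re
from pathlib import Path

# Assumed log location; point this at your site's arcgisserver\logs directory.
LOG_DIR = Path(r"C:\arcgisserver\logs")
PATTERN = re.compile(r"Underlying DBMS error.*cannot obtain a LOCK resource")

def find_lock_errors(log_text):
    """Return the log lines that contain the SQL Server lock-resource error."""
    return [line for line in log_text.splitlines() if PATTERN.search(line)]

if LOG_DIR.exists():
    for log_file in LOG_DIR.rglob("*.log"):
        for line in find_lock_errors(log_file.read_text(errors="ignore")):
            print(f"{log_file.name}: {line}")
```

A scheduled task could run this every few minutes and email or post the matches wherever your monitoring expects them.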
04-22-2019 02:08 PM | 0 | 6 | 943

POST
Cody Benkelman, good morning. Thank you for your pointed response. Between your response and Peter Becker's response, we should have a good starting point. We might add fewer than 10 attribute fields; thanks for clarifying that adding attribute fields would NOT affect performance. We don't plan to cache this service, so it is good to know that serving the derived mosaic dataset as a dynamic image service (with additional attribute fields) eliminates the need for a separate metadata service. Thanks for your help. regards, Ravi Kaushika
03-29-2019 09:42 AM | 0 | 0 | 316

POST
Peter Becker, good morning. Thank you for your pointed response. I will be sharing it with my colleagues for additional discussion. To answer some of your questions:

Derived dataset: we decided to merge batches 1, 2, and 3 into a single 2018 image service for faster display at zoomed-out (national extent) small scales. When seen in the ArcGIS map viewer, we would see only 2 records in the attribute grid. We decided to serve metadata from a footprint export to a file geodatabase feature class (to modify the attribute structure) and onwards to an SDE feature class. Until recently, there were separate 'analog' and digital collections; they were merged and served by year. Metadata for all the years was served as a separate map service. In 2018, metadata was presented as both map and feature services.

We are NOT doing on-the-fly orthorectification; that is one less problem for us. Vendors will be providing us with rectified imagery. Thanks for mentioning that users need pop-ups configured to see Bozeman GIS-style services, for the clarifications about standardized attributes, and for mentioning that there is no upper limit to the size of a mosaic dataset.

On a side note, I am an ArcObjects (VB macro and .NET), Web ADF, Server ADF, Silverlight, JS, and web app hard-core vector person. Apologies if I exhibited my raster ignorance with my questions. Thanks for the pointers; I will follow up on them. thanks and regards, Ravi Kaushika.
03-29-2019 09:32 AM | 0 | 0 | 316

POST
Dear readers, thank you for taking the time to help me. Until recently we had analog imagery captured, geo-rectified, and served as a map service on ArcGIS Server. For the last couple of years, we have started using an image service for the images, with a separate metadata map service based on what the end users wanted to see about each image. Before a collection begins, a vector file (shapefile or file geodatabase) is given to vendors to collect imagery. Imagery will be collected digitally moving forward, with all fiducial information. Among the attributes the users currently care about are the actual date of collection, batch (FY2017, FY2018, etc.), quad name (or DOQQ), and the contract area's unique ID.

To get an idea about these, I was closely looking at the Multi-year Imagery service by Bozeman GIS: http://www.arcgis.com/home/item.html?id=b57b656dc5824edcbb40b02f7acad893 When viewed in ArcGIS Pro, that service shows the attribute table created with the mosaic dataset: along with the 'default' columns from the software, there were collection date and other user-defined columns. These columns were not present when viewed in the ArcGIS Map Viewer.

Imagery collected by our agency might be over 10 TB as TIFF and close to 4 to 6 TB as MRF. There are an estimated 10,000 to 12,000 raw picture files or contract areas that need to be studied. This is an annual effort for different parts of the country, as determined by business and legal needs.

Questions:

1. Should we create 1 mosaic dataset per year, or break one year's image files into many state or regional mosaic datasets? If we break the image collection up by state, can we still get a single national mosaic dataset to serve as a service? Conversely, what is the upper limit on mosaic dataset size?

2. As indicated before, the end users might be interested in seeing certain attributes of the collected imagery. Is it a desirable practice to add 2 or more columns, as done by Bozeman GIS in the above service? Or should we create a metadata map service with the envelopes/footprints of the images (mosaic dataset --> SDE --> ArcGIS map document --> map service)? An approximate maximum of 12,000 vector records with a few attributes per year would be created, so this layer/service is not worrisome to us.

3. Does Esri recommend 1 mosaic dataset (with or without metadata columns) per year, or a single mosaic dataset for many years, updating the dataset with a year tag for each year's ~5 TB of MRF files?

If people consume the above data in an application and the application goes to production, there has to be a conscious effort/time spent to ensure the data is consistent with previous years' 'style'; that should be planned up front. We are thinking out loud and trying to adopt industry best practices as we move forward, for increased efficiency. Thanks for your time again. regards, Ravi Kaushika.
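For what it's worth, a quick back-of-the-envelope on the numbers above (the 12,000 items and ~5 TB MRF per year come from the post; the 50-state split is a hypothetical to size option 1):

```python
# Rough sizing for the mosaic dataset options discussed above.
ITEMS_PER_YEAR = 12_000   # upper estimate of rasters/contract areas per year
MRF_TB_PER_YEAR = 5.0     # midpoint of the 4-6 TB MRF estimate
STATES = 50               # hypothetical per-state split

avg_item_gb = MRF_TB_PER_YEAR * 1024 / ITEMS_PER_YEAR
items_per_state = ITEMS_PER_YEAR / STATES

print(f"average MRF size per item: {avg_item_gb:.2f} GB")
print(f"average items per state dataset: {items_per_state:.0f}")
```

At a few hundred items per state dataset, either layout is far below any practical mosaic dataset item limit; the decision is more about management overhead than capacity.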
03-28-2019 03:42 PM | 0 | 4 | 427

POST
Andrew, good afternoon. My previous reply was lost. First, thanks for the offer of help. I don't have login privileges on the production servers, but I can ask them for a zipped version and will try. Based on Jonathan Quinn's suggestion and Esri support staff, we are planning to request a Windows job to run every few hours and delete unwanted files. thanks and regards, ravi.
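The scheduled cleanup could be as simple as the sketch below (the scratch path and the 6-hour cutoff are assumptions; a Windows Task Scheduler job would invoke it every few hours):

```python
import time
from pathlib import Path

# Assumed scratch location and retention window; adjust for your site.
SCRATCH_DIR = Path(r"C:\Temp\geoprocess")
MAX_AGE_SECONDS = 6 * 3600

def delete_old_files(root, max_age_seconds, now=None):
    """Delete files under root older than max_age_seconds; return the deleted paths."""
    now = time.time() if now is None else now
    deleted = []
    for path in Path(root).rglob("*"):
        if path.is_file() and now - path.stat().st_mtime > max_age_seconds:
            path.unlink()
            deleted.append(path)
    return deleted

if SCRATCH_DIR.exists():
    for path in delete_old_files(SCRATCH_DIR, MAX_AGE_SECONDS):
        print(f"deleted {path}")
```

Emptied job subfolders could be pruned in a second pass; deleting only files keeps the sketch safe against racing with an in-progress GP job's directory creation.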
06-08-2018 02:55 PM | 0 | 0 | 370

POST
Thanks, Jonathan Quinn, for your help. We are planning to request a Windows job to delete the files every few hours. thanks and regards, ravi.
06-08-2018 02:50 PM | 0 | 0 | 370

POST
Thanks for the kind words. One of the things that helped reduce overview errors was using 'Exclude Duplicates' while adding raster imagery to the mosaic dataset. Usually I see a 'Removed 0 duplicate mosaic dataset items' line in the log output of the 'Add Rasters to Mosaic Dataset' operation.
05-31-2018 08:52 AM | 0 | 0 | 1206

POST
Good afternoon. Sorry for the delay in responding; I was taking care of higher-priority projects before I could get back to this. Updates:

1. One of our main servers crashed, and that was causing problems during the process of building overviews; we waited for a new server to be ready.

2. On a 10.5.1 machine, we created a file geodatabase, added rasters, and defined and built overviews. Since Esri had not suggested an upper limit, we tried to build overviews on all 7.4 TB of 1 m lidar MRF files.

3. Building the overviews kept failing, resulting in a lot of gaps. To make it easier, we decided to add one raster folder at a time (approx. 500 GB) and build overviews. That has been fairly stable.

4. Once the overviews are built, we are planning to make the data available as an image service. Things have been moving slowly in the right direction. thanks and regards, ravi.
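The one-folder-at-a-time approach in update 3 generalizes to packing source folders into batches under a size cap and building overviews per batch. A sketch (the folder names and sizes are made up, and the arcpy calls that would process each batch are only indicated in comments, since they need an ArcGIS Python environment):

```python
def batch_by_size(folders, cap_gb):
    """Greedily pack (name, size_gb) pairs into batches whose total stays under cap_gb."""
    batches, current, current_size = [], [], 0.0
    for name, size_gb in folders:
        if current and current_size + size_gb > cap_gb:
            batches.append(current)
            current, current_size = [], 0.0
        current.append(name)
        current_size += size_gb
    if current:
        batches.append(current)
    return batches

# Hypothetical folder inventory: (name, size in GB).
folders = [("lidar_A", 300), ("lidar_B", 150), ("lidar_C", 400), ("lidar_D", 90)]
for batch in batch_by_size(folders, cap_gb=500):
    # For each batch you would run arcpy.management.AddRastersToMosaicDataset(...)
    # and then build overviews for that batch -- omitted here (requires arcpy).
    print(batch)
```

Batching this way keeps each overview build near the ~500 GB size that proved stable, without hand-picking folders.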
05-30-2018 02:53 PM | 1 | 2 | 1206

POST
Joris and Andrew, I went through the app when no one was using it, watched the network debug in the browser, and saw rows like these:

'C:\\Users\\S_SPRD~\\AppData\\Local\\Temp\\geoprocess\\dataextract_gpserver\\j4a463fb27ba448a2a5907f7f571c4c42\\scratch\\15SVA095230_286969_sa6.tif',
'C:\\Users\\S_SPRD~\\AppData\\Local\\Temp\\geoprocess\\dataextract_gpserver\\j4a463fb27ba448a2a5907f7f571c4c42\\scratch\\15SVA095245_286970_sa6.tif',
'C:\\Users\\S_SPRD~\\AppData\\Local\\Temp\\geoprocess\\dataextract_gpserver\\j4a463fb27ba448a2a5907f7f571c4c42\\scratch\\15SVA095260_286971_sa6.tif',

and many more such rows. The prod support staff member told me that the processes were running as a user that had access to \\networkShare\folders\. As per the previous posting, the Python script sets a scratch folder: self._ws = env.scratchFolder. I opened PyScripter and tried to change arcpy.geoprocessing.env.scratchFolder, to reset it or point it at another folder, but it would not let me:

>>> arcpy.geoprocessing.env.scratchFolder = "D:\MyDownload"
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
  File "D:\Program Files\ArcGIS\Server\ArcPy\arcpy\geoprocessing\_base.py", line 541, in set_
    self[env] = val
  File "D:\Program Files\ArcGIS\Server\ArcPy\arcpy\geoprocessing\_base.py", line 601, in __setitem__
    ret_ = setattr(self._gp, item, value)
AttributeError: Object: Environment <scratchFolder> cannot be set

thanks and regards, ravi.
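The traceback is consistent with scratchFolder being a derived, read-only environment: per the arcpy documentation, scratchFolder is computed from scratchWorkspace, so the writable knob is scratchWorkspace. A guarded sketch (arcpy only exists inside ArcGIS Python installs, and the D:\Scratch path is a placeholder):

```python
def describe_scratch():
    """Show how scratchFolder follows scratchWorkspace (arcpy is read-only on scratchFolder)."""
    try:
        import arcpy
    except ImportError:
        return "arcpy not available outside an ArcGIS Python environment"
    # scratchFolder cannot be assigned directly (hence the AttributeError above);
    # set scratchWorkspace instead, and scratchFolder is derived from it.
    arcpy.env.scratchWorkspace = r"D:\Scratch"   # placeholder path
    return arcpy.env.scratchFolder

print(describe_scratch())
```

Note that for a published GP service, the effective scratch location is governed by the server's jobs directory rather than an interactive environment setting, so changing it in PyScripter would not alter the service's behavior anyway.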
05-30-2018 12:41 PM | 0 | 5 | 1036

POST
Further to the Python environment settings, there is a statement in the Python code: self._ws = env.scratchFolder (line 159 of elevation_async.py). I didn't know where to check for the Python environment being set; I searched the Python27 folder but could not make headway. Hope it helps. regards, ravi
05-24-2018 03:22 PM | 0 | 0 | 1036

POST
About the upgrade process: here is the answer from the support staff member: "This was an In place upgrade 10.4 to 10.5.1, no new machines. Service account was not changed. Service directories were not changed and are all still pointing to the file share."
05-24-2018 11:28 AM | 0 | 0 | 1036

POST
Thanks, Joris. I will look at the link and do the needful. Thanks for your time. regards, ravi.
05-24-2018 10:00 AM | 0 | 0 | 1036

POST
Thanks, Andrew, for the offer of help. I have attached the PY files; they were written in mid-to-late 2014, if I am not wrong. Thank you for the help. regards, ravi.
05-24-2018 09:59 AM | 0 | 0 | 1036

POST
Andrew Valenski, good afternoon. The staff member mentioned to me that all 'directories' are pointing to \\MyNetworkShare\folders, with no trace of c:\temp anywhere. I asked the staff member to check whether the 'user' running the GP service has write permission to the folder(s). I will update the thread as I make more findings. regards, Ravi.
05-23-2018 11:04 AM | 0 | 3 | 1036

POST
Dear readers, good afternoon. Our production team migrated the ArcGIS Server instance to 10.5.1. After the upgrade, we observed that c:\temp became the 'scratch' or working directory for all the services and GP functions. As a result, the c:\ drive was filling up quickly and services/apps were throwing errors. On the other hand, we verified that the site and all services in the load-balanced, clustered DEV instance are set to:

cache: \\myShare_das\GeoData_FS\directories\arcgiscahe
jobs: \\myShare_das\GeoData_FS\directories\arcgisjobs
output: \\myShare_das\GeoData_FS\directories\arcgisoutput
system: \\myShare_das\GeoData_FS\directories\arcgissystem
cache (tiles): \\myCacheServer\arcgiscache

These values are consistent at the site and individual service level. Since we don't have direct access to the prod servers, we are not able to check this ourselves; we have placed a request for the prod support team to look into it. Are there any other settings that could be causing the working folders to be set to c:\temp? The GP service, created using Python (10.3?), was working well before the upgrade. We are working closely with the prod support team and wanted to gather more ideas before requesting changes (all in one shot if possible). regards, ravi.
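One way to confirm what the production site is actually using, without desktop access, is the ArcGIS Server Admin API's system/directories resource (an admin token is required; the URL handling below is a sketch, and the sample response is made up to show the shape of the check):

```python
import json
from urllib import parse, request

def local_directories(directories_json):
    """Return server directory paths that sit on a local drive rather than a UNC share."""
    return [
        d["physicalPath"]
        for d in directories_json.get("directories", [])
        if not d["physicalPath"].startswith("\\\\")   # UNC paths start with \\
    ]

def fetch_directories(server_admin_url, token):
    """Query the Admin API for the site's server directories (requires an admin token)."""
    url = f"{server_admin_url}/system/directories?f=json&token={parse.quote(token)}"
    with request.urlopen(url) as resp:
        return json.loads(resp.read())

# Made-up response illustrating the check:
sample = {"directories": [
    {"name": "arcgisjobs", "physicalPath": r"\\myShare_das\GeoData_FS\directories\arcgisjobs"},
    {"name": "arcgisoutput", "physicalPath": r"C:\temp\arcgisoutput"},
]}
print(local_directories(sample))
```

Running fetch_directories against the prod site's admin URL and passing the result to local_directories would flag any directory that quietly reverted to a local path during the upgrade.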
05-21-2018 01:57 PM | 0 | 14 | 1784
Title | Kudos | Posted
---|---|---
 | 1 | 09-13-2013 07:52 AM
 | 1 | 09-10-2013 02:15 PM
 | 1 | 03-20-2023 06:38 AM
 | 1 | 05-17-2022 07:41 AM
 | 1 | 01-04-2021 10:08 AM
Online Status | Offline
Date Last Visited | 12-08-2023 07:21 PM