POST
I have a feature service I am trying to enable for offline editing in Collector. The feature classes in the service are unversioned, and each has archiving enabled. In ArcMap, with the feature service loaded, when I right-click the service in the ToC and click Create Local Copy for Editing, the progress bar runs for several minutes and then it reports: "A local copy could not be created. The specified feature dataset extension type was not found." What does this mean, and how do I correct it? The user needs offline editing enabled, so I would appreciate any help. Thanks, Justin

ArcGIS for Server 10.5.1
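For anyone checking the same prerequisites, here is a minimal arcpy sketch I would use to confirm that each feature class behind the service is unversioned with archiving enabled; the connection-file path is a hypothetical placeholder, and I'm assuming the isVersioned/isArchived Describe properties behave the same at this release:

import arcpy

# Hypothetical connection file for the source enterprise geodatabase.
sde = r"C:\connections\production.sde"
arcpy.env.workspace = sde

# Offline sync on non-versioned data expects unversioned feature classes
# with archiving enabled, so print both flags for every feature class.
for fc in arcpy.ListFeatureClasses():
    desc = arcpy.Describe(fc)
    print("{0} versioned: {1} archived: {2}".format(fc, desc.isVersioned, desc.isArchived))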
11-06-2018 10:07 AM | 0 | 14 | 5615
POST
Thank you, Luke, for this. It works great, but for anyone who gets an error on the last line, try changing it to:

arcpy.Append_management(inData, target_layer, "NO_TEST", fieldmappings)
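In case the surrounding setup helps anyone, a minimal sketch of how the field mappings feed into that call (the input and target paths here are hypothetical placeholders):

import arcpy

# Hypothetical paths; substitute the real input data and target feature class.
inData = r"C:\data\source.gdb\incoming_features"
target_layer = r"C:\data\target.gdb\existing_features"

# Build field mappings from the target schema, then add the input so that
# matching fields line up even when the two schemas are not identical.
fieldmappings = arcpy.FieldMappings()
fieldmappings.addTable(target_layer)
fieldmappings.addTable(inData)

# NO_TEST skips strict schema checking and relies on the field mappings above.
arcpy.Append_management(inData, target_layer, "NO_TEST", fieldmappings)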
12-19-2017 11:01 AM | 1 | 0 | 11664
POST
I put in a request for a backup restore this morning before I saw this. The renaming, etc. happened in late December, so I'm hopeful the backups go back that far. Thank you for confirming my action on this.
04-26-2017 12:26 PM | 0 | 2 | 582
POST
A colleague made many edits in a version of the parcel feature class a year ago before retiring. Since her departure the version has been left untouched, but the feature class was moved out of the feature dataset and renamed, and its place in the dataset was taken by a feature class imported from an XML Workspace Document from another server. Today we tried viewing her version but could see no difference between it and DEFAULT, which must be because the feature class now in the dataset is not the one she worked on. Yet when we load the renamed copy of the original into ArcMap and switch to her version, we still see no difference. I suspect that copying a feature class leaves its versioned edits behind, so we fear we have lost her work. What, if anything, can be done to see the versioned edits? Is a database backup from that far back the only option? I'm looking for ideas. Thanks, Justin

version geodatabase
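In case someone wants to check this directly, here is a rough arcpy sketch of how I might confirm whether her version still differs from DEFAULT for the renamed feature class; the connection file, feature class, and version names are hypothetical placeholders, and a row-count comparison only catches inserts and deletes, not attribute edits:

import arcpy

# Hypothetical connection file and names; substitute the real ones.
sde = r"C:\connections\bcwa.sde"
fc = "bcwa.GIS_ADMIN.parcels_1"  # the renamed copy of the original

# Confirm her version still exists and note its parent.
for v in arcpy.da.ListVersions(sde):
    print("{0} (parent: {1})".format(v.name, v.parentVersionName))

# Compare row counts between DEFAULT and her version; a difference would
# mean versioned edits are still attached to this feature class.
arcpy.MakeFeatureLayer_management(sde + "\\" + fc, "parcels_lyr")
print("DEFAULT count: " + arcpy.GetCount_management("parcels_lyr").getOutput(0))
arcpy.ChangeVersion_management("parcels_lyr", "TRANSACTIONAL", "GIS_ADMIN.ColleagueEdits")
print("Version count: " + arcpy.GetCount_management("parcels_lyr").getOutput(0))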
04-26-2017 08:25 AM | 0 | 4 | 827
POST
I have a script I'm developing to automate changing the data source from parcels_1 back to the original feature class parcels, the source I had manually changed away from several weeks ago. Doing it manually was most tedious, and it was done for several layers in multiple MXDs, hence my desire to automate it.

The MXD I am using for testing has two layers with the same feature class as their data source, but each has a different definition query to show just the relevant subtypes:

Layer Parcels has definition query "SUBTYPE" < 2 OR "SUBTYPE" = 8
Layer RoadROW has definition query "SUBTYPE" BETWEEN 2 AND 5

Both layers are drawn with Unique Value symbology, and in the output MXD everything draws correctly symbolized from the successfully updated data source. The trouble is that in Layer Properties only RoadROW has a Symbology tab showing what it did before; the Symbology tab for Parcels shows just Single Symbol and entirely lacks Categories as an option to choose.

I do not understand why the two layers behave so differently. Perhaps it's because Parcels has two joins and one relate, while RoadROW has neither? The joins come through properly, but the relate does not. For RoadROW, lyr.symbology.valueField was bcwa.GIS_ADMIN.parcels_1.SUBTYPE before being changed; for Parcels, lyr.symbology.valueField was already SUBTYPE before being changed. I don't know why the two layers differ in that regard.

As a workaround, in my script I have tried:

if "parcels_1" in lyr.symbology.valueField:
    lyr.symbology.valueField = 'SUBTYPE'

but when running it I get ValueError: SUBTYPE. Maybe it doesn't matter, because lyr.symbology.valueField is already SUBTYPE? I hesitate to leave the Parcels layer in the state it is in, but ought I even be worried? Thanks, Justin
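For what it's worth, here is the rough shape of the guarded version I'm experimenting with; the MXD and connection paths are hypothetical placeholders, and checking lyr.symbologyType before touching valueField is my assumption about how to sidestep the ValueError:

import arcpy

# Hypothetical test MXD; the real script loops over several MXDs.
mxd = arcpy.mapping.MapDocument(r"C:\maps\test.mxd")

for lyr in arcpy.mapping.ListLayers(mxd):
    if not lyr.supports("DATASOURCE"):
        continue
    # Repoint layers still referencing parcels_1 back to the original parcels.
    if "parcels_1" in lyr.dataSource:
        lyr.replaceDataSource(r"C:\connections\bcwa.sde", "SDE_WORKSPACE",
                              "bcwa.GIS_ADMIN.parcels")
    # Only layers reporting UNIQUE_VALUES symbology expose valueField, so
    # skip the others instead of tripping the ValueError.
    if lyr.supports("SYMBOLOGY") and lyr.symbologyType == "UNIQUE_VALUES":
        if lyr.symbology.valueField != "SUBTYPE":
            lyr.symbology.valueField = "SUBTYPE"

mxd.saveACopy(r"C:\maps\test_updated.mxd")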
01-09-2017 10:57 AM | 0 | 0 | 830
POST
I had left it only a second or two before starting it again. So I've stopped it once more, this time waiting more than 30 seconds, but with all services back up, the job still reports as being in progress (in Manager Cache Status).
08-15-2016 12:04 PM | 0 | 0 | 1383
POST
Yes, the service was already set for manual cache creation. I did restart the Windows ArcGIS Server service, but neglected to cancel caching before doing so. I clicked Cancel Job in the Job Details page, but after 10 minutes it still says esriJobCancelling, so I don't know what more to do to make it actually stop so that I may restart it.
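In case it's useful later, this is roughly how I've been polling the job from Python; the job URL and token are placeholders standing in for the values on the Job Details page, and I'm assuming the standard geoprocessing job status/cancel REST operations apply to the caching job:

import requests

# Placeholder job URL and token; copy the real ones from the Job Details page.
job_url = ("https://gisserver.example.com/arcgis/rest/services/System/"
           "CachingTools/GPServer/ManageMapCacheTiles/jobs/<jobid>")
params = {"f": "json", "token": "<token>"}

# Check the current status (expecting esriJobCancelling, esriJobCancelled, ...).
print(requests.get(job_url, params=params).json().get("jobStatus"))

# Ask the server again to cancel the job.
print(requests.post(job_url + "/cancel", params=params).json().get("jobStatus"))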
08-15-2016 11:43 AM | 0 | 2 | 1383
POST
Okay, thank you. I'll try restarting as you described. I realized just moments ago the mistake of turning off the dynamic image service (as I said I did), because the cached service needs the dynamic one as its source, and the AGS logs screamed at me about the source service not being found. Silly me. I looked at status.gdb last week; that's how I knew which areas remained uncached, and I used that to create polygons to serve as the small update extents.
08-15-2016 11:13 AM | 0 | 0 | 1383
POST
I've launched the MMSCT again, after stopping the dynamic image service, and CPU usage ranges between 20% and 65%, but the server has active map users. I've still not restarted any services.
08-15-2016 11:06 AM | 0 | 1 | 1383
POST
Thank you, Jennifer. I didn't notice the CPU usage before cancelling the job shortly after my post. How can I determine what other process might be holding on to the instance, which I assume you mean to be the image service? Perhaps it is that I publish the same service dynamically to handle the larger, non-cached scales. Should I stop that service to see if that affects things? To restart the ArcGIS Server service in Windows, do you mean in Manager or in Windows Services?
08-15-2016 10:59 AM | 0 | 7 | 1383
POST
I have an image service I've successfully cached previously. It had four empty scales (1:50, 1:100, 1:250, and 1:500) that I created so the Geocortex viewer would permit zooming closer than the cached levels. I have since found a need to populate the cache at 1:500, and was able to get perhaps a third of the service extent done. The table below shows the Cache Status where it is stuck, despite Level 17 currently being underway for roughly the dozenth attempt.

Level | Scale | Size | Expected Tiles | Completed Tiles | Percent | In Progress
0 | 1,000,000 | 0.18 MB | 6 | 6 | 100 |
1 | 500,000 | 0.61 MB | 18 | 18 | 100 |
2 | 250,000 | 2 MB | 55 | 55 | 100 |
3 | 125,000 | 4.36 MB | 210 | 180 | 85.71 |
4 | 100,000 | 6.76 MB | 312 | 275 | 88.14 |
5 | 64,000 | 9.87 MB | 760 | 624 | 82.11 |
6 | 50,000 | 13.92 MB | 1,200 | 960 | 80 |
7 | 32,000 | 27.75 MB | 2,886 | 2,356 | 81.64 |
8 | 25,000 | 39.45 MB | 4,653 | 3,744 | 80.46 |
9 | 16,000 | 65.76 MB | 11,242 | 9,150 | 81.39 |
10 | 10,000 | 162.53 MB | 28,405 | 23,183 | 81.62 |
11 | 8,000 | 228.27 MB | 44,352 | 36,058 | 81.3 |
12 | 5,000 | 471.78 MB | 113,390 | 91,868 | 81.02 |
13 | 4,000 | 742.54 MB | 177,408 | 143,395 | 80.83 |
14 | 2,500 | 1.91 GB | 453,560 | 366,520 | 80.81 |
15 | 2,000 | 2.79 GB | 708,400 | 572,390 | 80.8 |
16 | 1,000 | 11.01 GB | 2,832,450 | 2,285,258 | 80.68 |
17 | 500 | 4.47 GB | 11,327,500 | 2,583,061 | 22.8 |
18 | 250 | 0 MB | 45,310,000 | 0 | 0 |
19 | 100 | 0 MB | 283,126,752 | 0 | 0 |
20 | 50 | 0 MB | 1,132,434,765 | 0 | 0 |

The trouble is the tile count never changes, despite my running Manage Map Server Cache Tiles with each attempt using an Update Extent shapefile covering a progressively smaller fraction of the area still needed. After three hours on an extent of just 30 sq. miles, the job details URL reports 0% completion on each line, with an Estimated Time Remaining that grows larger and larger before dropping back and starting to grow again.

The Logs in AGS Manager report what I suspect may be the reason no tiles are created:

SEVERE | Aug 15, 2016, 1:21:23 PM | Error exporting image Wait time of the request to the service 'public/swoop2015_mosaic.ImageServer' has expired. | Rest
SEVERE | Aug 15, 2016, 1:21:23 PM | Unable to process request. No instances for 'public/swoop2015_mosaic.ImageServer' were available for 60.006 seconds. Wait timeout exceeded. | public/swoop2015_mosaic.ImageServer
SEVERE | Aug 15, 2016, 1:20:41 PM | Error exporting image Error handling service request: Processing request took longer than the usage timeout for service 'public/swoop2015_mosaic.ImageServer'. | Rest
SEVERE | Aug 15, 2016, 1:20:38 PM | Processing request took longer than the usage timeout for service 'public/swoop2015_mosaic.ImageServer'. Server request timed out. Check that the usage timeout is appropriately configured for such requests. | public/swoop2015_mosaic.ImageServer

I don't know what to try next. I have already restarted the image service with no change in behaviour. Short of contacting Esri tech support, what should I try to correct this issue? Thanks, Justin
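For reference, this is roughly how I've been kicking off each attempt from Python rather than from the tool dialog; the server connection path and area-of-interest feature class are hypothetical placeholders for my real ones:

import arcpy

# Placeholder paths; the real run points at my AGS admin connection and an
# area-of-interest feature class built from the uncached areas in status.gdb.
service = r"GIS Servers\arcgis on gisserver.example.com (admin)\public\swoop2015_mosaic.ImageServer"
aoi = r"C:\caching\update_extents.gdb\remaining_area"

# Recreate only the empty tiles at the 1:500 level, restricted to the
# area-of-interest polygons, and wait for the job so errors surface here.
arcpy.ManageMapServerCacheTiles_server(service, [500], "RECREATE_EMPTY_TILES",
                                       3, aoi, "", "WAIT")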
08-15-2016 10:39 AM | 0 | 9 | 2825
POST
Further to my "it would explain nothing", I discovered later the same day the source of my trouble. The "BC" geodatabase runs on SQL Server 2008, and I learned that ArcGIS 10.4 drops support for that DBMS. ("BCWA", by the way, runs on SQL Server 2012.) On a colleague's machine running 10.3.1, the SDE_compress_log in "BC" is visible; the 911 address table is also visible on that machine, but not on mine. Short of upgrading the DBMS, the workaround is to install the SQL Server 2012 client on the DBMS machine.
04-11-2016 05:46 AM | 0 | 0 | 1154
POST
In case it helps, another thought has just come to mind: I upgraded to Desktop 10.4 on March 3, i.e. the day before the next scheduled run of the Python script. Could this have impacted things? From my perspective it would explain nothing about why "BCWA" still compresses and "BC" does not.
04-08-2016 09:05 AM | 0 | 1 | 1154
POST
I maintain in SQL Server an ArcSDE geodatabase named "BC" that synchronizes twice weekly via a Python script (attached) to a remote geodatabase named "BCWA". The script has been running successfully for several years, which I know because it writes to a CSV file the pre- and post-compress state and lineage counts and how long the compress took to complete, and I monitor the CSV in Excel.

Excel shows there's no trouble on "BCWA", but the last time I could say the same for "BC" was February 26, when the state count went from 67 before compress down to 15 afterward, the lineage count likewise went from 163 to 51, and the compress took just over 15 minutes. Since that date, every run of the Python script shows a pre- and post-compress state count of 0, a blank lineage count, and a blank compress time. That made me think the compress task was being skipped, but when I ran compress manually to troubleshoot, it completed within 60 seconds and reported no error. The number of rows in SDE_states and SDE_state_lineages, however, did not change.

Today in SSMS I viewed the SDE_states and SDE_state_lineages tables in both "BC" and "BCWA" and noted that the record count of SDE_states in "BCWA" matches the end_state_count shown in SDE_compress_log, just as I expected. On "BC", though, while I could view SDE_states and SDE_state_lineages, I found there is no SDE_compress_log to compare them to! Ack!

Both "BC" and "BCWA" are 10.3.1 geodatabases, but Database Properties for "BCWA" says "database internals such as stored procedures can be upgraded," so the Upgrade Geodatabase button is enabled. Could it be that whenever it was that I clicked that button for "BC", one of the upgraded database internals caused the removal of the SDE_compress_log table? I don't believe I pressed the button for "BC" after February 26, so this idea may be irrelevant.

Will someone please suggest what might be causing this problem and how I might fix it? I really need to get compress working again on the "BC" geodatabase. Thanks, Justin
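For context, the relevant part of the script looks roughly like this; the connection-file and CSV paths are placeholders, and the state count comes from a direct query against SDE_states (named as it appears in SSMS) via arcpy.ArcSDESQLExecute:

import csv
import datetime
import arcpy

# Placeholder connection file for the "BC" geodatabase.
sde = r"C:\connections\BC.sde"

def state_count(conn_file):
    # Count rows in SDE_states with a direct SQL query.
    sql = arcpy.ArcSDESQLExecute(conn_file)
    return sql.execute("SELECT COUNT(*) FROM SDE_states")

pre = state_count(sde)
start = datetime.datetime.now()
arcpy.Compress_management(sde)
elapsed = datetime.datetime.now() - start
post = state_count(sde)

# Append one row per run so the history can be reviewed in Excel.
with open(r"C:\logs\compress_log.csv", "ab") as f:
    csv.writer(f).writerow([datetime.date.today(), pre, post, str(elapsed)])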
04-08-2016 08:05 AM | 0 | 3 | 3641