POST
They could query the versioned views, but for the requirement being fulfilled a weekly compression to state 0 is sufficient and simpler; it also keeps our SDE running smoothly. What I need to figure out is why these unreferenced states pop up and/or how to get them to compress. I have successfully gotten to state 0 with this method before, and from what I've read in the ESRI docs and forum posts it should work without fail each time...so I'm obviously missing something here. Assuming it's not our services (and I'm 99% sure it's not; the data owner for all our datasets is a different user schema) and it's not a lock, where are these states coming from? Thank you for the code, I'll look it over when I have some spare time.
10-04-2021 11:11 AM | Kudos: 0 | Replies: 1 | Views: 4560

POST
@George_Thompson wrote: "You are correct that locks may prevent a 'full' compress to state 0. Is the big thing that you are looking for getting all the edits flushed out of the delta tables?" - Yes, we want everything in the base tables after this script is run; it's part of our weekly process. Otherwise anyone trying to query the data at the database tier is in for a headache, and we don't want to be messing with querying the delta tables to figure out what is what. "If you have any services pointed to the data, it will put a lock on the tables, even if not editing." I don't think that's exactly true, but feel free to correct me if I'm wrong. Defining "service" as something running on ArcGIS Server (not to be confused with a Windows service), it depends on the user who created the service, does it not? To my knowledge we have no services using our admin/sde user; we use a separate schema specifically for map services. Do you know how I can track down the origin of the states?
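In case it helps anyone else reading, this is roughly how I've been checking who is connected and who holds locks. A minimal sketch against what I understand to be the standard SDE repository tables in an Oracle geodatabase; the table and column names should be verified against your own SDE schema, and the credentials are placeholders:

import cx_Oracle

# Minimal sketch: list current sessions and any table locks they hold.
# Assumes the standard SDE repository tables (process_information,
# table_locks, table_registry) in an Oracle geodatabase; verify the
# table/column names against your own instance before relying on this.
conn = cx_Oracle.connect('sde/password@mydb')  # placeholder credentials
cur = conn.cursor()

# Who is connected right now?
cur.execute("SELECT sde_id, owner, nodename, start_time FROM sde.process_information")
for sde_id, owner, nodename, start_time in cur:
    print('session {}: {} on {} since {}'.format(sde_id, owner, nodename, start_time))

# Which sessions hold table locks, and on which registered tables?
cur.execute("""
    SELECT tl.sde_id, tr.table_name, tl.lock_type
      FROM sde.table_locks tl
      JOIN sde.table_registry tr ON tr.registration_id = tl.registration_id""")
for sde_id, table_name, lock_type in cur:
    print('session {} holds a type-{} lock on {}'.format(sde_id, lock_type, table_name))

Matching the sde_id from the lock tables back to the owner/nodename in process_information is usually enough to tell whether a lock comes from a service account or a desktop session.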
10-04-2021 10:00 AM | Kudos: 0 | Replies: 3 | Views: 4572

POST
Can you give your reasoning and state the relevance to the above? We assume everything of importance is posted by the time this script runs; any existing lock would interfere with getting to state 0, which is why those tables are truncated. Process_Information is truncated to remove any lingering connections, and I reconnect afterwards to get the sde connection back. Regardless, the real question is: where are these excess states coming from? They shouldn't exist, because again I delete all versions (except default, of course) and turn all our services off (and by services I mean Windows services connecting to the SDE that manage our post and QA queues). No map service uses the SDE connection.
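For reference, this is roughly how I list the leftover states. A minimal sketch, assuming the standard SDE.STATES repository table in Oracle; the column list matches the one in my log output, and the credentials are placeholders:

import cx_Oracle

# Minimal sketch: list any states above state 0 with who created them and when.
# Assumes the standard SDE.STATES repository table in an Oracle geodatabase.
conn = cx_Oracle.connect('sde/password@mydb')  # placeholder credentials
cur = conn.cursor()
cur.execute("""
    SELECT state_id, owner, creation_time, closing_time, parent_state_id, lineage_name
      FROM sde.states
     WHERE state_id > 0
     ORDER BY creation_time""")
for row in cur:
    print(row)

The owner and creation_time columns are what I've been using to try to tie the excess states back to whatever created them.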
10-04-2021 09:55 AM | Kudos: 0 | Replies: 0 | Views: 1281

POST
No, there shouldn't be, but I'll double-check that. I truncate those tables because by the time this script runs everything should be posted, and our mappers are aware of that; we work on a weekly timeframe and have always gotten to state 0 at the end of the week. Locks could prevent the compression to state 0 because they prevent the deletion of a version (correct me if I'm wrong). This is probably not a common workflow, but the docs support it if you assume all non-posted versions are to be deleted (which we do).
10-04-2021 09:49 AM | Kudos: 0 | Replies: 5 | Views: 4576

POST
And I can provide this token to initiate a connection to AGOL via the Python API's GIS module?
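Something like this is what I have in mind. A rough sketch only; whether the GIS constructor accepts a pre-generated token this way should be confirmed against the version of the arcgis package in use, and the token value is a placeholder:

from arcgis.gis import GIS

# Rough sketch: connect to AGOL with a pre-generated access token instead of
# a username/password. Assumes the installed arcgis package accepts a token
# keyword argument; confirm against its docs before relying on this.
token = '<access token generated from the registered application>'  # placeholder
gis = GIS('https://www.arcgis.com', token=token)
print(gis.users.me)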
10-04-2021 08:41 AM | Kudos: 0 | Replies: 0 | Views: 3024

POST
Sorry, I should have shared the code itself. I truncate all those tables prior to compression...but perhaps you could shed some light on this: I ran into an issue where, if I make a direct connection to the SDE with admin credentials and then truncate process_information, the compress would fail. I don't understand the underlying reason other than that the compress needs whatever info is in that table. Maybe I need to separate the truncating of the lock tables from process_information? I do them in a loop since they are all the same commands.

import logging
import cx_Oracle
import arcpy

# admin_user, admin_pw, proc_user, proc_pw, sde, logger and the helper
# functions check_get_sde_conn/compress_and_check are defined earlier
# in the script.

# Connect to the sde and truncate all the lock tables and process_information
# to help with getting to state 0. Note: this block **MUST** be run before
# making a connection to the SDE via the arcpy interface, as arcpy will add a
# row to the process_information table that is essential to the compress
# operation.
logger.info("Truncating lock tables and process information for sde")
sde_conn = cx_Oracle.connect('{}/{}@{}'.format(admin_user, admin_pw, sde))
sde_cursor = sde_conn.cursor()
sde_truncate_ls = ["object_locks", "table_locks", "state_locks",
                   "layer_locks", "process_information"]
for table in sde_truncate_ls:
    sde_cursor.execute("TRUNCATE TABLE {}".format(table))
# Get an sde connection file for admin, prevent users from connecting and
# disconnect anyone currently connected.
admin_sde = check_get_sde_conn(None, sde, admin_user, admin_pw, True)
arcpy.env.workspace = admin_sde
arcpy.AcceptConnections(admin_sde, False)
arcpy.DisconnectUser(admin_sde, "ALL")
# Before the compress we need to delete all versions if we want to get
# the sde to state 0.
logger.info("Getting and deleting all versions that have not been posted")
ver_list = [ver.name for ver in arcpy.da.ListVersions(admin_sde)
            if ver.name.lower() != 'sde.default']
for version in ver_list:
    logger.info("Deleting Version: {}".format(version))
    arcpy.DeleteVersion_management(admin_sde, version)
# Truncate the mm_session, mm_px_versions and gdbm_post_queue tables.
# Not necessary for state 0, but after deleting all versions any rows left
# in those tables are erroneous: there is no longer any state or version
# history associated with them. Erroneous rows can also be left in
# gdbm_post_queue, so it is truncated too; everything should be posted
# at this point.
logger.info("Truncating process tables")
proc_conn = cx_Oracle.connect('{}/{}@{}'.format(proc_user, proc_pw, sde))
proc_cursor = proc_conn.cursor()
proc_truncate_ls = ["mm_session", "mm_px_versions", "gdbm_post_queue"]
for table in proc_truncate_ls:
    proc_cursor.execute("TRUNCATE TABLE {}".format(table))
# Time to compress the sde, then query the database and check how many
# states remain. If not at state 0 after the first compress, log the
# remaining states and try one more time; sometimes a second compression
# will clean up any lingering unreferenced states.
logger.info("Compressing the sde")
state_id, states = compress_and_check(admin_sde, sde_cursor)
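For completeness, compress_and_check boils down to something like the following. This is a simplified sketch rather than the exact code; it queries the standard SDE repository tables in Oracle, and the SELECT mirrors the columns in my log output:

def compress_and_check(workspace, cursor):
    """Compress the sde, then report the DEFAULT state and any states left.

    Simplified sketch, not the exact production code: queries the standard
    SDE repository tables (sde.versions / sde.states) in Oracle.
    """
    arcpy.Compress_management(workspace)
    # The state the DEFAULT version currently points at; 0 means fully compressed.
    cursor.execute(
        "SELECT state_id FROM sde.versions WHERE name = 'DEFAULT' AND owner = 'SDE'")
    state_id = cursor.fetchone()[0]
    # Same columns as in the log output above.
    cursor.execute("""
        SELECT state_id, owner, creation_time, closing_time,
               parent_state_id, lineage_name
          FROM sde.states""")
    states = cursor.fetchall()
    return state_id, states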
10-04-2021 08:39 AM | Kudos: 0 | Replies: 0 | Views: 4634

POST
Ah, cool. So if I do it like this, I can still generate the access token indefinitely even if the password expires on the account that registered the application?
10-04-2021 08:28 AM | Kudos: 0 | Replies: 2 | Views: 3047

POST
My org uses ArcGIS Online, and we have a number of Python scripts currently using named users to authenticate with AGOL via the Python API. The issue is that we must enforce a password policy and have passwords reset every x number of days. The API does not seem to offer any way to check when an account's password will expire. That's problematic, as I have 5 or 6 accounts running scripts (a dedicated account for each grouping of servers to prevent conflicts, since one account can only be used on 3 machines at a time), and it's a hassle to manage all of them manually. I want to programmatically check when a password will need to be changed so I can automate the reset and the updating of the stored passwords in protected directories. Before I come up with my own solution, I was wondering if there is a better way to deal with this? Maybe I missed something and there is a more permanent way to authenticate without using an on-prem Portal, or there is something in the API reference I missed. It seems I'll have to store a log of when passwords were reset. As far as I can tell from the documentation, this or ArcGIS Pro authentication are the only ways to connect to AGOL and run our scripts...and a Pro auth token only lasts two weeks, so that's out of the window (unless there is some way to specify a longer default token lifetime for Pro with AGOL, but there does not seem to be).
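The closest alternative I've found so far is registering an application in AGOL and generating access tokens from its credentials instead of a named-user login. A rough sketch of that request, using the ArcGIS REST API's client_credentials flow; the client_id/client_secret values are placeholders for a registered app's credentials:

import requests

# Rough sketch: generate an AGOL access token from a registered application's
# credentials (OAuth2 client_credentials flow) instead of a named-user login.
# The tokens themselves are short-lived (see expires_in in the response), but
# they can be regenerated on demand without any password in the loop.
params = {
    'client_id': '<app client id>',          # placeholder
    'client_secret': '<app client secret>',  # placeholder
    'grant_type': 'client_credentials',
    'f': 'json',
}
resp = requests.post('https://www.arcgis.com/sharing/rest/oauth2/token', data=params)
token = resp.json()['access_token']
print(token)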
10-04-2021 07:37 AM | Kudos: 1 | Replies: 6 | Views: 3056

POST
As the title states, for some reason my SDE does not get to state 0 during my weekly compression via a Python script. In our SDE the 'SDE' user is the admin; I connect with that user, disconnect everyone, block new connections, turn off our geodatabase manager services, delete all unposted versions and then compress, yet I'm always left with something like this (from my logs):

2021-10-01 21:19:29,839 - DEBUG - Failed to compress to state 0, current def state 16140
2021-10-01 21:19:29,839 - DEBUG - The following states are still remaining:
2021-10-01 21:19:29,839 - DEBUG - (STATE_ID,OWNER,CREATION_TIME,CLOSING_TIME,PARENT_STATE_ID,LINEAGE_NAME)
2021-10-01 21:19:29,855 - DEBUG - (0, u'SDE', datetime.datetime(2013, 2, 4, 16, 1, 51), datetime.datetime(2013, 2, 4, 16, 23, 14), 0, 0)
2021-10-01 21:19:29,855 - DEBUG - (16140, u'SDE', datetime.datetime(2021, 10, 1, 16, 45, 33), datetime.datetime(2021, 10, 1, 16, 45, 34), 0, 16122)
2021-10-01 21:19:29,855 - DEBUG - (15687, u'SDE', datetime.datetime(2021, 10, 1, 14, 2, 15), datetime.datetime(2021, 10, 1, 14, 2, 35), 0, 15687)
2021-10-01 21:19:29,871 - DEBUG - (16144, u'SDE', datetime.datetime(2021, 10, 1, 16, 46, 40), datetime.datetime(2021, 10, 1, 16, 46, 40), 16140, 16122)
2021-10-01 21:19:29,871 - DEBUG - (16146, u'SDE', datetime.datetime(2021, 10, 1, 16, 46, 42), datetime.datetime(2021, 10, 1, 16, 46, 43), 16140, 16146)
2021-10-01 21:19:29,885 - DEBUG - (16148, u'SDE', datetime.datetime(2021, 10, 1, 16, 46, 45), datetime.datetime(2021, 10, 1, 16, 47, 7), 16140, 16148)
2021-10-01 21:19:29,885 - DEBUG - (16150, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 9), datetime.datetime(2021, 10, 1, 16, 47, 10), 16140, 16150)
2021-10-01 21:19:29,901 - DEBUG - (16152, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 11), datetime.datetime(2021, 10, 1, 16, 47, 13), 16140, 16152)
2021-10-01 21:19:29,901 - DEBUG - (16154, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 14), datetime.datetime(2021, 10, 1, 16, 47, 16), 16140, 16154)
2021-10-01 21:19:29,901 - DEBUG - (16156, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 17), datetime.datetime(2021, 10, 1, 16, 47, 17), 16140, 16156)
2021-10-01 21:19:29,917 - DEBUG - (16158, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 19), datetime.datetime(2021, 10, 1, 16, 47, 31), 16140, 16158)
2021-10-01 21:19:29,917 - DEBUG - (16160, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 33), datetime.datetime(2021, 10, 1, 16, 47, 34), 16140, 16160)
2021-10-01 21:19:29,933 - DEBUG - (16162, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 35), datetime.datetime(2021, 10, 1, 16, 47, 44), 16140, 16162)
2021-10-01 21:19:29,933 - DEBUG - (16164, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 46), datetime.datetime(2021, 10, 1, 16, 47, 47), 16140, 16164)
2021-10-01 21:19:29,933 - DEBUG - (16166, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 48), datetime.datetime(2021, 10, 1, 16, 47, 50), 16140, 16166)
2021-10-01 21:19:29,948 - DEBUG - (16168, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 51), datetime.datetime(2021, 10, 1, 16, 47, 52), 16140, 16168)
2021-10-01 21:19:29,948 - DEBUG - (16170, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 54), datetime.datetime(2021, 10, 1, 16, 47, 54), 16140, 16170)
2021-10-01 21:19:29,963 - DEBUG - (16172, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 56), datetime.datetime(2021, 10, 1, 16, 47, 56), 16140, 16172)
2021-10-01 21:19:29,963 - DEBUG - Attempting compress to state 0 again
2021-10-01 21:19:52,631 - DEBUG - Second attempt failed, current def state 16140
2021-10-01 21:19:52,631 - DEBUG - The following states are still remaining:
2021-10-01 21:19:52,647 - DEBUG - (STATE_ID,OWNER,CREATION_TIME,CLOSING_TIME,PARENT_STATE_ID,LINEAGE_NAME)
2021-10-01 21:19:52,647 - DEBUG - (0, u'SDE', datetime.datetime(2013, 2, 4, 16, 1, 51), datetime.datetime(2013, 2, 4, 16, 23, 14), 0, 0)
2021-10-01 21:19:52,647 - DEBUG - (16140, u'SDE', datetime.datetime(2021, 10, 1, 16, 45, 33), datetime.datetime(2021, 10, 1, 16, 45, 34), 0, 16122)
2021-10-01 21:19:52,661 - DEBUG - (15687, u'SDE', datetime.datetime(2021, 10, 1, 14, 2, 15), datetime.datetime(2021, 10, 1, 14, 2, 35), 0, 15687)
2021-10-01 21:19:52,661 - DEBUG - (16144, u'SDE', datetime.datetime(2021, 10, 1, 16, 46, 40), datetime.datetime(2021, 10, 1, 16, 46, 40), 16140, 16122)
2021-10-01 21:19:52,677 - DEBUG - (16146, u'SDE', datetime.datetime(2021, 10, 1, 16, 46, 42), datetime.datetime(2021, 10, 1, 16, 46, 43), 16140, 16146)
2021-10-01 21:19:52,677 - DEBUG - (16148, u'SDE', datetime.datetime(2021, 10, 1, 16, 46, 45), datetime.datetime(2021, 10, 1, 16, 47, 7), 16140, 16148)
2021-10-01 21:19:52,677 - DEBUG - (16150, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 9), datetime.datetime(2021, 10, 1, 16, 47, 10), 16140, 16150)
2021-10-01 21:19:52,694 - DEBUG - (16152, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 11), datetime.datetime(2021, 10, 1, 16, 47, 13), 16140, 16152)
2021-10-01 21:19:52,694 - DEBUG - (16154, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 14), datetime.datetime(2021, 10, 1, 16, 47, 16), 16140, 16154)
2021-10-01 21:19:52,709 - DEBUG - (16156, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 17), datetime.datetime(2021, 10, 1, 16, 47, 17), 16140, 16156)
2021-10-01 21:19:52,709 - DEBUG - (16158, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 19), datetime.datetime(2021, 10, 1, 16, 47, 31), 16140, 16158)
2021-10-01 21:19:52,709 - DEBUG - (16160, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 33), datetime.datetime(2021, 10, 1, 16, 47, 34), 16140, 16160)
2021-10-01 21:19:52,724 - DEBUG - (16162, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 35), datetime.datetime(2021, 10, 1, 16, 47, 44), 16140, 16162)
2021-10-01 21:19:52,724 - DEBUG - (16164, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 46), datetime.datetime(2021, 10, 1, 16, 47, 47), 16140, 16164)
2021-10-01 21:19:52,740 - DEBUG - (16166, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 48), datetime.datetime(2021, 10, 1, 16, 47, 50), 16140, 16166)
2021-10-01 21:19:52,740 - DEBUG - (16168, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 51), datetime.datetime(2021, 10, 1, 16, 47, 52), 16140, 16168)
2021-10-01 21:19:52,740 - DEBUG - (16170, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 54), datetime.datetime(2021, 10, 1, 16, 47, 54), 16140, 16170)
2021-10-01 21:19:52,756 - DEBUG - (16172, u'SDE', datetime.datetime(2021, 10, 1, 16, 47, 56), datetime.datetime(2021, 10, 1, 16, 47, 56), 16140, 16172)

I do a second compression if the first fails to get to state 0, as I read that sometimes this helps fix the issue, but still no luck. How can I remedy this? I'm not sure where these states are coming from, as at this point there are no versions; they've all been posted or deleted, so these states are unreferenced. The datetimes show they are opened around 4 PM, while this script runs at 9 PM, and the only things that use this schema/user are myself and the services for managing the geodatabase (which I turn off before running).
10-04-2021 07:27 AM | Kudos: 0 | Replies: 12 | Views: 5931

POST
See, I've read that and it seems like the most plausible explanation, but I don't see it as the cause of the missing edits. Once they are posted to default, why would I not see them at the database tier until after a compress? They were posted to the default version, and I think that is what you would be querying by default when connecting through another interface like SQL Developer or Python. We do run a weekly compress; actually, we compress twice: one is a quick compress, the other is an attempt to get to state 0.
08-25-2021 04:38 AM | Kudos: 0 | Replies: 1 | Views: 1934

POST
As the title says, for some reason I do not see posted edits at the database tier until the week after they're posted. I can't figure out why; I've looked through the relevant documentation and found nothing about this. We are using ArcGIS 10.2.1, and our SDE is on Oracle 11g. For example: if I start an edit session and a new version, edit a point feature and then post it on, say, Wednesday, it will be visible in the default version within ArcMap after the posting completes. However, if I go to SQL Developer and query that same feature, I will not see the edit until the next Monday for some reason. Am I not querying the default version using SQL Developer? What's going on here? Thanks!
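If it matters, on the SQL side I'm doing a plain SELECT against the base table. From what I've read, if posted edits sit in the delta tables until a compress moves them into the base table, then querying through the versioned view should show them right away; something like this is what I'd try. A rough sketch only: mytable_evw and the connection values are placeholders, and the sde.version_util call should be checked against your geodatabase release:

import cx_Oracle

# Rough sketch: query the DEFAULT version through a versioned view so that
# edits still sitting in the delta tables are visible before a compress.
# owner.mytable_evw is a hypothetical versioned-view name; connection values
# are placeholders.
conn = cx_Oracle.connect('viewer/password@mydb')
cur = conn.cursor()
# Point this session at the DEFAULT version.
cur.callproc('sde.version_util.set_current_version', ['SDE.DEFAULT'])
# The _evw view merges the base table with its delta tables.
cur.execute("SELECT objectid FROM owner.mytable_evw")
for (objectid,) in cur:
    print(objectid)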
08-24-2021 01:12 PM | Kudos: 0 | Replies: 3 | Views: 2035

POST
The URL is formatted slightly wrong; it should be:

url = '{0}/sharing/rest/community/users/{1}/delete?f=json&token={2}'.format(gis.url, username, token)

Also, this is Python 3, so you would want to use an f-string instead:

url = f'{gis.url}/sharing/rest/community/users/{username}/delete?f=json&token={token}'
06-21-2021 07:49 AM | Kudos: 0 | Replies: 0 | Views: 4066

IDEA
When you create a user with the UserManager in the ArcGIS API for Python, it currently does not let you modify the boilerplate email sent to that user in any way. However, the functionality is there if you look at the code in __init__.py in the GIS module; you just need to change and add a few lines. This should be added officially in the next release of the API. It's useful functionality: for example, in my org we have an application for our external customers, and there is some extra information we want to convey to them when we add them to our organization. We'd rather not have to send two emails, one from ArcGIS and one from us, as this can confuse some users, or they'll miss one of the emails.
05-24-2021 06:01 AM | Kudos: 0 | Replies: 0 | Views: 819

POST
For some reason, if you configure a smart form for a layer or table on an editable layer, the submit button is grayed out and you can't actually submit the form when using Field Maps...if you just use the regular pop-up it all works fine, but as soon as you use a smart form the button is grayed out. I know all my layers are configured properly, so I'm certain this is a bug unless I'm missing something. Anyone else have an issue with this?
02-25-2021 06:26 AM | Kudos: 1 | Replies: 14 | Views: 4447

POST
I've checked the documentation for both and there does not seem to be a way, which is surprising. Wondering if maybe I missed something, or perhaps there is another way?
09-22-2020 01:58 PM | Kudos: 0 | Replies: 1 | Views: 918