POST
Our Oracle is about 60 GB total. The outside contract we have was in place before I took my job as the SDE admin. That contract handles patches, backups, and other software-related tasks for all Oracle databases in our organization (more than just the SDE Oracle). So while I could manage the Oracle software myself, the policy is for the Oracle support contract to do it, since the other areas' DBAs are not interested in doing it themselves. The decision to move all databases to SQL Server affects more than GIS, especially since other third-party (non-GIS) applications are dropping Oracle integration support in favor of SQL Server. But there is also a big cost savings, which is the main reason.
Posted 10-20-2017 06:05 AM

POST
We are moving from Oracle to SQL Server for three reasons. First and most important, the licensing cost is much higher for Oracle than for SQL Server; by switching, we save over 50% on licensing fees. Second, our Oracle boxes are UNIX while every other server is Windows. While we could run Oracle on Windows, it was determined that our organization will use Microsoft software as much as possible. Lastly, Oracle is overkill for what we do and is not very user friendly compared to SQL Server. We also have an additional outside contract to manage the Oracle software, which will go away when we move to SQL Server. That is an additional savings for us.
Posted 10-19-2017 06:48 AM

POST
I believe there is an arcpy method that does this, but an easy way is to search the string for '.gdb':

strTest = r"C:\WorkDoc\Default.gdb\myDataset\myFeatureClass"
pathGDB = strTest[:strTest.index('.gdb') + 4]  # keep everything up to and including '.gdb'
print pathGDB
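If you prefer not to rely on string searching, a small helper that walks up the path works too. A minimal sketch; the get_gdb_path name is just for illustration:

import os

def get_gdb_path(fc_path):
    # Walk up the path until the current level ends in '.gdb'
    path = fc_path
    while path and not path.lower().endswith('.gdb'):
        parent = os.path.dirname(path)
        if parent == path:  # hit the drive root without finding a GDB
            return None
        path = parent
    return path

print get_gdb_path(r"C:\WorkDoc\Default.gdb\myDataset\myFeatureClass")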
Posted 10-18-2017 05:38 AM

POST
We are currently migrating too. We have SDE 10.4.1, Workflow Manager, and Data Reviewer in our database. We do not have any rasters in the database. Here are the sizes:

Oracle 11.2.0.4: 49,897 MB
SQL Server 2016: 55,341 MB

One thing we have noticed is that the SDE_Log tables make the log files swell and take a TON of space in Oracle (often 1 to 3 times the total datafile size) when doing large selections and heavy Data Reviewer sessions. In SQL Server, they are stored in TempDB and take almost no space at all.
Posted 10-17-2017 01:24 PM

POST
Are you trying to edit the feature class while adding fields? That will prevent the add: fields can only be added outside an edit session.
Posted 10-12-2017 11:25 AM

POST
You will need to look at the MXD you used to publish the service. The layers in the MXD should correspond to the layers in the service. Then it is a matter of looking at the Source tab of each layer to determine its table. Remember also that if your data is versioned, there are three tables, not one, that you will need to look at: the base table plus the A & D (adds and deletes) tables (e.g., A512 and D512 would be for the same layer).
Posted 10-11-2017 11:12 AM

POST
Hi,

We have a geodatabase (10.4.1) that is stored in an AlwaysOn cluster in SQL Server 2016. Since most of our traffic on the database is from the web and is read-only, I have configured the database to use read-only routing so that read-only connections go to the secondary replica. I have specified the listener name and ApplicationIntent=ReadOnly in my SDE connections. In ArcMap, the connection is correctly sent to the secondary node, but when I publish the map to Server, it always sends it to the primary node. I have checked the data stores, and they are set correctly to use ApplicationIntent=ReadOnly.

Why is ArcGIS Server not honoring the application intent while ArcMap is? How do I fix this?

Thanks,
Kevin
Posted 10-04-2017 06:29 AM

POST
Try:

variable = "SAMPLE NAME"
truncVar = variable[:variable.rfind(' ')]
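print truncVar  # prints "SAMPLE": rfind(' ') finds the last space, and the slice drops it and everything after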
Posted 09-26-2017 12:53 PM

POST
You are right. For versioned tables, use the Delete Rows tool.
Posted 09-19-2017 08:28 AM

POST
I would look into using editor tracking. It will stamp the update-date field with the date and time the feature was last updated. It is then a matter of adding that update-date field to the page header.
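Enabling it is a one-time geoprocessing call. A minimal sketch, assuming a hypothetical feature class path and field names:

import arcpy

# Stamp the editor name and edit date/time automatically on every edit
arcpy.EnableEditorTracking_management(
    r"C:\connections\gis.sde\VECTOR.Parcels",  # hypothetical dataset
    last_editor_field="LAST_EDITOR",
    last_edit_date_field="UPDATE_DATE",
    add_fields="ADD_FIELDS")  # create the fields if they do not already exist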
Posted 09-19-2017 08:21 AM

POST
Why are you unregistering, then truncating the table, only to register it again? It kind of defeats the purpose of versioning. You can truncate a versioned table. As for the error, it would be helpful to see your script. Can you post the code, or at least the part that is having the problem? I have seen this error when working with a table that is part of a relationship. But if it works in PyScripter but not in ArcGIS, it is most likely a parameter problem (you can't simply use sys.argv; you need to use arcpy.GetParameter) or a syntax issue.
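To illustrate the parameter difference (the table parameter index here is hypothetical):

import arcpy

# Inside an ArcGIS script tool, read the first tool parameter...
table = arcpy.GetParameterAsText(0)
# ...whereas sys.argv[1] only works when the script runs from a plain command line.
arcpy.TruncateTable_management(table)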
Posted 09-19-2017 08:09 AM

POST
I also recommend you look into an enterprise geodatabase. If you try to do it in a file GDB, or even a personal GDB, you will encounter lock issues at some point. If you are unable to move to an enterprise GDB, I would recommend keeping the master GDB read-only and creating separate GDB clones for each of your local users. Assuming the updates are on a regular schedule (i.e., nightly), I would write a Python script that updates (or replaces) your master GDB. That script can be scheduled to run automatically using Windows Task Scheduler. After updates are complete, you can schedule another task to recreate the local clones (see the sketch below). The problem with this solution is that any changes to the local clones are lost each time they are recreated. And if you have users with a clone GDB open, it won't copy correctly. Finally, depending on how big the data is, you could be using a lot of storage, as each clone will be a full copy.
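A minimal sketch of the clone-refresh step, assuming hypothetical paths and relying on a file GDB being a plain folder on disk:

import os
import shutil

master = r"C:\GIS\Master.gdb"               # hypothetical read-only master
clones = [r"C:\GIS\Users\user1\Local.gdb",  # hypothetical per-user clones
          r"C:\GIS\Users\user2\Local.gdb"]

for clone in clones:
    if os.path.exists(clone):
        shutil.rmtree(clone)    # this is what fails if a user has the GDB open/locked
    shutil.copytree(master, clone)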
Posted 09-12-2017 07:26 AM

POST
Depending on your underlying database (i.e., SQL Server, Oracle, ...), you can set a constraint in the database's back end. However, I am pretty sure there isn't a tool to do it in ArcGIS itself. I also don't know whether a back-end constraint will carry over to Collector. You could use editor tracking instead, which will auto-populate the date for the user when they create or update a feature.
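For the back-end route on SQL Server, a sketch like this could apply the constraint through an SDE connection (the connection file, table, and column names are all hypothetical, and versioned or archive-enabled tables may complicate it):

import arcpy

# Run the DDL through an existing SDE connection file
conn = arcpy.ArcSDESQLExecute(r"C:\connections\gis.sde")
conn.execute("""ALTER TABLE dbo.INSPECTIONS
                ADD CONSTRAINT ck_insp_date CHECK (INSP_DATE <= GETDATE())""")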
Posted 09-11-2017 05:37 AM

POST
What we do is separate each of the address parts into different fields:

ST_NO (e.g., 1234)
ST_NAME (e.g., S Main)
ST_TYPE (e.g., St)
UNIT_NO (e.g., 1)

This makes the kinds of queries you are interested in very easy. If you need the full address, it is just a matter of concatenating the fields. As for the special characters, you could strip them on input (an insert trigger) or strip them in the WHERE clause. Depending on the database you are using, the functions you need should be pretty straightforward: find a # and remove everything after it (see the sketch below).
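If you do the cleanup in Python rather than in the database, the same idea is a one-liner (the sample address is hypothetical):

addr = "1234 S Main St # 1"
clean = addr.split('#')[0].strip()  # drop the '#' and everything after it
print clean                         # "1234 S Main St"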
Posted 09-07-2017 06:18 AM

POST
Yes and no. For the no part, the Python script doesn't work because the database function that gets the next ObjectID can't be called in bulk against SQL Server via the ArcSDESQLExecute method (see Calling sde.next_rowid with ArcSDESQLExecute). So I was never able to get it to work. If anyone can help me solve that issue, I would be glad to make the Python work with SQL Server. For the yes part, I was able to create the process via a SQL script that can be run directly in SQL Server. Here is a copy of the SQL code:

--Script to update the archive table in SQL Server.
--This example uses a feature class called LUPAPWC with a schema owner VECTOR
--You will need to update the fields for your feature class
USE GISTEST;
GO
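-- Create a tracking table that records the cut date for each feature class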
IF NOT EXISTS (select * from sysobjects where name='ArcCutDate' and xtype='U')
CREATE TABLE VECTOR.ArcCutDate(
cutDate datetime NULL,
tableName nchar(50) NULL
)
GO
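-- Sanity checks, then record the earliest from-date in the current archive as the cut date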
SELECT count(*) from VECTOR.LUPAPWC_H; --Current Archive table
SELECT count(*) from VECTOR.LUPAPWC_OLD; --Old Archive Table
SELECT MIN( gdb_from_date) FROM GISTEST.VECTOR.LUPAPWC_H;
DECLARE @cutDate datetime
SELECT @cutDate = MIN( gdb_from_date) FROM GISTEST.VECTOR.LUPAPWC_H;
INSERT INTO VECTOR.ArcCutDate(cutDate,TableName) VALUES (@cutDate, 'LUPAPWC');
GO
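-- Pass 1: copy old-archive rows closed on or before the cut date into the current archive,
-- one row at a time so each gets a fresh archive OID and GlobalID from sde.next_rowid/next_globalid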
BEGIN TRAN update_LUPAPWC;
DECLARE @cutDate datetime
SELECT @cutDate = cutdate FROM GISTEST.VECTOR.ArcCutDate WHERE TableName = 'LUPAPWC';
DECLARE @iterator INT
SELECT @iterator = MIN(OBJECTID_1) FROM GISTEST.VECTOR.LUPAPWC_old WHERE GDB_TO_DATE <= @cutDate
WHILE @iterator is NOT NULL
BEGIN
DECLARE @newid int
DECLARE @gid uniqueidentifier
EXEC GISTEST.sde.next_rowid 'VECTOR', 'LUPAPWC_H', @newid OUTPUT
EXEC GISTEST.sde.next_globalid @gid OUTPUT
INSERT INTO VECTOR.LUPAPWC_H (OBJECTID,
LUPA,
UPDATE_DATE,
GLOBALID,
SHAPE,
GDB_FROM_DATE,
GDB_TO_DATE,
GDB_ARCHIVE_OID)
SELECT OBJECTID,
LUPA,
UPDATE_DATE,
@gid,
SHAPE,
GDB_FROM_DATE,
GDB_TO_DATE,
@newid
FROM VECTOR.LUPAPWC_old
WHERE @iterator = OBJECTID_1;
SELECT @iterator = MIN(OBJECTID_1)
FROM GISTEST.VECTOR.LUPAPWC_OLD
WHERE GDB_TO_DATE <= @cutDate AND @iterator < OBJECTID_1;
END
GO
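-- Pass 2: old-archive rows still open after the cut date are copied with GDB_TO_DATE set to the cut date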
DECLARE @cutDate datetime
SELECT @cutDate = min(cutdate) FROM GISTEST.VECTOR.ArcCutDate WHERE TableName = 'LUPAPWC';
DECLARE @iterator INT
SELECT @iterator = MIN(OBJECTID_1) FROM GISTEST.VECTOR.LUPAPWC_old WHERE GDB_TO_DATE > @cutDate
WHILE @iterator is NOT NULL
BEGIN
DECLARE @newid int
DECLARE @gid uniqueidentifier
EXEC GISTEST.sde.next_rowid 'VECTOR', 'LUPAPWC_H', @newid OUTPUT
EXEC GISTEST.sde.next_globalid @gid OUTPUT
INSERT INTO GISTEST.VECTOR.LUPAPWC_H (OBJECTID,
LUPA,
UPDATE_DATE,
GLOBALID,
SHAPE,
GDB_FROM_DATE,
GDB_TO_DATE,
GDB_ARCHIVE_OID)
SELECT OBJECTID,
LUPA,
UPDATE_DATE,
@gid,
SHAPE,
GDB_FROM_DATE,
@cutDate,
@newid
FROM GISTEST.VECTOR.LUPAPWC_OLD
WHERE @iterator = OBJECTID_1;
SELECT @iterator = MIN(OBJECTID_1)
FROM GISTEST.VECTOR.LUPAPWC_OLD
WHERE GDB_TO_DATE > @cutDate AND @iterator < OBJECTID_1;
END
GO
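-- Review the results, then commit (or run the rollback line instead)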
select * from GISTEST.VECTOR.LUPAPWC_H;
--rollback TRAN update_LUPAPWC;
commit TRAN update_LUPAPWC;
Posted 09-05-2017 01:43 PM