POST
In the Buffer tool there is a Side Type parameter. The default is Full, but there are options for Right and Left, which will buffer only one side of your line. When you use the tool in ModelBuilder, just set this value in the tool parameters.
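As a hedged sketch of the same idea from Python (the feature class names are hypothetical placeholders, and arcpy is only importable inside an ArcGIS install, so the import is guarded):

```python
# Sketch: buffer only the right side of a line feature class.
# "roads" and the output name are hypothetical placeholders.
try:
    import arcpy
    arcpy.Buffer_analysis("roads", "roads_right_50m", "50 Meters",
                          line_side="RIGHT")  # "FULL" (default), "LEFT", or "RIGHT"
    side_used = "RIGHT"
except ImportError:
    # arcpy is not available outside an ArcGIS installation.
    side_used = None
```

The line_side value is the same Side Type choice you would set in the tool dialog or in the ModelBuilder tool parameters.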
Posted 08-29-2017 05:15 AM

POST
Hi, I saw that there is a new patch for Data Reviewer (Patch 6). However, the download under the link points to Patch 5. Does anyone have the proper download link for it? http://support.esri.com/en/Products/Desktop/data-and-workflows-extensions/arcgis-data-reviewer/10-4-1#downloads?id=7526
Posted 08-22-2017 07:20 AM

POST
We are experiencing the same issue. We were able to determine it was tied to the quotes (") around the username in the version owner. The version that Data Reviewer is looking at appears as "domain\username", with the quotes. Did you ever find a solution?
Posted 08-22-2017 07:08 AM

POST
You need to use the GetParameterAsText function. Remember that you need to keep the proper index order for your inputs:

Flowaccumulation = arcpy.GetParameterAsText(0)
Flowdirection = arcpy.GetParameterAsText(1)
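A minimal sketch of how those indexes line up with a script tool's parameter list (arcpy is only importable inside ArcGIS, so the import is guarded and placeholder strings are assumptions for running the sketch elsewhere):

```python
# Index 0 is the first parameter defined on the script tool's
# Parameters tab, index 1 the second, and so on.
try:
    import arcpy
    flow_accumulation = arcpy.GetParameterAsText(0)  # first tool parameter
    flow_direction = arcpy.GetParameterAsText(1)     # second tool parameter
except ImportError:
    # Outside ArcGIS, fall back to hypothetical placeholders.
    flow_accumulation, flow_direction = "flowacc", "flowdir"
```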
Posted 08-07-2017 06:13 AM

POST
I agree performance is the main reason. Another reason is that, while very few people look at it, there is additional information about your GIS system stored in it that you might not want to make public. For example, it may refer to another, sensitive layer that was used in a clip tool. While the history doesn't provide access to that layer, it does advertise the layer's existence. In addition, full paths are often stored, which gives people an idea of your network infrastructure. Most of the time the history is garbage to start with, but someone looking for info on your private GIS system could get something out of it. Removing the history lessens this risk. This is really only a concern in highly sensitive GIS systems.
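For what it's worth, later releases (ArcGIS Pro's arcpy, 2.5+) added a metadata API that can strip just the geoprocessing history. A hedged sketch, with a hypothetical dataset path, and guarded since arcpy only exists inside an ArcGIS install:

```python
try:
    import arcpy
    # Hypothetical path; point this at your own dataset.
    md = arcpy.metadata.Metadata(r"C:\data\demo.gdb\parcels")
    md.deleteContent("GPHISTORY")  # remove only the geoprocessing history
    md.save()
    cleaned = True
except ImportError:
    # arcpy is not available outside an ArcGIS installation.
    cleaned = False
```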
Posted 07-21-2017 08:00 AM

POST
I am not sure why you want to have it all as one line; it makes it very hard to read. But if you must, then use this code:

import datetime
date = datetime.date.today()
quarter = 'Q1' if '2017-01-01' <= str(date) <= '2017-03-31' else 'Q2' if '2017-04-01' <= str(date) <= '2017-06-30' else 'Q3' if '2017-07-01' <= str(date) <= '2017-09-30' else 'Q4'

You had several things wrong. First, the <= was backwards (you had =<). Second, a range test needs both bounds; write the two comparisons joined with and, or use Python's chained comparisons, so 'a' <= x <= 'b' is valid and reads better. Finally, in one-line conditional expressions you need else, not elif, since you are effectively nesting conditional expressions. Hopefully this helps.
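As a hedged alternative sketch (not what was asked for, but shorter and year-independent), the quarter can also be derived from the month number instead of hard-coded 2017 date strings:

```python
import datetime

date = datetime.date.today()
# Months 1-3 -> Q1, 4-6 -> Q2, 7-9 -> Q3, 10-12 -> Q4
quarter = 'Q{}'.format((date.month - 1) // 3 + 1)
```

This works for any year, at the cost of not matching the string-comparison style of the original one-liner.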
Posted 07-10-2017 11:27 AM

POST
Hi all, I am having trouble updating one of my scripts to work with SQL Server 2016. The problem arises when I try to call the sde.next_rowid procedure using the arcpy.ArcSDESQLExecute method. I am using the code that is described in How To: Insert geometry from XY coordinates using SQL, and it works just fine in SSMS. However, when I try it in my Python, I get the following message. I believe the problem lies with the SQL, but it could be with https://community.esri.com/community/developers/gis-developers/python?sr=search&searchId=94336a51-e622-4a11-98a0-c617c9df218e&searchIndex=0. Anyone see anything I am missing? Thanks, Kevin

Error Message:
ArcSDESQLExecute: StreamPrepareSQL ArcSDE Extended error 11514 [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]The metadata could not be determined because statement 'EXECUTE sp_executesql @sql, N'@newid INTEGER OUTPUT', @newid = @rowid OUTPUT' in procedure 'next_rowid' contains dynamic SQL. Consider using the WITH RESULT SETS clause to explicitly describe the result set.

Python (snippet of code with the issue):
import arcpy
strSDE = r'C:\WorkDoc\Dev\ArchiveRestore\ArchiveRestore\G02TLSNR_VECTOR.sde'
conn = arcpy.ArcSDESQLExecute(strSDE)
SQL = '''DECLARE @RowCount int\n'''
SQL += '''SET @RowCount = (SELECT COUNT(*) FROM GISTEST.VECTOR.ELEMBND_old WHERE GDB_TO_DATE > CONVERT(datetime,'2017-06-22 11:37:33', 20))\n'''
SQL += '''DECLARE @iterator INT\n'''
SQL += '''SELECT @iterator = MIN(OBJECTID_1) FROM GISTEST.VECTOR.ELEMBND_OLD WHERE GDB_TO_DATE > CONVERT(datetime,'2017-06-22 11:37:33', 20)\n'''
SQL += '''WHILE @iterator is NOT NULL\n'''
SQL += '''BEGIN\n'''
SQL += '''DECLARE @newid int\n'''
SQL += '''DECLARE @gid uniqueidentifier\n'''
SQL += '''EXEC GISTEST.sde.next_rowid 'VECTOR', 'ELEMBND_H', @newid OUTPUT\n'''
SQL += '''EXEC GISTEST.sde.next_globalid @gid OUTPUT\n'''
SQL += '''INSERT INTO GISTEST.VECTOR.ELEMBND_H (GDB_ARCHIVE_OID, GDB_TO_DATE, GLOBALID, OBJECTID, NAME, NAME_CODE, SCHLNUM, CREATED_USER, CREATED_DATE, LAST_EDITED_USER, LAST_EDITED_DATE, GDB_FROM_DATE, SHAPE)\n'''
SQL += '''SELECT @newid, CONVERT(datetime, '2017-06-22 11:37:33',20), @gid, OBJECTID, NAME, NAME_CODE, SCHLNUM, CREATED_USER, CREATED_DATE, LAST_EDITED_USER, LAST_EDITED_DATE, GDB_FROM_DATE, SHAPE\n'''
SQL += '''FROM GISTEST.VECTOR.ELEMBND_old\n'''
SQL += '''WHERE @iterator = OBJECTID_1;\n'''
SQL += '''SELECT @iterator = MIN(OBJECTID_1) FROM GISTEST.VECTOR.ELEMBND_OLD WHERE GDB_TO_DATE > CONVERT(datetime,'2017-06-22 11:37:33', 20) AND @iterator < OBJECTID_1;\n'''
SQL += '''END\n'''
try:
    sqlResult = conn.execute(SQL)
except Exception as err:
    print err
    sqlResult = False
if sqlResult:
    print "SQL insert command 2 successful"
else:
    print "SQL insert command 2 failed"
# Clean up
del conn

SQL:
DECLARE @RowCount int
SET @RowCount = (SELECT COUNT(*)
FROM GISTEST.VECTOR.ELEMBND_old
WHERE GDB_TO_DATE > CONVERT(datetime,'2017-06-22 11:37:33', 20))
DECLARE @iterator INT
SELECT @iterator = MIN(OBJECTID_1) FROM GISTEST.VECTOR.ELEMBND_OLD WHERE GDB_TO_DATE > CONVERT(datetime,'2017-06-22 11:37:33', 20)
WHILE @iterator is NOT NULL
BEGIN
DECLARE @oid int
DECLARE @gid uniqueidentifier
EXEC GISTEST.sde.next_rowid 'VECTOR', 'ELEMBND_H', @oid OUTPUT
EXEC GISTEST.sde.next_globalid @gid OUTPUT
INSERT INTO GISTEST.VECTOR.ELEMBND_H (GDB_ARCHIVE_OID,
GDB_TO_DATE,
GLOBALID,
OBJECTID,
NAME,
NAME_CODE,
SCHLNUM,
CREATED_USER,
CREATED_DATE,
LAST_EDITED_USER,
LAST_EDITED_DATE,
GDB_FROM_DATE,
SHAPE)
SELECT @oid,
CONVERT(datetime, '2017-06-22 11:37:33',20),
@gid,
OBJECTID,
NAME,
NAME_CODE,
SCHLNUM,
CREATED_USER,
CREATED_DATE,
LAST_EDITED_USER,
LAST_EDITED_DATE,
GDB_FROM_DATE,
SHAPE
FROM GISTEST.VECTOR.ELEMBND_old
WHERE @iterator = OBJECTID_1;
SELECT @iterator = MIN(OBJECTID_1) FROM GISTEST.VECTOR.ELEMBND_OLD WHERE GDB_TO_DATE > CONVERT(datetime,'2017-06-22 11:37:33', 20) AND @iterator < OBJECTID_1;
END
select * from GISTEST.VECTOR.ELEMBND_H
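The error message itself points at one possible workaround: since sde.next_rowid uses dynamic SQL internally and returns its value through an OUTPUT parameter rather than a result set, you may be able to tell the driver the call produces no result set at all. A hedged, untested sketch of how the two EXEC lines in the batch could be rewritten (the WITH RESULT SETS NONE clause is my assumption based on the error text, not something the linked article shows):

```python
# Hedged sketch: append WITH RESULT SETS NONE so StreamPrepareSQL does not
# try to infer metadata for the dynamic SQL inside the procedures.
exec_rowid = ("EXEC GISTEST.sde.next_rowid 'VECTOR', 'ELEMBND_H', "
              "@newid OUTPUT WITH RESULT SETS NONE;\n")
exec_globalid = "EXEC GISTEST.sde.next_globalid @gid OUTPUT WITH RESULT SETS NONE;\n"
```

If that doesn't satisfy ArcSDESQLExecute, running the batch through a plain ODBC connection (e.g. pyodbc) instead is another avenue, since SSMS already executes it fine.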
Posted 06-30-2017 08:53 AM

POST
Thanks for the great info, Vince. I am going to stick with one filegroup for now. Do you know if there is a benefit to having multiple files (on the same drive) in that filegroup, or should I just use a single large file?
Posted 06-27-2017 11:27 AM

POST
Thanks, that is similar to the information I have found for other extensions. I was hoping someone with hands-on experience could tell me what works best for the overall geodatabase.
Posted 06-27-2017 10:14 AM

POST
I am currently setting up the SQL Server database and will be testing the tool and making the changes within the month. If any changes are needed, I hope to have an update by the end of July 2017 at the latest.
Posted 06-27-2017 08:45 AM

POST
All, I have a question about filegroups in SQL Server and geodatabases. Is there any performance or other benefit to having multiple filegroups for things like vector data, indexes, and A/D tables, compared to having one filegroup with many files in it? I have checked the Esri help documents and searched the forums, but they are mute on what is better. They tell you how to set the dbtune keywords to use different filegroups, but not whether you should use them or how many. Some of the extensions, such as Data Reviewer and Workflow Manager, say to use 10 different filegroups (see Creating data files for the Reviewer workspace in SQL Server—Help | ArcGIS Desktop).

So, does anyone have experience with what the best practice is? Should I use one filegroup with, say, 10 files? Should I have a small set of filegroups with 2-3 files each for things like vector data, indexes, and versioning tables? Or should I use all 10 different filegroups as described in the help doc above?

Some notes about our system: We are using SQL Server 2016. We use different drives for the data files and log files, but all data files will be on the same drive. We are using synchronous AlwaysOn with 2 replicas. We will be using ArcGIS 10.4.1. Based on some tests, we estimate our database will be roughly 45-60 GB when everything is loaded. We will have versioning, archiving, and topologies in the database. Thanks, Kevin
Posted 06-27-2017 08:40 AM

POST
You did not assign the db_connection. Try:

# Create connection
db_connection = arcpy.ArcSDESQLExecute(server='172.16.200.16', instance='sde:sqlserver:172.16.200.16', database='BYS', user='cbssa', password='Adm18712')
# Execute stored procedure
sql = "EXEC dbo.MyStoredProcedure @id = uSP_DB_IslemYap_CELIKVANA"
db_connection.execute(sql)
Posted 05-15-2017 07:11 AM

POST
In the current release of ArcGIS Desktop, connecting and performing actions from a lower client against a higher geodatabase isn't normally a problem. However, in earlier releases it was. So I would first try upgrading your client to at least 9.3.1 to match your database, or even to version 10.x. Also remember that 9.3.1 does not support the most current versions of Oracle 11g and requires special patches, so you will need to make sure that your SDE is compatible with the database version you are using. If you can't upgrade, then the question is: has the compress worked in the past? If so, what has changed since it last worked? That would be the most likely problem area. Finally, read this post; it might be a bad index: Underlying DBMS error [ORA-29875: failed in the execution of the ODCIINDEXINSERT rout
Posted 05-12-2017 06:24 AM

POST
So, you have been using geodatabase archiving to keep a history of your data's changes. But you need to make a change to the feature class that requires archiving/versioning to be turned off, or worse, you need to change a different feature class in the same dataset. You turn off archiving, make the change, and go to turn it back on. Well poopy, your old archive history can't be reconnected, it sits in a separate feature class, and there is no tool to restore it. That was the issue I was facing with several of my feature classes. In addition, with a near-future database vendor switch, I was looking at losing all my archive history. So I wrote this tool, Archive Restore, to put the records from the old archive history back into the current archive history.

Both the old archive feature class and the parent feature class must reside in the same SDE database under the same data owner. Versioning and archiving must be enabled on the parent feature class for this tool to work. Old archive records with no end date (features that were current at the time archiving was turned off) will receive the end date of when archiving was turned back on. The tool allows for the addition and removal of fields: where a field was removed from the parent feature class, the field data from the old archive will not be copied over; where a field was added, the new field will be left null. The old archive feature class and the records inside it are not deleted during this process; they must be deleted manually after the tool has finished. This way you can keep the old archive if you want.

This tool was written in Python using arcpy for ArcGIS 10.4.1. I turned it into an ArcToolbox script so people who have limited Python experience can still use it. It has been tested against Oracle and Postgres. I hope to test it against SQL Server in the near future. If you use any other databases, feel free to provide any changes you need to make it work.

I strongly suggest testing this on some test data before running it against your live production database. While I have tested it in our environment, I can't guarantee that it will work in every environment. Once the old archive records are added to the new archive, they can't be removed without using SQL on the back end. Use at your own risk. Please let me know how it worked for you and your data. I hope it helps.
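The "records with no end date" step could look roughly like this hedged sketch (the dataset path is hypothetical, the open-row filter depends on whether your old archive stores open rows as NULL or as the 9999-12-31 sentinel, and arcpy is only importable inside an ArcGIS install, so the code is guarded):

```python
import datetime

# Hypothetical moment archiving was re-enabled.
REARCHIVE_TIME = datetime.datetime(2017, 5, 8, 12, 0, 0)

try:
    import arcpy
    # Close out open-ended history rows in the old archive copy.
    # Adjust the where clause if your open rows use the 9999-12-31 sentinel.
    with arcpy.da.UpdateCursor(r"C:\data\demo.gdb\parcels_OLD_ARC",  # hypothetical path
                               ["GDB_TO_DATE"],
                               "GDB_TO_DATE IS NULL") as cursor:
        for row in cursor:
            row[0] = REARCHIVE_TIME
            cursor.updateRow(row)
    updated = True
except ImportError:
    # arcpy is not available outside an ArcGIS installation.
    updated = False
```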
Posted 05-08-2017 11:59 AM

POST
I found that it gives the download file the same name as the layer name inside the mxd you publish. So if you change your layer names, it should change the download names.
Posted 05-01-2017 01:19 PM