POST
Let's try recreating the MXD. Open the corrupted MXD and select all the layers within the Table of Contents > right-click on one of the selected layers > Copy. Next, start a new MXD and right-click on the data frame > Paste Layers. Save this MXD with a new name and try exporting to a PDF using your script. Do you still receive the same error?
Posted: 07-29-2011 02:51 AM

POST
Does the data within your MXDs reside within an SDE geodatabase? If so, can you try this script on an MXD that contains data from a File or Personal geodatabase?
Posted: 07-28-2011 10:42 AM

POST
You can download the MFC71.dll here. Save this file to your 'C:\Windows\SysWOW64' folder and then try installing Python again.
Posted: 07-28-2011 07:10 AM

POST
You can use a wildcard to select the layers, for example by specifying the database name. Below is an example:

import glob
import arcpy
from arcpy import env, mapping

env.workspace = r"C:\MXDs"  # set this to the folder containing your MXDs
lstMXDs = glob.glob(env.workspace + "\\" + "*.mxd")
for mxdPath in lstMXDs:
    mxd = mapping.MapDocument(mxdPath)
    for df in mapping.ListDataFrames(mxd, ""):
        lstLayers = mapping.ListLayers(mxd, "*", df)
        for lyr in lstLayers:
            if "vector" in lyr.dataSource:
                print lyr.dataSource
                lyr.replaceDataSource("arcsde_DIRECT.sde", "SDE_WORKSPACE", "")
                print "Successfully updated data sources"
            else:
                print lyr.dataSource
                lyr.replaceDataSource("RASTER.sde", "SDE_WORKSPACE", "")
                print "Successfully updated data sources"
    mxd.save()
    del mxd

I'm using the wildcard "vector" to find all data sources in my VECTOR database and replacing the data source with my SDE direct connection. If the layer is not in my VECTOR database, I replace it with a new connection to my RASTER database.
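The branching above can be seen in isolation without arcpy. The sketch below is hypothetical (`pick_connection` is not an arcpy function; the connection-file names are taken from the post) and shows how the substring test routes each layer's data source to a connection file:

```python
def pick_connection(data_source):
    # Route a layer to a connection file based on whether its current
    # data source lives in the VECTOR database (substring test, as in
    # the post's 'if "vector" in lyr.dataSource' branch).
    if "vector" in data_source:
        return "arcsde_DIRECT.sde"
    return "RASTER.sde"

sources = [
    r"Database Connections\old.sde\vector.owner.roads",
    r"Database Connections\old.sde\raster.owner.dem",
]
connections = [pick_connection(s) for s in sources]
```

Note that the test is case-sensitive, so whether your data source strings contain "vector" or "VECTOR" matters.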
Posted: 07-28-2011 04:09 AM

POST
You will not have to add quotes around the 'mxds' variable. Because you are looping through a list that is comprised of strings, Python already knows the variable 'mxds' is a string. For example, you could simply write:

for mxds in mxdLst:
    print mxds + " is a map document"

You would not have to write:

for mxds in mxdLst:
    print str(mxds) + " is a map document"

In your code, try:

inMxd = arcpy.mapping.MapDocument(mxds)  # make the current mxd in the loop the map document
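The point about str() being redundant can be checked directly: calling str() on something that is already a string returns an equal string, so both spellings of the loop produce identical output (the list contents below are made up for illustration):

```python
# str() on a str is a no-op; wrapping the loop variable adds nothing
# when the list already contains strings.
mxdLst = ["parcels.mxd", "roads.mxd"]
plain = [m + " is a map document" for m in mxdLst]
wrapped = [str(m) + " is a map document" for m in mxdLst]
```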
Posted: 07-27-2011 12:21 PM

POST
If you want the map document to update, use "CURRENT" for the mxd. For example:

mxd = arcpy.mapping.MapDocument("CURRENT")

Then add at the bottom of your code:

arcpy.RefreshTOC()
arcpy.RefreshActiveView()

ListLayers always returns a Python list object, even if only one layer is returned. In order to return a Layer object, an index value must be used on the list (e.g., lyr = arcpy.mapping.ListLayers(mxd)[0]). More information can be found here.
Posted: 07-27-2011 11:22 AM

POST
Try the following:

mxd = arcpy.mapping.MapDocument(r"j:\mxdfiles\mxdname.mxd")
df = arcpy.mapping.ListDataFrames(mxd, "Layers")[0]
updateLayer = arcpy.mapping.ListLayers(mxd, "TestFC", df)[0]
sourceLayer = arcpy.mapping.Layer(r"j:\test\test.lyr")
arcpy.mapping.UpdateLayer(df, updateLayer, sourceLayer)
mxd.save()

I believe the problem is that you are not defining the data frame correctly.
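One common symptom of a wrong data frame name: ListDataFrames (and ListLayers) return a plain Python list, and if the wildcard matches nothing the list is empty, so the [0] index raises a bare IndexError. A small guard (a generic sketch, not an arcpy call) makes that failure explicit:

```python
def first_or_error(items, what):
    # arcpy's List* functions return plain Python lists; indexing [0]
    # on an empty result raises IndexError, so fail with a clearer
    # message instead.
    if not items:
        raise ValueError("No %s matched the wildcard" % what)
    return items[0]

df_name = first_or_error(["Layers"], "data frames")
```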
Posted: 07-27-2011 10:07 AM

POST
Try the following code:

import arcpy, glob, os

# Place the script in the same folder as the MXDs to get the current working directory
baseF = os.getcwd()
print baseF

# Set local variables
mxdLst = glob.glob(baseF + '\\' + '*.mxd')
mxdCnt = len(mxdLst)

# Print how many MXDs were found
print '\n' + 'Found ' + str(mxdCnt) + ' mxds for PDF exporting...'
print 'Directory: ' + str(baseF) + '\n'

# Loop to process each MXD into a PDF
for mxds in mxdLst:
    PDFr = mxds.replace('.mxd', '.pdf')  # replace the '.mxd' extension with '.pdf'
    print 'Exporting ' + str(mxds) + ' to:' + '\n' + str(PDFr)  # print the current mxd and its output pdf name
    inMxd = arcpy.mapping.MapDocument(mxds)  # make the current mxd in the loop the map document
    arcpy.mapping.ExportToPDF(inMxd, PDFr)  # export the map document to pdf
    print 'Done exporting: ' + str(PDFr)
    del inMxd

I specified 'baseF = os.getcwd()' first and then added the baseF variable to the glob call: mxdLst = glob.glob(baseF + '\\' + '*.mxd').
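One caveat about the extension swap: str.replace rewrites every occurrence of the substring, so a folder name that happens to contain 'mxd' would also be rewritten. A sketch of a safer alternative using os.path.splitext (the path below is made up for illustration):

```python
import os

def pdf_name(mxd_path):
    # Swap the extension with os.path.splitext so a folder name that
    # happens to contain 'mxd' (e.g. C:\mxdfiles) is left untouched.
    root, ext = os.path.splitext(mxd_path)
    return root + ".pdf"

# str.replace rewrites every occurrence, including the folder name:
naive = r"C:\mxdfiles\map.mxd".replace("mxd", "pdf")
safe = pdf_name(r"C:\mxdfiles\map.mxd")
```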
Posted: 07-27-2011 08:40 AM

POST
Would you be able to copy/paste your Python code? Be sure to wrap CODE tags around it to maintain the correct formatting and indentation.
Posted: 07-27-2011 08:23 AM

POST
I would post this question on the Python forum. You will be able to receive more targeted help there.
Posted: 07-25-2011 01:15 PM

POST
To create the FOR loop, you would need to write the script in IDLE or PythonWin. Once you have the script created, you can import it into your toolbox and execute it from ArcMap/ArcCatalog.
Posted: 07-25-2011 12:30 PM

POST
I used to work in Tech Support and came across this issue quite a few times. Here are queries you can run that will remove all the entries from the SDE, GDB, and base tables. These queries should only be run AFTER you create a database backup. ESRI rarely recommends editing tables directly within the geodatabase, but this is the only solution I've been able to find in these situations. As vangelo recommended, I would follow up with Tech Support before running these queries. Also, the queries below are for ArcSDE 9.x for SQL Server.

use <database>
select rastercolumn_id from sde.sde_raster_columns where table_name = '[table_name]'
select ID from sde.gdb_objectclasses where name = '[table_name]'
select layer_id from sde.sde_layers where table_name = '[table_name]'
--Note the three numbers returned by these queries, then run the following:
use <database>
delete from sde.sde_table_registry where table_name = '[table_name]'
delete from sde.sde_layers where table_name = '[table_name]'
delete from sde.gdb_rastercatalogs where objectclassid = [ID from the sde.gdb_objectclasses query]
delete from sde.gdb_objectclasses where Name = '[table_name]'
delete from sde.sde_raster_columns where table_name = '[table_name]'
delete from sde.sde_geometry_columns where f_table_name = '[table_name]'
DROP TABLE <owner>.<table_name>
DROP TABLE <owner>.sde_aux_<rastercolumn_id from the sde.sde_raster_columns query>
DROP TABLE <owner>.sde_bnd_<rastercolumn_id>
DROP TABLE <owner>.sde_blk_<rastercolumn_id>
DROP TABLE <owner>.sde_ras_<rastercolumn_id>
DROP TABLE <owner>.f<layer_id from the sde.sde_layers query>
DROP TABLE <owner>.s<layer_id>
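Because the table suffixes are easy to mistype, one could assemble the statements programmatically from the three IDs. The helper below is a hypothetical sketch (not an Esri tool) that just builds the statement strings from the SELECT results so nothing is typed by hand; it does not execute anything:

```python
def cleanup_statements(table_name, owner, objectclass_id, rastercolumn_id, layer_id):
    # Assemble the ArcSDE 9.x / SQL Server cleanup statements from the
    # IDs returned by the three SELECT queries in the post.
    fmt = {"t": table_name, "o": owner, "c": objectclass_id,
           "r": rastercolumn_id, "l": layer_id}
    return [
        "delete from sde.sde_table_registry where table_name = '%(t)s'" % fmt,
        "delete from sde.sde_layers where table_name = '%(t)s'" % fmt,
        "delete from sde.gdb_rastercatalogs where objectclassid = %(c)s" % fmt,
        "delete from sde.gdb_objectclasses where Name = '%(t)s'" % fmt,
        "delete from sde.sde_raster_columns where table_name = '%(t)s'" % fmt,
        "delete from sde.sde_geometry_columns where f_table_name = '%(t)s'" % fmt,
        "DROP TABLE %(o)s.%(t)s" % fmt,
        "DROP TABLE %(o)s.sde_aux_%(r)s" % fmt,
        "DROP TABLE %(o)s.sde_bnd_%(r)s" % fmt,
        "DROP TABLE %(o)s.sde_blk_%(r)s" % fmt,
        "DROP TABLE %(o)s.sde_ras_%(r)s" % fmt,
        "DROP TABLE %(o)s.f%(l)s" % fmt,
        "DROP TABLE %(o)s.s%(l)s" % fmt,
    ]
```

Review the generated list against a database backup before running any of it.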
Posted: 07-25-2011 10:38 AM

POST
You can create a script to do this. Below is an example:

import arcpy
from arcpy import env

env.workspace = r"C:\temp\python\test.mdb"
inputTable = "XY"
lstFields = arcpy.ListFields(inputTable, "Lat")
for field in lstFields:
    name = field.name
    precision = field.precision
    length = field.length
    fieldtype = field.type
    scale = field.scale
    domain = field.domain
    alias = field.aliasName

env.workspace = r"C:\temp\python\Test2.mdb"
lstTables = arcpy.ListTables("*")
for table in lstTables:
    arcpy.AddField_management(table, name, fieldtype, precision, scale, length, alias, "", "", domain)
    print "Successfully added field"

The code reads the name, precision, and other properties of the 'Lat' field from a table called 'XY' in the first geodatabase (test.mdb) and then adds a matching field to each table in the second geodatabase (Test2.mdb).
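The hand-off can be seen in isolation with a stand-in for arcpy's Field object (a hypothetical class, for illustration only): capture the source field's properties once, then reuse them for every AddField call. The argument order matters: name, type, precision, scale, length, alias, then the nullable/required flags, then domain.

```python
class Field(object):
    # Minimal stand-in for arcpy's Field object (hypothetical).
    def __init__(self, name, field_type, precision, scale, length, aliasName, domain):
        self.name = name
        self.type = field_type
        self.precision = precision
        self.scale = scale
        self.length = length
        self.aliasName = aliasName
        self.domain = domain

def add_field_args(field):
    # Pack the properties in the order AddField_management expects
    # after the table argument.
    return (field.name, field.type, field.precision, field.scale,
            field.length, field.aliasName, "", "", field.domain)

lat = Field("Lat", "DOUBLE", 10, 6, 8, "Latitude", "")
args = add_field_args(lat)
```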
Posted: 07-25-2011 10:24 AM

POST
I have not submitted this as a bug. Feel free to do so if you'd like; I'm not sure that I will have the time. Mosaic datasets are a hybrid of raster datasets and raster catalogs. Thanks to their overviews, they render just as fast as raster datasets, plus you receive the additional benefits that the mosaic dataset has to offer. I would recommend taking a look at the following video to see how capable and easy to use mosaic datasets are.
Posted: 07-21-2011 11:38 AM

POST
Looks like this may be a bug with the replaceDataSource method and raster datasets. However, this works with mosaic datasets. Have you considered migrating your large raster mosaics to mosaic datasets? This is a much more efficient way to manage your imagery at ArcGIS 10. The mosaic dataset makes it easy for you to manage, search, and discover imagery in collections of any size. It catalogs image and raster data sources, stores detailed metadata, and defines how imagery should be transformed into different products on the fly, without the need to preprocess. The mosaic dataset includes:

- Dynamic mosaicking for efficient handling of overlapping imagery
- On-the-fly processing to create multiple products from a single source
- Management of and accessibility to metadata
- Integration of multiple disparate image sensors and formats
- Streamlined data maintenance and updates of new imagery

The mosaic dataset also saves time and storage space. It simply reads the images from their native file locations and formats, so creating mosaic datasets is extremely fast. Loading large rasters into SDE that used to take hours will now take only seconds, and you will no longer have to allocate storage space in your SDE database to accommodate these large rasters.
Posted: 07-21-2011 08:34 AM