I am trying to create a new "Cartway" feature class by running the Buffer tool on a centerline feature class with ~750,000 features in ArcGIS 10.1 SP1. The tool is run with basic parameters (side type "FULL", end type "ROUND", dissolve option "ALL"), and background geoprocessing is disabled.
The problem is that the tool sits at 100% completion and never finishes. However, the ArcCatalog.exe process is still working (13% CPU, i.e. 100% of one of the 8 cores). I even let it run over the weekend and it is still not done.
The data and the scratch workspace are all on solid state drives (SSDs). There are no "spinners" in this workstation to slow things down, so there should be no bottleneck there. The workstation is a Dell Precision T5500 with the following specifications:
2 x Intel Xeon X5677 (2 physical quad-core CPUs, 8 physical cores total, HT disabled).
2 x NVIDIA Quadro 600 GPUs
120GB Corsair V12 SSD
120GB Corsair Force GT SSD
Any ideas as to how I can get this to finish?
Sounds like you've hit some "internal" problem with the Buffer tool rather than anything to do with your high-spec machine. As a workaround, could you write a simple model that iterates over batches of, say, 5,000 features, buffers each batch, and then does a final merge?
I had to do this recently when buffering 400,000 points; it was bombing out as you describe. When I batched the process it ran slower, but it did complete.
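A minimal sketch of the batching idea in plain Python (the ObjectID list and 5,000-feature batch size are illustrative; each batch would still be buffered with the geoprocessing tools and the results merged at the end):

```python
def batches(oids, size=5000):
    """Split a list of ObjectIDs into fixed-size batches.

    Each batch could then be selected (e.g. by an OBJECTID range query),
    buffered on its own, and the outputs merged in a final step.
    """
    for start in range(0, len(oids), size):
        yield oids[start:start + size]

# Example: 12,000 IDs fall into batches of 5,000 / 5,000 / 2,000.
chunks = list(batches(list(range(1, 12001))))
print([len(c) for c in chunks])  # [5000, 5000, 2000]
```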
The process has not "bombed"; it is still running. In Task Manager the ArcCatalog.exe process is using 13% CPU, which is 100% of one core since it is a single-threaded application (100% / 8 cores ≈ 13%). The amount of RAM used by the process also fluctuates rapidly (between 500 MB and 650 MB).
So it is definitely still working.
Splitting up and then merging is not an option because of the way the Buffer tool dissolves intersections to remove overlapping polygons. This has to be done as a single process (we do not have time to manually correct thousands of overlapping polygons).
Have you tried using the old-style Buffer Wizard? I find it outperforms the Buffer tool in the toolbox.
To add the Buffer Wizard to ArcMap, go to Tools > Customize > Commands tab. In the Categories window on the left, scroll down and select Tools. From the Commands window on the right, select the Buffer Wizard tool, then drag and drop the tool onto any toolbar displayed in the ArcMap window.
Worth a try
We have encountered this problem too. We solved it by buffering without the dissolve option and then running the Dissolve tool afterwards. This worked for us.
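In geoprocessing-tool terms, that two-step approach might look like the sketch below. The paths and the 25-foot distance are placeholders, and `arcpy` is only imported inside the function since it requires an ArcGIS Desktop Python install:

```python
def buffer_then_dissolve(in_fc, buffered_fc, dissolved_fc,
                         distance="25 Feet"):
    """Buffer with dissolve_option='NONE', then dissolve separately."""
    import arcpy  # only available with ArcGIS Desktop's Python

    # Step 1: plain per-feature buffers, no dissolve ("NONE") --
    # much cheaper than dissolving everything inside the Buffer tool.
    arcpy.Buffer_analysis(in_fc, buffered_fc, distance,
                          "FULL", "ROUND", "NONE")

    # Step 2: dissolve the overlapping buffer polygons in one pass.
    arcpy.Dissolve_management(buffered_fc, dissolved_fc)
```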
This sped up my process significantly!
Thank you for the suggestions. I tried running the Buffer Wizard in ArcMap, which ran for over 8 hours; when I came back to the office in the morning, there was an error message saying "ArcMap has crashed".
I then tried running the Buffer tool again in ArcCatalog without the dissolve option checked. This worked; however, when attempting to run the Dissolve tool on the output I got an "Invalid Topology" error. I then ran the Repair Geometry tool on the output and re-ran the Dissolve tool, but received the same invalid-topology error.
Can someone from Esri please help?
Are you buffering features that create some insanely large multi-part buffer? Maybe you could upload an image of what you are trying to buffer? You may find that batching up your buffering (as I suggested) overcomes this problem.
We need to create 25-foot buffers for the entire state of New Jersey for our land base. We are using 2013 TeleAtlas centerline data as the input. It does not have to be multi-part.
I am open to the idea of doing it in batches, but wouldn't there then be overlap that would require another dissolve?
Why does it matter how many features are run through the tool? The software should be able to handle large datasets; it is almost 2015 and 4 TB hard drives are less than $150. I have posted in the past pleading for Esri to rework their core product's most basic tools to bring them into the 21st century, but we still continually face these types of issues when working with large datasets.
As I understand it, the underlying building blocks of Desktop are 32-bit. So no matter how much memory, storage capacity, or CPU you have, the software will never be able to utilise it all. Your buffering may be creating some monster geometry that it cannot handle. If you batch it up into smaller batches, with the extra overhead of running the Dissolve tool, yes it will be slower, but it should run to completion, with a final merge at the end.
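The address-space ceiling behind that 32-bit point is easy to quantify: a 32-bit pointer can address at most 2^32 bytes, and on Windows a 32-bit user-mode process typically gets only about half of that, regardless of installed RAM:

```python
# A 32-bit pointer can address at most 2**32 bytes = 4 GiB.
addressable = 2 ** 32
print(addressable // 2 ** 30)  # 4 (GiB)

# On Windows, a 32-bit process normally gets ~2 GiB of that for
# user data -- the practical ceiling for a geoprocessing tool,
# no matter how much physical RAM the machine has.
```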
At the risk of sounding flippant: in the time and effort you have spent researching on this forum, you could have done it all by now with a fairly simple model or Python script.
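A hedged sketch of such a script, combining the batching and two-step-dissolve suggestions above (the OBJECTID field name, paths, and batch size are assumptions; `arcpy` is imported inside the function because it needs ArcGIS Desktop's Python):

```python
def batch_buffer(in_fc, out_fc, workspace, max_oid,
                 distance="25 Feet", size=5000):
    """Buffer in OBJECTID batches, merge the pieces, dissolve once."""
    import arcpy  # only available with ArcGIS Desktop's Python

    pieces = []
    for start in range(1, max_oid + 1, size):
        # Select one batch of features by OBJECTID range.
        where = "OBJECTID >= {0} AND OBJECTID < {1}".format(
            start, start + size)
        layer = "batch_{0}".format(start)
        arcpy.MakeFeatureLayer_management(in_fc, layer, where)

        # Buffer this batch with no dissolve ("NONE").
        piece = "{0}/buf_{1}".format(workspace, start)
        arcpy.Buffer_analysis(layer, piece, distance,
                              "FULL", "ROUND", "NONE")
        pieces.append(piece)

    # Merge all batch outputs, then dissolve once to remove the
    # overlaps between batches (answering the overlap concern above).
    merged = "{0}/buf_merged".format(workspace)
    arcpy.Merge_management(pieces, merged)
    arcpy.Dissolve_management(merged, out_fc)
```

The single Dissolve at the end handles both within-batch and between-batch overlaps, so no manual correction of overlapping polygons should be needed.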