POST
I don't know whether the multipatch helper functions have a problem or not, but I would like to point out a simple solution: don't rely on them. They don't do much of anything all that useful anyway. You should be able to 'roll your own' very effectively by reading and understanding the doc. Especially with multipatches, a thorough understanding of the doc is essential.
03-08-2012 11:40 AM

POST
These issues and others have been addressed in version 1.2 of the API, which is going through the release certification process as we speak. I don't know the exact release date, but it will be very soon. Don't be concerned - this project is very much alive.
02-28-2012 06:42 AM

POST
At the present time, the File Geodatabase does not support the concept of a view. It has been discussed, and could be added without a huge amount of difficulty. However, as of yet, there has been no decision to go forward with this idea.
02-27-2012 08:02 AM

POST
There should be no difference in the behavior of Next() between a table and a feature class. What are the steps to reproduce this problem? This is not something that we have seen in the past.
02-21-2012 07:32 AM

POST
If a call to Next does not return a row, it returns S_FALSE to indicate that fact. This is the expected behavior. I don't know why you are getting an access violation. This will require some investigation.
02-16-2012 08:01 AM

POST
The samples for the .NET wrapper are written in C#. At this time, there are no samples for VB.NET. I think that you will find that referring to the C# samples should be sufficiently instructive to help you write VB code.
01-30-2012 07:15 AM

POST
It would be very undesirable for the spatial index grid sizes to be recalculated automatically each time you added more features. That would add an enormous amount of overhead and would be unlikely to yield any improvement in the index itself.

Ideally, the calculated grid sizes should strike a balance between too large and too small. If the grid size is too large, the index is not sufficiently selective: each grid cell contains an excessive number of features, which means a large number of false positives can be returned from the index scan. If the grid size is too small, the index becomes very large, because each grid cell intersected by a feature's envelope must hold a separate reference to that feature. This slows down the index scan, which must do far more file I/O to read through the index.

The grid size calculation works best when the size distribution of the features currently in the feature class is representative. Ideally, all the features should be present before the algorithm runs. If only some of the features are present, and they are not representative of the true size distribution, the calculation will be thrown off. For that reason, the best workflow is to enter load-only mode, add all of your features, and then exit load-only mode. Exiting load-only mode triggers the grid size calculation and the loading of the index.

Here is a brief description of how the algorithm works. The envelope of each feature is obtained. We look at the delta X and delta Y of the envelope and use the larger of the two. The resulting delta values undergo a logarithmic transformation, and the transformed values are used to build a histogram representing the size distribution of the features. The histogram is statistically analyzed, and the initial grid size is selected at approximately the 66th percentile of the size distribution. Depending on the number of features beyond the 66th percentile, a second and even a third grid size might be calculated. The algorithm does a reasonably good job, and is vastly superior to the situation that existed prior to 9.2. I hope this helps in understanding how the index works.
12-20-2011 03:02 PM

POST
This is being looked at for a future release. Rather than simply use the curve structure definitions that are in the shape doc, it will probably be better to have a class hierarchy for the three curve types. This avoids the ugliness of the nested unions shown in the doc. One of the challenges is that, when serialized in the shape buffer, the three curve types are different sizes, and the curves are packed in the buffer with no padding bytes. That means it is not possible to cast the byte array containing the curves into an array of curve objects.
11-09-2011 08:57 AM

POST
I believe that your calculations are correct. It looks as though there is an error in the doc.
10-20-2011 10:45 AM

POST
This is a well-known issue with multi-user concurrent access to File Geodatabases. Even with ArcObjects it is a common occurrence. The scenario typically involves read-only access by several client processes (often via a server) conflicting with occasional write access by the database administrator. The several readers remain active nearly constantly, which effectively blocks the writer.

In ArcGIS 10.1, a new capability has been added to improve this situation: lock request queuing. This behavior is optional and is switched on by changing a setting in the locking system. It adds the notion of multiple retries for lock requests. Instead of simply failing when a lock conflict exists, the lock request is added to a queue. At user-controlled time intervals, the request is retried a specified number of times. Since read locks are generally short-lived, there is a good chance that a write lock request will succeed on retry. Once the write lock request is queued, it takes priority over any subsequent read lock requests. This new capability does not yet exist in the FileGDB API, but it could be added in a future release.

In the meantime, there is another technique that some users have employed successfully: keep two versions of the File Geodatabase. One is the database that is made publicly available via a server; this database should be placed on a shared folder. The second is the one the administrator uses to maintain the data. The second database is not publicly accessible, so the admin can make whatever updates are required without having to worry about lock conflicts. Once the second database has been updated, you un-share the public folder and share the updated version. This happens within milliseconds. Any readers connected to the original data will be "kicked off", but they can re-acquire the connection to the new version. Hopefully, I have explained this adequately.
09-07-2011 08:42 AM

POST
Go into the File Geodatabase API help system and read the following: the Shapefile Technical Description and the Extended Shapefile Format. I can't really point you to specific parts of these documents; you need to read and understand them in their entirety. Once you have done that, refer back to the ShapeBuffer accessor functions and I think they will make a lot more sense to you.
08-29-2011 10:17 AM

POST
Why not just use the C++ version of the API? I am not sure what advantage there would be in adding a COM interop to the .NET version of the API.
08-29-2011 10:05 AM

POST
I highly recommend that you read the document describing the shape buffer, which is included in the API help system. This is essential reading. If you do not understand the details, the likelihood of errors is very high.
08-23-2011 09:33 AM

POST
Might I suggest running Upgrade on the pre-10.x File Geodatabases? The reason the API requires 10.x File Geodatabases is that all of the dataset schema information returned by the API is in XML format. With 10.x data, the XML already exists in the system tables. With pre-10.x data, it would all have to be generated from scratch, and that would require a large amount of code.
08-17-2011 12:50 PM