Q1: Is there a faster mechanism to create a point layer in ArcGIS Pro from a custom-formatted data file in a .NET add-in?
Currently:
I load the point data programmatically from a custom file format, first into a data structure.
I create a FeatureLayer in CurrentProject.DefaultGeodatabasePath (which seems to add it to the current map automatically).
I then iterate over the point data, creating each point with MapPointBuilder.CreateMapPoint(), queueing it with an EditOperation, and then committing with createOperation.ExecuteAsync().
Finally I apply a renderer.
All good, BUT the commit operation is very slow: e.g. 10k points takes 20-30 secs, and 100k points takes 3-5 minutes.
Q2: Often this is an ephemeral/temporary display layer - do I still need to create it in the geodatabase, or can I use some faster, memory-only, map-only mechanism?
For reference, on the same machine our stats package loads and displays 100k points on a map in 6-7 secs, and a QGIS plugin using the same mechanism loads and displays the 10k-point file in under 2 secs and the 100k-point file in 10-11 secs.
Abbreviated code:
FeatureLayer pointFeatureLayer = [... create empty feature layer using Geoprocessing.ExecuteToolAsync("CreateFeatureclass_management", ...) ...];
AddFields(); // using Geoprocessing.ExecuteToolAsync("management.AddFields", ...)

// add points & attributes:
var createOperation = new EditOperation();
createOperation.SelectNewFeatures = false;
[foreach source point record]
{
    featureAttribs = [.. get point attribute values ..];
    // create the geometry
    Coordinate3D newCoordinate = new Coordinate3D(x, y, z);
    MapPoint newMapPoint = MapPointBuilder.CreateMapPoint(newCoordinate, sref);
    // add the geometry as an attribute
    featureAttribs.Add("Shape", newMapPoint);
    // queue feature creation
    createOperation.Create(pointFeatureLayer, featureAttribs);
}
// then commit:
bool res = await createOperation.ExecuteAsync();
[create and apply CIMUniqueValueRenderer]
Hi,
A faster way is to create a shapefile and write to it without an EditOperation.
This can be done using CreateRowBuffer on the FeatureClass. More info here:
https://github.com/Esri/arcgis-pro-sdk/wiki/ProSnippets-Geodatabase
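A rough sketch of that row-buffer approach, based on the ProSnippets patterns (the geodatabase path, dataset name, field names, and the `sourcePoints` collection are placeholders - adapt to your data; this must run on the MCT via QueuedTask.Run):

```csharp
await QueuedTask.Run(() =>
{
    using (var gdb = new Geodatabase(new FileGeodatabaseConnectionPath(new Uri(@"C:\Data\scratch.gdb"))))
    using (var fc = gdb.OpenDataset<FeatureClass>("testFC"))
    {
        string shapeField = fc.GetDefinition().GetShapeField();
        // ApplyEdits wraps the inserts in a single transaction, avoiding
        // per-feature EditOperation overhead
        gdb.ApplyEdits(() =>
        {
            foreach (var pt in sourcePoints) // your parsed custom-format records
            {
                using (var buffer = fc.CreateRowBuffer())
                {
                    buffer[shapeField] = MapPointBuilder.CreateMapPoint(pt.X, pt.Y);
                    buffer["Field0"] = pt.SomeValue; // attribute fields as needed
                    using (var feature = fc.CreateRow(buffer)) { }
                }
            }
        });
    }
});
```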
If this custom file format is something you use frequently, you might want to consider writing a PlugIn datasource (Conceptual Doc, Guide). This would allow you to read and display the data in the map without needing to copy it into another format.
If you want to stick with the current copying idea, shapefiles are good, but an in-memory database is probably better. Instructions for creating an in-memory database are here. Once the database is created, you can create a feature class and then copy data into it. If you go the route of an in-memory database, keep in mind that it is destroyed when ArcGIS Pro closes. So if you create a layer from an in-memory database, save it in a map, and restart Pro, that layer will come back broken.
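For reference, creating the memory geodatabase and a feature class through the DDL API looks roughly like this (a sketch assuming the Pro SDK 2.9+ DDL classes; the dataset and field names are illustrative):

```csharp
await QueuedTask.Run(() =>
{
    // Connect to / create the session's memory geodatabase
    var memoryProps = new MemoryConnectionProperties(); // defaults to "memory"
    using (var gdb = SchemaBuilder.CreateGeodatabase(memoryProps))
    {
        // Describe a point feature class and its fields
        var shapeDesc = new ShapeDescription(GeometryType.Point, SpatialReferences.WGS84);
        var fields = new List<FieldDescription>
        {
            new FieldDescription("Field0", FieldType.String),
            new FieldDescription("Field0_num", FieldType.Double)
        };
        var fcDesc = new FeatureClassDescription("testFC", fields, shapeDesc);

        // Queue and execute the DDL operation
        var sb = new SchemaBuilder(gdb);
        sb.Create(fcDesc);
        sb.Build();
    }
});
```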
I hope this helps,
--Rich
Thanks for those suggestions - shapefiles have some limitations, I think, so I will look at the in-memory database. It is not clear to me from the documentation whether I can use my existing geoprocessing-style code with an in-memory geodatabase, or whether I need to move all the code over to the DDL API?
Jack,
If you just want to stick with geoprocessing, here is a link for using geoprocessing with in-memory geodatabases (you don't even have to create it first).
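In other words, geoprocessing tools accept a "memory" workspace path directly as the output location, so no prior create-geodatabase step is needed. A minimal sketch (tool arguments abbreviated; "testFC" is a placeholder name):

```csharp
// Writing a new feature class straight into the memory workspace
var args = Geoprocessing.MakeValueArray("memory", "testFC", "POINT");
IGPResult gpResult = await Geoprocessing.ExecuteToolAsync("CreateFeatureclass_management", args);
```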
--Rich
Thanks Rich - are you aware of any C# examples? I am struggling to get geoprocessing to work with a memory geodatabase.
Jack,
Would you mind posting some code snippets so that we can see what (if anything) you are doing wrong?
--Rich
I got this working today, actually. Using the memory GDB has approximately doubled the performance, which is certainly an improvement. I'll post my working test-harness code for reference, but I would love to know if you can see how to improve it significantly.
For example, the geoprocessing call that creates the empty feature class seems to take longer than QGIS takes to create and display a 100k-point layer - surely I am doing something wrong here?
Also, I would be interested to see what the best-practice equivalent would look like in the DDL API.
Thanks
// Works :
protected async Task MemDB_Geoprocessing()
{
    string gdbPath = "memory";
    //string gdbPath = CoreModule.CurrentProject.DefaultGeodatabasePath;
    string featureclassName = "testFC";
    string featureclassType = "POINT";
    int spatialRefID = 20351; // Z51
    var activeMap = MapView.Active.Map;

    // 1) Create the feature class table - use GP tools
    var sr = SpatialReferenceBuilder.CreateSpatialReference(spatialRefID);
    List<object> geoProcessingArgs = new List<object> { gdbPath, featureclassName, featureclassType, "", "DISABLED", "DISABLED", sr };
    GPExecuteToolFlags flags = GPExecuteToolFlags.AddOutputsToMap;
    IGPResult result = await Geoprocessing.ExecuteToolAsync("CreateFeatureclass_management",
        Geoprocessing.MakeValueArray(geoProcessingArgs.ToArray()), null, null, null, flags);

    // 2) Set up the attribute structure
    int numFields = 64 / 2; // 64 fields in total, half text and half numeric
    var fields = new Dictionary<string, string>();
    for (int fldCnt = 0; fldCnt < numFields; fldCnt++)
    {
        fields.Add($"Field{fldCnt}", "Text # 255 myDefaultValue");
        fields.Add($"Field{fldCnt}_num", "DOUBLE # # #");
    }
    AddFields(gdbPath + @"\" + featureclassName, fields);

    // 3) Add some test point features to the feature layer
    var pointFeatureLayer = activeMap.FindLayers(featureclassName)[0] as FeatureLayer;
    var featureAttribs = new Dictionary<string, object>();
    Random rnd = new Random();
    var createOp = new EditOperation() { Name = "Generate points", SelectNewFeatures = false };
    for (long pointCnt = 0; pointCnt < 50 * 1000; pointCnt++) // for each new point feature
    {
        featureAttribs.Clear();
        // geometry - random-ish coords
        Coordinate2D newCoordinate = new Coordinate2D(320000 + rnd.Next(-1000, 1000), 7140000 + rnd.Next(-1000, 1000));
        featureAttribs.Add("Shape", MapPointBuilder.CreateMapPoint(newCoordinate));
        // attribute values
        for (int fldCnt = 0; fldCnt < numFields; fldCnt++)
        {
            featureAttribs.Add($"Field{fldCnt}", $"SomeText{fldCnt}");
            featureAttribs.Add($"Field{fldCnt}_num", fldCnt);
        }
        createOp.Create(pointFeatureLayer, featureAttribs); // queue feature creation
    }

    // commit features
    bool ok = await createOp.ExecuteAsync(); // execute the batch edit operation
    if (!ok) throw new Exception(createOp.ErrorMessage);
    await Project.Current.SaveEditsAsync();
    await MapView.Active.ZoomToAsync(pointFeatureLayer, false);
}

private void AddFields(string tableNamePath, Dictionary<string, string> fields)
{
    string fieldNames = "";
    foreach (KeyValuePair<string, string> item in fields)
    {
        fieldNames = String.Format("{0}{1} {2}; ", fieldNames, item.Key, item.Value);
    }
    List<object> geoProcessingArgs = new List<object> { tableNamePath, fieldNames };
    var t = Task.Run(async () =>
    {
        // await the tool, otherwise AddFields can return before the fields exist
        var result = await Geoprocessing.ExecuteToolAsync("management.AddFields", Geoprocessing.MakeValueArray(geoProcessingArgs.ToArray()));
    });
    t.Wait();
}
Hi,
The faster way is to avoid EditOperation, as I wrote in my first post, and to use an in-memory feature class instead of a shapefile.
Another alternative is to use an InsertCursor. @RichRuh knows more about the results, as he commented on that post.
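If it helps, the InsertCursor alternative looks roughly like this (a sketch only - InsertCursor requires a recent Pro SDK version, around 3.1+, and the dataset name, field names, and `sourcePoints` collection are placeholders):

```csharp
await QueuedTask.Run(() =>
{
    using (var gdb = new Geodatabase(new MemoryConnectionProperties()))
    using (var fc = gdb.OpenDataset<FeatureClass>("testFC"))
    using (var cursor = fc.CreateInsertCursor())
    using (var buffer = fc.CreateRowBuffer())
    {
        string shapeField = fc.GetDefinition().GetShapeField();
        foreach (var pt in sourcePoints) // your parsed custom-format records
        {
            buffer[shapeField] = MapPointBuilder.CreateMapPoint(pt.X, pt.Y);
            cursor.Insert(buffer); // buffered insert, no per-row transaction
        }
        cursor.Flush(); // push any buffered rows to the feature class
    }
});
```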