
Performance penalty when using the Beta 3 version of the API

03-23-2011 12:16 PM
SasaI_, Emerging Contributor
I tested the performance of the Beta 3 API release and noticed a huge difference in writing and deleting geometries in the database.  I wrote two simple test programs based on the Editing sample that write 25k points to a table and then delete them.  As my test numbers show, it takes almost 5x as long to write and delete the points in Beta 3 as it did in Beta 2.  Is there a reason for such a huge difference between the two APIs?

The times from my test programs:

Beta2 version:

Testing with 25,000 points.
Finished inserting rows, time taken: 4.531s
Finished deleting all rows, time taken: 3.371s
Total time taken: 7.904s

Beta3 version:

Testing with 25,000 points.
Finished inserting rows, time taken: 19.388s
Finished deleting all rows, time taken: 18.013s
Total time taken: 37.404s


The testing programs were based on the Editing sample.  They write to a clean geodatabase/table.  The Beta 2 version uses the raw buffer (Allocate + memcpy) while the Beta 3 version uses PointShapeBuffer (Setup + GetPoints) for geometry creation.  Both use table.Delete(row) to remove a row.
6 Replies
SasaI_, Emerging Contributor
Just to add to the above findings: I tested writing lines this morning and found similar issues.  The speed isn't largely affected by the number of points written; the slowdown scales mainly with the number of features written (this is true for both the Beta 2 and Beta 3 releases, and makes perfect sense).

Beta 2 Timings:

Wrote 10000 polylines to the database (total of 25006 points) in 2,554.00ms
Wrote 10000 polylines to the database (total of 59854 points) in 2,462.00ms

Beta 3 Timings:

Wrote 10000 polylines to the database (total of 25005 points) in 9,871.00ms
Wrote 10000 polylines to the database (total of 60040 points) in 10,348.00ms
VinceAngelo, Esri Esteemed Contributor
Which Visual Studio? 32-bit or 64-bit? Are you using the Release or Debug libraries?

Are the jobs run on the same machine with identical load?

Why do the Beta-2 and Beta-3 passes have different numbers of vertices? 

How large are the directories before you start delete operations?

What happens if you use the Beta-2 shape creation protocol with the Beta-3 API DLL (one variable test)?

- V
SasaI_, Emerging Contributor
Visual Studio 2008 v9.0.30729.4462 QFE (32-bit) running on Windows 7 64-bit.  Projects are compiled as 32-bit projects against the Debug libraries.  Jobs are run on the same machine with an identical load.  The reason for the different number of vertices in the second run is that the test creates 10k polylines, each with 1 part and a random number of vertices per part (2-3 in the first run for both, 2-10 in the second run).

For the sake of comparing apples to apples, I re-wrote the test to write 10k polylines, each with 1 part.  The first test writes 2 vertices per part for a total of 20k points; the second writes 6 vertices per part for a total of 60k points.  I've posted both the results and the code I use to test.  I'd be more than happy to hear about any improvements I can make to the code, as I'm definitely not a highly experienced C++ programmer (I come from a C# background).  As with the previous tests, both tests write to the same copy of the database (I created a clean original and make a copy of it for each test run).  I've run the test a number of times, each time receiving results that vary by less than 0.5s in the case of Beta 3, and by around 0.25s for Beta 2.  Apparently I didn't install the X64 Compilers and Tools when I installed VS2008, so I can't test with 64-bit until I install that component, but when I do I will re-run the test.

Here are the results:

Beta 2:

Wrote 10000 to the database (total of 20000 points) in 1.955s
Finished deleting all rows, time taken: 1.385s

Wrote 10000 to the database (total of 60000 points) in 2.057s
Finished deleting all rows, time taken: 1.422s


Beta 3:

Wrote 10000 to the database (total of 20000 points) in 8.042s
Finished deleting all rows, time taken: 7.591s

Wrote 10000 to the database (total of 60000 points) in 8.094s
Finished deleting all rows, time taken: 7.729s



Here's the code snippet I use to create the lines in beta 2:

int numPoints = 6;
int numParts = 1;
int partIdx = 0;
unsigned long length = 48 + numPoints * 16;
int totPoints = 0;
int totLines = 10000;
int shapetype = 3;

wcout << "Testing with " << totLines << " lines." << endl;

Row r;
ShapeBuffer geom;
clock_t start = clock();
for (int i=0; i<totLines; i++) {
 table.CreateRowObject(r);

 r.SetString(L"TextField", L"");
 r.SetDouble(L"DoubleField1", 0.0);
 r.SetDouble(L"DoubleField2", 0.0);
 r.SetNull(L"DateField");
 r.SetInteger(L"IntegerField1", 0);
 r.SetInteger(L"IntegerField2", 0);

 geom.Allocate(length);
 memcpy(geom.shapeBuffer, &shapetype, sizeof(shapetype)); // Shapetype
 memcpy(geom.shapeBuffer + 36, &numParts, sizeof(numParts)); // Number of parts
 memcpy(geom.shapeBuffer + 40, &numPoints, sizeof(numPoints)); // Number of points
 memcpy(geom.shapeBuffer + 44, &partIdx, sizeof(partIdx)); // First (and only) part index

 double minx = 180, miny = 90, maxx = -180, maxy = -90;

 rand(); // discard the first value from the generator
 for (int j=0; j<numPoints; j++) {
  totPoints++;

  double x = -180.0 + rand() * (360.0) / RAND_MAX;
  double y = -90.0 + rand() * (180.0) / RAND_MAX;

  if (x < minx) minx = x;
  if (x > maxx) maxx = x;
  if (y < miny) miny = y;
  if (y > maxy) maxy = y;

  memcpy(geom.shapeBuffer + 48 + j*2*sizeof(double), &x, sizeof(double)); // X
  memcpy(geom.shapeBuffer + 48 + j*2*sizeof(double)+sizeof(double), &y, sizeof(double)); // Y
 }

 // Copy the extent
 memcpy(geom.shapeBuffer + 4, &minx, sizeof(double));
 memcpy(geom.shapeBuffer + 12, &miny, sizeof(double));
 memcpy(geom.shapeBuffer + 20, &maxx, sizeof(double));
 memcpy(geom.shapeBuffer + 28, &maxy, sizeof(double));

 geom.inUseLength = length;
 r.SetGeometry(geom);

 // Store the row.
 if ((hr = table.Insert(r)) != S_OK)
 {
  wcout << "An error occurred while inserting a row." << endl;
  wcout << "Error code: " << hr << endl;
  return -1;
 }
}


clock_t delstart = clock();
wcout << "Wrote " << totLines << " to the database (total of " << totPoints << " points) in " << (delstart - start)/1000.0 << "s" << endl;


And the similar code for beta 3:

int numPoints = 6;
int numParts = 1;
int partIdx = 0;
int totPoints = 0;
int totLines = 10000;
int shapetype = 3;

wcout << "Testing with " << totLines << " lines." << endl;

Row r;
MultiPartShapeBuffer geom;
wstring errorText; // used by ErrorInfo::GetErrorDescription below

clock_t start = clock();
for (int i=0; i<totLines; i++) {
 table.CreateRowObject(r);

 r.SetString(L"TextField", L"");
 r.SetDouble(L"DoubleField1", 0.0);
 r.SetDouble(L"DoubleField2", 0.0);
 r.SetNull(L"DateField");
 r.SetInteger(L"IntegerField1", 0);
 r.SetInteger(L"IntegerField2", 0);

 hr = geom.Setup(shapetype, numParts, numPoints, 0);

 // Set the start index of the single part.
 int* parts;
 hr = geom.GetParts(parts);
 parts[0] = partIdx;

 Point* points;
 hr = geom.GetPoints(points);

 for (int j=0; j<numPoints; j++) {
  totPoints++;

  points[j].x = -180.0 + rand() * (360.0) / RAND_MAX;
  points[j].y = -90.0 + rand() * (180.0) / RAND_MAX;
 }

 geom.CalculateExtent();

 r.SetGeometry(geom);

 // Store the row.
 if ((hr = table.Insert(r)) != S_OK)
 {
  wcout << "An error occurred while inserting a row." << endl;
  ErrorInfo::GetErrorDescription(hr, errorText);
  wcout << errorText << "(" << hr << ")." << endl;
  return -1;
 }
}

time_t delstart = clock();
wcout << "Wrote " << totLines << " to the database (total of " << totPoints << " points) in " << (delstart - start)/1000.0 << "s" << endl;
SasaI_, Emerging Contributor
More test results.  This time I've compiled for both x86 and x64, against both the Debug and Release library versions.  The tests write 10k lines, each with 1 part and 6 vertices, then delete all lines.

Beta 2:


Win32 Debug:

Wrote 10000 to the database (total of 60000 points) in 1.894s
Finished deleting all rows, time taken: 1.372s
 

x64 Debug:

Wrote 10000 to the database (total of 60000 points) in 1.078s
Finished deleting all rows, time taken: 0.806s


Win32 Release:

Wrote 10000 to the database (total of 60000 points) in 0.424s
Finished deleting all rows, time taken: 0.397s


x64 Release:

Wrote 10000 to the database (total of 60000 points) in 0.311s
Finished deleting all rows, time taken: 0.317s


Beta 3:


Win32 Debug:

Wrote 10000 to the database (total of 60000 points) in 7.247s
Finished deleting all rows, time taken: 7.061s


Win32 Release:

Wrote 10000 to the database (total of 60000 points) in 4.394s
Finished deleting all rows, time taken: 4.734s


x64 Debug:

Wrote 10000 to the database (total of 60000 points) in 5.436s
Finished deleting all rows, time taken: 5.285s


x64 Release:

Wrote 10000 to the database (total of 60000 points) in 4.898s
Finished deleting all rows, time taken: 5.301s
LanceShipman, Esri Regular Contributor
Thank you for identifying this issue which was also picked up by our own internal performance tests.  We've duplicated the problem and have a solution.  The issue was due to the addition of the IsEditable check to block the modification of features with advanced geodatabase behavior.  The fix will be available in the Final code drop for the File API.
SasaI_, Emerging Contributor
Thank you, Lance.  I figured it might have something to do with the IsEditable check, as there was nothing else mentioned in the Beta 3 updates that should have had any effect on the code.  I'm glad it's a bug that will be fixed, and not just something we have to live with.