Hi guys,
I published a very simple geoprocessing tool from ArcGIS Pro v3.1 to ArcGIS Server v11.1. The inputs are fixed for testing purposes, but when I execute it from a JavaScript application with the Maps SDK v4, it returns very different results every time. At times it even returns empty data. The processing time is around 2 or 3 seconds, so it is much faster than any timeout.
The model uses the "Extract Values to Table" tool, and the input features and input rasters parameters are fixed. When I run it within ArcGIS Pro, it produces a table in a file geodatabase with 7092 rows. However, when I call it from the JavaScript application, it sometimes returns around 120, 650 or 1200 rows (features), or even zero rows.
Here is the code snippet (ArcGIS Maps SDK for JavaScript v4):
const params = {
  Extract_Values: "outputTable"
};

geoprocessor.submitJob(gpUrl, params)
  .then((jobInfo) => {
    const jobId = jobInfo.jobId;
    console.log("ArcGIS Server job ID: ", jobId);
    const options = {
      interval: 1500,
      statusCallback: (j) => {
        console.log("Job Status: ", j.jobStatus);
      }
    };
    jobInfo.waitForJobCompletion(options).then(() => {
      if (jobInfo.jobStatus === "job-succeeded") {
        jobInfo.fetchResultData("Extract_Values")
          .then((result) => {
            const cellValues = result.value.features.map(item => item.attributes.Value);
            // it shows 120, 650 or 1200 rows, or even empty
            console.log(cellValues);
          });
      }
    });
  })
  .catch((error) => {
    console.error("Error:", error);
  });
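In case the server says something on the short runs, I also dump the job messages. This is just a throwaway sketch; summarizeJobMessages is a name I made up, and the { type, description } message shape is what I assume comes back from the REST API:

```javascript
// Hypothetical helper: pull warnings and errors out of a GP job's message list.
// Message shape assumed from the ArcGIS REST API: { type, description }.
function summarizeJobMessages(messages) {
  const interesting = ["esriJobMessageTypeWarning", "esriJobMessageTypeError"];
  return (messages || [])
    .filter((m) => interesting.includes(m.type))
    .map((m) => `${m.type}: ${m.description}`);
}

// Example with made-up messages:
const sample = [
  { type: "esriJobMessageTypeInformative", description: "Executing..." },
  { type: "esriJobMessageTypeWarning", description: "Empty output" }
];
console.log(summarizeJobMessages(sample));
// logs ["esriJobMessageTypeWarning: Empty output"]
```

So far the messages on my short runs look clean, which is part of why I'm confused.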
The results would look like:
{
  paramName: "Extract_Values",
  dataType: "GPRecordSet",
  value: {
    exceededTransferLimit: false,
    displayName: "",
    fields: [
      {name: "OID", type: "esriFieldTypeOID", alias: "OID"},
      {name: "Value", type: "esriFieldTypeDouble", alias: "Value"},
      {name: "SrcID_Feat", type: "esriFieldTypeInteger", alias: "SrcID_Feat"},
      {name: "SrcID_Rast", type: "esriFieldTypeInteger", alias: "SrcID_Rast"}
    ],
    // there could be 120, 650 or 1200 features, or an empty []
    features: [
      {attributes: {OID: 1, Value: ...., SrcID_Feat: ..., SrcID_Rast: ...}},
      {attributes: {OID: 2, Value: ...., SrcID_Feat: ..., SrcID_Rast: ...}},
      ...
    ]
  }
}
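Since I know the correct answer is 7092 rows, I log each run against that count with a small helper. checkResultRows is a hypothetical name; it just assumes the GPRecordSet shape shown above:

```javascript
// Hypothetical sanity check against the known-good row count (7092 in Pro).
// Assumes the GPRecordSet result shape shown above.
function checkResultRows(result, expectedRows) {
  const features = (result.value && result.value.features) || [];
  return {
    rows: features.length,
    truncated: result.value ? !!result.value.exceededTransferLimit : false,
    complete: features.length === expectedRows
  };
}

// Example with a made-up, obviously short result:
const status = checkResultRows(
  { value: { exceededTransferLimit: false, features: [{}, {}] } },
  7092
);
console.log(status); // { rows: 2, truncated: false, complete: false }
```

Note that exceededTransferLimit is false in my output, so truncation by the service's maximum record count doesn't look like the cause, but I log it alongside the count anyway.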
Any thoughts about why the results vary every time?
When you test the model in Pro, is it writing to a local drive or a network drive?
I ask because I had Python scripts that used to write to a network drive, but after an upgrade they started returning inconsistent results like what you are seeing. When I changed the scripts to write to a local drive, I got the same correct results every time.
I hope this info helps your troubleshooting.
Thanks for your tip. It's a network drive that ArcGIS Pro reads the data from and writes it to. But what I'm talking about is the geoprocessing service on ArcGIS Server, which returns different results every time. So, from the ArcGIS Server standpoint, it's a local drive on the server. It returns the same results when it's executed within ArcGIS Pro.
I'm totally lost now. Even when I submit a job from the ArcGIS REST Services Directory in my browser (Firefox), it returns different results every time. So I don't think my JavaScript code is the culprit. Are there settings or environments I need to configure when I publish the geoprocessing service? Are there any ArcGIS Server settings that affect how the geoprocessing service runs? Is there anything to remember or pay attention to when migrating from ArcGIS Desktop 10 & ArcGIS Server 10 to ArcGIS Pro 3 & ArcGIS Server 11 when it comes to geoprocessing services?
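To rule the SDK out completely, I've also been scripting the same test I do in the Services Directory with plain fetch against the REST endpoint. This is only a sketch; I inject the fetchJson function so the polling loop can be exercised without a live server, and the status strings are the REST API's esriJobSucceeded/esriJobFailed values rather than the SDK's "job-succeeded":

```javascript
// Hypothetical SDK-free polling loop against a GP service's REST endpoint.
// fetchJson is injected, e.g. (url) => fetch(url).then((r) => r.json()),
// so the loop itself doesn't depend on a live server.
async function pollGpJob(gpUrl, jobId, fetchJson, intervalMs = 1500, maxTries = 40) {
  for (let i = 0; i < maxTries; i++) {
    const info = await fetchJson(`${gpUrl}/jobs/${jobId}?f=json`);
    if (info.jobStatus === "esriJobSucceeded" || info.jobStatus === "esriJobFailed") {
      return info;
    }
    // still esriJobSubmitted / esriJobExecuting: wait and poll again
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish after ${maxTries} polls`);
}
```

Running this a few times from Node and diffing the row counts still shows the variation, which again points at the service or server side rather than either client.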
In the script that installs ArcGIS Server, there are two lines like this:
ulimit -n 65536
ulimit -u 65536
I'm not sure what these lines do (I'm not the person who wrote the install script), but I wonder: do these lines limit the memory that ArcGIS Server can consume?