
The mechanism of Arcade attribute validation

by yockee (Frequent Contributor), 2 weeks ago

I have an attribute validation rule created in ArcGIS Pro. The rule checks a column for uniqueness. The feature class is then published as a feature service and consumed by the Field Maps app. This is the code:

// Skip the check when code_oid is not set.
var id = $feature.code_oid;

if (id == null || id == "") {
    return true;
}

// Open the same feature class through the rule's data store connection,
// requesting only the two fields the query needs (geometry excluded).
var fs = FeatureSetByName(
    $datastore,
    "db1.schema1.FC1",
    ["code_oid", "objectid"],
    false
);

// Exclude the feature being edited so updates don't collide with themselves.
var oid_self = $feature.objectid;

// @id and @oid_self are substituted from the Arcade variables above.
var others = Filter(fs, "code_oid = @id AND objectid <> @oid_self");

if (Count(others) > 0) {
    return {
        "errorMessage": "code_oid already exists."
    };
}

return true;

The validation works even when the edits come in through Field Maps. I am wondering:

1. How is this attribute validation rule enforced for edits made via Field Maps? Where is the rule actually stored?

2. "var others = Filter(fs, "code_oid = @ID AND objectid <> @oid_self");"

a. can this part be changed to other logic ? More like code refactoring. I am thinking of using a Postgres-like function like "EXISTS" to avoid scanning all rows.

b. Can the fields involved in the Filter here (code_oid and objectid) be indexed ? How to index them if they are stored in Datastore? 

3. "var fs = FeatureSetByName($datastore,"db1.schema1.FC1",["code_oid", "objectid"],false);"

a. Does this mean that the data are stored in the Datastore ? If yes, it will mean that its space will increase as the amount of data increasing. Where to check the physical location of Datastore ?

b. Does $datastore acts as pointer to database or it acts as storage of the table itself (ie: the data are copied from database into Arcgis Server) ?

Thanks


Accepted Solutions
by TimWestern (MVP Regular Contributor)

My understanding is that when you create attribute rules via ArcGIS Pro on a feature class (and I think this may apply to tables and relationships too), they get stored in the geodatabase's system tables. I don't think they are stored as a script file at rest on disk; they're stored as metadata in the database.
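One way to convince yourself the rules live with the dataset rather than in a loose script file: ArcPy can export them straight out of the geodatabase. A minimal sketch, assuming arcpy is available and that the connection file and feature class paths below are placeholders for your own:

import arcpy

# Placeholder path to the enterprise geodatabase connection and the
# feature class that carries the attribute rules.
fc = r"C:\connections\db1.sde\db1.schema1.FC1"

# Dump every attribute rule defined on the feature class to a CSV.
# The rules come out of the geodatabase itself, which shows they are
# stored with the data, not alongside it on disk.
arcpy.management.ExportAttributeRules(fc, r"C:\temp\FC1_rules.csv")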

1. So when you publish the feature class as a feature service, the service knows about these rules and, I believe, enforces them on edits.

I'm not sure what the disconnect is with Field Maps; doesn't Field Maps just send its edits to the same feature service? The feature service then executes any and all such rules as part of its normal operation.

If the rule returns some kind of error condition or object, the edit fails and that error is shown.

2. As to your concern about refactoring: I don't think Arcade handles this by loading all rows into memory; I think it defers a query to the underlying data source. All Filter does is add a WHERE clause to that query, unless I'm misunderstanding something.

I've gone back and forth on some similar ideas in my previous C# work, but could something like this work?

// Same deferred query as before; nothing is fetched yet.
var others = Filter(fs, "code_oid = @id AND objectid <> @oid_self");

// First() only needs to pull a single row, so this behaves like an
// EXISTS check: stop as soon as one match is found.
var first = First(others);

if (!IsEmpty(first)) {
    return {
        "errorMessage": "code_oid already exists."
    };
}
return true;




I think indexes provide a more robust performance boost than refactoring expressions alone. That said, if refactoring makes it clearer what the rule is doing, that's always a bonus, I think.
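On the indexing question: attribute indexes are created on the feature class itself, so they help no matter how the data is queried. A minimal ArcPy sketch, reusing the same placeholder connection path as above (the index name is made up for illustration):

import arcpy

fc = r"C:\connections\db1.sde\db1.schema1.FC1"

# Add an attribute index on code_oid so the uniqueness query can seek
# instead of scanning. objectid normally does not need one, since the
# geodatabase maintains an index on the ObjectID field automatically.
arcpy.management.AddIndex(
    fc,
    ["code_oid"],        # fields to index
    "idx_fc1_code_oid",  # index name (hypothetical)
    "NON_UNIQUE",        # or "UNIQUE" if the database should enforce it
    "ASCENDING",
)

I believe a DBA could equally create the index directly in PostgreSQL with CREATE INDEX; the geodatabase should pick it up either way.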


3. The $datastore global is not a data container; it is a handle to the same data source that holds the feature being edited. I think this could be an enterprise database like PostgreSQL, SQL Server, or Oracle, or a file geodatabase, or some other ArcGIS-compatible store. So it could be the connection to an RDBMS, or it could be a reference to an ArcGIS Data Store (relational), which might sit behind ArcGIS Online or Enterprise, I believe.


So because it's not the data itself, but a reference used to reach whatever is underlying, the size of the underlying storage can go up or down over time depending on what operations you perform.
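As for checking where the data physically lives (your 3a), one way from Python is to describe the dataset and its workspace. A sketch, again assuming arcpy and a placeholder connection path:

import arcpy

fc = r"C:\connections\db1.sde\db1.schema1.FC1"

desc = arcpy.Describe(fc)
# Full path to the dataset as ArcGIS sees it.
print(desc.catalogPath)

ws = arcpy.Describe(desc.path)
# "RemoteDatabase" means the rows live in the RDBMS, not on ArcGIS Server.
print(ws.workspaceType)
# For an enterprise geodatabase, connectionProperties should show which
# server and database actually hold the rows.
print(ws.connectionProperties.server)
print(ws.connectionProperties.database)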

Now, I don't know if the word "pointer" is quite right, because that term is typically reserved in programming for addresses in RAM (or perhaps cache), not for something like an IP address, socket, port, server name, folder path, or file name.

(Although in some cases these layers might be built in memory depending on what you are building, I wouldn't get hung up on the pointer-vs-reference distinction here.)


