POST
|
Hello, I'm relatively new to attribute rules, and I'm exploring batch calculation as an alternative to immediate calculation to speed up some bulk processing after large nightly data imports. I'm trying to create a batch calculation attribute rule that only needs to run on a subset of features in a feature class, and I'm curious whether there is a way to apply a filter to the attribute rule so it only evaluates features that match a SQL WHERE clause. My hope is that this would cut the batch calculation run time. So, for example, I'd like a batch calc attribute rule that does something like this: for batch calc attribute rule A, only run against rows in feature class B that match SQL WHERE clause C. For rows that match the WHERE clause, update one attribute value on the row; all other rows in the feature class are ignored. I guess the simple way to do this is to put an IIF() statement inside the attribute rule, but that means the rule still evaluates every row in the feature class. I already know this one attribute rule would only apply to features with specific values, which would drop the number of rows to assess from 100k+ to only 100 or so. I gather there is a Filter() function in Arcade, but I haven't seen an example of using it inside an attribute rule.
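Absent a documented pre-filter, the IIF() guard described above amounts to the following per-row logic. This is a plain Python sketch with hypothetical field names, not Arcade; in Arcade the equivalent would be a single IIf(condition, newValue, $feature.FIELD) expression:

```python
def compute_new_value(feature):
    # hypothetical calculation applied only to matching rows
    return "CALCULATED"

def batch_rule(feature):
    """Guarded batch-calculation logic: rows matching the WHERE-clause
    condition get a new value; every other row keeps its current value."""
    if feature["STATUS"] == "NEEDS_CALC":   # stands in for WHERE clause C
        return compute_new_value(feature)
    return feature["RESULT"]                # unchanged for non-matching rows

batch_rule({"STATUS": "NEEDS_CALC", "RESULT": None})  # 'CALCULATED'
batch_rule({"STATUS": "DONE", "RESULT": "OLD"})       # 'OLD'
```

Note the guard still runs once per row; it only avoids the expensive calculation, not the evaluation itself, which is exactly the concern raised above.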
01-25-2024
08:55 AM
|
0
|
0
|
214
|
POST
|
Hello, I'm relatively new to attribute rules. I'm building them into a new schema I'm developing to handle primary key population from a sequence using the NextSequenceValue() function. I've noticed that just by adding this one simple attribute rule, the time to bulk import features into the feature class slows significantly. For my test I used a sample line dataset of 1,000 rows. When importing the data through the Append tool with the rule disabled, the data loads in seconds; with the sequence attribute rule enabled, it takes about 1.5 minutes. And this is from Pro on a machine in the same datacenter as the Oracle geodatabase I'm loading into. 1.5 minutes is okay, but that's just 1,000 rows. With this schema, one major workflow is loading data from a 3rd party, and some of the 1:M tables related to the line feature class could receive thousands of rows each every day. I'm worried the load time with this attribute rule will be so long that this one job will hog our ETL scripting server all night. The same thing happens when loading the data via FME. I'm wondering if I should fall back to generating primary keys outside the Esri ecosystem. For example, creating a trigger at the database level to grab sequence values on row inserts, or running a query in the ETL tool to grab sequence values from the database upstream of the geodatabase import. These feature classes won't be versioned or archive enabled, so I don't think trigger population would be an issue. Although attribute rules are great for managing key attributes during transactional edits made by analysts throughout the day, they really slow down bulk loads. I'm curious whether folks out there have mitigation strategies for speeding up nightly bulk loads while keeping attribute rules intact for daily edits.
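One mitigation along the lines described above is to move key generation upstream into the ETL step: reserve a block of sequence values with a single database round trip, stamp them onto the incoming rows, and run the Append with the rule disabled. A rough, hypothetical sketch of the assignment step (`reserve_block` stands in for whatever one-shot query your database supports, and all field names are made up):

```python
def reserve_block(start, n):
    """Stand-in for a single database round trip that reserves n sequence
    values (e.g., bumping an Oracle sequence by n and taking the range)."""
    return list(range(start, start + n))

def stamp_primary_keys(rows, key_field, block):
    """Assign one reserved sequence value to each incoming row before the
    bulk Append, so the per-row attribute rule never has to fire."""
    for row, key in zip(rows, block):
        row[key_field] = key
    return rows

rows = [{"ASSET_ID": None, "NAME": "Line A"},
        {"ASSET_ID": None, "NAME": "Line B"}]
stamp_primary_keys(rows, "ASSET_ID", reserve_block(1001, len(rows)))
# rows now carry keys 1001 and 1002
```

The trade-off is that every load path (ETL, FME, manual Append) must go through this step, whereas the attribute rule enforces it centrally.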
01-23-2024
07:31 AM
|
0
|
0
|
184
|
IDEA
|
The enhancements implemented in arcpy.Describe for Pro 3.2 don't return a dateModified property for feature classes in my 10.9.1 Oracle geodatabase, nor for feature classes in my 10.9.1 Azure SQL Database. They do work for a file geodatabase feature class, though. So really the enhancement is that arcpy's Describe will return size and dateModified if the underlying geodatabase supports it... and enterprise geodatabases (at least at 10.9.1) don't. I assume this is because the SDE business tables don't contain any kind of auditing fields for date created or date modified on geodatabase items; I've hunted for something like this with no luck. I'd be curious whether the value would be returned by ArcPy if the Oracle geodatabase were at version 11.2, but I assume those columns would have to be added somewhere in the SDE business schema and populated to make the data available for ArcPy to grab. I could not find any documentation specific to what's new in enterprise geodatabase functionality at 11.2, so I can't verify this was done.
12-19-2023
07:48 AM
|
0
|
0
|
652
|
IDEA
|
Experience Builder excels at configuring an interactive web interface with buttons, pages, etc. Dashboards and Insights are great at data visualization, with more charting options. Some features overlap, but the bar charts in Experience Builder, for example, are much less sophisticated than those in ArcGIS Dashboards. So currently you have to make a choice between one or the other. It would be great if I could combine the two by embedding a Dashboard or Insights app inside Experience Builder with interactivity, so that actions in the Experience GUI, like filtering on a menu dropdown or selecting something, could be passed to the Dashboard or Insights app as parameters to filter its various charts. An example might be an Experience app for conveying information about a nationwide program, complemented by dashboards or Insights apps that have already been built. The Experience provides users a dropdown of states so they can filter the data in the Experience to a single state. The enhancement would allow the Experience to extend that state filter to the various widgets in the Dashboard or Insights app.
12-14-2023
07:15 AM
|
3
|
0
|
276
|
IDEA
|
Our ArcGIS Online org contains 1,000 groups, 2,500 users, and 30,000 items. I am trying to build BI reports on usage of and access to data within our org. The scheduled reports for items and users are great for understanding what's available and who owns it. What's missing is who has access to what, and ArcGIS Online groups are the relational key between users and the items they might use. It would be helpful to have an additional report that provides this information in bulk. How would I do this today? I've looked at the REST documentation, and unfortunately I'd have to make thousands of calls to get the membership and contents of each group. First I'd query the endpoint https://org.arcgis.com/sharing/rest/community/groups to retrieve an array of all the groups and their basic properties. These results don't contain info about member users or items shared to the group, so I need to call two separate endpoints for each group. For the member users, I need to loop over each group and query https://org.arcgis.com/sharing/rest/community/groups/<groupid>/userList , then re-query with pagination if the group's user count is above 100. For the items, I need to loop over each group and query https://org.arcgis.com/sharing/rest/content/groups/<groupid> , again re-querying with pagination if the item count is above 100. That's 2,000+ HTTP calls to the REST API, so a lot of back-and-forth latency and load on the API. What would the solution look like? I think it would be better to have a process within ArcGIS Online that can quickly extract this data in bulk and export it to some kind of report. I'd like the report to contain some info about each group (ID, title, isInvitationOnly, owner, tags, created, modified, access, protected, autoJoin, isOpenData, whether it is a shared update group, etc.).
Then it would also have a list of usernames that are members of each group, preferably with each user's member role in the group. Finally, the report would have a list of the item IDs for items shared to each group. Combined with the existing item and user reports in another tool like Excel or Power BI, this would be enough to analyze the relationships. Although all the existing reports are provided in CSV format, JSON seems better equipped to handle the arrays of usernames and item IDs for this kind of report. I'm not very familiar with CSV standards; maybe a comma-separated list of usernames or item IDs can be nested inside double quotes. Alternatively, maybe it needs to be three different CSV reports: the first with one row per group and the group's properties; the second listing usernames, the group IDs they are members of, and each user's member role in the group; the third listing item IDs and the groups they are shared to.
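To illustrate the per-group fan-out described above, here is a small sketch of the pagination arithmetic plus a hypothetical URL builder. The endpoint path is the one quoted in the post; the `start`/`num` paging parameters and the 1-based `start` are my assumptions about the Portal REST API, so verify against the documentation:

```python
from urllib.parse import urlencode

def page_params(total, page_size=100):
    """Yield (start, num) parameters for a paged Portal endpoint,
    assuming a 1-based 'start' parameter and a max page size of 100."""
    for start in range(1, total + 1, page_size):
        yield start, min(page_size, total - start + 1)

def user_list_url(org, group_id, start, num):
    # hypothetical helper for one page of the group userList call
    query = urlencode({"f": "json", "start": start, "num": num})
    return f"https://{org}/sharing/rest/community/groups/{group_id}/userList?{query}"

list(page_params(250))  # [(1, 100), (101, 100), (201, 50)]
```

With 1,000 groups this loop alone is at least 1,000 userList calls plus 1,000 content calls before any pagination, which is the scale problem the bulk report would remove.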
12-14-2023
06:52 AM
|
4
|
0
|
226
|
IDEA
|
The new Generate Schema Report tool in Pro is a great way to export the details of all objects in a geodatabase. However, depending on which output format you choose, the data types reported for each geodatabase item are something intended for code rather than the common names displayed to end users in the software. For example, I have a feature dataset in my geodatabase, and the Pro catalog interface displays its type with a space. When I run the tool with HTML output, it labels it as dataset type = 'FeatureDataset'. Notice there is no space. In a similar fashion, the JSON output of the tool gives the type as DEFeatureDataset. Similar things happen for the geometry types of feature classes (point, line, polygon, etc.) and for field types (OID, Geometry, String, Integer, Date, etc.). These are not the types users see in the Fields view of a feature class in Pro. Although I'm sure these kinds of names are great for coding and play their role in the SDK and backend, I'd prefer an additional property in the output with a proper human-readable label matching what my GIS analysts see when they look at data in Pro through the catalog interface. Having those end-user-friendly labels would help when using these reports in engagements with users about schema. I could convert all these values to the human-readable Pro UX labels myself if I knew where the definitive list exists, but it would be great if it were in the raw output.
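Lacking that definitive list, a best-effort conversion can get partway there by stripping the 'DE' (data element) prefix and splitting the CamelCase code into words. This mapping is my assumption, not an official Esri lookup, and acronym types like 'OID' would still need a real lookup table:

```python
import re

def humanize_type(raw):
    """Best-effort conversion of a schema-report type code to a display
    label: strip a leading 'DE' prefix, then insert a space before each
    interior capital. Assumed heuristic, not an official Esri mapping."""
    label = re.sub(r"^DE", "", raw)
    return re.sub(r"(?<!^)(?=[A-Z])", " ", label)

humanize_type("DEFeatureDataset")  # 'Feature Dataset'
humanize_type("FeatureDataset")    # 'Feature Dataset'
```

This handles the HTML ('FeatureDataset') and JSON ('DEFeatureDataset') variants the same way, which is why having the label in the raw output would still be the better fix.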
11-08-2023
06:34 AM
|
3
|
2
|
389
|
POST
|
I'm looking for a way to easily detect all feature classes in an enterprise geodatabase that are configured for branch versioning. Has anyone found a straightforward way to do this in an automated fashion? ArcPy's Describe provides a way to determine whether a feature class is versioned, but no way to distinguish traditional from branch versioning. That led me to the SDE business tables, where I found two possible candidates. I'm curious if anyone has more experience with these, since I can't find documentation on what each does. BRANCH_TABLES_MODIFIED is the first candidate. It has a BRANCH_ID column, with many rows having a zero in this column, and a REGISTRATION_ID that seems to be the same ID as in SDE.TABLE_REGISTRY. I assume BRANCH_ID = 0 equates to the default version for each table. So if I filter to BRANCH_ID = 0 and join to TABLE_REGISTRY, does that get me the comprehensive list? MULTIBRANCH_TABLES is another candidate, with a REGISTRATION_ID and a START_MOMENT date column, but I seem to get fewer tables back from it.
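The hypothesis above (filter BRANCH_TABLES_MODIFIED to BRANCH_ID = 0, join to TABLE_REGISTRY on REGISTRATION_ID) is a plain join. A sketch with stand-in rows, emphasizing that this mirrors the guess in the post and is not verified against SDE internals:

```python
def branch_versioned_tables(branch_rows, registry_rows):
    """Hypothesized detection: tables appearing in BRANCH_TABLES_MODIFIED
    with BRANCH_ID = 0, joined to TABLE_REGISTRY on REGISTRATION_ID.
    Unverified against actual SDE repository semantics."""
    default_ids = {r["REGISTRATION_ID"] for r in branch_rows if r["BRANCH_ID"] == 0}
    return sorted(r["TABLE_NAME"] for r in registry_rows
                  if r["REGISTRATION_ID"] in default_ids)

branch = [{"BRANCH_ID": 0, "REGISTRATION_ID": 7},
          {"BRANCH_ID": 12, "REGISTRATION_ID": 7},
          {"BRANCH_ID": 0, "REGISTRATION_ID": 9}]
registry = [{"REGISTRATION_ID": 7, "TABLE_NAME": "ROADS"},
            {"REGISTRATION_ID": 9, "TABLE_NAME": "PARCELS"},
            {"REGISTRATION_ID": 11, "TABLE_NAME": "POLES"}]
branch_versioned_tables(branch, registry)  # ['PARCELS', 'ROADS']
```

Running the equivalent SQL directly against the SDE schema would answer whether the list matches MULTIBRANCH_TABLES or is a superset of it.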
11-01-2023
12:52 PM
|
0
|
3
|
624
|
IDEA
|
Power BI has many out-of-the-box connectors to bring data from external sources (e.g., Salesforce) into its data model. It would be helpful if ArcGIS Online (and Enterprise) were one of these options. GIS layers are often the database of record for data assets in an organization, so they are useful for their tabular data in addition to the spatial aspect. That data then needs to be joined to information from other systems, like maintenance and permitting, to report information useful to the business, and Power BI modelers often need to add measures based on the data to create reports. The ArcGIS for Power BI visual is a great start, since it allows data from a model to be joined to GIS data for map visualization, but it prevents incorporating the data further upstream, where it could be processed in Power Query or incorporated into measure functions. As for the spatial column aspect, Power BI seems to honor GeoJSON as a data format, so it would be helpful if this proposed connector could bring the spatial geometry from the Esri service into a data model column. At the 2023 UC technical workshop "ArcGIS for Microsoft 365: An Overview", I saw that Esri is trying to accommodate this need by providing a Power Automate template workflow that uses the ArcGIS Power Automate premium connector to read a GIS service and write it to a CSV in OneDrive so it can then be imported into a Power BI model. Although this may suit some users' workflows, it feels like a workaround that adds extra data hops. Building a Power BI connector would further development along the same lines and make getting data from ArcGIS into Power BI models quick and easy from a single interface. Matthew Roche at Microsoft has a maxim about where data processing should best occur in the hops from source data to Power BI model: "Data should be transformed as far upstream as possible, and as far downstream as necessary."
If Power BI had a "Get Data from ArcGIS" connector, it would help modelers in Esri shops follow this principle. Here's a 20-minute presentation where he talks about it: Roche's Maxim of Data Transformation - SQLBits Presentation
07-14-2023
07:32 AM
|
5
|
0
|
672
|
IDEA
|
AGOL and Portal administrators need to understand dependencies between items. The REST API provides an endpoint on each item to get a list of all related items: the relatedItems endpoint (see documentation here). However, the list returned does not tell you what kind of relationship each one is. The endpoint does provide a parameter named "relationshipTypes" to filter which kinds of relationships are returned, so Esri has a finite list of these types, but the endpoint documentation does not list them. It would help me report dependencies if the relatedItems response were enhanced to include the relationshipType property for each related item, along with whether the relationship is backward or forward with respect to the item I'm querying from.
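Until the response itself carries the type, the workaround is to probe per type and direction. A sketch of the query builder; the parameter names reflect my reading of the documented endpoint, and 'Service2Data' is used as an example relationship type, so treat both as assumptions:

```python
from urllib.parse import urlencode

def related_items_url(org, item_id, relationship_type, direction="forward"):
    """Build a relatedItems query for one relationship type and direction.
    The complete list of types is exactly what this idea asks Esri to
    document and echo back in the response."""
    params = urlencode({"f": "json",
                        "relationshipType": relationship_type,
                        "direction": direction})
    return (f"https://{org}/sharing/rest/content/items/{item_id}"
            f"/relatedItems?{params}")

related_items_url("org.arcgis.com", "abc123", "Service2Data", "reverse")
```

Probing every type in both directions for every item multiplies the call count quickly, which is why a self-describing response would be the better design.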
06-26-2023
02:20 PM
|
1
|
0
|
680
|
POST
|
Thanks Johannes. I upvoted that idea. Agreed that this could be improved to be more seamless.
02-24-2023
06:53 AM
|
0
|
0
|
1334
|
POST
|
Thank you for the responses. @jcarlson 's response was the ticket. Wrapping the reference to the input FeatureSet's date column in Number() translated the date into a unix timestamp integer, which then output correctly in the return statement. Something else I noted: null dates in the source data get converted to zero when the Number() function is called, and on the output FeatureSet those zeros appear as Jan 1, 1970. To fix that I also had to wrap the Number() call in an IIf() and return null when the incoming value is zero. So the column request looks like this:
// If the incoming date value is zero, return null; else return the date value as a unix timestamp integer
MY_OUT_DATE_COLUMN: IIf(Number(f["MY_IN_DATE_COLUMN"]) == 0, null, Number(f["MY_IN_DATE_COLUMN"]))
02-24-2023
06:51 AM
|
0
|
0
|
1334
|
POST
|
I'm curious if anyone has had issues creating an Arcade data expression for Dashboards that returns a date column. I've noticed that if I include a date column in the FeatureSet I return at the end of the script, the output contains no rows. Based on my testing I believe this is a bug in the FeatureSet() function when the dictionary you pass to it contains a date column, but since Arcade is pretty new to me, maybe I'm doing something wrong. As an example, below are two samples of the same code based on a public-facing AGOL hosted feature layer. The first sample has the date column removed, so you can see it does return data. In the second sample the only difference is that the date column is included, but you'll notice if you run it that the results are blank. I tried this with two AGOL feature layers, so it doesn't seem to be a fluke with just one layer. I'm trying to follow along with the GitHub example on how to join tabular data to one of my layers and return a feature set (Link). Ultimately I want to slice and dice the layer in Arcade, but for this forum question I kept the script simple to show the issue. Here's sample script 1 with the date column commented out. If you run it in the Arcade playground you will see it returns data.
var portal = Portal("https://www.arcgis.com/");
var features = [];
var feat;
//Define input layer to read
var fs = FeatureSetByPortalItem(
portal,
"a400f4711f9443a9855340ee7b66890a",
0,
['DRAINAGE_ID','LOCATION','FME_DATE'],
false
);
//Loop over each hosted layer feature
//, pass subset of attribute values to the feat variable
//, then Push the feat object into the feature dictionary
for (var f in fs) {
feat = {
attributes: {
DRAINAGE_ID: f["DRAINAGE_ID"],
LOCATION: f["LOCATION"],
//FME_DATE: f["FME_DATE"],
}
}
Push(features,feat)
}
//Define schema for output dictionary
//and pass in the dictionary of output features
var joinedDict = {
fields: [
{name: "DRAINAGE_ID", type: "esriFieldTypeInteger"},
{name: "LOCATION", type: "esriFieldTypeString"},
//{name: "FME_DATE", type: "esriFieldTypeDate"}
],
'geometryType': '',
'features':features
};
return FeatureSet(Text(joinedDict));
Here is sample 2 that includes the date column, but when you run it the results will be blank:
var portal = Portal("https://www.arcgis.com/");
var features = [];
var feat;
//Define input layer to read
var fs = FeatureSetByPortalItem(
portal,
"a400f4711f9443a9855340ee7b66890a",
0,
['DRAINAGE_ID','LOCATION','FME_DATE'],
false
);
//Loop over each hosted layer feature
//, pass subset of attribute values to the feat variable
//, then Push the feat object into the feature dictionary
for (var f in fs) {
feat = {
attributes: {
DRAINAGE_ID: f["DRAINAGE_ID"],
LOCATION: f["LOCATION"],
FME_DATE: f["FME_DATE"],
}
}
Push(features,feat)
}
//Define schema for output dictionary
//and pass in the dictionary of output features
var joinedDict = {
fields: [
{name: "DRAINAGE_ID", type: "esriFieldTypeInteger"},
{name: "LOCATION", type: "esriFieldTypeString"},
{name: "FME_DATE", type: "esriFieldTypeDate"}
],
'geometryType': '',
'features':features
};
return FeatureSet(Text(joinedDict));
//If you instead return just the Text(joinedDict) you will see that the data looks valid
//, so it's just way the FeatureSet() function is transforming the data that seems to cause blank results
//return Text(joinedDict)
02-22-2023
06:59 AM
|
0
|
5
|
1470
|
IDEA
|
That's great, so yes that fulfills my idea. I'm on Pro 2.9 and didn't realize this option was added in 3.0. In parallel on the ArcGIS Enterprise side, does Enterprise 11.0 add the ability for the server to authenticate with an Azure SQL database in a similar way using a service account in Azure AD?
01-11-2023
07:46 AM
|
0
|
0
|
780
|
IDEA
|
When moving to Azure, many organizations implement Azure AD as a single source of authentication within the Azure environment. However, when connecting ArcGIS Pro to an Azure SQL Database or Managed Instance, the connection properties window does not give the user the option to choose "Azure Active Directory" for authentication. Please add this as an option so we can manage access to our geodatabases in Azure the same way we manage access to other Azure resources within our organization.
12-13-2022
01:41 PM
|
0
|
5
|
992
|
IDEA
|
Currently, if you want a Velocity analytic to write to a layer, the layer has to be owned by the same account that owns the analytic item. Please make this more flexible so Velocity can write to a layer owned by any account, as long as the analytic owner has edit rights to it through the typical AGO methods (shared through a group, a hosted view with edit rights enabled, admin access). This would bring Velocity layer management in line with how standard layers are managed, so the same rules apply regardless of what is editing the layer, and it would let teams with SOPs in place for layer editing avoid workarounds for workflows where Velocity is the service editing a layer. For example, I'm currently acting as the Velocity lead for my organization, but we have dozens of GIS data admins in our org managing GIS layers, maps, and apps for their teams. For a current project, I am building a real-time analytic that reads a data feed from a 3rd party, and I then want to slice and dice that data and split it up so each team has a layer that's a subset. Each team can then manage its layer as needed (symbology, sharing, etc.), and Velocity is just the data writer. However, because of this limitation in Velocity, if I'm going to be the one managing the Velocity services, I've also got to own all the output layers.
11-14-2022
11:15 AM
|
3
|
1
|
486
|