IDEA
Experience Builder excels at configuring an interactive web interface with buttons, pages, etc. Dashboards and Insights are great at data visualization, with more charting options. Some features overlap, but the bar charts in Experience Builder, for example, are much less sophisticated than those in ArcGIS Dashboards. So currently you have to choose between one or the other. It would be great if I could combine the two by embedding a Dashboard or Insights app inside Experience Builder with interactivity. Actions on the Experience GUI, like filtering on a menu dropdown or selecting something, could then be passed to the Dashboard or Insights app as parameters to filter its various charts. An example might be an Experience app for conveying information about a nationwide program, where dashboards or Insights apps that complement the story have already been built. The Experience provides users a dropdown of states so they can filter the data in the Experience to a single state. The enhancement would allow the Experience to extend that state filter to the various widgets in the Dashboard or Insights app.
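A partial approximation today is passing filters through Dashboard URL parameters. Here is a minimal sketch that just builds such an embed URL; the dashboard item ID is a placeholder, and the parameter name `state` is hypothetical (it would have to be configured as a URL parameter on the dashboard itself):

```python
from urllib.parse import quote

def dashboard_embed_url(dashboard_id: str, state: str) -> str:
    """Build a Dashboard embed URL that passes a state filter via a
    URL parameter. The 'state' parameter name is hypothetical and must
    be configured on the dashboard; the item ID is a placeholder."""
    base = f"https://www.arcgis.com/apps/dashboards/{dashboard_id}"
    return f"{base}#state={quote(state)}"

url = dashboard_embed_url("0123456789abcdef0123456789abcdef", "New Mexico")
```

An Experience Builder dropdown action cannot currently rewrite an embedded app's URL on the fly, which is the gap this idea is asking Esri to close.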
Posted 12-14-2023 07:15 AM

IDEA
Our ArcGIS Online org contains 1,000 groups, 2,500 users, and 30,000 items. I am trying to build BI reports on usage of and access to data within our org. The scheduled reports for items and users are great for understanding what's available and who owns it. However, what's missing is who has access to what, and ArcGIS Online groups are the relational key between users and the items they might use. It would be helpful to have an additional report that provides this information in bulk.

How would I do this today? I've looked at the REST documentation, and unfortunately I'd have to make thousands of calls to get the membership and contents of each group. First I'd query the endpoint https://org.arcgis.com/sharing/rest/community/groups to retrieve an array of all the groups and their basic properties. These results don't contain info about the users that are members or the items shared to the group. Because of that, I need to call two separate endpoints for each group. For the member users, I need to loop over each group and query https://org.arcgis.com/sharing/rest/community/groups/<groupid>/userList , then re-query with pagination if the group's user count is above 100. For the items, I need to loop over each group and query https://org.arcgis.com/sharing/rest/content/groups/<groupid> , then also re-query with pagination if the item count is above 100. That's 2,000+ HTTP calls to the REST API, so a lot of back-and-forth latency and load on the API.

What would the solution look like? I think it would be better to have a process within ArcGIS Online that can quickly extract this data in bulk and export it to some kind of report. I'd like the report to contain some info about each group (ID, Title, isInvitationOnly, Owner, tags, created, modified, access, protected, autoJoin, isOpenData, whether it is a shared update group, etc.).
Then it would also have a list of usernames that are members of each group, preferably with each user's member role in the group. Finally, the report would have a list of the item IDs for items shared to the group. When combined with the existing item and user reports in another tool like Excel or Power BI, this would be enough to analyze the relationships. Although all the existing reports are provided in CSV format, for this kind of report JSON seems better equipped to handle the arrays of usernames and item IDs. I'm not very familiar with CSV standards, so maybe there is a way to nest a comma-separated list of usernames or item IDs inside using double quotes around the list. Alternatively, maybe it needs to be three different CSV reports: the first has one row per group with the group's properties; the second is a report of usernames, the group IDs they are members of, and each user's member role in the group; the third is a report of item IDs and the groups they are assigned to.
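The scale of the per-group approach described above can be sketched with a small request-count estimate (the function is illustrative, not an actual API client; 100-record pages are assumed per the endpoints' default behavior):

```python
import math

def estimate_request_count(num_groups: int, avg_members: int, avg_items: int,
                           page_size: int = 100) -> int:
    """Count HTTP requests for the loop-over-groups approach:
    one paged group search, plus a paged userList call and a paged
    group-content call for every group."""
    group_search = math.ceil(num_groups / page_size)
    member_calls = num_groups * max(1, math.ceil(avg_members / page_size))
    item_calls = num_groups * max(1, math.ceil(avg_items / page_size))
    return group_search + member_calls + item_calls

total = estimate_request_count(num_groups=1000, avg_members=50, avg_items=30)
```

With 1,000 groups this lands just over 2,000 requests even when no group needs pagination, which matches the 2,000+ figure above and shows why a bulk export would be a big win.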
Posted 12-14-2023 06:52 AM

IDEA
The new Generate Schema Report tool in Pro is a great way to export the details of all objects in a geodatabase. However, depending on which output format you choose, the data types provided for each geodatabase item are something more intended for code rather than the common name displayed in the software to end users. For example, I have a feature dataset in my geodatabase, and the Pro catalog interface shows its type with a space in the name. When I run the tool for an HTML output, it labels it as dataset type = 'FeatureDataset'; notice there is no space. In a similar fashion, the JSON output of the tool gives the type as DEFeatureDataset. Similar things happen for the geometry types of feature classes (point, line, polygon, etc.) and field types (OID, Geometry, String, Integer, Date, etc.). These are not the types users see when they look at the Fields view of a feature class in Pro. Although I'm sure these type names are great for coding and play their role in the SDK and backend, I'd prefer an additional property in the output with a proper human label that matches what all my GIS analysts see when they are looking at data in Pro through the catalog interface. Having those end-user-friendly labels will help in using these reports in engagements with users about schema. I guess I could convert all these values to whatever the human-readable Pro UX label is, if I knew where the definitive list exists, but it would be great if it was in the raw output.
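Until such a property exists, the conversion mentioned above could be a hand-maintained lookup over the report's type codes. A sketch with a few assumed mappings (the label wording is my guess at the Pro UI text, not an official list):

```python
# Partial, hand-maintained mapping from schema-report type codes to
# human-readable labels. Label wording is assumed, not official.
FRIENDLY_LABELS = {
    "DEFeatureDataset": "Feature Dataset",
    "DEFeatureClass": "Feature Class",
    "esriFieldTypeOID": "Object ID",
    "esriFieldTypeString": "Text",
    "esriFieldTypeInteger": "Long",
    "esriFieldTypeDate": "Date",
}

def friendly(type_code: str) -> str:
    """Return the human label for a type code, falling back to the
    raw code when the mapping doesn't know it."""
    return FRIENDLY_LABELS.get(type_code, type_code)
```

The fallback matters because without a definitive list from Esri, any hand-built table will have gaps.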
Posted 11-08-2023 06:34 AM

POST
I'm looking for a way to easily detect all feature classes in an enterprise geodatabase that are configured with branch versioning. Has anyone found a straightforward way to do this in an automated fashion? Arcpy's Describe provides a way to determine if a feature class is versioned, but no way to distinguish between traditional and branch. That's led me to look at the SDE business tables, and I found two possible candidates. I'm curious if anyone has had more experience with these, since I can't find documentation on what each does. BRANCH_TABLES_MODIFIED is the first candidate. It has a BRANCH_ID column, with many of the rows having a zero in this column. Then there is a REGISTRATION_ID, which seems to be the same ID as on SDE.TABLE_REGISTRY. I assume that BRANCH_ID = 0 equates to the default version for each table. So if I filter to BRANCH_ID = 0 and join to TABLE_REGISTRY, does that get me the comprehensive list? MULTIBRANCH_TABLES is another candidate, with a REGISTRATION_ID and a START_MOMENT date column. I seem to get fewer tables back from this.
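The join being proposed can be sketched with toy tables; SQLite stands in for the geodatabase's RDBMS here, and the table shapes are assumed from the column names in the post (the real SDE business tables have more columns and live in the enterprise database):

```python
import sqlite3

# Toy versions of the SDE business tables; column names from the post,
# everything else assumed. SQLite stands in for the real RDBMS.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TABLE_REGISTRY (REGISTRATION_ID INTEGER, TABLE_NAME TEXT);
CREATE TABLE BRANCH_TABLES_MODIFIED (BRANCH_ID INTEGER, REGISTRATION_ID INTEGER);
INSERT INTO TABLE_REGISTRY VALUES (1, 'PARCELS'), (2, 'ROADS');
INSERT INTO BRANCH_TABLES_MODIFIED VALUES (0, 1), (5, 1), (0, 2);
""")

# Filter to the assumed default-version rows (BRANCH_ID = 0) and join
# to the registry to recover table names.
rows = con.execute("""
    SELECT DISTINCT r.TABLE_NAME
    FROM BRANCH_TABLES_MODIFIED b
    JOIN TABLE_REGISTRY r ON r.REGISTRATION_ID = b.REGISTRATION_ID
    WHERE b.BRANCH_ID = 0
    ORDER BY r.TABLE_NAME
""").fetchall()
branch_versioned = [name for (name,) in rows]
```

One caveat with this approach: if BRANCH_TABLES_MODIFIED only records tables that have actually been edited, a freshly registered but never-edited branch-versioned class could be missed, which may explain the row-count differences versus MULTIBRANCH_TABLES.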
Posted 11-01-2023 12:52 PM

IDEA
Power BI has many out-of-the-box connectors to bring data from external sources (e.g., Salesforce) into its data model. It would be helpful if ArcGIS Online (and Enterprise) were one of these options. Often GIS layers are the database of record for data assets in an organization, so they are useful for their tabular data in addition to the spatial aspect. That data then needs to be joined to information from other systems, like maintenance and permitting, to report information useful to the business. Often Power BI modelers need to add measures based on the data to create reports. The ArcGIS for Power BI visual is a great start, since it allows data from a model to be joined to GIS data for map visualization. But it prevents incorporating the data further upstream, where it could be processed in Power Query or incorporated into measure functions. As for the spatial column aspect, Power BI seems to honor GeoJSON as a data format, so it would be helpful if this proposed connector could bring the spatial geometry from the Esri service into a data model column. At the 2023 UC technical workshop "ArcGIS for Microsoft 365: An Overview", I saw that Esri is trying to accommodate this need by providing a Power Automate template workflow that uses the ArcGIS Power Automate premium connector to read a GIS service and write it to a CSV in OneDrive so it can then be imported into a Power BI model. Although this may be better for some users' workflows, it feels like a workaround that requires additional data hops. Building a Power BI connector would further development along the same lines, making it quick and easy to get data from ArcGIS into Power BI models from a single interface.
Matthew Roche at Microsoft has a maxim about where data processing should occur in the hops from source data to Power BI model: "Data should be transformed as far upstream as possible, and as far downstream as necessary". If Power BI had a "Get Data from ArcGIS" connector, that would help modelers in Esri shops follow this principle. Here's a 20-minute presentation where he talks about it: Roche's Maxim of Data Transformation - SQLBits Presentation
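To show what bringing geometry into a model column could involve, here is a minimal sketch converting a single Esri JSON point feature to GeoJSON (toy input; real services have polylines, polygons, and spatial-reference handling that are omitted here):

```python
def esri_point_to_geojson(feature: dict) -> dict:
    """Convert one Esri JSON point feature to a GeoJSON Feature.
    Points only; other geometry types and spatial reference
    transformation are deliberately out of scope for this sketch."""
    geom = feature["geometry"]
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [geom["x"], geom["y"]]},
        "properties": feature.get("attributes", {}),
    }

# Toy feature in the Esri JSON shape returned by feature service queries.
esri_feat = {"attributes": {"NAME": "Site 1"}, "geometry": {"x": -106.6, "y": 35.1}}
gj = esri_point_to_geojson(esri_feat)
```

A native connector would presumably do this translation internally, which is exactly the "as far upstream as possible" processing Roche's maxim argues for.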
Posted 07-14-2023 07:32 AM

IDEA
AGOL and Portal administrators need to understand dependencies between items. The REST API provides an endpoint on each item to get a list of all the related items: the relatedItems endpoint (see the documentation). However, the list returned does not tell you what kind of relationship each item is. The endpoint does provide a parameter named "relationshipTypes" to filter what kinds of relationships you want returned, so Esri has a finite list of these types, but the endpoint documentation does not list them. It would help me report dependencies if the relatedItems response were enhanced to include the relationship type for each result, and whether the relationship is backwards or forwards with regard to the item I'm querying from.
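A workaround today is to query once per relationship type and direction, then tag each result yourself on the client. Here is a sketch that only builds the request URLs; the type names are abbreviated examples, and the `direction` parameter is my reading of the Portal REST API, so treat it as an assumption:

```python
from urllib.parse import urlencode

def related_items_urls(org_url: str, item_id: str, rel_types: list) -> list:
    """Build one relatedItems request per (type, direction) pair, so
    the relationship type and direction can be attached to results
    client-side. The 'direction' parameter is assumed from the
    Portal REST API docs."""
    urls = []
    for rel_type in rel_types:
        for direction in ("forward", "reverse"):
            qs = urlencode({"relationshipType": rel_type,
                            "direction": direction, "f": "json"})
            urls.append(f"{org_url}/sharing/rest/content/items/{item_id}"
                        f"/relatedItems?{qs}")
    return urls

urls = related_items_urls("https://org.arcgis.com", "abc123",
                          ["Service2Data", "Map2Service"])
```

The downside is obvious: with N relationship types this multiplies requests by 2N per item, which is why returning the type and direction inline would be so much better.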
Posted 06-26-2023 02:20 PM

POST
Thanks Johannes. I upvoted that idea. Agreed that this could be improved to be more seamless.
Posted 02-24-2023 06:53 AM

POST
Thank you for the responses. @jcarlson 's response was the ticket. Wrapping the reference to the input FeatureSet's date column translated the date into a Unix timestamp integer, which then output correctly on the return statement. Something else I noted was that null dates in the source data get converted to a zero when the number function is called; on the output FeatureSet these zeros appear as Jan 1, 1970. So to fix that, I also had to wrap the number function in an iif function and return null if the incoming number() value is zero. So the column request looks like this:

//If the incoming date value is zero then return null,
//else return the date value as a unix timestamp integer
MY_OUT_DATE_COLUMN: iif(
number(f["MY_IN_DATE_COLUMN"]) == 0,
null,
number(f["MY_IN_DATE_COLUMN"])
)
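The Jan 1, 1970 artifact is just the Unix epoch; a quick Python illustration of why a zero millisecond timestamp renders that way:

```python
from datetime import datetime, timezone

# A null date coerced to 0 is read as 0 milliseconds after the Unix
# epoch, which is why it renders as January 1, 1970.
epoch_ms = 0
rendered = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
```

That is why the zero check has to happen before the value reaches the output FeatureSet.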
Posted 02-24-2023 06:51 AM

POST
I'm curious if anyone has had issues creating a data expression in Arcade for Dashboards that returns a date column? I've noticed that if I include a date column in the FeatureSet I'm returning at the end of the script, the output contains no rows. Based on my testing, I believe this is a bug in the FeatureSet() function when the dictionary you pass to it contains a date column, but since Arcade is pretty new to me, maybe I'm doing something wrong. As an example, I put two samples below of the same code, based on a public-facing AGOL hosted feature layer. The first sample has the date column removed, so you can see it does return data. In the second sample the only difference is that the date column is included, but you'll notice if you run it that the results are blank. I tried this with two AGOL feature layers, so it doesn't seem to be a fluke with just one layer. I'm trying to follow along with the GitHub example on how to join tabular data to one of my layers and return a feature set (Link). Ultimately I want to slice and dice the layer in Arcade, but for this forum question I kept the script simple to show the issue.

Here's sample script 1 with the date column commented out. If you run it in the Arcade playground you will see it returns data:

var portal = Portal("https://www.arcgis.com/");
var features = [];
var feat;
//Define input layer to read
var fs = FeatureSetByPortalItem(
portal,
"a400f4711f9443a9855340ee7b66890a",
0,
['DRAINAGE_ID','LOCATION','FME_DATE'],
false
);
//Loop over each hosted layer feature
//, pass subset of attribute values to the feat variable
//, then Push the feat object into the feature dictionary
for (var f in fs) {
feat = {
attributes: {
DRAINAGE_ID: f["DRAINAGE_ID"],
LOCATION: f["LOCATION"],
//FME_DATE: f["FME_DATE"],
}
}
Push(features,feat)
}
//Define schema for output dictionary
//and pass in the dictionary of output features
var joinedDict = {
fields: [
{name: "DRAINAGE_ID", type: "esriFieldTypeInteger"},
{name: "LOCATION", type: "esriFieldTypeString"},
//{name: "FME_DATE", type: "esriFieldTypeDate"}
],
'geometryType': '',
'features':features
};
return FeatureSet(Text(joinedDict));

Here is sample 2 that includes the date column; when you run it, the results will be blank:

var portal = Portal("https://www.arcgis.com/");
var features = [];
var feat;
//Define input layer to read
var fs = FeatureSetByPortalItem(
portal,
"a400f4711f9443a9855340ee7b66890a",
0,
['DRAINAGE_ID','LOCATION','FME_DATE'],
false
);
//Loop over each hosted layer feature
//, pass subset of attribute values to the feat variable
//, then Push the feat object into the feature dictionary
for (var f in fs) {
feat = {
attributes: {
DRAINAGE_ID: f["DRAINAGE_ID"],
LOCATION: f["LOCATION"],
FME_DATE: f["FME_DATE"],
}
}
Push(features,feat)
}
//Define schema for output dictionary
//and pass in the dictionary of output features
var joinedDict = {
fields: [
{name: "DRAINAGE_ID", type: "esriFieldTypeInteger"},
{name: "LOCATION", type: "esriFieldTypeString"},
{name: "FME_DATE", type: "esriFieldTypeDate"}
],
'geometryType': '',
'features':features
};
return FeatureSet(Text(joinedDict));
//If you instead return just the Text(joinedDict) you will see that the data looks valid
//, so it's just the way the FeatureSet() function is transforming the data that seems to cause blank results
//return Text(joinedDict)
Posted 02-22-2023 06:59 AM

IDEA
That's great, so yes that fulfills my idea. I'm on Pro 2.9 and didn't realize this option was added in 3.0. In parallel on the ArcGIS Enterprise side, does Enterprise 11.0 add the ability for the server to authenticate with an Azure SQL database in a similar way using a service account in Azure AD?
Posted 01-11-2023 07:46 AM

IDEA
When moving to Azure, many organizations implement Azure AD as a single source of authentication within the Azure environment. However, when connecting ArcGIS Pro to an Azure SQL Database or Managed Instance, the connection properties window does not give the user an option to choose "Azure Active Directory" for the authentication. Please add this as an option so we can manage access to our geodatabases in Azure the same way we manage access to other Azure resources within our organization.
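For context on what such an option would produce under the hood, here is a sketch of an ODBC-style connection string using Azure AD authentication. The `Authentication` keyword comes from Microsoft's ODBC Driver for SQL Server; the server and database names are placeholders:

```python
# Sketch of the kind of connection string an "Azure Active Directory"
# option would build. The Authentication keyword is from Microsoft's
# ODBC Driver for SQL Server; server/database values are placeholders.
params = {
    "Driver": "{ODBC Driver 17 for SQL Server}",
    "Server": "myserver.database.windows.net",
    "Database": "mygeodatabase",
    "Authentication": "ActiveDirectoryInteractive",
}
conn_str = ";".join(f"{k}={v}" for k, v in params.items())
```

The ask here is simply that Pro's connection dialog expose this authentication mode instead of only SQL logins and Windows authentication.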
Posted 12-13-2022 01:41 PM

IDEA
Currently, if you want a Velocity analytic to write to a layer, the layer has to be owned by the same account that owns the analytic item. Please make this more flexible so Velocity can write to a layer owned by any account, as long as the analytic owner has edit rights to it through the typical AGO methods (shared through a group, a hosted view with edit rights enabled, admin access). This would make Velocity layer management more in line with how standard layers are managed, so the same rules can be applied regardless of how the layer is edited. And it would allow teams that have SOPs in place for layer editing to avoid creating workarounds for workflows where Velocity will be the service editing a layer. For example, I'm currently acting as the Velocity lead for my organization, but we have dozens of GIS data admins in our org managing GIS layers, maps, and apps for their teams. For a current project, I am building a real-time analytic that reads a data feed from a 3rd party; then I want to slice and dice that data and split it up so each team has a layer that's a subset. Each team can then manage its layer as needed (symbology, sharing, etc.), and Velocity is just the data writer. However, because of this limitation in Velocity, if I'm going to be the one managing the Velocity services, I've also got to own all the layers that are the outputs.
Posted 11-14-2022 11:15 AM

IDEA
Please provide documentation on the limits on the number of REST API calls that can be made to ArcGIS Online or ArcGIS Enterprise. When I try to iterate over the thousands of items in my AGOL org and obtain data about them using the REST API, I will often get errors like the following:

{"error":{"code":400,"messageCode":"GWM_0026","message":"Too many requests. Please try again later.","details":[]}}

These messages don't appear at first, but after a few minutes of a tool like FME or a Python script iterating and calling REST endpoints, this message will appear. If I knew what the limits were, I could tell my FME workspace or Python script to slow down to stay within them. The only documentation I've seen on API limits talks about how many records you can get back from something like a call against a feature service; it doesn't talk about how many times you can call that endpoint per second or per minute.
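In the meantime, a client-side exponential backoff keeps scripts under whatever the undocumented limit is. A sketch with a hypothetical request function, retrying whenever the GWM_0026 throttling error comes back (the error shape is taken from the response above):

```python
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn(); if the parsed JSON response is the throttling
    error (code 400 / GWM_0026), wait with exponential backoff and
    retry. request_fn is a stand-in for whatever performs the HTTP call."""
    for attempt in range(max_retries):
        resp = request_fn()
        if resp.get("error", {}).get("messageCode") != "GWM_0026":
            return resp
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("still throttled after retries")

# Usage with a fake request function that is throttled twice, then succeeds.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    if calls["n"] < 3:
        return {"error": {"code": 400, "messageCode": "GWM_0026",
                          "message": "Too many requests. Please try again later."}}
    return {"results": [1, 2, 3]}

resp = call_with_backoff(fake_request, base_delay=0.01)
```

Backoff is guesswork, though; published per-second or per-minute limits would let scripts pace themselves deterministically instead.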
Posted 11-07-2022 11:47 AM

IDEA
@Bud Maybe database triggers would work if the data is branch versioned, since that eliminates the complication of delta tables being involved that the triggers aren't aware of. We are still getting familiar with how branch versioning works, so maybe we'll rediscover some tricks that weren't applicable with traditional versioning and the delta tables. However, the better thing would be for Esri to allow calling a function. That way our RDBMS-savvy developers can build the functions and then let our GIS analysts leverage them in attribute rules on the feature classes they maintain. The nice thing about attribute rules is that they live with the feature class definition, so no matter which Esri method you use to edit the data (Pro, Enterprise feature service, geoprocessing tool), the same attribute logic is applied and Esri-handled, and the management is exposed on the Esri GUI side for GIS analysts. We are already having to implement attribute rules that come with newer Esri data management solutions, so it would be better to have all the custom data entry logic in one place, rather than having some in attribute rules and some in custom database triggers, and then wondering which fires first. Esri already allows calling a sequence from Arcade with the NextSequenceValue function. This expects that the user/DBA has set up the sequence and granted the appropriate privileges for it to be used. If they are okay with that, then I don't see harm in expanding to also allow referencing a database function, assuming the DBA/analyst will manage the function and its privileges. Esri may have to stipulate that the function only return one value, but that should cover most situations.
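As an analogy for the single-value database function being proposed, here is a sketch using SQLite's scalar user-defined functions. SQLite stands in for the enterprise RDBMS, and the function name is made up:

```python
import sqlite3

# SQLite stands in for the enterprise geodatabase's RDBMS. The shape
# of the idea is the same: a DBA defines a function returning one
# value, and the calling layer invokes it by name from SQL.
def next_asset_code(prefix: str) -> str:
    """Made-up business function that returns a single value."""
    return f"{prefix}-0001"

con = sqlite3.connect(":memory:")
con.create_function("next_asset_code", 1, next_asset_code)
(value,) = con.execute("SELECT next_asset_code('HYD')").fetchone()
```

An Arcade hook along the lines of NextSequenceValue could work the same way: reference the function by name and trust the DBA to have granted execute privileges.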
Posted 11-07-2022 07:01 AM

IDEA
I'd also add that query layers that are not spatial should be included as well. I'm not sure whether those kinds of views will appear as "Standalone Tables" instead of feature layers, but I think they should be covered too, so their data source can also be parsed in Python.
Posted 10-06-2022 02:45 PM