Oh, gosh. Well, that would explain it: we're creating a SQL string that's many thousands of characters long. It's probably the wrong approach to look for the "good" values and build a filter based on them rather than on the "bad" ones.
Let's take a totally different approach. I like @KenBuja's use of GroupBy, but its output won't include any of the feature attributes we need. With some tweaking, though, it can give us one objectid per duplicate group, which we can then pass into a Filter on the original FeatureSet.
Do you know if there are any cases of more than two duplicates? The expression below should get rid of duplicates when each has only one extra copy.
var fs = FeatureSetByPortalItem(…)

// For each value of some_field, grab the highest objectid and count the features
var groups = GroupBy(
    fs,
    'some_field',
    [
        {name: 'dup_id', expression: 'objectid', statistic: 'MAX'},
        {name: 'feat_count', expression: '1', statistic: 'SUM'}
    ]
)

// Keep only the groups that actually contain duplicates
var dups = Filter(groups, 'feat_count > 1')

// Collect the objectid of the extra copy in each group
var dup_ids = []
for (var d in dups) {
    Push(dup_ids, d['dup_id'])
}

// Return everything except those extra copies
return Filter(
    fs,
    `objectid not in (${Concatenate(dup_ids, ',')})`
)
- Josh Carlson
Kendall County GIS