This is what the Summary Statistics tool is for. Use whatever field makes sense as the unique grouping value (the field with the duplicates in this case) as the case field, and get the minimum ObjectID as a summary statistic. Any record in the summary table with FREQUENCY > 1 represents a duplicated value.

You can then join the summary table back to the original data on the case field and use the MIN_OBJECTID value to select every record other than the first one in each group and treat those as invalid, if that is your business rule, or simply select all of the records involved in duplicates if that makes more sense. A field calculation based on MIN_OBJECTID can also assign a 1 or 0 to a separate flag field if you prefer.

Alternatively, you can use a relate to select each duplicate-value group separately if the groups need to be evaluated and operated on one at a time (a 1 or 0 in another field won't let you do that).
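If you want to script the Summary Statistics / flag-field route, here is a minimal arcpy sketch. The workspace path and the field names (parcels, PARCEL_ID, DUP_FLAG) are placeholders for illustration only; substitute your own. The output field names (MIN_OBJECTID, FREQUENCY) follow the tool's default naming.

```python
import arcpy

# Assumptions: a feature class with a possibly duplicated field
# and the standard OBJECTID field. Adjust names and paths to your data.
fc = r"C:\data\demo.gdb\parcels"      # hypothetical feature class
dup_field = "PARCEL_ID"               # the field holding duplicate values
flag_field = "DUP_FLAG"               # 1/0 flag field to populate

# 1) Summary Statistics: case field = the duplicated field,
#    statistic = minimum ObjectID. FREQUENCY is added automatically.
summary = r"C:\data\demo.gdb\parcel_summary"
arcpy.analysis.Statistics(fc, summary, [["OBJECTID", "MIN"]], dup_field)

# 2) Read the summary rows that actually contain duplicates
#    (FREQUENCY > 1) and remember the first ObjectID in each group.
first_oid = {}
with arcpy.da.SearchCursor(
        summary, [dup_field, "MIN_OBJECTID", "FREQUENCY"],
        where_clause="FREQUENCY > 1") as cur:
    for value, min_oid, freq in cur:
        first_oid[value] = min_oid

# 3) Flag every record after the first one in each duplicate group
#    with 1, everything else with 0 (one possible business rule).
arcpy.management.AddField(fc, flag_field, "SHORT")
with arcpy.da.UpdateCursor(fc, ["OID@", dup_field, flag_field]) as cur:
    for oid, value, _ in cur:
        is_extra = value in first_oid and oid != first_oid[value]
        cur.updateRow([oid, value, 1 if is_extra else 0])
```

This does the join step in memory with a dictionary instead of an AddJoin, which avoids dealing with qualified field names in the where clause; if you would rather stay in the tools, a table join plus Select Layer By Attribute on OBJECTID <> MIN_OBJECTID accomplishes the same thing.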