Pro is a lot faster than Desktop at most of what it does, but field calculating is still extremely painful on a large number of records. I know it isn't my PC; it exceeds the recommended system requirements.
In general, no, it is not "normal" for Pro to take 20+ minutes to calculate a field on that many records. That said, any discussion of field-calculation performance starts with the data source, because calculating fields against a local file geodatabase, an enterprise geodatabase, or a hosted feature layer performs very differently.
The feature class is part of an enterprise geodatabase.
That doesn't surprise me. I'd guess that if you dumped the data to a FGDB and tested the same calculation, it would take under 30 seconds (a quick way to run that comparison is sketched below). If you are working with a version that has a very long/deep state tree, it can significantly degrade performance for activities like updating data. I would speak to whoever manages your EGDB and have them look at how many versions are outstanding and whether those versions can be reconciled and posted.
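For reference, a minimal benchmarking sketch along those lines; the connection file, feature class, and STATUS field below are hypothetical placeholders, so swap in your own data and calculation:

```python
# Minimal sketch: copy an EGDB feature class to a local FGDB and time the
# same field calculation there. All paths/names below are placeholders.
import time
import arcpy

egdb_fc = r"C:\connections\prod.sde\gis.DBO.Parcels"  # hypothetical EGDB feature class
scratch_folder = r"C:\temp"
scratch_fgdb = scratch_folder + r"\scratch.gdb"

if not arcpy.Exists(scratch_fgdb):
    arcpy.management.CreateFileGDB(scratch_folder, "scratch.gdb")

# Copy the data locally so the calculation runs against a file geodatabase.
local_fc = arcpy.management.CopyFeatures(egdb_fc, scratch_fgdb + r"\Parcels_test")

# Time the same calculation against the local copy.
start = time.time()
arcpy.management.CalculateField(local_fc, "STATUS", "'REVIEWED'", "PYTHON3")
print(f"FGDB calculate took {time.time() - start:.1f} seconds")
```

If the FGDB run finishes in seconds while the EGDB run takes 20+ minutes, the bottleneck is on the geodatabase/network side rather than in Pro itself.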
I actually handle the versioning on our geodatabases. This one in particular does not make use of versioning.
What is the RDBMS type and version being used?
Is the Pro client in the same location as the EGDB?
Is there a relationship class, archiving, or editor tracking involved on the feature class?
What is the calculation that is being performed?
You might also check how much geoprocessing history has accumulated on the feature class and/or the geodatabase itself; a sketch for inspecting and clearing it follows.
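As a sketch of one way to do that with the arcpy.metadata module (the connection path below is a hypothetical placeholder):

```python
# Sketch: inspect and clear accumulated geoprocessing history on a dataset.
# The path is a hypothetical placeholder for your own feature class.
import arcpy
from arcpy import metadata as md

item = md.Metadata(r"C:\connections\prod.sde\gis.DBO.Parcels")

# Heavily processed datasets can carry megabytes of geoprocessing-history
# XML; a crude size check on the raw metadata hints at how much is there.
print(f"Metadata XML is roughly {len(item.xml or '')} characters")

# Remove the accumulated history (only if you don't need it for auditing).
item.deleteContent("GPHISTORY")
item.save()
```

You can also call arcpy.SetLogHistory(False) in scripts so new tool runs stop appending to that history in the first place.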
I am running into this exact same thing. Using a FGDB, I am adding the Zoning info into my Parcel data. The steps taken are as follows:
1. Create Field (Zoning) in the Parcel Layer
2. Calculate Field: use an Arcade script that intersects the centroid of each parcel with the Zoning layer and returns the zoning value (a rough sketch of the expression is below).
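For context, a rough sketch of what that step 2 might look like when run through arcpy.management.CalculateField; the ZONE_CODE field, the "Zoning" layer title, and the paths are hypothetical placeholders, and the sketch assumes the Zoning layer lives in the same FGDB:

```python
# Sketch: populate the new Zoning field by intersecting each parcel's
# centroid with the Zoning layer. Names and paths are placeholders.
import arcpy

arcade_expr = """
var zones = FeatureSetByName($datastore, 'Zoning', ['ZONE_CODE'], true)
var hit = First(Intersects(zones, Centroid($feature)))
return IIf(hit == null, null, hit.ZONE_CODE)
"""

arcpy.management.CalculateField(
    r"C:\temp\parcels.gdb\Parcels",  # hypothetical parcel feature class
    "Zoning",
    arcade_expr,
    "ARCADE",
)
```

The per-row FeatureSetByName/Intersects lookup is usually the slow part here, since every parcel fires its own spatial query against the zoning FeatureSet.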
Any ideas on how to make this process faster?