What do you want to happen to the smaller one(s)? Delete them? Sounds like it.
You could use a search cursor to get a list of the parcel IDs
loop through that list (one at a time), making a temporary layer of just that parcel ID
get the areas of the polygons in that layer (which will all be from the same parcel)
make a second temporary layer from that layer, selecting the ones with an area smaller than the largest (the second largest on down).
delete everything in that second layer.
remove the layers (so you can use the layer names again)
move to the next parcel ID and repeat
----
You could also try looping through the file once,
making a set of nested dictionaries, something like this:
parcels = {
    parcel1: {OID1: area, OID2: area, OID3: area},
    parcel2: {OID11: area, OID12: area, OID13: area},
    parcel3: {OID21: area, OID22: area, OID23: area},
}
Then you could examine the areas for each parcel, building a list of OIDs to be deleted...
but that could get complicated very quickly...
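For what it's worth, a minimal sketch of that second idea in plain Python, with made-up OIDs and areas (in real life the inner dicts would be filled from a cursor over the polygon features):

```python
# Nested dict: parcel ID -> {OID: area}, as sketched above.
parcels = {
    "parcel1": {1: 500.0, 2: 20.0, 3: 5.0},
    "parcel2": {11: 300.0, 12: 40.0},
    "parcel3": {21: 80.0},
}

to_delete = []
for parcel_id, areas in parcels.items():
    keep = max(areas, key=areas.get)      # OID of the largest polygon
    to_delete.extend(oid for oid in areas if oid != keep)

print(sorted(to_delete))   # → [2, 3, 12]  (OIDs to feed into a delete step)
```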