Best practices for huge databases joined to a featureclass

Discussion created by stevel on May 30, 2011
Latest reply on Jun 1, 2011 by stevel
I'm seeking some advice on best practices for handling datasets with hundreds of attributes. (I made a typo in the subject of this thread; the attributes are part of the featureclass, not joined to it.)

I'm building some polygon featureclasses from raw census tables, which contain hundreds of fields. I'd like to leave all of the attributes on the featureclasses (rather than stripping off the ones I don't currently need), so I can be flexible about which attributes to show at a later date.

Does anyone have any advice on how this will affect performance?

1) Is it a bad idea to have a file geodatabase featureclass with hundreds of attributes?

2) Is it a bad idea to create a map service from this featureclass, which contains hundreds of attributes?

3) Is there a performance benefit if I build a FeatureLayer in the JS API and specify only the attributes I currently need?

e.g., the map service might contain 500 attributes, but I define my feature layer using something like

featureLayer.fields = ["x", "y", "z"]
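(That line is just shorthand for what I mean. If I've read the API reference right, the actual mechanism would be the outFields option passed to the FeatureLayer constructor; the option object below is a sketch of that, with the service URL and field names as placeholders.)

```javascript
// Sketch of the constructor options I'd pass to esri.layers.FeatureLayer
// (the esri namespace comes from the ArcGIS API for JavaScript; outFields
// is assumed from its docs). Built as a plain object so the shape is clear.
var options = {
  outFields: ["x", "y", "z"]  // request only these attributes, not all 500
};

// In the app it would be used roughly like this (URL is a placeholder):
// var layer = new esri.layers.FeatureLayer(
//     "http://myserver/ArcGIS/rest/services/Census/MapServer/0",
//     options);
```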

Thanks for any advice, and please let me know if you need any further details.