I have a point FeatureLayer that uses a UniqueValueRenderer to symbolize by a qualitative attribute, plus a size visual variable that scales each point by a quantitative attribute. I'm trying to highlight a graphic when it's moused over (essentially just change the outline color). Right now I do a hitTest in the view's pointer-move handler and, if a graphic is returned, I attempt to highlight it (I'm actually cloning the graphic, giving it a new symbol, and adding the clone to a separate GraphicsLayer). Because the layer is symbolized with a renderer, the graphic returned from the hitTest has a null symbol property. So how do I figure out what size the new symbol needs to be? I tried the renderer's getUniqueValueInfo() method, but it returns the same size for every graphic I pass to it. Any ideas?
In the 3.x API this was simple using the renderer's getSymbol() method. getSymbol() hasn't made it into 4.x, so I'm not sure.
Ok, so I know the min and max attribute values as well as the min and max symbol sizes. I've worked out a method: find where the graphic's quantitative attribute falls, as a fraction, between the attribute min and max, then apply that same fraction between the min and max symbol sizes.
((value - attrMin) / (attrMax - attrMin)) * (symbolMax - symbolMin) + symbolMin
This gets close, but oddly it's slightly off at the min and max symbol sizes. I'd think the API is doing something very similar under the hood, yet the results differ slightly. Anyway, this method is a close enough approximation if others run into this.
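The interpolation above can be captured in a small helper. This is just a sketch with illustrative names (interpolateSymbolSize and its parameters are not part of the ArcGIS API); the clamping is my own addition so attribute values outside the configured stops don't produce sizes outside the symbol range:

```javascript
// Linearly map `value` from the attribute range [attrMin, attrMax]
// onto the symbol-size range [symbolMin, symbolMax].
function interpolateSymbolSize(value, attrMin, attrMax, symbolMin, symbolMax) {
  // Clamp so out-of-range attribute values stay within the size range.
  const clamped = Math.min(Math.max(value, attrMin), attrMax);
  const t = (clamped - attrMin) / (attrMax - attrMin);
  return t * (symbolMax - symbolMin) + symbolMin;
}

// Example: attribute range 0–100 mapped to sizes 6–24 px.
// interpolateSymbolSize(50, 0, 100, 6, 24) → 15
```

The clamp is also a plausible place the API's own sizing could diverge from a bare linear formula at the endpoints, so it's worth comparing with and without it.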