
MVP program design

05-03-2010 09:04 AM
WilliamHuber
Deactivated User
What should replace the old MVP program?

I'm sure ESRI has been thinking hard about this and is in the process of implementing a new one. The idea behind this thread is that you, the users of the old and new forums, are likely to have good ideas about the features the program ought to have. Let's discuss them here (and trust that ESRI will listen).


Begin with the objectives. Some that ESRI and all users should be interested in and value highly are:

  1. Increasing the rate at which questions receive useful, timely answers.

  2. Encouraging people to initiate and participate in useful, interesting, and productive conversations.

  3. Providing feedback to help frequent contributors improve their work.

  4. Providing information to improve searches and the collection of FAQs.

  5. Promoting solutions to difficult as well as easy questions.

Note that none of these objectives specifically includes rewarding contributors. There are, however, two reasons for augmenting a ratings program with rewards: first, and foremost, an appropriate rewards system promotes the objectives; and second, frequent contributors are highly valuable to ESRI and the user community--probably the equivalent of several people in technical support--and therefore deserve some kind of compensation and recognition.


But how to provide feedback and ratings? As a guiding principle, the user community should determine whether a question has been answered, whether the answer is useful, and how difficult the question was. The MVP program for the old forums made the originator of a thread the sole determiner of all three. This was a good start but, in my humble opinion, ultimately failed for many reasons:

  • Most people with a question, especially newbies, cannot determine how difficult (or time-consuming) it might be to answer that question. (A rational user of the forums should always rate their question as the most difficult, regardless of its actual difficulty, in order to encourage responses. But that defeats the purpose of a difficulty rating.)

  • In some cases highly useful solutions were offered to questions, but the originator of the question was unable to understand or appreciate them (although many subsequent searchers did).

  • In the majority of cases, thread originators simply didn't bother to indicate whether they were satisfied with the answers provided.

  • It was possible to game the system by posing one simple question as a series of threads, each with a high difficulty rating, allowing a single respondent to garner many points for little work. (I don't believe anyone ever consciously did this, but similar situations did occur from time to time. I have been the beneficiary of a few of them.)

  • Doubling the points for answers after five days of no response had the negative effect of encouraging people not to reply immediately: once a question went a couple of days without an answer, it made more sense just to wait a few more days.

A way to overcome these deficiencies exists: provide a mechanism for all forum readers, not just the originator of a thread, to rate a thread's (or posting's) usefulness. Base the MVP awards on cumulative usefulness totals. But don't do so in a linear fashion, for otherwise a single popular posting could dominate the ratings (and allow for certain forms of cheating).


Here's a simple example for discussion, not fully worked out but outlined to illustrate the main ideas. Readers could "vote" on the usefulness or interest of any posting (including questions and comments, not just solutions to problems), with the vote being binary (don't like = 0 points, like = 1 point) or numerical (e.g., the old forums used {1, 3, 5}). (BTW, allowing negative votes--although it sounds unfriendly--could be useful for identifying misleading, wrong, or crank messages.) Total votes can be displayed with each message and used for prioritizing search results. The originator of the thread might get extra weight in the voting as a nod to their special interest in the responses.

To compute MVP scores, though, the total votes for each rated message would first be transformed in a nonlinear fashion to downweight unusually high totals. For example, a positive total could be worth one MVP point, a total of 10 or more could be worth two MVP points, 100 or more could be worth three MVP points, and so on. These MVP points would be summed over all of a contestant's messages to determine their cumulative MVP points. The scoring for a single contest period would be determined by the difference in cumulative MVP points achieved during that period. (Thus, old posts with ongoing popularity can keep garnering points for a contestant over time. Why not? That might encourage contributions that are longer, more thoughtful, and more complete than otherwise.)
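
To make the arithmetic concrete, here is a minimal sketch in Python. The 1/10/100 breakpoints are the illustrative ones above; everything else (function names, the example at the end) is hypothetical, a sketch for discussion rather than a worked-out implementation:

```python
def mvp_points(vote_total):
    """Nonlinearly downweight one message's raw vote total,
    using the illustrative breakpoints from the text."""
    if vote_total <= 0:
        return 0  # non-positive totals earn nothing
    if vote_total < 10:
        return 1  # any positive total: one MVP point
    if vote_total < 100:
        return 2  # ten or more: two MVP points
    return 3      # a hundred or more: three MVP points


def cumulative_mvp(vote_totals):
    """Sum the transformed points over all of a contributor's messages."""
    return sum(mvp_points(t) for t in vote_totals)


def period_score(totals_at_start, totals_at_end):
    """Score for one contest period: the growth in cumulative points,
    so old posts that stay popular keep contributing."""
    return cumulative_mvp(totals_at_end) - cumulative_mvp(totals_at_start)


# One runaway hit (150 votes) cannot swamp three modestly useful
# posts (5 votes each): both contributors earn 3 MVP points.
assert cumulative_mvp([150]) == 3
assert cumulative_mvp([5, 5, 5]) == 3
```

Note how the capped transform does the anti-gaming work: stuffing a single thread with votes buys at most a point or two, while steady, broadly useful participation accumulates.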

This could be augmented by votes in other categories, such as "interest" or "difficulty," but we should be concerned that the system would become unworkably complex. (The main purpose of such auxiliary votes would be to provide additional feedback to contributors.) However, ESRI could, at its option, selectively overweight certain votes or add bonuses, such as for threads that first identify problems in the software and provide solutions that ultimately turn into software enhancements.
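
If bonuses were layered on, one natural place to apply them is after the nonlinear transform, so a bonus never inflates a message's raw vote total. A sketch under that assumption, reusing mvp_points() from above (the bonus categories and amounts are invented placeholders, not anything ESRI has defined):

```python
# Hypothetical bonus categories; amounts are arbitrary placeholders.
BONUS = {"bug_report": 1, "software_enhancement": 2}


def mvp_points_with_bonus(vote_total, tags=()):
    """Layer ESRI-assigned bonuses on top of the transformed score."""
    return mvp_points(vote_total) + sum(BONUS.get(t, 0) for t in tags)
```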

Let me emphasize one special feature of this proposal: MVP awards are not directly proportional to a sum of individual points. This is what helps us promote the forum's multiple objectives, rather than emphasizing the mere garnering of "points." In particular, the downweighting of highly popular postings encourages broader participation. In the old forums this downweighting was too severe, though: no posting could ever accumulate more than the points associated with the thread's difficulty level.

Making the cumulative MVP points earned by each contributor visible to readers could make it easier to identify those whose postings tend to be helpful. (Actually, average points per posting rather than total points would be more meaningful in this regard. Using averages would also help the community identify competent newcomers more quickly.)
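
Continuing the sketch above, the average is one division away from the cumulative total, so displaying it would cost nothing extra (the helper name is again hypothetical):

```python
def average_mvp(vote_totals):
    """Average MVP points per posting: a 'helpfulness density' that
    lets a strong newcomer with 20 posts stand next to a veteran
    with 2,000."""
    totals = list(vote_totals)
    return cumulative_mvp(totals) / len(totals) if totals else 0.0
```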

Finally, it seems worth remarking that if ESRI elects to continue awarding prizes in this program--as it should--it is important that whatever system is set up be clear and transparent.

Let us not lose sight of the fact that the purpose of the program (if I may be so bold as to say so) is to improve the user experience and only as a secondary matter to reward frequent contributors. I welcome your thoughts and suggestions about how this can be accomplished.
1 Reply
DanPatterson_Retired
MVP Emeritus
I agree with Bill's assessment. I would only add that if software is to continue to be awarded, it should not be awarded solely for one's performance in a six-month period but should reflect cumulative contribution over time.

This might imply "thresholds" for awards, which may be distasteful to some, but it would encourage and reward others who contribute continuously yet just don't make the "cut" for one MVP period. I can think of numerous contributors to this forum who have provided useful commentary and useful scripts/toolsets and yet have never won so much as a baseball cap (back in the old days, this was a serious MVP prize). Also, "threshold" contributors could be placed on the Beta programs to provide useful commentary before a release or, perhaps, before the release of a new forum format.