Duplicate locations get dealt with: essentially they are removed, with the values averaged, or the first or last value kept, depending on the software.
Nudging a duplicate a tiny bit so its value can be used affects the interpolation across all the surrounding space, which is generally not a good idea.
Replicating a measurement at the same location doesn't accomplish much unless you do it at all previous locations; that is replication testing of the data.
Measuring at a new location, perhaps between previous measurements, may be useful, but you shouldn't do it selectively: that defeats a useful sampling strategy.
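To make the first point concrete, here is a minimal sketch of one common convention for resolving duplicates before interpolation: averaging all values that share a coordinate. The sample points and values are made up for illustration, and real packages may instead keep the first or last value.

```python
import numpy as np

# Hypothetical sample: two measurements share the location (1.0, 2.0).
points = np.array([[0.0, 0.0], [1.0, 2.0], [1.0, 2.0], [3.0, 1.0]])
values = np.array([10.0, 4.0, 6.0, 8.0])

# Find unique locations and map each original point to its group.
unique_pts, inverse = np.unique(points, axis=0, return_inverse=True)

# Accumulate sums and counts per unique location, then average.
sums = np.zeros(len(unique_pts))
counts = np.zeros(len(unique_pts))
np.add.at(sums, inverse, values)
np.add.at(counts, inverse, 1)
averaged = sums / counts

print(unique_pts)   # 3 unique locations remain
print(averaged)     # duplicate at (1.0, 2.0) collapses to 5.0
```

After this preprocessing step the interpolator sees one value per location, which is exactly why tweaking or replicating a duplicate, as described above, either has no effect or distorts the surface.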
Given these considerations, why would you want to weight one location's value as more important than the surrounding values?