We're experimenting with replacing a legacy system, which uses Python scripts for processing and an Oracle database for storage, with GeoEvent Server for processing and the Spatiotemporal Big Data Store for storage.
I've been able to replicate our processing (filtering, data validation, geometry creation, etc.) in GeoEvent 10.6 with one exception: detecting and removing duplicates. In Python we load the data into a pandas DataFrame, and removing duplicates is as simple as calling the .drop_duplicates() method. I haven't found a way to do this in GeoEvent yet. Is it possible? Or can I somehow delete duplicate records from the Big Data Store after the events are pushed in?
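For context, here's a minimal sketch of what our legacy dedup step looks like. The column names (track_id, timestamp, value) are illustrative placeholders, not our actual schema:

```python
import pandas as pd

# Hypothetical event records; column names are illustrative only.
events = pd.DataFrame({
    "track_id": [101, 102, 101, 103],
    "timestamp": ["2018-01-01T00:00", "2018-01-01T00:00",
                  "2018-01-01T00:00", "2018-01-01T00:05"],
    "value": [1.0, 2.0, 1.0, 3.0],
})

# Drop rows that are exact duplicates across all columns, keeping the first.
deduped = events.drop_duplicates()

# Or dedupe on a subset of key columns (e.g. track id + timestamp),
# which is closer to what we actually need for event streams.
deduped_keys = events.drop_duplicates(subset=["track_id", "timestamp"], keep="first")
```

This is the behavior I'm hoping to reproduce in GeoEvent: suppress an incoming event when one with the same key fields has already been seen.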