BLOG
This custom processor provides the capability to create a geometry out of other fields (much like the GeoEvent Inputs allow you to do with incoming events). The geometry that is created is placed into the event field tagged as GEOMETRY. The geometry can be 2D or 3D, with a spatial reference that is either fixed or set via a field in the event. Finally, this processor allows you to manipulate the coordinates (using a multiplier and an offset) before they are used to create the geometry. This type of processor can be useful when the incoming events contain more than one geometry, or when the fields used to create the geometry need to be modified in some way before the geometry is created.

Support

This component is not officially supported as an Esri product. Sorry, the source code is not available for general release. Please use the comments on this entry to submit issues or enhancement requests.

Usage

The following parameters are supported. Each coordinate is computed as (field value * multiplier) + offset; a minimal sketch of that math follows the parameter list.

X Field - Specifies the field that contains the X value. The field can be any numeric or string field, so long as the data is numeric and can be converted to a double.
X Multiplier - A double value that will be multiplied by the X value. This can be helpful if your coordinate value is not in the correct format. For example, if coordinates are assumed to be West and are not reported in decimal degrees (832988303), you could use a multiplier of -0.0000001 to get decimal degrees (-83.2988303).
X Offset - A double value that will be added to the result of (X * Multiplier).
Y Field - Specifies the field that contains the Y value. The field can be any numeric or string field, so long as the data is numeric and can be converted to a double.
Y Multiplier - A double value that will be multiplied by the Y value.
Y Offset - A double value that will be added to the result of (Y * Multiplier).
Use a Z Field? - Specifies whether you wish to use a Z field. Yes allows you to set a Z field and create 3D geometry. No skips the Z value and creates a 2D geometry.
Z Field - Specifies the field that contains the Z value. The field can be any numeric or string field, so long as the data is numeric and can be converted to a double.
Z Multiplier - A double value that will be multiplied by the Z value.
Z Offset - A double value that will be added to the result of (Z * Multiplier).
Spatial Reference From Field? - Determines whether to hard-code the spatial reference or use a field to specify it. Yes allows you to designate a field that contains the spatial reference information. No allows you to define a default spatial reference that will be used for all geometry.
Spatial Reference Field - The field that contains the spatial reference value. The value can be either a WKID or the Well Known Text (WKT) of the spatial reference.
Default Spatial Reference - The default spatial reference, as either a WKID or Well Known Text (WKT). This value will be used for all geometry.
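The coordinate math above is simple enough to show directly. The sketch below is illustrative only (the processor's source is not available for release); the class and variable names are assumptions for the example, and it uses the West-longitude case from the parameter descriptions.

```java
// Illustrative only: the (value * multiplier) + offset rule described above.
public class CoordinateAdjustmentSketch {
    static double adjust(double rawValue, double multiplier, double offset) {
        return (rawValue * multiplier) + offset;
    }

    public static void main(String[] args) {
        // West longitude example from the parameter descriptions.
        double rawX = 832988303d;
        double x = adjust(rawX, -0.0000001, 0.0);
        System.out.println(x);  // approximately -83.2988303
    }
}
```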
Posted 03-19-2020 08:50 AM
BLOG
When processing events, you occasionally need to split an event up into parts. For example, if you use the Field Splitter to create an event for each GeoFence your event intersects, you may be interested in knowing that all of the resulting events relate back to the original incoming event. This processor helps you by adding a Globally Unique Identifier (GUID) to each event.

Code & Release

The source code can be found on GitHub, and the latest release can be found in the Releases section of the repository: https://github.com/eironside/geoevent-eventguid-processor

Usage

Add the processor to your GeoEvent Service and select the Event Guid processor. Then select the GeoEvent Definition your events will arrive as, and the field to put the GUID into. (A minimal sketch of the core operation appears at the end of this post.)

Adding a GUID Field

Please note that the field must be a String type field and must already exist in the definition. If your original event doesn't have a field to hold the GUID:

1. Make a copy of the definition and add the GUID field.
2. Use a Field Mapper processor to map your events into the new definition with a field to hold the GUID.
3. Use your Event Guid processor to add the GUID to the desired field. Make sure the Field Mapper processor is before your Event Guid processor.

Troubleshooting

The logger for this processor is listed below. Setting it to TRACE will allow you to see what the processor is doing.

    com.esri.geoevent.processor.eventguid.EventGuidProcessor
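For reference, the heart of what this processor does can be expressed in a few lines. This is a minimal sketch of the idea, assuming a plain map of fields stands in for a GeoEvent; it is not the processor's actual implementation (see the GitHub repository for that), and the field names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative only: stamp a GUID into a designated string field of an "event".
public class EventGuidSketch {
    static void addGuid(Map<String, Object> event, String guidFieldName) {
        // The target field must be a String field that already exists in the definition.
        event.put(guidFieldName, UUID.randomUUID().toString());
    }

    public static void main(String[] args) {
        Map<String, Object> event = new HashMap<>();
        event.put("track_id", "A123");
        addGuid(event, "event_guid");
        System.out.println(event);
    }
}
```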
Posted 03-18-2020 07:47 PM
BLOG
When working with the GeoEvent SDK, there are times when you need to use a framework service from the underlying GeoEvent system. To do this, you must inject the service into your bean in the configuration file. This blog discusses how to do that and provides a list of services that are available. It is not intended to be all-encompassing, so I'll update it as needed.

Contents: Adding a Framework Resource Reference | Non-Reference Properties (Data Folder) | Useful References (GeoEventDefinitionManager, Messaging, HTTP Client Service, Tag Manager, AGS Connection Manager)

Adding a Framework Resource Reference

To reference an underlying SDK resource you must add it to your Blueprint config.xml file.

1. Add the service reference towards the top of the xml. The id is a name you give the reference so you can refer to it later in the configuration xml; the interface is the full name of the class you are referencing:

    <reference id="geoDefManagerService" interface="com.esri.ges.manager.geoeventdefinition.GeoEventDefinitionManager" />

2. Inside your bean declaration, add a property that identifies the reference and gives it a unique name:

    <property name="manager" ref="geoDefManagerService" />

3. Inside your Service class, implement a method to allow the reference to be injected into your service:

    private GeoEventDefinitionManager geoEventDefinitionManager;

    public void setManager(GeoEventDefinitionManager geoEventDefinitionManager) {
      this.geoEventDefinitionManager = geoEventDefinitionManager;
    }

4. Use the injected reference when instantiating your implementation class:

    EventJoiner eventJoiner = new EventJoiner(definition, this.geoEventDefinitionManager);

Non-Reference Properties

The following resources are available without having to create a reference in your config.xml.

Data Folder

If you need to store data on disk (such as configuration or a non-volatile cache), you can add a data folder property to your bean. Using the data folder property is the same for the Service and Implementation classes as documented above for the referenced resources (a minimal setter sketch appears at the end of this post):

    <property name="dataFolder" value="./data/myfolder" />

The path in the value above points to the GeoEvent installation data directory (typically C:\Program Files\ArcGIS\Server\GeoEvent\data\).

Useful References

GeoEventDefinitionManager

Allows you to find, use, and modify existing GeoEvent Definitions and/or create new ones.
Interface: com.esri.ges.manager.geoeventdefinition.GeoEventDefinitionManager
Documentation: file:///C:/Program%20Files/ArcGIS/Server/GeoEvent/sdk/api/com/esri/ges/manager/geoeventdefinition/GeoEventDefinitionManager.html

Messaging

Allows you to create/publish GeoEvents and subscribe to GeoEvent listeners. The main use of this reference is to get access to a GeoEventCreator.
Interface: com.esri.ges.messaging.Messaging
Documentation: file:///C:/Program%20Files/ArcGIS/Server/GeoEvent/sdk/api/com/esri/ges/messaging/Messaging.html
Example for Implementation Class:

    private GeoEventCreator geoEventCreator;

    public void setMessaging(Messaging messaging) {
      geoEventCreator = messaging.createGeoEventCreator();
    }

HTTP Client Service

Gives you access to the GeoEvent Server's HTTP client services. Use this to create HTTP clients that honor GeoEvent Server's global settings for proxy, etc.
Interface: com.esri.ges.core.http.GeoEventHttpClientService
Documentation: file:///C:/Program%20Files/ArcGIS/Server/GeoEvent/sdk/api/com/esri/ges/core/http/GeoEventHttpClientService.html
Example for Implementation Class:

    private GeoEventHttpClient httpclient;

    public AnImplementationClass(GeoEventHttpClientService httpClientService) {
      this.httpclient = httpClientService.createNewClient();
    }

Tag Manager

If you need to create some custom TAGs, you can use the tag manager to do that.
Interface: com.esri.ges.manager.tag.TagManager
Documentation: file:///C:/Program%20Files/ArcGIS/Server/GeoEvent/sdk/api/com/esri/ges/manager/tag/TagManager.html

AGS Connection Manager

Use this if you need to connect to a registered data store.
Interface: com.esri.ges.manager.datastore.agsconnection.ArcGISServerConnectionManager
Documentation: file:///C:/Program%20Files/ArcGIS/Server/GeoEvent/sdk/api/com/esri/ges/manager/datastore/agsconnection/ArcGISServerConnectionManager.html
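Coming back to the dataFolder property mentioned above: Blueprint injects simple property values by calling the setter that matches the property name, the same pattern shown in step 3. Below is a minimal, illustrative sketch of what that setter might look like in a service class; the class name, field, and directory-creation logic are assumptions for the example, not SDK requirements.

```java
import java.io.File;

// Hypothetical service class: the names here are illustrative, not SDK requirements.
public class MyProcessorService {
    private File dataFolder;

    // Blueprint injects the "dataFolder" property by calling the matching setter.
    public void setDataFolder(String dataFolderPath) {
        this.dataFolder = new File(dataFolderPath);
        if (!this.dataFolder.exists()) {
            // Create the folder on first use so the component can persist its cache/config.
            this.dataFolder.mkdirs();
        }
    }
}
```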
Posted 03-11-2020 10:35 AM
BLOG
Updated 10/26/2021

Doing a lot of testing with GeoEvent gives me the opportunity to do a lot of administrative resets (as described by RJ Sunderman in his blog How to Administratively Reset GeoEvent Server). Since I'm doing this frequently, I wrote a new administrative tool to do the reset for me automatically. This tool follows the same sort of pattern as the ArcGIS Server tools found in the <server install>/tools/ directory.

10/26/2021 Updates

- Added the ability to delete the three Kafka topic log directories (log, log1, log2) for versions 10.8.1+.
- Added a RemoveCache.bat tool that will only delete the cached data under the c:/Program Files/ArcGIS/Server/geoevent/data/ folder.

Caveats

- It only works on Windows at the moment (I do have plans to migrate to Linux eventually).
- It works against 10.6 or later.
- It is probably best to always stop the GeoEvent and Gateway Windows services before running the .bat file.

Instructions

1. Download the .zip file and extract the contents onto the GeoEvent Server machine. I put mine in a new folder, C:\Program Files\ArcGIS\Server\GeoEvent\tools\.
2. Edit AdministrativeReset.bat if you need to change the default folder location(s) or the timeout parameter.
3. Run the AdministrativeReset.bat file as an Administrator.

The AdministrativeReset tool does the following (a rough sketch of the directory-clearing step is shown after this list):

1. Stops the GeoEvent Server and GeoEvent Gateway Windows services.
2. Deletes the contents of the following directories (at 10.9+ it will also delete the additional Kafka \logs1 and \logs2 directories):
3. Deletes the gateway configuration file:
4. Starts the GeoEvent Gateway and GeoEvent Server Windows services.
5. Writes a log file out to disk.
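The actual tool is a Windows .bat file, but the core of step 2 above (emptying the contents of a set of directories) is easy to picture. Here is an illustrative sketch of that one step in Java, not the tool's actual implementation; the path used is just an example, and in practice the Windows services are stopped first, exactly as the tool does.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.stream.Stream;

// Illustrative only: empty a directory without deleting the directory itself.
public class ClearDirectorySketch {
    static void clearContents(Path dir) throws IOException {
        if (!Files.isDirectory(dir)) {
            return;
        }
        try (Stream<Path> entries = Files.walk(dir)) {
            entries.sorted(Comparator.reverseOrder())   // delete children before parents
                   .filter(p -> !p.equals(dir))         // keep the top-level folder
                   .forEach(p -> p.toFile().delete());
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical example path; adjust to your install.
        clearContents(Paths.get("C:/Program Files/ArcGIS/Server/GeoEvent/data"));
    }
}
```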
Posted 03-06-2020 11:00 AM
BLOG
I've recently been working with the Motion Calculator Processor for GeoEvent and found myself wishing for a simpler version that would just calculate a line instead of all the statistics. The new Timetree Processor I created does exactly that: it creates a line geometry out of a cache of point geometries for a given TRACK_ID. The size of the line is determined by an event window that can be either count based (e.g. the last 5 points) or time based (e.g. the last 5 minutes); a rough sketch of the count-based windowing idea is included below. You can download a release of the processor here. Please note that this processor should be considered BETA since I haven't had a lot of time to test it. If you find an issue, please log it in the GitHub project here.
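To make the windowing behavior concrete, here is a minimal sketch of a count-based event window keyed by TRACK_ID. It is not the processor's implementation (see the GitHub project for that); the point is simply that each track keeps its last N coordinates, which are then strung together into a line.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: keep the last N points per track, the raw material for a track line.
public class TrackWindowSketch {
    private final int maxPoints;
    private final Map<String, Deque<double[]>> cache = new HashMap<>();

    public TrackWindowSketch(int maxPoints) {
        this.maxPoints = maxPoints;
    }

    public void addPoint(String trackId, double x, double y) {
        Deque<double[]> points = cache.computeIfAbsent(trackId, id -> new ArrayDeque<>());
        points.addLast(new double[] { x, y });
        while (points.size() > maxPoints) {
            points.removeFirst();   // count-based window: drop the oldest point
        }
    }

    public Deque<double[]> pointsFor(String trackId) {
        return cache.getOrDefault(trackId, new ArrayDeque<>());
    }
}
```

A time-based window would evict points by comparing each point's timestamp against the window interval instead of counting entries.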
Posted 03-05-2020 08:22 AM
POST
Hey, we recently created an option for delaying events. Please see this blog for more info: https://community.esri.com/people/eironside-esristaff/blog/2020/01/15/geoevent-delaying-andor-time-sorting-events
Posted 02-27-2020 09:00 AM
BLOG
The Spatiotemporal Big Data Store (STBDS) is fairly complex, and it can be difficult to figure out what setting values you may need without actually looking at the STBDS indices (in Elasticsearch) and gathering some information. In general, the guidance below is in line with the Elasticsearch (ES) recommendations.

Shard Size

Elastic recommends a few tens of GB per shard. For log-based data, the optimal size is 20-40 GB.

number_of_shards per node

Keep this to less than 20 shards per GB of heap space. For a 32 GB heap, try to keep the node at 640 shards or less. See https://www.elastic.co/guide/en/elasticsearch/reference/current/scalability.html

number_of_shards

Set this to 1.5 to 3 * N (where N = number of STBDS machines). Primary shards increase indexing performance, but too many can reduce query performance. Primary shards determine the scalability of the index: an index with 5 shards can theoretically scale up to 5 machines for indexing, with each machine handling 1/5th of the load, while an index with 1 shard cannot scale to 5 machines and will leave 4 machines sitting idle for indexing. Primary shards cannot be changed after the index is created, so it's best to over-allocate them a bit. The default number of shards can exceed the number of nodes, particularly in situations where:

- The spatiotemporal cluster is expected to expand in the future. If you start with 3 STBDS nodes but will add more machines over the project to support additional load, I recommend going with 7 primary shards. That supports scalability up to 7 machines without requiring re-indexing or re-creating the STBDS data source.
- You have sufficient CPU cores and disk bandwidth on each machine. So if you have a 3-node cluster with 8 cores per node, you should be fine with an index that has 7 primary shards. Source: https://www.datadoghq.com/blog/elasticsearch-performance-scaling-problems/
- You are trying to control shard size. If you are getting very large shards with daily rollover but very small shards (<10 GB) with hourly rollover, it may be better to increase the primary shards to reach the desired 20-40 GB per shard.

replication_factor

Set this to ceil(N / 3). A replication_factor of X means you can sustain the loss of X machines simultaneously without losing data. You can optionally set it to zero for a single node, but I think ES already performs that optimization. Replication increases query performance but reduces indexing performance. The replication factor can be changed dynamically; for instance, you can set it to zero when you are about to ingest a lot of data, and then set it back to 1 when the indexing is done.

rolling_data

If the data is log-based (not updated often) and newer data is more important than older data, you should care about rolling data. Otherwise, set it to yearly. Consider a given period of time and estimate the number of records that will accumulate for your index. Divide this number by the number_of_shards for that index to get the expected record count per shard. If this number is under 10 million records per some period, then rolling data should be set to that period*. That is:

- yearly for > 10 million records (per shard) per year
- monthly for > 10 million records (per shard) per month
- daily for > 10 million records (per shard) per day
- hourly for > 10 million records (per shard) per hour

For a more advanced way to determine the rolling data period, follow these steps (a small worked sketch of this and the shard math below is included at the end of this post):

1. Consider the ingest rate per day you consider "steady state". Find the yearly estimate (multiply that number by 365) and divide by number_of_shards.
2. If the result is less than 10 million, use yearly. Otherwise, divide by 12 and continue to the next step.
3. If the result is less than 10 million, use monthly. Otherwise, divide by 30 and continue to the next step.
4. If the result is less than 10 million, use daily. Otherwise, choose hourly.

* Assuming 3 shards and 4 KB per record, 10 million records ~= 40 GB / 3 = 13.3 GB per shard, which falls within the ES recommendations. Record size needs to be estimated empirically, but 2-4 KB is common. See: https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster

Shard index calculation

To determine how many shards are created per index interval:

    total_shards_per_rolling_period = number_of_shards + (replication_factor * number_of_shards)

To determine the "steady state" shard count:

    total_shards = total_shards_per_rolling_period * retention_period
    shards_per_machine = total_shards / N
    primary_shards_per_machine = shards_per_machine / (replication_factor + 1)
    replica_shards_per_machine = (shards_per_machine * replication_factor) / (replication_factor + 1)

For example, an index configured with 10 shards, a daily rolling period, a replication factor of 2, and 30 days retention:

    N (number of machines) = 5
    number_of_shards = 10
    replication_factor = 2
    rolling_period = 1 day
    retention_period = 30 days

    total_shards_per_rolling_period = 10 + (10 * 2) = 30 shards per day
    total_shards = 30 shards per day * 30 days = 900 shards
    shards_per_machine = 900 / 5 = 180 (60 primary, 120 replicas)
    primary_shards_per_machine = 180 / (2 + 1) = 60 primary shards
    replica_shards_per_machine = (180 * 2) / (2 + 1) = 120 replica shards

60 primary shards per machine isn't too bad. If necessary, the shard count can be lowered by reducing the replication factor. Please note that in 10.6+, STBDS will "shrink" smaller rollover indexes/shards to 1 shard to avoid an index explosion.

Flush Interval

Typically you don't need to change this from the default (1,000 milliseconds) unless you are running into write issues (rejected execution exceptions, the Feature Service document count lagging the GeoEvent STBDS output count, etc.). The refresh_interval is a separate setting on the STBDS data source, and it defaults to 1 second. Generally the STBDS output flush interval should be less than or equal to refresh_interval.

Additionally, consider looking at other aspects of the system which may affect query performance:

- Replica shards should increase query performance by parallelizing searches across nodes. Increasing the replication factor to 2 incurs additional storage/network/CPU load, but will reduce query latency and also increase the availability of the cluster.
- Active indexing can cause query performance to decrease on machines that also need to service queries at the same time.
- ES sometimes has problems balancing shards across nodes, so consider checking the shard allocation to make sure there is not a "hot spot" with nodes holding most of the primary shards.
- Newer versions of STBDS perform shrinking/compressing of old shards after a certain time period. There is some logic in place that determines whether this shrinking occurs; it reduces old indexes to 1 shard. If STBDS is shrinking your shards, that can definitely affect query performance.
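To make the arithmetic above easy to re-run with your own numbers, here is a small, self-contained sketch that reproduces the worked shard-count example and the advanced rolling-period check from this post. The variable names mirror the formulas; nothing here queries a real cluster, and the 1,000,000 records/day figure in the example is just an illustrative input.

```java
// Illustrative only: reproduces the shard math and rolling-period check from this post.
public class ShardMathSketch {
    // Advanced rolling-period check: yearly estimate per shard, then divide by 12, then by 30.
    static String rollingPeriod(double recordsPerDay, int numberOfShards) {
        double perShardPerYear = (recordsPerDay * 365) / numberOfShards;
        if (perShardPerYear < 10_000_000) return "yearly";
        if (perShardPerYear / 12 < 10_000_000) return "monthly";
        if (perShardPerYear / 12 / 30 < 10_000_000) return "daily";
        return "hourly";
    }

    public static void main(String[] args) {
        int numberOfMachines = 5;          // N
        int numberOfShards = 10;           // primary shards per index
        int replicationFactor = 2;
        int retentionPeriodsKept = 30;     // 30 daily indexes retained

        int totalShardsPerRollingPeriod = numberOfShards + (replicationFactor * numberOfShards);
        int totalShards = totalShardsPerRollingPeriod * retentionPeriodsKept;
        int shardsPerMachine = totalShards / numberOfMachines;
        int primaryShardsPerMachine = shardsPerMachine / (replicationFactor + 1);
        int replicaShardsPerMachine = (shardsPerMachine * replicationFactor) / (replicationFactor + 1);

        System.out.println("shards per rolling period: " + totalShardsPerRollingPeriod);  // 30
        System.out.println("total shards:              " + totalShards);                  // 900
        System.out.println("shards per machine:        " + shardsPerMachine);             // 180
        System.out.println("primary per machine:       " + primaryShardsPerMachine);      // 60
        System.out.println("replicas per machine:      " + replicaShardsPerMachine);      // 120

        // Example rolling-period check: 1,000,000 records/day across 10 shards => monthly.
        System.out.println("rolling period:            " + rollingPeriod(1_000_000, numberOfShards));
    }
}
```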
Posted 02-11-2020 02:23 PM
BLOG
Usually, when you receive data in GeoEvent, you can define the data formats up front and let the input figure out how to handle any conversions. But sometimes the data just doesn't work out, and you are forced to convert data after the fact. In this specific case, we had a data stream that contained two dates in different formats. The input only allows you to define one format at a time. To get around this issue, we let the input decipher the GPS datetime (it was the more important timestamp) and wrote the second date to a string field.

String to Date conversion format

So now we had an event with a date defined as a string, and we needed to convert that string to an actual date. Unfortunately, the GeoEvent field calculator doesn't provide any date functions. Lucky for us, GeoEvent does provide some basic "under the hood" data conversions; we just needed to know how to use them. It turns out that if you use a Field Mapper and map a String field to a Date field, GeoEvent will attempt to coerce that string into a date object. The trick is to format the string into ISO date format, "yyyy-mm-ddThh:mm:ss" followed by the time zone offset.

Example: 1:24:23 PM on July 4, 2019 EST should be formatted as 2019-07-04T13:24:23-05:00

Using the field calculator

If your data isn't already in this format (if it was, you probably wouldn't be here), you can use a Field Calculator to reformat it. For example, if your data is in the string format RecDate='20191121224602ES', you could use the following formula in the field calculator:

    substring(RecDate,0,4) + '-' + substring(RecDate,4,6) + '-' + substring(RecDate,6,8) + 'T' + substring(RecDate,8,10) + ':' + substring(RecDate,10,12) + ':' + substring(RecDate,12,14) + '-05:00'

You can then use a Field Mapper to map the field holding the result of the Field Calculator above (as a string) to a new field with the data type Date. A short sketch showing which characters each substring picks out follows below.
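To see exactly which characters each substring call picks out of the raw value, here is the same reformatting expressed in plain Java. It is just an illustration of the index arithmetic; in GeoEvent you would do this with the Field Calculator formula above.

```java
// Illustrative only: mirrors the Field Calculator expression for RecDate='20191121224602ES'.
public class RecDateReformatSketch {
    static String toIso(String recDate, String utcOffset) {
        return recDate.substring(0, 4)          // year   "2019"
             + "-" + recDate.substring(4, 6)    // month  "11"
             + "-" + recDate.substring(6, 8)    // day    "21"
             + "T" + recDate.substring(8, 10)   // hour   "22"
             + ":" + recDate.substring(10, 12)  // minute "46"
             + ":" + recDate.substring(12, 14)  // second "02"
             + utcOffset;
    }

    public static void main(String[] args) {
        // Prints 2019-11-21T22:46:02-05:00
        System.out.println(toIso("20191121224602ES", "-05:00"));
    }
}
```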
Posted 02-11-2020 02:11 PM
POST
Hey James Madden, It is not recommended to automatically reboot GeoEvent on a periodic basis; if you can avoid that, it would probably be a good thing. If you do have to reboot, as a safety precaution you should turn off all of your GeoEvent inputs, services, and outputs prior to stopping the GeoEvent Windows service and rebooting. Once the machine comes back up, restart your GeoEvent outputs, services, and inputs (in that order). Doing this ensures that things start up in an orderly fashion and all dependencies are accounted for. If you are in a situation where you are forced to automatically reboot the machines on a periodic basis, I would implement a Python script to run on shutdown and startup that performs the stop/start operations for you. You can find instructions for how to do that in Andy Ommen's article Scripting Tasks Using the GeoEvent Admin API. Best regards, Eric
Posted 02-10-2020 03:53 PM
POST
Hey, At 10.6.1 you can add the processor manually. Unzip the release below and place the .jar file into your <ArcGISServerInstall>/geoevent/deploy directory. Note that the version on the .jar file (1.5.1) doesn't have to match your GeoEvent version.

https://github.com/eironside/defense-solutions-proofs-of-concept/releases/download/10.5.1/addxyz-processor-10.5.1.zip

Hopefully that will work; if not, we'll have to take a look at your field calculation. Best, Eric
Posted 02-04-2020 12:47 PM
POST
Hey Mike,

1. Create a new Web Map that contains the data you want to view (Survey123 points and any other layers). After you save the web map, note the ID. You can use it in the following URL (test the URL in a browser to be sure it opens the correct web map): https://www.arcgis.com/home/webmap/viewer.html?webmap=[WebMapID]

2. Back in your GeoEvent Server, use the Add XYZ processor in your GeoEvent Service to add the XY fields (use a field named LAT for your Y coordinates and LON for your X coordinates) to your events before sending them to the email output.

3. Update your email output message body to include the following URL (adjust the level to the zoom level needed): https://www.arcgis.com/home/webmap/viewer.html?webmap=[WebMapID]&center=${LAT},${LON}&level=4

And that's it. Some advanced things you can do:

- In step 3, if your message body is HTML, you can use an HTML link instead of the bare URL: <a href='https://www.arcgis.com/home/webmap/viewer.html?webmap=[WebMapID]&center=${LAT},${LON}&level=4'>Click Here to Open Web Map</a>
- In step 1, you can follow the instructions on adding search capability to your web map (see my last reply) and, instead of using the LAT/LON/zoom to center your map, go to the search item directly using a URL: https://www.arcgis.com/home/webmap/viewer.html?webmap=[WebMapID]&find=${[The field name in your event that contains the ID of the item in the searchable layer]}

Best, Eric
Posted 02-03-2020 03:44 PM
POST
Hey, I think there is significant overlap in the use cases for GeoEvent and webhooks. But in my experience, the main functional difference between the two is a matter of message scale: GeoEvent is generally geared towards processing hundreds or thousands of events per second, while webhooks are typically used for processing information on the scale of one event per minute or second. Those are just general observations on my part. Also a generality, but the implementation choice between GeoEvent and a webhook depends on whether you have access to GeoEvent (and are comfortable using it), whether your data is available to GeoEvent, and whether the 'incidents' you want to notify others about can be processed by GeoEvent. If you need to do custom development within GeoEvent and/or you already have a standalone custom application that you are developing/supporting, then it might make sense to implement the webhook functionality in place. Also, don't forget that webhooks allow users to help themselves to data inside your application. This can be beneficial when you don't know who is going to be interested in consuming your events (GeoEvent does allow users to subscribe to web sockets and consume web services, so maybe this isn't such a big deal). And finally, the two technologies are not mutually exclusive. As my blog (mentioned by RJ above) shows, you can use GeoEvent to receive webhook events.
Posted 01-22-2020 08:57 AM
POST
Hey Chris, You might want to verify that the output feature service you are pushing these events to supports Z values (that has bitten me a few times in the past). Also, if you think the events are being sent to the feature service, check the logs there to see if there are any issues (I assume the message 'too many parts' above came from the feature service and not GeoEvent?).
Posted 01-17-2020 01:44 PM
POST
Hey Mike, I have a couple of suggestions for you. Your first option is to use the Add XYZ Values Processor in your GeoEvent Service to add the X and Y fields to your events, then use those fields in your email output format instead of trying to dereference the X/Y from the geometry. Second, I don't know your exact use case for reporting the lat/lon to a user in an email, but if the intent is to let them pull it up in a web map, I would replace those coordinates with a link to a web map. To make this even more effective, you can add a search capability to your web map and use URL parameters in a web map, application, Ops Dashboard, or Web App Builder to take the email recipient to a map and have it zoom directly to (and even open a popup for) the Survey123 point in question.
Posted 01-17-2020 01:36 PM
BLOG
One of the more interesting use cases I've seen over the past year is a request for a "Snap To" processor in GeoEvent. Unfortunately, this is a relatively costly operation to perform (timewise) and doesn't lend itself to high event throughput. But if you are still interested in a Snap To processor and willing to accept some performance loss, the following methodology might be useful. The solution involves calling a Feature Service's Query operation with a spatial operator. As I said, the call to the feature service does take some time (in my testing it averaged about 95 ms round trip), but this solution doesn't require a large number of GeoFences to be loaded into memory, so it isn't limited by the road network size.

Spatial Query Processor

First, you will need the Spatial Query Processor located here: http://www.arcgis.com/sharing/rest/content/items/c479d511a317431da4ceb9bb6d10e88d/data

Attached to this post is a document that describes the process of preparing the road network data and publishing it as a feature service, creating your GeoEvent Service, and using the Spatial Query Processor. As always, leave a comment if you have questions, issues, or enhancement requests. A rough sketch of the kind of query request involved is shown below.
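For context, the kind of request involved is a standard ArcGIS feature layer query with a spatial filter. The sketch below is illustrative only: the service URL and coordinates are hypothetical, and the actual Spatial Query Processor handles configuration, authentication, and the snapping logic that is not shown here.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative only: query a (hypothetical) road feature layer for features near a point.
public class SpatialQuerySketch {
    public static void main(String[] args) throws Exception {
        String layerUrl = "https://example.com/arcgis/rest/services/Roads/FeatureServer/0/query";
        String params = "?where=1%3D1"
                + "&geometry=-83.2988303,42.33"           // event point (x,y)
                + "&geometryType=esriGeometryPoint"
                + "&inSR=4326"
                + "&spatialRel=esriSpatialRelIntersects"
                + "&distance=50&units=esriSRUnit_Meter"   // search radius around the point
                + "&outFields=*&returnGeometry=true&f=json";

        HttpRequest request = HttpRequest.newBuilder(URI.create(layerUrl + params)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The nearest returned road segment would then be used to "snap" the event geometry.
        System.out.println(response.body());
    }
}
```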
Posted 01-15-2020 12:22 PM