

29 Posts authored by: rsunderman-esristaff Employee

On both the Linux and Windows platforms, GeoEvent Server runs within a Java Virtual Machine (JVM) instance. The out-of-the-box configuration allocates only 4GB of your server's available RAM to this JVM, and all GeoEvent Server operations requiring RAM draw from that allocation.


Some reasons you might want to increase the amount of RAM allocated to the GeoEvent Server's JVM include:

  • A need to load a large number of geofences into the GeoEvent Server's geofence manager
  • A need to process a large velocity or large volume of event records (more than a few hundred per second)
  • A need to cache a large amount of information from a secondary enrichment source for event record enrichment
  • An expectation that real-time analytics using Incident Detectors will generate a large number of concurrent incidents
  • An expectation that real-time analytics requiring state (e.g. Track Gap detection and monitoring, or spatial conditions such as ENTER / EXIT) will need to work with a large number of assets with unique track identifiers


System administrators who have confirmed that their server machine has sufficient available RAM, and that their GeoEvent Server deployment needs a larger allocation, can follow the steps below to increase the memory available to GeoEvent Server by allocating more RAM to the hosting JVM.


  1. Stop GeoEvent Server
    • On a Windows platform, make sure the GeoEvent Server Windows Service and its associated java.exe process are stopped.
  2. Open the ArcGISGeoEvent.cfg configuration file in a text editor
    • On a Windows platform, this file is found in the ...\ArcGIS\Server\GeoEvent\etc folder by default
    • When located beneath C:\Program Files you will need to edit this file as a user with administrative privilege
  3. Locate the block of JVM Parameters in the file
    • Note that at different releases the indexes for the JVM parameters will be different from the illustration below

  4. Increase the -Xmx parameter for the Java Heap Size from its default (4096m) to specify a larger allocation
    • For example:   -Xmx8192m
    • Note that the allocation is in megabytes
  5. Save your edits to the ArcGISGeoEvent.cfg file (and dismiss your text editor)
  6. Start GeoEvent Server
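As an illustration only, the edited block of JVM parameters might look something like the sketch below. The exact parameter indexes and neighboring entries vary by release, and the numbered `wrapper.java.additional` convention shown here is an assumption based on the Java service wrapper format; treat this as a sketch rather than a copy/paste target:

```
# JVM parameter block in ArcGISGeoEvent.cfg (indexes vary by release).
# The -Xmx entry below has been raised from the default 4096m:
wrapper.java.additional.3=-Xms1024m
wrapper.java.additional.4=-Xmx8192m
```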


Using system administrative tools you should be able to verify that the JVM instance (java.exe process) never consumes more memory than what is allocated by the ArcGISGeoEvent.cfg configuration file, and that more than the default 4GB is now available for GeoEvent Server operations.

Hello Everyone --


I've recently completed three short videos illustrating how to use stream services and capabilities related to stream services -- specifically 'Store Latest' and 'Related Features', which were never covered in the product tutorial available online.


We are working on updating the tutorial's exercises and narrative to be consistent with these new videos, but I don't want to hold the videos until the tutorial re-write is complete. (The videos will eventually be bundled with the tutorial for download.)


The basic stream service capability provided by GeoEvent did not change with the ArcGIS 10.5 product release. However, some minor changes in behavior were made with regard to 'Store Latest' when working within different enterprise configurations, such as single-machine vs. multi-machine and when you have federated with a Portal for ArcGIS vs. when you have not federated with a Portal.


Enhancements to the 'Related Features' configuration workflow now allow you to select the feature service from which related features will be obtained (rather than having to manually enter the URL of an existing feature service).


Three MP4 files have been attached to this blog. Please check out the videos -- each is only 10 to 15 minutes long. Let the team know by e-mail if you think bundling a short video with a less detailed tutorial is an approach that works for introducing product updates and documenting product functionality.


Best Regards --


A couple of times a year a script developer will ask me about using the GeoEvent Admin API to automate some administrative task - such as stopping and restarting a GeoEvent input.

Any user action taken through the GeoEvent Manager web application makes a request against a URL in our GeoEvent Admin API. So, in theory, once you authenticate with the GeoEvent Admin API, you should be able to script some fairly simple tasks, like stopping a running input, modifying one of the input's parameters, saving the input's new configuration and restarting the input to begin receiving event data.

I'd like to share a blog post by Andy Ommen, a solution engineer working with Esri Database Services out of our Boston regional office. Take a look and let him know if you find his information useful. I really appreciate him sharing this out through his blog.  Here's the link:  Scripting tasks using the GeoEvent Admin API

Update March 2019 – Eric Ironside, a product engineer on the Real-Time team, has created a second blog illustrating how to update the properties of a GeoEvent Input. Much appreciated, Eric!

Update November 2019 – Jake Skinner, a platform configuration engineer in Esri's Philadelphia region, has another blog illustrating how to authenticate with the GeoEvent Server administrative API in order to script administrative actions like updating the configurable properties of inputs and outputs. Thanks Jake!

It might also be helpful to know that the ArcGIS Server Administrative REST API is documented using Swagger. You can review available operations exposed by the Admin API:

  1. Browse to https://my-machine.domain:6143/geoevent/admin
  2. Acquire a token from your ArcGIS Server (or Portal for ArcGIS, if federated) and log in
  3. In the top-left corner, click the API link to take you to the Swagger Doc for the GeoEvent Server Admin API
  4. Note the advice at the top of the page on how to authenticate your admin script's requests with the API
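As a hedged sketch of what such a script might look like: the helper functions below build the token request and Admin API URLs. The token endpoint and its parameters follow typical ArcGIS Server REST patterns, and the input stop/start resource paths in the comments are illustrative assumptions -- verify everything against the Swagger doc described above before relying on it.

```python
# Hypothetical sketch of scripting against the GeoEvent Admin API.
# Ports follow the defaults mentioned in this post (6143 for GeoEvent);
# endpoint paths and token parameters are assumptions -- confirm them
# against the Swagger doc.
import urllib.parse


def token_request(server, username, password):
    """Build the URL and form body for an ArcGIS Server token request."""
    url = f"https://{server}:6443/arcgis/admin/generateToken"
    body = urllib.parse.urlencode({
        "username": username,
        "password": password,
        "client": "requestip",
        "f": "json",
    })
    return url, body


def admin_url(server, path):
    """Compose a GeoEvent Server Admin API endpoint."""
    return f"https://{server}:6143/geoevent/admin/{path.lstrip('/')}"


# A stop/start cycle for an input would then be two authenticated
# requests (resource paths here are illustrative):
#
#   admin_url("my-machine.domain", "input/<input-id>/stop")
#   admin_url("my-machine.domain", "input/<input-id>/start")
```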

Hope this information is helpful –

The GeoEvent product team would like to announce the release of a new tutorial introducing the Spatiotemporal Big Data Store. The tutorial is available on the GeoEvent product gallery with our other product tutorials.


Here is a direct link to the new tutorial:  Tutorial - Spatiotemporal Big Data store


Feedback is welcome. Please add any comments you have to the item in the gallery or send an e-mail to


Best Regards --


Changes made to the Text adapter for the 10.4 product release might catch a few folks off guard, so I want to call attention to what you can expect to see and why.



Several inbound connectors – specifically those using the Generic-JSON and XML adapters – provide the ability to 'Construct Geometry From Fields', so that when event data containing coordinate information for a point location is received, the GeoEvent input can construct a Geometry before sending the event to a GeoEvent Service.


At the 10.3.1 release, inputs such as 'Receive Text from a TCP Socket' and 'Watch a Folder for New CSV Files' did not have this capability because it was not included in the Text adapter.


It was observed that if a user allowed the input to create a GeoEvent Definition, and then later reconfigured the input to construct a Geometry from specified event fields without also editing the event definition to include a Geometry field to hold the constructed Geometry, the input would stop ingesting event data.


For the 10.4 release, an enhancement was made to the Text adapter to bring it in line with the Generic-JSON and XML adapters:


Issue #945:  Text adapter needs to follow Generic-JSON adapter's behavior and create a field of type Geometry, tagged GEOMETRY, when creating a GeoEvent Definition


Now, at the 10.4 release, when you select the ‘Construct Geometry From Fields’ option and specify the name of event attribute fields containing coordinate values, the GeoEvent Definition generated for you by the input will include a field named “Geometry”.  The field will be added as the last field in the event definition and will be tagged with the GEOMETRY GeoTag.





If you leave the ‘Construct Geometry From Fields’ parameter set to its default ‘No’ the generated event definition will not have the Geometry field highlighted above.


If you elect to edit the generated GeoEvent Definition to replace 'Field1', 'Field2', 'Field3' (etc.) with meaningful names, but neglect to include a field into which the constructed Geometry can be placed, and then alter your input to switch 'Construct Geometry From Fields' from 'No' to 'Yes', then beginning with the 10.4 release the input will alter the event definition for you to create the needed Geometry field.


Here’s where this enhancement might produce undesired behavior …


Let’s say that you had been using GeoEvent at the 10.3.1 release with a ‘Receive Text from a TCP Socket’ input successfully receiving event data sent from the GeoEvent Simulator. Your GeoEvent Definition might look something like the following:




You export your 10.3.1 GeoEvent product configuration as an XML file, upgrade to 10.4, and import your configuration. When you try simulating the same data which was working at the 10.3.1 release, the input’s event count no longer increments and it does not appear that your GeoEvent is receiving any of the data you’re simulating. You look in the GeoEvent logs and see a message:


Cannot find matching GeoEvent Definition. Event is not created.

The incoming text is SWA2706,3/16/2012 02:25:30 PM,IAD,TPA,B733,37000,-79.585739,34.265521


What’s going on?


It is likely that the field named “Location” is not the field the ‘Construct Geometry From Fields’ operation expects. Even though you were careful to copy and edit the GeoEvent Definition, back in 10.3.1, to include a field whose data type is Geometry and tag the field GEOMETRY … its name is not “Geometry”, so the input doesn’t recognize it.


Remember that delimited text is not well-defined like JSON or XML. If the inbound events contained field names, the adapter would be able to map the received data into the correct fields in the event definition. The 10.4 Text adapter does not assume that a field of type Geometry is sufficient (even if the field is tagged GEOMETRY). It looks for a field named "Geometry" and, failing to find one, logs the error shown above and discards the event.
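To make the behavior concrete, here is a small illustrative sketch (not GeoEvent internals) of how a delimited-text adapter maps values by position and then locates the geometry field strictly by name. The field names are hypothetical, loosely mirroring the flight-data sample text shown earlier in this post:

```python
# Illustrative sketch only -- not the actual GeoEvent Text adapter.
# Field names below are hypothetical, mirroring the flight-data
# sample text shown earlier in this post.

def adapt(text, field_names, x_field, y_field):
    """Map delimited values onto named fields by position, then
    construct a point geometry -- but only if the event definition
    contains a field named exactly 'Geometry'."""
    if "Geometry" not in field_names:
        # 10.4 behavior: no matching definition -> event discarded
        raise ValueError("Cannot find matching GeoEvent Definition.")
    event = dict(zip(field_names, text.split(",")))
    event["Geometry"] = {
        "x": float(event[x_field]),
        "y": float(event[y_field]),
        "spatialReference": {"wkid": 4326},
    }
    return event


fields = ["FlightId", "TimeStart", "Origin", "Destination",
          "Aircraft", "Altitude", "X", "Y", "Geometry"]
sample = ("SWA2706,3/16/2012 02:25:30 PM,IAD,TPA,"
          "B733,37000,-79.585739,34.265521")
event = adapt(sample, fields, "X", "Y")
```

Renaming the last field from "Location" to "Geometry" in the event definition is exactly what lets the name lookup above succeed.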


To fix the problem, all you have to do is edit the GeoEvent Definition imported with your configuration and rename the last field from “Location” to “Geometry”.


This is a scenario many users will likely encounter when upgrading 10.3.1 configurations that include a TCP/Text input to the 10.4 release. Remember that at 10.4 your Geometry field should be named "Geometry" (not "Location", "Target", or "Geom"), and it must still be the last field in the GeoEvent Definition.


Hope this information saves you a little future grief -


I received a second request recently for the information below, provided to me originally by Javier on the GeoEvent Server development team, so I thought I’d post it in case others are looking for the information.


See also:  GeoEvent WebSockets in 10.6 with ARR



From: Javier Delgadillo

Sent: December 01, 2015


Since the ArcGIS WebAdaptor does not support GeoEvent Server, you cannot use it to proxy GeoEvent requests. You can however configure Application Request Routing (ARR) and create your own rules to allow IIS to proxy the requests. Below is a screenshot of a rule created within IIS 8.5 to allow proxy of GeoEvent Server via IIS:




After installing ARR, double-click the URL Rewrite icon and create a similar rule (replacing the hostname of the Rewrite URL to match your environment):



After applying the rule confirm you are able to access GeoEvent Manager through IIS:



Configuring your SSL certificates will be important. You can use IIS to create and export a Domain Certificate with a private key and then configure ArcGIS Server and GeoEvent to use that certificate. An alternative would be to get IIS to trust the certificate that is configured with your ArcGIS Server/GeoEvent installation.


It was discovered that, using the 10.3.1 release of GeoEvent Server and IIS 8.5 with WebSockets enabled and ARR configured as a reverse proxy for GeoEvent, stream service connections did not work. This issue was addressed in the 10.4 release of GeoEvent Server.


10.3.1 deployments can create a reverse proxy, similar to the Web Adaptor, specifically for WebSockets using NGINX. The steps outlined below provide some detail on installing and configuring NGINX. A sample nginx.conf configuration is attached which you can use as a reference.


Please note: Esri tech support cannot help troubleshoot reverse proxy configurations, whether NGINX, IIS, or Apache. If you do plan on deploying a reverse proxy as part of a systems solution, make sure someone with appropriate experience is available to help you troubleshoot. Esri tech support has a KB article you can refer to.


  • Install NGINX as a WebSocket reverse proxy server.
  • Enable HTTPS on the proxy server and configure it to use a certificate issued by a trusted 3rd party CA (Thawte, VeriSign, DigiCert).
  • Configure the proxy server to forward requests to GeoEvent Services. Attached is an example of an Nginx configuration file. You will need to change the following settings:
    • server_name <your Web Socket reverse proxy server name>
    • ssl_certificate <your CA-cert certificate file>
    • ssl_certificate_key <your CA-cert certificate key file>
    • server <list of servers that GeoEvent Extension is running on>. In this example, the server names are,, and


Note: The WebSocket reverse proxy server is set to use HTTPS, but it connects to HTTP on the backend. In the example configuration file, the proxy_pass for port 443 is set to http://skivmHTTP; not https://skivmHTTP.
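For orientation, a minimal NGINX WebSocket reverse proxy generally looks something like the sketch below. This is not the attached sample file: the upstream name, hostnames, certificate paths, and backend port are placeholders, and the `Upgrade`/`Connection` headers are what allow the WebSocket handshake to pass through the proxy:

```nginx
# Minimal sketch -- placeholder names, not the attached nginx.conf.
upstream geoevent_ws {
    server geoevent1.example.com:6180;   # backend GeoEvent machine(s)
}

server {
    listen 443 ssl;
    server_name proxy.example.com;
    ssl_certificate     /etc/nginx/certs/proxy.crt;
    ssl_certificate_key /etc/nginx/certs/proxy.key;

    location / {
        proxy_pass http://geoevent_ws;          # HTTP to the backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # WebSocket handshake
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```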


  • In a web browser, navigate to http://<ServerName>:6080/arcgis/admin to access the ArcGIS Server Administrator Directory.
  • Log in and click system > properties > update.
  • Enter a property called WebSocketContextURL to point to the WebSocket reverse proxy.


For example:

{"WebSocketContextURL": "wss://"}


  • In a browser, navigate to:

https://<WAMachineName>/<context>/rest/services/<StreamServiceName>/StreamServer/subscribe


  • Click Subscribe to verify data is streaming. You can also verify data is streaming by navigating to:

https://<WAMachineName>/<context>/rest/services/<StreamServiceName>/StreamServer?f=jsapi





Hope this information is helpful -


What is GeoEvent doing when it receives event data?

When an inbound connector (input) receives an event payload, an adapter parses the event data and constructs a GeoEvent. It uses a GeoEvent Definition (which specifies the data structure of each event) to do this. An input passes the GeoEvents it constructs to a GeoEvent Service for processing. GeoEvents are passed from an input to a GeoEvent Service using a message bus. The message queuing implementation is internal to the product and abstracted from the user (the 10.3 / 10.3.1 releases use RabbitMQ). The GeoEvent Service applies event filtering and processing to the events it receives then delivers the processed GeoEvents to an outbound connector (output) for broadcast.


In order to add/update features in a Geodatabase, a GeoEvent outbound connector leverages REST endpoints exposed by a published feature service. The features are actually stored in an underlying feature class, in a Geodatabase. So adding and updating features actually involves HTTP calls and backend database transactions.


When the GeoEvent extension was first released, event data had to be persisted in a GeoDatabase and exposed through a published feature service before a client could visualize the data on, say, a Web Map. At the 10.3 release of the product we introduced the concept of a Stream Service … but more on that in a little bit.


Event Throughput / System Performance

When considering system scalability and the GeoEvent extension, most folks are thinking about the number of event messages they need to process each second. A simple GeoEvent service with a single TCP/Text input sending data out to a TCP/Text output can process thousands of events per second. But such a simple service isn’t doing very much … it isn’t undertaking any computational analytics or spatial relationship comparisons (which can be CPU intensive), it isn’t transacting with a feature service to enrich event data with attributes from a related feature (which involves network latency), and it isn’t doing anything with GeoFences (which can use a lot of available RAM).


Improvements made to the product for the 10.3 release focused on maintaining a high level of GeoEvent throughput when real-time analytics such as spatial filtering (e.g. GeoEvent INSIDE GeoFence) were incorporated into a GeoEvent Service.


Just as important as the number of events you expect to receive each second, or the filtering/processing you plan on performing, is the overall size of each event in bytes. You can easily process 1000 events per second on a single server when the events are fairly small – say just a few simple attributes with a point geometry. If your events include several polygon geometries each with many dozens of vertices and/or complex hierarchical attribute structures with lists/groups of values, then each event's "size" will be much larger. More system resources will be required to handle the same number of events and your events-per-second throughput will be reduced.


Most folks want to use the event data to update features in a feature class through a published feature service. Transactional throughput with a feature service is an acknowledged bottleneck. When you configure a GeoEvent output to add/update features in a feature service you can expect to be limited to just a couple hundred events per second. There are a couple of parameters you can adjust on your GeoEvent outbound connector to throttle event output as you prototype your event transactions, but you typically don’t need to if you are below the 200 – 300 events per second threshold we recommend for updating features in a locally hosted feature service. Please note that event throughput is further restricted when updating a hosted feature service within an ArcGIS Online Organization account.


Regarding system performance, say you were to receive a burst of 1000 events on an input. If you knew, from testing you had conducted, that your local server and database could only update 240 events per second, then you can assume that GeoEvent will need to create a backlog of events and work to process events off the backlog. The 10.3 product release does a better job of this than the 10.2.2 release.


Say your data was not a single burst, but that you expected to receive a sustained 1000 events per second, and your output was still only handling 240 events per second. At 10.3 you can expect that the GeoEvent performance will degrade sharply as the backlog begins to grow. GeoEvent will work to clear the backlog, but it will continue to grow until a system resource becomes saturated and the system becomes unresponsive. This is the behavior we observed during the GeoEvent 10.3 Holistic Event in DC. It could be that you run out of system RAM, it could be that you saturate the network card. If you are not processing events as fast as you are receiving them you will have a problem.
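The arithmetic behind that degradation is easy to sketch. Using the hypothetical rates from the example above (1000 events per second in, 240 out), the backlog grows by 760 events every second of sustained load:

```python
# Back-of-envelope backlog model using the example rates from this
# post: sustained 1000 events/s inbound against 240 events/s outbound
# grows the backlog until a resource (RAM, network) saturates.

def backlog_after(seconds, in_rate=1000, out_rate=240):
    """Events queued after `seconds` of sustained load; zero when the
    output keeps up with the input."""
    return max(0, in_rate - out_rate) * seconds
```

After one minute at those rates the backlog already holds 45,600 events, which is why the system eventually becomes unresponsive rather than merely slow.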


Stream Services

Stream Services, available with the GeoEvent 10.3 release, provide an alternative to updating features in a feature service. The key when thinking about Stream Services is to separate in your mind data visualization and data persistence. We have another tutorial, Stream Services in GeoEvent, which you might want to take a look at if streaming data out for visualization without persisting the data in a geodatabase is something that you are interested in pursuing.


Stream Services rely on WebSockets. The JSON broadcast from a Stream Service can be received by a custom JavaScript application; the Stream Service concept is supported by a developer API. They can also be added to Web Maps as feature layers to display the event data in near real-time.


The reason behind the development of Stream Services was that, without them, your only option for data visualization was to first persist your event data as features in a Geodatabase feature class, then have your client application query the feature class through a published feature service. An ArcGIS Server instance running on a local GIS Server with a local RDBMS supports up to about 240 events per second – about 1/10th of what a typical GeoEvent Service is able to process each second. Streaming the event data out as JSON was one way to provide data visualization for folks who didn't require data persistence.


WebSockets also allow a client to specify spatial and/or attribute filters which the server will honor. This allows network bandwidth to be preserved by limiting the data being passed over the socket connection. It is up to the client application to know what it can support and specify filters which limit the amount of data the client will receive.


In the case of a custom JavaScript web application, if too much information is sent from GeoEvent to a browser's web application, the browser will simply freeze and may crash. The socket connection will close when the client dies and GeoEvent will continue broadcasting events through the Stream Service for receipt by other clients (currently connected or in the process of connecting).


System Resources / System Sizing

There’s actually quite a bit you need to consider when thinking about scalability.


  • Physical Server
    When leveraging virtualization, in my experience, it is most important that the physical server hosting the virtual machines adequately support the virtualization. The physical server is going to need considerable memory and a substantial processor if you plan on instantiating multiple VMs each with 4 CPU cores and 16GB of RAM. Also, the more physical servers you incorporate in your architecture, the more important the network card will become. At the 10.3 product release we support high-availability and high-velocity concepts by configuring an ArcGIS Server cluster with multiple servers and installing GeoEvent on each server in the cluster. We have a Clustering in GeoEvent tutorial which introduces these concepts.


  • RAM
    In a Holistic lab we conducted back in September, we discovered that the type of RAM is as important as the amount. It's not sufficient to consider a server as simply having 16GB of RAM. Premium DDR4 SDRAM will provide 2133 – 4266 MT/s (million transfers per second) whereas DDR3 RAM from just a couple years ago will provide only 800 to 2133 MT/s. You may not have any control over the type of RAM in the physical server hosting the VMs being provided to you  – but it matters.

    If you are importing GeoFences into the GeoEvent processor, know that these geometries are all maintained in memory. If you have thousands of complex GeoFences with many vertices, that is going to consume a lot of RAM. Significant improvements were made at the 10.3 product release to allow GeoEvent to better handle the spatial processing needed to determine the spatial relationship between an event’s geometry and a GeoFence, so event throughput is much better – but a high volume of spatial processing can consume significant CPU.


  • CPUs
    The number of CPU cores is generally important once you begin designing GeoEvent Services with actual processing – it is not as important when benchmarking raw event throughput. For example, a benchmark taking event data from a TCP socket and sending the data to a TCP socket doesn't require much CPU; a large amount of premium RAM is more important in this case. Projecting an event's geometry, enriching events, calculating values – these analytics will all place load on your CPU resource.


I wouldn't be surprised to learn that a physical server with only 4 cores and 32GB of premium RAM outperformed a virtual cluster of three VMs each with 4 cores and 16GB of RAM. The hosting server might be an older machine with DDR3 or DDR2 generation RAM. The hosting server might be supporting other virtualization. Network connections between physical machines might benefit from an upgrade.


Given the above, you can probably understand why we recommend that you dedicate an ArcGIS Server site to real-time event processing. You might have heard this referred to as a "silo'd" approach to system architecture, in which one ArcGIS Server site is set up to handle your map service, feature service, geoprocessing, and map tile caching, with a second ArcGIS Server site set up for real-time event processing and stream service publication.


There are many factors you will need to consider when making system architecture and hardware purchasing decisions. Videos from technical workshops presented at our Developer Summit in Palm Springs as well as the International User Conference in San Diego are available on-line at ... search for geoevent best practices to find a video which presents some of these considerations.


The above is, of course, specific to the ArcGIS GeoEvent Extension for Server. A much more comprehensive look at system architecture design strategies is provided by Dave Peters in his blog:  System Architecture Design Strategies Class Resources.


Hope this information is helpful -


Another recurring question:


I've configured a 'Poll an ArcGIS Server for Features' input to 'Get Incremental Updates', is there a way to prevent the input from polling all of the features in a feature class when GeoEvent Server is restarted?


The short answer is: No.  When GeoEvent Server is restarted (or the server on which it is running is rebooted), inputs which use the out-of-the-box FeatureService transport to poll an ArcGIS Server map service (or feature service) lose whatever value they've cached which enables them to query for features which are "new" relative to the last poll conducted by the input.


The capability to 'Get Incremental Updates' is unique to the 'Poll an ArcGIS Server for Features' input connector and should not be confused with the 'Receive New Data Only' parameter, exposed by the HTTP transport, which requires event data include the HTTP "Last-Modified" header. (Refer to comments in the thread Re: Receive RSS Inbound Connector.)


The issue we're exploring here deals only with the 'Poll an ArcGIS Server for Features' input connector -- or a custom input you may have configured which uses the FeatureService transport to poll an ArcGIS Server map / feature service.


An input configured to poll a map / feature service and retrieve only incremental feature updates maintains an in-memory cache. The value in this cache depends on whether your 'Method to Identify Incremental Updates' is ObjectID or Timestamp. In either case the input incorporates the largest value observed from its last poll into a WHERE clause so that only features whose OID (or date/time) is greater than the greatest value (from the last query) are returned (by the next query).
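As an illustrative sketch (not the actual FeatureService transport), incremental polling with an in-memory cache amounts to something like the class below, which also makes clear why the first poll after a restart, when the cache is empty, retrieves everything:

```python
# Illustrative sketch of 'Get Incremental Updates' -- not actual
# GeoEvent code. The cache lives only in memory, so a service restart
# resets last_seen to None and the next WHERE clause matches every
# feature in the map / feature service.

class IncrementalPoller:
    def __init__(self, id_field="OBJECTID"):
        self.id_field = id_field
        self.last_seen = None          # in-memory cache

    def where_clause(self):
        if self.last_seen is None:
            return "1=1"               # first poll: everything
        return f"{self.id_field} > {self.last_seen}"

    def record_poll(self, values):
        """Remember the largest value observed in the last poll."""
        if values:
            self.last_seen = max(values)
```

Stopping and restarting the input keeps this object (and its cache) alive; restarting GeoEvent Server constructs a fresh instance, so `where_clause()` falls back to `1=1`.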


If you stop the input the cache is maintained, so that when the input is restarted it will be able to poll for features whose specified attribute value is greater than the value in the cache. If you stop the ArcGIS GeoEvent Server Windows service, or reboot the server, the cache is destroyed and the input has no way of knowing which features were polled previously. The next poll conducted by the input will retrieve all of the items in the map / feature service.


This becomes painfully obvious when one of the notification outputs (e.g. 'Send a Text Message' or 'Send an Email') is included in a GeoEvent Service which polls a map / feature service for event data. When the GIS Server is rebooted, an e-mail recipient can potentially receive hundreds of messages if the event data polled by the input satisfies filtering and/or processing criteria designed into the GeoEvent Service.


The motivation behind this behavior is that a cache persisted within a system file on disk could be difficult to find, might only be editable by a user with administrative credentials, and unnecessarily involves file I/O in a potentially high-volume event processing scenario. Locating and deleting a system file-based cache was deemed more burdensome than requiring that GeoEvent Server outputs be stopped in order to prevent unwanted notifications from being sent. Basically, this behavior is by design.


As a best practice, if you find you are frequently restarting GeoEvent Server (or having to reboot your server), make sure to stop all notification outputs, or any outputs you do not want to process events based on "old" features, when GeoEvent Server is restarted.


You can also employ a strategy of writing notification messages to a secondary feature layer, rather than directly to a notification output. The secondary feature layer acts as a notification message cache. You could then design a second GeoEvent Service (or extend your original GeoEvent Service) to poll this "notification message cache" and as event messages are sent to a notification output, use an 'Update a Feature' output to flag the notification message as having been sent. This will enable a filter to discard messages which have been sent and avoid sending repeat notifications.


If you have other approaches you have developed to deal with this particular behavior, your comments are welcome.


As always, I hope this information helps.


- RJ

Morakot has been keeping a blog under his GeoNet user-id.


I'm going to reference a recent blog of his, and probably begin re-posting his content here, in the GeoEvent product's blog.


How to Create Temporal Filter in GeoEvent


- RJ

Hey All -



Normally the GeoEvent Simulator loads comma separated text from a simulation file and allows you to send this data to a GeoEvent Server tcp-text-in input to simulate a real-time data feed. Sample data in the tutorials either represents a point geometry as a quoted pair of X,Y coordinate values (e.g. "-75.175,39.991") ... or provides the coordinates of a point location as separate X and Y attribute values which the input can take and use to construct a geometry.


But what if you want to simulate a dynamic polygon, such as a series of forecast areas affected by a storm?


Here's a trick you might find handy:

  1. You do not have to use a comma to delimit your event attributes in a simulation file.
  2. Valid Esri Feature JSON can be placed in a simulation file.
  3. A GeoEvent Server input can be configured to interpret the JSON as a geometry.


Consider the following two lines of simulation input:


"AA-1234";"12/24/2015 23:59:59";{ "rings": [ [ [-75.175, 39.991], [-75.173, 39.991], [-75.173, 39.99], [-75.175, 39.991], [-75.175, 39.991] ] ], "spatialReference": { "wkid": 4326 } }


"BB-7890";"02/15/2015 12:34:56";{ "rings": [ [ [-8368449.66, 4864715.92], [-8368263.15, 4864676.62], [-8368272.04, 4864618.25], [-8368459.87, 4864645.20], [-8368449.66, 4864715.92] ] ], "spatialReference": { "wkid": 102100, "latestWkid": 3857 } }


In both examples I am sending GeoEvent Server a JSON string representation of a geometry using the Esri Feature JSON format for a polygon geometry. Please refer to the ArcGIS Developers on-line documentation for the JSON spec and samples of Point, Multipoint, Polyline, and Polygon geometries.


Notice that I have chosen to separate the event attributes using a semicolon rather than a comma. Since both commas and literal quotation marks are part of the Esri Feature JSON syntax, using a semicolon as the field delimiter simplifies my simulation file considerably. It allows me to keep the required quotes and commas without having to escape them or quote the JSON as a string literal. I'm still free to quote the other attributes of the simulated event; in the examples above, I've quoted my TRACK_ID and TIME_START values, though I probably do not need to.
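If you want to generate such a simulation file programmatically, a standard JSON serializer keeps the geometry syntax intact. Here is a minimal sketch; the file name and sample values are illustrative only:

```python
import json

# Hypothetical sample records: track id, start time, and an Esri Feature
# JSON polygon geometry (names and coordinates are illustrative).
records = [
    ("AA-1234", "12/24/2015 23:59:59", {
        "rings": [[[-75.175, 39.991], [-75.173, 39.991], [-75.173, 39.99],
                   [-75.175, 39.99], [-75.175, 39.991]]],
        "spatialReference": {"wkid": 4326},
    }),
]

lines = []
for track_id, start_time, geometry in records:
    # Semicolon-delimited fields; json.dumps preserves the commas and
    # quotation marks the Feature JSON syntax requires, with no escaping.
    lines.append('"%s";"%s";%s' % (track_id, start_time, json.dumps(geometry)))

with open("simulation.csv", "w") as f:
    f.write("\n".join(lines) + "\n")
```

The GeoEvent Simulator can then load the resulting file like any other simulation file, provided the input is configured with the matching separators.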


Also notice that each geometry string includes the coordinate system associated with the coordinate values. The first event uses the WGS 1984 Geographic Coordinate System (its coordinate values are expressed in decimal degrees). The second event uses the Web Mercator Aux Sphere Projected Coordinate System (its coordinate values are expressed in meters).


Attached are illustrations of the GeoEvent Definition and the 'Receive Text from a TCP Socket' input configuration I used. The input is still responsible for adapting the delimited text it receives from the GeoEvent Simulator, so it needs to know which characters to expect for the message separator and attribute separator, and it relies on a GeoEvent Definition to tell it that the third attribute should be interpreted as a Geometry.


If you try this and run into problems, let me know. There may be limits on the raw number of bytes you can pass over a TCP socket or how many messages of a given size you can load into the GeoEvent Simulator and send each second. It is probably best to simplify string representations of your geometries when including JSON in simulated event data.


Hope you find this information useful -



GeoEvent Definition




GeoEvent input configuration



Customers, particularly in the Federal Government space, have reported issues launching the ArcGIS GeoEvent Extension for Server when the McAfee Enterprise Suite has been deployed in their environment.


The McAfee Enterprise Suite offers anti-virus and malware protection. One component of the suite, the On-Access Scanner, actively scans files used and/or accessed by a running program. This has been shown to prevent the proper installation and startup of the GeoEvent Extension. As highly compressed archive files (such as the JAR files used by the GeoEvent Extension) are scanned, access to the files is restricted, and multiple timeout failures can occur while waiting for the scans to complete.


Please refer to KB Article #44817 on the Esri Support site for additional information.

Understanding what a GeoEvent Definition is and how one is created is important for understanding the real-time analytics, filtering, and processing performed by a GeoEvent Service.


In the Making Features Come Alive exercise in Module 2 of the Introduction to GeoEvent tutorial, you are introduced to the Field Mapper Processor. One of your first experiences with GeoEvent Definitions will probably be when using a GeoEvent Service to add or update features in a feature service’s feature layer. Mapping a GeoEvent to a schema consistent with the feature layer before sending an event to an output to add/update features is a recommended best practice.


You should think of every GeoEvent as having an associated GeoEvent Definition. An input connector does not receive GeoEvents. The connector has a transport which receives a byte stream and hands it off to the connector's adapter. The adapter needs a GeoEvent Definition in order to instantiate the different data values and place them into different fields within a data structure. It's as if the adapter were sorting mail: this field is a String, that field is a Date, that other field is a Long integer, and so on. The structure of a GeoEvent is specified by its GeoEvent Definition.


Several input connectors available out-of-the-box with the GeoEvent Extension use adapters which let you specify whether the adapter should create a GeoEvent Definition for you. The adapter does this by taking the first event received, examining the data values, and making a best first guess at what the GeoEvent Definition should look like. If a GeoEvent Definition already exists with the name you specified, the adapter will use (not update) the existing GeoEvent Definition.
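The adapter's inference is internal to GeoEvent Server, but the idea of a "best first guess" can be illustrated with a toy sketch. This is not GeoEvent's actual logic; the type names simply mirror those you see in GeoEvent Manager, and the date format is an assumption:

```python
from datetime import datetime

# Illustrative only: guess a field type from a single sample value, in the
# spirit of an adapter's "best first guess" from the first event received.
def guess_field_type(value: str) -> str:
    for parse, name in ((int, "Long"), (float, "Double")):
        try:
            parse(value)
            return name
        except ValueError:
            pass
    try:
        # Assumed date format matching the simulation examples above.
        datetime.strptime(value, "%m/%d/%Y %H:%M:%S")
        return "Date"
    except ValueError:
        return "String"
```

A single sample is, of course, fallible: a field whose first value happens to be "42" will be guessed as a Long even if later events carry text, which is one more reason not to rely on auto-generated GeoEvent Definitions.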


GeoEvent Definitions have owners. When an adapter creates a GeoEvent Definition for you the owner will look something like: auto-generated/com.esri.ges.adapter.inbound.JSON


If you copy an existing GeoEvent Definition, import a GeoEvent Definition from a feature service, or click 'New GeoEvent Definition' to create one from scratch, the GeoEvent Definition owner will typically be displayed as either arcgis (10.2.x) or admin (10.3.x) depending on how you logged in to ArcGIS GeoEvent Manager.


Only two GeoEvent Definitions come defined out-of-the-box: incident and TrackGap. If you click to examine them, you will see their owners identified as com.esri.ges.processor/IncidentDetector/xxxx and com.esri.ges.processor/TrackGapDetector/xxxx respectively. These owner strings tell you which out-of-the-box processor created the GeoEvent Definitions; the xxxx identifies the GeoEvent Extension release.


Sometimes a processor will modify an event’s structure or schema by either adding or removing a field. When a processor does this, it will create a new GeoEvent Definition. The Field Calculator Processor is a good example; you can configure this processor to place its calculated value into a new field. The Field Enricher Processor is another example; it is taking data from a feature service's table and adding new fields to the event it is processing.


When a processor creates a GeoEvent Definition for you, you are typically required to enter the name the processor should use for the GeoEvent Definition it will create.


It can be important to understand when a processor creates what I have referred to as a managed GeoEvent Definition. A processor will not create a managed GeoEvent Definition until it actually receives a GeoEvent. You have probably discovered that you cannot, for example, configure a Field Mapper Processor which follows a Field Enricher until the Field Enricher has received and processed an event. Once the Field Enricher receives a GeoEvent, it will process the event and create the necessary GeoEvent Definition. Only then can you edit your GeoEvent Service to configure your Field Mapper.


Since processors own the GeoEvent Definitions they create, any change you make to the GeoEvent Service which incorporates the processor will trigger the processor to delete its managed GeoEvent Definition when you publish the GeoEvent Service to save your changes. For this reason, you should be aware of when GeoEvent Definitions are created, when they are deleted, and when they are available for reference by another processor or output connector.


A few recommendations to take away from this quick discussion:


  1. Every GeoEvent has an associated GeoEvent Definition. You should become familiar with when GeoEvent Definitions are created, which components create them, and why.
  2. Don’t leave an input connector configured to create GeoEvent Definitions for you. When configuring an input connector to create a GeoEvent Definition, it is a recommended best practice to copy the GeoEvent Definition generated by the input connector’s adapter and then reconfigure your input to use the copy of the GeoEvent Definition.
  3. Processors can create GeoEvent Definitions. The GeoEvent Definition associated with an event received by a processor is not necessarily going to be the GeoEvent Definition associated with the GeoEvent which comes out of a processor.
  4. Always use a Field Mapper Processor to prepare a GeoEvent’s schema to match a feature service’s feature layer when adding/updating features. You might think you understand the structure of an event you are sending to an output – but it is a best practice to use a Field Mapper and make the schema mapping explicit.
  5. Processors may delete GeoEvent Definitions they created. If you configured a Field Mapper “downstream” from a Field Calculator or Field Enricher, for example, the “upstream” processor may delete a GeoEvent Definition referenced by the Field Mapper. As long as you do not double-click to edit the Field Mapper’s configuration you can trust the processor “upstream” will re-create the GeoEvent Definition referenced by the Field Mapper as event data is received by the processor, before GeoEvents are received by the Field Mapper.
  6. When publishing a stream service, never refer to a GeoEvent Definition which has been generated for you by an input connector’s adapter or a GeoEvent Definition created for you by a processor. Make a copy of any such GeoEvent Definition(s) and publish the stream service to reference a GeoEvent Definition which you own.

GeoRSS is a standard way of tagging an RSS feed so that applications can use embedded location information in each post. Using the GeoEvent Extension for ArcGIS Server, you can monitor a GeoRSS feed in real time and use it to update the applications and common operational pictures used by your colleagues. Should you encounter a secured GeoRSS feed that you would like to use, there is no standard connector that allows you to pass credentials. However, it is possible to configure a connector (without programming) that will allow you to access a GeoRSS service secured with basic HTTP authentication.


You can use the GeoEvent Manager to combine out-of-the-box transports and adapters to configure a custom connector without resorting to the GeoEvent SDK or developing any custom code.


An excellent example is available on the Support Services Blog:


- RJ

This blog has been updated as part of a new series describing debugging techniques you can use when working to identify the root cause of an issue with a GeoEvent Server deployment or configuration. The original blog's text is included below, however, please consider the new blogs which can be accessed by clicking any of the quick links below:


How to debug the Add a Feature and Update a Feature Output Connectors is probably the question I have been asked most over the last couple of years working on the GeoEvent Extension development team, so it's appropriate that my inaugural blog on GeoNet address it.


The scenario:

  • An input appears to be successfully receiving and adapting the data from the stream and creating GeoEvents.
  • Filters and/or processors incorporated in a GeoEvent Service are handling the GeoEvents as expected.
  • The event count on an Add a Feature or Update a Feature Output Connector is incrementing, but no features are being added or updated in the targeted feature layer.


So, how do you start debugging to determine what the problem might be?


My advice is to enable DEBUG logging on the feature service outbound transport to see if we can capture the JSON request being sent to ArcGIS Server and the response GeoEvent Extension receives from ArcGIS Server.


I’ve attached an image below (FeatureServiceUpdate.png) of a karaf.log I created a while ago which shows the transactions taking place when an Output Connector performs an HTTP/POST to update features in a feature service. Don’t be concerned that the illustration identifies the 10.2.2 version – the concepts and workflows are the same for the GeoEvent 10.3 and 10.4 product releases when using a traditional RDBMS.


To enable DEBUG logging on a single component in GeoEvent Manager:

    • Navigate to the Logs page and click the Settings button.
    • Enter the logging component in the text field Logger and select the DEBUG log level.
    • Click Save.


In this case you want to log DEBUG messages for the com.esri.ges.transport.featureService.FeatureServiceOutboundTransport component only. Setting a logging level of DEBUG on the ROOT component is not recommended. Doing this will produce a very verbose set of log messages and can cause the Logs page in the GeoEvent Manager to 'hang' as it tries to refresh the rapidly updating logs.
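As an alternative to the Logs page in GeoEvent Manager, the same logger level can be set by editing the Karaf logging configuration file directly. The fragment below is a sketch for the pre-10.6 releases, which used log4j 1.x syntax; verify the exact property format against your own installation's org.ops4j.pax.logging.cfg before relying on it:

```properties
# ...\ArcGIS\Server\GeoEvent\etc\org.ops4j.pax.logging.cfg
# Set DEBUG on the feature service outbound transport only -- not on ROOT.
log4j.logger.com.esri.ges.transport.featureService.FeatureServiceOutboundTransport = DEBUG
```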


With DEBUG logging enabled for the specified component, the GeoEvent Extension will produce more detailed logs when the feature service outbound transport handles event data. The DEBUG logging statements will include the JSON being sent and the ArcGIS Server’s response. I prefer looking at the log in a text editor, rather than using the log manager in GeoEvent Manager. You can find the karaf.log in the default folder C:\Program Files\ArcGIS\Server\GeoEventProcessor\data\log.


In the attached FeatureServiceUpdate.png, find the two messages time stamped 2014-03-28 14:06:09,074. The “querying for missing track id XXXX” messages indicate the GeoEvent Extension has discovered that it has not cached any information on features with the TRACK_ID “SWA1568” or “SWA510”. The third message in the series shows the SQL WHERE clause used to query the …/FeatureServer/0/query REST endpoint to discover the necessary OBJECTID values.


The response from the ArcGIS Server includes a JSON array features[ ] with the geometry, OBJECTID, and unique identifier field (flightNumber in this example) for the features with the flight identifiers “SWA1568” and “SWA510”.  Notice that it took 175 milliseconds for the GeoEvent Extension to receive the ArcGIS Server response:

  • 14:06:09,250 - 14:06:09,075  =  175 ms


You might find this query/response latency information valuable when profiling / debugging your GeoEvent Services which are adding or updating features through a feature service.
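If you find yourself doing this sort of timestamp arithmetic repeatedly while profiling, a small helper can compute the gap between two karaf.log timestamps. This sketch assumes the default timestamp format shown in the log excerpts above:

```python
from datetime import datetime

# Compute the gap in whole milliseconds between two karaf.log timestamps
# formatted like "2014-03-28 14:06:09,074" (the default layout).
def log_latency_ms(start: str, end: str) -> int:
    fmt = "%Y-%m-%d %H:%M:%S,%f"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return round(delta.total_seconds() * 1000)
```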


Once the GeoEvent Extension has the necessary OBJECTID values for the features it wants to update, it posts a block of JSON to the …/FeatureServer/0/updateFeatures REST endpoint. ArcGIS Server responds with “success”:true 325 milliseconds later (577 - 252 = 325).
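To make the shape of those two requests concrete, here is a hedged sketch that only builds the request parameters. GeoEvent Server performs these calls internally; the flightNumber field name comes from the example log above, and sending the parameters with an HTTP client (plus any token-based security) is deliberately left out:

```python
import json

# Illustrative only: the two REST requests described above, reduced to
# parameter construction. Field names are assumptions from the example.

def build_missing_id_query(track_ids, id_field="flightNumber"):
    """Parameters for .../FeatureServer/0/query to recover OBJECTIDs."""
    where = "%s IN (%s)" % (id_field, ",".join("'%s'" % t for t in track_ids))
    return {"where": where, "outFields": "OBJECTID,%s" % id_field, "f": "json"}

def build_update_payload(updates):
    """Parameters for .../FeatureServer/0/updateFeatures; each update must
    carry the OBJECTID discovered by the query above."""
    features = [{"attributes": attrs, "geometry": geom}
                for attrs, geom in updates]
    return {"features": json.dumps(features), "f": "json"}
```

Replaying these requests by hand against your own feature service REST endpoint is a useful way to confirm whether a problem lies with GeoEvent Server or with the service itself.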


If you find too many JSON event records being included in the transactions, making the log file difficult to read, you can configure your Update a Feature Output Connector to limit ‘Maximum Features Per Transaction’ to 1 (the default is 500). This is obviously not something you would do in a production environment, but while debugging it can make the log file much easier to read.


If you find that the log is rolling over too frequently, you can edit settings in the following configuration file to allow the karaf.log to grow larger than 1MB and to keep more than 10 archival log files:

  • ...\Program Files\ArcGIS\Server\GeoEvent\etc\org.ops4j.pax.logging.cfg

GeoEvent Log Settings



The logging package changed with the 10.6.0 release to use version 2.x of Log4J.
Settings applicable for the log4j2.appender are illustrated in the attached ops4j.pax.logging.cfg.png file.


You can also edit the message formatting specified by the layout.ConversionPattern in this configuration file to reformat the messages being written to the karaf.log - more information on that can be found here:


Hope this information helps –