
GeoEvent


What is GeoEvent doing when it receives event data

When an inbound connector (input) receives an event payload, an adapter parses the event data and constructs a GeoEvent. It uses a GeoEvent Definition (which specifies the data structure of each event) to do this. An input passes the GeoEvents it constructs to a GeoEvent Service for processing. GeoEvents are passed from an input to a GeoEvent Service using a message bus. The message queuing implementation is internal to the product and abstracted from the user (the 10.3 / 10.3.1 releases use RabbitMQ). The GeoEvent Service applies event filtering and processing to the events it receives then delivers the processed GeoEvents to an outbound connector (output) for broadcast.

 

To add or update features in a geodatabase, a GeoEvent outbound connector leverages REST endpoints exposed by a published feature service. The features themselves are stored in an underlying feature class in a geodatabase, so adding and updating features involves both HTTP calls and back-end database transactions.
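As a rough illustration of what the outbound connector is doing under the hood, here is a minimal Python sketch of an add-features transaction against a feature service REST endpoint. The server URL, layer index, and attribute names are placeholders, and a real deployment would also need to handle tokens / authentication:

    import json
    import urllib.parse
    import urllib.request

    # Placeholder URL -- substitute your own server, service, and layer index.
    url = "https://myserver:6443/arcgis/rest/services/Vehicles/FeatureServer/0/addFeatures"

    # One feature in Esri Feature JSON format: attributes plus a point geometry.
    features = [{
        "attributes": {"TRACK_ID": "AA-1234", "SPEED": 42.0},
        "geometry": {"x": -75.175, "y": 39.991, "spatialReference": {"wkid": 4326}}
    }]

    body = urllib.parse.urlencode({"f": "json", "features": json.dumps(features)}).encode()
    with urllib.request.urlopen(url, data=body) as response:   # HTTP POST
        print(json.load(response))   # e.g. {"addResults": [{"objectId": 1, "success": true}]}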

 

When the GeoEvent extension was first released, event data had to be persisted in a geodatabase and exposed through a published feature service before a client could visualize the data on, say, a Web Map. At the 10.3 release of the product we introduced the concept of a Stream Service … but more on that in a little bit.

 

Event Throughput / System Performance

When considering system scalability and the GeoEvent extension, most folks are thinking about the number of event messages they need to process each second. A simple GeoEvent service with a single TCP/Text input sending data out to a TCP/Text output can process thousands of events per second. But such a simple service isn’t doing very much … it isn’t undertaking any computational analytics or spatial relationship comparisons (which can be CPU intensive), it isn’t transacting with a feature service to enrich event data with attributes from a related feature (which involves network latency), and it isn’t doing anything with GeoFences (which can use a lot of available RAM).

 

Improvements made to the product for the 10.3 release focused on maintaining a high level of GeoEvent throughput when real-time analytics such as spatial filtering (e.g. GeoEvent INSIDE GeoFence) were incorporated into a GeoEvent Service.

 

Just as important as the number of events you expect to receive each second, or the filtering/processing you plan on performing, is the overall size of each event in bytes. You can easily process 1000 events per second on a single server when the events are fairly small – say just a few simple attributes with a point geometry. If your events include several polygon geometries each with many dozens of vertices and/or complex hierarchical attribute structures with lists/groups of values, then each event's "size" will be much larger. More system resources will be required to handle the same number of events and your events per second throughput will be reduced.

 

Most folks want to use the event data to update features in a feature class through a published feature service. Transactional throughput with a feature service is an acknowledged bottleneck. When you configure a GeoEvent output to add/update features in a feature service you can expect to be limited to just a couple hundred events per second. There are a couple of parameters you can adjust on your GeoEvent outbound connector to throttle event output as you prototype your event transactions, but you typically don’t need to if you are below the 200 – 300 events per second threshold we recommend for updating features in a locally hosted feature service. Please note that event throughput is further restricted when updating a hosted feature service within an ArcGIS Online Organization account.

 

Regarding system performance, say you were to receive a burst of 1000 events on an input. If you knew, from testing you had conducted, that your local server and database could only update 240 events per second, then you could assume that GeoEvent would need to create a backlog of events and work to process events off the backlog. The 10.3 product release does a better job of this than the 10.2.2 release.

 

Say your data was not a single burst, but that you expected to receive a sustained 1000 events per second, and your output was still only handling 240 events per second. At 10.3 you can expect that the GeoEvent performance will degrade sharply as the backlog begins to grow. GeoEvent will work to clear the backlog, but it will continue to grow until a system resource becomes saturated and the system becomes unresponsive. This is the behavior we observed during the GeoEvent 10.3 Holistic Event in DC. It could be that you run out of system RAM, it could be that you saturate the network card. If you are not processing events as fast as you are receiving them you will have a problem.
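To make the arithmetic concrete (the per-event size below is an assumption chosen for illustration only):

    inbound rate:        1,000 events/sec (sustained)
    outbound rate:         240 events/sec (measured feature service limit)
    backlog growth:      1,000 - 240 = 760 events/sec
    after 10 minutes:    760 x 600 = 456,000 events queued
    at ~1 KB per event:  roughly 450 MB of RAM consumed by the backlog alone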

 

Stream Services

Stream Services, available with the GeoEvent 10.3 release, provide an alternative to updating features in a feature service. The key when thinking about Stream Services is to separate in your mind data visualization and data persistence. We have another tutorial, Stream Services in GeoEvent, which you might want to take a look at if streaming data out for visualization without persisting the data in a geodatabase is something that you are interested in pursuing.

 

Stream Services rely on WebSockets. The JSON broadcast from a Stream Service can be received by a custom JavaScript application; the Stream Service concept is supported by a developer API. They can also be added to Web Maps as feature layers to display the event data in near real-time.

 

The reason behind the development of Stream Services was that, without them, your only option for data visualization was to first persist your event data as features in a geodatabase feature class, then have your client application query the feature class through a published feature service. An ArcGIS Server instance running on a local GIS Server with a local RDBMS supports up to about 240 events per second. That's about one tenth of what a typical GeoEvent Service is able to process each second. Streaming the event data out as JSON was one way to provide data visualization for folks who didn't require data persistence.

 

WebSockets also allow a client to specify spatial and/or attribute filters which the server will honor. This allows network bandwidth to be preserved by limiting the data being passed over the socket connection. It is up to the client application to know what it can support and specify filters which limit the amount of data the client will receive.
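If you want to experiment with this from a script rather than a web application, a minimal sketch using Python's third-party websockets package follows. Be aware that the subscribe URL pattern and the filter message shown here are assumptions for illustration; consult the stream service documentation for your release for the exact endpoint and filter syntax:

    import asyncio
    import json
    import websockets   # third-party package: pip install websockets

    async def listen():
        # Placeholder URL -- substitute your server and stream service name.
        uri = "wss://myserver:6143/arcgis/ws/services/Flights/StreamServer/subscribe"
        async with websockets.connect(uri) as ws:
            # Hypothetical filter payload asking the server to send only events
            # inside a bounding box, so less data crosses the socket connection.
            await ws.send(json.dumps({
                "filter": {"geometry": {"xmin": -76, "ymin": 39, "xmax": -74, "ymax": 41}}
            }))
            async for message in ws:
                print(json.loads(message))   # one event per message, broadcast as JSON

    asyncio.run(listen())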

 

In the case of a custom JavaScript web application, if too much information is sent from GeoEvent to the browser, the web application will simply freeze and may crash. The socket connection will close when the client dies, and GeoEvent will continue broadcasting events through the Stream Service for receipt by other clients (currently connected or in the process of connecting).

 

System Resources / System Sizing

There’s actually quite a bit you need to consider when thinking about scalability.

 

  • Physical Server
    When leveraging virtualization, in my experience, it is most important that the physical server hosting the virtual machines adequately support the virtualization. The physical server is going to need considerable memory and a substantial processor if you plan on instantiating multiple VMs each with 4 CPU cores and 16GB of RAM. Also, the more physical servers you incorporate in your architecture, the more important the network card will become. At the 10.3 product release we support high-availability and high-velocity concepts by configuring an ArcGIS Server cluster with multiple servers and installing GeoEvent on each server in the cluster. We have a Clustering in GeoEvent tutorial which introduces these concepts.

 

  • RAM
    In a Holistic lab we conducted back in September, we discovered that the type of RAM is as important as the amount. It's not sufficient to consider a server as simply having 16GB of RAM. Premium DDR4 SDRAM will provide 2133 – 4266 MT/s (million transfers per second) whereas DDR3 RAM from just a couple of years ago will provide only 800 to 2133 MT/s. You may not have any control over the type of RAM in the physical server hosting the VMs being provided to you – but it matters.

    If you are importing GeoFences into the GeoEvent processor, know that these geometries are all maintained in memory. If you have thousands of complex GeoFences with many vertices, that is going to consume a lot of RAM. Significant improvements were made at the 10.3 product release to allow GeoEvent to better handle the spatial processing needed to determine the spatial relationship between an event’s geometry and a GeoFence, so event throughput is much better – but a high volume of spatial processing can consume significant CPU.

 

  • CPUs
    The number of CPU cores is generally important once you begin designing GeoEvent Services with actual processing – it is not as important when benchmarking raw event throughput. For example, a benchmark taking event data from a TCP socket and sending the data to a TCP socket doesn't require much CPU; a large amount of premium RAM is more important in this case. Projecting an event's geometry, enriching events, calculating values – these analytics will all place load on your CPU resource.

 

I wouldn't be surprised to learn that a physical server with only 4 cores and 32GB of premium RAM outperformed a virtual cluster of three VMs each with 4 cores and 16GB of RAM. The hosting server might be an older machine with DDR3 or DDR2 generation RAM. The hosting server might be supporting other virtualization. Network connections between physical machines might benefit from an upgrade.

 

Given the above, you can probably understand why we recommend that you dedicate an ArcGIS Server site for real-time event processing. You might have heard this referred to as a "silo'd" approach to your system architecture, in which one ArcGIS Server site is set up to handle your map service, feature service, geoprocessing, and map tile caching, with a second ArcGIS Server site set up for real-time event processing and stream service publication.

 

There are many factors you will need to consider when making system architecture and hardware purchasing decisions. Videos from technical workshops presented at our Developer Summit in Palm Springs as well as the International User Conference in San Diego are available on-line at videos.esri.com ... search for geoevent best practices to find a video which presents some of these considerations.

 

The above is, of course, specific to the ArcGIS GeoEvent Extension for Server. A much more comprehensive look at system architecture design strategies is provided by Dave Peters in his blog:  System Architecture Design Strategies Class Resources.

 

Hope this information is helpful -

RJ

The GeoEvent Extension team collaborated with FlightAware, a leading provider of worldwide aviation flight tracking data in over 50 countries across North America, Europe, and Oceania, to create the FlightAware Connector for GeoEvent. This connector uses the FlightAware Firehose API to receive the real-time data stream from the FlightAware data service directly into the GeoEvent Extension. Data received from the Firehose API includes flight positions (e.g. RADAR, ADS-B, Mode S).

 

By using a stream service, you can visualize flights worldwide, in real-time, in a web map as well as in your GIS applications. The connector supports all five data types: Flightplan, Departure, Arrival, Cancellation, and Position. Since the FlightAware Firehose data stream sends high-volume, high-frequency data, make sure to review the prerequisites as well as the GeoEvent Extension tuning steps contained in the supporting documentation to ensure success with the connector. This connector can be used with GeoEvent Extension versions 10.3 and 10.3.1.

 

For more information on how Esri and FlightAware are partnering, see this recent press release. For additional connectors and other resources available for the GeoEvent Extension, visit the GeoEvent Gallery.

 


Another recurring question:

 

I've configured a 'Poll an ArcGIS Server for Features' input to 'Get Incremental Updates', is there a way to prevent the input from polling all of the features in a feature class when GeoEvent Server is restarted?

 

The short answer is: No.  When GeoEvent Server is restarted (or the server on which it is running is rebooted), inputs which use the out-of-the-box FeatureService transport to poll an ArcGIS Server map service (or feature service) lose whatever value they've cached which enables them to query for features which are "new" relative to the last poll conducted by the input.

 

The capability to 'Get Incremental Updates' is unique to the 'Poll an ArcGIS Server for Features' input connector and should not be confused with the 'Receive New Data Only' parameter, exposed by the HTTP transport, which requires that event data include the HTTP "Last-Modified" header. (Refer to comments in the thread Re: Receive RSS Inbound Connector.)

 

The issue we're exploring here deals only with the 'Poll an ArcGIS Server for Features' input connector -- or a custom input you may have configured which uses the FeatureService transport to poll an ArcGIS Server map / feature service.

 

An input configured to poll a map / feature service and retrieve only incremental feature updates maintains an in-memory cache. The value in this cache depends on whether your 'Method to Identify Incremental Updates' is ObjectID or Timestamp. In either case the input incorporates the largest value observed from its last poll into a WHERE clause so that only features whose OID (or date/time) is greater than the greatest value (from the last query) are returned (by the next query).
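A minimal Python sketch of that polling pattern (using the feature service /query REST endpoint; the URL is a placeholder) may help make the role of the cache concrete. Note that the cache here is just an in-memory variable, which is exactly why a process restart loses it:

    import json
    import urllib.parse
    import urllib.request

    url = "https://myserver:6443/arcgis/rest/services/Vehicles/FeatureServer/0/query"
    last_oid = 0   # in-memory cache: largest OBJECTID observed so far

    def poll_once():
        global last_oid
        params = urllib.parse.urlencode({
            "f": "json",
            "where": f"OBJECTID > {last_oid}",   # only features newer than the cached value
            "outFields": "*",
        })
        with urllib.request.urlopen(f"{url}?{params}") as response:
            features = json.load(response).get("features", [])
        for feature in features:
            last_oid = max(last_oid, feature["attributes"]["OBJECTID"])
        return features   # after a restart last_oid resets to 0 and everything is re-polled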

 

If you stop the input the cache is maintained, so that when the input is restarted it will be able to poll for features whose specified attribute value is greater than the value in the cache. If you stop the ArcGIS GeoEvent Server Windows service, or reboot the server, the cache is destroyed and the input has no way of knowing which features were polled previously. The next poll conducted by the input will retrieve all of the items in the map / feature service.

 

This becomes painfully obvious when one of the notification outputs (e.g. 'Send a Text Message' or 'Send an Email') is included in a GeoEvent Service which polls a map / feature service for event data. When the GIS Server is rebooted, an e-mail recipient can potentially receive hundreds of messages if event data polled by the input satisfy filtering and/or processing criteria designed into a GeoEvent Service.

 

The motivation behind this behavior is that a cache persisted within a system file on disk could be difficult to find, might only be editable by a user with administrative credentials, and would unnecessarily involve file I/O in a potentially high-volume event processing scenario. Locating and deleting a system file-based cache was deemed more burdensome than requiring that GeoEvent Server outputs be stopped in order to prevent unwanted notifications from being sent. Basically, this behavior is by design.

 

As a best practice, if you find you are frequently restarting GeoEvent Server (or having to reboot your server), make sure to stop all notification outputs, or any outputs you do not want to process events based on "old" features, when GeoEvent Server is restarted.

 

You can also employ a strategy of writing notification messages to a secondary feature layer, rather than directly to a notification output. The secondary feature layer acts as a notification message cache. You could then design a second GeoEvent Service (or extend your original GeoEvent Service) to poll this "notification message cache" and as event messages are sent to a notification output, use an 'Update a Feature' output to flag the notification message as having been sent. This will enable a filter to discard messages which have been sent and avoid sending repeat notifications.

 

If you have other approaches you have developed to deal with this particular behavior, your comments are welcome.

 

As always, I hope this information helps.

 

- RJ

Morakot has been keeping a blog under his GeoNet user-id.

 

I'm going to reference a recent blog of his, and probably begin re-posting his content here, in the GeoEvent product's blog.

 

How to Create Temporal Filter in GeoEvent

 

- RJ

Hey All -

 

 

Normally the GeoEvent Simulator loads comma-separated text from a simulation file and allows you to send this data to a GeoEvent Server tcp-text-in input to simulate a real-time data feed. Sample data in the tutorials either represents a point geometry as a quoted pair of X,Y coordinate values (e.g. "-75.175,39.991") ... or provides the coordinates of a point location as separate X and Y attribute values which the input can take and use to construct a geometry.

 

But what if you want to simulate a dynamic polygon, such as a series of forecast areas affected by a storm?

 

Here's a trick you might find handy:

  1. You do not have to use a comma to delineate your event attributes in a simulation file.
  2. Valid Esri Feature JSON can be placed in a simulation file.
  3. A GeoEvent Server input can be configured to interpret the JSON as a geometry.

 

Consider the following two lines of simulation input:

 

"AA-1234";"12/24/2015 23:59:59";{ "rings": [ [ [-75.175, 39.991], [-75.173, 39.991], [-75.173, 39.99], [-75.175, 39.991], [-75.175, 39.991] ] ], "spatialReference": { "wkid": 4326 } }

 

"BB-7890";"02/15/2015 12:34:56";{ "rings": [ [ [-8368449.66, 4864715.92], [-8368263.15, 4864676.62], [-8368272.04, 4864618.25], [-8368459.87, 4864645.20], [-8368449.66, 4864715.92] ] ], "spatialReference": { "wkid": 102100, "latestWkid": 3857 } }

 

In both examples I am sending GeoEvent Server a JSON string representation of a geometry using the Esri Feature JSON format for a polygon geometry. Please refer to the ArcGIS Developers on-line documentation for the JSON spec and samples of Point, Multipoint, Polyline, and Polygon geometries.

 

Notice that I have chosen to separate the event attributes using a semi-colon rather than a comma. Since both commas and literal quotation marks are part of the Esri Feature JSON syntax, using a semi-colon for field delineation simplifies my simulation file considerably. It allows me to keep the required quotes and commas without having to escape or quote them as string literals. I'm free to quote the other event attributes of the simulated event. In the examples above, I've quoted my TRACK_ID and my TIME_START values, though I probably do not need to.

 

Also notice that each geometry string includes the coordinate system associated with the coordinate values. The first event uses the WGS 1984 Geographic Coordinate System (its coordinate values are expressed in decimal degrees). The second event uses the Web Mercator Aux Sphere Projected Coordinate System (its coordinate values are expressed in meters).

 

Attached are illustrations of the GeoEvent Definition I configured my 'Receive Text from a TCP Socket' input to use. The input is still responsible for adapting the delimited text it receives from the GeoEvent Simulator, so it needs to know what characters to expect for the message separator and attribute separator ... and it relies on an event definition to tell it that the third attribute should be interpreted as a Geometry.

 

If you try this and run into problems, let me know. There may be limits on the raw number of bytes you can pass over a TCP socket or how many messages of a given size you can load into the GeoEvent Simulator and send each second. It is probably best to simplify string representations of your geometries when including JSON in simulated event data.
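If you would rather script the feed than use the GeoEvent Simulator, a minimal Python sender is sketched below. The host name, port, and file name are placeholders; use whatever port you configured on your 'Receive Text from a TCP Socket' input:

    import socket
    import time

    HOST, PORT = "myserver", 5565   # placeholders -- the port configured on your TCP input

    with socket.create_connection((HOST, PORT)) as sock:
        with open("storm_polygons.txt", encoding="utf-8") as simulation_file:
            for line in simulation_file:
                sock.sendall(line.encode("utf-8"))   # newline acts as the message separator
                time.sleep(1.0)                      # pace the feed: one event per second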

 

Hope you find this information useful -

RJ

 

GeoEvent Definition (see attached Capture1.png)

 

 

GeoEvent input configuration (see attached Capture2.png)

Customers, particularly in the Federal Government space, have reported issues launching the ArcGIS GeoEvent Extension for Server when the McAfee Enterprise Suite has been deployed in their environment.

 

The McAfee Enterprise Suite offers anti-virus and malware protection. One component of the suite, the On-Access Scanner, actively scans files used and/or accessed by a running program. This has been shown to prevent the proper installation and startup of the GeoEvent Extension. As highly compressed archive files (such as the JAR files utilized by the GeoEvent Extension) are scanned, access to the files is restricted, and multiple timeout failures can occur while waiting for the scans to complete.

 

Please refer to KB Article #44817 on the Esri Support site for additional information.

Understanding what GeoEvent Definitions are and how they are created is important for understanding the real-time analytics, filtering, and processing being performed by a GeoEvent Service.

 

In the Making Features Come Alive exercise in Module 2 of the Introduction to GeoEvent tutorial, you are introduced to the Field Mapper Processor. One of your first experiences with GeoEvent Definitions will probably be when using a GeoEvent Service to add or update features in a feature service’s feature layer. Mapping a GeoEvent to a schema consistent with the feature layer before sending an event to an output to add/update features is a recommended best practice.

 

You should think of every GeoEvent as having an associated GeoEvent Definition. An input connector does not receive GeoEvents. The connector has a transport which receives a byte stream and hands it off to the connector’s adapter. The adapter needs a GeoEvent Definition in order to instantiate different data values and place the data into different fields within a data structure. It’s like the adapter is sorting mail -- this field is a String, that field is a Date, that other field is a Long integer, … etc. The structure of a GeoEvent is specified by a GeoEvent Definition.
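As a rough analogy (this is illustrative Python, not product code), you can think of the adapter's job like this, where the hypothetical definition maps each field name to a type converter:

    from datetime import datetime

    # Hypothetical GeoEvent Definition: field name -> converter. "This field is
    # a String, that field is a Date, that other field is a Long ..."
    DEFINITION = {
        "TRACK_ID": str,
        "TIME_START": lambda s: datetime.strptime(s, "%m/%d/%Y %H:%M:%S"),
        "ALTITUDE": int,
    }

    def adapt(raw_text):
        """Sort the delimited text's values into typed fields per the definition."""
        values = raw_text.split(",")
        return {name: convert(value)
                for (name, convert), value in zip(DEFINITION.items(), values)}

    event = adapt("AA-1234,12/24/2015 23:59:59,35000")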

 

Several input connectors available out-of-the-box with the GeoEvent Extension use adapters which allow you to determine whether the adapter should create a GeoEvent Definition. The adapter does this by taking the first event received, examining the data values, and then making a best first guess at what the GeoEvent Definition should look like. If a GeoEvent Definition already exists with the name you specified, the adapter will use (not update) this existing GeoEvent Definition.

 

GeoEvent Definitions have owners. When an adapter creates a GeoEvent Definition for you the owner will look something like: auto-generated/com.esri.ges.adapter.inbound.JSON

 

If you copy an existing GeoEvent Definition, import a GeoEvent Definition from a feature service, or click 'New GeoEvent Definition' to create one from scratch, the GeoEvent Definition owner will typically be displayed as either arcgis (10.2.x) or admin (10.3.x) depending on how you logged in to ArcGIS GeoEvent Manager.

 

Only two GeoEvent Definitions come defined out-of-the-box: incident and TrackGap. If you click to examine the incident GeoEvent Definition, you will see its owner identified as com.esri.ges.processor/IncidentDetector/xxxx; the owner of the TrackGap GeoEvent Definition is com.esri.ges.processor/TrackGapDetector/xxxx. These owner strings tell you which out-of-the-box processor created the GeoEvent Definitions; the xxxx identifies the GeoEvent Extension release.

 

Sometimes a processor will modify an event’s structure or schema by either adding or removing a field. When a processor does this, it will create a new GeoEvent Definition. The Field Calculator Processor is a good example; you can configure this processor to place its calculated value into a new field. The Field Enricher Processor is another example; it is taking data from a feature service's table and adding new fields to the event it is processing.

 

When a processor creates a GeoEvent Definition for you, you are typically required to enter the name the processor should use for the GeoEvent Definition it will create.

 

It can be important to understand when a processor creates what I have referred to as a managed GeoEvent Definition. A processor will not create a managed GeoEvent Definition until it actually receives a GeoEvent. You have probably discovered that you cannot, for example, configure a Field Mapper Processor which follows a Field Enricher until the Field Enricher has received and processed an event. Once the Field Enricher receives a GeoEvent, it will process the event and create the necessary GeoEvent Definition. Only then can you edit your GeoEvent Service to configure your Field Mapper.

 

Since processors own the GeoEvent Definitions they create, any change you make to the GeoEvent Service which incorporates the processor will trigger the processor to delete its managed GeoEvent Definition when you publish the GeoEvent Service to save your changes. For this reason, you should be aware of when GeoEvent Definitions are created, when they are deleted, and when they are available for reference by another processor or output connector.

 

A few recommendations to take away from this quick discussion:

 

  1. Every GeoEvent has an associated GeoEvent Definition. You should become familiar with when GeoEvent Definitions are created, which components create them, and why.
  2. Don’t leave an input connector configured to create GeoEvent Definitions for you. When configuring an input connector to create a GeoEvent Definition, it is a recommended best practice to copy the GeoEvent Definition generated by the input connector’s adapter and then reconfigure your input to use the copy of the GeoEvent Definition.
  3. Processors can create GeoEvent Definitions. The GeoEvent Definition associated with an event received by a processor is not necessarily going to be the GeoEvent Definition associated with the GeoEvent which comes out of a processor.
  4. Always use a Field Mapper Processor to prepare a GeoEvent’s schema to match a feature service’s feature layer when adding/updating features. You might think you understand the structure of an event you are sending to an output – but it is a best practice to use a Field Mapper and make the schema mapping explicit.
  5. Processors may delete GeoEvent Definitions they created. If you configured a Field Mapper “downstream” from a Field Calculator or Field Enricher, for example, the “upstream” processor may delete a GeoEvent Definition referenced by the Field Mapper. As long as you do not double-click to edit the Field Mapper’s configuration you can trust the processor “upstream” will re-create the GeoEvent Definition referenced by the Field Mapper as event data is received by the processor, before GeoEvents are received by the Field Mapper.
  6. When publishing a stream service, never refer to a GeoEvent Definition which has been generated for you by an input connector’s adapter or a GeoEvent Definition created for you by a processor. Make a copy of any such GeoEvent Definition(s) and publish the stream service to reference a GeoEvent Definition which you own.

GeoRSS is a standard way of tagging an RSS feed so that applications can use embedded location information in each post. Using the GeoEvent Extension for ArcGIS Server, you can monitor a GeoRSS feed in real time and use it to update the applications and common operational pictures used by your colleagues. Should you encounter a secured GeoRSS feed that you would like to use, there is no standard connector that allows you to pass credentials. However, it is possible to configure a connector (without programming) that will allow you to access a GeoRSS service secured with basic HTTP authentication.

 

You can use the GeoEvent Manager to combine out-of-the-box transports and adapters to configure a custom connector without resorting to the GeoEvent SDK or developing any custom code.

 

An excellent example is available on the Support Services Blog:

 

- RJ

This blog has been updated as part of a new series describing debugging techniques you can use when working to identify the root cause of an issue with a GeoEvent Server deployment or configuration. The original blog's text is included below; however, please consider the new blogs, which can be accessed by clicking any of the quick links below:

 


How to debug the Add a Feature and Update a Feature Output Connectors is probably the question I have been asked the most over the last couple of years working on the GeoEvent Extension development team – so it’s appropriate that my inaugural blog on GeoNet address it.

 

The scenario:

  • An input appears to be successfully receiving and adapting the data from the stream and creating GeoEvents.
  • The filters and/or processors incorporated in a GeoEvent Service are handling the GeoEvents as expected.
  • The event count on an Add a Feature or Update a Feature Output Connector is incrementing, but no features are being added or updated in the targeted feature layer.

 

So, how do you start debugging to determine what the problem might be?

 

My advice is to enable DEBUG logging on the feature service outbound transport to see if we can capture the JSON request being sent to ArcGIS Server and the response the GeoEvent Extension receives from ArcGIS Server.

 

I’ve attached an image below (FeatureServiceUpdate.png) of a karaf.log I created a while ago which shows the transactions taking place when an Output Connector performs an HTTP/POST to update features in a feature service. Don’t be concerned that the illustration identifies the 10.2.2 version – the concepts and workflows are the same for the GeoEvent 10.3 and 10.4 product releases when using a traditional RDBMS.

 

To enable DEBUG logging on a single component in GeoEvent Manager:

    • Navigate to the Logs page and click the Settings button.
    • Enter the logging component in the text field Logger and select the DEBUG log level.
    • Click Save.

 

In this case you want to log DEBUG messages for the com.esri.ges.transport.featureService.FeatureServiceOutboundTransport component only. Setting a logging level of DEBUG on the ROOT component is not recommended. Doing this will produce a very verbose set of log messages and can cause the Logs page in the GeoEvent Manager to 'hang' as it tries to refresh the rapidly updating logs.

 

With DEBUG logging enabled for the specified component, the GeoEvent Extension will produce more detailed logs when the feature service outbound transport handles event data. The DEBUG logging statements will include the JSON being sent and the ArcGIS Server’s response. I prefer looking at the log in a text editor, rather than using the log manager in GeoEvent Manager. You can find the karaf.log in the default folder C:\Program Files\ArcGIS\Server\GeoEventProcessor\data\log.

 

In the FeatureServiceUpdate.png that is attached, find the two messages timestamped 2014-03-28 14:06:09,074. The “querying for missing track id XXXX” messages indicate the GeoEvent Extension has discovered that it has not cached any information on features with the TRACK_ID “SWA1568” or “SWA510”. The third message in the series shows the SQL WHERE clause used to query the …/FeatureServer/0/query REST endpoint to discover the necessary OBJECTID values.

 

The response from the ArcGIS Server includes a JSON array features[ ] with the geometry, OBJECTID, and unique identifier field (flightNumber in this example) for the features with the flight identifiers “SWA1568” and “SWA510”. Notice that it took 175 milliseconds for the GeoEvent Extension to receive the ArcGIS Server response:

  • 14:06:09,250 - 14:06:09,075 = 175 ms

 

You might find this query/response latency information valuable when profiling / debugging your GeoEvent Services which are adding or updating features through a feature service.

 

Once the GeoEvent Extension has the necessary OBJECTID values for the features it wants to update, it posts a block of JSON to the …/FeatureServer/0/updateFeatures REST endpoint. ArcGIS Server responds with “success”:true 325 milliseconds later (577 - 252 = 325).
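For reference, the block of JSON posted to the updateFeatures endpoint is an array of Esri Feature JSON objects keyed by OBJECTID. A minimal hand-built example (the values are illustrative, not taken from the attached log):

    [ { "attributes": { "OBJECTID": 101, "flightNumber": "SWA1568" }, "geometry": { "x": -77.04, "y": 38.90, "spatialReference": { "wkid": 4326 } } } ]

and the corresponding response:

    { "updateResults": [ { "objectId": 101, "success": true } ] }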

 

If you find too many JSON event records being included in the transactions, making the log file difficult to read, you can try configuring your Update a Feature Output Connector to specify that the ‘Maximum Features Per Transaction’ should be limited to 1 (the default is 500). This is obviously not something you would do in a production environment, but while debugging it can make the log file much easier to read.

 

If you find that the log is rolling over too frequently, you can edit settings in the following configuration file to allow the karaf.log to grow larger than 1MB and to keep more than 10 archival log files:

  • ...\Program Files\ArcGIS\Server\GeoEvent\etc\org.ops4j.pax.logging.cfg
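For example, in releases using Log4J 1.x the relevant pax-logging appender properties look roughly like the following; treat the property names in your release's copy of the file as the authority:

    # allow the karaf.log to grow larger before rolling over
    log4j.appender.out.maxFileSize=1MB
    # number of archival log files to keep
    log4j.appender.out.maxBackupIndex=10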

(See the attached GeoEvent Log Settings screen capture.)


 

The logging package changed with the 10.6.0 release to use version 2.x of Log4J.
Settings applicable for the log4j2.appender are illustrated in the attached ops4j.pax.logging.cfg.png file.

 

You can also edit the message formatting specified by the layout.ConversionPattern in this configuration file to reformat the messages being written to the karaf.log - more information on that can be found here: http://www.codejava.net/coding/common-conversion-patterns-for-log4js-patternlayout
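For instance, a conversion pattern along these lines (the stock Karaf default is similar) writes a timestamp, priority, thread, and logger category ahead of each message:

    log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %m%n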

 

Hope this information helps –

RJ