POST
|
@MaxBöcke -- I've seen what you describe (and illustrate in your attached PNG) before. In my experience, when the GeoEvent Manager web application is not listing feature services I know were created to provide access to data in a spatiotemporal data store's Data Source, and I try to browse to the feature service's page in the ArcGIS REST Services Directory, I see something like an HTTP/500 error returned. The feature service (for whatever reason) has been corrupted or its endpoint cannot be reached at REST, so queries made by web clients like GeoEvent Manager fail when they attempt to discover and list the web services. I can confirm that the map and feature services associated with outputs such as Add a Feature to a Spatiotemporal Big Data Store or Update a Feature in a Spatiotemporal Big Data Store are only used by web clients that need to query and retrieve a feature record set for display (for example) on a web map. GeoEvent Server interrogates an Enterprise portal (using a registered server connection) to discover whether or not the portal's hosting server has a registered spatiotemporal data store and to obtain the Elasticsearch credentials needed to make a direct connection. You can expect, then, that GeoEvent Server will be able to add / update feature records in a spatiotemporal data store even if you create a Data Source and leave the checkboxes used to publish a map and/or feature service unchecked. GeoEvent Server is not using the web services for data access. You could try deleting the Enterprise portal hosted content items (e.g. the map and/or feature service listed in the Enterprise portal content manager web application) and then use GeoEvent Manager to create a new map / feature service for the existing spatiotemporal Data Source. That should preserve any data and simply create new web services for client applications that need them to access the data. If this doesn't work I would encourage you to open an incident with Esri Technical Support so that they can work through the issue with you.
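When a feature service listing goes missing like this, it can help to probe the service's REST endpoint from a script. Below is a minimal sketch (the function names and classification strings are my own, not part of any Esri SDK) that checks both the HTTP status code and the JSON body, since the ArcGIS REST API can return HTTP 200 with an 'error' object in the body:

```python
import json
import urllib.error
import urllib.request

def classify_response(status_code, body):
    """Classify a service endpoint response from its status code and JSON body."""
    if status_code >= 400:
        return "unreachable"                      # e.g. the HTTP/500 described above
    if isinstance(body, dict) and "error" in body:
        return "service error %s" % body["error"].get("code")
    return "ok"

def probe_service(url, timeout=10):
    """Fetch <service url>?f=json and classify the result (hypothetical helper)."""
    try:
        with urllib.request.urlopen(url + "?f=json", timeout=timeout) as resp:
            return classify_response(resp.status, json.loads(resp.read().decode("utf-8")))
    except urllib.error.HTTPError as e:
        return classify_response(e.code, {})
```

Pointing probe_service at the feature service URL shown in the ArcGIS REST Services Directory and getting back "unreachable" or "service error ..." would explain why GeoEvent Manager cannot discover and list the service.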
01-08-2024
09:55 AM
|
0
|
0
|
177
|
POST
|
@Moi_Nccncc -- Data from sensors, such as the GPS units supporting automated vehicle location (or AVL) solutions, is not generally held in memory by GeoEvent Server. GeoEvent Server was designed around an assumption of frequent, reliable, periodic observational data reports from sensors. Whether that is data being sent to GeoEvent Server via HTTP/POST requests or GeoEvent Server polling a REST API via HTTP/GET requests, the processors and filters in GeoEvent Server can generally only act upon data that is actively being received. Vehicles which cease to send location records generally cannot be included in any sort of real-time notification because there is no input to process. You might think to try configuring an Incident Detector to do what you want to do. An Incident Detector is used when you want to monitor the duration of some condition, such as a tracked asset's location detected "inside" an area of interest (or geofence). An Incident Detector will emit an event record whose status is 'Ended' if the processor observes at least one event record satisfying its opening condition and does not receive an event record it can use to update its status before the processor's configured expiry time. The default expiry time is 300 seconds. So, it is possible to receive exactly one event record whose geometry is inside a given geofence, route that event record through an Incident Detector to create a new incident (whose status is 'Started'), and then stop receiving any additional input from tracked assets (e.g. vehicles). The Incident Detector by default will update its incident's duration until it sees 300 seconds have passed without receiving any additional data and will signal this by emitting an event record with the vehicle's TRACK_ID and a status 'Ended'. The administratively ended incident will have a duration 300,000 milliseconds greater than the '0' duration recorded when the incident was opened (when the vehicle was first detected 'Inside' the geofence). 
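To make the expiry behavior concrete, here is a toy model in plain Python (not Esri code; the class and field names are illustrative) of an incident that is opened by a single event record and then administratively ended once the 300 second expiry passes with no further input:

```python
class IncidentTracker:
    """Toy model of the Incident Detector expiry behavior described above."""

    def __init__(self, expiry_seconds=300):
        self.expiry = expiry_seconds
        self.open_incidents = {}  # track_id -> (opened_at, last_seen)

    def on_event(self, track_id, inside_geofence, now):
        """Open (or refresh) an incident when the opening condition is satisfied."""
        if inside_geofence:
            opened_at, _ = self.open_incidents.get(track_id, (now, now))
            self.open_incidents[track_id] = (opened_at, now)

    def expire(self, now):
        """Emit an 'Ended' record for any incident with no update inside the expiry window."""
        ended = []
        for track_id, (opened_at, last_seen) in list(self.open_incidents.items()):
            if now - last_seen >= self.expiry:
                ended.append({"TRACK_ID": track_id, "status": "Ended",
                              "duration_ms": int((now - opened_at) * 1000)})
                del self.open_incidents[track_id]
        return ended
```

A vehicle seen once at t=0 and never again produces an 'Ended' record with a duration of 300,000 ms once 300 seconds have elapsed, matching the behavior described above.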
This really is not how an Incident Detector processor was intended to be used, however. The processor expects to process the event records it receives and update its incident's status. It will emit an event record when it "gives up" having not seen any new data within its expiry window, but this is part of its error handling, not nominal use. You must also consider that the processor is configured with both an "opening condition" and a "closing condition". The "closing condition" is only checked if the "opening condition" is false. It is not obvious, but given a number of non-overlapping, adjacent geofences, a single event record routed to an Incident Detector can satisfy either its opening condition or its closing condition, but not both. In other words, you cannot use an Incident Detector to process a single AVL data record and detect both that the vehicle's location has "exited" one geofence and that it has "entered" the adjacent geofence. This nuance is one example of the conditions you have to think about when using processors to evaluate spatial conditions such as "enter" and "exit" while simultaneously trying to monitor conditions used to "open" and "close" incidents. It is far better to record observations such as "entry" and "exit", and the date/time these observations are made, in a geodatabase rather than trying to hold the information in an in-memory cache and evaluate future conditions based on data cached from prior observations. There are lots of reasons why GeoEvent Server is not designed to cache and hold onto data from event records it has received. I'll try to mock up an approach to show you what I'm thinking, but you will want to consider a solution which relies on an RDBMS trigger configured to run every couple of minutes to look for data records which have an "entry_time" but do not have an "exit_time", and identify those records whose entry date/time is at least 10 minutes before "now". 
Have the RDBMS trigger flag the data record as one which needs to have a notification sent. You can then use GeoEvent Server to query the flagged records. You can configure the GeoEvent Server input to delete the flagged records from the database as they are queried, so e-mail or SMS notifications are only sent once.
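As a sketch of the flagging logic (using an in-memory SQLite table as a stand-in for your RDBMS, with hypothetical table and column names), the periodic job would look something like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE geofence_visits (
                    track_id   TEXT,
                    entry_time INTEGER,   -- epoch seconds
                    exit_time  INTEGER,   -- NULL until the vehicle exits
                    notify     INTEGER DEFAULT 0)""")

now = 1_700_000_000  # a fixed 'now' for the example
# veh1 entered 15 minutes ago and never exited; veh2 entered only 2 minutes ago.
conn.execute("INSERT INTO geofence_visits (track_id, entry_time) VALUES (?, ?)",
             ("veh1", now - 15 * 60))
conn.execute("INSERT INTO geofence_visits (track_id, entry_time) VALUES (?, ?)",
             ("veh2", now - 2 * 60))

# The periodic job: flag records with an entry_time, no exit_time, and an
# entry at least 10 minutes before 'now'.
conn.execute("""UPDATE geofence_visits SET notify = 1
                WHERE exit_time IS NULL AND entry_time <= ?""", (now - 10 * 60,))

flagged = conn.execute(
    "SELECT track_id FROM geofence_visits WHERE notify = 1").fetchall()
```

A GeoEvent Server input would then poll for rows where notify = 1 (deleting them as they are read) so each notification goes out exactly once.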
01-05-2024
05:42 PM
|
0
|
1
|
152
|
POST
|
@Nikhil_Kommidi -- It would be good to know which release of GeoEvent Server you are running and whether it is running on a stand-alone ArcGIS Server (not federated with an Enterprise portal) or, if an Enterprise portal is part of the architecture, what role the ArcGIS Server used to run GeoEvent Server plays. I'm confident that the ArcGIS Server platform services you mention are not part of the problem. The ArcGIS Server 'Synchronization_Service' used to run an instance of Zookeeper is only referenced by GeoEvent Server when initializing its configuration following a fresh product installation (or following an administrative reset). The ArcGIS Server platform service is checked only to see if there is an old GeoEvent Server configuration in there which might be imported and upgraded to your current release. After that, GeoEvent Server uses an instance of Zookeeper managed by the GeoEvent Gateway and does not make further use of the ArcGIS Server platform service. ArcGIS Server is constantly communicating with the GeoEvent Gateway. If the GeoEvent Gateway service is stopped (or has crashed), the GeoEvent Server service needs to be stopped. GeoEvent Server cannot run without its Gateway managing the Apache Kafka message broker and Zookeeper distributed configuration store. If you are looking at the .../GeoEvent/data/log/wrapper.log and see that the GeoEvent Server's JVM has been shut down, that means GeoEvent Server is not running (regardless of the state of the GeoEvent Server service shown in the Windows MMC Services console). If the JVM is not running, your GeoEvent Server is not receiving, adapting, or processing real-time data. You also won't be able to launch the GeoEvent Manager web application. It is likely that when you try to start the GeoEvent Gateway it attempts to coordinate its Kafka topics with the Zookeeper configuration. When that fails, the GeoEvent Gateway cannot initialize and shuts down. 
The Kafka and Zookeeper managed by GeoEvent Gateway are very tightly coupled. Kafka cannot do its job without Zookeeper (and vice versa). If you see indications that a file beneath C:\ProgramData\Esri\GeoEvent-Gateway\zookeeper-data already exists and this is interfering with the GeoEvent Gateway initializing either its Kafka or Zookeeper ... I can only guess that something has corrupted the Gateway's runtime files. I can offer the advice that creating system restore points using a VM snapshot (for example) is not a reliable way to back up your GeoEvent Server. A snapshot of a VM is not "application consistent" for Esri software. GeoEvent Server in particular may fail to restart following a revert to a VM snapshot if real-time data was actively being ingested, adapted, processed, and/or disseminated when the VM snapshot image was taken. When running normally the GeoEvent Gateway is actively writing data to disk -- a VM snapshot may capture an inconsistent replica or internal state which causes one or more Kafka topics to become corrupted. I do not like recommending an administrative reset as it is the most destructive remedial step you can perform, particularly prior to the 11.1 release when the reset obliterates any Inputs, Outputs, GeoEvent Services, GeoEvent Definitions, and other configurable elements you have created using GeoEvent Manager. However, if files which exist beneath C:\ProgramData\Esri\GeoEvent-Gateway are interfering with stopping and restarting the GeoEvent Gateway, an administrative reset really is your only option.
01-05-2024
03:06 PM
|
0
|
0
|
241
|
POST
|
@AdamRepsher_BentEar -- Testing against 10.9.1 (Patch 4), my advice would be to configure your GeoFence Synchronization Rule with the Replace All GeoFences in Category checkbox unchecked. Changes made to address BUG-000089545 were not included in any 10.9.1 patch, nor in any of the earlier release patches. Changes to the geofence synchronization logic for this bug were first incorporated into the 11.1 release. A couple of things to pay particular attention to when testing: The GeoEvent Manager's list of registered geofences needs to be periodically refreshed. The easiest way to do this is to click 'Site' (which takes you back to the 'GeoEvent Definitions' page) and then click 'GeoFences' to force that page to refresh. You need to do this to make sure you are seeing an accurate list of the geofences currently in the GeoEvent Server registry. Geofence synchronization runs every nn minutes from the point you click 'Synchronize' when saving your GeoFence Synchronization Rule. I've found that the timer can slowly creep due to normal latency. This means that a synchronization you might expect to run 5 seconds after each minute might, after a couple of hours, be running at 15 or 20 seconds after each minute. To account for this, when testing, I try to wait two full synchronization cycles before checking to see if feature records I deleted resulted in corresponding geofences being removed from the GeoEvent Server registry. At the 10.9.1 release I had to explicitly enter a WHERE clause 1=1 into the Geofence Synchronization Rule's Query Definition parameter. This was something that was fixed in the 11.1 release. Once you upgrade to the 11.1 release, because the logic associated with the Replace All GeoFences in Category checkbox changes, my advice is to delete and reconfigure your GeoFence Synchronization Rule and make sure the Replace All GeoFences in Category checkbox is checked.
01-05-2024
02:25 PM
|
1
|
0
|
298
|
BLOG
|
Hello Debbie -- I am going to request we take any further troubleshooting through Esri Technical Support. From what you've shared I'm assuming that the GeoEvent Definition you've configured your Poll an External Website for JSON input to create for you (e.g. SamsaraLocationsIN) recognized the attribute time in the data received from Samsara as a numeric value. When allowing an input to create a GeoEvent Definition for you, numeric values are adapted as a Double (because a Double is the most generic way to handle a numeric value). If indeed the time in the Samsara data record is a 13-digit epoch expressing milliseconds -- not a 10-digit epoch expressing seconds -- then you should be able to simply copy the SamsaraLocationsIN GeoEvent Definition the input created for you and edit your copy to specify that the attribute value time should be adapted as a Date. You should then delete the auto-generated SamsaraLocationsIN GeoEvent Definition and reconfigure your Poll an External Website for JSON input to use your copy of the GeoEvent Definition, the one you've tailored specifically to adapt time as a Date. You also might want to review the comments in the following Esri Community article which present and discuss the adaptation of different representations of date/time values: Timestamps received from a sensor feed display differently in GeoEvent Sampler, ArcGIS REST Services queries, and ArcGIS Pro
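If you want to sanity-check the Samsara time values before reconfiguring the input, a quick Python sketch (the threshold heuristic is mine, not anything GeoEvent Server does) distinguishes 10-digit second epochs from 13-digit millisecond epochs:

```python
from datetime import datetime, timezone

def normalize_epoch_ms(value):
    """Treat values below 1e12 as seconds and scale them; otherwise assume milliseconds.
    (10-digit epochs are seconds; 13-digit epochs are milliseconds.)"""
    value = float(value)
    return int(value * 1000) if value < 1e12 else int(value)

# Both representations of the same instant normalize to the same millisecond epoch:
ms = normalize_epoch_ms(1677019005000)
dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)  # 2023-02-21 22:36:45+00:00
```

If the values printed this way match the report times you expect from the sensors, you can be confident the attribute is a millisecond epoch that should be adapted as a Date.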
10-16-2023
12:59 PM
|
1
|
0
|
586
|
BLOG
|
At the 10.8.1 release there were limited options for using a Field Calculator or Field Mapper processor to explicitly cast a data value from a numeric value to a Date. The inbound connector you are using, however, should be able to adapt the received value as a Date rather than as a Double or String. If you can provide me a sample of the data being received and screenshots of how you've configured the inbound connector and the GeoEvent Definition that connector is using, I can probably suggest a way to adapt the Samsara data you are receiving. -- RJ
10-13-2023
11:23 AM
|
0
|
0
|
602
|
DOC
|
This article provides additional information and examples for the Poll an ArcGIS Server for Features input's Method to Identify Incremental Updates parameter. Please refer to the input's on-line help topic for usage notes, a description of the input's other parameters, as well as considerations and limitations of this input.

See Also:
- Polling feature services for "Incremental Updates" (Updated August 2023)
- Question regarding "Incremental Update" workarounds, custom components?
- Using a partial GeoEvent Definition to update feature records

The Method to Identify Incremental Updates parameter has four options as described in the Parameters table in the input's on-line help topic:
- ObjectID – GeoEvent Server will cache the greatest object identifier from the feature record set returned from a map/feature service poll. Only features whose object identifier is greater than the value cached from the last poll will be included in the next poll.
- Timestamp since last-received to newest-feature timestamp – GeoEvent Server will cache the greatest date/time value from the feature record set returned from a map/feature service poll. Only feature records whose timestamp is greater than the value cached from the last poll will be included in the next poll.
- Timestamp interval between last query time until now – GeoEvent Server will construct a temporal query with a lower-bound and upper-bound. The lower-bound will be the date/time the last query was executed. The upper-bound will be the date/time “now”. Only feature records with a timestamp within the temporal query’s range will be included in the next poll. The feature record timestamp is taken from a specified attribute field.
- Timestamp interval between last query time with overlap until now – GeoEvent Server will construct a temporal query with a lower-bound equal to a specified number of seconds before the last query was executed and an upper-bound equal to the date/time “now”. 
Only feature records whose timestamp is within the temporal query’s range will be included in the next poll.

When using the Poll an ArcGIS Server for Features input’s Get Incremental Updates capability, if you configure GeoEvent Server logging to include DEBUG messages from the feature service inbound transport, you will see detailed messages which identify the query expression and/or temporal query GeoEvent Server constructs to limit which feature records are polled.

(Option) - Timestamp since last-received to newest-feature timestamp
(A) 1=1 and last_updated > timestamp '1969-12-31 23:59:59'
(B) 1=1 and last_updated > timestamp '2023-08-07 13:29:00'

The logged message (A) indicates the input has no cached timestamp value. In addition to honoring the input’s default Query Definition 1=1, the input has constructed a query to include any feature record with an epoch date/time greater than “the beginning of time”, defined as January 1st 1970 (midnight UTC). This is expected to retrieve any feature records which have a valid (non-null) date/time stamp. The logged message (B) indicates that, of the feature records returned in the previous poll, the records with the greatest timestamp were those whose date/time attribute held the value “Aug 07 2023 13:29:00 UTC”. Only feature records with a timestamp greater than this value are included when polling using the cached timestamp.

(Option) - Timestamp interval between last query time until now
(A) 1=1 and last_updated >= timestamp '1969-12-31 23:59:59' and last_updated < timestamp '2023-08-08 01:19:05'
(B) 1=1 and last_updated >= timestamp '2023-08-08 01:19:05' and last_updated < timestamp '2023-08-08 01:20:06'
(C) 1=1 and last_updated >= timestamp '2023-08-08 01:20:06' and last_updated < timestamp '2023-08-08 01:21:06'

The logged message (A) indicates that the input has no cached timestamp value. 
In addition to honoring the input’s default Query Definition 1=1, the input has constructed a query to include any feature record with an epoch date/time greater than “the beginning of time”, defined as January 1st 1970 (midnight UTC), but less than the current time “now” when the query was executed. The logged message (B) indicates that the input last polled for feature records at “Aug 08 2023 01:19:05 (UTC)” and has therefore assigned that as the lower-bound for the constructed temporal query. The upper-bound is set to the current time “now”. The expectation is that any feature records with a valid (non-null) date/time between the lower-bound and upper-bound will be retrieved. The logged message (C) indicates that the input last polled for feature records at “Aug 08 2023 01:20:06” and has therefore assigned that as the lower-bound for the constructed temporal query. The upper-bound is again set to the current time “now”, with the same expectation that any feature records with a valid (non-null) date/time between the lower-bound and upper-bound will be retrieved.

Reviewing these logged messages we recognize that the input is polling for features every 60 seconds. We recognize that the ArcGIS Server’s managed geodatabase, in this case, is rounding date/time values to the nearest second -- otherwise the logged messages would include millisecond values. Also note that the upper-bound in the logged messages (A) and (B) differ by one second. There is inherent latency (some number of milliseconds) between the time GeoEvent Server determines the current time “now” (setting the temporal query’s upper-bound) and when a result is returned from the database. The input's polling is conducted approximately “every 60 seconds” depending on operational latency within GeoEvent Server (as data records are ingested, adapted, and processed) as well as between solution components (e.g. ArcGIS Server and ArcGIS Data Store). 
(Option) - Timestamp interval between last query time with overlap until now
(A) 1=1 and last_updated >= timestamp '1969-12-31 23:59:59' and last_updated < timestamp '2023-08-08 02:21:15'
(B) 1=1 and last_updated >= timestamp '2023-08-08 02:21:05' and last_updated < timestamp '2023-08-08 02:22:45'
(C) 1=1 and last_updated >= timestamp '2023-08-08 02:22:35' and last_updated < timestamp '2023-08-08 02:24:15'

The logged message (A) indicates that the input has no cached timestamp value. In addition to honoring the input’s default Query Definition 1=1, the input has constructed a query to include any feature record with an epoch date/time greater than “the beginning of time”, defined as January 1st 1970 (midnight UTC), but less than the current time “now” when the query was executed. The logged message (B) indicates that the input last polled for feature records at “Aug 08 2023 02:21:15 (UTC)”. The input, in this case, was configured with an additional offset parameter, Timestamp overlap duration in seconds, set to 10 seconds, so the input has constructed a temporal query with a lower-bound 10 seconds earlier than its last query. The computed lower-bound is “Aug 08 2023 02:21:05 (UTC)”. The constructed temporal query’s upper-bound is set to the current time “now”. Any feature records with a valid (non-null) date/time between the lower-bound (with its offset) and the upper-bound will be retrieved using this query. The logged message (C) indicates that the input last polled for feature records at “Aug 08 2023 02:22:45 (UTC)” and set a lower-bound offset by the 10 seconds specified by the input’s Timestamp overlap duration in seconds parameter. The computed lower-bound is “Aug 08 2023 02:22:35 (UTC)”. The constructed temporal query’s upper-bound is again set to the current time “now”, and any feature records with a valid (non-null) date/time between the lower-bound (with its offset) and the upper-bound will be retrieved using this query. 
Reviewing the logged messages in this third example we recognize that the input is polling for features approximately every 90 seconds -- there is a 90 second difference between the upper-bound values in each of the constructed temporal queries. You can use the following GeoEvent Server component logger to configure logging to include DEBUG level messages with the information illustrated above for the Poll an ArcGIS Server for Features input’s Get Incremental Updates capability:

com.esri.ges.transport.featureService.FeatureServiceInboundTransport
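The temporal queries in the DEBUG messages above can be approximated with a few lines of Python. This is only a model of the logged WHERE clauses, not Esri's implementation:

```python
from datetime import datetime, timedelta

def build_where(query_def, field, last_query_time, now, overlap_seconds=0):
    """Approximate the WHERE clause logged by the feature service inbound transport."""
    fmt = "%Y-%m-%d %H:%M:%S"
    lower = last_query_time - timedelta(seconds=overlap_seconds)
    return "{0} and {1} >= timestamp '{2}' and {1} < timestamp '{3}'".format(
        query_def, field, lower.strftime(fmt), now.strftime(fmt))

# Reproduce logged message (B) from the 'with overlap' example above:
clause = build_where("1=1", "last_updated",
                     last_query_time=datetime(2023, 8, 8, 2, 21, 15),
                     now=datetime(2023, 8, 8, 2, 22, 45),
                     overlap_seconds=10)
```

With overlap_seconds=0 the same function models the plain "interval between last query time until now" option, which may help when reasoning about which feature records a given poll will (or will not) pick up.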
08-11-2023
04:35 PM
|
0
|
0
|
911
|
POST
|
I moved this thread from the GeoEvent Server Questions board over to ArcGIS Pro as it sounds, to me, like this has more to do with a difference in behavior when adding hosted feature layers backed by a spatiotemporal data store to an ArcGIS Pro project vs. adding the layer to an Enterprise portal web map. We might want to consider taking 'GeoEvent' out of the thread's title. cc: @jill_es
07-03-2023
11:40 AM
|
1
|
1
|
400
|
POST
|
Hello @JessicaRouns. What you describe, I think, is a limitation of the spatiotemporal data store's map service implementation. You are correct that you cannot use GeoEvent Server to edit or specify feature record symbology. Because the GeoEvent Manager web application can be used to publish map and feature services, both when using a spatiotemporal data store as well as a traditional relational geodatabase, it is reasonable to see this as something the GeoEvent Server product team could address. But I would suggest that GeoEvent Server's role in this scenario is limited to writing data from processed event records out to feature records, using the web service as an interface to the geodatabase. I will bring this up with developers familiar with the spatiotemporal data store's map service implementation. However, since configuring symbology for feature record display is more a part of web service publication and specification than of GeoEvent Server, I would suggest you try asking this question on the ArcGIS Enterprise Questions board or perhaps submitting it as an enhancement request to ArcGIS Enterprise Ideas. You can mention @jill_es, one of the product managers for ArcGIS Enterprise, and she can help direct the conversation to the right team. Thanks -- RJ
07-03-2023
11:09 AM
|
2
|
0
|
359
|
POST
|
Hello @JessicaRouns. No, there are no plans to develop an out-of-the-box inbound connector for GeoEvent Server that reads the Esri Shapefile format. Using a shapefile to capture and relay sensor observations to GeoEvent Server is not considered a good fit for real-time data. Date values in a shapefile are stored as a character string yyyy-mm-dd without a time. When using a shapefile, the time portion of a datetime value is truncated / dropped from the data value. For example, the datetime 2002-08-20 12:00:00 PM is stored in a shapefile as simply 2002-08-20. This is a severe limitation when considering the collection and representation of sensor data presumably being updated in real-time. I would encourage you to consider ways that you might capture sensor observations as either delimited text or JSON formatted data records. You could relay delimited text to a GeoEvent Server input via a TCP socket relatively easily, or use an input capable of receiving JSON as HTTP/POST requests. Either way, you are probably going to want to use a web service to convey the data, or write a Python script to send the data to GeoEvent Server's input. You could use a system file to convey the data as either delimited text or in a JSON format, but I would recommend other transport protocols over using system files. Hope this information helps -- RJ

cross-reference:
- Calculate date fields (ArcGIS Pro Documentation)
- Fundamentals of date fields (ArcMap 10.8 Documentation)
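As a sketch of the two transport options mentioned above (the host, port, and URL are placeholders for whatever you configure on your GeoEvent Server inputs), a small Python relay might look like this:

```python
import json
import socket
import urllib.request

def to_csv_line(record):
    """Format one observation as a delimited-text line, newline terminated."""
    return ",".join(str(v) for v in record) + "\n"

def send_csv_over_tcp(host, port, records):
    """Relay delimited-text records to a GeoEvent Server TCP socket input."""
    with socket.create_connection((host, port)) as sock:
        for record in records:
            sock.sendall(to_csv_line(record).encode("utf-8"))

def send_json_over_http(url, record):
    """POST one JSON record to an input configured to receive JSON via HTTP/POST."""
    req = urllib.request.Request(url, data=json.dumps(record).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Either function could be called from a watcher script that reads new sensor observations (however they are captured) and relays them on, keeping the datetime values intact rather than losing the time portion to the shapefile format.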
07-03-2023
10:08 AM
|
1
|
0
|
276
|
POST
|
Previous presentations for most technical workshops are available from the https://mediaspace.esri.com site. The 2022 UC presentation for GeoEvent Server Best Practices is currently available for review. Questions and comments from either presentation are welcome on this thread.
06-30-2023
03:51 PM
|
0
|
0
|
477
|
POST
|
This is a discussion thread opened for users, distributors, and especially attendees of the 2023 Esri User Conference to post questions following our technical workshop ArcGIS GeoEvent Server: Best Practices.
John Fry and I will monitor this thread throughout the summer to follow up with answers to questions related to our technical workshop presentation. If you have a "Best Practices" type of question, please post it as a reply in this thread.
06-30-2023
03:29 PM
|
0
|
2
|
484
|
IDEA
|
Update June 2023: There have been a couple of changes to GeoEvent Server which could be used to support coded value replacement as part of an event processing workflow. These changes require that you have upgraded to at least the ArcGIS 10.9.1 release of GeoEvent Server, though. The first option, if you have only one or maybe two attribute fields which contain coded values, would be to use the new Choice element. You could use Choice to switch on the coded value in each field and fan out, following each choice with a separate Field Mapper processor that writes a descriptive string or label for a coded value into an attribute field. If you allowed the input to adapt the coded values as String, then you could have the Field Mapper replace the string value '1' with the more descriptive string 'Minor Leak', for example. The drawback to this approach is that a coded value domain with a dozen (or more) discrete coded values would require a separate Field Mapper for each choice of coded value. The fan-out in your GeoEvent Service could become unwieldy, especially since you would also need a separate Choice element for each attribute field which contained a coded value. This is hardly better, in my opinion, than using a Field Enricher if you have dozens of attribute fields each with their own coded values. The approach uses a lot of brute force, and isn't very elegant, but it is perhaps more readable than a second approach I can suggest. The second approach would be to use a Field Mapper processor which, beginning with the 10.9.1 release, supports field name delimitation and expression evaluation. 
Suppose you were receiving data like the following:

[
  { "code1": 1, "code2": 20, "code3": -3 },
  { "code1": 2, "code2": 30, "code3": -2 },
  { "code1": 3, "code2": 10, "code3": -1 }
]

The value { 1, 2, 3 } in the field code1 should be replaced with { 'Red', 'Green', 'Blue' }
The value { 10, 20, 30 } in the field code2 should be replaced with { 'Small', 'Medium', 'Large' }
The value { -1, -2, -3 } in the field code3 should be replaced with { 'Jack', 'Jill', 'Jane' }
etc.

An expression like the following could chain together a series of replaceAll( ) functions to handle the coded value replacement:

replaceAll(replaceAll(replaceAll(code1, '1', 'Red'), '2', 'Green'), '3', 'Blue')

The attribute value in the input field (code1 in this example) would have to be adapted as a String (rather than an Integer) so that the expression could overwrite one string value with another. Nesting several replaceAll( ) functions together this way requires the input string to be iteratively evaluated (and re-evaluated), which isn't very efficient. The expression itself would also become unwieldy if, for example, there were dozens of coded values in the domain. The only real advantage to this second approach is that you could configure a single Field Mapper with several different string substitution expressions in each of several different mapping fields. Each field mapping expression would take data from one field, translate it, and write the translation out to a target field. Configuring a Field Mapper with an array of string manipulation expressions might be better than having to configure a Field Enricher to handle the look-up of a descriptive string value and enrich the look-up value into an event record, if only to avoid having to configure a separate Field Enricher (with a separate look-up table) for each attribute field containing a coded value you needed to translate. 
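For comparison, here is the same translation expressed in Python: a dictionary look-up per field, which is essentially what the chained replaceAll( ) expressions accomplish (this is an illustration only, not something GeoEvent Server runs natively):

```python
# Coded value domains from the example above; coded values are assumed to have
# been adapted as strings so they can be overwritten with their labels.
DOMAINS = {
    "code1": {"1": "Red",   "2": "Green",  "3": "Blue"},
    "code2": {"10": "Small", "20": "Medium", "30": "Large"},
    "code3": {"-1": "Jack", "-2": "Jill", "-3": "Jane"},
}

def translate(record):
    """Replace each coded value with its descriptive label; unknown codes pass through."""
    return {field: DOMAINS.get(field, {}).get(str(value), value)
            for field, value in record.items()}

records = [
    {"code1": 1, "code2": 20, "code3": -3},
    {"code1": 2, "code2": 30, "code3": -2},
    {"code1": 3, "code2": 10, "code3": -1},
]
translated = [translate(r) for r in records]
```

Unlike nested replaceAll( ) calls, a look-up table is evaluated once per field and cannot accidentally re-replace part of an already-substituted string.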
You can read more about using expressions in Field Mapper in the blog @EricIronside has here: GeoEvent 10.9: Using expressions in Field Mapper Processors
06-01-2023
11:18 AM
|
0
|
0
|
381
|
BLOG
|
@BrianLomas -- Would you please open an incident with Esri Technical Support on this so an analyst can work with you to establish reproducibility? Off the cuff, I'm thinking that the repeated key 'data' is going to be a problem. I don't know that you're going to be able to specify an XML Object Name for a Poll an External Website for XML input to use to jump forward to the correct substructure in the XML and begin reading data from that point. We'll need to take a look at your GeoEvent Definition to make sure the cardinality of the different attribute keys is configured to properly interpret <data type="list"> as a single item (cardinality 1) and the nested <data type="item"> elements as a collection of items (cardinality many). By chance did you review the post XML Data Structures - Characteristics and Limitations? It contains some information which complements this article's discussion: JSON Data Structures - Working with Hierarchy and Multicardinality
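To illustrate the cardinality point with plain Python (the sample XML below is hypothetical, mimicking the repeated 'data' key described above): the outer <data type="list"> is a single element, while the nested <data type="item"> elements form a collection.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample with a repeated 'data' key, similar to what is described above:
xml_text = """<response>
  <data type="list">
    <data type="item"><id>1</id></data>
    <data type="item"><id>2</id></data>
  </data>
</response>"""

root = ET.fromstring(xml_text)
outer = root.find("data")        # one 'list' element (cardinality 1)
items = outer.findall("data")    # many 'item' elements (cardinality many)
```

A GeoEvent Definition would need to make the same distinction: the element holding the collection is cardinality 1, and the repeated child elements inside it are cardinality many.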
05-04-2023
07:50 PM
|
0
|
0
|
5879
|
POST
|
Does GeoEvent Server have a preferred string format for date and time values? When is a value for the Expected Date Format parameter required, and when can I leave it unspecified?

GeoEvent Server is able to adapt a variety of different string formats without relying on a custom string formatting pattern specified using the Expected Date Format parameter. The preferred format for date/time values is the ISO 8601 format with a time zone designator, but either of the following two string formats is preferable to other options:

"2023-02-21T14:36:45-08:00"
"Tue Feb 21 14:36:45 PST 2023"

The first formatted string is an example of the ISO 8601 standard. The second is an example of the format Java uses when converting a Date to a String. Notice that both of the examples above designate that the date and time values are in the Pacific Time Zone. GeoEvent Server is able to adapt either string to produce a Date without relying on a specific Expected Date Format pattern. The underlying epoch used for event record processing and the long integer value written to a geodatabase as a feature record’s Date will be the same (1677019005000) regardless of the server locale or clock setting when adapting either formatted string.

GeoEvent Server inputs can adapt other string formats as Date values when a GeoEvent Definition specifies the event record attribute should be handled as a Date rather than a String. Each of the following formatted string values will also adapt successfully, without requiring an Expected Date Format pattern, but the underlying epoch will depend on the locale and clock setting of the server:

"02/21/23 02:36:45 PM"
"02/21/23 14:36:45"
"02/21/2023 02:36:45 PM"
"02/21/2023 14:36:45"

None of the four date/time strings above specify a time zone, so GeoEvent Server has to assume one when adapting the formatted strings and calculating a Date. The string values will be adapted as local date/time values. 
The epoch long integer value assigned to the constructed Date will be a UTC date/time offset from the server's local time by the appropriate number of hours. This means that different servers in different time zones will compute different epoch values. Consider the example below: the epoch calculated by a server observing 'Pacific Time' will be +03:00 hours relative to the epoch calculated by a server that observes 'Eastern Time', even though both server machines presumably received and adapted the same formatted string from the same data feed. If the date/time string sent by the feed included a time zone designator, GeoEvent Server would not presume to use the server's observed time zone, and the epoch computed by each server would be the same.

Timestamp values with only "hours" and "minutes"

GeoEvent Server inputs are able to adapt string timestamps from data feeds which include only "hours" and "minutes" in the formatted string value:

"02/21/23 14:36"
"02/21/2023 14:36"

The server's calculated epoch will, of course, be 45 seconds earlier than the 'Pacific Time' epoch shown in the previous example (1677018960000 rather than 1677019005000) since the reported time does not include 'seconds'.

Specifying an Expected Date Format pattern string

You only need to configure an input with an Expected Date Format pattern string if the strings being received and adapted as Date values deviate from the examples above. An example frequently encountered involves date/time values on the European continent which specify the day before the month (e.g. 21/02/2023). In this case you would need to specify a date format pattern dd/MM/yyyy HH:mm:ss for GeoEvent Server to use when parsing the date string. The pattern specified uses the standard Java conventions for date/time strings. The Java SimpleDateFormat Tester is an online utility you can use to explore Java's conventions.
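The "different servers compute different epochs" point can be illustrated with a short Java sketch. The two zone IDs below stand in for the assumed local time zones of two hypothetical servers; GeoEvent Server performs the equivalent assumption internally when a string has no time zone designator.

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class LocalTimeAmbiguity {
    static long epochIn(String zoneId) {
        // A formatted string with no time zone designator
        LocalDateTime ldt = LocalDateTime.parse(
            "02/21/2023 14:36:45",
            DateTimeFormatter.ofPattern("MM/dd/yyyy HH:mm:ss"));
        // Each server assumes its own local time zone when building the Date
        return ldt.atZone(ZoneId.of(zoneId)).toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        long pacific = epochIn("America/Los_Angeles"); // 1677019005000
        long eastern = epochIn("America/New_York");    // 1677008205000
        System.out.println(pacific - eastern);         // 10800000, i.e. +03:00 hours
    }
}
```

The same string thus yields epochs three hours apart, exactly the Pacific-versus-Eastern discrepancy described above.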
Example: Receiving a UTC time which does not include a time zone designator

Suppose you want an input to adapt the formatted date/time string "February 28, 2023 21:36:45" reported by a sensor feed. The feed's specification indicates that timestamps on each record are UTC values, but looking at the data you recognize that the formatted string uses the proper name of the month rather than a numeric value, includes a comma, and does not include a time zone designator. You therefore configure your GeoEvent Server input with an Expected Date Format pattern MMM dd, yyyy HH:mm:ss to instruct GeoEvent Server how the custom string format should be interpreted. The feed's specification indicates that date/time values are reported as UTC values, so you should also use the currentOffsetUTC( ) function to add your server's current UTC offset to the adapted date/time.

Example: toDate(ReportedDT + currentOffsetUTC())

The epoch you want to compute and assign to your geodatabase feature records is: 1677620205000.

In an earlier example I indicated that GeoEvent Server assumes a date/time expressed in the ISO 8601 format is a UTC value when adapting the string. In this case GeoEvent Server assumes the opposite. The custom date/time format "February 28, 2023 21:36:45" follows no particular standard, so GeoEvent Server assumes that it represents a local value. The current UTC offset is added to the reported date/time in this case because the feed's specification said each record's timestamp is a UTC value and GeoEvent Server assumed the opposite: it checked the server's locale and applied an assumed offset based on the time zone observed by the server when adapting the formatted string as a Date. Time zones in North America are several hours behind Greenwich Mean Time (GMT), so GeoEvent Server would add some number of hours to push an assumed local value to a UTC standard value.
The value returned by the currentOffsetUTC( ) function is negative (in North America), so adding the value to the date/time effectively rolls back the offset applied when the data value was adapted.
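currentOffsetUTC( ) is a GeoEvent Server expression function and cannot be run outside the product, but the arithmetic it performs can be sketched in Java. The server's time zone is assumed to be Pacific here purely for illustration:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class UtcOffsetCorrection {
    static long correctedEpoch() {
        DateTimeFormatter fmt =
            DateTimeFormatter.ofPattern("MMMM dd, yyyy HH:mm:ss", Locale.US);
        LocalDateTime ldt = LocalDateTime.parse("February 28, 2023 21:36:45", fmt);

        // Lacking a time zone designator, the value is adapted as local
        // time in the server's assumed zone (Pacific, in this sketch).
        ZoneId serverZone = ZoneId.of("America/Los_Angeles");
        long adaptedAsLocal = ldt.atZone(serverZone).toInstant().toEpochMilli();

        // The server's current UTC offset in milliseconds (negative in
        // North America), analogous to what currentOffsetUTC( ) returns.
        long offsetMillis =
            serverZone.getRules().getOffset(ldt).getTotalSeconds() * 1000L;

        // Adding the negative offset rolls the assumed-local value back
        // to the UTC instant the feed actually reported.
        return adaptedAsLocal + offsetMillis; // 1677620205000
    }

    public static void main(String[] args) {
        System.out.println(correctedEpoch()); // 1677620205000
    }
}
```

The corrected value matches the target epoch from the example above (1677620205000, i.e. February 28, 2023 21:36:45 UTC), confirming that adding the negative offset undoes the local-time assumption.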
03-16-2023 03:25 PM
|