BLOG
This blog is one in a series discussing debugging techniques you can use to identify the root cause of an issue with a GeoEvent Server deployment or configuration. Click any link in the quick list below to jump to another blog in the series.

- Configuring the application logger
- Add/Update Feature Outputs (this blog)
- Application logging tips and tricks
- Geofence synchronization deep dive

In a client / server context, ArcGIS GeoEvent Server sometimes acts as a client and at other times acts as a server. When an Add a Feature or an Update a Feature output is configured to add / update feature records in a geodatabase feature class through a feature service, ArcGIS GeoEvent Server is a client making requests on an ArcGIS Server feature service. In this blog I will show how you can isolate the requests GeoEvent Server sends to an ArcGIS Server service and how to use the JSON from a request to debug issues you may be encountering.

Scenario

A customer reports that an input connector they have configured appears to be successfully receiving and adapting data from a provider, and event records appear to be processed as expected through a GeoEvent Service. The event record count on their output increments, but they are not seeing some – or any – features displayed by a feature layer they have added to a web map.

Request DEBUG logs for the outbound feature service transport

Components in the ArcGIS GeoEvent Server runtime log messages to provide information as well as note warnings and errors. Each component uses a logger, an object responsible for writing messages to the system's log file, which can be configured to generate different levels of messages (e.g. DEBUG, INFO, WARN, or ERROR). In this case we want to request that the com.esri.ges.transport.featureService.FeatureServiceOutboundTransport component log DEBUG messages to help us identify the problem.

To enable DEBUG logging for a single component's logger:

1. In GeoEvent Manager, navigate to the Logs page and click Settings.
2. Enter the name of the logging component in the Logger text field and select the DEBUG log level.
3. Click Save.

As you type the name of a logger, if GeoEvent Manager's cache of logged messages contains a message from a particular component's logger, IntelliSense will help you identify the logger's name.

Querying for additional information

When a processed event record is routed to an Update a Feature output, the data is first reformatted as Esri Feature JSON so that it can be incorporated into a map/feature service request. A request is then made using the ArcGIS REST API to either Add Features or Update Features. An Add a Feature output connector has the easier job – it doesn't care whether a feature record already exists since it is not going to request an update. An Update a Feature output connector, on the other hand, needs to know the objectid or row identifier of the feature record it should update. If the output has previously received an event record with the same TRACK_ID, it has likely already queried the targeted map/feature service for feature records whose configured Unique Feature Identifier Field matches that TRACK_ID. The output maintains a cache mapping every event record's TRACK_ID to the corresponding object or row identifier of a feature record.
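Before looking at the logged messages, it may help to see this identifier query as a plain REST request you can reproduce yourself. The sketch below is a minimal Python example, not the output connector's actual code; the feature layer URL, token, and TRACK_ID value are stand-ins borrowed from the log excerpts that follow.

import requests  # assumes the 'requests' package is installed

# Hypothetical values; substitute your own feature layer URL and token.
layer_url = "https://localhost.esri.com/server/rest/services/SampleRecord/FeatureServer/0"
token = "QNv27Ov9..."  # truncated placeholder; acquire a real token from your server or portal

# Reproduce the identifier query an Update a Feature output sends: find feature
# records whose track_id matches a processed event record's TRACK_ID.
params = {
    "f": "json",
    "token": token,
    "where": "track_id IN ('8SKS617')",
    "outFields": "track_id,objectid",
}
response = requests.post(f"{layer_url}/query", data=params, verify=False)  # verify=False only for a self-signed localhost certificate
print(response.json())  # an empty "features" array means no matching record exists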
Here is what the logged DEBUG messages look like when an Update a Feature output queries to discover an object or row identifier associated with a feature record:

1  2019-06-05T15:12:34,324 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Querying for missing track id '8SKS617'
2  2019-06-05T15:12:34,489 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Posting to URL: https://localhost.esri.com/server/rest/services/SampleRecord/FeatureServer/0/query with parameters: f=json&token=QNv27Ov9...&where=track_id IN ('8SKS617')&outFields=track_id,objectid.
3  2019-06-05T15:12:34,674 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Response was {"exceededTransferLimit":false,"features":[ ],"fields"...

Notice a few key values in the logged messages above:

Line 1: The output has recognized that it has not previously seen an event record with the TRACK_ID 8SKS617, so it must query the map/feature service to see if it can find a matching feature record.

Line 2: This is the actual query sent to the SampleRecord feature service's query endpoint, requesting any feature record whose track_id attribute is one of several in a specified list (8SKS617 is actually the only value in the list). The query requests that the response include only the track_id attribute and an object identifier value.

Line 3: The ArcGIS Server service responds with an empty array, features:[ ]. This indicates that there are no features whose track_id attribute matches any of the values in the query's list.

The output was configured with its Update Only parameter set to 'No' (the default). So, given that there is no existing record whose track_id attribute matches the event record's tagged TRACK_ID field, the output connector fails over to add a new feature record instead:

4  2019-06-05T15:12:34,769 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Posting to URL: https://localhost.esri.com/server/rest/services/SampleRecord/FeatureServer/0/addFeatures with parameters: f=json&token=QNv27Ov9...&rollbackOnFailure=true&features=[{"geometry":{"x":-115.625,"y":32.125,"spatialReference":{"wkid":4326}},"attributes":{"track_id":"8SKS617","reported_dt":1559772754211}}].
5  2019-06-05T15:12:34,935 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Response was {"addResults":[{"objectId":1,"globalId":"{B1384CE2-7501-4753-983B-F6640AB63816}","success":true}]}.

Again, take a moment to examine the logged text:

Line 4: The ArcGIS REST API endpoint to which the request is sent is the Add Features endpoint. The request's features parameter carries an Esri Feature JSON representation of the event data.
Line 5: The ArcGIS Server service responds with a block of JSON indicating that it successfully added a feature record, assigning the new record the object identifier '1' and a globally unique identifier (the feature service I'm using in this example is actually one hosted by my ArcGIS Enterprise portal).

The debug logs include the Esri Feature JSON constructed by the output connector. You can copy and paste this JSON into the feature service's web page in the ArcGIS REST Services Directory (see the sketch at the end of this post). This is an excellent way to abstract ArcGIS GeoEvent Server from your debugging workflow and determine whether there are problems with how the JSON is formatted, or reasons why a feature service might reject a client's request. I used this technique once to demonstrate that a polygon geometry created by a Create Buffer processor in a GeoEvent Service had several dozen vertices, allowing the geometry to approximate a circular area. When the polygon was committed to the geodatabase as a feature record, however, its geometry had been generalized such that it had only a few vertices. Web maps were displaying very rough approximations of the area of interest, not circular buffers. But it wasn't ArcGIS GeoEvent Server that had failed to produce a geometry representing a circular area; the problem was somewhere in the back-end relational database configuration.

Rollback on Failure?

There is a query parameter on Line 4 in the log excerpt above which is easily overlooked: rollbackOnFailure=true

The default action for both the Add a Feature and Update a Feature outputs is to request that the geodatabase roll back the transaction if a problem is encountered. In many cases this is why customers are not seeing all of the feature records they expect updated in a feature layer they have added to a web map. Consider the following fields specification for the targeted feature service's feature layer:

Fields:
  track_id ( alias: track_id, type: esriFieldTypeString, length: 512, editable: true, nullable: true )
  reported_dt ( alias: reported_dt, type: esriFieldTypeDate, length: 29, editable: true, nullable: true )
  objectid ( alias: objectid, type: esriFieldTypeOID, length: 8, editable: false, nullable: false )
  globalid ( alias: globalid, type: esriFieldTypeGlobalID, length: 38, editable: false, nullable: false )

Suppose for a moment that the esriFieldTypeString specification for the track_id attribute specified that the string should not exceed seven characters. If a web application (client) were to send the feature service a request whose value for track_id was longer than seven characters, the data would not comply with the feature layer's specification and the feature service would be expected to reject the request. Likewise, if attribute fields other than esriFieldTypeOID or esriFieldTypeGlobalID were specified as not allowing null values, and a client request was made whose attribute values were null, the data would not be compliant with the feature layer's specification; the feature service should reject the request.

By default, both the Add a Feature and Update a Feature output connectors work through a cache of event records they have formatted as Esri Feature JSON, placing the formatted data in one or more requests that are sent to the targeted feature service's feature layer. Each request, again by default, is allowed to contain up to 500 event / feature records. It only takes one bad apple to spoil a batch.
If even one processed event record's data in a single transaction request containing ten, fifty, or a hundred feature records is not compliant with string length restrictions, value nullability restrictions – or any other restriction enforced by an ArcGIS Server feature service – the entire transaction will roll back and none of the feature records associated with that batch of processed event records will be updated.

Reduce the Maximum Features Per Transaction

You cannot change the rollback on failure behavior. The outbound connectors interfacing with ArcGIS Server feature services do not implement a mechanism to retry an add/update feature record operation when one or more feature records in a batch do not comply with a feature layer's specification. You can, however, change the number of processed event records an Add a Feature or Update a Feature output connector will include in each transaction. If you configure your output to allow a maximum of one feature record per transaction, you can begin to work around the issue of one bad record spoiling an entire transaction. If bad data or null values occasionally creep into processed event records, then only the bad records will fail to update a corresponding feature record, and the rollback on failure won't suppress any valid feature record updates.

The downside to this is that REST requests are inherently expensive. If it were to take as little as 20 milliseconds to make a round-trip to the database and receive a response to a transaction request, you would effectively cut your event throughput to less than 50 event records per second by allowing only one processed event record per transaction. The upside to reducing, at least temporarily, the number of records allowed in a transaction is that it makes the messages being logged much, much easier to read. It also guarantees that each success / fail response from the ArcGIS Server feature service can be traced back to a single add / update feature request.

Timestamps – another benefit to logging DEBUG messages for the outbound transport

Every logged message includes a timestamp with millisecond precision. This can be very useful when debugging unexpected latency when interacting with a geodatabase's feature class through an ArcGIS Server's REST interface. Looking back at the logged DEBUG messages above, the time difference between the messages on Line 1 and Line 2 is 165 milliseconds (489 - 324 = 165). That tells us it took over a tenth of a second for the output to formulate its query for "missing" object identifiers needed to request updates for specific feature records. It takes another 185 milliseconds (674 - 489 = 185) to actually run the query and discover that there are no feature records with those track_id values.

To be fair, you should expect this latency to drop as ArcGIS Server and/or your RDBMS begin caching information about the requests being made by clients. But it is important to be able to measure the latency ArcGIS GeoEvent Server is experiencing. If every time an Add a Feature output connector's timer expires (once every second by default) it takes a couple hundred milliseconds to complete a transaction, you have a pretty good idea how many transactions you can make in one second. You might need to increase your output's Update Interval so that it holds its cache of processed event records longer before starting a series of transactions.
If you do this, know that as updates arrive for a given tracked asset, older records will be purged from the cache. When updating feature records, the cache is managed to contain only one processed event record for each unique TRACK_ID.

Conclusion

Taking the time to analyze the DEBUG messages logged by the outbound feature service transport can provide you with a wealth of information. You can immediately see whether values obtained from an event record's tagged TRACK_ID field can reasonably be expected to be found in whatever feature layer attribute field is being used to query for feature records that correlate to processed event records. You can check to see if any values in a processed event record are unexpectedly null, have strings longer than the feature layer will accept, or – my favorite – contain what ArcGIS Server suspects is HTML or SQL code, resulting in the service rejecting the transaction to prevent a suspected injection attack.

ArcGIS GeoEvent Server, when interfacing with an RDBMS through a map / feature service's REST interface, is acting as any other web mapping application client would act in making requests on a service it assumes is available. You can eliminate GeoEvent Server entirely from your debugging workflow if you copy / paste information like the Esri Feature JSON from a DEBUG message logged by the outbound transport into an HTML page in the ArcGIS REST Services Directory. I did exactly this to prove, once, that polygon geometries with hundreds of vertices modeling a circular area were somehow being generalized as they were committed into a SQL Server back-end geodatabase. If a customer reports that some – or all – of the features they expect to be added or updated in a feature layer are not displayed by a web map, take a close look at the requests the configured output is sending to the feature service.
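To close, here is a minimal sketch of the copy / paste debugging technique described above: replaying the Feature JSON from a DEBUG message against the feature service yourself, with rollbackOnFailure set just as the output connector sets it. The URL and token are hypothetical stand-ins; the Feature JSON is the example from Line 4 above.

import json
import requests  # assumes the 'requests' package is installed

layer_url = "https://localhost.esri.com/server/rest/services/SampleRecord/FeatureServer/0"
token = "QNv27Ov9..."  # truncated placeholder; acquire a real token from your server or portal

# The Esri Feature JSON copied from the outbound transport's DEBUG message (Line 4).
features = [{
    "geometry": {"x": -115.625, "y": 32.125, "spatialReference": {"wkid": 4326}},
    "attributes": {"track_id": "8SKS617", "reported_dt": 1559772754211},
}]

# rollbackOnFailure=true mirrors the output connector's default behavior: if any
# record in the batch violates the feature layer's specification, none are committed.
params = {
    "f": "json",
    "token": token,
    "rollbackOnFailure": "true",
    "features": json.dumps(features),
}
response = requests.post(f"{layer_url}/addFeatures", data=params, verify=False)  # verify=False only for a self-signed localhost certificate
print(response.json())  # look for "success": true, or an error explaining the rejection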
Posted 06-14-2019 05:45 PM

BLOG
This blog is one in a series discussing debugging techniques you can use to identify the root cause of an issue with a GeoEvent Server deployment or configuration. Click any link in the quick list below to jump to another blog in the series.

- Configuring the application logger
- Add/Update Feature Outputs
- Application logging tips and tricks (this blog)
- Geofence synchronization deep dive

In this blog I will illustrate a couple of techniques I use to identify more granular component logging than requesting the ROOT component produce DEBUG messages for all component loggers. I will also introduce a couple of command-line utilities I frequently use to interrogate the ArcGIS GeoEvent Server's system log file. I'll consider a specific scenario and show how to isolate logged messages that reveal the criteria an output includes in its feature service requests to discover and delete feature records.

Scenario

A customer has configured the Delete Old Features capability on an Add a Feature output connector and reports that feature records are being deleted from the geodatabase earlier than expected. Following advice from the blog Add/Update Feature Output Connectors, they have captured a few logged messages from the outbound feature transport but are not seeing any information about the criteria the connector uses to determine which feature records should be deleted, or when the records should be deleted.

What is the outbound feature transport telling us?

The messages captured from the outbound feature transport do not give us much information. They confirm that an Add a Feature output is periodically, once a minute, making requests on a feature service to delete old feature records and that, for the three intervals observed, no feature records were deleted (the JSON array in the response from the feature service is empty). If one or more existing feature records had satisfied criteria included in the delete features request, the logged messages would contain feature record identifiers confirming which feature records had been deleted. Looking at the raw logged messages in the karaf.log file, we would expect to see a message similar to the following:

2019-06-03T16:42:41,474 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord][0][FeatureServer] | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Response was {"deleteResults":[{"objectid":3, ... "success":true},{"objectid":4, ... "success": true}]}.

The outbound feature transport is only confirming what has been deleted, not the criteria used to determine what should be deleted. The information we need, hopefully, is being logged by a different component logger.

How to determine which component logger to watch

As I mentioned in the blog Configuring the application logger, the logging system implemented by ArcGIS GeoEvent Server logs messages from the Java runtime. The messages being logged generally contain good information for software developers, but are rather hard for a GIS analyst to review and interpret. If someone from the product team has not identified a component logger from which you should request more detailed log messages, your only option is to request DEBUG logging on the ROOT component. If you elect to do this, know that the karaf.log will quickly grow very large and will roll over as described in the aforementioned blog. All hope is not lost, however.
One technique I have found helpful is to turn off as many of my running inputs and outputs as I can, to quiet ArcGIS GeoEvent Server's activity, and then briefly, for perhaps a minute or two, request DEBUG level messages be produced by setting the debugging level on the ROOT component. GeoEvent Manager's logging user interface will quickly cache up to 500 messages and you can use the built-in IntelliSense to at least get an idea of which components are actively running and producing log messages. Once you understand that both the Add a Feature and Update a Feature output connectors use endpoints exposed through the ArcGIS REST Services Directory to interface with their targeted feature services, one component logger should stand out: the HTTP Client component logger. The information we need on the criteria used to identify feature records to delete is probably being logged as part of an HTTP REST request.

Request DEBUG logs for the HTTP Client

In this case we want to request that the com.esri.ges.httpclient.Http component log DEBUG messages to help us identify the problem. To enable DEBUG logging for the identified component's logger:

1. Navigate to the Logs page in GeoEvent Manager and click the Settings button.
2. Restore the ROOT component logger to its default level WARN and click Save.
3. Specify the name of the HTTP Client component logger, select the DEBUG log level, and Save again.

ArcGIS GeoEvent Server is fundamentally RESTful, which means you will still have a high volume of messages being logged to the karaf.log – but not as many as if you had left DEBUG logging set on the ROOT component logger.

Useful command-line utilities for interrogating karaf.log

I operate almost exclusively on a Windows platform, but Cygwin is one of the first things I install whenever I get a new machine. Cygwin is a free, open source environment which provides a native Windows integrated command-line shell from which I can execute some of my favorite Unix utilities like sed, grep, awk, and tail. There are probably other packages available which provide similar utilities and tools, but I like Cygwin.

If I open a Cygwin command-line shell, I can change directory to where the karaf.log file is being written and generate an active tail of the log, so that I don't have to open the log file in a text editor and frequently re-load the file as its content is updated. I am also able to pipe the streaming content from tail through grep to limit the logged messages displayed to those which contain specific keywords or phrases. For example:

rsunderman@localhost //localhost/C$/Program Files/ArcGIS/Server/GeoEvent/data/log
$ tail -0f karaf.log | grep --line-buffered 'where.*reported_dt'
2019-06-07T16:33:19,545 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][New_SampleRecord][0][FeatureServer] | Http | 60 - com.esri.ges.framework.httpclient - 10.7.0 | Adding parameter (where/reported_dt < timestamp '2019-06-07 17:33:19').
2019-06-07T16:34:20,269 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][New_SampleRecord][0][FeatureServer] | Http | 60 - com.esri.ges.framework.httpclient - 10.7.0 | Adding parameter (where/reported_dt < timestamp '2019-06-07 17:34:20').
2019-06-07T16:35:20,433 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][New_SampleRecord][0][FeatureServer] | Http | 60 - com.esri.ges.framework.httpclient - 10.7.0 | Adding parameter (where/reported_dt < timestamp '2019-06-07 17:35:20').
The above quickly reduces all the noise logged by the HTTP Client component logger to only those messages which include the name of the attribute field, reported_dt, which the Add a Feature output was configured to use when identifying feature records older than a specified number of minutes. The criteria we are looking for are clearly identified as a parameter the HTTP Client is adding to the request it is constructing to send to the feature service to identify and delete old feature records.

The system I am running is in California, which is -07:00 hours behind GMT. The date/time values in the reported_dt attribute of each feature record in my feature layer are expressed as epoch long integers and represent GMT values. My output is configured to query every 60 seconds and delete feature records which are more than six hours old. The logged messages above bear timestamps which are roughly 60 seconds apart, and the where clause identifies any feature record whose date/time is earlier than "now" + 07:00 hours (the UTC offset) - 06:00 hours (the number of hours at which a feature record is considered "old"). Using the ArcGIS REST Services Directory to query feature records from the feature service, I can quickly see that feature records which are not yet six hours old (relative to GMT) remain, but those I add or update with a reported_dt value which is at least six hours old get deleted every 60 seconds.

What if the above had not yielded the information we needed? We could always fall back to setting the ROOT logger to DEBUG so that all component loggers produce debug messages. While this is extremely verbose, the technique using the tail and grep command-line utilities can still be used to try to find anything which mentions our particular feature service's REST endpoint. In this case my feature service's name was New_SampleRecord, so I can reasonably expect to find logged messages which include references to: New_SampleRecord/FeatureServer/0/deleteFeatures

A grep command using a regular expression pattern match like the following should find only those logged messages which appear to be attempting to delete features from the feature layer in question:

tail -0f karaf.log | grep --line-buffered 'SampleRecord.*FeatureServer.*deleteFeatures'

Tests using the above grep log message filter reveal about 75 messages logged every 60 seconds which include a reference to the deleteFeatures endpoint for the feature layer my output is targeting. Copying and pasting these lines into a text editor, I can review them to discover that only one message contains a SQL WHERE clause. Such a clause would be required to identify records with a date/time value which should be considered "old". While the date/time value in this logged message is URL encoded, because this particular message depicts text ready to be sent out over the HTTP wire, we can still use the logged message to understand the criteria being applied by the ArcGIS GeoEvent Server's output.

2019-06-07T18:10:06,956 | DEBUG | HttpRequest Worker Thread: https://localhost.esri.com/server/rest/services/New_SampleRecord/FeatureServer/0/deleteFeatures | wire | 60 - com.esri.ges.framework.httpclient - 10.7.0 | http-outgoing-27360 >> "f=json&token=HM85k4E...&rollbackOnFailure=true&where=reported_dt+%3C+timestamp+%272019-06-07+19%3A10%3A06%27"
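If you want to read an encoded where clause like the one above without doing the percent-decoding in your head, Python's standard library can decode it for you. A quick sketch:

from urllib.parse import unquote_plus

# The URL-encoded where clause captured from the wire-level DEBUG message above.
encoded = "reported_dt+%3C+timestamp+%272019-06-07+19%3A10%3A06%27"

# unquote_plus() converts '+' characters to spaces and decodes the %XX escapes.
print(unquote_plus(encoded))
# prints: reported_dt < timestamp '2019-06-07 19:10:06'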
Posted 06-14-2019 05:45 PM

POST
I believe you will need to configure two ArcGIS Server properties to accomplish what you are trying to make work: WebContextURL and WebSocketContextURL. There is some information on the following two web pages in the ArcGIS Enterprise and developer on-line help:

- Configure a reverse proxy server with ArcGIS Server
- ArcGIS REST API (Enterprise Administration) > Server Properties

Beyond that, my apologies, but I cannot tell you exactly how these two ArcGIS Server parameters should be set. - RJ
Posted 05-13-2019 02:47 PM

BLOG
Eric Ironside posted a follow-on to Andy Ommen's blog (above) ... and included updates to Andy's code. Check out: https://community.esri.com/people/eironside-esristaff/blog/2019/03/21/scripting-tasks-using-the-geoevent-admin-api-update-inputs
Posted 05-10-2019 06:36 PM

POST
Hey MR - I would recommend you take a look at the following blogs:

- Scripting tasks using the GeoEvent Admin API - update inputs (Eric Ironside): https://community.esri.com/people/eironside-esristaff/blog/2019/03/21/scripting-tasks-using-the-geoevent-admin-api-update-inputs
- Scripting tasks using the GeoEvent Admin API (Andy Ommen): https://community.esri.com/people/aommen-esristaff/blog/2016/10/19/scripting-tasks-using-the-geoevent-admin-api
- Using the GeoEvent Admin API with Python (Jake Skinner)

What you want to do requires that you write a script or web application which uses the ArcGIS Server Administrative REST API. You can review the available operations exposed by the Admin API:

1. Browse to https://my-machine.domain:6143/geoevent/admin
2. Acquire a token from your ArcGIS Server (or Portal for ArcGIS if federated) and log in.
3. In the top-left corner, click the API link to take you to the Swagger doc for the GeoEvent Server Admin API.
4. Note the advice at the top of the page on how to authenticate your admin script's requests with the API.

Good Luck - RJ
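As an illustration of the kind of script the blogs above describe, here is a minimal Python sketch. The generateToken endpoint is a standard ArcGIS Server admin endpoint; the /geoevent/admin/inputs path and the GeoEventAuthorization header are assumptions on my part, so confirm the exact paths and authentication scheme against the Swagger doc mentioned in the steps above.

import requests  # assumes the 'requests' package is installed

server = "my-machine.domain"  # hypothetical machine name from the steps above

# Step 1: acquire a token from ArcGIS Server (use your portal's token endpoint if federated).
token_params = {
    "username": "admin",
    "password": "changeme",  # placeholder credentials
    "client": "requestip",
    "f": "json",
}
token = requests.post(
    f"https://{server}:6443/arcgis/admin/generateToken",
    data=token_params,
    verify=False,  # only because self-signed certificates are common on these ports
).json()["token"]

# Step 2: call a GeoEvent Admin API operation. The path and header below are
# assumptions -- verify both against the Swagger doc before relying on them.
response = requests.get(
    f"https://{server}:6143/geoevent/admin/inputs",
    headers={"GeoEventAuthorization": token},
    verify=False,
)
print(response.json())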
Posted 04-23-2019 09:51 AM

POST
Hello Nick - The reference you highlight, that stream services only run on a single machine, is intended to convey that the JVM used to run GeoEvent Server is the container used to run stream services (unlike traditional map/feature services which have multiple instances and are run by SOC components managed by ArcGIS Server). So the server machine used to publish a stream service is the machine whose GeoEvent Server JVM is running the stream service. However, as I mention above, copies of the Esri Feature JSON records a stream service broadcasts are forwarded to other GeoEvent Server instances in an ArcGIS Server site. This “fan out” allows web clients to subscribe to any machine’s stream service and get all of the feature records processed across a single site, multiple machine solution – regardless of which machine received an event record and which machine actually processed the event record. A bug (BUG-000114373) with the "fan out" mechanism was addressed with the 10.6.1 release and ported back for release as part of ArcGIS GeoEvent Server 10.6 Patch 2.

The 10.7 release does not include any specific changes that, I think, should influence a system architect to choose a "silo" approach (where multiple independent instances of GeoEvent Server are run, each in their own ArcGIS Server site) vs. a "site" approach whose architecture deploys multiple GeoEvent Server instances configured as part of a single ArcGIS Server site.

If scalability is your primary objective, I would probably recommend the "silo" approach. The solution you architect would need to include an external Apache Kafka (or similar message broker) to handle event record distribution across the multiple independent ArcGIS Server / GeoEvent Server instances. Adopting and maintaining your own Kafka solution introduces its own technical burden, but I think we are finding that, for scalability, this approach lends itself to a system that is easier to maintain and administer overall.

If reliability and fault-tolerance are your primary objective (I really dislike using the term high-availability) then a "site" approach is an option. The solution you architect could deploy multiple ArcGIS Server / GeoEvent Server instances in a single site and rely on the Apache Kafka and ZooKeeper built into the GeoEvent Gateway to mitigate individual machine failures. But a reliability objective can also be addressed with an active / active approach using multiple independent instances of GeoEvent Server to build redundancy into a distributed solution for fault-tolerance.

There are advantages to going with the single site, multiple machines approach for resiliency. Specifically, when your solution relies on GeoEvent Server polling external web servers / web services for data (which you mentioned), we have observed that the GeoEvent Gateway reliably mitigates the failure of a single node – which had been the node polling for input – by allowing another node to "adopt" the input and begin handling the data polling activity. There are also significant system complexity and administration disadvantages to the "site" approach, which is why I recommend folks who are considering multiple machine, distributed architectures for real-time solutions work with their Esri Technical Advisor or contract with Esri Professional Services for consultation. Only after discussing specific objectives and weighing both the pros and cons can a recommendation be made as to which approach your solution ought to take. - RJ
Posted 04-15-2019 11:05 AM

BLOG
When someone asks you, "What time is it?", you are probably assuming he or she wants to know the local time where the two of you are right now. As I write this, the time now is Tuesday, March 12, 2019 at about 2:25 PM in Redlands, California, USA. Typically, we do not qualify our answers so explicitly. We say "It's 2 o'clock" and assume it's understood that this is the time right now in Redlands, California. But that is sort of like answering a query about length or distance by simply saying "36". Is that feet, meters, miles, or kilometers?

Last weekend, here in California, we set our clocks ahead one hour to honor daylight savings time (DST). California is now observing Pacific Daylight Time (PDT), which is equal to UTC-7:00 hours. When we specify the time at which an event was observed, we should include the time zone in which the observation is made, as well as whether or not the time reflects a local convention honoring daylight savings time.

When ArcGIS GeoEvent Server receives data for processing, event records usually include a date/time value with each observation. Often the date/time value is expressed as a string and does not specify the time zone in which the date/time is expressed or whether the value reflects a daylight savings time offset. These are sort of like the "units" (e.g. feet, meters, miles, or kilometers) which qualify a date/time value. The intent of this blog is to identify when GeoEvent Server assumes a date/time value is expressed in Coordinated Universal Time (UTC) versus when it is assumed that a date/time expresses a value consistent with the system's locale. We'll explore a couple of situations where this might be important and the steps you can take to configure how date/time values are handled and displayed.

Event data ingest should generally assume date/time values are expressed as UTC values

There are several reasons for this. In the interest of brevity, I'll simply note that GeoEvent Server is running in a "server" context. The assumption is that the server machine is not necessarily located in the same time zone as the sensors from which it is receiving data, and that clients interested in visualizing the data are likewise not necessarily in the same time zone as the server or the sensors. UTC is the time standard commonly used around the world. The world's timing centers have agreed to synchronize, or coordinate, their date/time values – hence the name Coordinated Universal Time.(1)

If you have ever used the ArcGIS REST Services Directory to examine the JSON representation of feature records which include a date/time field whose data type is esriFieldTypeDate, you have probably noticed that the value is not a string; it is a number, an epoch long integer representing the number of milliseconds since the UNIX Epoch (January 1, 1970, midnight). The default is to express the value in UTC.(2)(3)

When does GeoEvent Server assume the date/time values it receives are UTC values?

Out-of-the-box, GeoEvent Server supports the ISO 8601 standard for representing date/time values.(4) It is unusual, however, to find sensor data which expresses the date/time value "March 12, 2019, 2:25:30 pm PDT" as 2019-03-12T14:25:30-07:00. So when a GeoEvent Definition specifies that a particular attribute should be handled as a Date, inbound adapters used by GeoEvent Server inputs will compare received string values to see if they match one of a few commonly used date/time patterns.
For example, GeoEvent Server, out-of-the-box, will recognize the following date/time values as Date values:

Tue Mar 12 14:25:30 PDT 2019
03/12/2019 02:25:30 PM
03/12/2019 14:25:30
1552400730000

When one of the above date/time values is handled, and the input's Expected Date Format parameter does not specify a Java SimpleDateFormat expression / pattern, GeoEvent Server will assume the date/time value represents a Coordinated Universal Time (UTC) value.

When will GeoEvent Server assume a date/time value is expressed in the server machine's locale?

When a GeoEvent Server input is configured with a Java SimpleDateFormat expression / pattern, the assumption is that the input should convert date/time values it receives into an epoch long integer, but treat each value as a local time, not a UTC value. For example, if your event data represents its date/time values as "Mar 12 2019 14:25:30" and you configure a new Receive JSON on a REST Endpoint input to use the pattern matching expression MMM dd yyyy HH:mm:ss as its Expected Date Format property, then GeoEvent Server will assume the event record's date/time expresses a value consistent with the system's locale and will convert the date/time to the long integer value 1552425930000.

You can use the EpochConverter online utility to show equivalent date/time string values for this long integer value. The value 1552425930000 (expressed in epoch milliseconds) is equivalent to both the 12th of March, 2019, at 9:25 PM Greenwich Mean Time (GMT) and 2:25 PM Pacific Daylight Time (PDT). The utility's conversion notes that clocks in my time zone are currently seven hours behind GMT and that daylight savings time is currently being observed. You should note that while GMT and UTC are often used interchangeably, they are not the same.(5)

What if I have to use a SimpleDateFormat expression, because my date/time values are not in a commonly recognized format, but my client applications expect date/time values expressed as UTC values?

You have a couple of options. First, if you have the ability to work with your data provider, you could request that the date/time values sent to you specify a time zone as well as the month, day, year, hour, minute, second (etc.). For example, suppose the event data you want to process could be changed to specify "Mar 12 2019 14:25:30 GMT". This would enable you to configure a Receive JSON on a REST Endpoint input to use the pattern matching expression MMM dd yyyy HH:mm:ss zzz as its Expected Date Format property, since information on the time zone is now included in the date/time string. The input will convert the date/time string to 1552400730000, the long integer equivalent of the received date/time string value. Using the EpochConverter online utility to show the equivalent date/time string values for this long integer, you can see that the Date value GeoEvent Server is using is a GMT/UTC value.

If the data feed from your data provider cannot be modified, you can use GeoEvent Server to compute the proper UTC offset for the ingested "local" date/time value within a GeoEvent Service. Because GeoEvent Server handles Date attribute values as long integers, in epoch milliseconds, you can use a Field Calculator to add (or subtract) a number of milliseconds equal to the number of hours you need to offset a date/time value to change its representation from "local" time to UTC.
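A short Python sketch can make the difference between the two interpretations concrete. This is not GeoEvent Server code; it simply parses the same date/time string twice, once as UTC and once with the Pacific Daylight Time offset, and reproduces the two epoch values discussed above.

from datetime import datetime, timezone, timedelta

value = "Mar 12 2019 14:25:30"  # no time zone "units" included
naive = datetime.strptime(value, "%b %d %Y %H:%M:%S")

# Interpreted as UTC (what GeoEvent Server assumes for commonly recognized formats):
as_utc = naive.replace(tzinfo=timezone.utc)
print(int(as_utc.timestamp() * 1000))  # 1552400730000

# Interpreted as local Pacific Daylight Time, UTC-7:00 (what GeoEvent Server assumes
# when an Expected Date Format pattern such as MMM dd yyyy HH:mm:ss is configured
# on a server running in that locale):
as_pdt = naive.replace(tzinfo=timezone(timedelta(hours=-7)))
print(int(as_pdt.timestamp() * 1000))  # 1552425930000

# The difference is the local offset from UTC, comparable to what the Field
# Calculator's currentOffsetUTC() function, discussed below, returns on such a system:
print(int(as_utc.timestamp() * 1000) - int(as_pdt.timestamp() * 1000))  # -25200000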
The problem, for a long time, was that you had to use a hard-coded constant value in your Field Calculator's expression, which rendered your GeoEvent Service vulnerable twice a year to time changes if your community started and later stopped observing daylight savings time. Beginning with ArcGIS GeoEvent Server 10.5.1, the Field Calculator supports a new wrapper function that helps address this: currentOffsetUTC()

A Field Calculator, running within a GeoEvent Service on my local server, evaluates currentOffsetUTC() and returns the value -25200000, the millisecond difference between my local system's current date/time and UTC. Currently, here in California, we are observing Pacific Daylight Time (PDT), which is equal to UTC-7:00. Even though GeoEvent Server assumes date/time values such as "Mar 12 2019 14:25:30" (received without any time zone "units") represent local time values – because a pattern matching expression MMM dd yyyy HH:mm:ss must be used to interpret the received date/time strings – I was able to calculate a new date/time value using a dynamic offset and output a value which represents the received date/time as a UTC value. All I had to do was route the event record, with its attribute value ReportedDT (data type: Date), through a Field Calculator configured with the expression: ReportedDT + currentOffsetUTC()

How do I configure a web map to display local time rather than UTC time values?

When recommending that date/time values should generally be expressed as UTC values, a frequent complaint – once feature records updated by GeoEvent Server are visualized on a web map – is that the web map's pop-ups show the date/time values in UTC rather than local time. It is true that, generally, we do not want to assume that a server machine and sensor network are both located in the same time zone as the localized client applications querying the feature record data. That does not mean that folks in different time zones want to perform the mental arithmetic needed to convert a date/time value displayed by a web map's pop-up from UTC to their local time.

In the past I have recommended data administrators work around this issue using a Field Calculator to offset the date/time, as I've shown above, by a number of hours to "falsely" represent date/time values in their database as local time values. I say "falsely" because most map/feature services are not configured to use a specified time zone. For a long time it wasn't even possible to change the time zone a map/feature service used to represent its temporal data values. There are web pages in the ArcGIS REST API which still specify that feature services return date/time values only as epoch long integers whose UTC values represent the number of milliseconds since the UNIX Epoch (January 1, 1970, midnight). So even if a map/feature service is configured to use a specific time zone, we should not expect all client applications to honor the service's specification.

For now, let's assume our published feature service's JSON specification follows the default and client apps expect UTC values returned when they query the map/feature service. If we use GeoEvent Server to falsely offset the date/time values to local time, the data values in our geodatabase are effectively a lie. Sure, it is easy to say that all client applications have been localized, and assume all server machines, client applications, and reporting sensors are in one time zone; all we are trying to do is get a web map to stop displaying date/time values in UTC.
But there is a better way to handle this problem. Testing the latest public release (10.6.1) of the Enterprise portal web map and the ArcGIS Online web map, I found that pop-ups can be configured with custom expressions which dynamically calculate new values from existing feature record attributes. These new values can then be selected as the attributes to show in a web map's pop-up rather than the "raw" values from the feature service. Below are the basic steps necessary to accomplish this:

1. In your web map, from the Content tab, expand the feature layer's context menu and click Configure Pop-up.
2. On the lower portion of the Configure Pop-up panel, beneath Attribute Expressions, click Add.
3. Search the available functions for date functions and build an expression which converts the stored UTC value to local time.
4. Assign the new custom attribute a descriptive name (e.g. localDateTime) and save the attribute calculation.
5. You should now be able to select the dynamic attribute to display along with any other "raw" attributes from the feature layer.

References:
(1) UTC – Coordinated Universal Time
(2) ArcGIS for Developers | ArcGIS REST API
(3) ArcGIS for Developers | Common Data Types | Feature object
(4) World Wide Web Consortium | Date and Time Formats
(5) timeanddate.com - The Difference Between GMT and UTC
(6) ArcGIS for Developers | ArcGIS REST API | Enterprise Administration | Server | Service Types
Posted 03-13-2019 06:24 PM

POST
Hello Cami – There are a few things I might suggest you try as you work to troubleshoot your issue:

1. Reduce your geofence synchronization rule's polling frequency. Try setting the synchronization to run once every 60 seconds rather than once every second. It’s understandable that you want the polygon buffers created around designated points to be imported as geofences as quickly as possible so that polling on your ex2 feature dataset can take immediate advantage of the new geofences. A frequency of 1 second is too aggressive for geofence synchronization, however. The synchronization rule has a simple timer and, every time the timer expires, a REST request is made against the feature service to retrieve its polygon feature records and import them to update the GeoEvent Server catalog of registered geofences. These requests can be relatively expensive and, depending on your network, your database server, and other factors, it may realistically take longer than 1000 milliseconds to retrieve the records, update the geofence catalog, and advise the various spatial processors in any GeoEvent Services you have configured that the geofence cache has been updated. (Individual processors cache some of the information about newly registered geofences so that they don’t have to each interact with the geofence catalog every time an event record is received for processing. This "chatter" between the geofence catalog and your GeoEvent Services tends not to take a lot of time, but it does take some non-zero amount of time.)

2. When asserting the geofence synchronization changes, watch your system clock and try to click the ‘Synchronize’ button more-or-less as your system clock advances to the next whole minute. I’ve found that, when debugging, it helps to know when the geofence synchronization rule is firing. Since this is an asynchronous timer, and GeoEvent Manager doesn’t provide you with any visual indication that the synchronization’s timer is about to expire and a new request is about to be made to update geofences, if I set the synchronization interval to 60 seconds and start the timer (by clicking ‘Synchronize’) just as my system clock is advancing from 57, 58, 59, to 60 … I have a pretty good idea of when synchronizations will occur. You can then refresh your web page displaying known geofences and check whether new geofences are in fact created/updated when your GeoEvent Service processes feature records polled from the ex1 feature dataset, discovers an attribute value set to “true”, and creates a buffer around the event record's geometry.

3. As part of your debugging, stop the input which is polling the ex1 feature dataset, wait for a synchronization to occur, then start the input and allow it to conduct exactly one poll. Something to keep in mind is that this entire workflow has inherent race conditions with asynchronous timers that you don’t have a lot of visibility into or control over. If feature records are polled from the ex1 feature dataset every 20 seconds, there is a chance that the GeoEvent Service is busy creating the buffer and making a request to update polygon features in the buffer feature dataset at the same time the synchronization rule polls to request feature records from that same dataset (to update geofences in the catalog). If your debugging allows you to know that a synchronization will fire at the top of every minute, and you wait perhaps 10 seconds to make sure the synchronization rule has a chance to complete before you start the ex1 input to poll features one time (stopping the input after you confirm that all 25 – or however many – feature records were polled), you can quickly query the buffer feature service yourself via the ArcGIS REST Services Directory to confirm that the expected polygon feature records were actually created/updated, and then wait the balance of the minute for the next geofence synchronization to fire so that you can check whether the polygon features you know to exist were successfully imported as geofences. Notice that nowhere in this debugging step did I start the input which is polling the ex2 feature dataset. That’s yet another asynchronous timer that you must deal with when debugging. On one hand, I want to make sure that all the pre-requisite steps have occurred successfully – that all ex1 features are being polled, expected buffer polygons are created/updated, and expected geofences established – before I start ex2 polling for additional features to determine whether a spatial coincidence exists between some other set of feature records and synchronized geofences. On the other hand ...

4. Make sure that the input responsible for polling the ex2 feature dataset actually ingests all of the feature records exposed by that feature service. This was the heart of your question. It’s easy to focus on expected field calculations which are not occurring as you expect … because a filter configured to identify GEOMETRY INSIDE ANY buffer/.* is not providing any event records to a Field Calculator … even when you are reasonably sure geofences the filter should be using exist. Regardless of whether the pre-requisite steps above have occurred, you should be able to start the input responsible for polling the ex2 feature dataset and confirm that every 10 seconds (or whatever) the input actually polls all feature records exposed by that feature service. This should happen regardless of whether the ex1 input is running or stopped, or whether the synchronization rule is updating geofences in the geofence catalog the way you expect. You are correct that if the input is configured with ‘Get Incremental Updates’ set to ‘No’ and the ‘Query Definition’ beneath Advanced has its default ‘1=1’, then all feature records in the feature dataset should be polled by the input every polling interval for processing. You can use the event counter in GeoEvent Manager to confirm the number of event records adapted by the ex2 input, and even create an output to log the event records created by the input as CSV Text or JSON to a system file, immediately after the input, before they are sent to a spatial filter or other processor. You want to determine that the input is indeed polling feature records from ex2 as you expect, and that you can examine the data for these event records in a system *.json or *.txt file, before you worry too much about what processing you want to perform on these feature records.

5. Make sure your geofences do not have an associated ‘start time’ or ‘end time’. This is an easy thing to overlook. As you debug to confirm that geofences are being created from the buffer feature records, hover over the geofences in GeoEvent Manager’s list. Make sure the start time and end time fields are empty/blank. Even if an event record’s geometry intersects a geofence spatially, if the date/time of the event record does not intersect the geofence temporally, your ex2 event records will not pass through your filter. Think of it like this – an event record must be proven to intersect (or fall inside) a geofence for the expression GEOMETRY INSIDE ANY buffer/.* to return TRUE. If the filter cannot find any geofence whose start/end time range intersects the date/time of the event record being tested, the event record’s location / geometry does not matter. There are no temporally relevant geofences, so it cannot be proven that the event record’s geometry is inside any particular geofence, and therefore no event records pass through the filter.

I encourage you to open a support incident with Esri Technical Support so that an analyst can work with you to examine your configured input(s), GeoEvent Services, and geofence synchronization. There's potentially more to the workflow you've described than is apparent on the surface. I hope the information above is helpful in finding the problem. Best Regards – RJ
Posted 01-09-2019 07:45 PM

POST
Hey Nate, Yes, if you would please submit a technical support incident, that will help get a bug report formally documented. If you have JSON data we can send to a Receive JSON input that shows reproducibility, that would also be a huge help. - RJ
Posted 10-24-2018 02:53 PM

POST
Hello Nathan - I've not seen the error you are reporting before. If the inbound connector (input) is able to receive and adapt data to create an event record, and these event records process through a GeoEvent Service, and data can be logged as text to a CSV file ... then my understanding is that the event records have undergone several Avro serialization / deserialization cycles. We cannot tell, from the screen capture you provided, whether the INFO message being logged by the ges.messaging.jms encoder is in response to a problem encountered by an inbound connector's transport or adapter, whether the message is being logged because a processor you've configured is unable to handle event records it has received, or whether – as you suggest – the error is coming from a failure to add or update feature records through a feature service.

My first step would be to remove any processors or filters from my GeoEvent Service and see if I can successfully ingest event records and log their data as JSON in a system file. I prefer the Write to a JSON File output, as the JSON format supports hierarchy and multicardinality that delimited text (e.g. CSV) does not. Also, you’re probably aware that event records must be “simplified” to a flat structure, without any hierarchy or multicardinality, before the event records can be sent to an output tasked with adding or updating feature records through a feature service. This step, I think, will help us figure out whether all of the data being sent to GeoEvent Server is being processed through to an output, or whether some portion of the data is being rejected on the inbound side.

Please open an incident with Esri Technical Support so that an analyst can be assigned to work with you and track the steps being taken to address the issue. If this ends up being a bug in an input, processor, or output where a certain type of data is not being handled properly, the product team will need information from technical support before we start work to identify a root cause. Best Regards – RJ
Posted 10-24-2018 11:42 AM

POST
Anna, GeoEvent Server is fundamentally RESTful, by which I mean the most reliable integrations tend to leverage inputs where GeoEvent Server hosts a REST endpoint to which data providers can HTTP/POST data as JSON, geoJSON, XML, or delimited text ... or inputs which periodically poll an external web server / service which sends event records back formatted as JSON, geoJSON, or XML. Both of these types of inputs are available out-of-the-box, without any custom development (e.g. programming).

When Robert suggests that you'll still need to develop something to relay data to the GeoEvent Server, I suspect that he is thinking of a RESTful API. If a company whose business is deploying sensors for system monitoring wants to make it easy to integrate with ArcGIS Enterprise and GeoEvent Server, they'll develop a REST API which either allows an external web client (e.g. GeoEvent Server) to poll an endpoint, or provides some way to subscribe and begin receiving periodic data pushes from the API to a REST endpoint. - RJ
Posted 10-23-2018 10:01 AM

BLOG
One of the first contributions I made to the GeoEvent space on GeoNet was a blog titled Understanding GeoEvent Definitions (https://community.esri.com/community/gis/enterprise-gis/geoevent/blog/2015/06/05/understanding-geoevent-definitions). Technical workshops and best practice discussions have for years recommended that, when you want to use data from event records to add or update feature records in a geodatabase, you start by importing a GeoEvent Definition from the targeted feature service. This allows you to explicitly map an event record’s structure as the last processing step before an add / update feature output. The field mapping guarantees that service requests made by GeoEvent Server match the schema expected by the feature service. In this blog I would like to expand upon this recommendation and introduce flexibility you may not realize you have when working with feature records in both feature services and stream services. Let's begin by considering a relatively simple GeoEvent Definition, SampleRecord, describing the structure of a "sample" event record.

Different types of services will have different schema

I could use GeoEvent Manager and the event definition above to publish several different types of services:

- A traditional feature service using my GIS Server's managed geodatabase (a relational database).
- A hosted feature service using a spatiotemporal big data store configured with my ArcGIS Enterprise.
- A stream service without any feature record persistence and no associated geodatabase.

Following the best practice recommendation, a Field Mapper Processor should be used to explicitly map an event record's structure and ensure that event records routed to a GeoEvent Server output match the schema expected by the service. A GeoEvent Service configured this way can be used to successfully store feature records in my GIS Server's managed geodatabase. The same feature records can be stored in my ArcGIS Enterprise's spatiotemporal big data store, with copies of the feature records broadcast by a stream service.

But if you compare the feature records broadcast by the stream service with feature records queried from the different feature services and data stores, you should notice some subtle differences; the schema of the various feature records is not the same. You might notice that the stream service's geometry is "complete". It has both the coordinate values for the point geometry and the geometry's spatial reference, but this is not what I want to highlight. The feature services also have the spatial reference; they just record it as part of the overall service's metadata rather than including the spatial reference as part of each feature record. What I want to highlight are the attribute values in the relational data store's feature record and the spatiotemporal big data store's feature record which are not in the stream service's feature record. These additional identifier values are created and maintained by the geodatabase, and you cannot use GeoEvent Server to update them. Recall that the SampleRecord GeoEvent Definition introduced at the top of this article was successfully used to add and update feature records in the different data stores.
If new GeoEvent Definitions were imported from each feature service, however, the imported event definitions would reflect the actual schema of their respective feature classes. Since the highlighted attribute fields are created and maintained by the geodatabase and cannot be updated, the best practice recommendation is to delete them from the imported GeoEvent Definitions. Even if event records you ingest for processing happen to have string values you think appropriate to use as a globalid for a spatiotemporal feature record, altering the database's assigned identifier would be very bad.

But if I delete the fields from the imported GeoEvent Definitions ...

Exactly. The simplest way to convey the best practice recommendation to import a GeoEvent Definition from a feature service is to say that this ensures event records mapped to the imported event definition will exactly match the structure expected by the feature service. In service-oriented architecture (SOA) terminology this is "honoring the service's contract." Maybe you did not know that the identifier fields could be safely deleted from the imported GeoEvent Definition, and so chose to keep them, but leave them unmapped when configuring your final Field Mapper Processor. The processor will assign null values to any unmapped attribute fields, and the feature service knows to ignore attempts to update values that are created and maintained by the geodatabase, so there is really no harm in retaining the unneeded fields. But unless you want a Field Mapper Processor to place a null value in an attribute field, it is best not to leave attribute fields unmapped.

Is it OK to use a partial GeoEvent Definition when adding or updating feature records?

Yes, though you generally only do this when updating existing feature records, not when adding new feature records. Say, for example, you had published a feature service which specified the codeword attribute could not be null. While such a restriction cannot be placed on a feature service published using GeoEvent Manager, you could use ArcGIS Desktop or ArcGIS Pro to place a nullable: false restriction on a feature class's attribute field, specifying that the field may not be assigned a null value. If you were using GeoEvent Server to add new feature records to the feature class, left one or more attribute fields unmapped in the final Field Mapper, and those attribute values are not allowed to be null, requests from GeoEvent Server would be rejected by the feature service -- the add record request does not include sufficient data to satisfy all the restrictions specified by the feature service. Feature services which have nullable: false restrictions on attribute fields normally also specify a default value to use when a data value is not specified. Assuming the event record you were processing did not have a valid codeword, you could simply delete that attribute field from the Target GeoEvent Definition used by your final Field Mapper and allow the feature service to supply a default value for the missing, yet required, attribute. If the feature service's specification does not include default values for required fields, well then, the processing you do within your GeoEvent Service will have to come up with a codeword value.
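To illustrate, here is a hedged sketch of an add request that simply omits the required-but-defaulted attribute. The service URL, token, and coordinates are placeholders, and the sketch assumes the feature service defines a default value for codeword, so the request succeeds even though codeword is absent from the payload.

import json
import requests

# Placeholder service URL; substitute your own feature layer endpoint.
url = ("https://your-machine.domain/server/rest/services/"
       "SampleRecord/FeatureServer/0/addFeatures")

# No "codeword" attribute: the feature service is assumed to supply
# its configured default value for that non-nullable field.
feature = [{
    "geometry": {"x": -117.19, "y": 34.05,
                 "spatialReference": {"wkid": 4326}},
    "attributes": {"track_id": "VWY-86-ABU", "reported_dt": 1538094740000},
}]

response = requests.post(url, data={
    "f": "json",
    "token": "xxxxxxxx",          # placeholder token
    "rollbackOnFailure": "true",
    "features": json.dumps(feature),
})
print(response.json())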
The point is, if you do not want to attempt to update a particular attribute value in a feature record - either because you do not have a meaningful value, or you do not want to push a null value into the feature record - you can simply not include that attribute field in the structure or schema of event records you route to an output.

Examples where feature record flexibility might be useful

I have worked with customers who use feature services to compile attribute data collected from different sensors. One type of sensor might provide barometric pressure and relative humidity. Another type of sensor might provide ambient temperature, and yet another a measure of the amount of rainfall. No single sensor is supplying all the weather data, so no single event record will have all the attribute values you want to include in a single feature record. Presumably, the different sensor types are all associated with a single weather station, whose name could be used as the TRACK_ID for adding and updating feature records. We can therefore create partial GeoEvent Definitions supporting each type of sensor and update only the specific attribute fields of a feature record with the data provided by a particular type of sensor installed at the weather station (see the sketch after this section).

Another example might be when data records arrive with different frequency. Consider an automated vehicle location (AVL) solution which receives data every two minutes reporting a vehicle's last observed position and speed. A different data feed might provide information for that same vehicle when the vehicle's brakes are pressed particularly hard (signaling, perhaps, an aggressive driving incident). You do not receive "hard brake" event records as frequently as you receive "vehicle position" event records, and you do not want to push null values for speed or location into a feature record whenever an event record signaling aggressive driving is received, so you prepare a partial GeoEvent Definition for the "hard brake" event records and only update that portion of a vehicle's feature record when that type of data is received.

A third example where using a GeoEvent Definition which either deliberately includes or excludes an attribute value may be helpful is described in the thread Find new entries when streaming real-time data.

Are stream services as flexible as feature services?

They did not use to be, no, but changes made to stream services in the ArcGIS 10.6 release relaxed their event record schema requirements. You should still use a Field Mapper Processor to make sure that the spelling and case sensitivity of your event record's attribute fields match those in the stream service's specification. Stream services cannot transfer an attribute value from an event field named codeWord into a field named codeword, for example, but you can now send event records whose structure is a subset of the stream service's schema to a Send Features to a Stream Service output. The output will attempt to handle any necessary data conversions, broadcasting a long integer value when a short integer is received, or broadcasting a string equivalent when a date value is received. The output will also omit any attribute value(s) from the feature record(s) it broadcasts when it does not receive a data value for a particular attribute.
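Returning to the weather station example, here is a hedged sketch of two partial updates against the same feature record, each touching a different subset of attributes. The service URL, token, field names, and values are all invented, and the sketch assumes the object identifier for the station's feature record has already been discovered.

import json
import requests

# Placeholder service URL; substitute your own feature layer endpoint.
url = ("https://your-machine.domain/server/rest/services/"
       "WeatherStations/FeatureServer/0/updateFeatures")

def update(partial_attributes):
    """POST a partial update; attributes omitted here are left untouched."""
    feature = [{"attributes": partial_attributes}]
    return requests.post(url, data={
        "f": "json",
        "token": "xxxxxxxx",  # placeholder token
        "features": json.dumps(feature),
    }).json()

# Barometer / hygrometer sensor reports pressure and humidity only ...
print(update({"objectid": 17, "pressure_mb": 1009.6, "rel_humidity": 62.0}))

# ... while a separate thermometer reports temperature only.
print(update({"objectid": 17, "ambient_temp_c": 21.4}))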
Hopefully the additional detail and examples in this discussion illustrate the flexibility you have when working with feature records in both feature services and stream services, and help clarify the best practice recommendation to use a Field Mapper Processor to ensure the structure of event records sent to either a feature service or stream service output has a schema compatible with the service's specification. You can use partial GeoEvent Definitions which model a subset of a feature record's complete schema to avoid pushing null values into a data record and/or to avoid attempting to update attribute values you do not want to update (or are not allowed to update). - RJ
Posted 10-03-2018
|
POST
|
Hello Brian - I do not have an ArcGIS Enterprise I can easily configure with SQL Express ... so I'll have to show you the debugging steps necessary to generate log messages which will (hopefully) tell us what the issue is. It may be that the query parameters generated by GeoEvent Server's output and passed with the HTTP request to the feature service are not compatible with the SQL Express database. But before we go there, let's check to make sure the date/time values being written into the feature record in the database are the values we expect.

To set up this test I created a feature service whose schema matches the following GeoEvent Definition. Notice that my attribute field reported_dt is a Date. If the field is not a Date, the Field Calculator will not write the correct value into the field when evaluating its expression receivedTime(), and GeoEvent Server's output will not be able to use the field when requesting features be deleted. Here's the GeoEvent Service I configured to ingest data with a String/TrackID and build a Geometry from two Double values, Latitude and Longitude. Notice the Field Mapper is preparing my event structure to match the structure and attributes of feature records expected by the feature class. The field reported_dt is left unmapped because the Field Calculator is writing the value receivedTime() into that field. I could also have used the Field Mapper to map the event property $RECEIVED_TIME into that field and elected not to use the Field Calculator at all - but you indicated in your post that you wanted to use the Field Calculator's function to obtain the event record's received date/time.

At 5:32 (and 20 seconds) by my server's clock, I sent an event record to GeoEvent Server and observed the following feature record created in my feature class:

"features": [
{
"attributes": {
"oid": 401,
"track_id": "VWY-86-ABU",
"reported_dt": 1538094740000
}
}
]

Note that the date/time recorded in the feature class is not my server's local time; it is an epoch long integer whose value is UTC / GMT. This is important because whatever value is passed as part of a deleteFeatures request will have to reflect UTC time, not my server's local time. (The value 1538094740000 is 12:32:20 am the next day, seven hours ahead of my current local time, 5:32:20 pm...)

Your configuration of your output looks correct. For my test I chose to have the output delete any feature records which are older than two minutes, attempting to delete "old" records every 30 seconds.

If I turn DEBUG logging on for GeoEvent Server, every 30 seconds I should see a block of log messages similar to the following. Unfortunately the information we want to capture is logged as part of the HTTP Client ... and there is a lot of HTTP traffic logged when you turn on DEBUG logging for that component, so after the listing I'll try to bullet for you the information you need to look for:

2018-09-26T17:32:53,495 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord-UpdateFeature][0][FeatureServer] | Http | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Adding parameter (f/json).
2018-09-26T17:32:53,495 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord-UpdateFeature][0][FeatureServer] | Http | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Adding parameter (token/xxxxxxxx).
2018-09-26T17:32:53,495 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord-UpdateFeature][0][FeatureServer] | Http | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Adding parameter (rollbackOnFailure/true).
2018-09-26T17:32:53,495 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord-UpdateFeature][0][FeatureServer] | Http | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Adding parameter (where/reported_dt < timestamp '2018-09-27 00:30:53').
2018-09-26T17:32:53,498 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | Http | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Executing following request: POST https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures HTTP/1.1
2018-09-26T17:32:53,499 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | RequestAddCookies | 52 - com.esri.ges.framework.httpclient - 10.6.0 | CookieSpec selected: default
2018-09-26T17:32:53,499 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | RequestAuthCache | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Auth cache not set in the context
2018-09-26T17:32:53,499 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | PoolingHttpClientConnectionManager | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Connection request: [route: {s}->https://your-machine.domain:443][total kept alive: 0; route allocated: 0 of 2; total allocated: 0 of 20]
2018-09-26T17:32:53,500 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | PoolingHttpClientConnectionManager | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Connection leased: [id: 43][route: {s}->https://your-machine.domain:443][total kept alive: 0; route allocated: 1 of 2; total allocated: 1 of 20]
2018-09-26T17:32:53,500 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | MainClientExec | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Opening connection {s}->https://your-machine.domain:443
2018-09-26T17:32:53,500 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | DefaultHttpClientConnectionOperator | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Connecting to YOUR_MACHINE.DOMAIN/10.27.102.67:443
2018-09-26T17:32:53,501 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Connecting socket to YOUR_MACHINE.DOMAIN/10.27.102.67:443 with timeout 0
2018-09-26T17:32:53,502 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Enabled protocols: [TLSv1, TLSv1.1, TLSv1.2]
2018-09-26T17:32:53,502 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Enabled cipher suites:[TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, ...
2018-09-26T17:32:53,502 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Starting handshake
2018-09-26T17:32:53,513 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Secure session established
2018-09-26T17:32:53,514 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | negotiated protocol: TLSv1.2
2018-09-26T17:32:53,514 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | negotiated cipher suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
2018-09-26T17:32:53,514 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | peer principal: CN=your-machine.domain, OU=Business Development, O=Esri, L=Redlands, ST=California, C=US
2018-09-26T17:32:53,514 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | peer alternative names: [your-machine.domain]
2018-09-26T17:32:53,514 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | SSLConnectionSocketFactory | 52 - com.esri.ges.framework.httpclient - 10.6.0 | issuer principal: CN=ESRI Enterprise Root, DC=empty, DC=local
2018-09-26T17:32:53,514 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | DefaultHttpClientConnectionOperator | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Connection established 10.27.102.67:60152<->10.27.102.67:443
2018-09-26T17:32:53,514 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | MainClientExec | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Executing request POST /server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures HTTP/1.1
2018-09-26T17:32:53,514 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | MainClientExec | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Target auth state: UNCHALLENGED
2018-09-26T17:32:53,515 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | MainClientExec | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Proxy auth state: UNCHALLENGED
2018-09-26T17:32:53,515 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> POST /server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures HTTP/1.1
2018-09-26T17:32:53,515 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> Content-Type: application/x-www-form-urlencoded
2018-09-26T17:32:53,515 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> charset: utf-8
2018-09-26T17:32:53,515 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> Referer: https://your-machine.domain:6143/geoevent/admin/datastores/agsconnection/default
2018-09-26T17:32:53,515 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> User-Agent: GeoEvent Server 10.6.0
2018-09-26T17:32:53,515 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> Content-Length: 314
2018-09-26T17:32:53,515 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> Host: your-machine.domain
2018-09-26T17:32:53,515 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> Connection: Keep-Alive
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> Accept-Encoding: gzip,deflate
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "POST /server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures HTTP/1.1[\r][\n]"
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "Content-Type: application/x-www-form-urlencoded[\r][\n]"
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "charset: utf-8[\r][\n]"
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "Referer: https://your-machine.domain:6143/geoevent/admin/datastores/agsconnection/default[\r][\n]"
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "User-Agent: GeoEvent Server 10.6.0[\r][\n]"
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "Content-Length: 314[\r][\n]"
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "Host: your-machine.domain[\r][\n]"
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "Connection: Keep-Alive[\r][\n]"
2018-09-26T17:32:53,516 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "Accept-Encoding: gzip,deflate[\r][\n]"
2018-09-26T17:32:53,517 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "[\r][\n]"
2018-09-26T17:32:53,517 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 >> "f=json&token=xxxxxxxx&rollbackOnFailure=true&where=reported_dt+%3C+timestamp+%272018-09-27+00%3A30%3A53%27"
2018-09-26T17:32:53,630 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "HTTP/1.1 200 OK[\r][\n]"
2018-09-26T17:32:53,630 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "Cache-Control: private, must-revalidate, max-age=0[\r][\n]"
2018-09-26T17:32:53,630 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "Content-Type: application/json;charset=UTF-8[\r][\n]"
2018-09-26T17:32:53,630 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "ETag: bd8f0abc[\r][\n]"
2018-09-26T17:32:53,630 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "Vary: Origin[\r][\n]"
2018-09-26T17:32:53,631 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "Server: Microsoft-IIS/10.0[\r][\n]"
2018-09-26T17:32:53,631 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "Set-Cookie: AGS_ROLES=xxxxxxxx; Expires=Thu, 27-Sep-2018 00:33:53 GMT; Path=/server/rest; HttpOnly[\r][\n]"
2018-09-26T17:32:53,631 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "Server: [\r][\n]"
2018-09-26T17:32:53,631 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "X-AspNet-Version: 4.0.30319[\r][\n]"
2018-09-26T17:32:53,631 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "X-Powered-By: ASP.NET[\r][\n]"
2018-09-26T17:32:53,631 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "Date: Thu, 27 Sep 2018 00:32:53 GMT[\r][\n]"
2018-09-26T17:32:53,632 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "Content-Length: 16[\r][\n]"
2018-09-26T17:32:53,632 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "[\r][\n]"
2018-09-26T17:32:53,632 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | wire | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << "{"success":true}"
2018-09-26T17:32:53,632 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << HTTP/1.1 200 OK
2018-09-26T17:32:53,632 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << Cache-Control: private, must-revalidate, max-age=0
2018-09-26T17:32:53,632 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << Content-Type: application/json;charset=UTF-8
2018-09-26T17:32:53,632 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << ETag: bd8f0abc
2018-09-26T17:32:53,632 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << Vary: Origin
2018-09-26T17:32:53,632 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << Server: Microsoft-IIS/10.0
2018-09-26T17:32:53,633 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << Set-Cookie: AGS_ROLES=xxxxxxxx; Expires=Thu, 27-Sep-2018 00:33:53 GMT; Path=/server/rest; HttpOnly
2018-09-26T17:32:53,633 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << Server:
2018-09-26T17:32:53,633 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << X-AspNet-Version: 4.0.30319
2018-09-26T17:32:53,633 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << X-Powered-By: ASP.NET
2018-09-26T17:32:53,633 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << Date: Thu, 27 Sep 2018 00:32:53 GMT
2018-09-26T17:32:53,633 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | headers | 52 - com.esri.ges.framework.httpclient - 10.6.0 | http-outgoing-43 << Content-Length: 16
2018-09-26T17:32:53,633 | DEBUG | HttpRequest Worker Thread: https://your-machine.domain/server/rest/services/SampleRecord-UpdateFeature/FeatureServer/0/deleteFeatures | MainClientExec | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Connection can be kept alive indefinitely
2018-09-26T17:32:53,634 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord-UpdateFeature][0][FeatureServer] | PoolingHttpClientConnectionManager | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Connection [id: 43][route: {s}->https://your-machine.domain:443] can be kept alive indefinitely
2018-09-26T17:32:53,635 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord-UpdateFeature][0][FeatureServer] | PoolingHttpClientConnectionManager | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Connection released: [id: 43][route: {s}->https://your-machine.domain:443][total kept alive: 1; route allocated: 1 of 2; total allocated: 1 of 20]
2018-09-26T17:32:53,635 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord-UpdateFeature][0][FeatureServer] | Http | 52 - com.esri.ges.framework.httpclient - 10.6.0 | Got response from HTTP request: {"success":true}.
2018-09-26T17:32:53,635 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord-UpdateFeature][0][FeatureServer] | FeatureServiceOutboundTransport | 77 - com.esri.ges.framework.transport.featureservice-transport - 10.6.0 | Response was {"success":true}.

Keywords you can search for to identify the block of messages above are: /rest/services/ (your service's name) /FeatureServer/ (layer index) /deleteFeatures

I chose to open the karaf.log file in a text editor (Notepad++) and search the content using the regular expression where.*reported_dt to quickly locate lines which include the SQL expression for my feature record's date/time field name.

In the large block of DEBUG messages above, the fourth message shows a parameter being added to the HTTP request; as you scroll to the right, you'll find: Adding parameter (where/reported_dt < timestamp '2018-09-27 00:30:53'). Notice that the message was logged at 2018-09-26T17:32:53 local time, but the query parameter's value is seven hours ahead (UTC) and two minutes earlier (00:30:53 rather than 00:32:53). Illustrating this differently, using the search results from Notepad++, you should expect to see "Adding parameter" logged by the HTTP Client every 30 seconds as it prepares another request to send to the feature service's /deleteFeatures endpoint.

The bulk of the DEBUG log messages in the listing above are a worker thread building out the HTTP request. What you really care about from this section is the wire message carrying the request body. Scrolling to the right, past the f=json and the token=xxxxxxxx, you'll see an HTTP-encoded WHERE clause which specifies the timestamp parameter: where=reported_dt+%3C+timestamp+%272018-09-27+00%3A30%3A53%27

The feature service is going to receive a request which contains this literal SQL. If this WHERE clause does not match what SQL Express is expecting (since that is the database you are using), the request is probably going to fail. The final two DEBUG messages show that both the HTTP Client and the feature service outbound transport see the executed request returning {"success":true} for my test. You might see errors logged by ArcGIS Server, or by your database engine ... but I'm not sure how to illustrate those as I'm not able to reproduce the issue using the PostgreSQL geodatabase I've deployed as part of my ArcGIS Enterprise.

If you are able to determine that the date/time values written into each feature record are correct, I would recommend changing your GeoEvent Server output's delete interval from 20 seconds to 300 seconds so the queries to delete features occur every five minutes. If you can capture DEBUG log messages which show incorrect values for the query parameters and SQL being generated by GeoEvent Server, please open an incident with Esri Tech Support so that they can work to reproduce the issue using SQL Express.
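If you want to sanity check the UTC conversion yourself, here is a minimal sketch using only the Python standard library. The epoch value is the one from the feature record above; the where clause it builds mirrors the shape of the logged parameter, not necessarily the exact SQL your database expects.

from datetime import datetime, timedelta, timezone

# The epoch value (milliseconds) observed in the feature record above;
# an epoch value is always UTC, regardless of server-local time.
reported_dt = 1538094740000
print(datetime.fromtimestamp(reported_dt / 1000, tz=timezone.utc))

# A two-minute-old cutoff like the one in the logged where clause,
# formatted the way the DEBUG message shows it.
cutoff = datetime.now(timezone.utc) - timedelta(minutes=2)
where = f"reported_dt < timestamp '{cutoff:%Y-%m-%d %H:%M:%S}'"
print(where)  # e.g. reported_dt < timestamp '2018-09-27 00:30:53'

Hope this information is helpful -- RJ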
Posted 09-26-2018
|
POST
|
Braulio Galvez - There was a certificate issue on the server geoeventsample1.esri.com which has been addressed. Are you able to discover the following services at the ArcGIS REST Services Directory endpoint https://geoeventsample1.esri.com:6443/arcgis/rest/services ?

FAAStream (StreamServer)
LABus (StreamServer)
NYCMonitoredVehicleJourney (StreamServer)
SeattleBus (StreamServer)
WashingtonMetroBuses (StreamServer)
WorldSatellites (StreamServer)

If you navigate to a stream service's subscription page (e.g. Home > services > LABus > subscribe) and then click the 'Subscribe' button, are you seeing feature records broadcast by the stream service? I've noticed that a few of the stream services, like NYCMonitoredVehicleJourney, are not broadcasting data as frequently as others. So while it may look like a stream service is "dead", if you wait a minute or so feature records do eventually show up in the subscription page's scrolling text display. The velocity for FAAStream, LABus, and WorldSatellites is pretty high; I'm seeing data broadcast immediately when I subscribe to any of those stream services.
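If it helps, a quick scripted check against the services directory should list the same services. This is a hedged sketch: it assumes the directory honors the standard f=json query parameter and that the server's certificate validates on your machine.

import requests

# Query the ArcGIS REST Services Directory for its service catalog.
url = "https://geoeventsample1.esri.com:6443/arcgis/rest/services"
catalog = requests.get(url, params={"f": "json"}).json()

for service in catalog.get("services", []):
    print(service["name"], service["type"])  # e.g. LABus StreamServer

- RJ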
Posted 09-25-2018
|
BLOG
|
The GeoEvent Server team maintains sample servers which expose both simulated and live data via stream services. For this write-up I will use publicly available services from the following ArcGIS REST Services Directory: https://geoeventsample1.esri.com:6443/arcgis/rest/services

This write-up assumes you have set up a base ArcGIS Enterprise and have included ArcGIS GeoEvent Server as an additional server role in your solution architecture. I will use a deployment which has the base ArcGIS Enterprise and GeoEvent Server installed on a single machine.

Your goal is to receive feature records, formatted as Esri Feature JSON, from an ArcGIS Server stream service. You could, of course, simply add the stream service to an ArcGIS Enterprise portal web map as a stream layer. For this write-up, however, we will look at the steps a custom client must perform to discover the WebSocket associated with a stream service and subscribe to begin receiving data broadcast by the service.

Stream Service Discovery

It is important to recognize that the GIS server hosting a stream service may be on a different server machine than GeoEvent Server. A stream service is discoverable via the ArcGIS Server REST Services Directory, but the WebSocket used to broadcast feature records runs within the JVM (Java Virtual Machine) used to run GeoEvent Server. If your ArcGIS Enterprise portal and GeoEvent Server have been deployed on separate machines, client applications will need to be able to access both servers to discover the stream service and subscribe to the stream service's WebSocket.

If you browse to the ArcGIS REST Services Directory mentioned above you should see a list of available services. Let's examine how a client application might subscribe to the LABus stream service. First, the client will need to acquire a token which it will append to its request to subscribe to the stream service's WebSocket. The WebSocket's base endpoint is shown on the stream service's properties page; the token you need is included in the stream service's JSON specification.

Click the LABus stream service to open the service's properties page.
In the upper-left corner of the LABus properties page, click the JSON link to open the stream service's JSON specification.
Scroll to the bottom of the LABus stream service's JSON specification page and locate the stream service's subscription token.

Client applications will need to construct a subscription request which includes both the WebSocket URL and the stream service's subscription token as a query parameter. The format of the request is illustrated below; make sure to include subscribe in the request: wss://geoeventsample1.esri.com:6143/arcgis/ws/services/LABus/StreamServer/subscribe?token=some_value

Client Subscription Examples

The website websocket.org offers a connection test you can use to verify the subscription request you expect your client application will need to construct. Browse to http://websocket.org and select DEMOS > Echo Test from the menu. Paste the subscription request, with the stream service's WebSocket URL and token, into the Location field and click Connect. The websocket.org client should be able to reach the GeoEvent Server sample server and successfully subscribe to the service's WebSocket. Esri feature records for the Los Angeles Metro buses will be displayed in the Log window. [Screenshots: websocket.org homepage; websocket.org Echo Test]
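If you prefer to verify the subscription from a script, here is a minimal sketch using the third-party websocket-client Python package. The token value is a placeholder you would replace with the subscription token copied from the service's JSON specification.

# Requires the third-party "websocket-client" package
# (pip install websocket-client).
from websocket import create_connection

# Placeholder token; copy the real subscription token from the
# stream service's JSON specification page.
url = ("wss://geoeventsample1.esri.com:6143/arcgis/ws/services/"
       "LABus/StreamServer/subscribe?token=some_value")

ws = create_connection(url)
try:
    for _ in range(5):        # print the first five feature records
        print(ws.recv())      # each message is Esri Feature JSON
finally:
    ws.close()

You can also configure an input connector in GeoEvent Server to subscribe to the LABus stream service.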
Log in to GeoEvent Manager.
Add a new Subscribe to an External WebSocket for JSON input and enter a name for the input.
Paste the constructed subscription request into the Remote server WebSocket URI property.
Allow the input to create a GeoEvent Definition for you.
Do not configure the input to use event attribute values to build a geometry. The records being broadcast by the stream service are Esri feature records, formatted as Esri Feature JSON, which include attributes and geometry as separate values in an event record hierarchy.
Save the new input and navigate to the Monitor page in GeoEvent Manager; you should see your input's event count increase as event records are received.

You can now incorporate the input into a GeoEvent Service and use filters and/or processors to apply real-time analytics on the event records being ingested. You might, for example, create a GeoEvent Definition with a simpler structure, tag the id field as the TRACK_ID, and use a Field Mapper Processor to flatten the hierarchical structure of each event record received so that you can send the records to a TCP/Text output for display using GeoEvent Logger.

Hopefully the examples and illustrations in this write-up are helpful in guiding you through the discovery of stream services and their properties, and show how you can use external clients – or configure GeoEvent Server inputs – to receive the feature records being broadcast.
Posted 09-07-2018