POST
Hello Elias - The Field Enricher processor was designed around a one-to-one relationship in which each received event can be linked to one (and only one) row/feature in a secondary enrichment table. Sorry, but I've consulted with the development team and I don't have a ready workaround to suggest. I've entered an enhancement request to extend the Field Enricher to support a one-to-many relationship with the enrichment table. We will consider this along with other product enhancements after 10.3. Thanks - RJ
Posted 04-29-2014 10:16 AM

POST
Hello Vitor - You are correct. When GeoFences overlap and an event is INSIDE the overlapping area, the GeoTagger processor's default GeoTag Format, DelimitedValue, will output a comma separated list of GeoFence names. Because the IN and MATCHES operators within expressions do not support variable substitution, filter expressions such as SearchKey IN ${TaggedValuesList} or TaggedValuesList MATCHES .*${SearchKey}.* which include the name of a field on the right-hand side of the expression will not work. In fact, the second expression - the one using MATCHES with .* regular expression atoms on either end - ends up allowing all events through the filter: when the parser encounters the unsupported ${SearchKey}, the filter's implementation replaces the entire expression with .* (which matches everything).

If you must handle the case in which GeoFences overlap, I would suggest that you review the forum thread Overlapping geofences with different attributes (http://forums.arcgis.com/threads/105319-Overlapping-geofences-with-different-attributes), which presents an alternative to using the default GeoTag Format, DelimitedValue. You might find that working with the GeoTagged values as a List allows you some flexibility in your GeoEvent Service design.

... I used a GeoTagger to add a field GeoFenceNames containing, as a List, the names of each GeoFence the received event's Geometry is INSIDE. Note that I did not use the default DelimitedValue. If the received event's Geometry is within an area overlapped by two GeoFences, the List appended to the event by the GeoTagger will have GeoFence names at the first two index positions: GeoFenceNames[0] and GeoFenceNames[1]. (Note that the list's index is zero-based.) By sending the geotagged event through a series of Field Calculator and Field Enricher processors, I can isolate the individual GeoFence names from the list and use those individual names as the key to the join needed to further enrich the events with attributes from a uniquely named polygon feature.

I have reviewed the need for supporting variable substitution for the IN and MATCHES operators within filter expressions with the development team and have created an enhancement request. No promises, but we will evaluate this against other work currently in our backlog and see if we can get it into a future product release. Best Regards - RJ
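Outside of GeoEvent Processor, the list handling described above boils down to pulling names out of a zero-based list and using each name as a join key. Below is a minimal Python sketch of that pattern; the event structure, GeoFence names, and enrichment table contents are illustrative assumptions, not part of the actual GeoEvent Service.

# Sketch of the pattern: isolate individual names from a zero-based list of
# GeoTagged values, then use each name as the key for a one-to-one enrichment.
enrichment_table = {
    "FenceA": {"Region": "North", "Priority": 1},
    "FenceB": {"Region": "South", "Priority": 2},
}

event = {
    "TRACK_ID": "9479",
    "GeoFenceNames": ["FenceA", "FenceB"],  # List appended by the GeoTagger
}

# What the Field Calculators do: isolate the names at index 0 and index 1
first_fence = event["GeoFenceNames"][0] if len(event["GeoFenceNames"]) > 0 else None
second_fence = event["GeoFenceNames"][1] if len(event["GeoFenceNames"]) > 1 else None

# What each Field Enricher does: join attributes using the isolated name as the key
for key in (first_fence, second_fence):
    if key in enrichment_table:
        event.update(enrichment_table[key])

print(event)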
Posted 04-24-2014 10:35 AM

POST
Hello Vitor - Sorry, but what you are trying to do is not supported by a filter at any of the 10.2.x product releases. The filter will not expand a field name such as ${ValuesList} to build a list of comma separated values. When configuring a filter with an expression and selecting the IN operator, the Value portion of the expression must be a literal comma separated list of values. These can be integer values, such as 1,16,32 as you discovered. They can also be unquoted strings, for example abc,def,ghi. You might refer to the thread Spatial filter - Different Geofences depending on a field (http://forums.arcgis.com/threads/102155-Spatial-filter-Different-Geofences-depending-on-a-field), which describes a way to use a filter to compare a geotagged value against another field's value using the = and != operators rather than the IN operator. Hope this information helps - RJ
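As an illustration, the comparison a filter performs with the IN operator amounts to a membership test against a set of values that is fixed when the filter is configured. The Python sketch below uses made-up field names and values to show that idea.

# The Value portion of the expression is a literal list fixed at configuration
# time; it cannot be expanded from another field such as ${ValuesList}.
allowed_codes = {1, 16, 32}            # e.g. a configured Value of: 1,16,32
allowed_names = {"abc", "def", "ghi"}  # e.g. a configured Value of: abc,def,ghi

event = {"StatusCode": 16, "RouteName": "def"}  # hypothetical event attributes

passes_filter = (event["StatusCode"] in allowed_codes
                 and event["RouteName"] in allowed_names)
print(passes_filter)  # True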
Posted 04-23-2014 11:27 AM

POST
The development team is prototyping a generic XML adapter for the 10.3 product release. If you have specific XML examples you would like to make sure GeoEvent Processor can handle, please post them as attachments to this thread so that we can evaluate them against the requirements we've developed and discuss specifics with you as needed.
Posted 04-23-2014 10:49 AM

POST
Hey Adam - You are spot on. With what we've exposed, you really don't have a true if/then/else programming construct. You only have filters, which will send an event through for processing if the filter's criteria are satisfied. So you can split the event stream to feed three different filters and, as long as the filters' criteria are mutually exclusive, feed events falling into different categories to separate Field Calculators to place a status into either an existing field or a new field. Beware that if an event satisfies two or more filters' criteria you will end up duplicating events, since the full event stream is independently considered by each filter. [attachment 33282] If you are working with non-integer values, you will want to designate a sufficiently small epsilon value so that you can test less-than or greater-than-or-equal ... don't test equal-to with non-integer values, as a value '0' might actually be 0.000000000013 behind the scenes (which of course would fail if the filter were testing strictly Speed = 0).

Your only other option is to crack open the SDK and develop a custom processor which accepts an event, performs its own custom business logic, and then outputs a new event which has been stamped with an appropriate status. For such simple attribute tests, I'm not convinced that any gain in performance would be worth the development effort. - RJ
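To make the epsilon point concrete, here is a minimal Python sketch of the suggested test; the field name and the epsilon value are illustrative, not product defaults.

EPSILON = 1e-6  # a sufficiently small threshold for "close enough to zero"

def is_stopped(speed: float) -> bool:
    # Test less-than against the threshold rather than strict equality with 0
    return abs(speed) < EPSILON

print(is_stopped(0.000000000013))  # True - a strict Speed = 0 test would fail
print(is_stopped(0.25))            # False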
Posted 04-23-2014 10:39 AM

POST
Hello - I had some problems reaching out to the URL you provided (http://api.metro.net/agencies/lametro/routes/207/vehicles). I was seeing some HTTP 503 "Service Unavailable" errors returned. However, I was able, intermittently, to obtain JSON from the site and I've included a sample below. You are using the correct input connector - Poll an external website for JSON. Notice in the sample JSON below that the response contains a number of items in a list. If you specify items for the input's JSON Object Name parameter, GeoEvent Processor will parse the JSON and ingest a separate GeoEvent for each "item" in the list returned from the external site.
{
"items": [
{
"seconds_since_report": 6,
"run_id": "207_240_0",
"longitude": -118.308899,
"heading": 360,
"route_id": "207",
"predictable": true,
"latitude": 34.003338,
"id": "9479"
},
{
"seconds_since_report": 244,
"run_id": "207_240_0",
"longitude": -118.309067,
"heading": 360,
"route_id": "207",
"predictable": true,
"latitude": 34.065708,
"id": "9556"
},
{
"seconds_since_report": 6,
"run_id": "207_236_1",
"longitude": -118.309166,
"heading": 180,
"route_id": "207",
"predictable": true,
"latitude": 34.056084,
"id": "9543"
},
{
"seconds_since_report": 6,
"run_id": "207_236_1",
"longitude": -118.309296,
"heading": 180,
"route_id": "207",
"predictable": true,
"latitude": 34.091366,
"id": "9425"
},
{
"seconds_since_report": 6,
"run_id": "207_240_0",
"longitude": -118.309036,
"heading": 330,
"route_id": "207",
"predictable": true,
"latitude": 34.021954,
"id": "9431"
},
{
"seconds_since_report": 64,
"run_id": "207_240_0",
"longitude": -118.308998,
"heading": 360,
"route_id": "207",
"predictable": true,
"latitude": 33.965965,
"id": "9554"
},
{
"seconds_since_report": 6,
"run_id": "207_236_1",
"longitude": -118.309036,
"heading": 180,
"route_id": "207",
"predictable": true,
"latitude": 34.00177,
"id": "9504"
},
{
"seconds_since_report": 65,
"run_id": "207_240_0",
"longitude": -118.308968,
"heading": 220,
"route_id": "207",
"predictable": true,
"latitude": 33.932251,
"id": "9438"
},
{
"seconds_since_report": 6,
"run_id": "207_236_1",
"longitude": -118.309395,
"heading": 220,
"route_id": "207",
"predictable": true,
"latitude": 34.104317,
"id": "9427"
},
{
"seconds_since_report": 6,
"run_id": "207_236_1",
"longitude": -118.308998,
"heading": 315,
"route_id": "207",
"predictable": true,
"latitude": 33.931149,
"id": "9424"
}
]
}

You will have to do some work to first generate, and possibly refine, a GeoEvent Definition for the event data you expect to receive. For example, if you do not specify a node name in the JSON Object Name parameter, and you configure the input to generate a GeoEvent Definition, GeoEvent Processor will generate an event definition which looks something like the illustration below (click the thumbnail to see a larger view): [attachment 33251]

When the input is reconfigured with items specified as the input's JSON Object Name, the generated GeoEvent Definition does not include a 'Group' element, since each "item" is ingested as a separate event: [attachment 33252]

Now that you have a GeoEvent Definition which specifies the format of each vehicle report, you need to reconfigure the input to Construct Geometry From Fields. The events you are receiving contain latitude and longitude values, and if you want to update a feature class through a feature service you will need to construct a Geometry. To do that you must tell the input which fields (specified in the event definition) contain the coordinate values.

My final configuration of the Poll an external website for JSON input is illustrated below. Notice that I've changed the Create GeoEvent Definition parameter to 'False' and selected the previously created GeoEvent Definition as the one to use to interpret the received events. (This assumes, of course, that the structure of the data is not going to change...) I've also specified that my X Geometry Field value should be taken from 'longitude' and my Y Geometry Field from 'latitude'. Because I am asking the input to build a Geometry for me, I need a field in which the Geometry can be placed, so I also needed to edit the generated LA-Metro-Vehicle-Report event definition to include a new field of type Geometry. [attachment 33254] [attachment 33253]

Hope this information helps - RJ
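If it helps to see what the input is doing with this feed, here is a minimal Python sketch that polls the URL, treats each element of the "items" list as a separate event, and pairs the longitude/latitude fields as an X/Y point. The error handling is deliberately simple, and the script is only an illustration of the parsing - not a replacement for the Poll an external website for JSON input.

import json
import urllib.request

URL = "http://api.metro.net/agencies/lametro/routes/207/vehicles"

try:
    with urllib.request.urlopen(URL, timeout=10) as response:
        payload = json.load(response)
except OSError as err:
    # The site intermittently returns HTTP 503; just try again on the next poll
    print(f"poll failed ({err}); retry on the next polling interval")
    payload = {}

for item in payload.get("items", []):
    # Geometry constructed from the longitude (X) and latitude (Y) fields
    point = {"x": item["longitude"], "y": item["latitude"]}
    print(item["id"], item["route_id"], point)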
Posted 04-22-2014 10:38 AM

POST
Hey Simon - The latest Twitter Connector (http://www.arcgis.com/home/item.html?id=041138094e5348eb902f4b71175eeb6f) on the Product Gallery (http://links.esri.com/geoevent-gallery) should match the latest twitter-for-geoevent source in the GitHub repo (https://github.com/Esri/twitter-for-geoevent). When we went to present Twitter at the DevSummit (March 2014) we had a problem sending tweets. This was addressed by rebuilding the latest from GitHub and updating the offering on the product gallery. I think Cris pulled the offering from the Gallery to verify this (refer to the thread Twitter connector, Gallery vs. Github version: http://forums.arcgis.com/threads/106353-Twitter-connector-Gallery-vs.-Github-version).

I had thought that the Twitter API specified that we could filter tweets for specified hashtags or a bounding box ... but Morakot corrected me and indicated that, per the Twitter Dev website, the bounding box doesn't act as a filter ... it provides spatial context, so you should get the union of tweets filtered by hashtag and all tweets within the specified bounding box. If this is not what you are seeing, please let us know.

You might consider using the Twitter adapter to filter for "interesting" tweets by hashtag, but use GeoEvent Processor GeoFences to provide the spatial restriction by configuring a filter in a GeoEvent Service which discards any event not inside the GeoFence. I think you will find this approach more flexible, as you don't have to reconfigure an input to change or update your area of interest ... you can instead synchronize the GeoFence(s) in GEP with a dynamic feature service. Please let us know how this is working for you... - RJ
Posted 04-15-2014 04:49 PM

POST
Thomas - As it turns out, a couple of developers on the team are working to prototype a custom processor which might do what you're looking for, but the proof of concept is not working properly yet, so it's not something I can share. So, let's see what we can do with what we have available out of the box...

I changed my FieldEnricher to enrich events received from the feature service's feature class, rather than the non-spatial table. [attachment 33128] The two filter elements branch the event flow one way if the EventCount field exists in the received event (bottom branch) and the other way if the field is not included in the event's attributes. Notice that one filter specifies 'AND' while the other filter specifies 'NOT'. [attachment 33130]

I've attached a 10.2.1 product configuration I exported which defines all of the various elements I created for this exercise. I had to wrap it in a ZIP to get the forum to allow me to attach it. You should be able to unzip it and import it into your GeoEvent Processor to create the Inputs, Outputs, GeoEvent Service, and GeoEvent Definitions needed - along with a registered Data Store pointing to LOCALHOST, where the feature service is assumed to have been published. The GeoEvent Definition trail is especially important to getting the event flow to work. I didn't use any Field Mapper processors to ensure that the event schemas matched what was needed to update the feature service. You will want to look at the GeoEvent Definitions being created by the Field Reducer, the Field Enricher, and the Field Calculator named 'ResetField', which creates a field named EventCount if the received event does not already have that field. I included some sample JSON data in the ZIP archive with the product configuration. - RJ
Posted 04-15-2014 04:00 PM

POST
Hello David - Sorry, but the text adapter does not support events which have a variable amount of whitespace (or other characters) between attribute values. You can specify a single character or a multi-character sequence using an encoded character (an encoding for an ASCII space, for example), but the attribute separator cannot be set to a RegEx pattern. If you knew that attributes were delimited by exactly three ASCII spaces, you could specify a sequence of three encoded spaces as the attribute separator. The syntax should actually be Unicode format: \u003B (semicolon), \u0020 (ASCII space) ... see the illustration below. Also, multi-character delimiters do not seem to be working in 10.2.2 ... a bug has been entered to address this. [attachment 33172]

You might be able to design a GeoEvent Service, using the support for Java String functions available in the 10.2.2 release, to reduce the variable delimiters between event attributes to a single, standard delimiter ... but I think a better approach would be to explore a JavaScript solution to pre-process your incoming events, replace the variable delimitation with a single delimiter, and then write the event data to a TCP socket or WebSocket for a GeoEvent Service to ingest. - RJ
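As a rough illustration of that pre-processing idea (the suggestion above is a JavaScript solution; Python is used below purely for illustration), the sketch collapses runs of whitespace into a single delimiter and writes the result to a TCP socket. The host, port, and sample line are assumptions - point them at your own TCP/Text input, which must be running for the connection to succeed.

import re
import socket

GEOEVENT_HOST, GEOEVENT_PORT = "localhost", 5565  # hypothetical TCP/Text input

raw_line = "AC1234   34.056084     -118.309166   2014-04-15T14:03:00"
normalized = re.sub(r"\s+", ",", raw_line.strip()) + "\n"  # one standard delimiter

with socket.create_connection((GEOEVENT_HOST, GEOEVENT_PORT), timeout=5) as sock:
    sock.sendall(normalized.encode("utf-8"))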
Posted 04-15-2014 02:03 PM

POST
You will want to configure a Field Calculator whose Expression looks something like substring(DateAsString,5,7) to pull a two-character substring out of a date/time field received as a String and write the retrieved substring to a new field. The index values 5 and 7 in the substring( ) syntax specify the starting index (inclusive) and ending index (exclusive) into a zero-based array of characters. The downside of this is that the date/time value must be specified, by the GeoEvent Definition, as a String, and the service elements in the GeoEvent Service will only be able to work with the date/time as a String ... not a Date. You can work around this by feeding the event data back into a second input and allowing the second input to handle the conversion of the String representation of the date/time to an actual Date value. Details are included in the attached PDF. Please note that support for Java String functions from within a Field Calculator is available in the 10.2.2 release of GeoEvent Processor, which should be publicly available on your Customer Care Portal tomorrow, 15-April-2014. Esri Distributors should have been given access to download the new release last week. - RJ
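For reference, the substring( ) expression above behaves like the slice in this minimal Python sketch; the sample date/time string and its layout are assumptions.

date_as_string = "2014-04-14 02:59:00"  # hypothetical yyyy-MM-dd HH:mm:ss string
month_as_string = date_as_string[5:7]   # start index 5 (inclusive), end index 7 (exclusive)
print(month_as_string)                  # "04"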
Posted 04-14-2014 02:59 PM

POST
Hello Simon - We have an item in our backlog to develop a processor which will do what I think you are describing - take X/Y (or lat/lon) points from an asset's last reported position, treat them as vertices, and use those vertices to update a polyline feature to build a track line. The Add a Feature and Update a Feature outbound connectors will only work with the geometry of the target feature service's feature layer. If the service's feature layer has point geometry, the X/Y (or lat/lon) values received will either add new or update existing point features. If you wanted to use these GEP outputs to update a polyline or polygon, the Geometry received with the event would need to be a polyline or polygon feature.

With the 10.2.x releases you might be able to use an Incident Detector processor to build a track line. You would select Polyline for the processor's Geometry Type to configure the processor to build and update a polyline rather than point features. You will also need to make sure that the Incident Type is Cumulative (not PointInTime) ... and set the processor's Expiry Time to a sufficiently high value that the incident will not be automatically closed when no updates are received for a period of time; the default is 300 seconds (5 minutes).

Update: March 2019
I am retracting my advice above to possibly use an Incident Detector processor to build track lines for assets whose positions are being periodically reported as point locations. This approach would rely on the Incident Manager - a server-side component to which you have read-only access through the GeoEvent Server administrative API - to maintain an audit history with a GUID, timestamp, and point geometry for every event record received which satisfies the Incident Detector's opening condition. The polyline geometry output from the Incident Detector processor is built from this audit history. I didn't think about it at the time, but it is obvious to me now that maintaining an open/ongoing incident record for every asset you are tracking, in order to have the Incident Detector output an updated polyline geometry, is going to consume more and more RAM until the JVM heap is full. Unless you include a way of periodically closing incidents so that GeoEvent Server can use its ability to maintain a fixed number of "ongoing" and "closed" incidents with a reasonable audit history, using an Incident Detector to build a track line is a very bad idea. Sorry ...

The correct approach to this problem is to use GeoEvent Server to receive and persist locations reported in real time, allow feature records to accumulate in a feature class, and then use GeoAnalytics Server to periodically run a batch analysis and reconstruct tracks. You can read more about that capability here: Reconstruct Tracks. Hope this information helps - RJ
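As a rough sketch of the batch approach described in the update, the Python below groups accumulated point records by track identifier and orders them by time to form one vertex path per asset. The records are made up, and GeoAnalytics Server's Reconstruct Tracks tool is the supported way to do this at scale.

from collections import defaultdict

# Illustrative point records accumulated in a feature class
records = [
    {"track_id": "9479", "time": 1, "x": -118.3089, "y": 34.0033},
    {"track_id": "9556", "time": 1, "x": -118.3091, "y": 34.0657},
    {"track_id": "9479", "time": 2, "x": -118.3090, "y": 34.0219},
    {"track_id": "9479", "time": 3, "x": -118.3090, "y": 34.0561},
]

tracks = defaultdict(list)
for rec in sorted(records, key=lambda r: r["time"]):
    tracks[rec["track_id"]].append((rec["x"], rec["y"]))

for track_id, path in tracks.items():
    print(track_id, path)  # each path is the ordered vertex list of a polyline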
Posted 04-14-2014 08:43 AM

POST
Hey Elias - Can you provide me a sample of how the date/time values are being sent to GeoEvent Processor? Are you interested in receiving a string value such as "14-7-2012 02:30:45 PM" and allowing an input connector to use an Expected Date Format mask such as dd-M-yyyy hh:mm:ss a to interpret the string as a Date ... July 14, 2012 14:30:45 in this case ... and then pull the Month portion out as a numeric value (e.g. 7 for July)? We might be able to use substring( ) or a Regular Expression to pull out the portion of a date/time String which represents the month, if the date/time is brought in as a String. But if you need the Input to convert the value it receives to a Date, we may have trouble trying to get the month out of the Date object. - RJ
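To illustrate that interpretation, here is a minimal Python sketch that parses the sample string with a format equivalent to dd-M-yyyy hh:mm:ss a and reads the month back out as a number; the strptime directives simply stand in for the Java-style mask used by the input connector.

from datetime import datetime

value = "14-7-2012 02:30:45 PM"
parsed = datetime.strptime(value, "%d-%m-%Y %I:%M:%S %p")

print(parsed)        # 2012-07-14 14:30:45
print(parsed.month)  # 7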
Posted 04-11-2014 12:38 PM

POST
Hello Elias - I think there was an issue with using non-spatial tables as the source of event enrichment at the 10.2.0 release. The Field Enricher processor was assuming that the target table would have a Geometry field ... which, of course, non-spatial tables do not. I created a mock-up of the scenario we have been working with using my build of 10.2.1 and didn't have any problems first GeoTagging events with the GeoFence(s) they were inside, then using a Field Calculator to pull the name of a specific GeoFence from the List of tags, and sending that event through a series of Field Enrichers ... one enriching the event with a String from the Feature Service's Feature Layer, and one enriching the event with a String from the Feature Service's Table. Here's how I configured my GeoEvent Service (click to enlarge the thumbnails): [attachment 32930] [attachment 32931] [attachment 32932] My Feature Service has a Feature Layer at layer index 0 and a non-spatial table at layer index 1: [attachment 32933] - RJ
Posted 04-08-2014 11:59 AM

POST
Hello Thomas - Using only out-of-the-box processors, this could be tricky. By design, most of the configurable filters and processors used in a GeoEvent Service do not maintain any sort of cache, so your GeoEvent Service will never know, for example, how many events for a vehicle with the license plate ABX-339 have been received. We might be able to work around this, though, without having to use the GeoEvent Processor SDK to develop a custom processor. Do your events have unique track identifiers? If so, we might develop a creative GeoEvent Service which polls its own output feature service to update a non-spatial table containing event statistics, with the attributes being maintained on the base features. [attachment 32896]

Consider the illustration of the GeoEvent Service above (click to enlarge the thumbnail). The event flow goes something like this (a sketch of the increment-or-reset logic is appended below):
- An event is received on a GEP input connector's REST endpoint. The input connector constructs a Geometry from the event's attribute fields.
- A Field Reducer is used to remove the Latitude and Longitude fields from the event. The constructed Geometry is retained and the event is now "flat" (compliant with an Esri Feature Service).
- A Field Enricher joins the target feature service's non-spatial table to the event. If field enrichment succeeds because a field named EventCount is found, a Field Calculator increments the integer in the joined field and writes the incremented value back into the enriched event, which is then written out to the target feature service's feature class. If the field enrichment fails because the target feature service's non-spatial table has no EventCount for the event with the received TRACK_ID, then a different Field Calculator ... the one named ResetField ... writes a hard-coded '1' into a new field named EventCount.
- In either case, a single feature in the target feature service's feature class is updated with the received event, which is tracking a count of the events received for that TRACK_ID.
- A separate input connector is polling the target feature service's feature class, and when it sees a feature whose DateTime value is greater than the last date/time a feature was added/updated, the "new" features are retrieved and the EventCount from each feature is transferred to the target feature service's non-spatial table.

Depending on the rate at which you are receiving new events for a given track, I've observed this approach working. My tests were only preliminary, though ... for example, I was making sure that the second input connector always had a chance to poll the feature class to get updated counts for the events received before new events were POST'ed to the other input connector's REST endpoint. I'll try to make some time to work with this approach some more and identify any faults. Please reply if you are able to make this work or have any questions. - RJ
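Here is a minimal Python sketch of the increment-or-reset branch described in the list above; the stats_table dictionary stands in for the feature service's non-spatial table, and in the actual service the write-back to that table happens through the second polling input rather than directly.

stats_table = {"ABX-339": {"EventCount": 4}}  # counts maintained per TRACK_ID

def apply_count(event: dict) -> dict:
    joined = stats_table.get(event["TRACK_ID"])
    if joined and "EventCount" in joined:   # field enrichment succeeded
        event["EventCount"] = joined["EventCount"] + 1
    else:                                   # enrichment failed: 'ResetField'
        event["EventCount"] = 1
    # In the sketch the table is updated directly; the service does this by
    # polling the feature class and transferring EventCount back to the table.
    stats_table[event["TRACK_ID"]] = {"EventCount": event["EventCount"]}
    return event

print(apply_count({"TRACK_ID": "ABX-339"}))  # EventCount becomes 5
print(apply_count({"TRACK_ID": "XYZ-001"}))  # EventCount becomes 1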
Posted 04-07-2014 04:54 PM

POST
Hello Brian - Additional references which may be helpful:
- Ports used by GeoEvent Processor: see the thread REST Port (http://forums.arcgis.com/threads/105789-Rest-Port?p=377617&viewfull=1#post377617)
- Windows command to list ports currently in use: netstat -o -a

Run the latter command from within a Windows command console to see which ports are in use by which PID (process identifier) on your system. Like Mark said, on my local system I usually find no conflicts when using ports in the 5500 - 5600 range for a TCP/Text input. - RJ
Posted 04-07-2014 01:20 PM