POST
Is each collection item treated as an event? Yes. When a JSON structure is received and the node being used as the root is a list, GeoEvent Processor parses the list and sends each item in the list to the adapter as a separate GeoEvent.

Five inputs, five GeoEvent Services, and five outputs, with each "Update Feature" output going to a separate AGS feature service, is a reasonable service layout. What we have to manage is the total number of events being received and processed every second, so as not to overwhelm the server. Updating a feature service is currently a bottleneck in the event processing workflow. GeoEvent Processor can generally handle on the order of 800 events per second when no processing or filtering is being performed. When updating a feature service, the event traffic needs to be throttled back to 200-300 events per second (total, across all running GeoEvent Services). We are developing high-capacity stream services with high availability and cluster processing for our next major product release, but those will not be publicly available until sometime mid- to late 2014. I've attached two files to this thread which you might find helpful for capacity planning and general product performance.

You mention that the other four inputs all retrieve more than 100 items, that usually not all of the records in the target feature class get updated after an input has run, and that after about 12 hours no further updates occur to the target feature classes for inputs retrieving more than 100 items. I don't believe that what the feature service does with the event data it receives will affect resource consumption. That is, it is just as expensive to query a feature service and update 50 features as it is to query the feature service and update zero features. As long as we're not making PUT and GET calls too frequently, on too large a feature dataset, GeoEvent Processor should take the JSON in, convert it to a GeoEvent, query the feature service, and post the event data.
It shouldn't matter that after a half-day all necessary features have been processed and the dataset is considered up-to-date. What matters is how many total features are in that dataset and how many events are being received each second.

"BTW - the 'java.exe' process has remained at about 600MB since I made the change to max heap size but now one of the ArcSOC.exe services is consuming 1.4GB memory." The ArcSOC.exe processes are part of the ArcGIS for Server product. If the issue is indeed with feature service updates, it makes sense that those processes are consuming the lion's share of the server's memory.

I might recommend scaling your solution back to run just one of your five GeoEvent Services for a period of time to monitor the system's resources, then slowly scale up: either add a second service, or stop a "smaller" service to begin running one that handles more data. Running all five services when 80% of them are each expecting a few hundred events at a shot might simply be overwhelming your server. - RJ
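The two behaviors described above, splitting a JSON list into individual events and throttling the rate at which updates are pushed, can be sketched in Python. This is a minimal illustration, not GeoEvent Processor's implementation; the `post_to_feature_service` call and the rate of 250 events per second are placeholders.

```python
import json
import time

def split_into_events(payload_text, root_key=None):
    """Parse a JSON payload; if the root (or a named node) is a list,
    treat each item as a separate event, mirroring how GeoEvent
    Processor handles a list node."""
    data = json.loads(payload_text)
    if root_key is not None:
        data = data[root_key]
    return data if isinstance(data, list) else [data]

class Throttle:
    """Crude client-side rate limiter: sleep so that calls never
    exceed max_per_second."""
    def __init__(self, max_per_second):
        self.interval = 1.0 / max_per_second
        self.last = 0.0

    def wait(self):
        now = time.monotonic()
        delay = self.last + self.interval - now
        if delay > 0:
            time.sleep(delay)
        self.last = time.monotonic()

payload = '{"items": [{"id": 1}, {"id": 2}, {"id": 3}]}'
events = split_into_events(payload, root_key="items")

throttle = Throttle(max_per_second=250)  # stay inside the 200-300 eps window
for event in events:
    throttle.wait()
    # post_to_feature_service(event)  # hypothetical feature-update call
```

The point of the sketch is that throttling happens per event across all services, which is why the 200-300 events per second budget is a total, not a per-service figure.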
12-18-2013 03:46 PM

POST
Hello Dennis - I'd like to provide you some under-the-hood details which might play into the question we're now discussing. Internally, GeoEvent Processor uses ActiveMQ to manage the event queues being sent to each node in a GeoEvent Service. Esri Germany reported that they observed a large number of files being written to a tmp_storage folder beneath the ...\data\activemq folder in the product installation directory. This was a disk space consumption issue for them. We recommended to the Esri Germany team that they increase the wrapper.java.maxmemory setting. Our understanding is that they are feeding a very high volume of events into GeoEvent Processor, which caches events to disk when the service components are unable to keep up.

Our recommendation for system resources is a minimum of 6GB of RAM for GEP (vs. the 4GB minimum recommendation for ArcGIS Server). The default sizes for the queues used by the event processing framework can also be adjusted; you would only need to consider that if you were working with high volumes of event data. To adjust the queue sizes, edit the com.esri.ges.messaging.jms.cfg file and increase the defaults to reflect the following:

com.esri.ges.messaging.jms.destinationPolicy.queue.memoryLimit (10 megs)
com.esri.ges.messaging.jms.destinationPolicy.topic.memoryLimit (10 megs)
com.esri.ges.messaging.jms.destinationPolicy.topic.memoryLimit (1 gig)

You will need to stop GeoEvent Processor, edit these files as an administrator (assuming they are beneath C:\Program Files\...), and then restart GeoEvent Processor in order for the settings to take effect.

If you've observed that the java.exe process running with the -Dkaraf.base="C:\<install folder>\GeoEventProcessor" -Dkaraf.data="C:\<install folder>\GeoEventProcessor\data" arguments in its command line is actually consuming more than 1024 megs of memory, lowering the wrapper.java.maxmemory value to 512 megs as you suggest will only result in Java throwing out-of-memory exceptions.
How many events are you sending into GeoEvent Processor? What inputs are you using? How many different GeoEvent Services do you have running? I think we need to examine the load you are placing on your server, by GeoEvent Processor as well as by any other running services/applications, before we recommend throttling the memory allowed to the Java process. - RJ
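Since the recommendation above involves editing a key=value .cfg file in place, here is a small, hedged sketch of doing that safely in Python. The property name is taken from the post; the "5mb"/"10mb" value syntax is illustrative, so check the actual defaults in your installed file before changing anything.

```python
def set_cfg_value(lines, key, value):
    """Return cfg lines with `key` set to `value`, replacing an
    existing `key=...` assignment or appending one if absent."""
    out, found = [], False
    for line in lines:
        name = line.split("=", 1)[0].strip()
        if "=" in line and name == key:
            out.append(f"{key}={value}")
            found = True
        else:
            out.append(line)
    if not found:
        out.append(f"{key}={value}")
    return out

cfg = [
    "# destination policy defaults (values here are illustrative)",
    "com.esri.ges.messaging.jms.destinationPolicy.queue.memoryLimit=5mb",
]
cfg = set_cfg_value(
    cfg,
    "com.esri.ges.messaging.jms.destinationPolicy.queue.memoryLimit",
    "10mb",
)
```

In practice you would read the file, apply `set_cfg_value`, and write it back while GeoEvent Processor is stopped, as the post describes.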
12-18-2013 08:22 AM

POST
Hello Matt - Thank you for posting. I was able to confirm what you suggest: a user can add/update features in an otherwise non-editable feature service using GeoEvent Processor. I conducted my tests with the 10.2.1 product release, which should be publicly available the second week of January 2014. Currently, GeoEvent Processor can only discover and target AGOL hosted feature services which you own. You cannot, for example, configure a GeoEvent Processor Output to update or add features to a feature service which has merely been shared with you. GeoEvent Processor looks for items owned by you, listed when you review "My Content" in your Organization. I suspect that because I "own" the data, I was allowed to add/update features. We are, however, investigating this use case with the Server Usage support team. We will reply back to this thread if/when we learn anything more. Best Regards - RJ

Cross Reference: Allow GeoEvent Processor to update non-editable AGOL feature services
12-16-2013 09:28 AM

POST
Hey Dennis - Attached is an illustration from the 10.2.0 product, supporting what Ryan says above. The interface panels change slightly at 10.2.1 ... but not so much that you can't use what is below. [ATTACH=CONFIG]29862[/ATTACH] - RJ
12-13-2013 07:39 AM

POST
Hello Nathan - I've been working with Scott Mitchell over at GeoFeedia, investigating your issue with using GeoEvent Processor to consume the JSON returned when you poll GeoFeedia's site. I wanted to share with you a few observations / discoveries.

1) As I mentioned above, data with groups and lists (elements whose cardinality is greater than one) cannot be represented as comma-separated text, so you will want to use a 'Write to a .json file' Output rather than writing to a CSV file. Writing the data as comma-separated text would be ambiguous: you wouldn't know if an item after a comma were the next element, the next sub-element within a group, or the next item in a list.

2) You can use the URL you shared with me. You may also find the Product Gallery (http://www.arcgis.com/home/group.html?owner=GeoEventTeam&title=ArcGIS%20GeoEvent%20Processor) helpful in resolving your issue with getting the GeoFeedia data into a feature service.

Hope this information helps - RJ
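The ambiguity argument in point 1 can be demonstrated concretely. This is an illustrative sketch with made-up field names, not the actual GeoFeedia schema: JSON round-trips the nesting, while a comma-separated rendering destroys it.

```python
import json

# A nested event like those in the feed (field names are made up):
event = {
    "name": "post-1",
    "tags": ["fire", "road"],           # a list element
    "location": {"x": -117.1, "y": 34.05},  # a group element
}

# Writing to a .json file preserves the structure unambiguously:
restored = json.loads(json.dumps(event))
assert restored == event

# A naive comma-separated rendering flattens everything:
flat = "post-1,fire,road,-117.1,34.05"
# Reading that line back, there is no way to tell that 'fire' and
# 'road' were items of one list while -117.1 and 34.05 were
# sub-elements of a group.
```

This is why the recommendation is a JSON file Output for data with groups and lists.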
12-12-2013 02:55 PM

POST
Hello Darryl - GeoEvent Processor has a default 2GB limit for RAM allocation. You can configure this for your instance. Locate the product installation folder ...\ArcGIS\Server\GeoEventProcessor\etc on your system and open the ArcGISGeoEventProcessor.cfg file in a text editor. You should be able to locate the configuration setting toward the top of the file:

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=2048

I would recommend monitoring other processes running on the system to see which is consuming the most RAM. Unless you have configured GEP to use more than its default, you shouldn't find that ArcGISGeoEventProcessor.exe is using that much RAM.

The issue with GeoEvent Services disappearing is an issue with 10.2.0 which we have addressed in 10.2.1 (effectively a service pack / stabilization release). Can you reevaluate the issue with the 10.2.1 release? It should be publicly available the second week of January. (Regional offices, Business Partners, and Distributors should be receiving their distribution the week of December 19th.) Thanks - RJ
12-10-2013 10:22 AM

POST
Hello Darryl - We have been unable to identify a workaround for the issue in which a 'Subscribe to an external Web Socket for JSON' Input connector does not receive content from the ws://geoeventsample1.esri.com:8080/exactearthservice sample data when using the 10.2.0 release of GeoEvent Processor. To confirm, this does not appear to be an issue with the 10.2.1 release. Please reply to this thread if you are unable to wait for the 10.2.1 product release, which should be publicly available the second week of January. (Regional offices, Business Partners, and Distributors should be receiving their distribution the week of December 19th.) Thanks - RJ
12-10-2013 10:10 AM

POST
Hello Nathan - "I've tried just a simple JSON input and CSV output with just the default connector and all I get is the header as an entry." I wouldn't recommend a CSV output for this data. When I created a 'Poll an external website for JSON' Input connector and allowed GEP to generate a GeoEvent Definition for me, the first thing I noticed was the complexity of the data. Data with groups and lists (elements whose cardinality is greater than one) cannot be represented as comma-separated text. The data would be ambiguous; you wouldn't know if an item after a comma were the next element, the next sub-element within a group, or the next item in a list.

That aside, I found that I was receiving INFO and ERROR messages in the GeoEvent Processor logfile for the https://api.geofeedia.com/v1/search/geofeed/18591?appID=fbf45924&appKey=23ced6fac3544364b8c4b84aee76e287&format=json-default data feed. When my Input connector first polls for data, I see an ERROR message which looks like (click to enlarge): [ATTACH=CONFIG]29698[/ATTACH]

I am able to use JSONLint to parse the feed's JSON, but I think GeoEvent Processor is having a problem with the JSON structure. When I stop and restart the Input connector, I see a message from com.esri.ges.messaging.jms.JmsMessaging saying "Error trying to receive message. Output may not have processed message". This is just an INFO message, but then, when the Input next polls the external site, the error in the screenshot above is logged.

If I configure the Input connector's 'JSON Object Name' property to look for "items" (no quotes), I get a simpler GeoEvent Definition: [ATTACH=CONFIG]29699[/ATTACH]

But only a handful of events (I was only seeing 11) get output to the JSON file being managed by the Output connector. The JSON in the file doesn't appear valid at first glance; the GEP Monitor continues to increment the count for the Output connector, but no additional events get written to the output. The GEP logfile is full of messages from com.esri.ges.messaging.jms.JmsMessaging saying "Error trying to receive message. Output may not have processed message".

I will look into this further, but I wanted to get you my initial recommendation (not to use a CSV file for output) and record my initial observations in case they are at all helpful to you. - RJ
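To make the 'JSON Object Name' step concrete: setting that property tells the Input to skip the response's wrapper object and treat a named node as the record list. The sketch below uses a simplified stand-in for the feed's structure (the "paging" and record fields are assumed, not GeoFeedia's actual schema).

```python
import json

# A simplified stand-in for the polled response (structure assumed):
feed = json.loads("""
{
  "paging": {"next": null},
  "items": [
    {"id": "a1", "type": "photo"},
    {"id": "a2", "type": "video"}
  ]
}
""")

def select_node(payload, object_name):
    """Return the records under `object_name`, roughly what setting
    the Input connector's 'JSON Object Name' property to 'items'
    accomplishes: each list item becomes one event."""
    value = payload[object_name]
    return value if isinstance(value, list) else [value]

items = select_node(feed, "items")
fields = sorted(items[0])  # the much simpler implied GeoEvent Definition
```

Drilling into "items" is why the generated GeoEvent Definition becomes simpler: the definition only needs to describe one record, not the whole response envelope.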
12-09-2013 04:38 PM

POST
"...you will create a feature within arcmap with the two attributes 'Name, StatusCode'. Also you have to add symbology accordingly to the 'StatusCode' ..."

I was assuming that the location of the static/stationary polygon(s) is well-known. The event data being received in my example does not contain any location or geographic information, just attribute values to use to update a target. So I was using ArcMap to author a map document which contained the well-known polygon(s). I was publishing a feature service because we need the feature access capability enabled in order to use GeoEvent Processor to update the attribute values of the named feature(s). Configuring the display to render/stylize the polygon based on the 'StatusCode' is completely optional. It was just an easy, visual way of quickly determining that the polygon's status was being updated as I fed events into GeoEvent Processor.

"But why delete the shape, where is the information stored about the shape that is in this service?"

Keep in mind that whatever fields exist in the GeoEvent Definition received by the Output connector responsible for updating the feature service - in my example this would be the fs-out connector - will be used to update the targeted feature service. So if my input events had fields 'Name' and 'StatusCode' ... I wouldn't need a field mapper. If the input had a field named 'shape' this would be bad; I don't want the event to drive updates to the static/stationary geometry. In my example the only attribute data I had to work with was a name, a date/time, and a status. Since the output target has a name, a status, and a shape, if I used a field mapper with the feature service's full schema, I would have a destination field 'shape' with no corresponding source field. So I elected to delete 'shape' from the imported GeoEvent Definition to make the field mapping cleaner and to make clear that the GeoEvent Service is only updating the attribute values specified in the Field Mapper processor. - RJ
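The effect of deleting 'shape' from the mapping can be sketched as follows. This is an illustration of the principle, not the Field Mapper's implementation: destination fields with no mapping entry simply never appear in the update, so the geometry is left alone.

```python
def map_fields(event, mapping):
    """Build the attribute set for a feature update from an input
    event. Destination fields with no entry in `mapping` (such as
    'shape') are simply absent, so the update never touches them."""
    return {dst: event[src] for src, dst in mapping.items()}

# An input event with the three fields from the example:
event = {"TrackId": "Alpha", "DateTime": "8/25/2012 13:21:36", "CodedValue": 1}

# Map TrackId -> Name and CodedValue -> StatusCode; nothing maps to 'shape':
attrs = map_fields(event, {"TrackId": "Name", "CodedValue": "StatusCode"})
```

With 'shape' deleted from the imported definition, `attrs` carries only the attribute values the service is meant to change.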
12-04-2013 03:47 PM

POST
I have a workflow in mind, but it's not perfect. Perhaps if I share it with you, you or someone else can suggest an improvement.

I'm assuming that what we are trying to address is a proposed maintenance cycle which requires a service admin to stop a feature service (for a period of time) which a GeoEvent Service is actively updating. I'm further assuming that the maintenance will be to add attribute fields to the feature service's schema, not to change data types of existing fields or remove existing fields. What we want to do is actively cache events received by GeoEvent Processor so that when the admin brings the feature service back online it will automatically receive any events which came in during the (planned) outage.

If you were to incorporate a second Output connector into the GeoEvent Service which is updating features in the feature service, you could start/stop that Output independently, effectively turning caching "on" / "off". You could use any Output, but I'll stick with a CSV or JSON file Output for purposes of this discussion.

So the workflow might be that you start the "cache" Output so that whatever event data is being sent to the Output updating the feature service gets copied to a system file. Then stop the feature service, which will result in event data sent to the Output updating the feature service (e.g. fs-out) failing to reach the target feature layer. But the event data is being cached in the system file, so we're OK so far. Once the planned maintenance to the feature service is complete, the admin can restart the feature service, which will resume feature service updates, and then stop the secondary "cache" Output so that data is no longer being written to the system file. The admin would then copy the system file to a folder being watched by a different GeoEvent Service so that the "cached" event data would get read into GEP and used to update the target feature service.

But we have a race condition. Depending on the rate at which feature data is being received, I can imagine a situation in which "live" data output to the feature layer gets overwritten by older "cached" event data from the system file, once the system file is copied and subsequently read. I'm not sure that we'll be able to identify a solution which guarantees both (a) no data loss and (b) that the most recent "live" event data sent by a data provider is given priority over data from a "cache". Perhaps you or someone else can suggest a modification which would address this concern. - RJ
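One possible mitigation for the race condition, assuming every event carries a reliable timestamp, is a last-writer-wins guard on replay: a cached event is applied only if it is not older than the update already on the feature. This is a sketch of that idea, not something GeoEvent Processor provides out of the box.

```python
def apply_if_not_older(current, incoming, time_field="DateTime"):
    """Accept a replayed (cached) event only if it is at least as
    recent as the last update already applied; otherwise keep the
    live value. Assumes ISO-8601 timestamp strings, which compare
    correctly as plain strings."""
    if current is None or incoming[time_field] >= current[time_field]:
        return incoming
    return current

live   = {"Name": "Alpha", "StatusCode": 2, "DateTime": "2013-12-03T16:10:00"}
cached = {"Name": "Alpha", "StatusCode": 1, "DateTime": "2013-12-03T15:55:00"}

kept = apply_if_not_older(live, cached)  # older cached event is discarded
```

This trades a little extra bookkeeping (reading the feature's current timestamp before each replayed update) for a guarantee that replaying the cache file cannot clobber newer live data.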
12-03-2013 04:03 PM

POST
Hi Ingeborg, My knowledge in the area of JavaScript and HTML isn't going to be sufficient to tell you how to do what you're asking, but I believe the stream layer object in the HTML should have the information necessary to allow the "identification" of features. That is, you should be able to add code which will allow users to retrieve attributes associated with an event, such as the event's date/time, by clicking on a feature's location. I say that because part of the edits you can make to the HTML within the <script> tag as a developer includes a variable declaration for the layer (var layerDefinition = ...) which includes attribute fields. If you look in the attached Twitter.zip archive you'll find a TwitterStreamLayer.html which was updated for the revised Twitter tutorial. (This tutorial should be up on the product gallery soon...) Look for the first block of code below the layerDefinition declaration. You should see code which identifies the basemap, the map's center, and zoom level, then code which invokes a setClickEvents() function to establish callbacks for click events on the HTML form's two buttons. I'm assuming that with sufficient knowledge of JavaScript you could establish a callback for a mouse click or double-click over the basemap which would find the nearest feature within a tolerance and report the attributes of that feature back to the user in some sort of pop-up. I just don't know how to actually do that. Hope this helps - RJ
12-03-2013 03:22 PM

POST
Hello - Since you only want to update the "status", not the geometry, of the stationary polygon feature, I'd recommend taking a more generic approach than the one presented in the Introduction tutorial. Most of the exercises in the tutorial use exactly one GeoEvent Definition. This works when the input has the same fields, with the same data types, in the same order as the output. This is done so that an exercise can focus on a particular bit of functionality without complicating the exercise with multiple event definitions. However, using a single GeoEvent Definition is not as common a case as you might think.

In your case, you don't need to update the geometry, just the status. So we'll prepare one GeoEvent Definition for the schema of the input and a second GeoEvent Definition for the target, the feature layer you want to update. The first GED we'll call EventStatus-Input. The CSV file I'm using to simulate events looks something like:

Alpha,8/25/2012 13:21:36 PM,1
Bravo,8/25/2012 13:23:52 PM,1
Charlie,8/25/2012 13:26:11 PM,1
Delta,8/25/2012 13:29:15 PM,1
Echo,8/25/2012 13:31:30 PM,1
...

So I need a GeoEvent Definition with three fields: TrackId, DateTime, CodedValue.

The second GED is imported from a published feature layer named 'UpdatePolyStatus'. The feature layer only has two attributes: Name and StatusCode. Of course, when I import the GED I get a third field named 'shape'. In your case we don't want to use GEP to update the shape/geometry, so we edit the imported GED and delete that field. Now we have two event definitions which don't look much like one another, so when we design our GeoEvent Service we will need to include a Field Mapper (see illustration below): [ATTACH=CONFIG]29512[/ATTACH]

It is important to recognize that the target feature layer has an attribute field which uniquely identifies each feature/record. In this example the received 'TrackId' value - 'Alpha', 'Bravo', 'Charlie', etc. - maps to the 'Name' field specified as the Unique Feature Identifier Field when configuring the Output connector. GeoEvent Processor will look for features (polygons in this case) whose 'Name' matches a received 'TrackId' and will then update that record's 'StatusCode' with the received 'CodedValue'.

My target feature layer's symbology was configured to use StatusCode to select a color for each polygon. I verified that, as data received by GeoEvent Processor was used to update a polygon's attribute value, refreshing the map display would update the map to display the polygon using the color corresponding to the StatusCode for that feature. If my data feed had the coordinate values for each polygon's vertices correctly formatted within a geometry, I could use GeoEvent Processor to update the position and/or shape of each polygon, but for this example we wanted to leave the polygons static and only update their status attribute. Hope this helps - RJ
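The Unique Feature Identifier Field matching described above can be sketched in a few lines. This models the principle only (a dictionary stands in for the feature layer, keyed by 'Name'); the real lookup happens inside the Output connector against the published service.

```python
# Features keyed by the Unique Feature Identifier Field ('Name'):
features = {
    "Alpha": {"Name": "Alpha", "StatusCode": 0},
    "Bravo": {"Name": "Bravo", "StatusCode": 0},
}

def update_status(features, track_id, coded_value):
    """Match a received TrackId against each feature's 'Name' and
    update only that record's StatusCode, leaving geometry alone."""
    feature = features.get(track_id)
    if feature is not None:
        feature["StatusCode"] = coded_value
    return feature

update_status(features, "Alpha", 1)  # only Alpha's status changes
```

Note that an event whose TrackId matches no feature updates nothing, which is why a reliable, unique 'Name' value on every target feature matters.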
12-02-2013 05:01 PM

POST
Hello Krzysztof, Please take a look at the Forum thread GeoEvent Processor Web Socket Tutorial. It appears that we have an issue leveraging the out-of-the-box connectors for the 10.2.0 product to consume feeds available from http://geoeventsample1.esri.com/demosite. We are currently looking for a workaround. We apologize for the inconvenience. I'll post to both this thread and the thread referenced above if we can find a workaround which doesn't require a patch to the product. Best regards - RJ
12-02-2013 09:21 AM

POST
Hey Darryl, I was able to replicate the issue on 10.2.0 and am looking for a workaround. Parsing the logging message you attached, it looks like you're running into a component-level validation error: the adapter responsible for interpreting JSON is complaining that it doesn't have a GeoEvent Definition it can use.
Failed to set Adapter (com.esri.ges.adapter.inbound/Generic-JSON/10.2.0)
properties:
com.esri.ges.core.validation.ValidationException:
com.esri.ges.adapter.genericJson.JsonInboundAdapter
properties validation failed:
Mandatory property 'ExistingGeoEventDefinitionName' is not set.
at com.esri.ges.core.property.PropertyCollectionBase.validate(PropertyCollectionBase.java:184)
at com.esri.ges.manager.stream.internal.InboundAdapterProxy.initAdapter(InboundAdapterProxy.java:74)
at com.esri.ges.manager.stream.internal.AdapterProxyBase.installAdapter(AdapterProxyBase.java:280)
at com.esri.ges.manager.stream.internal.AdapterProxyBase.setAdapterUri(AdapterProxyBase.java:73)
at com.esri.ges.manager.stream.internal.StreamBase.setAdapterUri(StreamBase.java:178)
(65 more ...)
This issue does not manifest itself with the 10.2.1 candidate release. The 10.2.1 product is being prepared for release to Esri Distributors and we expect that it will be available publicly the second week of January 2014. Can you please let me know whether you are able to wait for the 10.2.1 product release? If I need to support you on the 10.2.0 product and we need to prepare a patch, the patch will likely not be available to you until the 10.2.1 product is publicly released. - RJ
12-02-2013 08:50 AM

POST
Hello Brian - Can you provide us with some additional information? I'm assuming that you are using ArcGIS for Server 10.2 with the 10.2.0 release of GeoEvent Processor. Are you on a Windows system? What input connector are you using? Is it one of the out-of-the-box connectors? Have you customized a connector to use an out-of-the-box adapter and transport, or are there components customized using the SDK in play here? Can you describe the source of the event messages you are receiving, what if any filtering or processing is taking place, and describe the output connector you are using to send the event data out from GEP? - RJ
11-27-2013 09:00 AM