POST
Correction to step 3, "Select 'Trace' for 'Log Level'" - completely misstated, apologies. You should set the logging level to 'INFO' for 'ROOT' (not 'TRACE') in order to reduce log verbosity. Setting the logging level to 'TRACE', as I suggested, turns verbose logging on. - RJ
Posted 01-06-2014 04:05 PM

POST
In this case I would recommend that you alter your Input component's configuration to indicate that, no, the event stream does not contain the name of the event definition. You will want to specify that, yes, you do want a fixed event definition to be created and provide a name which does not include the ':' which the product currently recognizes as an illegal character. Please see the attached screenshot below. The event definition created will be reused for every event whose signature (number of attribute fields and data types) matches the first event received. - RJ
Posted 01-06-2014 10:25 AM

POST
Hello Ayman - Thank you for clarifying what you were trying to accomplish. My apologies for misunderstanding... At the 10.2 / 10.2.1 release, the Output connector that logs event data to a CSV system file won't support what you are trying to do dynamically ... unless you know in advance the track IDs of the vehicles being reported by your AVL system.

For example, if you knew that vehicles TRC2478, TRC290, and TRC2812 were expected to report, you could use filters to isolate events for each reporting vehicle and direct those vehicle reports to separate Output connectors. You would need to configure each Output connector to write to a file or sub-folder dedicated to a specific vehicle.

If the track IDs of the vehicles being reported by your AVL system are not known in advance, then pre-configuring Output connectors and addressing the need with event filtering is not an option. In that case you would have to develop a custom transport using the GeoEvent Processor SDK. Your custom transport could parse the outbound stream to discover the track ID, determine whether a system file/sub-folder had been established for that vehicle, and take appropriate action. The ability for a transport to query an outbound stream for a track ID was developed for the 10.2.1 product release.

As a side note, Notifications (e-mail, SMS message, etc.) have been enhanced to enable service designers to retrieve address information from event attribute data in order to dynamically direct notifications to recipients. This capability will be available in the 10.2.2 product, scheduled for release in March 2014.

If similar capability is important to your successful adoption of GeoEvent Processor when working with system file output, please post the idea to the ArcGIS Ideas portal (http://ideas.arcgis.com). Community support for an idea helps us prioritize future product enhancements. Be sure when submitting ideas to tag the idea with "GeoEvent Processor" and check the "ArcGIS Server" category. Regards - RJ
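The per-vehicle routing such a custom transport would perform can be sketched in a few lines of Python. This is a minimal illustration of the idea, not GeoEvent Processor SDK code; the field layout (track ID as the first comma separated value) and the per-vehicle file naming are assumptions made for the example.

```python
import csv
import io
import os

def route_report(record_line, out_dir, track_id_field=0):
    """Append one CSV vehicle report to a file dedicated to its track ID,
    creating the file the first time a new vehicle reports."""
    fields = next(csv.reader(io.StringIO(record_line)))
    track_id = fields[track_id_field]
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{track_id}.csv")
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(fields)
    return path
```

For example, `route_report("TRC2478,25.32,51.48", "avl_out")` would append the record to `avl_out/TRC2478.csv`, while a report from TRC290 would land in its own file.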
Posted 01-06-2014 09:09 AM

POST
From GeoEvent Processor Manager:
1. Select 'Logs' to switch to the 'View Log Messages' page
2. Click 'Settings' to display the 'Log Settings' dialog
3. Select 'Trace' for 'Log Level'
4. Select 'ROOT' for 'Logger'
5. Click 'Save'
This should reset your GeoEvent Processor logging to a non-verbose level.
Posted 01-03-2014 06:01 PM

POST
Hello Dennis - Yes, you can edit a GeoEvent Definition by editing the XML you export from Manager, but this isn't exactly recommended. The 10.2.1 product release will include the ability to edit hierarchical (group) elements as well as cardinal (list) elements; the attached ZIP contains an MP4 video which shows this. The 10.2.1 product is scheduled for public release next week (January 7th).

If you do find yourself needing to edit and/or create an event definition outside of Manager, I've included a screenshot in the ZIP which shows how the XML nodes are nested for the 10.2.1 release. I don't think this has changed since 10.2 - but that is part of the reason for recommending that you not edit the XML yourself. If the development team decided to change the node structure, we would consider that an internal change and not necessarily advertise it. I wouldn't want you to borrow trouble by editing a file we didn't intend you to edit. Proceed with caution. - RJ
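If you do hand-edit the exported XML, a minimal sanity check before importing it back is to confirm the file still parses and to eyeball the node nesting. The sketch below uses only Python's standard library; the element names in the test data are hypothetical, not the product's actual node structure.

```python
import xml.etree.ElementTree as ET

def check_definition_xml(path):
    """Parse an exported definition file and return the tag path of every
    node, so the nesting can be reviewed; raises ET.ParseError if the
    edited XML is no longer well-formed."""
    root = ET.parse(path).getroot()
    paths = []

    def walk(node, prefix=""):
        p = f"{prefix}/{node.tag}"
        paths.append(p)
        for child in node:
            walk(child, p)

    walk(root)
    return paths
```

A parse failure here is a strong hint the file will not import cleanly into Manager.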
Posted 01-03-2014 12:59 PM

POST
Hello Ayman - What you describe I see folks doing all the time - by accident. Consider the event data illustrated below (ignore the empty lines; they are there only to help make the data more readable).

If an Input such as 'Receive text from a TCP Socket' were configured with the Incoming Data Contains GeoEvent Definition parameter set 'Yes', and the Create Fixed GeoEvent Definitions parameter also set 'Yes', GeoEvent Processor would receive the first event for flight SWA2706 and look for an event definition with that name. Failing to find one, it would create a new one. The next event, for flight SWA724, would likewise fail to find an existing event definition with that name and create a new one. This process would continue for the first nine events (as the tutorial's data only has nine uniquely identified flights) before existing event definitions would be found by name and used. You can imagine how, if an analyst did not have significant control over the data being received, dozens and dozens of event definitions could conceivably get created which differ in name only. I included an illustration of this following the sample event data below.

Best practice is to know the format of the event data you expect to receive, create a suitable event definition for that schema, and couple an Input with a specific event definition to receive the expected data. Normally you wouldn't expect the event schema of data coming over a dedicated channel to change, and if a provider were sending different classes of events over a single channel you would probably only want to take one particular class of the data being provided. The ability to configure an Input to create event definitions for you based on data discovery is intended to make the product more generic and more user-friendly. But it does not absolve you of your responsibility to know and understand the structure of the data you expect to receive. - RJ
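The definition-per-flight behavior described above boils down to a lookup-or-create on the first field's value. A sketch of that logic, with made-up flight IDs standing in for the tutorial's data:

```python
def simulate_ingest(events, id_field=0):
    """Mimic an Input with 'Incoming Data Contains GeoEvent Definition'
    set 'Yes': the first field's value is treated as the definition name,
    and a new definition is created whenever that name is unknown."""
    definitions = {}
    for line in events:
        name = line.split(",")[id_field]
        if name not in definitions:
            # Failing to find a definition by this name, create one from the
            # discovered schema (the field count stands in for it here).
            definitions[name] = len(line.split(","))
    return definitions

# Nine uniquely named flights -> nine definitions differing in name only.
flights = [f"SWA{n},33.9,-118.4"
           for n in (2706, 724, 1042, 88, 301, 455, 19, 672, 950)]
```

Replaying the same stream finds the existing definitions by name and creates no more, which is exactly why the count stops at nine in the tutorial's data.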
Posted 01-02-2014 01:28 PM

POST
Hello Ayman - The situation I think you are describing, a GeoEvent Service with active failover, is not configurable out-of-the-box. You would need to implement a custom service component using the GeoEvent Processor SDK to either detect or anticipate the failure, and then handle the failure appropriately. The Output components provided out-of-the-box have no callback mechanism to alert GeoEvent Processor that the Output failed to correctly handle data it received. There is no way to alert the service containing the Output that an event needs to be reprocessed along a different event path. If an Output is responsible for updating a feature service, for example, but the server hosting the feature service is unreachable for some period of time, you will likely see the error logged but you cannot configure GeoEvent Processor to automatically recover and re-process the event. - RJ
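What such a custom component would have to implement can be sketched as an explicit retry-then-fallback wrapper around the send operation. This is an illustration in Python, not SDK code; the primary and fallback callables are stand-ins for Output transports.

```python
def send_with_failover(event, primary, fallback, retries=2):
    """Sketch of what a custom output component would implement: try the
    primary sink, and on repeated failure re-route the event to a fallback.
    Out-of-the-box Outputs have no such callback into the service."""
    for _ in range(retries + 1):
        try:
            return primary(event)
        except ConnectionError:
            # e.g. the server hosting the feature service is unreachable
            continue
    return fallback(event)
```

The point of the sketch is the callback the product lacks: the caller only knows to take the alternate event path because the primary sink raises into it.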
Posted 01-02-2014 12:38 PM

POST
Hello Ayman -

"I have a device that sends the following string to GEP: "POS:TRAFFAVL0049,1363478415,+25.32340050,+51.48632812,-12.847000,548946,2800847,3.9,141.7,35,16,0,1.570,44,39R". The first string, POS, is a static value, but next to the ':' is a dynamic value; in the example above it's TRAFFAVL0049."

A quick remark on your input - since the entire string is quoted, your event really only has one input field: a literal string. The fact that the string contains commas is irrelevant. If you want to send events into GeoEvent Processor whose attribute values are comma separated values, you need to drop the quotes so that the attributes will be recognized as separate values.

"As I understand it, the string sent to GEP should contain, as its first value, the name of a GeoEvent Definition. How can I create a GeoEvent Definition based on the above string?"

Receiving event data as comma separated text, you can configure an Input connector ('Receive text from a TCP Socket', for example) to use the first field's value to determine an appropriate event definition to apply to the remainder of the event's payload (e.g. the remaining comma separated values). But you don't have to... Figure 1 below illustrates the default: the Input is configured to expect the event definition to be specified as the first value. If you change the response to the first Yes/No parameter to 'No', a new parameter will appear asking if you want GEP to attempt to create event definitions for events it does not recognize, based on data discovered in the event (see Figure 2). If you configure your Input to create a fixed GeoEvent Definition, you should specify a name for the event definition that will be created. This implies that all of the events received on the specified port will share a common schema - a common practice, but not a requirement imposed by the product.

Figure 3 illustrates a more generic case in which you specify that, no, the event data does not specify an event definition, and no, you do not want GeoEvent Processor to attempt to create one. Instead, you specify that all events received on the specified port should be evaluated using the specified event definition - 'AVL-Data' in the illustration. In this case you would need to either create the event definition within Manager, or repurpose an existing definition created previously.

If you absolutely have to - because you cannot otherwise have the system component which provides the event data remove the quotes - we can probably design a GeoEvent Service which will trim the quotes from the received data string and write the modified event data out as true comma separated values (rather than as a quoted string). A second Input might then be used to re-ingest the processed data ... but you will find this approach presents you with new challenges, such as how you want to handle the processing of duplicate events. Hope this information helps - RJ
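The difference between the quoted string and true comma separated values is easy to demonstrate with standard CSV parsing. A sketch in Python, using a shortened version of the sample record:

```python
import csv
import io

def parse_event(line):
    """Parse one line of event text as CSV."""
    return next(csv.reader(io.StringIO(line)))

quoted = '"POS:TRAFFAVL0049,1363478415,+25.32340050,+51.48632812"'
unquoted = 'POS:TRAFFAVL0049,1363478415,+25.32340050,+51.48632812'

# The fully quoted line is a single literal string; the commas inside
# it are not treated as field separators.
assert len(parse_event(quoted)) == 1

# Dropping the quotes yields separate attribute values.
assert len(parse_event(unquoted)) == 4

# A service that trims the quotes recovers the same attribute values:
assert parse_event(quoted.strip('"')) == parse_event(unquoted)
```

This is why GEP sees one string field rather than fifteen attributes until the quotes are removed.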
Posted 01-02-2014 12:22 PM

POST
You might want to take a look at this thread: lost geoevent services on server reboot (http://forums.arcgis.com/threads/94947-lost-geoevent-services-on-server-reboot)

Summary Points:

We recommend a minimum of 6GB of RAM for GEP (vs. the 4GB minimum recommendation for ArcGIS Server).

GeoEvent Processor has a default 2GB limit for RAM allocation. You can configure this for your instance. Locate the product installation folder ...\ArcGIS\Server\GeoEventProcessor\etc on your system and open the ArcGISGeoEventProcessor.cfg file in a text editor. You should be able to locate the configuration setting toward the top of the file:

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=2048

The default sizes for the queues used by the Event Processing framework can also be adjusted; you would only need to consider that if you were working with high volumes of event data. To adjust the queue sizes, edit the com.esri.ges.messaging.jms.cfg file and increase the defaults to reflect the following:

com.esri.ges.messaging.jms.destinationPolicy.queue.memoryLimit (10 megs)
com.esri.ges.messaging.jms.destinationPolicy.topic.memoryLimit (10 megs)
com.esri.ges.messaging.jms.destinationPolicy.topic.memoryLimit (1 gig)

Regards - RJ
Posted 12-20-2013 07:39 AM

POST
Hello Elisabeth - Attached is a preview draft of the NMEA Connector tutorial. This should be uploaded to the GeoEvent Product Gallery shortly. The draft attached below may have some rough spots (copy editing on the document is not yet complete), but the content is all there. Please let me know if you have any questions or feedback on changes needed to the tutorial. Please note that this draft of the tutorial was prepared for the GeoEvent Processor 10.2.1 product release, which should be publicly available the second week of January 2014. The most significant change you would probably notice is that the Service Designer, the separate application you run to design and publish GeoEvent Services at 10.2.0, was retired. The functionality has been integrated into the GeoEvent Processor Manager at 10.2.1. I mention this so that you are not confused if you go to the GeoEvent Services page in Manager and don't see an 'Add Service' button ... you won't see that until you have the 10.2.1 product release. - RJ
Posted 12-19-2013 06:46 AM

POST
Apologies - forgot to attach the promised performance planning and capacity documents.
Posted 12-18-2013 03:50 PM

POST
"Is each collection item treated as an event?"

Yes. When a JSON structure is received and the node being used as the root is a list, GeoEvent Processor parses the list and sends each item in the list to the adapter as a separate GeoEvent.

"5 inputs, 5 GeoEvent Services, and 5 outputs. Each 'Update Feature' output goes to a separate AGS feature service."

As a service layout, this is reasonable. What we have to manage is the total number of events being received and processed every second, so as to not overwhelm the server. Updating a feature service is currently a bottleneck in the event processing workflow. GeoEvent Processor can generally handle on the order of 800 events per second without any processing or filtering being performed. When updating a feature service, the event traffic needs to be throttled back to 200 - 300 events per second (total, across all running GeoEvent Services). We are developing high capacity stream services with high availability and cluster processing for our next major product release - but that is not going to be publicly available until sometime mid- to late-2014. I've attached two files to this thread which you might find helpful for capacity planning and general product performance.

"The other 4 inputs all retrieve more than 100 items. Usually, not all of the records in the target feature class get updated after an input has run. In fact, after about 12 hours, no further updates occur to the target feature classes with input items > 100."

I don't believe that what the feature service does with the event data it receives will affect resource consumption. That is, it is just as expensive to query a feature service and update 50 features as it is to query the feature service and update zero features. As long as we're not making PUT and GET calls too frequently, on too large a feature dataset, GeoEvent Processor should take the JSON in, convert it to a GeoEvent, query the feature service, and post the event data. It shouldn't matter that after a half-day all necessary features have been processed and the dataset is considered up-to-date. How many total features are in that dataset, and how many events are being received a second, is what matters.

"BTW - the 'java.exe' process has remained at about 600MB since I made the change to max heap size, but now one of the ArcSOC.exe services is consuming 1.4GB memory."

The ArcSOC.exe processes are part of the ArcGIS for Server product. If indeed the issue is with feature service updates, it might make sense that those processes are consuming the lion's share of the server's memory. I might recommend scaling back your solution to run just one of the five GeoEvent Services for a period of time to monitor the system's resources, and then slowly scale up to either add a second service, or stop a "smaller" service and begin running one that handles more data. Running all five services, when 80% of those are each expecting a few hundred events at a shot, might just be overwhelming your server. - RJ
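The throttling guidance above is simple arithmetic: sum the per-service event rates and compare against the 200 - 300 events/sec budget quoted for feature-service updates. A sketch, with hypothetical rates for the five services described in the thread:

```python
def within_update_budget(events_per_sec_by_service, budget=(200, 300)):
    """Compare the combined event rate across all running GeoEvent Services
    against the rough 200-300 events/sec feature-service-update budget
    (vs. ~800 events/sec with no processing or filtering)."""
    total = sum(events_per_sec_by_service.values())
    low, high = budget
    return total, total <= low, total <= high

# Hypothetical per-service rates (events per second):
rates = {"svc1": 40, "svc2": 60, "svc3": 80, "svc4": 50, "svc5": 30}
```

With these made-up numbers the combined rate is 260 events/sec: inside the upper bound of the budget but over the conservative lower bound, which is exactly the regime where scaling back one service at a time to observe resource usage makes sense.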
Posted 12-18-2013 03:46 PM

POST
Hello Dennis - I'd like to provide you some under-the-hood details which might play into the question we're now discussing. Internally, GeoEvent Processor uses ActiveMQ to manage the event queues being sent to each node in a GeoEvent Service. Esri Germany reported that they observed a large number of files being written to a tmp_storage folder beneath the ...\data\activemq folder in the product installation directory. This was a disk space consumption issue for them. We recommended to the Esri Germany team that they increase the wrapper.java.maxmemory setting. Our understanding is that they are feeding a very high volume of events into GeoEvent Processor, which is caching events to disk when the service components are unable to keep up.

Our recommendations for system resources are a minimum 6GB of RAM for GEP (vs. the 4GB minimum recommendation for ArcGIS Server). The default sizes for the queues used by the Event Processing framework can also be adjusted; you would only need to consider that if you were working with high volumes of event data. To adjust the queue sizes, edit the com.esri.ges.messaging.jms.cfg file and increase the defaults to reflect the following:

com.esri.ges.messaging.jms.destinationPolicy.queue.memoryLimit (10 megs)
com.esri.ges.messaging.jms.destinationPolicy.topic.memoryLimit (10 megs)
com.esri.ges.messaging.jms.destinationPolicy.topic.memoryLimit (1 gig)

You will need to stop GeoEvent Processor, edit these files as an administrator (assuming they are beneath C:\Program Files\...), then restart GeoEvent Processor in order for the settings to take effect.

If you've observed the java.exe process which is running with the -Dkaraf.base="C:\<install folder>\GeoEventProcessor" -Dkaraf.data="C:\<install folder>\GeoEventProcessor\data" arguments in its command line actually consuming more than 1024 megs of memory, lowering the wrapper.java.maxmemory value to 512 megs as you suggest will only result in Java throwing out-of-memory exceptions.

How many events are you sending into GeoEvent Processor? What inputs are you using? How many different GeoEvent Services do you have running? I think we need to examine the load you are placing on your Server, by GeoEvent Processing as well as any other running services/applications, before we recommend throttling the memory allowed to the Java process. - RJ
Posted 12-18-2013 08:22 AM

POST
Hello Matt - Thank you for posting. I was able to confirm what you suggest: a user can add/update features in an otherwise non-editable feature service using GeoEvent Processor. I conducted my tests with the 10.2.1 product release, which should be publicly available the second week of January 2014.

Currently, GeoEvent Processor can only discover and target AGOL hosted feature services which you own. You cannot, for example, configure a GeoEvent Processor Output to update or add features to a feature service which has merely been shared with you. GeoEvent Processor looks for items owned by you, listed when you review 'My Content' in your Organization. I suspect that because I 'own' the data, I was allowed to add/update features.

We are, however, investigating this use case with the Server Usage support team. We will reply back to this thread if/when we learn anything more. Best Regards - RJ

Cross Reference: Allow GeoEvent Processor to update non-editable AGOL feature services
Posted 12-16-2013 09:28 AM

POST
Hey Dennis - Attached is an illustration from the 10.2.0 product, supporting what Ryan says above. The interface panels change slightly at 10.2.1 ... but not so much that you can't use what is below. - RJ
Posted 12-13-2013 07:39 AM