POST
|
Praveen, In your GeoEvent Processor's Manager, click "Inputs", then "Add Input", and select "Receive Text from a UDP Socket". This connector uses a very simple network transport library that binds to the port you specify; each UDP packet received is parsed by the Text adapter. The Text adapter's default settings parse the data as events separated by newline characters, so make sure you send a newline character after each event. The values should be separated by commas. You can change these settings if they don't suit your needs. If you need to parse the data with a different parser (like JSON), you will need to create a different connector in Manager (under "Site" -> "GeoEvent Processor" -> "Connectors"). Good luck!
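To illustrate, here is a minimal Python sketch of a sender that matches the Text adapter's defaults: comma-separated values, one event per UDP packet, each terminated by a newline. The host, port, and field values are placeholders, not from the original question.

```python
import socket

# Illustrative events only: (track id, latitude, longitude).
events = [
    ("TRUCK-1", "34.05", "-118.24"),
    ("TRUCK-2", "33.94", "-118.41"),
]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for ev in events:
    # Comma-separated fields with a trailing newline, per the adapter defaults.
    line = ",".join(ev) + "\n"
    sock.sendto(line.encode("utf-8"), ("localhost", 5565))
sock.close()
```

Point the destination at the port you configured for the input.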
10-18-2013
11:14 AM
|
0
|
0
|
431
|
POST
|
Yes, it looks like you will need some custom work done to support this. The built-in connector is a combination of a regular TCP socket Transport and a UTF-16-based Adapter (parser). The Transport hands raw bytes to the Adapter for parsing. Since the Base64 decoding process converts bytes to bytes, you could potentially swap out our built-in Transport or Adapter, depending on which one you want to preserve. For example, if you think the built-in Adapter is really nice, you could write your own TCP Transport that automatically decodes the bytes. Or you could use the built-in TCP Transport and write your own custom Adapter to decode the bytes and convert them to GeoEvents. Either way, there are code samples in the SDK showing how to write a custom Transport and Adapter.
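The decoding step itself is the easy part. Here is a small Python illustration (not the SDK API, which is Java) of the byte-level work a custom Transport or Adapter would do: take the raw Base64 bytes off the socket and decode them before parsing.

```python
import base64

# Illustration only: decode Base64 payload bytes into the text the parser expects.
def decode_payload(raw: bytes) -> str:
    return base64.b64decode(raw).decode("utf-8")

# Simulate what arrives on the wire, then round-trip it.
encoded = base64.b64encode(b"TRUCK-1,34.05,-118.24\n")
print(decode_payload(encoded))
```

In a real custom component the same decode would sit between receiving bytes and handing them to the parsing logic.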
10-02-2013
08:34 AM
|
0
|
0
|
178
|
POST
|
There are two ways to approach this problem. You can change the port where GeoEvent Processor is listening for https traffic from 6143 to another port. To do this, find the file called "org.ops4j.pax.web.cfg". It is most likely in this path: "c:\Program Files\ArcGIS\Server\GeoEventProcessor\etc". Open this file in a text editor and around line 4 you should see where the secure port is being set to 6143. Change this to a different value (the default https port is 443), save the file, and restart your GeoEvent Processor in the Windows Services Control Panel. This will also affect the port you use to access the Manager. The second option is to use a reverse proxy. The Web Adapter is designed for use with ArcGIS Server services (like Map Services or Feature Services), and has not been tested with the GeoEvent Processor's REST endpoint. You might have to use a different reverse proxy if this is the approach you need to follow.
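For the first option, the relevant lines in "org.ops4j.pax.web.cfg" look roughly like the fragment below. The exact property names can vary between versions, so edit whichever line in your copy of the file carries the 6143 value rather than copying this verbatim:

```properties
# c:\Program Files\ArcGIS\Server\GeoEventProcessor\etc\org.ops4j.pax.web.cfg
# Illustrative fragment; property names may differ in your version.
org.osgi.service.http.port.secure=443
```

Remember to restart the GeoEvent Processor service after saving.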
09-25-2013
01:40 PM
|
0
|
0
|
228
|
POST
|
There is likely to be a connector released in the near future that supports ActiveMQ. This could be used to resequence the messages. However, the internal processing of GeoEvents in the GeoEvent Processor happens in multiple parallel processing threads. Depending on how fast these different threads run, the GeoEvents could come out of the GeoEvent Processor in a different order than they went in. This doesn't usually happen, but it is possible.
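If your messages carry a sequence number, the resequencing idea can be sketched like this (a hypothetical illustration, not a built-in processor): buffer out-of-order events in a min-heap and release each one only once all earlier sequence numbers have been released. Note that a sequence number that never arrives would stall this buffer indefinitely, so a real implementation would need a timeout.

```python
import heapq

def resequence(events):
    """events: iterable of (sequence_number, payload), possibly out of order."""
    heap, next_seq, released = [], 0, []
    for seq, payload in events:
        heapq.heappush(heap, (seq, payload))
        # Release everything that is now contiguous from next_seq.
        while heap and heap[0][0] == next_seq:
            released.append(heapq.heappop(heap)[1])
            next_seq += 1
    return released

print(resequence([(1, "b"), (0, "a"), (2, "c")]))  # ['a', 'b', 'c']
```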
09-24-2013
08:54 AM
|
0
|
0
|
204
|
POST
|
Ken, There is no built-in geocoder in the GEP. The quickest way to do this would probably be to add a custom processor that calls out to the Esri geocoding service and requests coordinates from a description.
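As an illustration, here is roughly the kind of request such a custom processor might build in Python. The URL is the public Esri World Geocoding Service endpoint; check the current REST documentation for required parameters and whether a token is needed before relying on this.

```python
from urllib.parse import urlencode

GEOCODE_URL = ("https://geocode.arcgis.com/arcgis/rest/services/"
               "World/GeocodeServer/findAddressCandidates")

def build_geocode_request(description: str) -> str:
    # Ask for the single best candidate as JSON.
    params = {"SingleLine": description, "f": "json", "maxLocations": 1}
    return GEOCODE_URL + "?" + urlencode(params)

print(build_geocode_request("380 New York St, Redlands, CA"))
```

Fetching that URL returns candidate locations whose x/y the processor could copy into the GeoEvent's geometry.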
09-12-2013
10:02 AM
|
0
|
0
|
150
|
POST
|
One of the sample processors (volume control) demonstrates how to detect (and filter) events that come through it when their rate exceeds a user-defined threshold. Instead of filtering those events, you could simply have the processor produce alerts when the detected rate exceeds the threshold. However, like all of the bundled samples, it is meant for instructional purposes, so it omits memory-usage safeguards, error checking, and other things that are part of a complete solution.
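The core idea can be sketched in a few lines (this is a simplified Python illustration of the technique, not the sample's actual Java code): keep timestamps of recent events in a sliding window and flag when the count in the window exceeds the threshold.

```python
from collections import deque

class RateAlert:
    def __init__(self, max_events: int, window_seconds: float):
        self.max_events = max_events
        self.window = window_seconds
        self.times = deque()  # timestamps of recent events

    def observe(self, now: float) -> bool:
        """Record one event; return True when the rate threshold is exceeded."""
        self.times.append(now)
        # Drop timestamps that have fallen out of the window.
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.max_events

alert = RateAlert(max_events=2, window_seconds=10)
print([alert.observe(t) for t in (0, 1, 2)])  # [False, False, True]
```

Where the sample filters the excess events, an alerting variant would emit a notification GeoEvent instead when `observe` returns True.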
09-11-2013
09:16 AM
|
0
|
0
|
110
|
POST
|
This sounds like a missing TRACK_ID tag. Can you open the GeoEvent Definition? (In Manager, click "Site", then "GeoEvent Definitions", then click the little pencil icon next to the name of your GeoEvent Definition.) Is one of the fields tagged with the TRACK_ID tag? If not, the GeoEvent Processor will not be able to group GeoEvents by the person who generated them; each GeoEvent is treated as coming from a unique person. Try adding the TRACK_ID tag to whatever field uniquely identifies the person generating the GeoEvent, and save your changes to the GeoEvent Definition. I believe this will take effect immediately, but you can stop and then start your GeoEvent Service again just to make sure. -Ryan
08-27-2013
01:24 PM
|
0
|
0
|
250
|
POST
|
"Is there a definitive way to confirm GEP is successfully receiving/processing/outputting everything, other than what Monitor is reporting?"

The Monitor counts and the log files are the only ways for you to get information from the GEP system at 10.2. The only exception is if you develop your own Adapter/Transport/Processor, in which case you can connect a debugger and step through your own code. Note that setting the log level to DEBUG for specific components within the GEP is possible.

"Or is Monitor a very trustworthy tool for determining that GEP has received/processed/output everything sent to it?"

I would say that the Monitor page is very useful for determining whether a GeoEvent got dropped by the Input, Service, or Output. In your case, it seems pretty clear that the messages are being dropped in the Output or in the transaction between the Output and the Feature Service. But the Monitor page doesn't give you details on what takes place inside the Output. The problem with getting much more from the Monitor here is that the Monitor is generic to all Outputs, and you need details on the inner workings of a specific Adapter and Transport.

"What ways exist to track down where the message 'dropping' is occurring (especially given there are no errors in GEP or AGS logs)?"

There are four possible culprits that I can identify from a GEP perspective: Adapter, Transport, Output, and Feature Service. I'm lumping everything downstream from the Feature Service (like SQL Server) in with the Feature Service. As mentioned, there's no definitive way for you to determine whether messages get passed successfully from Adapter to Transport, but you do have a way to intercept the data that goes from the Transport to the Feature Service. All of this data is transferred over HTTP and can be intercepted with Wireshark. However, this would take a lot of patience, as you would have to figure out which message got dropped, and it sounds like you are only losing about 0.1% of your data. That's a lot of data to sift through looking for the lost GeoEvent. You can also set the log level of your Transport to DEBUG and see the content of all features sent to the feature service, and the response from the server for each of those insert statements. But that's still a lot of data to sift through.

"Is it possible that GEP is successfully passing on everything, but ArcGIS Server is not successfully writing all records?"

That is possible. I would look at the memory and CPU usage of the GEP and ArcGIS Server processes on your machine to determine whether one of them looks to be behaving abnormally. The ArcGIS Server logs may contain something useful as well.

"Could this be a SQL Server thing?"

That seems possible as well, but I wouldn't know how to systematically diagnose an error there.

"What other places in the entire cosmos of this data flow (not just within GEP) are potential suspects?"

That's difficult to say from the perspective of GEP. There may be settings used when publishing the feature service that affect its performance, as well as a plethora of database settings, network settings, etc.

"Is there any flaw in our methodology that could explain the discrepancy between what GEP reports and what's written to the DB?"

The only possible logical error I can think of would be if you were using the "Update Feature Service" connector. This connector will only update each track ID once per buffer interval. For example, if you set it to buffer the GeoEvents for 3 seconds before sending an update to the feature service, then send in two GeoEvents with the same track ID, only one of those GeoEvents will get written to the Feature Service.
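One cheap cross-check worth doing: periodically compare the Monitor's output count against the number of records actually in the target layer, using the feature service's standard REST query endpoint. The sketch below just builds that query URL in Python; the layer URL is a placeholder, and a secured service would also need a token.

```python
from urllib.parse import urlencode

def count_query_url(layer_url: str) -> str:
    # Standard feature-layer query asking only for the record count as JSON.
    params = urlencode({"where": "1=1", "returnCountOnly": "true", "f": "json"})
    return layer_url + "/query?" + params

# Fetch this URL (e.g. with urllib.request.urlopen), read the "count" field
# of the JSON response, and compare it with the Monitor's Out count.
print(count_query_url(
    "https://myserver/arcgis/rest/services/Tracks/FeatureServer/0"))
```

A persistent, growing gap between the two counts confirms the drop is at or beyond the Output, without sifting packet captures.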
08-19-2013
03:07 PM
|
0
|
0
|
701
|
POST
|
Each GeoEvent is processed independently of any others, so there would have to be a processor that did this for you. None of the built-in processors in the 10.2 release do this. However, one of the Processor samples in the SDK (Volume Control Processor) does something very similar to what you are asking for. It controls the "volume of traffic" through the processor. For example, you could configure it to only allow 1 message every 60 seconds to go through it, and use that to limit the number of GeoEvents that are sent to your email output.
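The "one message per interval" behavior reduces to a small throttle, sketched here in Python as an illustration of the technique (this is not the SDK sample's actual code): remember when the last event was allowed through, and drop everything until the interval has elapsed.

```python
class Throttle:
    def __init__(self, interval_seconds: float):
        self.interval = interval_seconds
        self.last = None  # time of the last event allowed through

    def allow(self, now: float) -> bool:
        """Return True if this event may pass; False if it should be dropped."""
        if self.last is None or now - self.last >= self.interval:
            self.last = now
            return True
        return False

gate = Throttle(60)
print([gate.allow(t) for t in (0, 30, 60)])  # [True, False, True]
```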
08-14-2013
01:25 PM
|
0
|
0
|
177
|
POST
|
I guess I could create another geofence sync and follow the same pattern? For now, that's the cleanest way to do it without doing some tricky buffer naming in the feature service. But note that creating two sync rules will double the amount of RAM consumed by GeoFences in the GeoEvent Processor, so keep an eye on the RAM your GeoEvent Processor consumes. Actually, it might make more sense to populate a field in the buffer feature service with a textual description of the traffic level in the buffer, such as "LIGHT", "MEDIUM", or "HEAVY". Then configure your sync rule to use that field to populate the geofence's category. Now you can create two incident processors: one that uses geofences in the "HEAVY" category, and one that uses geofences in the "MEDIUM" category.
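Whatever writes the traffic feed into the buffer layer would then classify each reading into one of those category strings. A hypothetical Python sketch, with made-up speed thresholds and field values purely for illustration:

```python
def traffic_category(avg_speed_mph: float) -> str:
    """Map a road segment's average speed to the buffer layer's category field."""
    if avg_speed_mph < 20:
        return "HEAVY"
    if avg_speed_mph < 45:
        return "MEDIUM"
    return "LIGHT"

print([traffic_category(s) for s in (10, 30, 60)])  # ['HEAVY', 'MEDIUM', 'LIGHT']
```

The sync rule then picks the category up from that field, and each incident processor filters on the category it cares about.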
08-08-2013
12:28 PM
|
1
|
0
|
706
|
POST
|
1. It stops at 10 by default. There appears to be a configuration setting that controls this, called "log4j.appender.out.maxBackupIndex", in the file "org.ops4j.pax.logging.cfg" in the "etc" folder of your GeoEvent Processor. There are a few other settings in that file that control file size and other details. You are actually asking about the File appender, which is just one output from the GEP log system (the Log panel in Manager is another). For more details about configuring appenders, see the Log4j manual.

2. Yes, I suspect that's what is really going on, but the Adapter isn't keeping up because it sounds like it got completely stuck (since you said it completely stopped sending out GeoEvents). From the GEP perspective, it knows that raw bytes were passed into the Adapter, but no GeoEvents are coming out.

3. The benchmarks measure how fast GEP can pass a simple message all the way through from beginning to end. This assumes minimal parsing of data in the Input, and minimal formatting of data in the Output.

4. The Text Adapter (which I assume you are using) is single-threaded in release 10.2. So if the parsing of the incoming text data is the bottleneck, you would have to create two Inputs to get two separate parsing threads. But you would have to have control over the incoming data, and have it send half of the text messages to one Input and half to the other. Again, you said data completely stopped flowing, so I suspect something in the Text Adapter got completely stuck.

5. You mentioned that you restarted GEP when stopping and starting the Input didn't work. Another intermediate step might be to delete the offending Input and recreate it. When you restart an Input, there are data structures in memory that are not deleted and may be in a bad state. Deleting the Input and recreating it will cause those structures to be recreated and might fix the problem.
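For point 1, the relevant fragment of "org.ops4j.pax.logging.cfg" looks roughly like this. The maxBackupIndex property name is the one mentioned above; the companion file-size property and its value here are standard Log4j 1.x RollingFileAppender settings shown for illustration, so confirm the names against your own copy of the file:

```properties
# etc\org.ops4j.pax.logging.cfg — illustrative fragment
log4j.appender.out.maxBackupIndex=10   # number of rolled-over log files kept
log4j.appender.out.maxFileSize=1MB     # size at which the current file rolls over
```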
08-08-2013
12:25 PM
|
0
|
0
|
588
|
POST
|
That sounds like something you can accomplish. I would suggest that you store the buffers around each road segment as a polygon layer in a feature service, but add an extra field representing the "active" state. Then use the incoming traffic feed to update that active field to turn on/off depending on how heavy the traffic is for the road segment that you used to generate the buffer. Next create a sync rule that pulls in those buffers as geofences. The sync rule should be configured to use the active field in the buffer feature service, which in turn is populated by your live road sensor data. Then feed your incoming vehicle locations to an incident detector that uses those geofences, and it will only generate incidents for vehicles entering "active" (heavy traffic) areas. Since the buffers don't change their shape, you don't need to change the feature service at all except to turn that active flag on/off.
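Flipping that active flag from the traffic feed is an ordinary feature edit. Below is a hypothetical Python sketch of the request an output could send through the feature service's standard applyEdits endpoint; the layer URL, field names, and OBJECTID are placeholders, and a secured service would also require a token.

```python
import json
from urllib.parse import urlencode

def build_apply_edits(layer_url: str, objectid: int, active: bool):
    """Build the URL and form body for a single-attribute update."""
    updates = [{"attributes": {"OBJECTID": objectid, "active": active}}]
    body = urlencode({"updates": json.dumps(updates), "f": "json"})
    return layer_url + "/applyEdits", body

url, body = build_apply_edits(
    "https://myserver/arcgis/rest/services/Buffers/FeatureServer/0", 42, True)
print(url)
```

POST the body to that URL (e.g. with urllib.request) and the geofence sync rule will pick up the new active state on its next refresh.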
08-08-2013
11:58 AM
|
0
|
0
|
706
|
POST
|
There is not currently a publicly accessible demonstration server. What kinds of actions were you wanting to try out? If you just need a source of streaming data, we could probably find one. If you need a server where you can create/modify streams of data then you would probably be better off with a server that you have exclusive access to so that other people aren't modifying streams at the same time as you are.
08-08-2013
06:31 AM
|
0
|
0
|
617
|
POST
|
Yes, you can do this. In fact the Incident Detector does this without any special conditions set by you. You can disconnect the "gfincoming" input from your Incident Detector and remove the "Active = true" from the conditions. Instead, configure your synchronization rule to have the "active" status of each geofence driven by a field in the source feature service like in the attached screenshot. [ATTACH=CONFIG]26575[/ATTACH] Any geofences that are pulled into GeoEvent Server will be active only if that field in the Feature Service was "true". If you want to have that feature service populated by your "gfincoming" input, you can create an output that goes to the geofence feature service, and connect the "gfincoming" input to that new output.
08-08-2013
06:21 AM
|
0
|
0
|
706
|
POST
|
I think your guess is exactly right. The input is not using the GeoEvent definition you created. Go to the Inputs tab in the Manager web pages. Click the edit button (looks like a pencil) next to the name of your input. Expand the Advanced section. Make sure the radio button next to "Incoming Data Contains GeoEvent Definition:" says "No". Next make sure "Create Fixed GeoEvent Definitions:" is "No". Now make sure the "GeoEvent Definition Name" has the GeoEvent definition you created. Click the Save button. This should start creating the right type of GeoEvents. Unfortunately, you will have to delete all those other definitions (hopefully there aren't too many).
08-06-2013
10:01 AM
|
1
|
0
|
224
|