POST
@EdenPunter - There are a few ways you could go about capturing the "Name" field as part of the incident. It would help to share exactly how you have your Incident Detector and filters configured. Absent that information, I would consider putting a GeoTagger Processor between the input and the Incident Detector that simply serves to capture the name of a geofence whenever an intersection (or some other spatial operation that isn't enter/exit) occurs. Most of the time the geotag field won't have a value for "Name", but when it does, you can factor that into your opening condition. For example, you could set your opening condition to fire when your tracked asset enters a geofence and the "Name" field is not null. This ensures an incident is created only when the tracked asset has been observed entering a geofence, and you'll also have the name of that geofence on hand.

To keep the "Name" field as part of the incident, you'll also want to make sure the Incident Detector's "Keep Source Fields" setting is set to yes. Setting this to yes preserves the original schema of your tracked asset from before the Incident Detector; put another way, you'll keep the "Name" field that the GeoTagger Processor populated with the geofence name. The closing condition might require a bit of creativity as well, but the TL;DR is: preface the Incident Detector with a GeoTagger to capture geofence names, and set "Keep Source Fields" to yes so you don't throw away the field holding the geofence name.
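If it helps to picture the opening condition, here's a minimal sketch in plain Python (not GeoEvent's expression syntax); the "entered_geofence" flag and the field names are assumptions for illustration only:

```python
# Conceptual sketch (plain Python, not GeoEvent expression syntax) of the
# opening condition described above. The field names "Name" and
# "entered_geofence" are assumptions for illustration only.

def should_open_incident(event: dict) -> bool:
    # The GeoTagger only populates "Name" when the asset spatially interacts
    # with a geofence, so a non-null value also tells you which geofence it was.
    return bool(event.get("entered_geofence")) and event.get("Name") is not None

# An event observed entering the hypothetical "Depot A" geofence
event = {"track_id": "truck-17", "entered_geofence": True, "Name": "Depot A"}
print(should_open_incident(event))  # True
```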
Posted 01-21-2025 06:13 AM

POST
Hi @ChrisCarter3 - Understood! The "Poll an ArcGIS Server for Features" input connector that is available out of the box does not have a property that controls the timeout. That is primarily because the FeatureService transport the connector uses doesn't expose such a property, even if you create your own custom copy of the connector via Manager > Site > Connectors and try to tweak the settings. I'm assuming my colleague @EricIronside was referring to either 1) increasing the polling interval, or 2) using a custom connector whose transport is not FeatureService but a different transport that does expose a property for influencing the client-side timeout.

If you're interested in the latter, you can create your own custom connector via GeoEvent Manager that pairs the JSON adapter with the HTTP transport (rather than the FeatureService transport). It's a bit of an unorthodox workflow, but with a little extra effort you can query your feature service this way, and, most importantly, the HTTP transport does expose an HTTP timeout property. I tested this on my end (Manager > Site > Connectors > create new connector): I set the type to "Input", the adapter to "JSON", and the transport to "HTTP". You can configure whichever properties you want shown or hidden, but the key thing to note in the list of options is that there is now an "HTTP Timeout (in seconds)" property. I called my custom connector "Poll a Feature Service for Esri Feature JSON", but you can call yours whatever you'd like. Once you finish creating this connector, go through the process of adding it as a new input for configuration and use.

The reason I said this is an unorthodox workflow that requires a bit more effort is that you'll need to provide the URL to the feature service you want to query along with the full query string, token, and whatever else might be needed. For example, this is part of what I provided in the URL parameter: "https://<my_geoevent_server>:6443/arcgis/rest/services/SampleWorldCities/MapServer/0/query?where=1%3D1&text=&objectIds=&time=&timeRelation=... ... ...etc etc". A good way to obtain this URL is to perform the query yourself against the REST endpoint, making sure the return format is set to JSON, then copy the full encoded URL into the URL parameter. I also needed to specify the acceptable MIME type as "application/json". Your URL, parameters, and so on will vary.

Both the FeatureService and HTTP transports let you make an HTTP request to a feature service, but where the FeatureService transport is specifically geared toward navigating ArcGIS Server and querying services natively with little effort, the HTTP transport is a more general tool for querying any REST endpoint or API, so it requires more legwork to use with feature service queries.
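If you want to sanity-check the query and its JSON response before pasting the encoded URL into the connector's URL parameter, here's a rough sketch using Python's requests library (the host, service path, and token are placeholders, not values from my setup):

```python
# Sketch only: verify the feature service query returns Esri Feature JSON
# before handing the same (encoded) URL to the custom JSON-over-HTTP input
# connector. The host, service path, and token below are placeholders.
import requests

base_url = ("https://<my_geoevent_server>:6443/arcgis/rest/services/"
            "SampleWorldCities/MapServer/0/query")

params = {
    "where": "1=1",        # return everything; tighten this in practice
    "outFields": "*",
    "f": "json",           # the connector expects Esri Feature JSON
    # "token": "<token>",  # include if the service is secured
}

# Add verify=False only if you're testing against a self-signed certificate.
response = requests.get(base_url, params=params, timeout=60)
response.raise_for_status()
print(response.json().get("features", [])[:1])  # peek at the first feature
```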
Posted 01-16-2025 05:35 AM

POST
For anyone looking at this, I should add that different transports expose different properties for handling timeouts. Transports are just one half of what makes up an input or output connector; the other half is the adapter. For example, the "Poll an ArcGIS Server for Features" input connector pairs the JSON adapter with the FeatureService transport. The FeatureService transport does not contain a property for handling timeouts, hence my response above; I am assuming the timeout in question is a server timeout, in which case the change would need to be made to the feature service in ArcGIS Server. In contrast, the "Push Text to an External TCP Socket" output connector pairs the Text adapter with the TCP transport, and the TCP transport does contain a "Connection Timeout" property for handling timeouts on the GeoEvent side. All of this is to say that the answer to the question above is largely contingent on the type of connector in question when thinking about GeoEvent Server. It's important to consider this, and whether a timeout is client side or server side.
Posted 01-15-2025 06:20 AM

POST
Hi @ChrisCarter3 - GeoEvent Server is a client application that consumes from feature services. The timeout setting you're looking for is usually going to be found server side (i.e., where the feature service resides), not client side. Put another way, ArcGIS Server is where you'll be able to control the service timeout thresholds for any particular service. I'd check out the following documentation: it appears you can adjust the timeout threshold on a service via ArcGIS Server Manager by navigating to the service in question and exploring the "Pooling" tab. Alternatively, if you know the average response time of the service is, say, 45 seconds for the type of query you're trying to perform, you can adjust GeoEvent Server's input to poll (read: query) every 45+ seconds so that GeoEvent Server isn't asking for records faster than ArcGIS Server can return them. That's just an example, by the way; hopefully the query response time isn't that latent. If it is, there are other strategies that can be employed to improve the query response time if you don't want to increase the timeout threshold.
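To make the polling-interval point concrete, here's a tiny sketch (the numbers are hypothetical):

```python
# Tiny illustration of the polling-interval math. The numbers are hypothetical.
average_response_s = 45   # measured/typical time for the service to answer the query
polling_interval_s = 30   # how often the GeoEvent input polls for features

if polling_interval_s <= average_response_s:
    print("Polling faster than the service can answer; requests will back up. "
          f"Consider polling every {average_response_s + 5} s or more, or speed up the query.")
else:
    print("The service should be able to keep up with this polling interval.")
```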
Posted 01-15-2025 05:50 AM

POST
Hi @GarrettMelvin - The above information still largely holds true in the context of ArcGIS Velocity: there are dozens, if not hundreds, of different APIs and data providers out there, each with its own strategy for handling pagination. As a result, there is no one-size-fits-all approach to supporting pagination out of the box. The recommended strategy at this time is to use the gRPC feed type to act as a bridge for bringing in paginated data per the requirements of the API/data provider in question. See gRPC. Another option is to use the HTTP receiver feed and have some other bridge process send data to Velocity. The product team has been exploring how we might support some common pagination methods to achieve a partial out-of-the-box solution, but it has yet to be determined which pagination methods are the most widely used. We would kindly ask that folks submit feedback on this topic to the ideas site, or as an enhancement request with Esri Support Services.
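To illustrate the bridge idea with the HTTP receiver feed, here's a rough sketch; the provider's pagination scheme (page/pageSize), the "results" response key, and both URLs are assumptions you'd adapt to your own API and feed:

```python
# Sketch of a pagination "bridge": pull pages from a provider's API and
# forward each page to an ArcGIS Velocity HTTP receiver feed as JSON.
# The provider URL, the page/pageSize scheme, the "results" response key,
# and the receiver endpoint URL are all assumptions.
import requests

PROVIDER_URL = "https://example.com/api/observations"       # hypothetical API
RECEIVER_URL = "https://<velocity-http-receiver-endpoint>"   # from your feed's configuration

page = 1
while True:
    resp = requests.get(PROVIDER_URL, params={"page": page, "pageSize": 500}, timeout=60)
    resp.raise_for_status()
    records = resp.json().get("results", [])
    if not records:
        break  # no more pages to fetch

    # Forward this page of records to Velocity
    requests.post(RECEIVER_URL, json=records, timeout=60).raise_for_status()
    page += 1
```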
Posted 09-19-2024 10:53 AM

POST
@BenClark - There might be a defect here that's worth investigating on our end, but in the meantime I have a few thoughts/things you could try as a workaround.

1. Before using the Event Joiner, consider shortening the field names in your source definitions so that the resulting joined definition ultimately has shorter fields too. You might even want to go so far as to shorten the name of the definition feeding into the Event Joiner, since definition names are also prepended to the Event Joiner's field names. This suggestion would likely mean using a Field Mapper before the Event Joiner, since the Geotab fields associated with your input definitions are fixed. In other words: Geotab input (i.e., the fixed definition) -> Field Mapper (where you've defined shorter field names and a shorter definition name instead of using the fixed Geotab definition) -> Event Joiner -> optional Field Mapper (to redefine/shorten the joined field names once more) -> output feature layer.

2. Should the above be a no-go, you could try truncating the resulting fields even further by removing the "geotab_exceptionevent___" prefix with your current configuration. That means Geotab input -> Event Joiner -> Field Mapper (where you reduce the field names even further) -> output. This might alleviate the drop-down problem you mentioned before, if I recall correctly.

3. The Event Joiner strips away any tags (such as the TRACK_ID tag). This goes hand in hand with all the suggestions here, so make sure you assign the TRACK_ID tag again in the target definition you specify in the Field Mapper processor (after the Event Joiner). I assume you've done this already, but it's worth double-checking.
Posted 02-22-2024 01:48 PM

POST
Hi @TylerKiehle1 - There's no specific delay setting, but there are two options you could consider:

1. Create a staging folder that GeoEvent doesn't look at. Once the file has been written entirely to the staging folder, use a script or something similar to move it into the folder that GeoEvent is watching and will start reading from (see the sketch below).

2. Configure GeoEvent to look for a file with a specific name. Once Pi creates and writes all of its data to the file, use some sort of task/script to rename it to the name GeoEvent is looking for.
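For option 1, here's a minimal sketch of the mover script; the folder paths, the *.csv pattern, and the "size unchanged for 30 seconds" completeness check are all assumptions to adapt to your setup:

```python
# Sketch for option 1: move files out of a staging folder into the folder
# GeoEvent watches, but only once they appear to be fully written (their
# size has stopped changing). Paths, the *.csv pattern, and the 30-second
# settle window are assumptions.
import shutil
import time
from pathlib import Path

STAGING = Path(r"D:\pi_exports\staging")   # Pi writes here; GeoEvent ignores it
WATCHED = Path(r"D:\pi_exports\incoming")  # GeoEvent's input folder

def is_settled(path: Path, wait_seconds: int = 30) -> bool:
    """Treat a file as complete if its size is unchanged after a short wait."""
    size_before = path.stat().st_size
    time.sleep(wait_seconds)
    return path.stat().st_size == size_before

for file in STAGING.glob("*.csv"):
    if is_settled(file):
        shutil.move(str(file), str(WATCHED / file.name))
```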
Posted 02-15-2024 09:31 AM

POST
Hi @Moi_Nccncc - Yes, you can, provided you're using a geofence synchronization rule on a feature or stream service and you have a string-type field in your data whose literal values are either "true" or "false". Assuming you've met those criteria, you can configure your geofence sync rule to use that field as the "active field". Feature records whose active field value is "true" will continue to be used as geofences; those whose value is "false" will not be used as geofences for further processing. You'll find this option in the sync rule settings. With all of that in place, you can configure a GeoEvent Service to process the geofence records and use different logic paths to set the active field value to true or false based on your needs. It's worth noting that you can also control whether or not a geofence is active based on time extents, but the tried-and-true method is going to be the active field property.
Posted 02-15-2024 04:19 AM

POST
If I were in your shoes, I would give it a try, but obviously there are other details to consider (e.g., what resources are available on the machine, whether GeoEvent Server is competing for those resources, whether other inputs/outputs/GeoEvent Services are running on your instance of GeoEvent Server, etc.). At a glance:

1. 1,500 point geofences doesn't seem alarmingly high. They have simple geometries (x,y), which is a major plus. I'd be more concerned if you had 1,500 complex polygons (i.e., a high vertex count).

2. Updating those AVL "geofences" every minute doesn't seem all that bad either. That is within the scope of what a stream service (feeding a sync rule) could support, though again, this depends on the other factors mentioned above.

3. This is the biggest question: how many areas are there? Are they complex polygons? How frequently are you looking to check them for truck intersection? If they're simple polygons, there aren't many, and you only need their point-in-time status once every minute or so, this could work. If the answer is the opposite (they are all complex polygons, you have hundreds instead of dozens, and you need them updated every 5 seconds), then that could be an issue where performance bogs down and doesn't keep up in real time.

Again, if I were in your shoes, I would put together a quick proof of concept and see how it works for you. That's really the best way to know for certain, especially because there could be other factors at play (like those mentioned above) that I'm personally not aware of. If you find it's not what you want, there are other ways to potentially approach the problem. For example, you could track the trucks again and geotag them with the name of the geofence area they intersect. In another field you could carry the name of the area they're assigned to. If the geofence (geotag) name field equals the assigned area name, you might be able to use that condition in another GeoEvent Service to update a separate areas dataset and mark that area's status as covered. This would require some more tinkering/logic, though.
Posted 02-06-2024 03:00 AM

POST
@Moi_Nccncc - I assume the areas are (or can be) geofences coming from a feature layer or stream layer? If so, you can ingest the areas every "x" seconds or minutes from that feature layer using an input connector (e.g., Poll an ArcGIS Server for Features). Simultaneously, you have your truck data coming in somehow. You can push those trucks to a stream layer and use that stream layer (via a geofence synchronization rule) to act as point geofences that update frequently. With this setup, you can begin to answer real-time questions such as "when does my area (polygon event) contain a truck (point geofence)?" You can refine this even further: "when does my area (site xyz) contain a truck of type Alpha? If the truck is of type Alpha, consider area xyz fully covered; if the truck is of type Beta, consider area xyz half covered." To me it sounds like you're not trying to capture the status of the truck, but rather the status of the area based on its interaction with the truck, if that makes sense. I hope this helps, but let me know if it does not.
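As a purely conceptual illustration of the spatial question (using shapely as a stand-in; GeoEvent evaluates this with its own geofence engine, and the coordinates, area name, and truck type here are made up):

```python
# Purely conceptual: use shapely (a stand-in, not how GeoEvent evaluates
# geofences) to ask "does this area polygon contain this truck point?"
# The coordinates, area name, and truck type are made-up values.
from shapely.geometry import Point, Polygon

area_xyz = Polygon([(0, 0), (0, 10), (10, 10), (10, 0)])  # hypothetical site "xyz"
truck = Point(4, 5)                                        # hypothetical truck position
truck_type = "Alpha"                                       # carried on the truck record

if area_xyz.contains(truck):
    status = "fully covered" if truck_type == "Alpha" else "half covered"
    print(f"Area xyz is {status}")
```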
Posted 02-05-2024 04:39 AM

POST
Hi @YObaidat - As an out-of-the-box capability, the answer is no. That's not to say it's impossible, however; rather, you would likely need to develop a custom transport/adapter for handling those messages in the format/protocol in question. An alternative would be to push the messages to some other endpoint or broker that GeoEvent can then tap into with an out-of-the-box connector (e.g., Kafka).
Posted 02-01-2024 06:53 AM

POST
Hi @meb323 - It could be that the token error means the service isn't accessible because the credentials to ArcGIS Online (where I presume the service is?) are bad. You'd want to check your GeoEvent data store connections and credentials to see whether your connection to ArcGIS Online validates. It could also be that the credentials you provided have access to read the service but not write to it; just something else to consider.

Assuming the credentials are good, you own the service, and you can edit the service, my next suggestion would be to ensure your TRACK_ID tags are set correctly. You might know that the feature record for the car with plate "ABC321" was updated and should therefore be updated in the output feature layer, but GeoEvent doesn't necessarily know which field value it's supposed to use to uniquely identify and manage each record on output, so they're all just "cars", if that makes sense. Make sure your input connector's definition has the TRACK_ID tag assigned to a field so that records can be uniquely identified and managed throughout the pipeline of your GeoEvent Service. If that same definition is used for the output connector, you should be good to go. If you somehow have a different definition for the output connector, make sure that output definition has the TRACK_ID tag appropriately set on the correct field too. Putting this back into the context of the example: correct usage of a TRACK_ID lets GeoEvent know that it needs to update the record whose plate is "ABC321" with the values it processed in the output feature service. For more, check out this doc. I hope this helps and makes sense, but please don't hesitate to message me if it doesn't. It could be that the issue is none of what I mentioned, in which case I might suggest reaching out to Support Services for further investigation.
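If it helps to picture what the TRACK_ID tag buys you, here's a toy sketch in plain Python (not GeoEvent internals; the "plate" field is just the example above):

```python
# Toy illustration of what a TRACK_ID gives you: with a unique key (the
# plate), each new event *updates* the existing record rather than piling
# up as yet another anonymous "car". Plain Python, not GeoEvent internals.
cars_by_track_id = {}

def apply_event(event):
    track_id = event["plate"]           # the field carrying the TRACK_ID tag
    cars_by_track_id[track_id] = event  # update in place, keyed by TRACK_ID

apply_event({"plate": "ABC321", "speed": 40})
apply_event({"plate": "ABC321", "speed": 55})  # updates the same record

print(len(cars_by_track_id))  # 1 -- one car, holding the latest values
```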
Posted 02-01-2024 06:31 AM

POST
Hi @Moi_Nccncc - It might be worth taking a look at the Karaf logs on the system directly. You can find them under C:\Program Files\ArcGIS\Server\GeoEvent\data\log. I'm willing to bet there are messages indicating Kafka issues with finding various topics and whatnot. If that's the case, your best bet is going to be an admin reset; make sure you take a backup of your GeoEvent Server configuration first. None of this speaks to what caused the issue in the first place, but it should hopefully get you going again. If the issue persists, I would recommend reaching out to Support Services so that we can take a deeper look at what is amiss. Hope this helps!
Posted 02-01-2024 06:04 AM

POST
Hi @MANESK - There are a few things you could try here:

1. If you can afford it, try pre-filtering the data so that you're only bringing in the data that matters, thereby shrinking the payload.

2. Try increasing the polling interval to give GeoEvent Server more time to process the incoming records as events.

3. Consider increasing the input buffer capacity. By default it's 20 MB, but increasing it might prevent the overflow from happening for a time. This is a setting in GeoEvent Manager > Site > Settings. Check out this page.

4. It could be worth checking how much RAM is available on the machine where GeoEvent Server is running, and whether GeoEvent Server is having to compete with other processes that are also consuming RAM.
Posted 02-01-2024 05:54 AM

POST
Hi @Moi_Nccncc - I would recommend checking out this blog.
Posted 02-01-2024 05:43 AM