BLOG
A recent release of ArcGIS GeoEvent Server included a new set of connectors for accessing Kafka systems. We have documented this initial release, but we have not yet released exhaustive documentation on how to work with the connectors. This blog provides some additional guidance on implementing security with these connectors while we work on enhancing our documentation (with examples). It is in no way meant to be an exhaustive guide and will probably change over time.

The security implementations have some prerequisites that users need to be aware of before using the connectors against a secured cluster. Especially in the case of Kerberos authentication, users are likely to run into issues if they don't pay close attention to the details on the help page. Even when they do, they might run into problems if they don't have a thorough understanding of Kerberos as supported in Kafka.

For the GSSAPI (Kerberos) implementation, SASL_PLAINTEXT is not a supported protocol with the connectors; the only supported protocol is SASL_SSL. So your JAAS configuration file should look something like the following:

    KafkaClient {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        keyTab="<path-to-keyTab>/[keyTabName].keyTab"
        principal="$serviceName/principal@EXAMPLE.COM";
    };

Please explore the client section in the Kafka security documentation on Kerberos for further information. In this specific implementation, we use the folder data store registered with GeoEvent to assign the path to the JAAS file. In our experience, the error where Kafka reports that the KafkaClient section in the JAAS file does not exist is, in most cases, misleading: it could be that the file is not correctly formatted and its contents aren't being read correctly by the Kafka client libraries. Again, please refer to the Confluent docs for the correct formatting.
For SSL (TLS 1.2), the key thing to note is that the trust store is managed by ArcGIS Server; please ensure that ArcGIS Server's trust store contains the client certificate or certificate chain that you are attempting to set as the key store for the Kafka client (the connectors, in this case). Our implementation provides "optional" parameters for when your Kafka server requires clients to authenticate. In that case, we require your key store file to be in the "PKCS12" format. Lastly, our implementation does not account for authorization; we only deal with authentication.
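Pulling the pieces above together, a Kerberos-secured Kafka client is generally configured with the standard Kafka client security properties shown below. This is a hedged sketch for orientation only: the host names, paths, passwords, and service name are illustrative placeholders, and the GeoEvent connectors surface these values through their connector parameters and the registered folder data store rather than a raw properties file.

```properties
# Standard Kafka client settings for GSSAPI (Kerberos) over SASL_SSL.
# All host names, paths, and passwords below are illustrative placeholders.
bootstrap.servers=broker1.example.com:9093
security.protocol=SASL_SSL
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka

# Optional client key store, needed only when the broker requires client
# authentication; with the GeoEvent connectors this must be PKCS12, and
# the trust store side is managed by ArcGIS Server.
ssl.keystore.type=PKCS12
ssl.keystore.location=/path/to/client.keystore.p12
ssl.keystore.password=changeit
```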
Posted 09-25-2019 08:49 AM

BLOG
NOTE: This post deserves a more in-depth conversation that I hope to expand on in the near future.

Sometimes organizations want to use token authentication in their GeoEvent Data Store connections. This allows GeoEvent to connect to an ArcGIS Enterprise or ArcGIS Server system without entering credentials into the GeoEvent Manager user interface. This approach works perfectly fine when testing Data Store connections and proving out the system, but it should not be viewed as a long-term solution. Tokens issued by ArcGIS Enterprise/Server are only valid for a maximum of two (2) weeks. Once a token expires, a new one must be generated and provided to GeoEvent. While this process can be scripted (see Jake Skinner's article on scripting using the GeoEvent Admin API or my article on using the GeoEvent Admin API), the lifespan of tokens cannot be controlled or extended, so it is recommended that credentials be used when creating Data Stores in ArcGIS GeoEvent Server.

Most customers will do one of the following. Please note that choosing one of the following depends on your use case and an understanding of how GeoEvent accesses content (Items) on an ArcGIS Enterprise system (please see this article on Integration for more information).

Single Admin User

When creating a Data Store in GeoEvent, use the credentials of the GeoEvent Administrator. Since this person is the primary user of GeoEvent, it makes sense for them to own and maintain the connection and the content. In this case, any items in the Portal that are utilized by GeoEvent will need to be either public or owned by the GeoEvent Administrator.

'Headless' or 'Application' User for GeoEvent

In the remote system, create a new user that represents the GeoEvent application itself. When creating a Data Store in GeoEvent, use these GeoEvent Application credentials.
If more than one person manages/maintains GeoEvent, each of these users will need to be able to log into the Enterprise system (not GeoEvent, because the credentials are cached in the Data Store connection for them) in order for them to own and maintain the content. In other words, any items in the Portal that are utilized by GeoEvent will need to be either public or owned by the GeoEvent Application user.

Data Store Connection Per GeoEvent Admin

If there are multiple GeoEvent Administrators and they don't want to share an account in the Enterprise, or comingle their Items in Enterprise, then you will have to create a GeoEvent Data Store connection for each GeoEvent Administrator. Each GeoEvent Administrator will have their own sandbox of items they can use. GeoEvent Admins will not be able to see other users' Enterprise items, so long as they use their dedicated Data Store connection. You will need to enforce an honor policy that everyone uses only the Data Store connection that contains their credentials, since there is no way to restrict access to a Data Store connection within GeoEvent.
Posted 09-25-2019 08:32 AM

BLOG
Previous guidance on custom components advised users to update (recompile) their custom components with every new release of GeoEvent Server. This advice is no longer applicable and should be replaced with the following strategy.

When creating a custom component for GeoEvent Server, the GeoEvent SDK version you compile the component with does NOT have to match your version of ArcGIS GeoEvent Server. When creating the component, you should compile against the earliest version of the SDK that you can, then test to verify your custom component works with the version of GeoEvent you are on. Or, if you don't anticipate having to be backward compatible (and you are already using a version that is 10.4 or greater), you can compile against your GeoEvent Server's current version of the SDK and expect not to have to compile it again for later releases.

In general, anything compiled against GeoEvent SDK 10.4.0 should be compatible with any release of GeoEvent Server 10.4 or later (up to the latest release). Components compiled against GeoEvent SDK versions prior to 10.4.0 will not work with the current versions of GeoEvent Server. If you have a custom component that was created using a version of the GeoEvent SDK prior to 10.4.0, you must recompile it and redeploy it. Once your component compiles without error, you should test it against your current version of GeoEvent Server. On rare occasions, a component may compile against an earlier version of the SDK but not deploy correctly on the current version of GeoEvent Server.

There is an advantage to this change in strategy: when you upgrade GeoEvent Server, you should not have to upgrade every custom component. If it deploys and works, stick with what you've got. The main reason for this is that when you upgrade a custom component to a new version, you must re-create all of the inputs, outputs, or processors for that new version.
So, unless you are fixing a bug or enhancing your component, you will not need to upgrade your custom component. For this reason, you will find the following note on the GeoEvent Gallery items as we update them:

NOTE: The release strategy for ArcGIS GeoEvent Server components delivered on the ArcGIS GeoEvent Server Gallery has been updated. Going forward, a new release will only be created when a component has an issue, is being enhanced with new capabilities, or is not compatible with new versions of ArcGIS GeoEvent Server. This strategy makes upgrades of these custom components easier since you will not have to upgrade them for every release of ArcGIS GeoEvent Server unless a new version of that connector is released. The documentation for the latest release has been updated and includes instructions for updating your configuration to align with this strategy. We apologize for any inconvenience this change in strategy causes.
Posted 09-23-2019 01:50 PM

BLOG
At 10.7, the GeoEvent Manager user interface has an issue uploading .jar files greater than 100 KB. We are aware of the problem and actively working on fixing the issue. This workaround applies to any custom component you would deploy to GeoEvent using a .jar file.

In the meantime, if you experience issues uploading .jar files via the GeoEvent Manager interface, you can deploy them directly by placing them into GeoEvent's deploy directory (see below). You should do this while the GeoEvent service is running (do NOT stop the service).

    <GeoEvent Install Location>\GeoEvent\deploy\

On a Windows machine, the default location would be the following:

    C:\Program Files\ArcGIS\Server\GeoEvent\deploy\

Once you copy the .jar file into this directory, you can check GeoEvent Manager to verify that the component was imported correctly (go to Site > Components > Transports | Adapters | Processors).
Posted 09-23-2019 01:31 PM

BLOG
Sometimes a log message appears at the ERROR level that you temporarily/permanently need to turn off. However, the GeoEvent user interface doesn't allow you to turn logs off. To get around this, you can do it using the GeoEvent configuration files.

1. In GeoEvent Manager, set the log level on the logger you want to eliminate to ERROR. Repeat for each logger. Some examples of loggers that you may decide to turn off:

       Logger: com.esri.ges.httpclient.Http
       Logger: com.esri.ges.fabric.internal.ZKPersistenceUtility

2. On the GeoEvent machine, edit the following logging configuration file (NOTE: on Windows you will need to run the editor as Administrator):

       <GeoEvent Install>\etc\org.ops4j.pax.logging.cfg

3. For each of the logger names above (from Step 1):

   a. Search for the logger name string; you should find a .name record like the following:

       log4j2.logger.com_esri_ges_httpclient_http.name = com.esri.ges.httpclient.Http

   b. Change the .level record for that logger's .name record to be OFF:

       log4j2.logger.com_esri_ges_httpclient_http.name = com.esri.ges.httpclient.Http
       log4j2.logger.com_esri_ges_httpclient_http.level = OFF

4. Save your changes and close the .cfg file.

5. Restart GeoEvent to be sure it picks up the new logger settings.
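If you need to make this edit on several machines, step 3 can be scripted. Below is a small, hedged Python sketch (my own helper, not part of GeoEvent) that takes the contents of org.ops4j.pax.logging.cfg and forces the .level record of a named logger to OFF; you would still restart GeoEvent afterwards as in step 5.

```python
import re

def set_logger_off(cfg_text: str, logger_name: str) -> str:
    """Return cfg_text with the .level record for logger_name set to OFF.

    Finds the log4j2.logger.<id>.name line whose value matches logger_name,
    then rewrites (or appends) the matching .level line for that <id>.
    """
    lines = cfg_text.splitlines()
    # Locate the logger id whose .name value matches the requested logger.
    logger_id = None
    for line in lines:
        m = re.match(r"log4j2\.logger\.(\w+)\.name\s*=\s*(\S+)", line.strip())
        if m and m.group(2) == logger_name:
            logger_id = m.group(1)
            break
    if logger_id is None:
        raise ValueError(f"logger {logger_name!r} not found in config")

    level_key = f"log4j2.logger.{logger_id}.level"
    out, replaced = [], False
    for line in lines:
        if line.strip().startswith(level_key):
            out.append(f"{level_key} = OFF")  # step 3b: set level to OFF
            replaced = True
        else:
            out.append(line)
    if not replaced:  # no existing .level record: append one
        out.append(f"{level_key} = OFF")
    return "\n".join(out)
```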
Posted 08-23-2019 07:52 AM

POST
Hey,

Short Answer:

GPSFix.Speed[0].Type = Speed1_Type
GPSFix.Speed[0].Units = Speed1_Units
GPSFix.Speed[0].Value = Speed1_Value
GPSFix.Speed[1].Type = Speed2_Type
GPSFix.Speed[1].Units = Speed2_Units
GPSFix.Speed[1].Value = Speed2_Value
GPSFix.Speed[2].Type = Speed3_Type
GPSFix.Speed[2].Units = Speed3_Units
GPSFix.Speed[2].Value = Speed3_Value

NOTE: I updated the Networkfleet Connector yesterday, so please upgrade to the latest release (this should be easy now that you have migrated to the 10.4.0 version and have it working). Please see the section Replace the Verizon Networkfleet Adapter in the documentation to replace your .jar files. The Release Notes have information on why I had to make a new release.

Long Answer:

In the GeoEvent Definition, the Speed field is a 'multi-cardinal group', which means it is an array of Speed items. This is slightly different from the GPSFix group, which has a single cardinality (there is always only one GPSFix group). Even though each is a group, accessing a single vs. multi-cardinal group is slightly different in notation. When the Field Mapper provides the drop-down of fields, it treats ALL groups as single groups.

Single Group

For a single group, there will be only one item, and GeoEvent can make the assumption that any variable under that group is directly accessible from the parent item. So the 'dot' notation in the Field Mapper drop-down works as expected. The single GPSFix group item has one and only one FixTime attribute, so accessing fields in the GPSFix group looks like the following:

GPSFix.FixTime
GPSFix.FixTimeUTF
GPSFix.Latitude
GPSFix.Longitude
GPSFix.Ignition
GPSFix.Speed

Multi-Cardinal Group (Array)

For a multi-cardinal group, there could be any number of child items in an array (0 or more). To access the data, you need to inform GeoEvent which item in the array you are trying to access. You do this using common array notation with a 0-based index (index 0 is the first item in the array).
The only way to know if you need to use the 'array index' format is to look at the cardinality setting on the field in the GeoEvent Definition (is it 'one' or 'many'?). If it is set to many, you must use the array index format. So to get at the first Speed item's Type parameter, you have to index into the Speed array using index [0] for the first item:

GPSFix.Speed[0].Type

Thus:

GPSFix.Speed[0].Type = Speed1_Type
GPSFix.Speed[0].Units = Speed1_Units
GPSFix.Speed[0].Value = Speed1_Value
GPSFix.Speed[1].Type = Speed2_Type
GPSFix.Speed[1].Units = Speed2_Units
GPSFix.Speed[1].Value = Speed2_Value
GPSFix.Speed[2].Type = Speed3_Type
GPSFix.Speed[2].Units = Speed3_Units
GPSFix.Speed[2].Value = Speed3_Value

A note of caution

You should not see this issue with the Networkfleet data, so you can safely ignore it for your current use case, but I bring it up for future reference as something to avoid in your data structures if you can (unfortunately, we don't control the format of most incoming data). For most data arrays, the length of the array is fixed (for example, you can always assume you will get 3 speeds with the Networkfleet GPS message type). But sometimes the length of the array is variable. There is no good way to account for this in GeoEvent, since indexing into an array with 0 items will fail with a "Null Pointer" exception. For example, if a GPSFix had no Speed records, GPSFix.Speed[0] would be null, so GPSFix.Speed[0].Type would throw an exception. To get around this, you have to use a filter to determine if an event has an item at a specific index (filter where NOT(GPSFix.Speed[0] ISNULL)). As you can guess, this gets quite messy in the GeoEvent Service and becomes impractical for larger arrays (the most I've ever implemented is an array that could contain 0 to 4 records, and that seriously tested my patience). RJ Sunderman has a blog post that might be helpful in understanding hierarchical and multi-cardinal data structures in GeoEvent if you need more details or examples.
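To make the notation concrete, here is a hedged Python sketch (my own illustration, not GeoEvent code) that resolves a field path such as GPSFix.Speed[0].Type against a nested event record. It returns None when an array index does not exist, which is the situation that forces the ISNULL filter described above; the sample event values are made up.

```python
import re

def resolve_field(event: dict, path: str):
    """Resolve a GeoEvent-style field path against nested dict/list data.

    Supports dot notation for single-cardinality groups (GPSFix.FixTime)
    and 0-based array indexing for multi-cardinal groups (GPSFix.Speed[0].Type).
    Returns None if a group, field, or array index is missing.
    """
    current = event
    for part in path.split("."):
        m = re.fullmatch(r"(\w+)(?:\[(\d+)\])?", part)
        if m is None or not isinstance(current, dict):
            return None
        name, index = m.group(1), m.group(2)
        current = current.get(name)
        if index is not None:
            i = int(index)
            if not isinstance(current, list) or i >= len(current):
                return None  # empty/short array: GeoEvent would raise here
            current = current[i]
    return current

# A sample Networkfleet-like event record (values are invented)
event = {"GPSFix": {"FixTime": "2019-08-23T07:52:00Z",
                    "Speed": [{"Type": "GPS", "Units": "mph", "Value": 42.0}]}}
```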
Posted 08-01-2019 07:21 AM

POST
Hey,

Page 6 in the documentation lists the different message schemas you might expect to receive from Networkfleet. In most cases people only get the NetworkfleetGPS messages (but some people get all of them). You can put a filter between your input and your output (stream service or logger) to filter for only the GPS messages. That will ensure you only pass the GPS messages through.

Another thing you should probably do prior to the stream service is add a new GeoEvent Definition that "flattens" the original NetworkfleetGPS definition (the attached configuration has this definition). If you use the flat definition, you will have to re-create your stream service using the new flat definition. Use a Field Mapper to map the nested/grouped fields into the flat definition as shown below:
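To illustrate what "flattening" means for the event structure, here is a hedged Python sketch (my own illustration, not GeoEvent code). It collapses a nested record into single-level field names, numbering the items of a multi-cardinal group so each lands in its own flat field; the underscore naming convention and the sample values are my own assumptions, and list items are assumed to be groups.

```python
def flatten(record: dict, prefix: str = "") -> dict:
    """Flatten a nested event record into single-level field names,
    e.g. {"GPSFix": {"Latitude": 1}} -> {"GPSFix_Latitude": 1}.
    Items of a list (multi-cardinal group) are numbered starting at 1."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "_"))
        elif isinstance(value, list):
            for i, item in enumerate(value):
                flat.update(flatten(item, f"{name}{i + 1}_"))
        else:
            flat[name] = value
    return flat
```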
Posted 07-23-2019 12:54 PM

POST
Hey Cassidy,

We updated our release process, so there are some extra steps that you have to go through to upgrade to the new Networkfleet Connector. The steps should be detailed in the release documentation, so I encourage you to read through the upgrade sections there. Here's a brief overview of the steps.

The new Networkfleet Adapter is released with a version of 10.4.0. This indicates the adapter is compatible with all versions of GeoEvent 10.4 and later (this is the change in process that we made). Because the version is changing on the .jar you are deploying, you must do the following:

1. In GeoEvent Manager, go to Site > GeoEvent > Connectors and edit the Networkfleet Connector. Ensure that it is using the Networkfleet Adapter and the HTTP Transport. Press Save (you must save it so it updates to the new adapter).

2. Go to Services > Inputs and add a new Networkfleet input. The dialog to create a new input should display the properties correctly.

Best,
Eric
Posted 07-23-2019 09:59 AM

POST
The GeoEvent Manager user interface has an issue uploading .jar files greater than 100 KB. We are aware of the problem and actively working on fixing the issue. In the meantime, if you experience issues uploading .jar files via the GeoEvent Manager interface, you can deploy them directly by placing them into GeoEvent's deploy directory (see below). You should do this while the GeoEvent service is running (do NOT stop the service).

    <GeoEvent Install Location>\GeoEvent\deploy\

On a Windows machine, the default location would be the following:

    C:\Program Files\ArcGIS\Server\GeoEvent\deploy\

Once you copy the .jar file into this directory, you can check GeoEvent Manager to verify that the component was imported correctly (go to Site > Components > Transports | Adapters | Processors).
Posted 07-23-2019 08:49 AM

BLOG
One of the most common requests for the HTTP Transport is to implement custom authentication steps that are required by an external API. Unfortunately, it is impossible to implement the HTTP Transport in a way that accommodates all of the possible permutations, so it becomes necessary to write your own HTTP Transport to include the desired authentication functionality. But how can you do this while still incorporating the default proxy capabilities provided by GeoEvent Server? This blog will show you how to access the underlying proxy properties and implement your HTTP Transport so that you don't have to re-invent the proxy capability from scratch.

NOTE: This blog post assumes you are already familiar with developing custom transports for GeoEvent. If not, please take a look at the GeoEvent SDK documentation provided with your GeoEvent Server installation at:

    <GeoEventServerInstallLocation>\ArcGIS\Server\GeoEvent\sdk\GeoEvent Developer Guide.pdf

System Proxy Settings

GeoEvent Server provides global settings for the proxy on both HTTP and HTTPS schemes. To access these settings, open GeoEvent Manager, navigate to Site > Settings, and scroll down to the Http Proxy Settings and/or Https Proxy Settings sections. Here you can set the name of the proxy host, the port it is listening on, and the credentials to use. Not all proxy configurations will utilize all of these settings. If you set a host name without specifying a port, the system will use the default port number for the scheme requested (80 for HTTP and 443 for HTTPS). If you don't specify a username/password, then the proxy request won't include those credentials in the request (the proxy is open from the inside). One typical implementation includes a forward proxy listening on a single port (defaulting to port 80) that will forward both HTTP and HTTPS schemes.
In this case, the settings for HTTP and HTTPS would be the same:

    Http Proxy Host: myhost.company.com
    Https Proxy Host: myhost.company.com
    Https Proxy Port: 80

Once you've set up your proxy settings, you should be able to test them using a standard input that utilizes the HTTP Transport (like Poll an ArcGIS Server for Features requesting data from ArcGIS Online).

Configure a Custom Transport Service

The first thing you need to do to create a custom transport service that can take advantage of the underlying system's proxy support is to get access to the GeoEvent HTTP Client Service. The blueprint config.xml file should look something like the following. The important parts are 1) adding the reference in the blueprint to the GeoEventHttpClientService and then 2) adding that reference to the service bean as a property (OSGi will inject the GeoEventHttpClientService into the service bean once it is created).

    <?xml version="1.0" encoding="UTF-8"?>
    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">
      <reference id="geoEventHttpClientService" interface="com.esri.ges.core.http.GeoEventHttpClientService" />
      <bean id="myTransportServiceBean" class="com.esri.geoevent.transport.custom.MyTransportService" activation="eager">
        <property name="bundleContext" ref="blueprintBundleContext" />
        <property name="geoEventHttpClientService" ref="geoEventHttpClientService" />
      </bean>
      <service id="myTransportService" ref="myTransportServiceBean" interface="com.esri.ges.transport.TransportService" />
    </blueprint>

In the service Java code, add a setGeoEventHttpClientService method to allow the injection of the GeoEvent HTTP Client Service. Then pass that GeoEvent HTTP Client Service to your transport when it is created.
    private GeoEventHttpClientService httpClientService;

    public void setGeoEventHttpClientService(GeoEventHttpClientService httpClientService)
    {
      this.httpClientService = httpClientService;
    }

    @Override
    public Transport createTransport() throws ComponentException
    {
      return new MyTransport(definition, httpClientService);
    }

This GeoEvent HTTP Client Service will be able to create HTTP clients that implement the underlying proxy capabilities of GeoEvent. If you use the HTTP clients the service creates, you can rest assured that the global GeoEvent settings for proxy values will be honored.

Create the GeoEvent Http Client

In your transport implementation there are a few things to note:

- Every time your transport starts, you should create a new HttpClient.
- Every time your transport stops, you should close your current HttpClient.

Whenever the properties of your connection are changed, the transport is stopped and then started again, so if you follow these rules, you are guaranteed to use an HttpClient with the correct settings. In the start() method of your transport, you should create a new HttpClient using the GeoEventHttpClientService. This will create a GeoEventHttpClient that is able to properly proxy your requests. In the stop() method, you will want to close the HttpClient to free up its resources. Please note that I've left out some try/catch/finally calls here for clarity.

    import com.esri.ges.core.http.GeoEventHttpClient;
    ...
    public class MyTransport extends InboundTransportBase
    {
      private GeoEventHttpClient httpclient;
      ...
      @Override
      public synchronized void start()
      {
        ...
        this.httpclient = httpClientService.createNewClient();
        ...
      }
      ...
      @Override
      public synchronized void stop()
      {
        ...
        this.httpclient.close(); // try/catch around this!
        this.httpclient = null;
        ...
      }
    }

Creating Requests Using the HttpClient

There are a number of methods on the GeoEventHttpClient that will allow you to create proxy requests.
Please note that you must use one of these methods to create your request in order for it to properly utilize the proxy.

createGetRequest(URL url, Collection<KeyValue> parameters)
This method creates a GET request with the provided list of parameters appended to the request as URL parameters. The KeyValue Collection parameters can be null; this results in no URL parameters at the end of the request URL.

    URL: https://my.org.com/APICall
    Parameters: ({key1,value1},{key2,value2})
    Result: GET https://my.org.com/APICall?key1=value1&key2=value2

createGetRequest(URL url, String acceptableTypes)
This method creates a GET request with the provided acceptable types set in the header properties. The String acceptableTypes can be null; in that case the header values will not be set.

    URL: https://my.org.com/APICall
    acceptableTypes: application/json
    Result: GET https://my.org.com/APICall [content-type=application/json, accept=application/json]

createPostRequest(URL url, String postBody, String contentType)
This method creates a POST request with the provided string body of the given content type.

    URL: https://my.org.com/APICall
    postBody: postBody
    contentType: application/json
    Result: POST https://my.org.com/APICall BODY=StringEntity(content-type=application/json, entity="postBody")

createPostRequest(URL url, Collection<KeyValue> parameters)
This method creates a POST request with the provided list of parameters embedded in the POST body as a URL-encoded form. The KeyValue Collection parameters can be null; this will result in an empty URL-encoded form entity.

    URL: https://my.org.com/APICall
    Parameters: ({key1,value1},{key2,value2})
    Result: POST https://my.org.com/APICall [content-type=application/x-www-form-urlencoded, charset=utf-8] BODY=UrlEncodedFormEntity(parameters, "UTF-8")

Using the Proxy Http Request

Once the HTTP request object is created, you can modify the properties or the entity as needed.
For example, if you need a JSON entity inside of a URL-encoded form request:

    // default content type is "application/x-www-form-urlencoded"
    HttpPost httpPost = httpclient.createPostRequest(url, null);

    // If your request entity is "application/json"
    String requestData = "JSON-RPC=" + URLEncoder.encode(requestString, "utf-8");
    StringEntity entity = new StringEntity(requestData, jsonContentType);
    httpPost.setEntity(entity);

To execute the request via the proxy, use the GeoEventHttpClient from above.

    try (CloseableHttpResponse response = httpclient.execute(httpPost))
    {
      ... // Do stuff with the response
    }
    catch (Exception e)
    {
      ...
    }
Posted 06-18-2019 08:52 AM

BLOG
CAUTION: Not compatible with ArcGIS GeoEvent Server 10.8.1 (an update is pending).

Cartegraph provides an operations management system that allows governments to manage assets, maintain infrastructure, and track resources. It is common to integrate Cartegraph with GIS to provide a spatial aspect to all those capabilities. Recently, I developed a Cartegraph Connector for GeoEvent Server that allows GeoEvent to write events out to the Cartegraph OMS to track labor and equipment resources against specific tasks. This post provides the components and instructions for deploying this connector.

This connector was designed to allow GeoEvent to provide updates for both labor and equipment that are operating on specific tasks within the Cartegraph system. The underlying assumptions for this connector are as follows:

AVL Data Feed – Labor and/or equipment identification is provided in the incoming data feed. This may be a vehicle tracking feed that contains an equipment's GPS location, the ID of the equipment (such as a VIN), and/or the ID of the equipment operator (labor).

Shared IDs – The IDs provided by the AVL data feed are shared by the Cartegraph system. If they are not shared, the IDs can be translated or looked up in GeoEvent using a Field Enricher.

Spatially Defined Cartegraph Assets – The GeoEvent Server will have access to an Esri feature service that provides the location and details for the Cartegraph assets that need to be cross-referenced with the equipment and labor above. Each Task feature will contain an AssetID that identifies the Cartegraph asset (such as a roadway). GeoEvent will use this AssetID to create a task in the Cartegraph system and assign the equipment and/or the labor to that task.

When an equipment and/or labor location event is sent to GeoEvent, it will GeoTag that event with an AssetID by intersecting the GPS location with the Cartegraph asset features.
The resulting GeoTag will indicate which asset(s) the equipment/labor is currently active in. The resulting event containing equipment, labor, and asset information is sent to the Cartegraph system as a task. The name of the task is defined in the GeoEvent output and can represent any sort of operation that currently exists in the Cartegraph system, such as "Snow Plowing" or "Mowing". Finally, the output will maintain the state of the task and will update the information for that task according to the lifetime of the task. At the completion of a task, a Cartegraph log will be created to track the equipment and labor work done on the asset for that task.

If the equipment location is entering the asset's area for the first time, a new task will be created. If the equipment location was previously in the asset's area, the existing task will be updated. If the equipment location leaves an asset's area, the existing task will be closed and a log item will be created for the equipment and/or labor.

Detailed instructions for installing this connector can be found in the zip file attached to this post.
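The create/update/close lifecycle above can be modeled as a small state machine. The Python sketch below is purely illustrative (the connector itself talks to the Cartegraph API and tracks more than this); it only demonstrates the transitions, keyed on whether an equipment ID is currently inside an asset's area. All names and IDs are my own placeholders.

```python
class TaskTracker:
    """Illustrative model of the connector's task lifecycle.

    Tracks which asset each equipment ID is active in and emits
    'create', 'update', or 'close' actions as locations change.
    """
    def __init__(self):
        self.active = {}   # equipment_id -> asset_id of its open task
        self.log = []      # log items written when a task is closed

    def on_event(self, equipment_id, asset_id):
        """asset_id is the GeoTag result (None = outside every asset)."""
        actions = []
        current = self.active.get(equipment_id)
        if current is not None and current != asset_id:
            # left the previous asset: close its task and create a log item
            del self.active[equipment_id]
            self.log.append((equipment_id, current))
            actions.append("close")
        if asset_id is not None:
            if self.active.get(equipment_id) == asset_id:
                actions.append("update")   # still inside the same asset
            else:
                self.active[equipment_id] = asset_id
                actions.append("create")   # first fix inside this asset
        return actions
```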
Posted 06-11-2019 09:12 AM

BLOG
I often get questions about the settings for the Feature Service Field Enricher, specifically the cache options available with that processor. Below is a discussion of the properties and how various settings may affect your processor.

Cache Operation

When an event record is received by the processor, the processor checks its cache to see if it has a feature record matching the event record's TRACK_ID (or primary key). If so, and if that feature record is not old/expired, then the enrichment is performed using the cached value. If no such feature record exists in the cache, or the existing cache item is old/expired, then the processor makes a focused query to obtain just that one feature record from the feature service. Each cached item maintains its own expiration time. An item in the cache is either:

1. Retrieved as is, because an event needs the data and the cached item has not expired.
2. Refreshed via the feature service, because an event needs the data but the cached item has expired.
3. Removed from the cache, because the cache size has exceeded its limit and the cached item is expired.

In cases 1 and 2, the cached item is promoted to the top of the cache queue (regardless of expiration time or whether the item was fetched from the feature service or not). In case 3, the cache queue is pruned from the bottom (so records that haven't been used to enrich an event recently are removed first).

Cache Memory Management

From a memory management perspective, the processor loads feature records on demand rather than batch-loading a bunch of records into memory "just in case they are needed". It also avoids a nasty problem of trying to decide which records to load when the cache size is smaller than the total number of records in the feature service (for example, when the default cache size of 1,000 is used but there are tens of thousands of feature records in the feature service).
There are three downsides to this approach: initialization, short cache expiration times, and large enrichment pools. Initializing the cache can be expensive because, on startup, each event causes a call to the feature service; but once the cache is loaded, the processor will operate very quickly. In situations where the cache item expiration time is small/short (when the data being enriched changes often/quickly), the cache must reach out to the feature service as each item expires. If you find you are in this situation, you should factor this knowledge into your performance expectations (for example, on my test machine, a request to a feature service took on average around 100 ms). The final situation occurs when the cache size is too small. If you have a large set of data to enrich from, you should increase your cache size while monitoring your memory usage.

Disabling the Cache?

As mentioned above, I've seen situations where the enriched data needs to be read from the feature service every time (regardless of the performance penalty) because the underlying enrichment data changes just as frequently as the event data running through the GeoEvent Service. In this case, you cannot set the cache expiration to anything less than 1 minute (0.5 won't work, and 0 causes the cache to never expire), but you can set the cache size to 1. Assuming your events are all mixed up (i.e., the TRACK_ID is not the same for two successive events), the cache will never hold the right value and the processor will have to fetch from the feature service every time.

Field Enricher Cache Notes

Cache expiration time (in minutes):
- Is stored and consumed as an integer value, so 0.5 (30 seconds) is not a valid value.
- Can be set to 0: data never expires. Once an item is read in, it will not be refreshed. To reset the cache, you can restart or re-publish your GeoEvent Service.
- Each cache item (values, expire time) is maintained separately.
- If a value is not found, or is found to have expired, it is queried for directly from the feature service ("where <uniqueID>=<value>").
- When a new value is retrieved, it is assigned its own expire time (now() + x minutes).

Cache size:
- Can be any integer value > 0. Setting the value to 0 will result in a default value of 1.
- Setting the cache size to something small (like 1) can force cache updates potentially faster than 1 minute. Assuming your stream of events doesn't repeat the same TRACK_ID, each new event will have to be fetched from the feature service. If your enrichment data changes very often, this can be a valid strategy to force the enricher to get new data every time. It will impact performance, so you should test to be sure how much of an impact it will be in your case.
- When the cache size exceeds the max cache size, the least-used records are pruned from the list. The expiration time of a record has nothing to do with cache pruning. Each time a record is used to enrich an event, it is promoted to the top of the queue. Records that are not used to enrich an event fall to the bottom of the list, and the records at the bottom of the list are pruned first.
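The policy described in these notes — per-item expiry, promotion on use, and least-recently-used pruning — can be modeled in a few lines of Python. This sketch is my own illustration of the caching behavior, not the enricher's actual implementation, and the fetch callback stands in for the focused feature service query.

```python
import time
from collections import OrderedDict

class EnricherCache:
    """Model of the Field Enricher cache policy: per-item TTL, promote
    on use, prune least-recently-used items when over the size limit."""
    def __init__(self, max_size, ttl_minutes, fetch):
        self.max_size = max(1, max_size)     # size 0 falls back to 1
        self.ttl = ttl_minutes * 60          # 0 means "never expires"
        self.fetch = fetch                   # queries the feature service
        self.items = OrderedDict()           # key -> (value, expire_time)

    def get(self, track_id):
        entry = self.items.get(track_id)
        now = time.time()
        if entry is not None and (self.ttl == 0 or entry[1] > now):
            value = entry[0]                 # fresh: use the cached value
        else:
            value = self.fetch(track_id)     # missing/expired: focused query
            self.items[track_id] = (value, now + self.ttl)
        self.items.move_to_end(track_id)     # promote to top of the queue
        while len(self.items) > self.max_size:
            self.items.popitem(last=False)   # prune least-recently-used
        return value
```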
Posted 05-13-2019 09:32 AM

BLOG
If you are looking for a way to get your data from the John Deere API (JDLink or My John Deere) into ArcGIS GeoEvent Server, you can use the attached instructions to get started.

NOTE: There is an important limitation in that the John Deere API is paged, meaning you will only be able to get 100 records at a time per input. To get around this, you will have to create an input for each 100 pieces of equipment you want to monitor.
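For reference, the paging pattern itself is straightforward if you ever script against the API directly (GeoEvent inputs don't loop this way, which is why the one-input-per-100-equipment workaround is needed). Below is a hedged Python sketch, with fetch_page standing in for whatever call retrieves one page of records from the API.

```python
def fetch_all(fetch_page, page_size=100):
    """Collect every record from a paged API.

    fetch_page(start, count) must return a list of up to `count` records
    beginning at offset `start`; a short page signals the end of the data.
    """
    records, start = [], 0
    while True:
        page = fetch_page(start, page_size)
        records.extend(page)
        if len(page) < page_size:
            return records
        start += page_size
```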
Posted 05-13-2019 07:49 AM

POST
Hey Everyone,

I just updated my GitHub repository hosting the Python scripts for my blog Scripting tasks using the GeoEvent Admin API - Update Inputs to include a Python script that will update either URL parameters OR header name:value properties. Look for the Python script UpdateGEEInputURLorHeadParam.py.

Best,
Eric I.
Posted 05-13-2019 07:39 AM

BLOG
Hey Everyone,

In case you don't want to read too deeply into my other post, here's the secret to get rid of the 403 error (you have to set the referer URL on the security handler prior to using it):

    # GeoEvent admin URL
    geeUrl = 'https://{}:{}/geoevent/admin'.format(srv, prt)

    # Get GeoEvent token to access admin API
    sh = AGSTokenSecurityHandler(username=usr,
                                 password=pwd,
                                 org_url=geeUrl,
                                 token_url='https://{}:6443/arcgis/tokens/'.format(srv),
                                 proxy_url=None,
                                 proxy_port=None)
    # need to set the referrer to generate token correctly
    sh.referer_url = geeUrl

Best,
Eric I.
Posted 05-13-2019 07:29 AM