POST
Hello jess neuner – You're correct that clicking Publish Stream Service a "second" time is not going to show you the existing service's configuration. All that does is open the dialog that lets you begin selecting (again) the properties you want the service publication mechanism to use. I'm assuming you toggled the Store Latest capability when originally publishing the stream service, and want to know which feature service the stream service is using to store latest observations? Your best bet, if I'm following your question, is to browse to the stream service in the ArcGIS REST Services Directory. At the top of the service's web page, in the upper-left corner, you should see a JSON link; clicking this will open a browser tab with the service's raw JSON specification. In there you will find things like which field is being used as a track identifier (derived from the field tagged TRACK_ID in the selected GeoEvent Definition) and the URL of any supporting feature services for capabilities like "Store Latest" and "Related Features". If you open an incident with Esri Technical Support, we can probably publish a stream service with Store Latest configured similar to the one in your environment, then deliberately use the stream service to broadcast some data which the feature service would not be able to accept or persist as a feature record in the geodatabase, and see how that condition gets logged in the GeoEvent Server's karaf.log ... Hope this information is helpful – RJ
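If it helps to automate the inspection, the service's raw JSON specification can be parsed with a few lines of Python. The key names used below (trackIdField, keepLatestArchive) are illustrative assumptions for this sketch, not a documented schema — check your own service's JSON for the actual keys:

```python
import json

# Illustrative stream service JSON spec; in practice you would fetch the
# service's page from the ArcGIS REST Services Directory with ?f=pjson.
# The key names here are assumptions, not a documented schema.
spec_text = """
{
  "name": "my-stream-service",
  "trackIdField": "vehicle_id",
  "keepLatestArchive": {
    "featureServiceUrl": "https://host/arcgis/rest/services/latest/FeatureServer/0"
  }
}
"""

spec = json.loads(spec_text)
print(spec["trackIdField"])                                  # vehicle_id
print(spec.get("keepLatestArchive", {}).get("featureServiceUrl"))
```

Using `.get()` with a default means the lookup degrades gracefully if the capability was never enabled and the key is absent.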
Posted 10-12-2020 01:10 PM

POST
Matej – Using my test data above, I can configure three MCFS (multicardinal field splitter) processors to split the data first on the group element metric_collection, then on the group element Level1_Group3, and finally on the array Group3_Items. Following this pattern I think you'll be able to achieve what you need to do. Above, I've illustrated the three event records routed from the final field splitter to the output. Please make sure you have downloaded the latest release of the field splitter processor bundle (Release 8 - June 24, 2020) for release 10.8.1 of ArcGIS. This version includes changes which allow multiple MCFS processors to be arrayed in series. If you are using an earlier release of ArcGIS, please use Release 7 (February 27, 2020). Hope this information is helpful – RJ
Posted 08-27-2020 02:02 PM

POST
Hello Matej – Unlike a Field Mapper or Field Calculator, which needs you to specify the full "path" into a data structure, the Field Splitter and in particular the Multicardinal Field Splitter processors expect only the name of the element at which you want the split to be applied. So the Field to Split parameter's value should specify only the element's base name, linearWithinLinearGNElement, not the full path to the element (e.g. groupOfLocations.globalNetworkLinear.linearWithinLinearGNElement). That said, you may have found a limit of what the processor is able to handle in terms of a deeply nested, hierarchical, multicardinal data structure. To test what you are trying to do, I created some sample JSON data which the inbound connector adapts using the GeoEvent Definition illustrated below. Note that it does not matter whether the data received is formatted as XML or JSON – the inbound connector has to adapt whatever data is received to create an event record which can be routed to a GeoEvent Service for processing. After receiving the JSON above and allowing the inbound connector to adapt it using the illustrated GeoEvent Definition, I can use a Field Mapper to extract, for example, the data value "Delta" (line 21 in the sample JSON) by specifying the full path down through the data structure to the "E301" element in the array: metric_collection.Level1_Group3.Group3_Items[1].E301 If I specify the same full path when configuring the Multicardinal Field Splitter, I see the error message you were able to capture from the system log: Field name <name> is invalid. However, if I specify only Group3_Items (the name of the array) as the field on which to conduct the split, I see a different error message in the log indicating a null pointer exception was encountered. I am in contact with one of our developers to review the source code of this processor.
We will see what we can do to better handle the case where Group3_Items (the name of the array) is specified and why the null pointer exception is not being caught. The workaround, for now, is to use multiple Multicardinal Field Splitter processors in series and "walk down" the hierarchy from the top to the multicardinal element on which you want the final split applied. I'll reply to this post with what I tested so you can see what I had to do to implement the workaround. Please make sure you have downloaded the latest release of the field splitter processor bundle (Release 8 - June 24, 2020) for release 10.8.1 of ArcGIS. This version includes changes which allow multiple MCFS processors to be arrayed in series. If you are using an earlier release of ArcGIS, please use Release 7 (February 27, 2020). Hope this information is helpful – RJ
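To make the split semantics concrete, here is a minimal sketch in plain Python (not the GeoEvent SDK) of what splitting an event record on an array-valued field does: one inbound record fans out into one record per array element. The field names follow the sample above; split_on_array is a hypothetical helper, not a GeoEvent API:

```python
# Hypothetical helper illustrating what a multicardinal field split does;
# not part of the GeoEvent Server API.
def split_on_array(record, field):
    # one output record per element of the named array field
    return [dict(record, **{field: item}) for item in record[field]]

event = {
    "collection_id": "metric_collection",
    "Group3_Items": [{"E301": "Charlie"}, {"E301": "Delta"}, {"E301": "Echo"}],
}

split = split_on_array(event, "Group3_Items")
print(len(split))                        # 3 event records routed onward
print(split[1]["Group3_Items"]["E301"])  # Delta (the value at index [1])
```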
Posted 08-26-2020 03:41 PM

POST
Hello Jitendrudu – When replies were written for the original thread Re: GeoEvent process Oracle/SQL connector I don't think the GeoEvent Manager supported publishing a feature service. The functional server role for licensing hadn't been invented yet and GeoEvent Server was still being referred to as the GeoEvent Processor Extension for ArcGIS Server. In recent releases, however, you can use GeoEvent Manager to publish Hosted Feature Layers to your Enterprise portal or map/feature services to a stand-alone ArcGIS Server. So on the one hand, nothing has changed with regard to the interface between GeoEvent Server and an RDBMS table. The interface is still a feature service, since GeoEvent Server can only make REST requests against an Esri ArcGIS Server feature service to add or update feature records. But on the other hand, you can now publish feature services which do not necessarily have geometry. A feature service is typically published using a client, like ArcMap, which will not allow you to publish a feature service without a geometry. ArcMap defines a feature class as a collection of feature records, and a feature record is a data structure with both geometry and attributes describing some object in the real world. GeoEvent Server doesn't make the same assumptions regarding geometry that ArcMap does, though. Real-time data streaming from a sensor whose position does not change wouldn't necessarily include a geometry or coordinate values with its data. GeoEvent Server doesn't assume that all data records will have geometry. To GeoEvent Server, "geometry" is just another type of attribute, like a String or a Date. Say, for example, you were to create a GeoEvent Definition with two attribute fields – one a Date, the other a String – and didn't include a Geometry field type. You can use any release of GeoEvent after 10.6.1 to publish a feature service to the ArcGIS Server beneath which GeoEvent Server is running.
Without a geometry, to any other client, the feature class's table looks like a non-spatial table. But ArcGIS Server provides a feature service interface to the table, allowing you to write the Date and String data you want to it without having a geometry. You would still configure an Update a Feature output, and make sure to use a Field Mapper to map whatever schema your inbound connector uses to adapt the data you receive to the feature service's schema (PostgreSQL, for example, insists on using only lower-case field names; Oracle only upper-case field names). You shouldn't have any problem ingesting some sample data that has no geometry and using GeoEvent Server to add/update "feature records" via the feature service. The data records in the RDBMS simply have no geometry … so you won't be able to add them to a web map as a feature layer, for example, but you can query the data at REST. I assume if you had some other database client able to query the data table you'd be able to retrieve the data that way, rather than going through the feature service's REST interface to query the data. This would be a lot easier than developing a custom outbound transport which understood how to connect to the RDBMS using ODBC. That is possible, but I don't know how feasible it is, really, given the inter-dependency of bundles in the Java system framework that underlies GeoEvent Server. GeoEvent Server's whole design, in this case, assumes that you'll be able to work through a feature service to access feature records. Hope this information is helpful – RJ
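For reference, an attributes-only edit through a feature service's REST interface looks something like the sketch below. The service URL and field names are hypothetical, but the payload shape — a list of {"attributes": {...}} objects with no "geometry" key — follows the ArcGIS REST API's addFeatures operation:

```python
import json
import urllib.parse

# Hypothetical feature service endpoint for a table with no geometry.
url = "https://host/arcgis/rest/services/sensor_log/FeatureServer/0/addFeatures"

# One feature record with attributes only -- no "geometry" key at all.
features = [{"attributes": {"observed": "2020-08-18 15:24:00", "status": "OK"}}]
body = urllib.parse.urlencode({"f": "json", "features": json.dumps(features)})

# An actual request would POST the body, e.g.:
# urllib.request.urlopen(url, data=body.encode())
print(body)
```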
Posted 08-18-2020 03:24 PM

POST
Hello Rene -- You are correct. The documentation newly updated with the 10.8.1 release, particularly the on-line help topics Load Balancing and strategies for scalability, reliability, and resiliency, no longer discusses what I have referred to as a 'site' deployment approach (where multiple ArcGIS Server instances, each with a GeoEvent Server, are organized in a single ArcGIS Server site). Please take a look at the newly prepared on-line documentation; we've added quite a bit and have plans for more by the end of the year. Moving forward, the product team is recommending that all GeoEvent Server deployments follow a 'silo' deployment pattern. The distinction comes when you decide whether you have to deploy the same configuration to each machine, so you can use an external mechanism to route or distribute event records to two otherwise identical instances in parallel, or whether you can deploy a different configuration to each machine (allowing one instance to ingest event records of one type and another instance to handle event records of some other type). Update: February 2021 – Beginning with the 10.9 release, every instance of GeoEvent Server you deploy must run beneath its own ArcGIS Server with its own ArcGIS Server site. This extends the single-machine high-availability active/active and single-machine high-availability active/passive deployment patterns promoted by ArcGIS Server. You will still be able to deploy multiple instances of GeoEvent Server which run independently from one another. These do not share a common configuration which GeoEvent Gateway must synchronize across a "cluster" of GeoEvent Server instances. (Recall that GeoEvent Gateway encapsulates the Apache Kafka message handler and the ZooKeeper distributed configuration store used by GeoEvent Server.)
We made the decision to remove support for multiple-machine / single-site deployments because we have found over time that GeoEvent Server deployments which coordinate through a single ArcGIS Server site do not meet reliability objectives. In rare cases of complete hardware failure -- where a single server node in a multi-machine deployment went permanently offline -- the deployment pattern we have deprecated did provide fault tolerance. More frequently, however, when a deployment was challenged by a disadvantaged network, or a machine was temporarily unavailable, or servers were restarted out of sync, the whole deployment could become effectively unusable. System recovery for multiple-machine / single-site deployments was tedious and error prone, so architectures promised as highly available frequently failed to meet expectations. We therefore refactored the GeoEvent Gateway to provide better resiliency and overall system stability for the majority of our users by removing cluster leader election and in-sync replication between peer brokers/consumers. This means that multiple instances of GeoEvent Server (beginning with the 10.9 release) will no longer be able to synchronize a shared configuration or support a "clustered computing" architecture. But in the end, we achieve a better, more resilient, and more stable product. If you would like an informal write-up I prepared looking at some concerns surrounding this, please e-mail me directly: rsunderman .at. esri.com Hope this information is helpful, RJ
Posted 08-18-2020 02:55 PM

POST
Hello nasser hinai – I think what you want to do is use an Incident Detector with an opening condition which determines an asset has "entered" an area of interest (registered as a geofence) and set the closing condition to evaluate true when the asset's location has "exited" the geofence. The event record output from the Incident Detector will be an incident, which has a very different schema from the event record you routed to the processor. While you'll lose a bunch of attributes from the original record, you will retain whatever field you tagged TRACK_ID and also GEOMETRY. The incident event record will also have a duration which you can pass through a filter. If the asset were to dwell within an area of interest for some time, you could use a filter to check that: (a) the incident's status is 'Ended' and (b) the incident's duration is greater than some value. Then whatever event records the filter allows to pass, you send to an output configured to send an e-mail notification. Hope this information is helpful – RJ
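A minimal sketch of that filter logic in plain Python — the status and duration fields mirror the incident schema discussed above; the records themselves (and the dwell_alert helper) are made up for illustration:

```python
# Pass only incident event records whose status is 'Ended' and whose
# duration exceeds a threshold -- the two-part filter described above.
def dwell_alert(incidents, min_duration_seconds):
    return [i for i in incidents
            if i["status"] == "Ended" and i["duration"] > min_duration_seconds]

incidents = [
    {"trackId": "A1", "status": "Started", "duration": 0},
    {"trackId": "B2", "status": "Ended", "duration": 45},
    {"trackId": "C3", "status": "Ended", "duration": 10},
]

# Only B2 passes: it ended AND dwelled longer than 30 seconds.
print(dwell_alert(incidents, 30))
```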
Posted 08-18-2020 02:33 PM

POST
Hello Jose – To be clear, you are developing a custom web application which complements or reproduces what you can do using the GeoEvent Manager? So you're authenticating with the GeoEvent Server's administrative REST API to (probably automate) the creation of a bunch of processors, publishing a workflow connecting these processors as a GeoEvent Service. Does the service publish, albeit in an error state? If the service publishes, you can export an XML representation of the configuration and inspect that to look for where the GeoEvent Service publication and validation is detecting a loop. There is no hard limit I'm aware of for how many processors you can place in a GeoEvent Service or how many different event flow connections you bring into or extend out from a node (e.g. a processor or filter). There are certainly practical limits, and you should understand that every time a node handles an event there is some non-zero latency associated with that operation. Complicated real-time analytics applied through complicated event processing workflows can severely impact the overall event throughput you can expect a GeoEvent Server instance to be able to handle. I have attached a simple illustration of an XML snapshot from an export of a GeoEvent Server configuration. Reading the XML, you'll be able to review the <flow> of a GeoEvent Service and see the <from> and <to> connections between every node. Hopefully the illustration helps with an audit / inspection of your more complicated GeoEvent Service. – RJ
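As a sketch of that audit, the exported XML's connections can be walked programmatically to look for a loop. The <flow>, <from>, and <to> names come from the discussion above, but the exact layout of the export is an assumption here — adjust the parsing to match what your export actually contains:

```python
import xml.etree.ElementTree as ET

# Hypothetical shape of the exported connections; adjust to your real export.
xml_text = """
<flows>
  <flow><from>input-1</from><to>processor-a</to></flow>
  <flow><from>processor-a</from><to>processor-b</to></flow>
  <flow><from>processor-b</from><to>processor-a</to></flow>
</flows>
"""

edges = {}
for flow in ET.fromstring(xml_text).findall("flow"):
    edges.setdefault(flow.findtext("from"), []).append(flow.findtext("to"))

def has_cycle(edges):
    # depth-first search; a node revisited on the current path means a loop
    visited, on_path = set(), set()
    def visit(node):
        if node in on_path:
            return True
        if node in visited:
            return False
        visited.add(node)
        on_path.add(node)
        found = any(visit(nxt) for nxt in edges.get(node, []))
        on_path.discard(node)
        return found
    return any(visit(n) for n in list(edges))

print(has_cycle(edges))  # True: processor-a -> processor-b -> processor-a
```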
Posted 08-18-2020 02:23 PM

POST
Hello Geoffrey West – Looking at the JSON you illustrated, it looks to me like the data object is an array of JSON objects: "data": [
{
"id": "b2c2626d-ff0b-4892-8262-5acae60946a4",
"name": "JUPITER II",
"mmsi": 701000533,
You probably need to add an index into your hierarchy parsing string, changing data.last_known_position.geometry.coordinates[0] to data[0].last_known_position.geometry.coordinates[0]. Also, I noticed that all of the negative double values seem to have an extra space between the '-' and the actual value. Quick experiments with GeoEvent Server seem to show that the extra white-space is being removed by the library used to parse the data ... I noticed https://jsonlint.com does the same thing ... but a different validator I often use triggered on these values and refused to validate the JSON to show it in a tree with collapsible nodes. Hope this information is helpful – RJ
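Here is why the index matters, sketched in plain Python: "data" is an array, so the first object must be selected before descending further. The field names are taken from the JSON in the post; the coordinate values are made up:

```python
import json

doc = json.loads("""
{
  "data": [
    {
      "id": "b2c2626d-ff0b-4892-8262-5acae60946a4",
      "name": "JUPITER II",
      "mmsi": 701000533,
      "last_known_position": {"geometry": {"coordinates": [-58.37, -34.60]}}
    }
  ]
}
""")

# doc["data"]["last_known_position"] would raise TypeError (list indices
# must be integers); selecting element [0] first makes the descent work:
lon = doc["data"][0]["last_known_position"]["geometry"]["coordinates"][0]
print(lon)  # -58.37
```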
Posted 08-06-2020 11:45 AM

POST
Hello Nicholas Gray – A couple of things you might try, to see if you're able to get a client to successfully subscribe to the stream service: On your mobile device, browse to https://websocket.org/echo.html. For the Location, enter wss://my-machine:6143/arcgis/ws/services/my-stream-service/StreamServer/subscribe, substituting your machine's fully-qualified domain name for my-machine and the name of your published stream service in the ArcGIS REST Services Directory for my-stream-service. You can find the basis for this URI if you click the JSON link in the top-left corner of the stream service's web page in the ArcGIS REST Services Directory; scroll to the bottom of the JSON service spec to see the URI. I can browse to the ArcGIS REST Services Directory on my laptop and subscribe to the stream service via REST, and also use the WebSocket.org echo test to connect to see the feature record data as it is broadcast – probably because I've configured SSL certificates with trust for the Esri domain on the target machine and have a VPN running on the laptop enabling me to reach the machine on which GeoEvent Server is running. I found, for example, that I cannot browse to the ArcGIS REST Services Directory from my mobile. That would make trying to use the WebSocket.org echo test to connect to the stream service's web socket a non-starter. I can create a web map, again on my laptop, adding the stream service as a stream layer, and save it to my ArcGIS Online organization. I can log in to ArcGIS Online from my mobile and access the web map – but I don't see any of the stream layer's data ... again, likely because without a VPN the web map client cannot access the machine running ArcGIS Server to browse its REST Services Directory or reach the web socket endpoint hosted by GeoEvent Server. Assuming that access to the machine from your mobile is not the problem, you might check the ArcGIS Server's SSL certificate configuration.
The web browser on your mobile may need to see something to validate the HTTPS (or in this case WSS) connection. Using the WebSocket.org echo test is a good resource to see if clients external to your ArcGIS Enterprise are able to connect to a web socket being run by the GeoEvent Server. Hope this information is helpful – RJ
Posted 08-05-2020 05:54 PM

DOC
Questions and Answers from the Expo Floor

What is ArcGIS GeoEvent Server?
GeoEvent Server is one of several license roles for ArcGIS Enterprise. You license and install GeoEvent Server to add real-time capabilities to your Enterprise. GeoEvent Server, out-of-the-box, provides configurable inputs for connecting to a variety of data streams from virtually any data provider.

What is a GeoEvent Service?
A GeoEvent Service is an event processing workflow you design and publish using the GeoEvent Manager web application. You drag, drop, and configure an input to specify how data will be received. You add and configure an output to specify what will be done with the data the GeoEvent Service processes. You then add one or more filters and processors to the service, connect the nodes to define an event processing workflow, and publish the GeoEvent Service. See Also: GeoEvent Server inputs, GeoEvent Server filters, GeoEvent Server processors, GeoEvent Server outputs

What's New in GeoEvent Server 10.8 / 10.8.1?
Significant work went into simplifying common user workflows, enabling users to maintain their focus on configuring and publishing GeoEvent Services. Most of what you need to do is now done from within the Service Designer rather than switching to a different web page in the GeoEvent Manager. The status, catalog, and options to create, edit, and delete primary service components (inputs, outputs, etc.) are now available from a consolidated Monitor page in the GeoEvent Manager. There is also a new GeoEvent Sampler utility built into the Service Designer allowing you to see the effect filters and processors you configure have on event records as they are received in real-time. For details, please see the PDF attached to this article.
Posted 07-10-2020 12:16 PM

POST
Hello Mariela Del Rio – When you say automatic handling of "variable number of points in the routes", I'm not sure I follow what you are asking for. Looking over the GeoEvent Definition illustrations in the thread, I don't see an attribute for a route or routes that is multicardinal (indicating the attribute is an array containing more than one value). Sara's reply to you on 10-February has a GeoEvent Definition whose event_time attribute looks kind of like a geoJSON geometry structure. The geoJSON specification for geometries places coordinates in an array. Our Generic-JSON inbound adapter includes a parameter asGeoJSON which can be used to tell the adapter to try parsing a field whose declared type is Geometry as a geoJSON geometry. The rest of the JSON structure is still handled as generic JSON, but any supposed geometry will be parsed differently, specifically so arrays of coordinate values can be interpreted as polylines or polygons. I'd have to experiment some with actual data from these feeds, but if this is what you're referring to we might be able to make something work for you without writing any code. If, on the other hand, you are looking for GeoEvent Server's inbound adapter to iterate over a variable number of JSON objects or discrete data values in an array ... that isn't something we have included in our product road map. GeoEvent Server's support for hierarchy and multicardinality is actually pretty limited and we drew the line quite some time ago at providing some sort of configurable iterator to iterate over a variable list of values in order to perform processing on each value. There are some really powerful things you can do using regular expressions and a series of Field Calculator processors. You might check out this thread for an example. Switching to your request for pagination for web service(s), this is a pretty broad topic.
There is no standard I'm aware of for how a given external web service might elect to communicate to a client that data retrieval should be performed as a series of queries rather than receiving all of the data as part of a single response to a single query. For example, the ArcGIS REST API for feature services specifies that a client should interpret a feature service returning exceededTransferLimit=true to mean that "there are more query results" and "you should continue to page through the results". GeoEvent Server is able to page through Esri's feature services when querying for a large number of feature records, but I'm not sure how we would implement a general solution for paging through any external web service's content. (I'm actually more familiar with the opposite, when a web service wants to return tens-of-megabytes of data to a client in response to a query and GeoEvent administrators ask how to configure GeoEvent Server to handle such a massive slug of data.) Your third ask, for automatic token renewal, is part of our product backlog. What we want to do is retire, at least, the obsolete "OAuth1" authentication support offered by inbound transports and implement new support for OAuth2 authentication, which I think would include automatic token renewal. Our last research spike into this showed that while OAuth2 is a standard, there are subtle ways a service author can define how the different authorization types are implemented. We can probably develop a connector which supports Okta's implementation of OAuth2, but the effort was put on hold. When discussing this problem with customers I tell them that this is one of the cleanest examples for recommending a custom transport that I can think of. There are a bunch of different ways a web service might ask a client to first authenticate and acquire a token so that a second request (with the token) can be made authorizing a client's request.
I don't recall any of my team helping to develop the custom inbound transport (or processor?) you refer to, but you are correct: if the development was completed using the 10.6 version of the GeoEvent Server Java SDK, you should be safe to continue using it for 10.7.x and 10.8.x ... we've no plans for breaking changes to the API which would require the custom component you had Esri build for you to be re-compiled. Hope this information is helpful – RJ
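The feature-service paging behavior described above can be sketched as a loop that advances resultOffset until the service stops reporting exceededTransferLimit. The fetch_page stub below stands in for an actual HTTP query (a real one would GET .../FeatureServer/0/query?f=json&resultOffset=N&resultRecordCount=M); the data is made up:

```python
def fetch_all_features(fetch_page, page_size=10):
    """Page through query results until exceededTransferLimit is absent/false."""
    features, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        features.extend(page["features"])
        if not page.get("exceededTransferLimit"):
            return features
        offset += len(page["features"])

# Stub standing in for the HTTP request, serving 25 fake records in pages of 10.
def fake_page(offset, count):
    data = list(range(25))
    chunk = data[offset:offset + count]
    return {"features": chunk,
            "exceededTransferLimit": offset + count < len(data)}

print(len(fetch_all_features(fake_page)))  # 25
```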
Posted 04-22-2020 04:35 PM

DOC
Very nicely presented Jake Skinner – I like the RegEx you used in the Field Calculator (Regular Expression). Your pattern match – (,[^,]{0}) – is very tight and well crafted. I do not use that variant of the Field Calculator as often as I use the "regular" Field Calculator with its replaceAll( ) function, which changes how I think about regular expression pattern matching. A regular Field Calculator also allows me to combine operations in a single expression. I tried combining the string replacement and length operations with a slightly different RegEx. I'm not sure that all of the Field Calculator's supported functions can be combined this way, but it appears to work in this case: length(replaceAll(DataString, '[0-9]{1,}[,]*', 'x')) The above RegEx matches one-or-more digits (zero to nine) with a trailing comma (if one exists) and accepts a final numeric identifier without a comma as the last in the delimited list. Each identifier is replaced with a single literal 'x' so that the length( ) function can simply count the number of x characters. Seems to work for empty string (""), single identifier ("12345"), and multiple identifiers ("689102,0256,651,74290") ... returning lengths of 0, 1, and 4 respectively. Null strings are a special case; tests I ran suggest that if GeoTagger found no points in a precinct's polygon and chose to assign a NULL value to the GeoTags field rather than an empty string (I can never remember which value it chooses), the combined expression will produce a NULL value, so your final filter – rather than checking count > 0 – could check NOT(count ISNULL), allowing you to also update districts with a count of zero. This might be important if a precinct was previously updated with n>0 crimes and things got quiet so the crimes count needed to be reduced back to zero.
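The combined expression is easy to verify outside GeoEvent Server; here it is reproduced in Python, with re.sub standing in for replaceAll( ) and len( ) for length( ):

```python
import re

# length(replaceAll(DataString, '[0-9]{1,}[,]*', 'x')) reproduced in Python:
# each identifier (digits plus an optional trailing comma) collapses to a
# single 'x', so the string length equals the identifier count.
def count_ids(data_string):
    return len(re.sub(r'[0-9]{1,}[,]*', 'x', data_string))

print(count_ids(""))                       # 0
print(count_ids("12345"))                  # 1
print(count_ids("689102,0256,651,74290"))  # 4
```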
Also, to cross-link to another topic/trick, if you do not want to carry the burden of mapping a bunch of fields whose values you do not intend to change (command_code, patrol_boro, sq_miles, category, etc.) you could use a Partial GeoEvent Definition with only two fields to update the count for each precinct feature record. Assuming precinct is your event record's TRACK_ID and unique feature identifier field, a GeoEvent Definition with only two fields (precinct and count) would protect against accidental changes to other fields by using precinct to identify which feature record to update and then only update the count for that precinct. Thanks for the write-up – RJ
Posted 03-25-2020 04:29 PM

POST
I agree with Eric Ironside, the problem described isn't a good fit for GeoEvent Server because every event record is generally considered atomic. I can't compute a sum or average value across several events without developing a custom processor to collect event records over a period of time and periodically dump a cache to compute a statistic. Taking a closer look at Simon Jackson's initial ideas, we could import the catchment polygons as geofences and then ingest and process the river gauge point features to get the name of the catchment AOI the points fall within (using a GeoTagger). But that only sets us up for some sort of post-processing operation external to GeoEvent Server; we've only used GeoEvent Server to enrich a feature record set of river gauges with an attribute identifying a catchment. Assuming neither the location nor the geometry of catchment polygons / river gauge points frequently changes, this becomes an infrequent batch operation, not a recurring real-time operation. Turning the problem inside out, we could import the river gauge points as geofences then ingest and process the catchment polygons. A GeoTagger could be used to enrich each polygon with a comma separated list of sensor identifiers. But again, we only get identifying names from GeoTagger – if we split the delimited list in order to use a Field Enricher to pull actual flow rates into each record, we're no better off than having ingested the point records in the first place ... we cannot rejoin the split delimited list to obtain metrics from several event records to compute an average. I suppose we could use a Field Calculator to append each river gauge's value to the gauge's name and use a stream service to broadcast these so that a geofence synchronization rule could actively update a set of geofences whose name provides both a sensorID and an observedValue ...
then, as part of a different GeoEvent Service, ingest and process the catchment polygons using a GeoTagger to collect all the "enhanced" geofence names into a comma delimited list. A regular expression, like Adam Repsher suggests, could then (maybe) pull apart the delimited list, extract each sensor's observed value from the "geofence name" and compute an average ... But this is hardly the elegant solution Simon is looking for. It's forced, brutish, and fragile as we now have two independent GeoEvent Services polling catchments and river gauges, updating geofences, and presenting us with inherent race conditions. If you really wanted to do this using GeoEvent Server, you would want to develop a custom processor that incorporated a timer. The processor would catch and hold a series of GeoTagged points as long as its timer had not expired. As the processor was collecting data, it could enter metrics from each river gauge into some sort of key/value structure in memory using the catchment identifier as the key. The processor's timer would reset as new gauge event records are received and its in-memory data structure updated. If the timer ever expired ... say, 30 seconds after not seeing any new data ... it would compute averages for all the catchments it had key values for and write-out updates to the catchment feature records. The burden here is that you have to use the GeoEvent Server's Java SDK to create a custom processor. I don't see anything elegant you can do with GeoEvent Server out-of-the-box to solve this problem. Hope this information is helpful – RJ
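The custom processor logic described above — cache gauge values keyed by catchment, reset an idle timer as data arrives, and emit averages once the timer expires — can be sketched in plain Python. The class and method names are illustrative, not GeoEvent Server Java SDK API:

```python
import time

class CatchmentAverager:
    """Sketch of a cache-and-timer processor: not a GeoEvent SDK component."""

    def __init__(self, idle_seconds=30):
        self.idle_seconds = idle_seconds
        self.values = {}                     # catchment id -> gauge readings
        self.last_seen = time.monotonic()

    def receive(self, catchment_id, flow_rate):
        self.values.setdefault(catchment_id, []).append(flow_rate)
        self.last_seen = time.monotonic()    # new data resets the idle timer

    def flush_if_idle(self):
        if time.monotonic() - self.last_seen < self.idle_seconds:
            return None                      # still receiving data; keep caching
        averages = {k: sum(v) / len(v) for k, v in self.values.items()}
        self.values.clear()
        return averages                      # would become feature record updates

averager = CatchmentAverager(idle_seconds=30)
averager.receive("catchment-1", 2.0)
averager.receive("catchment-1", 4.0)
# ... called periodically by a timer thread:
# averager.flush_if_idle()  # -> {"catchment-1": 3.0} after 30s with no new data
```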
Posted 03-24-2020 02:56 PM

POST
Also consider that as you collect copies of data files you intend GeoEvent Server to read from, you probably want to configure the GeoEvent Server's input to delete files it has read after they have been processed. It is expected behavior at the 10.7.x release (and I think 10.6.x as well) that if you stop a running input used to read data from a file, any file(s) you have in the watched folder will be re-read when the input is restarted. Same goes for server machine reboot. You probably do not want GeoEvent to re-read files following a restart ... You will still need to append a date/time string to the file name, to ensure the external process used to copy files does not, at any point, try to lock or overwrite a file GeoEvent Server is reading. There is no way to coordinate GeoEvent Server's read (or file delete when done reading) with the external file copy process; filename uniqueness is how you work around that. - RJ
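The date/time suffix recommended above can be generated something like this (the base name and extension are made up; the point is that each copied file gets a unique name the input can read and delete independently):

```python
import datetime

# Append a timestamp so the external copy process never overwrites or locks
# a file GeoEvent Server's input is currently reading.
def unique_name(base, ext):
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{base}_{stamp}.{ext}"

print(unique_name("vehicle_positions", "csv"))
# e.g. vehicle_positions_20200323_110500.csv
```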
Posted 03-23-2020 11:05 AM

POST
Hello William – To add to what Brad says above, there are two approaches to a multi-machine GeoEvent Server deployment. The first I refer to as the 'site' approach, the other as the 'silo' approach. The 'site' approach deploys multiple ArcGIS Server instances, each with a GeoEvent Server, in a single ArcGIS Server site. The GeoEvent Gateway is utilized more heavily in this configuration as the component responsible for event record distribution across the site. The 'silo' approach relies on an external broker or load balancing component such as Apache Kafka for message distribution. In this approach you are essentially configuring multiple independent GeoEvent Server instances and taking on the challenge of routing a portion of the inbound data you need to process to different instances. Brad mentioned one tutorial, GeoEvent Server 10.6.x Multiple-Machine Site, which covers the 'site' approach. There is another tutorial, GeoEvent Server Resiliency, which covers the 'silo' approach. There are pros and cons to both the 'site' and 'silo' approach. When choosing one over the other you should carefully consider your specific objectives – resiliency, scalability, fault-tolerance, reliability. Architects need to decouple these specific objectives from a more generic "high-availability" objective. Brad is correct that with the introduction of the GeoEvent Gateway in the 10.6.x release architects have the option to follow a 'site' deployment and allow GeoEvent Server to handle machine fail-over when a machine participating in a site fails. We've found on the product team, however, that when a multiple machine approach is necessary, accepting the technical debt of learning how to deploy, configure, and administer Apache Kafka and ZooKeeper and following a 'silo' deployment model gives administrators better visibility into operational failures and more control over recovery.
For this reason, more than any other, I am more comfortable recommending a 'silo' approach over the 'site' approach. I have written up some thoughts and advice on resiliency, scalability, reliability, high availability, and pros / cons to consider when taking on a multiple machine deployment. Brad or I can share this with you if you schedule some time with one of us to discuss your approach, concerns, and objectives. I will offer that, realistically, folks who are happy with GeoEvent Server are those who are able to get what they need out of a single machine deployment; folks who are unhappy with GeoEvent Server are those who try to architect solutions which push the technology on which GeoEvent Server was built beyond what it’s able to do by trying to design “highly available” solutions with multiple machines. Customers looking for more resilient solutions with auto-scaling and built-in fault-tolerance are encouraged to look at the new ArcGIS Analytics for IoT – a SaaS offering for ArcGIS Online. Its architecture and implementation are completely different from GeoEvent Server. You can read more about ArcGIS Analytics for IoT at the following links: ArcGIS Analytics for IoT Overview Blog: Introducing ArcGIS Analytics for IoT On-Line Help -- What is ArcGIS Analytics for IoT? Moving forward, the GeoEvent Server product team is not going to be recommending multiple machine deployments. We won't be taking away an architect's options to deploy using a 'site' or 'silo' approach, but we will be encouraging customers who have needs beyond what a single instance of GeoEvent Server can support to consider the ArcGIS Analytics for IoT SaaS offering over the on-premises ArcGIS GeoEvent Server. Hope this information is helpful – RJ
Posted 03-10-2020 10:24 AM