POST
You do have a few options available, however, without relying on an ODBC / JDBC connector. Here's another trick that a customer described to me: if you know an RDBMS wizard, ask them to help you create a spatial view of your data. ArcGIS can use a spatial view in the same way it uses a feature class, so you can publish a spatial view as an ArcGIS feature service. This can provide you with a great deal of flexibility. You have complete control over your spatial view, such as the joins you want to perform with other tables in your database, or limiting the data exposed through the view to the "most recent five minutes". The feature service will appear to update automatically to display the most recent data - but the back-end database is doing all the work. And you now have the ability to query the feature service using endpoints exposed through the ArcGIS REST Services Directory. I'm not an RDBMS wizard and thought this approach was absolutely brilliant. - RJ
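As a minimal sketch of querying such a feature service through the REST endpoint, the snippet below builds a time-filtered query URL. The endpoint URL and the LAST_UPDATE field name are assumptions for illustration, not part of any real service:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

# Hypothetical endpoint for a feature service published from a spatial
# view (the URL and the LAST_UPDATE field name are assumptions).
base_url = ("https://example.com/arcgis/rest/services/"
            "SpatialView/FeatureServer/0/query")

# Where clause approximating "the most recent five minutes".
cutoff = datetime.now(timezone.utc) - timedelta(minutes=5)
params = {
    "where": "LAST_UPDATE >= TIMESTAMP '%s'"
             % cutoff.strftime("%Y-%m-%d %H:%M:%S"),
    "outFields": "*",
    "f": "json",
}
query_url = base_url + "?" + urlencode(params)
print(query_url)
```

If the view itself already restricts rows to the last five minutes, the client's where clause can simply be `1=1`; the database does the filtering either way.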
08-07-2015 11:06 AM | 3 | 0 | 3502

POST
Hello Allen - The ‘Poll an ArcGIS Server for Features’ input assumes that the Geometry for your events will come from the features you are polling, so that particular input does not offer a ‘Construct Geometry From Fields’ configuration option. I'd recommend you try the ‘Poll an External Website for JSON’ input instead. That input connector uses the product's generic JSON adapter (vs. the Esri Feature JSON adapter used by the ‘Poll an ArcGIS Server for Features’ input). When configuring your ‘Poll an External Website for JSON’ input you will have to specify the URL that GeoEvent should poll, including all of the parameters required to poll an Esri feature service as generic JSON. The easiest way to see what these parameters are is to use the ArcGIS REST Services Directory to query your feature service, then copy/paste the URL generated for your query and examine the parameters. Many of the parameters will be unspecified (for example: &text= &objectIds= &time=) and can be omitted. For example:

http://localhost:6080/arcgis/rest/services/SampleWorldCities/MapServer/0/query?where=1%3D1&geometryType=esriGeometryEnvelope&spatialRel=esriSpatialRelIntersects&outFields=*&returnGeometry=false&returnTrueCurves=false&returnIdsOnly=false&returnCountOnly=false&returnZ=false&returnM=false&returnDistinctValues=false&f=json

You would specify the above query, as a single line of text, as the ‘Poll an External Website for JSON’ input's URL property. You will also need to specify features (no quotes) for the input's ‘JSON Object Name’ parameter so the input knows to look for an array containing features and bring each feature in as a separate event. In my example above, I elected to specify returnGeometry=false since you indicated that you intend to have the input use coordinate values returned from the feature query to construct a Point Geometry.
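If it helps to see how the single-line URL is assembled from its parameters, here is a small sketch using Python's standard library. The parameter names follow the feature layer query example above; only the encoding step is new:

```python
from urllib.parse import urlencode

# Assemble the single-line query URL from the parameters shown above
# (parameter names follow the ArcGIS REST API 'query' operation).
base = ("http://localhost:6080/arcgis/rest/services/"
        "SampleWorldCities/MapServer/0/query")
params = {
    "where": "1=1",
    "geometryType": "esriGeometryEnvelope",
    "spatialRel": "esriSpatialRelIntersects",
    "outFields": "*",
    "returnGeometry": "false",
    "returnTrueCurves": "false",
    "returnIdsOnly": "false",
    "returnCountOnly": "false",
    "returnZ": "false",
    "returnM": "false",
    "returnDistinctValues": "false",
    "f": "json",
}
url = base + "?" + urlencode(params)
print(url)
```

Note that urlencode percent-encodes reserved characters for you - the where clause `1=1` becomes `1%3D1`, exactly as in the URL the REST Services Directory generates.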
Hope this information helps - RJ
08-07-2015 10:44 AM | 1 | 1 | 1043

BLOG
Morakot has been keeping a blog under his GeoNet user-id. I'm going to reference a recent blog post of his, and will probably begin re-posting his content here, in the GeoEvent product's blog. How to Create Temporal Filter in GeoEvent - RJ
08-06-2015 11:20 AM | 0 | 0 | 1643

POST
Hello Michael - A stream service output's 'Related Feature' property is a URL which is passed along to clients subscribing to the stream service. GeoEvent does not actually do anything with this URL (other than pass it along to clients). Clicking 'Subscribe' on the stream service's HTML page in the ArcGIS REST Services Directory, for example, will not show you any enrichment. The intent is that a real-time data stream, rich with attributes but missing a Geometry, can link to a related feature service which contains *only* the location (Geometry) for sensors in a network so that the real-time sensor data can be displayed on a web map. I've attached a PDF with several steps which illustrate how the 'Related Features' capability is supposed to work when using Stream Services. I shared these with you off-forum already ... I'm posting them here for the benefit of the community. Hope this information helps - RJ
08-05-2015 06:03 PM | 1 | 0 | 1652

POST
Hello Thibaut - Unfortunately it is not possible to configure the rollback-on-failure behavior for either the 'Add a Feature' or 'Update a Feature' output connectors. This behavior is part of the underlying transport's implementation and is not exposed to the user. The situation you describe, in which a transaction containing 100 feature records includes one record which violates a constraint imposed by the feature service (for example, a string whose length exceeds what is allowed for a particular field, or a <null> value pushed into a field whose specification indicates nullable: false), will cause the entire transaction to roll back ... and none of the event data in that transaction will be applied to add or update features. You have two options:
1) Use GeoEvent filters to screen for event attributes which do not satisfy your feature service's constraints. Then use a Field Calculator to write compliant data into the event -- or allow the Filter to discard the event so that it is not included in an add/update features transaction.
2) Adjust the 'Maximum Features Per Transaction' property on your 'Add a Feature' or 'Update a Feature' output connector from the default 500 to a lower number. That way, when a transaction is rolled back, you do not lose as much event data. If you know that event data from a provider often contains values your feature service will consider invalid, you can set 'Maximum Features Per Transaction' to 1. Understand that limiting GeoEvent to exactly one event per transaction with the database will increase transaction overhead and limit the number of events you can add/update each second.
Hope this information helps - RJ
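The screening in option 1 amounts to checking each event against the feature service's constraints before it joins a transaction. Here is a minimal sketch; the field names and limits (name length 50, a non-nullable status field) are made-up stand-ins for whatever your service actually enforces:

```python
# A sketch of the kind of screening a GeoEvent Filter performs: drop
# records that would violate hypothetical feature service constraints
# before they join an add/update transaction.
MAX_NAME_LEN = 50          # assumed field length
NON_NULLABLE = {"status"}  # assumed nullable: false fields

def is_valid(record):
    if len(record.get("name", "")) > MAX_NAME_LEN:
        return False
    if any(record.get(field) is None for field in NON_NULLABLE):
        return False
    return True

events = [
    {"name": "OK sensor", "status": "active"},
    {"name": "x" * 80, "status": "active"},   # name too long
    {"name": "No status", "status": None},    # null in non-nullable field
]
clean = [e for e in events if is_valid(e)]
print(len(clean))
```

A Field Calculator could instead repair the offending value (truncate the string, substitute a default status) rather than discard the event, depending on what your workflow can tolerate.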
08-05-2015 05:01 PM | 1 | 1 | 1038

POST
Shane - I've moved your question to GeoNet's space for GIS > Enterprise GIS > ArcGIS for Server discussions. I think your question on scripting geodatabase administration is better positioned here than in the GeoEvent area. Best Regards - RJ
08-05-2015 04:35 PM | 0 | 0 | 440

POST
Marc - Is it possible that your system has a locale other than 'English'? Numeric values in the simulation file use ‘.’ as the decimal separator, rather than the ‘,’ that is typical in many European locales. The GeoEvent Extension will assume your server's locale if the 'Language for Number Formatting' parameter is unspecified in the TCP/Text input you've configured to receive the simulated data. If this is the case, you need to specify en_US for the 'Language for Number Formatting' parameter (see attached illustration). Module 2 in the Introduction to GeoEvent tutorial was updated July 30th (2015) to clarify this. Please download the latest tutorial and refer to pages 29 - 31, which discuss configuring the TCP/Text input for the exercise. Hope this information helps - RJ
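To see why the decimal separator matters, here is a quick illustration: a coordinate written with a '.' parses as a number, but the same value written with a ',' (as many European locales do) fails under a '.' assumption. The coordinate values are arbitrary examples:

```python
# Why the decimal separator matters: "48.85" parses as a number, but a
# locale that writes "48,85" will not, unless the parser is told which
# number formatting language to expect.
value_us = "48.85"
value_eu = "48,85"

print(float(value_us))  # parses cleanly

try:
    float(value_eu)
    parsed = True
except ValueError:
    # Roughly what happens when the input's locale assumption does not
    # match the data: the coordinate value fails to parse.
    parsed = False

print("parsed European-format value:", parsed)
```

Setting the input's Language for Number Formatting to en_US is the GeoEvent equivalent of telling the parser which of these two conventions the simulation file uses.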
08-04-2015 06:19 PM | 2 | 1 | 873

POST
The http://brgov.com/reports/public/brtrxml.xml feed provides the current date as an attribute of the trafficincidentlist structure. Since each inbound traffic record only specifies a local time, it is difficult for GeoEvent to determine a proper epoch timestamp in milliseconds to associate with each event. We can work with this. By polling the same feed, in a different way, we can cache the header information in a feature service's feature class, and then use that cache to enrich each of the individual events we receive from the feed. Consider the attached GeoEvent Definition illustration. This event definition specifies that only the trafficincidentlist.time and trafficincidentlist.date attributes should be pulled from the feed. Further, the trafficincidentlist.time is to be handled as a Date, not as a String. If we configure a new ‘Poll an External Website for XML’ input, we can poll the same feed. This time, however, I leave the 'XML Object Name' property unspecified, so this new input will retrieve the base date for every traffic incident. I need to specify an ‘Expected Date Format’ so that the input knows not to expect a time value as part of the received date string. (Refer to the attached illustration of the ‘Poll an External Website for XML’ input.) I can then incorporate this second input into my GeoEvent Service, using a Field Mapper to map the retrieved Date value to a schema consistent with a feature class in my geodatabase. I use a Field Calculator to hard-code the URL being polled for data as a TRACK_ID for these "features". I use an ‘Update a Feature’ output to update the base date, in epoch milliseconds, as a feature in the feature class. This updates our cached epochbase attribute as a feature in a feature service. (Refer to the illustration detailing the use of the Field Mapper and Field Calculator with the fs-out output updating the feature service.)
Now we can retrieve the epochbase attribute from the feature service and use it to enrich each incoming traffic event from the city's feed. To do this, we add additional fields (feedidentifier, hours, and minutes) to our BatonRouge-TrafficIncidents event definition. These fields are not provided by the feed, but adding them to the event definition provides us with attributes into which we can write data. We also add a Field Enricher to enrich the incoming traffic events with the Date from the feature service ... and a Field Calculator to extract the hours and minutes as substrings from each event's time attribute. An additional Field Calculator can then be used to add the hours and minutes to the epochbase, creating a fully qualified date/time value for each event. (Refer to the illustration ‘Final GeoEvent Service’.) Admittedly, this is a lot to go through to pull information from an XML feed's header and incorporate it into individual event records extracted from a feed's list. But it illustrates several concepts of what you can do using a combination of processors to manipulate data obtained from a feed. Also, it appears that the GeoEvent input, configured to expect only MM/dd/yy for the Date, assumes that the date must be local. But when the Date is written to a feature service, clients retrieving the value will assume that it has been expressed in epoch milliseconds GMT, so displaying the manufactured date in a web map will probably be artificially offset from GMT to your server's local time. To complete the solution we really should correct the locally reported Baton Rouge time by adding 5 hours, pushing the value forward to represent it as a GMT / UTC value when caching it in a feature service. Hope this information is helpful - RJ
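The date/time arithmetic described above can be sketched in a few lines. The epochbase value below is an assumed cached value (midnight August 5, 2015 UTC) and the event time is a made-up sample; the 5-hour shift reflects Baton Rouge being on Central Daylight Time (UTC-5) at the time:

```python
# A sketch of the enrichment arithmetic: take a cached base date
# (epoch milliseconds at midnight), add an event's hours and minutes,
# then shift the local Baton Rouge time forward to UTC.
MS_PER_HOUR = 3600 * 1000
MS_PER_MIN = 60 * 1000

epochbase = 1438732800000   # assumed cached value: 08-05-2015 00:00 UTC
event_time = "14:35"        # sample local time reported by the feed

hours, minutes = (int(part) for part in event_time.split(":"))
local_ms = epochbase + hours * MS_PER_HOUR + minutes * MS_PER_MIN

# Baton Rouge was on CDT (UTC-5), so push the value forward 5 hours.
utc_ms = local_ms + 5 * MS_PER_HOUR
print(utc_ms - epochbase)  # 70500000 ms = 19 h 35 m past midnight UTC
```

In the actual GeoEvent Service the substring extraction and addition would be expressions in the two Field Calculators, but the arithmetic is the same.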
08-04-2015 06:01 PM | 3 | 0 | 2313

POST
Hello William - I had no problems bringing in your traffic incidents feed from the City of Baton Rouge. I had to remove the input's default 'application/xml' specification from my configured GeoEvent input's 'Acceptable MIME Types' parameter. I also had to specify trafficincident as the 'XML Object Name' so that the input would know how to extract individual events from the feed's structure. I cannot be sure from the feed whether using the location_number attribute as a TRACK_ID is appropriate. Identifying a field within the feed's data which can be used to uniquely identify each event will be fairly important. The input is receiving all available event records every polling cycle, and if we elect to broadcast the event data out a stream service, or update features in a feature service, we'll need a TRACK_ID in order to visualize the data properly on a map. Because the feed's structure includes the date as an attribute of the trafficincidentlist structure, rather than as part of each trafficincident item, we have a small challenge to overcome. More on that in a moment. Attached are illustrations of my configuration of the ‘Poll an External Website for XML’ input and the GeoEvent Definition I configured my input to use. I allowed the input to create an initial GeoEvent Definition for me, then copied the generated one and configured my input to use my copy of the event definition. This is a fairly simple best practice. Hope this information helps - RJ
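To make the 'XML Object Name' behavior concrete, the sketch below parses a miniature stand-in for the feed's structure. The element names match the feed, but the attribute values here are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A miniature stand-in for the Baton Rouge feed's structure (element
# names match the feed; the values here are made up).
sample = """
<trafficincidentlist date="08/04/15" time="15:18">
  <trafficincident><location_number>101</location_number></trafficincident>
  <trafficincident><location_number>102</location_number></trafficincident>
</trafficincidentlist>
"""

# Specifying 'trafficincident' as the XML Object Name tells the input
# to treat each of these elements as a separate event, much like this:
root = ET.fromstring(sample)
incidents = root.findall("trafficincident")
print(len(incidents))
print([i.findtext("location_number") for i in incidents])
```

Note that the date and time live on the outer trafficincidentlist element, not on each trafficincident, which is exactly the challenge discussed above.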
08-04-2015 03:18 PM | 2 | 0 | 2314

BLOG
Hey All - Normally the GeoEvent Simulator loads comma-separated text from a simulation file and allows you to send this data to a GeoEvent Server tcp-text-in input to simulate a real-time data feed. Sample data in the tutorials either represents a point geometry as a quoted pair of X,Y coordinate values (e.g. "-75.175,39.991") ... or provides the coordinates of a point location as separate X and Y attribute values which the input can take and use to construct a geometry. But what if you want to simulate a dynamic polygon, such as a series of forecast areas affected by a storm? Here's a trick you might find handy: you do not have to use a comma to delimit your event attributes in a simulation file. Valid Esri Feature JSON can be placed in a simulation file, and a GeoEvent Server input can be configured to interpret the JSON as a geometry. Consider the following two lines of simulation input:

"AA-1234";"12/24/2015 23:59:59";{ "rings": [ [ [-75.175, 39.991], [-75.173, 39.991], [-75.173, 39.99], [-75.175, 39.991], [-75.175, 39.991] ] ], "spatialReference": { "wkid": 4326 } }

"BB-7890";"02/15/2015 12:34:56";{ "rings": [ [ [-8368449.66, 4864715.92], [-8368263.15, 4864676.62], [-8368272.04, 4864618.25], [-8368459.87, 4864645.20], [-8368449.66, 4864715.92] ] ], "spatialReference": { "wkid": 102100, "latestWkid": 3857 } }

In both examples I am sending GeoEvent Server a JSON string representation of a geometry using the Esri Feature JSON format for a polygon geometry. Please refer to the ArcGIS Developers on-line documentation for the JSON spec and samples of Point, Multipoint, Polyline, and Polygon geometries. Notice that I have chosen to separate the event attributes using a semi-colon rather than a comma. Since both commas and literal quotation marks are part of the Esri Feature JSON syntax, using a semi-colon as the field delimiter simplifies my simulation file considerably.
It allows me to keep the required quotes and commas without having to escape or quote them as string literals. I'm free to quote the other attributes of the simulated event; in the examples above, I've quoted my TRACK_ID and my TIME_START values, though I probably do not need to. Also notice that each geometry string includes the coordinate system associated with the coordinate values. The first event uses the WGS 1984 Geographic Coordinate System (its coordinate values are expressed in decimal degrees). The second event uses the Web Mercator Aux Sphere Projected Coordinate System (its coordinate values are expressed in meters). Attached are illustrations of the GeoEvent Definition I configured my 'Receive Text from a TCP Socket' input to use. The input is still responsible for adapting the delimited text it receives from the GeoEvent Simulator, so it needs to know what characters to expect for the message separator and attribute separator ... and it relies on an event definition to tell it that the third attribute should be interpreted as a Geometry. If you try this and run into problems, let me know. There may be limits on the raw number of bytes you can pass over a TCP socket, or on how many messages of a given size you can load into the GeoEvent Simulator and send each second, so it is probably best to simplify string representations of your geometries when including JSON in simulated event data. Hope you find this information useful - RJ

(Attached: GeoEvent Definition and GeoEvent input configuration illustrations.)
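A quick sketch of why the semi-colon delimiter works: splitting a simulation line on ';' leaves the commas and quotes inside the Feature JSON untouched, so the geometry field parses as valid JSON. This reproduces the first sample line from the post:

```python
import json

# Split a semicolon-delimited simulation line whose third field is an
# Esri Feature JSON polygon geometry (first sample line from the post).
line = ('"AA-1234";"12/24/2015 23:59:59";'
        '{ "rings": [ [ [-75.175, 39.991], [-75.173, 39.991], '
        '[-75.173, 39.99], [-75.175, 39.991], [-75.175, 39.991] ] ], '
        '"spatialReference": { "wkid": 4326 } }')

# Split on the attribute separator at most twice, so the commas inside
# the JSON geometry are left untouched.
track_id, time_start, geometry_json = line.split(";", 2)
geometry = json.loads(geometry_json)

print(track_id.strip('"'))
print(geometry["spatialReference"]["wkid"])
```

Had the file used commas as the delimiter, the split would have cut straight through the coordinate pairs, which is exactly the problem the semi-colon avoids.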
07-31-2015 06:13 PM | 1 | 0 | 3307
POST
Please be aware that when using the Twitter for GeoEvent connector you are registering with Twitter as a developer, are using their public API, and are subject to Twitter's terms and conditions. For example, the Twitter input for GeoEvent is only going to receive a sample of the tweets being sent by Twitter's users, since that is what their public API provides to developers. The GeoEvent team conducted an extended proof-of-concept to demonstrate that, if the Twitter connector is configured to follow a specific list of registered users, you *will* receive all of the tweets from those users. The "sampling" is enforced as a threshold, so if you follow a dozen specific people, that group is unlikely to be tweeting above the threshold established by the public API. If you were to filter all tweets for a currently trending (popular) hashtag, on the other hand, you would likely receive only a sample of the tweets being sent. Additional information is in the following FAQ. - RJ
07-30-2015 06:16 PM | 1 | 0 | 1120

POST
Hello Darryl - Did you solve your issue regarding upgrading your instance of GeoEvent? I was a little confused by your original post. GeoEvent was introduced at the ArcGIS 10.2 release ... it wasn't available at 10.1. I would recommend exporting your GeoEvent configuration as an XML file so that you have a backup of your inputs, outputs, GeoEvent Services (etc.) and then uninstalling GeoEvent. Uninstalling the extension doesn't remove the product's configuration store; I'd go ahead and delete it manually after you've uninstalled the extension. You can find it in the folder C:\ProgramData\Esri\GeoEvent After installing the 10.3.1 product release you'll find that one of the new features is the ability to selectively import items from a previously exported XML. You could try importing just an input to see that data can be received, then import the GeoEvent Service which incorporates that input to verify that the processing and event output work as they did previously. Import your remaining items and validate that they all work. When working with custom components, you usually load those into the GeoEvent framework using GeoEvent Manager, which copies the JAR files into the C:\Program Files\ArcGIS\Server\GeoEvent\deploy folder. In Manager navigate to:
Site > Components > Transports and click 'Add Local Transport'
Site > Components > Adapters and click 'Add Local Adapter'
Site > Components > Processors and click 'Add Local Processor'
If these are custom components you created yourself, you'll probably need to make sure they have been re-compiled using the 10.3.1 product's SDK before loading them into the newly installed product framework. You will need to reload any custom components - only their configurable properties are in the product XML. Some of the older 10.2 custom components suffer from an issue in which the Java namespace in the JAR refers to com.esri.ges.<something> ... which interferes with product upgrade.
If you find that a custom component appears to load, but you cannot access its properties to change its configuration, that is a good indication that you've encountered this issue. When custom components have been loaded into the framework, changes are made to the product's runtime files (C:\Program Files\ArcGIS\Server\GeoEvent\data). You want to be sure that the GeoEvent Windows service has been stopped and that you've deleted the files in this ...\data folder, to make sure that all traces of the custom component have been removed. (A product uninstall should have taken care of this; I only mention it so that you're aware of what you need to do when removing a custom component from an installed instance of the extension.) Simply deleting the JAR file from the ...\GeoEvent\deploy folder should be sufficient, but I usually take the extra step of stopping GeoEvent and deleting its ...\GeoEvent\data folder. Hope this information helps - RJ
07-30-2015 05:59 PM | 1 | 0 | 895

POST
Hey Jay - On a related question, why does the Update a Feature output leave large gaps in the OBJECTID sequence when it creates new features? In a class where the highest existing OBJECTID was less than 10, the features GeoEvent created started with OBJECTID 810. Many times, when using the 'Add Features' or 'Update Features' capabilities on a feature service, I have observed that one set of edits will use OBJECTIDs 1, 2, 3, 4 ... and a second set of edits will jump to using OBJECTIDs 401, 402, 403, ... This is actually by design, and has nothing to do with the GeoEvent Extension. My understanding is that this is an attempt by ArcGIS Server to avoid conflicts when multiple contributors (or multiple web applications) are making edits to a feature class through a feature service's REST endpoints. When ArcGIS Server detects, somehow, that a second editor is making edits, that second editor is assigned a block of IDs (401, 402, 403, ...) so that the features they add and update will not interfere with requests a previous editor is likely still making to add and update features. You can easily see this if you add a feature service's layer to a web map, begin editing, then in a different window POST an Esri Feature JSON record to the feature layer's 'Add Features' endpoint, then return to the web map and continue adding/editing features. This is the primary reason why we introduced date/time support when polling a feature service for incremental updates. We had assumed (wrongly) that if a GeoEvent input were to cache the last (largest) OBJECTID retrieved we would be safe in querying for OBJECTIDs greater than our largest observed value. What we found was that we would often miss features that the first contributor had created; once we began querying in the 401, 402, 403, ... range we would never pick up features created in the 5, 6, 7, 8 ... range of OBJECTIDs. - RJ
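The failure mode described above can be illustrated with a toy sketch. The ID values and the poll function are invented for illustration; the point is only that polling by "largest OBJECTID seen" goes blind to lower ID blocks:

```python
# A toy illustration of why polling by "largest OBJECTID seen" misses
# features once ArcGIS Server hands a second editor a higher ID block.
editor_a_ids = [1, 2, 3, 4]          # first editor's features
editor_b_ids = [401, 402, 403]       # second editor's allocated block

last_seen = 0
retrieved = []

def poll(new_ids):
    """Query for OBJECTID > last_seen, as the old input logic did."""
    global last_seen
    hits = [i for i in new_ids if i > last_seen]
    retrieved.extend(hits)
    if hits:
        last_seen = max(hits)

poll(editor_b_ids)   # second editor's block happens to arrive first...
poll(editor_a_ids)   # ...so the first editor's features are never seen

print(retrieved)     # IDs 1..4 are lost
```

Polling by a last-edited date/time field avoids this entirely, because timestamps advance monotonically regardless of which ID block an editor was assigned.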
07-30-2015 04:56 PM | 0 | 0 | 1799

POST
Hello Jay - These feature classes are versioned, and the feature service is pointing to a child version. Don't do that. The add / update feature capability being performed by GeoEvent has never supported versioned geodatabases.
07-30-2015 04:41 PM | 0 | 0 | 1799