POST
Hi Tom (tweitzel), I'm fairly certain what you want to do is not possible without a custom output. I tried experimenting with the syntax GeoEvent uses when referring to field values inside geoevents (${FIELD_NAME}). While I was able to set this in my output URL (I had to escape the three special characters), the special characters were interpreted literally by the HTTP transport.

I had my JSON/HTTP output URL set to:

http://<server_name>:8080/REST-receiver/webresources/Items/%24%7bID%7d

In my REST server app's log, I was getting:

id= ${ID}
id= ${ID}

whereas if I called my server app via REST directly (via Chrome Poster), I would get:

id= 99-001
id= { "ID":"99-001", "Latitude":33.58202, "Longitude":-117.1234 }

So it appears that the HTTP transport sends its byte payload to the literal URL specified for "URL:" and does not attempt field substitution the way some other transports (e.g., SMTP) do. Mark
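P.S. For anyone who ends up writing that custom output: the missing piece is substituting ${FIELD} tokens in the URL with the corresponding field values from each geoevent before the request is sent. Below is a minimal, illustrative sketch of that substitution only; the class, field names, and URL are placeholders I made up, not anything from the GeoEvent SDK.

import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: not a GeoEvent SDK type.
public class UrlTokenSubstitution {
    // Matches ${FIELD_NAME} tokens in a URL template.
    private static final Pattern TOKEN = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replaces each ${FIELD_NAME} token with the matching value from the geoevent's fields.
    static String substitute(String urlTemplate, Map<String, String> fields) {
        Matcher m = TOKEN.matcher(urlTemplate);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = fields.get(m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(value == null ? "" : value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String template = "http://myserver:8080/REST-receiver/webresources/Items/${ID}";
        System.out.println(substitute(template, Map.of("ID", "99-001")));
        // -> http://myserver:8080/REST-receiver/webresources/Items/99-001
    }
}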
Posted 01-03-2017 02:23 PM

POST
mikkeloinformi-dk-esridist Generally speaking, individual geoevents "stand alone". GeoEvent Server (the new name at 10.5) may process a single geoevent, and then not another for seconds, minutes, months, or longer. Without introducing state into the mix, these geoevents don't know anything about any other geoevents.

To do what you want, you need to persist geoevent data somewhere so that it can be compared against other geoevents at a later time. Generally speaking, you have the following options for persisting geoevent data: an in-memory cache, a file, or a feature service. I've worked with cache-aware processors and with persisting data to feature services in this context. If you want more information, let me know.
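To make the "cache-aware processor" idea concrete, here is a minimal sketch of the kind of state such a processor keeps: an in-memory map from a track identifier to the last value seen, so a new geoevent can be compared against the previous one. This is illustrative only; the class and method names are my own, not the GeoEvent SDK API.

import java.util.Date;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: not a real GeoEvent SDK processor.
public class LastSeenCache {
    private final ConcurrentHashMap<String, Date> lastSeenByTrack = new ConcurrentHashMap<>();

    // Returns true if this event's timestamp is newer than the last one cached for the track,
    // and remembers it; otherwise leaves the cache alone.
    public boolean isNewerThanLast(String trackId, Date eventDate) {
        Date previous = lastSeenByTrack.get(trackId);
        if (previous == null || eventDate.after(previous)) {
            lastSeenByTrack.put(trackId, eventDate);
            return true;
        }
        return false;
    }
}

The same comparison can instead be backed by a file or a feature service, which trades speed for persistence across restarts.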
Posted 01-03-2017 10:58 AM

POST
Maarten Tromp, You are on the right path with your idea. I have what I think you're looking for. Here is a sample geoevent in XML:

<root>
  <GE_ID>1</GE_ID>
  <GE_X>-92.47439554</GE_X>
  <GE_Y>35.08497411</GE_Y>
  <GE_ORDER>first</GE_ORDER>
  <GE_DATE>12/29/16 15:26:00</GE_DATE>
</root>

The 'GE_' prefix is to help identify the fields that came from the raw event. You'll see how this comes in handy in a moment. Here's a screenshot of the feature class I'm serving out via a feature service: [screenshot]

My GeoEvent Service only emits a new feature if the geoevent's date is after the date that was in the database for that particular track. Here's a screenshot of the GeoEvent Service: [screenshot]

Let's break it down. The green input is a surrogate for what you have. It's a "Receive XML on a REST Endpoint" instance, set up like so: [screenshot] I use Chrome Poster to submit a geoevent to the REST endpoint of the input. For each geoevent I want to send, I update the value for GE_DATE in the XML, and optionally the X and/or Y if I want to see the point move on a map (if it happens to be a geoevent that's "after" the last one). Each new geoevent I submit this way is a surrogate for a new XML file in your case.

The first processor is a Field Enricher called "get existing date". It retrieves the date of the latest feature in the database: [screenshot] The properties are presented rather randomly, so I'll explain each one in a logical order:

1. This Field Enricher connects to the local ArcGIS Server ("ArcGIS Server Connection: Default"), and
2. looks for a feature service called "latest_feature".
3. It will look for the first layer in that feature service ("Layer: 0").
4. It will attempt to fetch the date value ("Enrichment Fields: THE_DATE") from that layer for the feature whose "TRACKID" value matches the geoevent's "GE_ID" value.
5. After the date is fetched from the database, its value will be appended to the geoevent in a new field ("Target Fields: New Fields").
6. Because appending this value alters the GeoEvent Definition (i.e., the schema), the Field Enricher will emit a geoevent with a new GeoEvent Definition ("New GeoEvent Definition Name: SDE_DBO_latest_feature-with-last-date").

The next processor is simpler; its job is to cast the geoevent's date value from a Date data type to a Long data type. This is required in order to do simple "date math". There are no "isAfter" or "isBefore" type operations in GeoEvent, so instead we just do some simple math on dates as numbers. That calculation doesn't happen in this step; this step simply prepares the geoevent's date for that operation. This processor is called "ge date as long" and looks like: [screenshot] It simply takes the value in GE_DATE and places it into a new field called "GE_DATE-as-long". Because this new field is of data type Long, this casts the date from its original Date data type to a Long data type. Just as the Field Enricher modified the GeoEvent Definition (i.e., the schema), this Field Calculator modifies the schema again. Because of this, we must specify the name of the new GeoEvent Definition that will be emitted. That GeoEvent Definition's name is "xml-enriched-with-date-as-long".

The next processor is exactly the same in purpose, but for the date that came from the feature service layer: [screenshot]

The next component is a filter. This is what filters out any geoevents whose date is not after the date of the last one written to the feature class. It looks like this: [screenshot] or, in the actual editing UI, it looks like this: [screenshot]

Finally, for any geoevents that do meet the filter's criteria, the geoevent's GeoEvent Definition must be mapped to the GeoEvent Definition of the feature service using a Field Mapper processor. In my case, it looks like this: [screenshot] or, in the editing UI, it looks like this: [screenshot]

So with that GeoEvent Service published, any geoevent I send will overwrite the last-written one ONLY if its date is after that last-written one. I can verify this simply by looking at the data in the feature class, but I can also see it on a map as a moving dot. To see the dot move, though, I have to update the X and/or Y in addition to the newer date value.

Note: I do not know how this particular configuration will behave on the very first geoevent that's sent. If you already have a feature service with features, you should be OK. If you have a blank one, you may want to temporarily unhook the filter from the process flow of the GeoEvent Service, write one feature, then re-hook the filter. Or you could manually add a feature. Otherwise, I'm not sure how a null date will be returned/handled by the Field Enricher when there isn't a record retrieved. You could also incorporate a filter clause looking for null or NOT null to catch that case. Hope this helps. Mark
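P.S. If you don't want to use Chrome Poster, any HTTP client can push the sample geoevent to the input's REST endpoint. Here's a minimal sketch in Java; the endpoint URL below is a placeholder of my own, so substitute the actual URL shown for your "Receive XML on a REST Endpoint" input in GeoEvent Manager.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostSampleGeoEvent {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; copy the real receiver URL from the input's configuration.
        String endpoint = "https://my-geoevent-server:6143/geoevent/rest/receiver/xml-in";

        // The sample geoevent from above, with GE_DATE updated per test.
        String xml = "<root>"
                + "<GE_ID>1</GE_ID>"
                + "<GE_X>-92.47439554</GE_X>"
                + "<GE_Y>35.08497411</GE_Y>"
                + "<GE_ORDER>first</GE_ORDER>"
                + "<GE_DATE>12/29/16 15:26:00</GE_DATE>"
                + "</root>";

        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString(xml))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}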
Posted 01-03-2017 10:34 AM

POST
linda rae, OK, I generally follow. Forgive me if I keep asking seemingly basic or annoyingly detailed questions. Also, I don't mean to sound patronizing in any way, so if any question I ask sounds like it is, I certainly didn't mean it to be! The domain of HTTP/S, SSL, PKI, etc. gets complex very quickly, as I'm sure you know. It can sometimes be hard to be on the same page.

You mention that you updated the code for the Query Report Processor to utilize a certificate. But then you mention you face the "same issue" with the Push/JSON adapter, and that the Push/JSON adapter works with HTTP but you get PKIX errors if you use HTTPS.

HTTPS connections don't necessarily require client certificates. But they do always require a server certificate, and the destination server's certificate must be trusted: the CA that signed the server certificate has a certificate that must exist as a trusted root in the calling client's trust store. In most cases, that calling client is a browser, but in the case of a GeoEvent adapter, the trust store in question is the one (i.e., cacerts) within the Java environment that hosts the GeoEvent components.

So in the QRP case, it sounds like you're talking about client certificates, but in the OOB HTTP case, it sounds like you're just talking about HTTPS (which entails server certificates and trusts, but not necessarily client certificates). Can you please set me straight? If you are talking about needing an HTTP transport that supports HTTPS and needs a true client certificate, then I can give you one. I thought we had it out on GitHub, but I don't see it now. My hunch is that you're talking about HTTPS/SSL/server certs/trusts/etc. and not true X.509 PKI client certificates, but of course I could be wrong! I'm more than happy to keep this message volley going as long as it takes to get you to be successful. I work a lot with SSL, X.509, and certs/trusts, so I can definitely help.
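As an illustration of where those PKIX errors come from: a Java client only accepts an HTTPS server whose certificate chains to a CA present in the trust store it is using. Below is a minimal sketch, assuming a hypothetical truststore file that already contains the issuing CA's certificate; the path, password, and URL are placeholders, not anything GeoEvent-specific.

import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TrustedHttpsCall {
    public static void main(String[] args) throws Exception {
        // Hypothetical truststore containing the CA certificate that signed the server's certificate.
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream("/path/to/truststore.jks")) {
            trustStore.load(in, "changeit".toCharArray());
        }

        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null); // no client certificate; server trust only

        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://destination.example.com/endpoint").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        System.out.println(conn.getResponseCode()); // works only if the server cert chains to a trusted root
    }
}

If a true client certificate were also required, the first argument to ctx.init would carry KeyManagers loaded from a keystore holding that client certificate and its private key.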
Posted 12-30-2016 03:32 PM

POST
Hi linda rae, Can you please elaborate more on what you're asking? You mention modifying a processor by changing its source code; I follow. But then you ask about making outputs work. Outputs are different entities in GeoEvent than processors. Are you asking whether outputs can be modified by editing source code (just like processors can be)? Or are you asking how to make GeoEvent outputs work in general? Custom outputs? You also mention "web adapters". "Web Adaptor" is something very specific in ArcGIS Server terms. What did you mean, specifically, by "web adapter"? Mark
Posted 12-29-2016 03:00 PM

POST
Hi prsdgaur, In your screenshot, your file filter is simply * , but in the screenshot for the CAP connector, the file filter is .* . See below: [screenshot] Might be a place to start. Mark
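P.S. If that file filter field is treated as a regular expression (which the .* in the CAP connector suggests, though that's my assumption), then a bare * isn't a valid pattern at all, while .* matches any file name. A quick illustration of the difference, using Java's regex engine purely for demonstration:

import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class FileFilterPatterns {
    public static void main(String[] args) {
        // .* matches any file name.
        System.out.println(Pattern.matches(".*", "alerts_2016-12-29.xml")); // true

        try {
            Pattern.compile("*"); // a bare * is not a valid regular expression
        } catch (PatternSyntaxException e) {
            System.out.println("Invalid pattern: " + e.getDescription());
        }
    }
}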
Posted 12-29-2016 02:54 PM

POST
Hi Shawn (Shawn Boliner), I have what you need, along with some important information about how GeoEvent handles dates. I should have asked you early on what input you were using. For my testing, I used "Receive Text from a TCP Socket" and "Receive JSON on a REST Endpoint". I used the TCP/socket input because it is very common in real-world use cases, and I used JSON/REST because I wanted to see how the JSON adapter compared. If you are using a different input, like Poll/ArcGIS, I can look at that as well, but I did not include it for now.

Both the Text Adapter and the JSON Adapter exhibit behavior I was not expecting. If the value for a date is invalid (really invalid), the adapters may set the date to the current system time at the moment the geoevent is processed. This is potentially really bad, as GeoEvent is making up data. I had "bada bing" in a date field, and GeoEvent proceeded with no error and made the geoevent's date value the current system time. I've submitted this issue to the GeoEvent development team, and it has been added to the defect list. The desired behavior is for GeoEvent to produce a null date if a value cannot be parsed as a date. I especially think this is true if the user has specified a value for "Expected Date Format" in the parameters of the input. In that case, I'd expect the product to behave like: "OK, the user has instructed me to look for date values in this particular format. I'm going to reject any records with values that don't look like that." That's only my opinion, and I will be discussing it with the dev team when the holiday season passes.

So with all of this said... here is the GeoEvent Service that does what you want: [screenshot]

Let's break it down. I won't talk about the input (green), as it sounds like you already have one set up. I won't talk about the outputs (blue) other than to say that one is the outlet for within-range geoevents and the other is the outlet for out-of-range geoevents. The logic happens in the processors and filters (yellow).

The first processor is "cast date to long" and looks like this: [screenshot] This processor takes the value from a field named "THE_DATE" in the incoming geoevent, calculates a new field value based solely on the value of THE_DATE, and places it into a new field called DateAsLong. Because adding a new field alters the GeoEvent Definition (i.e., the schema), I have to specify a name for the new GeoEvent Definition; in this case, that name is "DC-With-Date-As-Long". So the geoevent data that flowed into this processor is still intact and is emitted from this processor, but with a NEW value appended: the original date value from THE_DATE as a Long data type. This is necessary because of the way we do date math in the next couple of steps. By the way, this date value I'm talking about is in epoch-time milliseconds.

The next processor is "calculate now" and looks like this: [screenshot] This processor simply uses the currentTime() function to generate a Long-format value for the system time at the moment the geoevent is being processed. Again, this value is newly generated, and we need somewhere to put it, so in this example it goes into a field called NowAsLong. Once again, a new GeoEvent Definition is created, called DC-with-now-as-long.

The next processor is "calc date diff" and looks like this: [screenshot] Recall that in the last two processors we calculated two new fields: one called "DateAsLong" and one called "NowAsLong". These are the two pieces of information we need to determine whether a geoevent's date is within an acceptable time range. However, we're not quite ready to filter out geoevents just yet. We need to calculate the difference between these two values, in milliseconds, and compare that to the number of milliseconds in the time range we care about. It's with that difference that we will filter out any geoevents that fall outside that range (or vice versa, depending on what is desired). The "calc date diff" processor (above) calculates the absolute value of the difference between DateAsLong and NowAsLong. By taking the absolute value, we're comparing a geoevent's time both before and after system time; the absolute value covers the "within a tolerance of" aspect of what we're trying to accomplish. The calculated value is inserted into a field called "datediff", and once again we have to make a new GeoEvent Definition, which will be called "DC-with-datediff".

Finally, I have two filters: one catches geoevents outside the desired date range, and the other catches geoevents within the desired date range. Note that depending on what you care about, you may only need one of these; I wanted to demonstrate the concept of splitting the stream and sending results down two different paths.

The filter that catches out-of-range geoevents looks like: [screenshot] 300,000 is the number of milliseconds in 5 minutes. So this filter simply looks for any geoevents with a "datediff" value greater than 300,000. Any geoevent that meets this criterion will pass through the filter and on to whatever downstream processing awaits. In my case, I simply output these geoevents to a file output called "file-5-minute-out-fail". These are the geoevents whose timestamps fell outside of a +/- 5 minute time window.

Conversely, the filter that catches geoevents within the time range of interest looks like this: [screenshot] One would think the logic here would simply be the exact opposite of the other filter (i.e., datediff < 300,000). This is generally true. But notice that I have an additional criterion of "datediff > 10". Recall the earlier topic of the Text and JSON Adapters sometimes creating a timestamp of system time when a date could not be parsed. In those cases, a bad date value would result in the Text/JSON Adapter creating a geoevent whose timestamp would pass the filter criterion of "datediff < 300,000". This is undoubtedly NOT what would be desired. So by adding the check of datediff > 10, I'm not letting geoevents through if their timestamp is within 10 milliseconds of system time. Obviously, this is based on the assumption that 10 milliseconds is an acceptable value to use for this secondary catch clause; you may be able to afford a larger value, or even a smaller one. The key here is to realize that this secondary criterion is present entirely to address what I see as a defect in the product at the current version.

In testing, I had good luck with the GeoEvent Service discussed above. I used the following very simple test data.

CSV:

ID,POINT_X,POINT_Y,STATUS,THE_DATE
8,-92.41646302,35.06684922,offline,12/28/16 18:39:12
3,-92.47439554,35.08497411,offline,1/10/16 1:99:x
3,-92.48372651,35.08497411,offline,12/28/16 18:39:35
5,-92.499569,35.0888676,online, bada bing

JSON:

[{ "ID":8, "POINT_X":-92.41646302, "POINT_Y":35.06684922, "STATUS":"offline", "THE_DATE":"12/28/16 18:45:00" },
{ "ID":3, "POINT_X":-92.47439554, "POINT_Y":35.08497411, "STATUS":"offline", "THE_DATE":"1/10/16 1:0:xx" },
{ "ID":3, "POINT_X":-92.48372651, "POINT_Y":35.08497411, "STATUS":"offline", "THE_DATE":"12/28/16 18:45:00" },
{ "ID":5, "POINT_X":-92.499569, "POINT_Y":35.0888676, "STATUS":"online", "THE_DATE":"bada bing" }]

Note that records 1 and 3 are the ones with properly formatted dates. Therefore, for each dataset, during testing I have to change the timestamps of records 1 and 3 to be within 5 minutes of my system time to achieve the expected result of two records succeeding and two failing.

I hope this helps; please let me know if any of it is confusing or unclear. Special thanks to rsunderman-esristaff for input and help from the product team side. Mark
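P.S. For reference, the whole within-tolerance decision that the three processors and two filters perform reduces to a few lines of epoch-millisecond math. Here's a minimal sketch of that logic in plain Java, just to make the arithmetic explicit; the 300,000 ms window and the 10 ms guard are the same values used above.

public class DateWindowCheck {
    private static final long WINDOW_MS = 300_000; // 5 minutes, as in the filters above
    private static final long GUARD_MS = 10;       // guard against adapter-fabricated "now" timestamps

    // Mirrors DateAsLong, NowAsLong, and datediff from the GeoEvent Service.
    static boolean isWithinRange(long eventDateMillis, long nowMillis) {
        long dateDiff = Math.abs(nowMillis - eventDateMillis);
        return dateDiff < WINDOW_MS && dateDiff > GUARD_MS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println(isWithinRange(now - 60_000, now));  // 1 minute old   -> true
        System.out.println(isWithinRange(now - 600_000, now)); // 10 minutes old -> false (out of range)
        System.out.println(isWithinRange(now, now));           // equals "now"   -> false (guard clause)
    }
}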
Posted 12-28-2016 08:48 PM

POST
Hi Mohammed, Two things to consider:

1. Is the target service/layer a Feature Service? For adding/updating features, you must use Feature Services; Map Services will not suffice. Feature Services are published by enabling the "Feature Access" capability when publishing a Map Service whose data source contains layers from an Enterprise Geodatabase (PostgreSQL, Oracle, SQL Server, etc.).
2. If you are, in fact, using a Feature Service, have you only recently added the Data Store that contains it under Site > Data Stores in GeoEvent Manager? It's common to see UI controls take a while to enable/update while GeoEvent is still reading a Data Store soon after it's added.

Mark
Posted 12-28-2016 06:43 AM

POST
If I understand what you're saying, then look at the Field Mapper processor. Its job is to map the fields of one GeoEvent Definition to the fields of another GeoEvent Definition as geoevents are processed.
Posted 12-28-2016 06:37 AM

POST
Then you just need to get that user's full distinguished name (DN) and use that for "user", and their password for "userPassword". And don't forget to try anonymous binding (blank for both); many LDAP servers allow it.
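If you want to sanity-check those two values outside of GeoEvent, here's a small JNDI sketch that binds to an LDAP server with a full DN and password, or anonymously when both are left blank. The host, port, DN, and password below are placeholders of my own; substitute your environment's values.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class LdapBindCheck {
    // Attempts an LDAP bind; pass empty strings for userDn and password to try an anonymous bind.
    static void tryBind(String url, String userDn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        if (userDn.isEmpty() && password.isEmpty()) {
            env.put(Context.SECURITY_AUTHENTICATION, "none");   // anonymous bind
        } else {
            env.put(Context.SECURITY_AUTHENTICATION, "simple"); // simple bind with full DN + password
            env.put(Context.SECURITY_PRINCIPAL, userDn);
            env.put(Context.SECURITY_CREDENTIALS, password);
        }
        try {
            new InitialDirContext(env).close();
            System.out.println("Bind succeeded");
        } catch (Exception e) {
            System.out.println("Bind failed: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        // Placeholder server and DN; replace with your LDAP details.
        tryBind("ldap://ldap.example.com:389", "cn=svc_geoevent,ou=users,dc=example,dc=com", "secret");
        tryBind("ldap://ldap.example.com:389", "", "");
    }
}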
Posted 12-28-2016 06:35 AM

POST
Hi Shawn, I've had this set up for a while, but I ran across a few unexpected things along the way. I'll post a reply soon with all the gory details. I just wanted to let you know there is activity on our side so you don't think we fell silent on you. Stay tuned... Mark
Posted 12-12-2016 08:46 PM

POST
Sorry, I should have explained better. "user" and "userPassword" are the credentials used to log into LDAP. However, many LDAP servers allow anonymous binding (logging in), so you can actually try leaving these two parameters blank and see if you can connect.
Posted 12-07-2016 06:24 AM

POST
Hi Allen, Are you still looking for help on this? Mark
Posted 12-05-2016 08:25 PM

POST
Hi Krishan, You mention you changed "ldapURLForUsers". Is that the only property you've changed? You also need to change "userPassword" and "user" at a minimum, and potentially other properties, depending on your LDAP. But start with "user" and "userPassword" first. Mark
Posted 12-05-2016 07:21 AM

POST
I actually didn't have to do the "+ 0" part; a regular cast into a new field of type Long did the trick. Shawn, I do not know your proficiency with GeoEvent, so if you want more step-by-step instructions, don't be afraid to ask! Mark
Posted 12-01-2016 12:20 PM