BLOG
At writing I'm tasked with delivering a demo theatre session at the 2026 Esri User Conference titled Cloud-Native Georelational Data Distribution. If you're coming to UC 2026 you can add it to your schedule. If you can't make it, the book of the play is what follows below. To encourage UC attendance, however, there is a chapter missing from the book which I'll show live - namely, how fast you can get data into your map or scene from AWS S3 while enforcing a data model for place-of-interest and building features - because you should always be thinking about an information product and not just data.

The plot has three threads to its data velocity story, dataset lifecycles I'm categorizing as:

- Infrequent periodic bulk replacement: base layer data that isn't time enabled
- Frequent append and upsert: operational layers that grow continuously, have life stages, and are time enabled
- Medium frequency insert, update and delete edits: operational layers that evolve and are time enabled

My goal is to show how data might be offered in each of the above velocity scenarios, in a cloud-native way, and brought into ArcGIS. The common thread is that the cloud-native format we'll use is GeoParquet in a public S3-API-compliant object store such as AWS S3. In all cases we'll use ArcPy and DuckDB in notebooks or script tools for data consumption, with the understanding that a data custodian would supply consumers with the tools needed for ArcGIS to consume the data. The combination of S3, GeoParquet and DuckDB provides a performant and functional implementation in ArcGIS. Let's dig in.

Periodic Bulk Replacement

My subject matter data is Overture Maps Foundation Division Area features, a global-scale dataset released monthly. The data model includes polygon geometry with a primary place name and a struct object with alternate names in many languages. The source data at writing are ten GeoParquet files in AWS S3, with a common schema. There is no logical partitioning. The information product I want is a geodatabase feature class, a related alternate-names table, plus a geocoding locator that understands all the names. Here is what the feature data looks like over Europe:

Division Areas

To use my information product, for example if I want to find Madrid in Spain using the Bihari language (the popup shows available names for Madrid), I give मैड्रिड as the address:

मैड्रिड finds Madrid

A notebook is appropriate for the information product, you can find it in the blog download, and really its only trick is using the appropriate glob path to the GeoParquet data - see the cell below, after a quick setup sketch.
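For context, the cell assumes a DuckDB connection with the httpfs and spatial extensions loaded before any remote GeoParquet reads or ST_ functions will work. This is a minimal sketch, not the notebook's exact code: the conn and release names match the cell that follows, while the bucket region and the hard-coded release tag are assumptions (the notebook reads the release identifier from a STAC catalog at run time).

import duckdb

# Connect in-memory and load the extensions the query below relies on:
# httpfs for S3 reads, spatial for geometry functions like ST_Area.
conn = duckdb.connect()
conn.sql("install httpfs")
conn.sql("load httpfs")
conn.sql("install spatial")
conn.sql("load spatial")

# Overture's bucket is public, so no credentials are needed; just point
# the S3 client at the bucket's region.
conn.sql("set s3_region = 'us-west-2'")

# Hard-coded here purely for illustration; at run time the notebook
# discovers the release identifier from a STAC catalog.
release = "2026-01-21.0"  # hypothetical release tag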
sql = f"""create or replace temp view division_area_view as select
id,
names.primary as primary_name,
class,
subtype,
region,
country,
version,
is_land,
is_territorial,
bbox.xmin as xmin,
bbox.ymin as ymin,
bbox.xmax as xmax,
bbox.ymax as ymax,
division_id,
geometry
from read_parquet('s3://overturemaps-us-west-2/release/{release}/theme=divisions/type=division_area/*.parquet',filename=false, hive_partitioning=1)
where {whereExp}
order by country, ST_Area(geometry) desc;"""
view = conn.sql(sql)

DuckDB can use glob paths for local or remote data, can make remote queries with the S3 API, and parallelizes those queries across all files in the glob path. Note that the path includes a release identifier, which is taken from a STAC catalog at run time. This is the central theme of this post - when and how to use GeoParquet and DuckDB with data that changes at intervals. In this case we don't know which records changed, so we can't query for them easily; we do a bulk extract instead. What if we do know which records changed?

Frequent Append and Upsert

My example "busy" data is 311 case data for San Francisco. The dataset is continuously updated, with thousands of cases a day opened, edited or closed, but never pruned, and it goes back to 2008. At writing the bulk download is 8 million features, seen here at 1:10,000 scale:

311 Cases in San Francisco

Many "event" datasets like this exist, so how might such a dataset be efficiently delivered in a cloud-native way? The answer relies on the data having timestamp fields for when cases are opened, updated and closed, with the field updated_datetime being refreshed on any status change. Because the San Francisco open data site (and therefore the system of record behind it) has an API that supports querying, changes can be requested based on updated_datetime. Here is the approach taken in the tools shared in the blog download (a minimal sketch of the scheduled step follows the list):

1. Do an initial bulk download to a baseline GeoParquet file.
2. On a frequent schedule:
   - Query the existing GeoParquet file(s) for the maximum updated_datetime value.
   - Query the 311 system of record for records more recent than the maximum (this is a fast query).
   - Write the query result to a new, additional GeoParquet file (all GeoParquet files must be at the same glob path).
3. On demand, query the set of GeoParquet files to extract data of interest. This query makes use of a simple but powerful SQL clause, so read on...
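To make step 2 concrete, here is a minimal sketch of the scheduled update in Python with DuckDB. The local glob path, file names and the cases_since() helper wrapping the 311 API are hypothetical; the real work is done by the GetUpdatesSF311 ETL tool described at the end of this post.

import datetime
import duckdb

conn = duckdb.connect()

# 1. Find the high-water mark across every GeoParquet file at the glob path.
latest = conn.sql(
    "select max(updated_datetime) from read_parquet('sf311/*.parquet')"
).fetchone()[0]

# 2. Ask the 311 system of record for anything newer. cases_since() is a
#    hypothetical helper wrapping the open data API; assume it returns a
#    pandas DataFrame, which DuckDB can query by its variable name.
new_cases = cases_since(latest)

# 3. Write the delta to a new file at the same glob path, so the next
#    read_parquet('sf311/*.parquet') sees the baseline plus all deltas.
stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
conn.sql(f"copy (select * from new_cases) to 'sf311/delta_{stamp}.parquet' (format parquet)")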
Here is an example query where I extract into a memory feature class all 311 cases to date for 2026 within a polygon - this took 4 seconds for about 8,500 features.

San Francisco Query

The query tool is a script tool, and its secret sauce is the QUALIFY clause. The GeoParquet files made from daily case data will have duplicates due to the case lifecycle (opened one day, edited another day, closed a later day), so we want only the most recent row per service_request_id value across all GeoParquet files - the QUALIFY clause does this for us when querying the parquet data:

conn.sql(f"""create or replace temp view sf311_view as select
service_request_id,requested_datetime,closed_date,updated_datetime,
status_description,status_notes,agency_responsible,service_name,
service_subtype,service_details,address,street,supervisor_district,
neighborhoods_sffind_boundaries,police_district,source,media_url,
bos_2012,data_as_of,data_loaded_at,ST_AsWKB(GEOM) as wkb
from read_parquet('{pqPath}',filename=false)
where {where}
and ST_Intersects(ST_GeomFromText('{wkt}'), GEOM)
qualify row_number() over (partition by service_request_id order by updated_datetime desc) = 1;""")
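With the view defined, rows can be streamed into a memory feature class with ArcPy - the WKB column produced by ST_AsWKB pairs naturally with the SHAPE@WKB cursor token. A minimal sketch, not the Generate311Points tool's actual code; the feature class name and the single-field subset are illustrative.

import arcpy

# A hypothetical memory feature class with one attribute field.
fc = arcpy.management.CreateFeatureclass(
    "memory", "sf311", "POINT", spatial_reference=arcpy.SpatialReference(4326)
)[0]
arcpy.management.AddField(fc, "service_request_id", "TEXT")

# Stream rows out of the DuckDB view and insert them; SHAPE@WKB accepts
# the well-known-binary bytes produced by ST_AsWKB in the view.
with arcpy.da.InsertCursor(fc, ["service_request_id", "SHAPE@WKB"]) as cursor:
    for row in conn.sql("select service_request_id, wkb from sf311_view").fetchall():
        cursor.insertRow(row)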
So now we have a simple way to deliver fast-changing data in a cloud-native way with fast queries. What if we have quite big data but no way to query for changes?

Continual Insert, Update and Delete Edits

Data subject to heavy branch versioned editing is high-value work for GIS, and while you can share the data state by giving access to its underlying feature services, adding a public mapping workload to the server will not be welcomed by the administrator. It turns out you can share the state of the data, with support for time travel, using a cloud-native approach. What enables this is the insert-only data model of branch versioning: the state of a feature at any moment is determined by GDB_FROM_DATE in combination with OBJECTID. The row with the latest GDB_FROM_DATE for each unique value of OBJECTID is the current state of a feature, and if you query for earlier GDB_FROM_DATE values you get time travel. The only trick is making GeoParquet files that represent edit moments, but then the QUALIFY clause comes to the rescue again to deliver the data you want (see the sketch below). Here are two views of parcel data over the same extent and using the same source GeoParquet files.

Current and earlier moments

The left view is the latest moment, the right view is an earlier moment; you can see a lot of parcel subdivision has taken place. The data source retains the full data history; edits result in new GeoParquet files being added to a folder or cloud object store. Here I have a baseline 1.46 GB GeoParquet file for the original state of the data, with a couple of small GeoParquet files containing edits over two weeks.

GeoParquet files containing full data history
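Here is a minimal sketch of that pattern; the local glob path and the moment literal are placeholders, and the ExtractCurrentParcels and ExtractEarlierParcels notebooks listed below add field lists, geometry handling and spatial filters.

# Current state: the newest row per OBJECTID wins across all files.
conn.sql("""
    select * from read_parquet('parcels/*.parquet')
    qualify row_number() over (partition by OBJECTID order by GDB_FROM_DATE desc) = 1;
""")

# Time travel: ignore rows newer than the moment of interest, then
# take the newest surviving row per OBJECTID.
conn.sql("""
    select * from read_parquet('parcels/*.parquet')
    where GDB_FROM_DATE <= timestamp '2026-01-15 00:00:00'
    qualify row_number() over (partition by OBJECTID order by GDB_FROM_DATE desc) = 1;
""")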
Here is a manifest of the blog download file CloudNativeDataDistribution.zip:

- ImportCurrentDivisionAreas.ipynb notebook
  - Downloads Overture Division Area features to your project home geodatabase
  - Creates multi-lingual names for features in a related table
  - Creates or refreshes a locator using Division Area features as reference data
- GetBaselineSF311 spatial ETL tool
  - Extracts the full 311 dataset for San Francisco to GeoParquet
  - Requires ArcGIS Data Interoperability
  - Requires an account and app token
- GetUpdatesSF311 spatial ETL tool
  - Extracts 311 case data more recent than any in an existing GeoParquet file
  - Creates a new GeoParquet file
  - Requires ArcGIS Data Interoperability
  - Requires an account and app token
- Generate311Points script tool
  - Demonstrates using the GeoParquet files to make a memory feature class
- ExtractCurrentParcels.ipynb notebook
  - Demonstrates extracting the latest data state from GeoParquet files in a branch versioned data model
- ExtractEarlierParcels.ipynb notebook
  - Demonstrates extracting an earlier data state from GeoParquet files in a branch versioned data model

Not included, but available on request, are the tools used to build GeoParquet files for the parcel data. Remember, where my sample tools use local file storage for GeoParquet, in production you would use a cloud object store like AWS S3.

Now, while I hope to see you at the 2026 User Conference in San Diego, if you need more encouragement then here is a sneak preview of a demo bringing Overture Building theme data into a scene. See if you can find a couple of easter egg script tools in the blog download 😉.

Overture Buildings

Wednesday
POST
Yes, the ArcGIS Pro 3.7 release of ArcGIS Data Interoperability will use the FME 2026.1 engine.
3 weeks ago
BLOG
There are many ETL jobs you might want to trigger from your map, and the pattern we'll explore in this blog will help you get there. Here is one use case: running an ETL web tool that accepts area features from a map as an input. The web tool happens to build a mobile geodatabase which is returned to my Pro session or Experience Builder analysis widget as a local file, but you might ETL data from anywhere to anywhere, in any format. The data doesn't have to be in your map, or even in your ArcGIS Pro project. Note that the web tool as shown here will not return a mobile geodatabase when run in Map Viewer; it would need to be zipped first. This is simple to do with the mobile geodatabase writer - just write with the file name extension .geodatabase.zip and a ZIP file will be created automatically.

Running an ETL web tool in Pro

The blog download has my sample toolbox, so let's walk through the steps. There is a model tool and an ETL tool in the blog download's toolbox; the model wraps the ETL tool. I used ArcGIS Pro 3.6 and of course ArcGIS Data Interoperability for Pro 3.6.

ModelBuilder wrapping the ETL

Things start with a FeatureSet input parameter, in my case with polygon geometry intended to be the area(s) of interest containing any number of US cities. At run time this lets you pick from a map layer, browse for data or create your own features from scratch. The FeatureSet is then written as EsriJSON data to the path %scratchFolder%\esri.json. If you're not familiar with ModelBuilder inline variable substitution then read about it here. When the tool runs locally this path will be in the project scratch folder, and when it's a web tool, in the job scratch folder.
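If you'd rather script that first step than model it, the equivalent ArcPy is nearly a one-liner. A minimal sketch, assuming a feature set already in hand - the layer name is illustrative:

import os
import arcpy

# A FeatureSet captures features from a layer, a path, or interactive input.
fs = arcpy.FeatureSet("MyAreasOfInterest")  # hypothetical polygon layer

# Write it as EsriJSON to the same scratch path the model uses; ArcPy
# resolves scratchFolder to the project or web tool job scratch folder.
out_json = os.path.join(arcpy.env.scratchFolder, "esri.json")
arcpy.conversion.FeaturesToJSON(fs, out_json)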
Now for a little time travel. Imagine you're building the above model and all you have is the FeatureSet input and the Features To JSON tool in the model. Save and run it and you'll have a file esri.json in your scratch folder. Now build your ETL tool (embedded, in the same toolbox as the model) that does what you want given the EsriJSON input. Here is mine.

ETL that reads EsriJSON and does stuff

Like the annotation says, an EsriJSON reader (NB: with single path input property) brings in my map input, the features get aggregated (I want one feature, multiparts are OK) then projected however I need for my downstream processing. In my case the resulting feature is used as an initiator and spatial filter in a FeatureReader, which extracts some data from Living Atlas and writes it to a mobile geodatabase with dynamic schema. There is a trick though in the pathing of the data. Here I am editing user parameters in the ETL tool, and the top one is a scripted parameter SCRATCHFOLDER that gets whatever ArcPy is using - either locally in Pro or in a job folder in a web tool. Scripted parameters are evaluated before any data processing happens.

SCRATCHFOLDER scripted parameter

The next parameter is the source EsriJSON file path, which will be supplied by the model, but to make sure I set the default path to use the SCRATCHFOLDER scripted parameter. Note that you have to save the parameter state after adding SCRATCHFOLDER and reopen the dialog to use the value in other parameters.

EsriJSON input parameter

Now the trick: the output mobile geodatabase is written with overwrite permission to the scratch folder.

Mobile geodatabase output to scratch folder

Now back to the model. The ETL tool is added and the EsriJSON file connected as its input. The ETL tool output is written to %scratchFolder%\Extract.geodatabase.

Back to the future

The last step is to add a Calculate Value model tool that copies the output path to a new File type parameter. This is necessary because the output has to be Derived for a web tool to return data. Make sure the ETL tool output isn't set to intermediate in ModelBuilder or you'll get no output! Check out the Calculate Value expression; it is necessary to protect the path from interpretation as having escape characters by making sure it's a raw string (the expression is sketched after the steps below).

Calculate Value expression

After some local test runs I had a result I could share as a web tool and then run. When sharing the web tool I set the message level to Info so I can watch the action. I used ArcGIS Enterprise 12.0, which is compatible with ArcGIS Pro 3.6. Make sure Data Interoperability is installed on your server, and any packages and credentials are installed on the server by the arcgis service owner user. It is good practice to open fmeworkbench.exe as the arcgis service owner user (in the Data Interoperability install) on the server and test the readers and writers you will be using.

Web tool results view

In the message stream above you can see the output written to a job scratch folder, and it is also downloaded locally to my Pro session:

Data is returned

I don't have any file associations for .geodatabase files but I can open the link in Data Inspector:

Mobile geodatabase in Data Inspector

Now I have an ETL web tool I can share with my colleagues! Speaking of colleagues, I am indebted to @SashaLockamy for her help putting this together. For clarity, the steps to building my web tool were:

1. Make a model with a FeatureSet input parameter that writes EsriJSON to %scratchFolder%\esri.json
2. Save and run the model to create the esri.json file
3. Make an embedded ETL tool in the model's toolbox that reads EsriJSON and does your ETL
   - If returning data it must be a file data type, and...
   - Script the first parameter to return arcpy.env.scratchFolder
   - Write the desired file output data to that folder
4. Test the ETL tool using your esri.json file
5. Add the ETL tool to the model
6. Connect the EsriJSON output as the ETL tool input
7. Add a Calculate Value tool to cast the ETL tool output to a File, Derived value (the expression is sketched below)
8. Make sure the ETL tool output is not intermediate
9. Save and run the model from the toolbox, not the model diagram
10. Share the result as a web tool (or geoprocessing service if using a standalone server)
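As mentioned at step 7, the Calculate Value expression just hands the output path through as a raw string so backslashes in the substituted path aren't read as escape sequences. A minimal sketch of the expression - my output path is shown, yours will reflect your ETL tool's output:

# Calculate Value "Expression" entry, with Data Type set to File.
# %scratchFolder% is ModelBuilder inline variable substitution; the
# r-prefix keeps sequences like \E in the path literal.
r"%scratchFolder%\Extract.geodatabase"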
Please do comment in the post with your experiences.

04-02-2026 01:17 PM
POST
@Oli82uk posts like this are welcome! I'll amplify the post on LinkedIn. You caught my eye with the mention of embeddings - exactly what I'm looking at right now, except I'm not using ArcGIS Data Interoperability, I'm going Pythonic, so feel free to take a run using Data Interop and we can compare notes!
03-23-2026 05:24 AM
BLOG
At writing it's conference week for the Esri Developer & Technology Summit 2026 and I'm just back from presenting my technical session Exploring Tools and Patterns for Data Migration. I like to go demo-heavy at event presentations; it lets me generate content I can share. Here's a snapshot of some of the live action: ETL via script tool of Overture Maps Foundation Places theme points of interest brought into ArcGIS Pro's memory workspace as a base point layer (the green points) and related (with 1:M cardinality) place categories in a table view as my output information product - 64 million points queried from GeoParquet files in AWS S3, with map interaction to define an area of interest.

ETL with map input

Here I identify some of the points; see the alternate categories accessed by map relate.

Place Categories

I'm getting ahead of myself, so let's back up a bit. Ideally I could share a project package with attached documents, but the relevant geoprocessing tool errored for me on a validation issue, so I'm just going to share tools, documents and minimal data in the post attachments. Here is the relevant content, which I'll talk to in the order shown in the Catalog pane.

Relevant Content

The toolset Using Data Interoperability contains two Spatial ETL tools that embody a two-stage process for maintaining a hosted feature service information product in ArcGIS Online. First, create geodatabase objects with the desired data model, relationships and schema, add them to a map, define symbology, popup behavior and metadata, and publish the layer as a hosted feature service; second, update this information product on demand. Not demonstrated on the day, but something I know is in demand, is the ModelBuilder model FeatureSetInput - namely, how to build map interaction into an ArcGIS Data Interoperability Spatial ETL tool. Here it is:

FeatureSetInput model

The model isn't shown as validated (it validates at run time) but the processing works like this. A model variable of type FeatureSet is an input parameter, filtered to be polygon. This means at run time you can pick a polygon layer or create new area-of-interest features on your map. The core tool Features To JSON is then used to write an EsriJSON file to the project scratch folder as intermediate data. This file's path is then an input parameter expected by the Spatial ETL tool DoSomethingWithEsriJSON, which takes it from there to do whatever you want. This behavior is frequently wanted for web tools. Here is a view of the tool log as it runs, showing the input map polygon has been read into the workspace.

FeatureSetInput runtime details

Now for the Using ModelBuilder toolset. The model EVPopulationGeocoded shows reading a local CSV file with the Export Table core tool to impose a desired schema, plus how to geocode unique ZIP code values to supply geometry to all rows and join the geometry onto the base table. The desired schema is tuned to fit the data; the script tool ReportMaximumTextDataWidth helps with deep inspection of field width requirements, but on the day I also showed using Notepad++ and Pro to help with problem discovery, such as nulls encoded as zero, empty strings, and empty geometry values.

EVPopulationGeocoded

Next up was URL2EVPopulation, which showed how to give ModelBuilder a boost with a little Python to retrieve the source CSV data from a URL where it lives in an open data catalog - this was a popular feature (a minimal sketch of the idea follows the screenshot).

URL2EVPopulation
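The boost amounts to a few lines of standard-library Python run ahead of the rest of the model. A minimal sketch with a placeholder catalog URL - the real model's URL and parameter wiring differ:

import os
import urllib.request
import arcpy

# Hypothetical open data catalog CSV endpoint.
url = "https://example.gov/api/views/ev_population/rows.csv"

# Download to the scratch folder so downstream model tools can read it.
local_csv = os.path.join(arcpy.env.scratchFolder, "ev_population.csv")
urllib.request.urlretrieve(url, local_csv)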
The submodel EVPopulation showed an alternate approach to geometry creation, namely ArcPy's ability to convert OGC Well Known Text into Esri geometry - the core tool Convert Coordinate Geometry doesn't support WKT (a minimal sketch of the ArcPy route follows).

EVPopulation
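For the curious, the ArcPy conversion is a single call. This sketch assumes a WKT string already parsed from the source data; the literal below is made up:

import arcpy

# WKT from the source data; arcpy.FromWKT returns an Esri geometry object
# that can be written through a cursor's SHAPE@ token.
wkt = "POINT (-122.6762 45.5234)"  # hypothetical value
geom = arcpy.FromWKT(wkt, arcpy.SpatialReference(4326))
print(geom.type)  # point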
The Using Script Tools toolset has the GeneratePlaces script tool that makes the Places layer and Place Categories related table view shown above. This shows how to add map interaction to your ETL tools, and introduces DuckDB as both a SQL-aware database and a powerful integration client for local or remote data. You can throw SQL queries at any data type DuckDB supports, right down to CSV.

That takes me to notebooks as an integration option. I demonstrated ImportPlacesByDivisionArea, which uses a SQL where clause defining an area of interest by naming the Overture Divisions features that make up the area. To help discover what the area names need to be, the notebook ImportCurrentDivisionAreas is included in the post downloads; it downloads all Overture Divisions areas worldwide. ImportPlacesByDivisionArea goes further than the script tool GeneratePlaces in terms of making an information product. It adds metadata to the output, but more importantly it maintains a geocoding locator "Places_Locator" of POI type using the latest data. A copy of the locator is attached - see how it supports category filtering by entering "hotel" in the Locate pane in Pro - you'll see all hotels in Palm Springs are candidates! Coders out there, don't forget that locators at a path can be used programmatically by arcpy.geocoding to do things like repairing null geometry in script tools; you don't need to stand up a geocode service for row-based processing.

Not shown on the day but in the downloads is a toolbox QuickImportToMemory containing the tool QuickImportToMemory, which does what it suggests using ArcGIS Data Interoperability - it lets you do data inspection in Pro from any supported data source. You can learn about your data before committing to an ETL approach.

All tools were built with ArcGIS Pro 3.6. Do comment in the post with your observations.

03-12-2026 12:22 PM
POST
Ed, make sure to use the Esri Geodatabase (Personal Geodb) format in Quick Import, not Microsoft Access. See also: https://docs.safe.com/fme/2025.1/html/FME-Form-Documentation/FME-ReadersWriters/personal_geodatabase/personal_geodatabase.htm
02-19-2026 05:45 AM
POST
The Data Pipelines team at Esri should be able to help with a recommendation, @BethanyScott. You might be hitting an issue found elsewhere, namely that when ArcGIS creates a circle it is modeled as an ellipse with constant start and end radii and the same start and end axes, and hosted feature services are fussy about which clients can edit true curves.
02-03-2026 12:28 PM
POST
See this in ArcGIS Online: https://developers.arcgis.com/rest/geocode/find-address-candidates/#search-for-a-street-between-two-cross-streets

Or in your own Street Address role locators for the US, made with Pro 3.6+: https://pro.arcgis.com/en/pro-app/latest/help/data/geocoding/introduction-to-locator-roles.htm#ESRI_SECTION2_F3D2807694AA4A588DF21E9884DB4159
02-03-2026 06:13 AM
BLOG
Here is my subject matter data: the State of New Jersey's parcel data (~3.5 million features, 4.4 GB, with 45 fields), with thanks to the NJ tech team for their assistance putting this sample together. I'll get to why one parcel is highlighted shortly... First, with reference to the post's title, this discussion isn't tied to ArcGIS Online; you may be working with ArcGIS Enterprise and implement the same workflow, so stick with me.

The challenge of maintaining a hosted feature layer of big data is common. The problem we're trying to solve here is applying a data update to a live service when the update transaction is very large - in our case tens of thousands of parcel edits are written several times a year, the features may be point-rich, and the schema is wide. If you overwrite the service using core ArcGIS Pro's user interface it consumes a session for a long time, so let's get some more efficient automation going using ArcGIS Data Interoperability and write only the delta transaction. To be specific, the changeset writing mode recommended for larger transactions is upsert. This requires that the target feature service have a key field with a unique constraint, which the subject matter data has. Upserts are sent in 10 MB chunks rather than sets of features at the maximum row count supported by the service (2000 for polygon data).

Maintaining hosted feature services by applying a delta transaction as edits is a well-trodden path, and change detection in ArcGIS Data Interoperability is ideal for this. However, there are some things to note here:

- The incoming data for the refresh is in file geodatabase format
- The target workspace is a hosted feature service
- The datasets are not co-located

This implies a few issues:

- Streaming the hosted feature layer data locally to calculate the changeset would take a long time
- Geometry, date and numeric fields need their precision in agreement for correct change detection
- Subtle value differences require careful handling

Precision agreement issues can be solved with ETL tool configuration, but to avoid the issue entirely the approach we'll take is to download the target feature service as its own file geodatabase, so storage-dependent precision differences are not a factor. Then the changeset can be easily calculated locally between two file geodatabase feature classes and the delta written efficiently.

This is where the highlighted parcel in the map comes in. Parcels may have complex geometry, boundaries may have multiple segments, and segments may be true curves. While storing true curves in hosted feature layers is supported, editing them is constrained. See here some relevant properties of my target feature service:

{"allowGeometryUpdates" : true,
"supportsTrueCurve" : true,
"supportedCurveTypes" : ["esriGeometryCircularArc"],
"allowTrueCurvesUpdates" : true,
"onlyAllowTrueCurveUpdatesByTrueCurveClients" : true} What you can take from this is that while some true curve editing is theoretically possible, any curves of type esriGeometryEllipticArc are not supported for editing, and guess what, a circular doughnut hole in a parcel has ellipse geometry. Also, our ETL tool client is not known as a true curve client. If you are using a release of ArcGIS Data Interoperability that does not support the Esri ArcGIS Feature Service writer you will need to use the feature service admin tools to set onlyAllowTrueCurveUpdatesByTrueCurveClients to false. A simple way to manage true curves is to stroke them into polylines when doing geometry comparisons or when writing them to the feature service by using the ArcStroker transformer, with control over maximum deviation from the true curve. This replaces any arc segments with polylines, temporarily for geometry comparison and permanently for any parcel written that has been updated or is new. Here are a couple of views of the workspace that does the whole job, first the Main view... ...then the pale green looping custom transformer that waits for a file geodatabase export to complete... The file geodatabase export takes a variable length of time, depending on how busy ArcGIS Online is. I ran the tool at a scheduled time that worked out to 3AM UTC, it took 23 minutes for the export. I have seen 10 minutes, or an hour, but I have also seen failures when testing at busy times for ArcGIS Online. Scheduling the tool to run outside busy times in North America and Europe is recommended, so I used 3AM UTC. To apply "defensive coding", just ahead of, and again inside the looping transformer which waits for export job completion, there are a couple of Emailer transformers that send job submission details and job failure details if that occurs. Esri support will need both job and failure information to troubleshoot your service behavior on error, please do open a support call if you experience any problems. Here is an example job details email body: Feature service: https://services.arcgis.com/FQD0rKU8X5sAQfh8/arcgis/rest/services/NJParcels/FeatureServer Feature service export job: 9f77069e-e212-46bb-8696-b7ce4f54c882::FQD0rKU8X5sAQfh8 of service item: 4480efce4518473096613597d461e55f to export item: 1fc913da28174608b9a65859bcc8b9b0 started at local time: 2026-01-26T07:03:23.5436242-08:00 Type is file and Size is 4823392256 Here is what an export failure message looks like (the translation will terminate): Feature service export job has failed with status failed Failure was at local time 2026-01-26T09:05:01.0944432-08:00 JobId was 9f77069e-e212-46bb-8696-b7ce4f54c882::FQD0rKU8X5sAQfh8 Status request response was {"status": "failed","statusMessage": "failed","itemId": "4480efce4518473096613597d461e55f"} The workspace is in the blog download, you will need to edit it for your ArcGIS Online credentials, feature service details and Emailer transformer parameters. Please do comment in this board with your experiences and questions!
A simple way to manage true curves is to stroke them into polylines when doing geometry comparisons or when writing them to the feature service, using the ArcStroker transformer with control over maximum deviation from the true curve. This replaces any arc segments with polylines - temporarily for geometry comparison, and permanently for any written parcel that has been updated or is new.

Here are a couple of views of the workspace that does the whole job, first the Main view...

...then the pale green looping custom transformer that waits for a file geodatabase export to complete...

The file geodatabase export takes a variable length of time, depending on how busy ArcGIS Online is. Scheduling the tool to run outside busy times in North America and Europe is recommended, so I scheduled a time that worked out to 3AM UTC; that run took 23 minutes for the export. I have seen 10 minutes, or an hour, and I have also seen failures when testing at busy times for ArcGIS Online.

To apply some defensive coding, just ahead of, and again inside, the looping transformer which waits for export job completion, there are a couple of Emailer transformers that send job submission details, and job failure details if failure occurs. Esri support will need both job and failure information to troubleshoot your service behavior on error, so please do open a support call if you experience any problems. Here is an example job details email body:

Feature service: https://services.arcgis.com/FQD0rKU8X5sAQfh8/arcgis/rest/services/NJParcels/FeatureServer
Feature service export job: 9f77069e-e212-46bb-8696-b7ce4f54c882::FQD0rKU8X5sAQfh8 of service item: 4480efce4518473096613597d461e55f to export item: 1fc913da28174608b9a65859bcc8b9b0 started at local time: 2026-01-26T07:03:23.5436242-08:00
Type is file and Size is 4823392256

Here is what an export failure message looks like (the translation will terminate):

Feature service export job has failed with status failed
Failure was at local time 2026-01-26T09:05:01.0944432-08:00
JobId was 9f77069e-e212-46bb-8696-b7ce4f54c882::FQD0rKU8X5sAQfh8
Status request response was {"status": "failed","statusMessage": "failed","itemId": "4480efce4518473096613597d461e55f"}

The workspace is in the blog download; you will need to edit it for your ArcGIS Online credentials, feature service details and Emailer transformer parameters. Please do comment in this board with your experiences and questions!

02-02-2026 09:33 AM
POST
Hi Brittany, you can generate tooltips by wrapping your ETL tool in a model. Create model parameters from the ETL tool parameters and set their description properties, which become the tooltips. An example is attached, using Pro 3.6.
01-13-2026 08:34 AM
IDEA
@BerendVeldkamp I have alerted the appropriate Pro product manager to the option of implementing natural sort order; this may become an idea, or someone may implement an add-in, so stand by. I'm used to the function in the Data Interoperability extension, but that isn't useful in Pro interfaces like tables.
01-13-2026 06:56 AM
IDEA
Would an option to apply natural sort order work for you? https://en.wikipedia.org/wiki/Natural_sort_order
01-13-2026 05:22 AM
POST
Tom, please open a support call and we'll look into this. It seems like it should work.
01-09-2026 05:32 AM
POST
@ChrisBerryman ArcGIS Data Interoperability extension for Pro 3.3 or higher can write to Databricks; the format is supported for read and write. Support is a user install from FME Hub, which you do by simply adding a reader or writer to an ETL workspace.
12-12-2025 05:43 AM