BLOG
@MarkCederholm very interesting shootout. In the past we analysed the memory consumption of the three runtimes; in our daily work, especially in disconnected environments, we constantly face memory-intensive workflows. I wrote a Medium post ("GEOINT App: Using web maps as the spatial ground truth") about a simple use case last year. The sample code is hosted on GitHub (GEOINT Monitor, poc-viewer branch).

For performance-critical workflows we usually depend on low-level libraries which integrate perfectly into the Qt (GPL/LGPL) ecosystem or are specified by system integrators. A recurring requirement is the use of the GEOS C/C++ (LGPL) libraries, especially in combination with PostGIS. On my machine I could easily outperform the Qt Opt1 scenario (Avg area = 21715 / Avg pts = 12 / Seconds elapsed: ~24) by using a thin wrapper around GEOS::GeometryFactory::createMultiPoint and GEOS::Geometry::convexHull, ::getNumPoints, ::getArea (Avg area = 21715 / Avg pts = 12 / Seconds elapsed: ~5). We should keep in mind that these libraries were designed for low-level geometry operations (the multipoint implementation is essentially a std::vector of coordinates stored as structs of two/three doubles) and avoid any copy construction/copy assignment in the first place.

Because of the relatively low memory consumption and the easy integration of various low-level libraries, the ArcGIS Runtime for Qt is the way to go for most of our "Runtime Desktop" use cases. We should spend more time on stress and performance testing. Thanks for sharing, and best regards from Germany.
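For illustration, a minimal sketch of what such a thin wrapper can look like. It uses the stable GEOS C API rather than the C++ classes named above, and all benchmark parameters (iteration count, points per multipoint, coordinate range) are placeholder assumptions, not the original shootout settings:

#include <geos_c.h>

#include <cstdarg>
#include <cstdio>
#include <random>
#include <vector>

// Route GEOS notices/errors to stderr.
static void geosMessageHandler(const char* fmt, ...) {
    va_list args;
    va_start(args, fmt);
    std::vfprintf(stderr, fmt, args);
    std::fputc('\n', stderr);
    va_end(args);
}

int main() {
    initGEOS(geosMessageHandler, geosMessageHandler);

    // Placeholder benchmark parameters -- NOT the original shootout settings.
    const int kRuns = 100000;
    const int kPointsPerRun = 20;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coord(0.0, 1000.0);

    double areaSum = 0.0;
    double hullPtsSum = 0.0;

    for (int run = 0; run < kRuns; ++run) {
        // One flat coordinate sequence per point; ownership passes straight
        // to GEOS, so no copy construction/assignment of geometries occurs.
        std::vector<GEOSGeometry*> points(kPointsPerRun);
        for (int i = 0; i < kPointsPerRun; ++i) {
            GEOSCoordSequence* seq = GEOSCoordSeq_create(1, 2);
            GEOSCoordSeq_setX(seq, 0, coord(rng));
            GEOSCoordSeq_setY(seq, 0, coord(rng));
            points[i] = GEOSGeom_createPoint(seq);  // takes ownership of seq
        }
        GEOSGeometry* multi = GEOSGeom_createCollection(
            GEOS_MULTIPOINT, points.data(),
            static_cast<unsigned int>(kPointsPerRun));  // takes ownership

        GEOSGeometry* hull = GEOSConvexHull(multi);
        double area = 0.0;
        GEOSArea(hull, &area);
        areaSum += area;
        hullPtsSum += GEOSGetNumCoordinates(hull);

        GEOSGeom_destroy(hull);
        GEOSGeom_destroy(multi);
    }

    std::printf("Avg area = %.0f / Avg pts = %.0f\n",
                areaSum / kRuns, hullPtsSum / kRuns);
    finishGEOS();
    return 0;
}

The point is the allocation discipline: one coordinate sequence per point, ownership handed straight to GEOS, and no intermediate geometry copies anywhere in the hot loop.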
Posted 10-20-2021 05:27 AM

IDEA
@RexHansen thanks for your update; sounds promising. I created a "milestone" on our "location intelligence engineering board" to keep track of it: Location Intelligence Engineering MS-Q1 2022
Posted 09-30-2021 07:40 AM

IDEA
@PhilipMeeks_CS we would prefer to stay with ArcGIS Runtime. We want to reuse our existing core libraries for mobile, desktop and web service development. Also, some of our system integrators ran into a bunch of serious licensing issues when combining various Open Source libraries in the past.
Posted 09-29-2021 08:10 AM

IDEA
@RexHansen you mentioned that you are considering using some of the runtime libraries in a service/server context (a geospatial engine for spatial analytics). Is this still under consideration? We are facing a lot of integration issues when using the default author-and-publish approach with GP tools and cascading web services (NTLM authentication with SSO using custom GP services). All our use cases connect to custom data stores through authenticated/authorized web services and execute predefined geospatial workflows (buffering, spatial join, spatial aggregation and so on) running in ArcGIS Enterprise. We are not able to reuse/delegate the authentication from our GP service to the custom data store REST/GraphQL/gRPC endpoint; right now we have to use predefined "technical users"/"system accounts" for accessing these data stores. If we could use a geospatial engine running in our own web services, we could authenticate directly against the custom data store endpoint and do the geospatial processing without authenticating against ArcGIS Enterprise and transferring the data back and forth to its geometry services.
Posted 09-29-2021 07:50 AM

POST
Are there any best practices for implementing NTLM authentication with a GP service, so that the valid user token obtained via single sign-on (SSO) with ArcGIS Enterprise is used for requesting another web service consumed by the Python tool published as the GP service?

Use case: we are querying a custom graph database using NTLM authentication with Python (the requests_ntlm module) and transforming the entities and relations on the fly into features with relationships in a geodatabase. SSO works like a charm with ArcGIS Pro. But when we publish the GP tool as a GP service, the service runs under the service account of the federated ArcGIS Server. The service needs to use the SSO token of the user who issued the GP service call, e.g. from any ArcGIS client. Is there any workflow for reusing the SSO token from Python, or at least for getting the info of the user who issued the GP service call? Any help would be welcome.
Posted 08-20-2021 02:46 AM

POST
Very nice. We usually just upgrade existing projects and were not aware of this great enhancement. I just started to get my hands dirty and saw some linker warnings on Windows about EsriRuntimeQtd.pdb not being linked with EsriRuntimeQtd.lib(LicenseResult.obj) and the other related *.obj files. But anyway, this CMake support helps us a lot and should make it to the official website; otherwise we would not be able to deliver our DevOps release pipeline, which integrates our CMake-based vcpkg packages.
Posted 08-11-2021 05:49 AM

POST
We are integrating a bunch of C++ utility libraries into our ArcGIS Runtime for Qt Quick based sample viewer. Usually these C++ utility libraries are compiled using CMake, and The Qt Company offers some samples using Qt Quick with CMake:

cmake_minimum_required(VERSION 3.14)

project(QPyApp LANGUAGES CXX)

set(CMAKE_INCLUDE_CURRENT_DIR ON)
set(CMAKE_AUTOUIC ON)
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# QtCreator supports the following variables for Android, which are identical
# to qmake Android variables.
# Check https://doc.qt.io/qt/deployment-android.html for more information.
# They need to be set before the find_package(...) calls below.
#if(ANDROID)
#    set(ANDROID_PACKAGE_SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/android")
#    if (ANDROID_ABI STREQUAL "armeabi-v7a")
#        set(ANDROID_EXTRA_LIBS
#            ${CMAKE_CURRENT_SOURCE_DIR}/path/to/libcrypto.so
#            ${CMAKE_CURRENT_SOURCE_DIR}/path/to/libssl.so)
#    endif()
#endif()

find_package(QT NAMES Qt6 Qt5 COMPONENTS Core Quick REQUIRED)
find_package(Qt${QT_VERSION_MAJOR} COMPONENTS Core Quick REQUIRED)

set(PROJECT_SOURCES
    main.cpp
    qml.qrc
)

if(${QT_VERSION_MAJOR} GREATER_EQUAL 6)
    qt_add_executable(QPyApp
        ${PROJECT_SOURCES}
    )
else()
    if(ANDROID)
        add_library(QPyApp SHARED
            ${PROJECT_SOURCES}
        )
    else()
        add_executable(QPyApp
            ${PROJECT_SOURCES}
        )
    endif()
endif()

target_compile_definitions(QPyApp
    PRIVATE $<$<OR:$<CONFIG:Debug>,$<CONFIG:RelWithDebInfo>>:QT_QML_DEBUG>)
target_link_libraries(QPyApp
    PRIVATE Qt${QT_VERSION_MAJOR}::Core Qt${QT_VERSION_MAJOR}::Quick)

Would you be so kind as to provide a sample using CMake and share it via arcgis-runtime-samples-qt?
Posted 08-10-2021 08:13 AM

POST
We finally figured it out; we also have to use singleLayerSelectedCondition instead of singleFeatureLayerSelectedCondition for enabling it:

<conditions>
  <insertCondition id="esri_mapping_single_layer_selected">
    <and>
      <state id="esri_mapping_singleLayerSelectedCondition"/>
    </and>
  </insertCondition>
  <insertCondition id="esri_mapping_single_standalonetable_selected">
    <and>
      <state id="esri_mapping_singleStandaloneTableSelectedCondition"/>
    </and>
  </insertCondition>
</conditions>

<updateModule refID="esri_mapping">
  <menus>
    <updateMenu refID="esri_mapping_layerContextMenu">
      <insertButton refID="SDA_Pro_Module_Create_Sda_From_FeatureTable_Button" separator="false" placeWith="esri_editing_table_openTablePaneButton" insert="after" />
    </updateMenu>
    <updateMenu refID="esri_mapping_unregisteredLayerContextMenu">
      <insertButton refID="SDA_Pro_Module_Create_Sda_From_FeatureTable_Button" separator="false" placeWith="esri_editing_table_openTablePaneButton" insert="after" />
    </updateMenu>
    <updateMenu refID="esri_mapping_standaloneTableContextMenu">
      <insertButton refID="SDA_Pro_Module_Create_Sda_From_StandaloneTable_Button" separator="false" placeWith="esri_editing_table_openStandaloneTablePaneButton" insert="after" />
    </updateMenu>
    <updateMenu refID="esri_mapping_unregisteredStandaloneTableContextMenu">
      <insertButton refID="SDA_Pro_Module_Create_Sda_From_StandaloneTable_Button" separator="false" placeWith="esri_editing_table_openStandaloneTablePaneButton" insert="after" />
    </updateMenu>
  </menus>
</updateModule>

Using the unregistered context menus did the trick. See Solved: Buttons built by ArcGIS Pro SDK cannot be added to... - Esri Community
Posted 07-08-2021 04:26 AM

POST
We need to add a custom button to the context menu for feature layers. In the past we used esri_mapping_layerContextMenu and esri_mapping_standaloneTableContextMenu and defined the states as insert conditions for layers and standalone tables by using:

<state id="esri_mapping_singleLayerSelectedCondition"/>
<state id="esri_mapping_singleFeatureLayerSelectedCondition"/>
<state id="esri_mapping_singleStandaloneTableSelectedCondition"/>

With ArcGIS Pro 2.7 and 2.8, esri_mapping_layerContextMenu does not seem to work for any shapefile-related layers; any help would be appreciated.
Posted 07-08-2021 03:52 AM

POST
We would love to see in-memory layers with labeled property graph support, not only m:n relationship classes, and this in-memory representation should use SPARQL for connecting various graph data sources. Unfortunately, as a distributor I cannot create ArcGIS Ideas.
Posted 05-18-2021 05:24 AM

POST
We are trying to reconstruct tracks using AIS data, and our GeoAnalytics Server fails with "Request Entity Too Large" while mapping the partitions. As far as we can judge it, Spark wants to write into the spatiotemporal big data store, which is based on Elasticsearch. Maybe we need to define some Spark configuration to adjust the size of the data being written (see the configuration sketch after the log)?

{"messageCode":"BD_101029","message":"193/372 distributed tasks completed.","params":{"completedTasks":"193","totalTasks":"372"}}
[ERROR|09:01:47] Stage 'runJob at EsSpark.scala:108' failed in GeoAnalytics job (attempt=0): Job aborted due to stage failure: Task 9 in stage 1.0 failed 4 times, most recent failure: Lost task 9.3 in stage 1.0 (TID 207, ##.##.##.###:####, executor 1): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [PUT] on [/_bulk] failed; server[##.##.##.###:####] returned [413|Request Entity Too Large:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:477)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:408)
at org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:226)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.tryFlush(BulkProcessor.java:215)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.flush(BulkProcessor.java:518)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.add(BulkProcessor.java:132)
at org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:192)
at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:172)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:77)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1(EsSpark.scala:108)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1$adapted(EsSpark.scala:108)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:.
{"messageCode":"BD_101030","message":"Failed to execute ReconstructTracks. Please contact your administrator with job ID 'jedad0ac11f654d64bc31ecc732c295be'.","params":{"toolName":"ReconstructTracks","jobID":"jedad0ac11f654d64bc31ecc732c295be"}}
[ERROR|09:01:47] Job 'jedad0ac11f654d64bc31ecc732c295be' for tool 'ReconstructTracks' failed: Job aborted due to stage failure: Task 9 in stage 1.0 failed 4 times, most recent failure: Lost task 9.3 in stage 1.0 (TID 207, ##.##.##.###, executor 1): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [PUT] on [/_bulk] failed; server[##.##.##.###:####] returned [413|Request Entity Too Large:]
[DEBUG]org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 1.0 failed 4 times, most recent failure: Lost task 9.3 in stage 1.0 (TID 207, ##.##.##.###, executor 1): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [PUT] on [/_bulk] failed; server[##.##.##.###:####] returned [413|Request Entity Too Large:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:477)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:408)
at org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:226)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.tryFlush(BulkProcessor.java:215)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.flush(BulkProcessor.java:518)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.add(BulkProcessor.java:132)
at org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:192)
at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:172)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:77)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1(EsSpark.scala:108)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1$adapted(EsSpark.scala:108)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2120)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2152)
at org.elasticsearch.spark.rdd.EsSpark$.doSaveToEs(EsSpark.scala:108)
at org.elasticsearch.spark.rdd.EsSpark$.saveToEs(EsSpark.scala:79)
at org.elasticsearch.spark.rdd.EsSpark$.saveToEs(EsSpark.scala:76)
at org.elasticsearch.spark.package$SparkRDDFunctions.saveToEs(package.scala:56)
at com.esri.arcgis.bds.spark.BDSSpark$.saveFeaturesToBDS(BDSSpark.scala:390)
at com.esri.arcgis.bds.spark.BDSSpark$.saveFeaturesToBDS(BDSSpark.scala:238)
at com.esri.arcgis.gae.gp.ags.SpatiotemporalDataStoreWriter.$anonfun$writeOnce$1(SpatiotemporalDataStoreWriter.scala:342)
at scala.util.Try$.apply(Try.scala:213)
at com.esri.arcgis.gae.gp.ags.SpatiotemporalDataStoreWriter.writeOnce(SpatiotemporalDataStoreWriter.scala:300)
at com.esri.arcgis.gae.ags.datastore.managed.ManagedDataStoreWriter.writeAll(ManagedDataStoreWriter.scala:13)
at com.esri.arcgis.gae.ags.datastore.managed.ManagedDataStoreWriter.writeAll$(ManagedDataStoreWriter.scala:8)
at com.esri.arcgis.gae.gp.ags.SpatiotemporalDataStoreWriter.writeAll(SpatiotemporalDataStoreWriter.scala:29)
at com.esri.arcgis.gae.ags.layer.output.ManagedLayerResultManager.addResultLayer(ManagedLayerResultManager.scala:214)
at com.esri.arcgis.gae.gp.adapt.param.LayerOutputConverter.writeToManagedDataStore(LayerOutputConverter.scala:85)
at com.esri.arcgis.gae.gp.adapt.param.LayerOutputConverter.write(LayerOutputConverter.scala:31)
at com.esri.arcgis.gae.gp.adapt.param.FeatureRDDOutputConverter.write(FeatureRDDOutputConverter.scala:17)
at com.esri.arcgis.gae.gp.adapt.param.FeatureRDDOutputConverter.write(FeatureRDDOutputConverter.scala:9)
at com.esri.arcgis.gae.gp.adapt.param.GPParameterAdapter$$anon$2.writeParam(GPParameterAdapter.scala:77)
at com.esri.arcgis.gae.fn.api.v1.GAFunctionArgs.update(GAFunctionArgs.scala:53)
at com.esri.arcgis.gae.fn.FnReconstructTracks$.execute(FnReconstructTracks.scala:165)
at com.esri.arcgis.gae.fn.api.v1.GAFunction.executeFunction(GAFunction.scala:25)
at com.esri.arcgis.gae.fn.api.v1.GAFunction.executeFunction$(GAFunction.scala:15)
at com.esri.arcgis.gae.fn.FnReconstructTracks$.executeFunction(FnReconstructTracks.scala:25)
at com.esri.arcgis.gae.gp.GPToGAFunctionAdapter.$anonfun$execute$4(GPToGAFunctionAdapter.scala:157)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.esri.arcgis.st.util.ScopeManager.withScope(ScopeManager.scala:20)
at com.esri.arcgis.st.spark.ExecutionContext.withScope(ExecutionContext.scala:23)
at com.esri.arcgis.st.spark.ExecutionContext.withScope$(ExecutionContext.scala:23)
at com.esri.arcgis.st.spark.GenericExecutionContext.withScope(ExecutionContext.scala:52)
at com.esri.arcgis.gae.gp.GPToGAFunctionAdapter.execute(GPToGAFunctionAdapter.scala:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.esri.arcgis.interop.NativeObjRef.nativeVtblInvokeNative(Native Method)
at com.esri.arcgis.interop.NativeObjRef.nativeVtblInvoke(Unknown Source)
at com.esri.arcgis.interop.NativeObjRef.invoke(Unknown Source)
at com.esri.arcgis.interop.Dispatch.vtblInvoke(Unknown Source)
at com.esri.arcgis.system.IRequestHandlerProxy.handleStringRequest(Unknown Source)
at com.esri.arcgis.discovery.servicelib.impl.SOThreadBase.handleSoapRequest(SOThreadBase.java:574)
at com.esri.arcgis.discovery.servicelib.impl.DedicatedSOThread.handleRequest(DedicatedSOThread.java:310)
at com.esri.arcgis.discovery.servicelib.impl.SOThreadBase.startRun(SOThreadBase.java:430)
at com.esri.arcgis.discovery.servicelib.impl.DedicatedSOThread.run(DedicatedSOThread.java:166)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [PUT] on [/_bulk] failed; server[##.##.##.###:####] returned [413|Request Entity Too Large:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:477)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:408)
at org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:226)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.tryFlush(BulkProcessor.java:215)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.flush(BulkProcessor.java:518)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.add(BulkProcessor.java:132)
at org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:192)
at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:172)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:77)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1(EsSpark.scala:108)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1$adapted(EsSpark.scala:108)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[DEBUG|09:01:47] Running cleanup tasks for conditions (Error,Complete)
[DEBUG|09:01:47] Running cleanup task [? @ DebugUtil.scala:106].
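A note on the question above: elasticsearch-hadoop caps each /_bulk request with its batch settings, and the 413 means a single bulk payload exceeded what the data store's HTTP front end accepts. Assuming these settings can be overridden in the GeoAnalytics/spatiotemporal data store configuration at all (not confirmed), the relevant knobs would be:

es.batch.size.bytes = 1mb        # elasticsearch-hadoop: max payload per bulk write (default 1mb); lower it to shrink requests
es.batch.size.entries = 1000     # elasticsearch-hadoop: max documents per bulk write (default 1000)
http.max_content_length = 100mb  # Elasticsearch node setting: max accepted request body (default 100mb); raise it on the data store side

Either shrinking the bulk writes or raising the accepted request size on the spatiotemporal data store nodes is the usual remedy for EsHadoopInvalidRequest 413 errors.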
Posted 05-14-2021 02:53 AM

POST
We are trying to write into a registered big data file share (BDFS) named 'BDFS2'. The datastore manager shows the dataset path as '/bigDataFileShares/BDFS2'. When we try to use it for writing from a custom Python script, Spark fails with:

{"messageCode":"BD_101138","message":"[Python] pyspark.sql.utils.IllegalArgumentException: Unsupported output target '/bigDataFileShares/BDFS2'","params":{"text":"pyspark.sql.utils.IllegalArgumentException: Unsupported output target '/bigDataFileShares/BDFS2'"}}

Custom script for writing tracks ('spark' and 'user_variables' are provided by the execution environment):

def write_tracks(layer_url=None, where=None, output_bdfs=None, output_name=None):
    layer_url = user_variables['layer_url']
    where = user_variables['where']
    output_bdfs = user_variables['output_bdfs']
    output_name = user_variables['output_name']
    tracks_data = spark.read.format('webgis').option('where', where).load(layer_url)
    if output_bdfs is None:
        tracks_data.write.format('webgis').save(output_name)
    else:
        tracks_data.write.format('webgis').option('dataStore', output_bdfs).save(output_name)

Do we have to specify a template before writing the filtered dataframe?
Posted 05-11-2021 02:30 AM

POST
Currently we are using the open source version of the Qt Framework SDK. The Qt Company decided to make the LTS releases available to commercial licensees only; in January they made the 5.15 LTS branch read-only for the open source version. Any necessary bugfix will only be published to the commercial branch of 5.15 and to the latest releases, e.g. Qt 6.x. Right now, when using the open source version, we have to keep up with the Qt release pace and always support the latest Qt release, or risk that serious security issues will not be fixed, which is not a valid option for serious development. The only two options we see so far are to move away from the open source version or, unluckily, to pull out the whole Qt technology stack and replace it with some alternatives. I have to discuss that with our sales and business advisors first.
Posted 04-09-2021 03:43 AM

POST
We would love to deeply integrate knowledge graphs into the ArcGIS Platform: not only for using knowledge graphs as a kind of custom datastore, but much more for gaining spatial insights, like graphs showing spatial relationships (e.g. IEDs -> are located near -> main roads -> used by -> convoys) as a kind of intelligence view over all known feature layers.
Posted 04-06-2021 08:06 AM

POST
We already got our hands dirty using Qt and .NET Blazor for WebAssembly development. We saw that the geometry routines of the new JavaScript API use WebAssembly, too. Is there anything planned for extending ArcGIS Web AppBuilder or for developing web apps using WebAssembly?
Posted 04-06-2021 05:45 AM