POST
We are integrating a number of C++ utility libraries into our ArcGIS Runtime for Qt Quick based sample viewer. These utility libraries are usually compiled using CMake, and The Qt Company offers some Qt Quick samples that use CMake:

```cmake
cmake_minimum_required(VERSION 3.14)

project(QPyApp LANGUAGES CXX)

set(CMAKE_INCLUDE_CURRENT_DIR ON)

set(CMAKE_AUTOUIC ON)
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)

set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# QtCreator supports the following variables for Android, which are identical to qmake Android variables.
# Check https://doc.qt.io/qt/deployment-android.html for more information.
# They need to be set before the find_package(...) calls below.
#if(ANDROID)
#    set(ANDROID_PACKAGE_SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/android")
#    if (ANDROID_ABI STREQUAL "armeabi-v7a")
#        set(ANDROID_EXTRA_LIBS
#            ${CMAKE_CURRENT_SOURCE_DIR}/path/to/libcrypto.so
#            ${CMAKE_CURRENT_SOURCE_DIR}/path/to/libssl.so)
#    endif()
#endif()

find_package(QT NAMES Qt6 Qt5 COMPONENTS Core Quick REQUIRED)
find_package(Qt${QT_VERSION_MAJOR} COMPONENTS Core Quick REQUIRED)

set(PROJECT_SOURCES
    main.cpp
    qml.qrc
)

if(${QT_VERSION_MAJOR} GREATER_EQUAL 6)
    qt_add_executable(QPyApp
        ${PROJECT_SOURCES}
    )
else()
    if(ANDROID)
        add_library(QPyApp SHARED
            ${PROJECT_SOURCES}
        )
    else()
        add_executable(QPyApp
            ${PROJECT_SOURCES}
        )
    endif()
endif()

target_compile_definitions(QPyApp
    PRIVATE $<$<OR:$<CONFIG:Debug>,$<CONFIG:RelWithDebInfo>>:QT_QML_DEBUG>)
target_link_libraries(QPyApp
    PRIVATE Qt${QT_VERSION_MAJOR}::Core Qt${QT_VERSION_MAJOR}::Quick)
```

Would you be so kind as to provide a sample using CMake and share it through arcgis-runtime-samples-qt?
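For context, this is roughly how we wire one of our own CMake-based utility libraries into such a project today; the third_party/geom_utils directory and the geom_utils target are hypothetical placeholders for any library that ships its own CMakeLists.txt:

```cmake
# Sketch only: pull in a utility library that provides its own CMakeLists.txt.
# "third_party/geom_utils" and the "geom_utils" target are placeholder names.
add_subdirectory(third_party/geom_utils)

# target_link_libraries() may be called repeatedly; this appends the utility
# library to the Qt libraries already linked above.
target_link_libraries(QPyApp PRIVATE geom_utils)
```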
Posted 08-10-2021 08:13 AM | 0 Kudos | 2 Replies | 1256 Views

POST
We finally figured it out; we also have to use singleLayerSelectedCondition instead of singleFeatureLayerSelectedCondition for enabling it.

```xml
<conditions>
  <insertCondition id="esri_mapping_single_layer_selected">
    <and>
      <state id="esri_mapping_singleLayerSelectedCondition"/>
    </and>
  </insertCondition>
  <insertCondition id="esri_mapping_single_standalonetable_selected">
    <and>
      <state id="esri_mapping_singleStandaloneTableSelectedCondition"/>
    </and>
  </insertCondition>
</conditions>

<updateModule refID="esri_mapping">
  <menus>
    <updateMenu refID="esri_mapping_layerContextMenu">
      <insertButton refID="SDA_Pro_Module_Create_Sda_From_FeatureTable_Button" separator="false" placeWith="esri_editing_table_openTablePaneButton" insert="after" />
    </updateMenu>
    <updateMenu refID="esri_mapping_unregisteredLayerContextMenu">
      <insertButton refID="SDA_Pro_Module_Create_Sda_From_FeatureTable_Button" separator="false" placeWith="esri_editing_table_openTablePaneButton" insert="after" />
    </updateMenu>
    <updateMenu refID="esri_mapping_standaloneTableContextMenu">
      <insertButton refID="SDA_Pro_Module_Create_Sda_From_StandaloneTable_Button" separator="false" placeWith="esri_editing_table_openStandaloneTablePaneButton" insert="after" />
    </updateMenu>
    <updateMenu refID="esri_mapping_unregisteredStandaloneTableContextMenu">
      <insertButton refID="SDA_Pro_Module_Create_Sda_From_StandaloneTable_Button" separator="false" placeWith="esri_editing_table_openStandaloneTablePaneButton" insert="after" />
    </updateMenu>
  </menus>
</updateModule>
```

Using the unregistered context menus did the trick. See Solved: Buttons built by ArcGIS Pro SDK cannot be added to... - Esri Community
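For completeness, the button itself references the new insert condition through its condition attribute; the caption, className, and tooltip below are illustrative placeholders, not our actual declaration:

```xml
<button id="SDA_Pro_Module_Create_Sda_From_FeatureTable_Button"
        caption="Create SDA"
        className="CreateSdaFromFeatureTableButton"
        condition="esri_mapping_single_layer_selected">
  <tooltip heading="Create SDA">Create an SDA from the selected layer's feature table.</tooltip>
</button>
```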
Posted 07-08-2021 04:26 AM | 1 Kudos | 0 Replies | 1074 Views

POST
We need to add a custom button to the context menu for feature layers. In the past we used esri_mapping_layerContextMenu and esri_mapping_standaloneTableContextMenu. We define the states as insert conditions for layers and standalone tables by using:

```xml
<state id="esri_mapping_singleLayerSelectedCondition"/>
<state id="esri_mapping_singleFeatureLayerSelectedCondition"/>
<state id="esri_mapping_singleStandaloneTableSelectedCondition"/>
```

With ArcGIS Pro 2.7 and 2.8, esri_mapping_layerContextMenu seems not to work for any shapefile-related layers; any help would be appreciated.
Posted 07-08-2021 03:52 AM | 0 Kudos | 1 Reply | 1105 Views

POST
We would love to see in-memory layers with labeled property graph support, not only m:n relationship classes, and this in-memory representation should use SPARQL for connecting various graph datasources. Unfortunately, as a distributor, I cannot create ArcGIS Ideas.
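To sketch what we mean by connecting graph datasources: SPARQL 1.1 federation lets a single query span a local graph and remote endpoints via the SERVICE keyword. The endpoint URL and predicates here are made up purely for illustration:

```sparql
PREFIX ex: <http://example.org/schema#>

# Join locally stored items with a remote graph datasource (hypothetical endpoint).
SELECT ?feature ?asset
WHERE {
  ?feature a ex:MapFeature .
  SERVICE <https://example.org/sparql> {
    ?feature ex:relatedTo ?asset .
  }
}
```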
Posted 05-18-2021 05:24 AM | 0 Kudos | 0 Replies | 1552 Views

POST
We are trying to reconstruct tracks from AIS data, and our GeoAnalytics Server fails with "Request Entity Too Large" while mapping the partitions. As far as we can judge, Spark wants to write into the underlying spatiotemporal big data store, which is based on Elasticsearch. Maybe we need to define some Spark configuration to adjust the size of the data being written?

```
{"messageCode":"BD_101029","message":"193/372 distributed tasks completed.","params":{"completedTasks":"193","totalTasks":"372"}}
[ERROR|09:01:47] Stage 'runJob at EsSpark.scala:108' failed in GeoAnalytics job (attempt=0): Job aborted due to stage failure: Task 9 in stage 1.0 failed 4 times, most recent failure: Lost task 9.3 in stage 1.0 (TID 207, ##.##.##.###:####, executor 1): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [PUT] on [/_bulk] failed; server[##.##.##.###:####] returned [413|Request Entity Too Large:]
{"messageCode":"BD_101030","message":"Failed to execute ReconstructTracks. Please contact your administrator with job ID 'jedad0ac11f654d64bc31ecc732c295be'.","params":{"toolName":"ReconstructTracks","jobID":"jedad0ac11f654d64bc31ecc732c295be"}}
[DEBUG]org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 1.0 failed 4 times, most recent failure: Lost task 9.3 in stage 1.0 (TID 207, ##.##.##.###, executor 1): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [PUT] on [/_bulk] failed; server[##.##.##.###:####] returned [413|Request Entity Too Large:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:477)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:408)
at org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:226)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.tryFlush(BulkProcessor.java:215)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.flush(BulkProcessor.java:518)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.add(BulkProcessor.java:132)
at org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:192)
at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:172)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:77)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1(EsSpark.scala:108)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1$adapted(EsSpark.scala:108)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2120)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2152)
at org.elasticsearch.spark.rdd.EsSpark$.doSaveToEs(EsSpark.scala:108)
at org.elasticsearch.spark.rdd.EsSpark$.saveToEs(EsSpark.scala:79)
at org.elasticsearch.spark.rdd.EsSpark$.saveToEs(EsSpark.scala:76)
at org.elasticsearch.spark.package$SparkRDDFunctions.saveToEs(package.scala:56)
at com.esri.arcgis.bds.spark.BDSSpark$.saveFeaturesToBDS(BDSSpark.scala:390)
at com.esri.arcgis.bds.spark.BDSSpark$.saveFeaturesToBDS(BDSSpark.scala:238)
at com.esri.arcgis.gae.gp.ags.SpatiotemporalDataStoreWriter.$anonfun$writeOnce$1(SpatiotemporalDataStoreWriter.scala:342)
at scala.util.Try$.apply(Try.scala:213)
at com.esri.arcgis.gae.gp.ags.SpatiotemporalDataStoreWriter.writeOnce(SpatiotemporalDataStoreWriter.scala:300)
at com.esri.arcgis.gae.ags.datastore.managed.ManagedDataStoreWriter.writeAll(ManagedDataStoreWriter.scala:13)
at com.esri.arcgis.gae.ags.datastore.managed.ManagedDataStoreWriter.writeAll$(ManagedDataStoreWriter.scala:8)
at com.esri.arcgis.gae.gp.ags.SpatiotemporalDataStoreWriter.writeAll(SpatiotemporalDataStoreWriter.scala:29)
at com.esri.arcgis.gae.ags.layer.output.ManagedLayerResultManager.addResultLayer(ManagedLayerResultManager.scala:214)
at com.esri.arcgis.gae.gp.adapt.param.LayerOutputConverter.writeToManagedDataStore(LayerOutputConverter.scala:85)
at com.esri.arcgis.gae.gp.adapt.param.LayerOutputConverter.write(LayerOutputConverter.scala:31)
at com.esri.arcgis.gae.gp.adapt.param.FeatureRDDOutputConverter.write(FeatureRDDOutputConverter.scala:17)
at com.esri.arcgis.gae.gp.adapt.param.FeatureRDDOutputConverter.write(FeatureRDDOutputConverter.scala:9)
at com.esri.arcgis.gae.gp.adapt.param.GPParameterAdapter$$anon$2.writeParam(GPParameterAdapter.scala:77)
at com.esri.arcgis.gae.fn.api.v1.GAFunctionArgs.update(GAFunctionArgs.scala:53)
at com.esri.arcgis.gae.fn.FnReconstructTracks$.execute(FnReconstructTracks.scala:165)
at com.esri.arcgis.gae.fn.api.v1.GAFunction.executeFunction(GAFunction.scala:25)
at com.esri.arcgis.gae.fn.api.v1.GAFunction.executeFunction$(GAFunction.scala:15)
at com.esri.arcgis.gae.fn.FnReconstructTracks$.executeFunction(FnReconstructTracks.scala:25)
at com.esri.arcgis.gae.gp.GPToGAFunctionAdapter.$anonfun$execute$4(GPToGAFunctionAdapter.scala:157)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.esri.arcgis.st.util.ScopeManager.withScope(ScopeManager.scala:20)
at com.esri.arcgis.st.spark.ExecutionContext.withScope(ExecutionContext.scala:23)
at com.esri.arcgis.st.spark.ExecutionContext.withScope$(ExecutionContext.scala:23)
at com.esri.arcgis.st.spark.GenericExecutionContext.withScope(ExecutionContext.scala:52)
at com.esri.arcgis.gae.gp.GPToGAFunctionAdapter.execute(GPToGAFunctionAdapter.scala:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.esri.arcgis.interop.NativeObjRef.nativeVtblInvokeNative(Native Method)
at com.esri.arcgis.interop.NativeObjRef.nativeVtblInvoke(Unknown Source)
at com.esri.arcgis.interop.NativeObjRef.invoke(Unknown Source)
at com.esri.arcgis.interop.Dispatch.vtblInvoke(Unknown Source)
at com.esri.arcgis.system.IRequestHandlerProxy.handleStringRequest(Unknown Source)
at com.esri.arcgis.discovery.servicelib.impl.SOThreadBase.handleSoapRequest(SOThreadBase.java:574)
at com.esri.arcgis.discovery.servicelib.impl.DedicatedSOThread.handleRequest(DedicatedSOThread.java:310)
at com.esri.arcgis.discovery.servicelib.impl.SOThreadBase.startRun(SOThreadBase.java:430)
at com.esri.arcgis.discovery.servicelib.impl.DedicatedSOThread.run(DedicatedSOThread.java:166)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [PUT] on [/_bulk] failed; server[##.##.##.###:####] returned [413|Request Entity Too Large:]
at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:477)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:408)
at org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:226)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.tryFlush(BulkProcessor.java:215)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.flush(BulkProcessor.java:518)
at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.add(BulkProcessor.java:132)
at org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:192)
at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:172)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:77)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1(EsSpark.scala:108)
at org.elasticsearch.spark.rdd.EsSpark$.$anonfun$doSaveToEs$1$adapted(EsSpark.scala:108)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[DEBUG|09:01:47] Running cleanup tasks for conditions (Error,Complete)
[DEBUG|09:01:47] Running cleanup task [? @ DebugUtil.scala:106].
```
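If the bulk requests are simply too large, the usual knobs would be the elasticsearch-hadoop batch settings; whether and where GeoAnalytics Server lets us pass these through is an assumption on our side, so treat this as a sketch of the connector properties involved rather than a confirmed fix:

```properties
# elasticsearch-hadoop bulk settings (connector defaults: 1mb / 1000 documents
# per task). Smaller batches keep each PUT /_bulk request under the
# request-size limit that is answering with 413 Request Entity Too Large.
es.batch.size.bytes=500kb
es.batch.size.entries=500
```

Alternatively, the limit could be raised on the receiving side, e.g. Elasticsearch's http.max_content_length setting (default 100mb) or the request-size limit of any reverse proxy sitting in front of the data store.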
Posted 05-14-2021 02:53 AM | 0 Kudos | 1 Reply | 1183 Views

POST
We are trying to write into a registered big data file share named 'BDFS2'. The datastore manager shows the dataset path as '/bigDataFileShares/BDFS2'. When we try to use it for writing from a custom Python script, Spark fails with:

```
{"messageCode":"BD_101138","message":"[Python] pyspark.sql.utils.IllegalArgumentException: Unsupported output target '/bigDataFileShares/BDFS2'","params":{"text":"pyspark.sql.utils.IllegalArgumentException: Unsupported output target '/bigDataFileShares/BDFS2'"}}
```

Custom script for writing tracks:

```python
def write_tracks(layer_url=None, where=None, output_bdfs=None, output_name=None):
    # The defaults are overridden by the tool's user_variables dictionary.
    layer_url = user_variables['layer_url']
    where = user_variables['where']
    output_bdfs = user_variables['output_bdfs']
    output_name = user_variables['output_name']
    # Read the filtered features through the GeoAnalytics 'webgis' data source.
    tracks_data = spark.read.format('webgis').option('where', where).load(layer_url)
    if output_bdfs is None:
        tracks_data.write.format('webgis').save(output_name)
    else:
        tracks_data.write.format('webgis').option('dataStore', output_bdfs).save(output_name)
```

Do we have to specify a template before writing the filtered dataframe?
Posted 05-11-2021 02:30 AM | 0 Kudos | 1 Reply | 1182 Views

POST
Currently, we are using the open source version of the Qt Framework SDK. The Qt Company decided to make the LTS releases available to commercial license holders only, and in January it made the 5.15 LTS branch read-only for the open source version. Any necessary bugfix will only be published to the commercial 5.15 branch and to the latest releases, e.g. Qt 6.x. Right now, when using the open source version we have to keep up with the Qt release pace and always support the latest Qt release, or risk that serious security issues will never be fixed, which is not a valid option for serious development. The only two options we see so far are to move away from the open source version or, unfortunately, to pull out the whole Qt technology stack and replace it with alternatives. I have to discuss that with our sales and business advisors first.
Posted 04-09-2021 03:43 AM | 0 Kudos | 0 Replies | 784 Views

POST
We would love to see knowledge graphs deeply integrated into the ArcGIS Platform; not only using knowledge graphs as a kind of custom datastore, but much more for gaining spatial insights, like graphs showing spatial relationships, e.g. IEDs -> are located near -> main roads -> used by convoys ..., as a kind of intelligence view over all known feature layers.
Posted 04-06-2021 08:06 AM | 0 Kudos | 2 Replies | 1746 Views

POST
We already got our hands dirty using Qt and .NET Blazor for WebAssembly development, and we saw that the geometry routines of the new JavaScript API use WebAssembly, too. Is anything planned for extending ArcGIS Web AppBuilder or for developing web apps using WebAssembly?
Posted 04-06-2021 05:45 AM | 0 Kudos | 1 Reply | 1657 Views

POST
The Qt guys did a great job getting rid of legacy implementations; Qt now uses the native graphics API of each operating system. They also made it possible to develop Qt Quick apps without the QML JavaScript runtime. Would you be so kind as to tell us when the ArcGIS Runtime for Qt is going to support the new Qt 6?
Posted 04-06-2021 05:40 AM | 0 Kudos | 2 Replies | 878 Views

POST
Our business partners/system integrators like Atos and SAP would love to see a high-performance geospatial engine implemented in Rust. This geospatial engine should be available in mobile, desktop, server, and serverless environments; there is a need for integrating the same geospatial workflows using the same geospatial engine everywhere. Currently the best option so far is GitHub - georust/geo: Geospatial primitives and algorithms for Rust. The geospatial engine would only need to offer the same methods as the GeometryEngine of ArcGIS Pro and ArcGIS Runtime, but with all the capabilities statically linked into a single, proudly hand-built and optimized Rust module.
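As a taste of what georust/geo already offers, a small sketch using the geo crate; the crate version and the chosen operations are only illustrative:

```rust
// Cargo.toml: geo = "0.28"  (version is illustrative)
use geo::{polygon, Area, Centroid};

fn main() {
    // Build a simple polygon with the polygon! macro.
    let parcel = polygon![
        (x: 0.0, y: 0.0),
        (x: 4.0, y: 0.0),
        (x: 4.0, y: 3.0),
        (x: 0.0, y: 3.0),
    ];

    // Algorithms are trait-based, similar in spirit to a GeometryEngine API.
    println!("area = {}", parcel.unsigned_area());
    if let Some(c) = parcel.centroid() {
        println!("centroid = ({}, {})", c.x(), c.y());
    }
}
```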
Posted 04-06-2021 05:34 AM | 0 Kudos | 1 Reply | 1488 Views

POST
Hi Michael, the "buffer.gpkx" sample is attached to the initial post. We stripped it down to a simple buffer model using features as input and output. We can use all of our GPKX packages in any ArcGIS Pro environment and publish them into any of our ArcGIS Enterprise environments; the input "features" parameter is correctly offered as a "GPFeatureRecordSetLayer". Meanwhile, we had to create the support case "Geoprocessing packages created in ArcGIS Pro do not offer Feature Set as input parameter by using ArcGIS Runtime Local Server 100.9" for this blocker.
Posted 01-03-2021 11:17 PM | 0 Kudos | 0 Replies | 1283 Views

POST
Did anybody use an ArcGIS Pro based geoprocessing package with features as an input parameter with the Runtime Local Server 100.9? We are still waiting for feedback on this issue... meanwhile, support is trying to find documentation about "Best Practices for creating Geoprocessing Packages with ArcGIS Pro".
Posted 12-22-2020 06:49 AM | 0 Kudos | 0 Replies | 1334 Views

POST
Really neat. We would prefer to use MQTT feeds for streaming our custom geospatial entities, extracted from various ELT workflows, into Velocity, and to receive the analytics results as a stream back into our serverless environment.
Posted 12-15-2020 11:19 PM | 0 Kudos | 0 Replies | 992 Views

POST
We are currently running into some issues with geoprocessing packages created with ArcGIS Pro 2.6.3. Input parameters representing features as a Feature Set are somehow misinterpreted as GPString by the Runtime Local Server. When we create our geoprocessing packages with ArcMap 10.8, the packages are correctly loaded into the runtime apps and the input parameter is shown as a Feature Set. Are there any best practices regarding "Prepare your Geoprocessing Models for Runtime Apps with ArcGIS Pro"? First issue we saw some weeks ago: Runtime Local Server: Input Feature Set is shown as GPString
Posted 11-25-2020 04:35 AM | 0 Kudos | 0 Replies | 684 Views