Error when running Find Point Clusters tool

09-17-2024 12:26 AM
by EmmaE1, Occasional Contributor

Hi all,

I'm running into an error when trying to run the Find Point Clusters tool in ArcGIS Pro. I can't quite decipher the error messages I'm getting. Is anyone able to tell me what the actual error is and how I might fix it?

Thank you!

Traceback (most recent call last):
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Resources\ArcToolbox\scripts\ga_desktop_findPointClusters.py", line 29, in <module>
    timeMethod=time_method,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Resources\ArcToolbox\Scripts\gautils\utilities.py", line 112, in run_ga_desktop_tool
    gax = get_client()
          ^^^^^^^^^^^^
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Resources\ArcToolbox\Scripts\gautils\utilities.py", line 97, in get_client
    return jobclient.JobClient(spark)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Resources\ArcToolbox\scripts\ga_spark\local\jobclient.py", line 11, in __init__
    self._jgp = spark._sc._gateway.jvm.com.esri.arcgis.gae.desktop.DesktopPythonEnvironment.getJobClient()
                ^^^^^^^^^
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Resources\ArcToolbox\scripts\ga_spark\local\_launcher.py", line 50, in __getattr__
    self._lazy_init()
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Resources\ArcToolbox\scripts\ga_spark\local\_launcher.py", line 55, in _lazy_init
    self._spark = get_or_create()
                  ^^^^^^^^^^^^^^^
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Resources\ArcToolbox\scripts\ga_spark\local\_launcher.py", line 66, in get_or_create
    _spark = _initialize_spark()
             ^^^^^^^^^^^^^^^^^^^
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Resources\ArcToolbox\scripts\ga_spark\local\_launcher.py", line 239, in _initialize_spark
    sc = SparkContext(gateway=gateway)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Java\runtime\spark\python\lib\pyspark.zip\pyspark\context.py", line 203, in __init__
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Java\runtime\spark\python\lib\pyspark.zip\pyspark\context.py", line 296, in _do_init
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Java\runtime\spark\python\lib\pyspark.zip\pyspark\context.py", line 421, in _initialize_context
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Java\runtime\spark\python\lib\py4j-0.10.9.7-src.zip\py4j\java_gateway.py", line 1587, in __call__
  File "c:\users\emmab\appdata\local\programs\arcgis\pro\Java\runtime\spark\python\lib\py4j-0.10.9.7-src.zip\py4j\protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Invalid Spark URL: spark://HeartbeatReceiver@Emma_Dell:52939
	at org.apache.spark.rpc.RpcEndpointAddress$.apply(RpcEndpointAddress.scala:66)
	at org.apache.spark.rpc.netty.NettyRpcEnv.asyncSetupEndpointRefByURI(NettyRpcEnv.scala:140)
	at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:102)
	at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:110)
	at org.apache.spark.util.RpcUtils$.makeDriverRef(RpcUtils.scala:36)
	at org.apache.spark.executor.Executor.<init>(Executor.scala:301)
	at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:64)
	at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:132)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:235)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:604)
	at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Unknown Source)
	at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
	at py4j.Gateway.invoke(Gateway.java:238)
	at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
	at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
	at java.base/java.lang.Thread.run(Unknown Source)
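
If I'm reading the last lines right, the part that stands out is "Invalid Spark URL: spark://HeartbeatReceiver@Emma_Dell:52939". My machine name, Emma_Dell, has an underscore in it, and underscores aren't legal characters in the host part of a URL, so I suspect Spark's RPC address parser is rejecting it. Below is a minimal sketch of how I'd check a machine name, plus a workaround I've seen suggested for local Spark (setting SPARK_LOCAL_HOSTNAME before the Spark context starts). I haven't confirmed this is the right fix for ArcGIS Pro, so treat it as a guess:

import os
import re

# RFC 1123 hostname label: letters, digits, and hyphens only; no underscores.
LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$")

def is_spark_safe_hostname(name: str) -> bool:
    """Return True if every dot-separated label is a valid hostname label."""
    return all(LABEL.match(label) for label in name.split("."))

print(is_spark_safe_hostname("Emma_Dell"))  # False: the underscore is the problem
print(is_spark_safe_hostname("Emma-Dell"))  # True: a hyphen would be fine

# Suggested workaround (unverified for ArcGIS Pro): point Spark at a safe
# local hostname before the local Spark context is created.
os.environ["SPARK_LOCAL_HOSTNAME"] = "localhost"
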
1 Reply
DuncanHornby, MVP Notable Contributor

The short answer is no. You say nothing about what your data inputs are, which environment settings you have set, what version of the software you're running, or where your data is stored...

I suggest you edit your question and add significantly more detail.
