We are trying to write into a registered big data file share (BDFS) named 'BDFS2'. The datastore manager shows its dataset path as '/bigDataFileShares/BDFS2'. When we use it as the write target from a custom Python script, Spark fails with:
{"messageCode":"BD_101138","message":"[Python] pyspark.sql.utils.IllegalArgumentException: Unsupported output target '/bigDataFileShares/BDFS2'","params":{"text":"pyspark.sql.utils.IllegalArgumentException: Unsupported output target '/bigDataFileShares/BDFS2'"}}
Custom script for writing tracks:
def write_tracks(layer_url=None, where=None, output_bdfs=None, output_name=None):
    # Inputs come from the tool's user_variables dictionary
    layer_url = user_variables['layer_url']
    where = user_variables['where']
    output_bdfs = user_variables['output_bdfs']
    output_name = user_variables['output_name']

    tracks_data = spark.read.format('webgis').option('where', where).load(layer_url)
    if output_bdfs is None:
        tracks_data.write.format('webgis').save(output_name)
    else:
        tracks_data.write.format('webgis').option('dataStore', output_bdfs).save(output_name)
Do we have to specify a template before writing the filtered dataframe?
Hi, you are right: when writing to a BDFS, the template needs to be appended to the output datastore string.
For example:
df.write.format('webgis').option('dataStore', '/bigDataFileShares/BDFS2:csv').save('output_one')
The following documentation link contains additional datastore string examples that might be helpful:
In addition, the following documentation links cover how to create a template:
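To fold this into the original script, the template suffix can be composed onto the datastore path before the write call. A minimal sketch (the `with_template` helper is hypothetical, not part of the webgis API; it just builds the '/bigDataFileShares/NAME:template' string form shown above):

```python
def with_template(datastore_path, template='csv'):
    """Append a write template (e.g. 'csv') to a datastore path,
    producing the '/bigDataFileShares/NAME:template' form that the
    webgis writer expects. Leaves the path unchanged if a template
    suffix is already present."""
    if ':' in datastore_path:
        return datastore_path
    return '{}:{}'.format(datastore_path, template)
```

Used in the failing branch of the script, this becomes something like `tracks_data.write.format('webgis').option('dataStore', with_template(output_bdfs)).save(output_name)`.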