Where possible, have deep learning tools validate parameters before beginning time-consuming processes

Status: Open
HollyTorpey_LSA
Frequent Contributor

Idea: Have deep learning tools check the validity of all parameters prior to processing.

Reason: Many deep learning tools take hours or days to run, so it is important to verify the validity of parameters before investing all that time. For example, I just had the Train Deep Learning Model tool fail after nearly 48 hours of processing because I had forgotten to update the output folder name, which was left over from a previous training run. It completed all its epochs, returned the model analysis matrix at the bottom, and then failed because the output directory was not empty. It seems like it would be fairly easy to verify that the output directory is empty before training begins and error out immediately if it isn't. Then I could fix my mistake and proceed, having lost only a few seconds or minutes instead of two days of processing time.

[Screenshot: HollyTorpey_LSA_0-1765403488086.png]

Alternatively, in the specific case of a non-empty output directory, I suggest the tool use the non-empty directory and simply append _1 to the name of each content item it outputs there, or create a new output directory with _1 appended to the folder name. Or write the output to some temp location and provide the link. It seems like such a waste to lose a completed model after two days of training just because the user forgot to update the output directory name.

Thank you!

2 Comments
MErikReedAugusta

I recently had to troubleshoot a custom script that was erroring out at the 45-minute mark, and that was brutal.

I can't imagine losing 48 hours of processing time.  If I could Kudo this 1000 times, I would.

HollyTorpey_LSA

Thanks, @MErikReedAugusta. I appreciate the moral support!

The mistake was mine, but the punishment definitely didn't seem to fit the crime. 😭