I understand you're using the Train Deep Learning tool. To check whether the tool is working properly, first determine whether training is running on the CPU or a GPU; some models require a fairly large amount of GPU memory.
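If you want to verify from Python that a GPU is even visible, here's a minimal sketch. It assumes the tool trains through a PyTorch backend; run it in the same Python environment the tool uses:

```python
# Minimal sketch, assuming a PyTorch backend: check whether a
# CUDA-capable GPU is visible to this Python environment.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)  # inspect GPU 0
    total_gib = props.total_memory / 1024**3
    print(f"GPU found: {props.name} with {total_gib:.1f} GiB of memory")
else:
    print("No CUDA GPU visible; training will fall back to the CPU.")
```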
To confirm the GPU is actually being used, run nvidia-smi -l 5 (the -l 5 flag refreshes the output every 5 seconds) and watch the GPU utilization and memory figures while the tool trains. Also keep expectations realistic: some models, especially those trained on a large amount of data per epoch, can legitimately take an hour or more to train. A CPU is far slower and may not have enough memory to handle memory-intensive training at all.
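If you prefer a compact readout over the full nvidia-smi table, its query mode can print just utilization and memory once per interval, e.g. nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv -l 5. Low utilization together with low memory use usually means training fell back to the CPU or the GPU is being starved for data.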
nvidia-smi also shows whether your GPU memory is well matched to the batch size you've set in the tool. For example, if your batch size is 4 and only a small fraction of GPU memory is in use during training, you can likely increase the batch size and make better use of the hardware; conversely, if memory is nearly full, a larger batch risks an out-of-memory error.
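If you want to probe the headroom empirically before committing to a long run, here's a hedged sketch in plain PyTorch (not the tool's own API; the resnet50 network and the 256x256 input size are placeholders for your actual model and image-chip size). It doubles the batch size until CUDA runs out of memory:

```python
# Hedged sketch in plain PyTorch, not the Train Deep Learning tool itself.
# The network (resnet50) and 256x256 input size are placeholders; swap in
# something close to your actual model and chip size.
import torch
import torchvision

model = torchvision.models.resnet50().cuda()

batch = 4
while True:
    try:
        torch.cuda.reset_peak_memory_stats()
        x = torch.randn(batch, 3, 256, 256, device="cuda")  # dummy image batch
        model(x).sum().backward()   # the backward pass is the memory peak
        model.zero_grad(set_to_none=True)
        peak_gib = torch.cuda.max_memory_allocated() / 1024**3
        print(f"batch size {batch} fits ({peak_gib:.1f} GiB peak)")
        batch *= 2
    except RuntimeError as e:       # CUDA OOM surfaces as a RuntimeError
        if "out of memory" not in str(e):
            raise
        print(f"batch size {batch} does not fit; try {batch // 2} or smaller")
        break
```

The largest batch that survives the backward pass is a reasonable upper bound for the batch size parameter you set in the tool.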