Hi everyone.
I am trying to implement "Automate Road Surface Investigation Using Deep Learning" with Anaconda, but when I try to get the learning rate at the line
lr = ssd.lr_find()
I get this runtime error:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 2.90 GiB already allocated; 0 bytes free; 2.95 GiB reserved in total by PyTorch)
I have been looking for a solution to this error. Some blogs explain that I should clear the CUDA cache; I tried that, but the runtime error persists.
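For reference, this is roughly what I ran to clear the cache (a minimal sketch; `ssd` is the learner object from the tutorial, and it is only assumed here, not shown):

```python
import gc
import torch

# Drop Python-level references first, then ask PyTorch to return its
# cached-but-unused GPU blocks to the driver. Note this cannot free the
# 2.90 GiB that is still "already allocated" by live tensors; if memory
# is genuinely in use, the usual fix is a smaller batch size or image
# size when building the DataLoaders (values there are illustrative).
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```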
My GPU is an NVIDIA Quadro P620 (4 GB).
Thanks for any help you can give me.