One of the reasons fast.ai's "Practical Deep Learning for Coders" is such a great resource is that its teacher, prolific programmer Jeremy Howard, knows how to design a course for self-directed learners.
The course tries out a few innovative pedagogic techniques Howard came across while homeschooling his child, one of which is to drop learners in at the deep end: the very first lesson walks us through fine-tuning a ResNet image classification model in a Colab notebook before explaining how any of it works. Pretty cool.
I started a Quarto blog per Howard's recommendation early in the course. I didn't end up writing a post for every lesson, but I did make a fun little project out of fine-tuning GPT-3 on the full text of William Gibson's Neuromancer, which is written up on the blog.
I'm glad I hung on till the end, though, because one of the last lectures covered recommendation systems and the crucial technique of cosine similarity search, which came in handy for later projects such as the GPT-4 Spaced Repetition Experiments, where similarity search was used to inject relevant passages from the lecture transcripts into a "flashcard-refining" prompt.
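The retrieval step is simple enough to sketch in a few lines. Here's a minimal, illustrative version in NumPy: it assumes you already have embedding vectors for the query and for each transcript passage (the function names and the tiny toy vectors are mine, not from the course or the project), and it ranks passages by cosine similarity so the top hits can be pasted into the prompt.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors:
    # 1.0 = same direction, 0.0 = orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k_passages(query_vec, passage_vecs, k=2):
    # Score every passage embedding against the query embedding,
    # then return the indices of the k most similar passages.
    scores = [cosine_similarity(query_vec, v) for v in passage_vecs]
    return sorted(range(len(passage_vecs)), key=lambda i: -scores[i])[:k]

# Toy 2-D "embeddings" just to show the mechanics.
query = np.array([1.0, 0.0])
passages = [
    np.array([1.0, 0.0]),   # points the same way as the query
    np.array([0.0, 1.0]),   # orthogonal to the query
    np.array([0.9, 0.1]),   # nearly the same direction
]
print(top_k_passages(query, passages, k=2))  # → [0, 2]
```

In a real pipeline the vectors would come from an embedding model and you'd normalize them once up front so ranking reduces to a single matrix-vector product, but the idea is exactly this.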