Fine-Tuning and Evaluation of LLMs with NeMo Microservices: A Practical Tutorial
This guide walks through a hands-on workflow that uses NVIDIA NeMo Microservices in the PAASUP DIP environment to fine-tune the llama-3.2-3b-instruct model on the KoAlpaca dataset with LoRA, then directly compares model performance before and after tuning.
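Before diving into the workflow, it helps to see what LoRA actually does. The sketch below is a conceptual, NumPy-only illustration of the low-rank update idea (W + (alpha/r)·BA), not the NeMo Microservices API or the real llama-3.2-3b-instruct weights; all dimensions and names here are illustrative assumptions.

```python
import numpy as np

# Conceptual sketch of LoRA (Low-Rank Adaptation): instead of updating the
# full weight matrix W (d_out x d_in), we train two small matrices
# A (r x d_in) and B (d_out x r) with rank r << min(d_out, d_in).
# The effective weight becomes W + (alpha / r) * B @ A.
# Shapes are toy values, not the actual model's dimensions.
d_out, d_in, r, alpha = 64, 128, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero-init: adapter starts as a no-op

def lora_forward(x):
    """Forward pass with the low-rank update merged into the frozen weight."""
    return (W + (alpha / r) * B @ A) @ x

x = rng.standard_normal(d_in)
# With B = 0, the adapted layer matches the frozen base layer exactly,
# which is why LoRA training starts from the base model's behavior.
assert np.allclose(lora_forward(x), W @ x)

# LoRA trains r*(d_in + d_out) parameters instead of d_in*d_out.
print(A.size + B.size, "LoRA params vs", W.size, "full params")
```

This parameter saving is the reason LoRA is practical for a 3B-parameter model: only the small adapter matrices are trained and stored, while the base weights stay frozen.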