Course Outline

Introduction to Open-Source LLMs

  • Understanding open-weight models and their importance.
  • Overview of LLaMA, Mistral, Qwen, and other community-driven models.
  • Use cases for private, on-premise, or secure deployments.

Environment Setup and Tools

  • Installing and configuring Transformers, Datasets, and PEFT libraries.
  • Selecting appropriate hardware for fine-tuning.
  • Loading pre-trained models from Hugging Face or other repositories.

Data Preparation and Preprocessing

  • Dataset formats (instruction tuning, chat data, text-only).
  • Tokenization and sequence management.
  • Creating custom datasets and data loaders.
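The dataset formats above differ mainly in how each record is flattened into a single training string before tokenization. A minimal sketch of instruction-format rendering, assuming the common instruction/input/output field convention (the field names and the `###` prompt template are a widely used Alpaca-style convention, not a fixed standard):

```python
def format_example(example):
    """Render one instruction-tuning record as a single training string.

    The instruction/input/output field names and the '###' section markers
    are a common convention for instruction data, not a fixed standard.
    """
    prompt = f"### Instruction:\n{example['instruction']}\n"
    if example.get("input"):  # the input field is optional in many datasets
        prompt += f"### Input:\n{example['input']}\n"
    prompt += f"### Response:\n{example['output']}"
    return prompt

print(format_example({
    "instruction": "Translate the text to French.",
    "input": "Hello",
    "output": "Bonjour",
}))
```

Chat-style data is handled analogously, with each message rendered under a role marker before tokenization.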

Fine-Tuning Techniques

  • Comparing standard full fine-tuning with parameter-efficient methods.
  • Applying LoRA and QLoRA for efficient fine-tuning.
  • Using the Trainer API for rapid experimentation.
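The core idea behind LoRA can be stated in a few lines: rather than updating a full weight matrix W, train a small low-rank pair A and B and add their scaled product to the frozen W. A plain-Python sketch of the effective weight, following the W' = W + (alpha / r) * (B @ A) formulation from the original LoRA paper (in practice a library such as PEFT manages the adapters):

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_weight(W, A, B, alpha, r):
    """Effective weight after a LoRA update: W' = W + (alpha / r) * (B @ A).

    W is d_out x d_in and stays frozen; B is d_out x r; A is r x d_in.
    Only A and B are trained, so trainable parameters scale with the
    rank r rather than with d_out * d_in.
    """
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```

QLoRA applies the same low-rank adapters on top of a 4-bit quantized base model, which is what makes fine-tuning 7B-class models on a single GPU practical.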

Model Evaluation and Optimization

  • Assessing fine-tuned models using generation and accuracy metrics.
  • Managing overfitting, generalization, and validation sets.
  • Performance tuning tips and logging strategies.
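Two of the simplest generation and accuracy metrics can be computed without any framework. A minimal sketch covering exact-match accuracy over generations and perplexity from per-token negative log-likelihoods (real evaluations typically add richer, task-specific metrics):

```python
import math

def exact_match(predictions, references):
    """Fraction of generations that equal the reference after stripping whitespace."""
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(references)

def perplexity(token_nlls):
    """Perplexity from per-token negative log-likelihoods (natural log).

    Lower is better; a model that is always certain of the next token scores 1.0.
    """
    return math.exp(sum(token_nlls) / len(token_nlls))
```

Tracking these on a held-out validation set alongside the training loss is the usual way to spot overfitting early.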

Deployment and Private Use

  • Saving and loading models for inference.
  • Deploying fine-tuned models within secure enterprise environments.
  • Comparing on-premise versus cloud deployment strategies.

Case Studies and Use Cases

  • Examples of enterprise usage for LLaMA, Mistral, and Qwen.
  • Handling multilingual and domain-specific fine-tuning.
  • Discussion: Trade-offs between open and closed models.

Summary and Next Steps

Requirements

  • A solid understanding of Large Language Models (LLMs) and their architecture.
  • Practical experience with Python and PyTorch.
  • Basic familiarity with the Hugging Face ecosystem.

Audience

  • Machine learning practitioners.
  • AI developers.

Duration

14 Hours
