From 7B to 8B Parameters: Understanding Weight Matrix Changes in LLama Transformer Models
Deep Dive into the Underlying Architecture of LLama3 (Apr 19, 2024)
A Beginner’s Guide to Fine-Tuning Gemma
A Comprehensive Guide to Fine-Tuning Gemma (Feb 21, 2024)
A Beginner’s Guide to Fine-Tuning Mixtral Instruct Model (published in Generative AI)
Unleashing the Power of Mixtral: A Comprehensive Guide to Fine-Tuning (Jan 8, 2024)
Run Any Hugging Face Model Locally
A guide and Colab notebook to quantize LLMs to GGUF format and run them locally (Jan 1, 2024)
Squeeze Every Drop of Performance from Your LLM with AWQ (Activation-Aware Quantization)
A Guide to Quantizing LLMs Using AWQ in a Google Colab Notebook (Oct 21, 2023)
Deploy Mistral/Llama 7B on AWS in 10 mins
A Step-by-Step Guide to Deploying in Just 3 Simple Stages (Oct 8, 2023)
A Beginner’s Guide to Fine-Tuning Mistral 7B Instruct Model
Fine-Tuning for Code Generation Using a Single Google Colab Notebook (Oct 6, 2023)
CompanionLLama: Your AI Sentient Companion — A Journey into Fine-Tuning LLama2
A Deep Dive into Fine-Tuning LLama2: Creating Your AI Sentient Companion with CompanionLLama (Sep 15, 2023)