Adithya S K

- From 7B to 8B Parameters: Understanding Weight Matrix Changes in LLama Transformer Models. Deep Dive into the Underlying Architecture of LLama3 (Apr 19)
- A Beginner's Guide to Fine-Tuning Gemma. A Comprehensive Guide to Fine-Tuning Gemma (Feb 21)
- A Beginner's Guide to Fine-Tuning Mixtral Instruct Model. Unleashing the Power of MixTRAL: A Comprehensive Guide to Fine-Tuning (in Generative AI, Jan 8)
- Run any Huggingface model locally. A guide/colab notebook to quantize LLMs in GGUF format to run them locally (Jan 1)
- Squeeze Every Drop of Performance from Your LLM with AWQ (Activation-Aware Quantization). A Guide to Quantize LLMs Using AWQ on a Google Colab Notebook (Oct 21, 2023)
- Deploy Mistral/Llama 7b on AWS in 10 mins. A Step-by-Step Guide to Deploying in Just 3 Simple Stages (Oct 8, 2023)
- A Beginner's Guide to Fine-Tuning Mistral 7B Instruct Model. Fine-Tuning for Code Generation Using a Single Google Colab Notebook (Oct 6, 2023)
- CompanionLLama: Your AI Sentient Companion — A Journey into Fine-Tuning LLama2. A Deep Dive into Fine-Tuning LLama2: Creating Your AI Sentient Companion with CompanionLLama (Sep 15, 2023)