
Big models, tight budgets? No problem. In this episode of Pop Goes the Stack, hosts Lori MacVittie and Joel Moses talk with Dmitry Kit from F5's AI Center of Excellence about LoRA (Low-Rank Adaptation), the not-so-secret weapon for customizing LLMs without melting your GPU or your wallet. From role-specific agents to domain-aware behavior, they break down how LoRA lets you inject intelligence without retraining the entire brain. Whether you're building AI for IT ops, customer support, or anything in between, this is fine-tuning that actually scales. Learn about the benefits, risks, and practical applications of using LoRA to target specific model behavior, reduce latency, and optimize performance, all for under $1,000. Tune in to understand how LoRA can revolutionize your approach to AI and machine learning.
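If you want a concrete picture of what "low-rank adaptation" means before listening, here is a minimal sketch of the core idea in PyTorch. It is not code from the episode: the layer sizes, rank, and scaling factor are illustrative assumptions. The point it shows is the one the hosts discuss, that the pretrained weight stays frozen and only a small low-rank update is trained.

# Minimal sketch of the LoRA idea: keep the base weight W frozen and learn a
# low-rank update B @ A, so the effective weight is W + (alpha / r) * B @ A.
# All names and hyperparameters here are illustrative, not from the episode.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Frozen pretrained layer (stands in for one linear layer of the base LLM).
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Trainable low-rank factors: only r * (in_features + out_features) new parameters.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(768, 768, r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    # Only a small fraction of the layer's parameters are trained, which is
    # why LoRA fine-tuning fits on modest GPU budgets.
    print(f"trainable params: {trainable} / {total}")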