
This episode discusses improving code generation with LLMs by dynamically extracting functions and context at runtime, using techniques like Python's inspect module and JavaScript's Function.prototype.toString. The goal is to generate code that integrates cleanly with existing projects, making LLM-assisted coding more customizable and practical.
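As a rough illustration of the dynamic-extraction idea mentioned above, here is a minimal Python sketch that uses the standard inspect module to pull a project function's live source code into an LLM prompt. The build_prompt helper and the prompt wording are illustrative assumptions, not the exact approach discussed in the episode; the JavaScript equivalent would be calling toString() on a function object.

import inspect
import textwrap


def slugify(text: str) -> str:
    """Example project function whose source we want the LLM to see."""
    return "-".join(text.lower().split())


def build_prompt(task: str, *context_funcs) -> str:
    # Hypothetical helper: gather the live source of each context function
    # via inspect.getsource, so generated code can reuse what already exists.
    snippets = [textwrap.dedent(inspect.getsource(fn)) for fn in context_funcs]
    return (
        "You are writing code for an existing project.\n"
        "Existing functions (extracted at runtime):\n\n"
        + "\n\n".join(snippets)
        + f"\nTask: {task}\n"
    )


if __name__ == "__main__":
    print(build_prompt(
        "Write a make_filename(title) function that reuses slugify.",
        slugify,
    ))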
—
Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
Check out PromptDesk.ai for an open-source prompt management tool.
Check out Brad’s AI Consultancy at bradleyarsenault.me
Add Justin Macorin and Bradley Arsenault on LinkedIn.
Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link
Hosted by Ausha. See ausha.co/privacy-policy for more information.