The economics of AI infrastructure have shifted dramatically with per-second billing on serverless GPU platforms. Is it actually cheaper to rent high-end cards like the H100 or B200 by the hour, or does owning hardware still make sense for high-utilization workloads? We explore the break-even points for cards ranging from the T4 to the Blackwell B200, the hidden costs of depreciation and cooling, and why paying more for a faster GPU can sometimes lower your total compute bill.
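The break-even comparison comes down to simple arithmetic: owning wins once cumulative rental savings exceed the purchase price. A minimal sketch, using entirely hypothetical prices (no figure below is a quote from any vendor or cloud provider):

```python
# Back-of-the-envelope rent-vs-buy break-even. All dollar figures are
# illustrative assumptions, not real H100 pricing.

def break_even_hours(purchase_price, hourly_rent, hourly_ownership_cost):
    """Hours of use at which owning the card matches renting it.

    hourly_ownership_cost folds in power, cooling, and hosting;
    depreciation is captured by the purchase price itself.
    """
    savings_per_hour = hourly_rent - hourly_ownership_cost
    if savings_per_hour <= 0:
        raise ValueError("owning never breaks even at these rates")
    return purchase_price / savings_per_hour

# Hypothetical figures: $30,000 to buy, $3.50/hr to rent,
# $0.50/hr in power/cooling/hosting when owned.
hours = break_even_hours(30_000, 3.50, 0.50)
print(f"Break-even after {hours:,.0f} GPU-hours "
      f"(~{hours / 8760:.1f} years at 100% utilization)")
```

At these made-up rates the card pays for itself after 10,000 GPU-hours, a bit over a year of continuous use; at 30% utilization the same card takes over three years to break even, which is why the utilization assumption dominates the whole analysis.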