Differential privacy is a powerful approach to protecting individual privacy in data mining, yet it has seen little application outside academic circles. One reason is that few practitioners are certain how it actually works, and that uncertainty is a serious problem when the public release of sensitive data is at stake. Intuitively, differentially private data-mining applications protect individuals by injecting noise that "covers up" the impact any single individual can have on the query results. In this talk, I will explain concretely how this is accomplished, exactly what differential privacy does and does not guarantee, and common mistakes and misconceptions, and I will give a brief overview of useful differentially private data-mining techniques. The talk will be accessible to researchers from all domains; no prior background in statistics or probability theory is assumed. My goal is to offer a shortcut to researchers who would like to apply differential privacy in their own work, and thereby to enable broader adoption of this powerful tool.
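
As a taste of the noise intuition above, here is a minimal sketch of the Laplace mechanism, the textbook instance of calibrating noise to the largest impact any one individual can have on a query (its sensitivity). The function and parameter names are illustrative, not drawn from any particular library:

    import numpy as np

    def laplace_mechanism(true_answer, sensitivity, epsilon):
        # Scale the noise to the query's sensitivity: the largest
        # change one individual's data can cause in the true answer.
        scale = sensitivity / epsilon
        return true_answer + np.random.laplace(loc=0.0, scale=scale)

    # A counting query ("how many records satisfy X?") has
    # sensitivity 1: one person changes the count by at most 1.
    noisy_count = laplace_mechanism(true_answer=1000, sensitivity=1.0, epsilon=0.1)
    print(noisy_count)

A smaller epsilon means a larger noise scale and hence a stronger privacy guarantee, at the cost of a less accurate answer; this trade-off is central to everything the talk covers.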