I first became aware of Doximity GPT the way many clinicians do—through a peer's recommendation. My brother, an internal medicine resident at the time (now a hospitalist), was using their AI scribe over a year ago, when it was still in beta. “Have you seen this thing? It’s free,” he texted me. That early signal—clinicians organically discovering and adopting a tool—turns out to be core to Doximity’s entire philosophy.
When my three-year-old recently may have swallowed a plastic fork tine at a restaurant (he didn’t, thankfully), I found myself cycling through the emerging ecosystem of clinical AI tools: Open Evidence, Doximity GPT, even poison control. Each gave slightly different guidance, each had its own approach to surfacing evidence. It crystallized a question I wanted to explore: in this rapidly converging landscape of clinical AI, what actually differentiates one tool from another?
Visual Prompt 1: Split-screen comparison showing a worried father with phone at restaurant table on left (warm editorial illustration style), transitioning to floating interface elements of multiple clinical AI apps (Open Evidence, Doximity GPT, UpToDate) on right, with subtle connecting lines between them suggesting the question of convergence—modern editorial style, muted teals and warm grays
The User-First Philosophy
Dr. Amit Phull, Doximity’s Chief Clinical Experience Officer and an emergency medicine physician, joined the company in 2014 through what he calls “equal parts luck and serendipity.” With a background in computer engineering before medical school, he was part of Doximity’s early physician advisory panel, beta testing their software. At a 2014 medical advisory board meeting, when leadership mentioned they were looking for technically-minded clinicians, Phull essentially raised his hand and said, “What about me?”
That origin story matters because it encapsulates Doximity’s approach to product development. “Our process is what differentiates us,” Phull explained. “We have a very broad network of clinicians that comprise the membership of Doximity, and we have the capability to engage those connections and directly ask them how we might improve this product to be most useful in their clinical practice.”
The emphasis on “improve” is deliberate. “Our approach is not that medicine was solved last Tuesday,” Phull said, pushing back against what he sees as hubris in healthcare AI. “Anyone who presents any technology as having been the solution to everything that ails healthcare—there’s a healthy amount of hubris there.”
From Administrative Assistant to Clinical Reference
Doximity GPT’s evolution reflects this user-driven philosophy. In early 2023, the company surveyed hundreds of clinicians about where AI could have the most impact. The answer was overwhelming: administrative burden. Well-designed AI tools, clinicians estimated, could save 12-13 hours per week—“a ridiculous amount of time,” as Phull put it. Enough time to do two additional operations or see 20-30 more patients.
So Doximity GPT launched focused on being an administrative assistant—helping with documentation, prior authorizations, and appeals. The “unsexy part of AI,” Phull called it, acknowledging data showing that for every hour of clinical work, the average clinician does two hours of “other stuff.”
But then something interesting happened. Despite being fashioned as an administrative tool, a significant share of queries turned out to be clinical reference questions. Users were asking for clinical decision support.
Visual Prompt 2: Flow diagram showing the evolution of Doximity GPT—left side showing “Administrative Burden” (prior auths, documentation, appeals) with bar graph showing 12-13 hours saved, flowing through user feedback loop in center, emerging as “Clinical Reference” (clinical questions, decision support, peer review) on right—clean modern editorial style with flowing connection lines
At their March 2025 medical advisory board, Doximity floated the idea of leaning into clinical reference. Advisors made their requirements clear: HIPAA compliance, instant lookup capability, peer review, and transparent citations. By summer’s end, Doximity had acquired Pathway, a company co-founded by clinicians that had built a massive semantic graph of medical knowledge—guidelines, peer-reviewed drug monographs, and interconnected citations that could interface with large language models to improve reliability.
“In about nine weeks, we did nine months or nine years worth of work,” Phull joked. They integrated Pathway’s entire tech stack and re-released Doximity GPT with clinical reference capabilities.
The Convergence Question
This brings us to the elephant in the room: convergence. UpToDate, Open Evidence, Glass Health, Expert AI—they’re all racing toward similar functionality, all surfacing citations, all claiming to reduce hallucinations. What differentiates them?
Phull’s answer was nuanced. He positioned Doximity as occupying the middle ground in a spectrum. On one end sits UpToDate—humans creating consensus, then layering AI on top. On the other end are tools that lean more heavily into pure AI capabilities. “We’re kind of right in the middle,” Phull explained. “We have technology that enables us to move at a speed that the UpToDate process probably could not. And we also have this human layer—a very broad and extensive network of folks who have domain expertise.”
The key difference is the “order of operations”—technology for speed, human expertise for verification, but sequenced differently than traditional approaches.
This philosophy manifests in PeerCheck, an initiative Doximity announced recently, co-chaired by Dr. Eric Topol and Dr. Benjamin (readers can learn more at the Doximity blog). The program leverages hundreds of thousands of Doximity members who are primary authors of cited literature, bringing them “behind the curtain” to verify AI representations of their own work.
“When there are areas that a pure technology might give variable answers—you ask the same question 100 times and get some variation—we can solve for some of those problems by maintaining human expertise right at the center of clinical decision making,” Phull said.
The Patient-Facing Question
Perhaps the most interesting tension in clinical AI is whether these tools should be patient-facing. As Phull noted, “Even if we didn’t want to put these tools in the hands of patients, they’re gonna be in the hands of patients.” He trained in the “WebMD or Dr. Google generation,” where patients arrived armed with information and pre-formed decisions.
But Doximity’s answer is “not yet”—and maybe never. “The way we’re structured as a company, we’re not inherently patient-focused,” Phull explained. Their focus remains on empowering clinicians alongside their patients. They’ve made docs.doximity.com publicly accessible (with caps after a few queries), primarily to make it easier for clinician members to access.
“Our summary position is not that we’re trying to make Doximity GPT restricted from patients. It’s just not our personal point of focus as a company—at least not yet,” Phull said. “We hope that the more empowered we make our clinician user base to leverage these tools to advance patient care, the patients are gonna be right along with them for that ride.”
Visual Prompt 3: Conceptual illustration showing clinician and patient together looking at screen/interface (suggesting collaborative use rather than separate patient tool)—warm editorial style showing partnership, with subtle Doximity interface elements floating nearby, modern healthcare setting
Beyond the Hype: Practical AI
What stuck with me most from our conversation was Phull’s repeated emphasis on “practical AI”—technology that’s actually usable in the real world, not just impressive in demos. This means navigating health system approvals, ensuring HIPAA compliance, building EHR integrations, and maintaining trust.
“The technology is just the technology,” Phull said. “It’s not gonna show up and suddenly be operating on your patients. The way this technology actually does anything is clinicians participating in its development and its deployment. That’s where we think we win. And frankly, it’s not even about winning—it’s about making sure healthcare doesn’t miss this boat.”
When I asked what we should be excited about on Doximity’s roadmap, Phull reframed the question. “Viewing artificial intelligence as a product unto itself permits folks to maybe wrap their mind around it in one way, potentially get caught up in hype around it, and think about it as this wholly separate element. The actual unlock of Doximity GPT is infusing its capabilities into the entire ecosystem we’ve built for clinicians over the last fifteen years.”
That ecosystem includes one of the most broadly used telemedicine platforms in the United States, a comprehensive news and content machine, and fundamentally, a network that connects clinicians across the healthcare system. AI woven through that infrastructure, rather than sitting as a standalone product, represents Doximity’s vision.
The Real Differentiator
As clinical AI tools converge on similar features—citations, peer review, specialty-specific training—what actually matters may not be the technology itself, but the process of development and the infrastructure for deployment. Doximity’s advantage isn’t just in what they’ve built, but in how they’re building it: listening to hundreds of thousands of clinician users, iterating based on real-world usage patterns, and integrating AI into workflows clinicians already trust.
The race isn’t really to build the smartest AI. It’s to build the AI that clinicians will actually use—and that patients will actually benefit from. That requires something more than algorithms. It requires understanding that medicine wasn’t solved last Tuesday, and won’t be solved next Tuesday either. It requires the humility to let users shape the product, and the infrastructure to deploy it where it matters.
“We’re just at the tip of the iceberg,” Phull said. Given how quickly the landscape is shifting—Doximity went from administrative assistant to clinical reference powerhouse in under a year—he’s probably right.
For clinicians interested in participating in PeerCheck or learning more about Doximity GPT’s clinical reference capabilities, visit the Doximity blog or reach out to Amit at [email protected].
By Christian Pean MD, MS