
This week we welcome Jiyun Hyo, co-founder and CEO of Givance, for a conversation about moving legal AI past shiny summaries toward verified work product. Jiyun’s path runs from Duke robotics, where layered agents watched other agents, to clinical mental health bots, where confident errors carry human cost. Those lessons shape his view of legal tools today: foundation models often answer like students guessing on a pop quiz, sounding sure while drifting from fact.
A key idea is the “last ten percent gap.” Many systems reach outputs that look right on first pass yet slip on a few crucial details. In low-stakes tasks, small misses are a nuisance. In litigation, one missing email or one misplaced time stamp risks ruining trust and admissibility. Jiyun adds a second problem: when users ask for a tiny correction, models tend to rebuild the whole output, so precision edits become a loop of fixes and new breakage.
Givance aims at that gap through text-to-visual evidence work. The platform turns piles of documents into interactive charts with links back to source files. Examples include Gantt charts for personnel histories, Sankey diagrams for asset flows, overlap views for evidence exchanges, and timelines that surface contradictions across thousands of records. Jiyun shares early law-firm use: rapid fact digestion after a data dump, clearer client conversations around case theory, and courtroom visuals that help judges and juries follow a sequence without sketching their own shaky diagrams.
Safety, supervision, and security follow naturally. Drawing on robotics, Jiyun argues for a live supervisory layer during agentic workflows so alerts surface while negotiations or analyses unfold rather than days later. Too many alerts, though, create noise, so tuning confidence thresholds becomes part of product design. On security, Givance works in isolated environments, strips identifiers before model calls, and keeps architecture model-agnostic so newer systems slot in without reopening privacy debates.
The episode ends on market dynamics and the near future. Jiyun sees mega-funded text-first platforms as market openers, normalizing AI buying and leaving room for second-wave multimodal tools. Asked whether the search bar in document review will fade away, he expects search to stick around for a long while because lawyers associate a search box with control, even as chat interfaces improve. The bigger shift, in his view, lies in outputs: more interactive visuals that help legal teams spot gaps, test case stories, and present evidence with clarity.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube
[Special Thanks to Legal Technology Hub for sponsoring this episode.]
Email: [email protected]
Music: Jerry David DeCicca
Transcript:
By Greg Lambert & Marlene Gebauer
