GPT-OSS is OpenAI’s first open-weight model since GPT-2, and it’s a big deal for developers who want powerful AI without a cloud dependency. It comes in two sizes, gpt-oss-120b and gpt-oss-20b, both delivering solid performance on coding, math, and tool use while keeping your data entirely on your own hardware. The 20B model is especially interesting because it runs in as little as 16GB of memory, making it a great fit for local development and experimentation. Check out the official OpenAI announcement to see how these models put serious AI power directly in developers’ hands.
Running GPT-OSS locally opens up new possibilities for experimentation, cost efficiency, and privacy. In this guide, you’ll learn how to use the open-weight GPT-OSS model with Ollama to build fast, private, and offline-capable AI features using C#.
Microsoft has made it easy to work with AI models using the Microsoft.Extensions.AI libraries. These libraries provide a unified set of abstractions, letting you write code that can work with different AI providers—like Ollama, Azure AI, or OpenAI—without changing your core logic.
First, create a new console application. Open your terminal and run:
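For example (the project name `GptOssChat` here is just a placeholder; pick whatever you like):

```shell
dotnet new console -n GptOssChat
cd GptOssChat
```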
To connect to Ollama using Microsoft.Extensions.AI, you’ll need two main packages. The Microsoft.Extensions.AI package provides the core abstractions, while the OllamaSharp package acts as the provider that implements these abstractions for Ollama.
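From the project directory, add both packages:

```shell
dotnet add package Microsoft.Extensions.AI
dotnet add package OllamaSharp
```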
Note: The Microsoft.Extensions.AI.Ollama package is deprecated. Use OllamaSharp as the recommended alternative for connecting to Ollama.
Open Program.cs and replace its contents with the following code. This example keeps a rolling chat history and streams responses in real time.
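A minimal sketch of such a program is shown below. It assumes Ollama is listening on its default endpoint, `http://localhost:11434`, and that you have pulled the `gpt-oss:20b` model; adjust either to match your setup:

```csharp
using Microsoft.Extensions.AI;
using OllamaSharp;

// OllamaApiClient implements IChatClient, so everything below depends
// only on the Microsoft.Extensions.AI abstractions, not on Ollama itself.
IChatClient client = new OllamaApiClient(new Uri("http://localhost:11434"), "gpt-oss:20b");

// Rolling chat history so the model remembers the conversation.
List<ChatMessage> history = [];

while (true)
{
    Console.Write("You: ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input))
        break;

    history.Add(new ChatMessage(ChatRole.User, input));

    Console.Write("Assistant: ");
    var assistantReply = "";
    await foreach (var update in client.GetStreamingResponseAsync(history))
    {
        Console.Write(update.Text);   // print tokens as they stream in
        assistantReply += update.Text;
    }
    Console.WriteLine();

    // Append the full reply so the next turn has complete context.
    history.Add(new ChatMessage(ChatRole.Assistant, assistantReply));
}
```

Because `IChatClient` is the only abstraction the loop touches, you could later swap the Ollama client for an Azure AI or OpenAI provider without changing the chat logic.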
Make sure your Ollama service is running. Then run your .NET console app:
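If you haven’t downloaded the model yet, pull it first, then start the app:

```shell
ollama pull gpt-oss:20b
dotnet run
```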
Your application will connect to the local Ollama server, and you can start chatting with your own private GPT-OSS model.
This is just the beginning. The Microsoft.Extensions.AI libraries also support function calling, allowing you to give your local LLM access to your C# methods, APIs, and data. This is where you can build truly powerful, “agentic” applications.
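As a taste of what that looks like, here is a hedged sketch using the function-invocation support in Microsoft.Extensions.AI. The `GetTime` method and its description are hypothetical examples, and the endpoint and model tag are the same assumptions as before; whether gpt-oss reliably triggers tools through Ollama will depend on your model and runtime versions:

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;
using OllamaSharp;

// A hypothetical tool the model is allowed to call.
[Description("Gets the current local time for a city.")]
static string GetTime(string city) => $"The time in {city} is {DateTime.Now:t}.";

// Wrap the Ollama client so tool calls are invoked automatically.
IChatClient client = new ChatClientBuilder(
        new OllamaApiClient(new Uri("http://localhost:11434"), "gpt-oss:20b"))
    .UseFunctionInvocation()
    .Build();

var options = new ChatOptions { Tools = [AIFunctionFactory.Create(GetTime)] };

var response = await client.GetResponseAsync("What time is it in Oslo?", options);
Console.WriteLine(response);
```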
The future of AI is decentralized, and as a C# developer, you have the tools to lead the charge. The power is on your machine—now go build something incredible!
In follow-up posts we’ll show how to run the same GPT-OSS model using Foundry Local instead of Ollama. Foundry Local offers Windows-native GPU acceleration and a slightly different runtime, and we’ll provide Foundry-specific configuration, tips for GPU setup, and an example C# wiring that mirrors this guide’s chat + streaming pattern.
Read the announcement for Foundry Local support on the Windows Developer Blog.
You learned how to: (1) set up a .NET console app, (2) add Microsoft.Extensions.AI plus OllamaSharp, (3) stream chat completions from a local GPT-OSS model, and (4) prepare for advanced scenarios like function calling. Try extending this sample with tool invocation or local RAG over your documents to unlock richer agent patterns—all while keeping data local.
The post GPT-OSS – A C# Guide with Ollama appeared first on .NET Blog.