Merged
315 changes: 93 additions & 222 deletions README.md


14 changes: 14 additions & 0 deletions samples/README.md
@@ -0,0 +1,14 @@
# Foundry Local Samples

Explore complete working examples that demonstrate how to use Foundry Local — an end-to-end local AI solution that runs entirely on-device. These samples cover chat completions, audio transcription, tool calling, LangChain integration, and more.

> **New to Foundry Local?** Check out the [main README](../README.md) for an overview and quickstart, or visit the [Foundry Local documentation](https://learn.microsoft.com/azure/foundry-local/) on Microsoft Learn.

## Samples by Language

| Language | Samples | Description |
|----------|---------|-------------|
| [**C#**](cs/) | 12 | .NET SDK samples including native chat, audio transcription, tool calling, model management, web server, and tutorials. Uses WinML on Windows for hardware acceleration. |
| [**JavaScript**](js/) | 12 | Node.js SDK samples including native chat, audio transcription, Electron desktop app, Copilot SDK integration, LangChain, tool calling, web server, and tutorials. |
| [**Python**](python/) | 9 | Python samples using the OpenAI-compatible API, including chat, audio transcription, LangChain integration, tool calling, web server, and tutorials. |
| [**Rust**](rust/) | 8 | Rust SDK samples including native chat, audio transcription, tool calling, web server, and tutorials. |
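
Whichever language you pick, every sample ultimately talks to the same OpenAI-compatible REST endpoint served by the local Foundry service. As a minimal sketch of that request shape (the endpoint URL and model id below are placeholders, not real values; in practice you discover them through the Foundry Local SDK or CLI on your machine):

```javascript
// Sketch: calling a Foundry Local OpenAI-compatible chat endpoint directly
// with Node's built-in fetch (Node 18+). Endpoint and model id are assumed
// placeholders; obtain the real ones from the SDK or CLI.

// Build the JSON body for a chat-completions request (pure, no I/O).
function buildChatRequest(model, userMessage) {
  return {
    model,
    messages: [{ role: "user", content: userMessage }],
    stream: false,
  };
}

// Send the request to a locally running service and return the reply text.
async function chat(endpoint, model, userMessage) {
  const res = await fetch(`${endpoint}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(model, userMessage)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Example (requires the service to be running; URL and model are placeholders):
// chat("http://localhost:5273/v1", "some-model-id", "Hello!").then(console.log);
```

The SDKs in each language wrap exactly this exchange, adding model download and lifecycle management on top.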
9 changes: 4 additions & 5 deletions samples/cs/Directory.Packages.props
@@ -1,13 +1,12 @@
 <Project>
   <PropertyGroup>
     <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
-    <OnnxRuntimeGenAIVersion>0.13.0-dev-20260319-1131106-439ca0d51</OnnxRuntimeGenAIVersion>
-    <OnnxRuntimeVersion>1.23.2</OnnxRuntimeVersion>
+    <CentralPackageFloatingVersionsEnabled>true</CentralPackageFloatingVersionsEnabled>
   </PropertyGroup>
   <ItemGroup>
-    <PackageVersion Include="Microsoft.AI.Foundry.Local" Version="0.9.0-dev" />
-    <PackageVersion Include="Microsoft.AI.Foundry.Local.WinML" Version="0.9.0-dev-20260324" />
-    <PackageVersion Include="Betalgo.Ranul.OpenAI" Version="9.1.1" />
+    <PackageVersion Include="Microsoft.AI.Foundry.Local" Version="*-*" />
+    <PackageVersion Include="Microsoft.AI.Foundry.Local.WinML" Version="*-*" />
+    <PackageVersion Include="Betalgo.Ranul.OpenAI" Version="9.2.0" />
     <PackageVersion Include="Microsoft.Extensions.Logging" Version="9.0.10" />
     <PackageVersion Include="Microsoft.Extensions.Logging.Console" Version="9.0.10" />
     <PackageVersion Include="NAudio" Version="2.2.1" />
6 changes: 1 addition & 5 deletions samples/cs/README.md
@@ -22,6 +22,7 @@ Both packages provide the same APIs, so the same source code works on all platforms.
| [tutorial-tool-calling](tutorial-tool-calling/) | Create a tool-calling assistant (tutorial). |
| [tutorial-voice-to-text](tutorial-voice-to-text/) | Transcribe and summarize audio (tutorial). |


## Running a sample

1. Clone the repository:
@@ -36,8 +37,3 @@ Both packages provide the same APIs, so the same source code works on all platforms.
dotnet run
```

-The unified project file automatically selects the correct SDK package for your platform.
-
-> [!TIP]
-> On Windows, we recommend using the WinML package (selected automatically) for optimal performance. Your users benefit from a wider range of hardware acceleration options and a smaller application package size.


106 changes: 0 additions & 106 deletions samples/cs/live-audio-transcription-example/Program.cs

This file was deleted.

5 changes: 0 additions & 5 deletions samples/cs/nuget.config
@@ -4,7 +4,6 @@
<clear />
<add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
<add key="ORT-Nightly" value="https://pkgs.dev.azure.com/aiinfra/PublicPackages/_packaging/ORT-Nightly/nuget/v3/index.json" />
-    <add key="local-sdk" value="../../local-packages" />
</packageSources>
<packageSourceMapping>
<packageSource key="nuget.org">
@@ -14,9 +13,5 @@
<package pattern="Microsoft.AI.Foundry.Local*" />
<package pattern="Microsoft.ML.OnnxRuntime*" />
</packageSource>
-    <packageSource key="local-sdk">
-      <package pattern="Microsoft.AI.Foundry.Local" />
-      <package pattern="Microsoft.AI.Foundry.Local.WinML" />
-    </packageSource>
</packageSourceMapping>
</configuration>
49 changes: 49 additions & 0 deletions samples/js/README.md
@@ -0,0 +1,49 @@
# 🚀 Foundry Local JavaScript Samples

These samples demonstrate how to use the Foundry Local JavaScript SDK (`foundry-local-sdk`) with Node.js.

## Prerequisites

- [Node.js](https://nodejs.org/) (v18 or later recommended)

## Samples

| Sample | Description |
|--------|-------------|
| [native-chat-completions](native-chat-completions/) | Initialize the SDK, download a model, and run non-streaming and streaming chat completions. |
| [audio-transcription-example](audio-transcription-example/) | Transcribe audio files using the Whisper model with streaming output. |
| [chat-and-audio-foundry-local](chat-and-audio-foundry-local/) | Unified sample demonstrating both chat and audio transcription in one application. |
| [electron-chat-application](electron-chat-application/) | Full-featured Electron desktop chat app with voice transcription and model management. |
| [copilot-sdk-foundry-local](copilot-sdk-foundry-local/) | GitHub Copilot SDK integration with Foundry Local for agentic AI workflows. |
| [langchain-integration-example](langchain-integration-example/) | LangChain.js integration for building text generation chains. |
| [tool-calling-foundry-local](tool-calling-foundry-local/) | Tool calling with custom function definitions and streaming responses. |
| [web-server-example](web-server-example/) | Start a local OpenAI-compatible web server and call it with the OpenAI SDK. |
| [tutorial-chat-assistant](tutorial-chat-assistant/) | Build an interactive multi-turn chat assistant (tutorial). |
| [tutorial-document-summarizer](tutorial-document-summarizer/) | Summarize documents with AI (tutorial). |
| [tutorial-tool-calling](tutorial-tool-calling/) | Create a tool-calling assistant (tutorial). |
| [tutorial-voice-to-text](tutorial-voice-to-text/) | Transcribe and summarize audio (tutorial). |

## Running a Sample

1. Clone the repository:

```bash
git clone https://github.com/microsoft/Foundry-Local.git
cd Foundry-Local/samples/js
```

1. Navigate to a sample and install dependencies:

```bash
cd native-chat-completions
npm install
```

1. Run the sample:

```bash
npm start
```

> [!TIP]
> Each sample's `package.json` includes `foundry-local-sdk` as a dependency and `foundry-local-sdk-winml` as an optional dependency. On **Windows**, the WinML variant installs automatically for broader hardware acceleration. On **macOS and Linux**, the standard SDK is used. Just run `npm install` — platform detection is handled for you.
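
Several of the samples above stream their responses. An OpenAI-compatible server delivers streamed completions as server-sent events, one `data:` line per chunk, terminated by a `data: [DONE]` sentinel. As an illustrative sketch of consuming that stream (field names follow the general OpenAI streaming convention, not a Foundry-specific schema):

```javascript
// Sketch: extracting token deltas from the SSE lines emitted by an
// OpenAI-compatible streaming chat endpoint. Field names follow the
// OpenAI streaming convention and are assumptions, not a Foundry schema.

// Parse one SSE line; returns the text delta, or null for
// non-data lines, empty deltas, and the end-of-stream sentinel.
function parseSseDelta(line) {
  if (!line.startsWith("data: ")) return null; // comments, keepalives, blanks
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null; // end-of-stream sentinel
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null;
}
```

In the streaming samples, appending each non-null delta to the previous ones reconstructs the full response as it arrives.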
38 changes: 0 additions & 38 deletions samples/js/audio-transcription-example/README.md

This file was deleted.

15 changes: 15 additions & 0 deletions samples/js/audio-transcription-example/package.json
@@ -0,0 +1,15 @@
{
"name": "audio-transcription-example",
"version": "1.0.0",
"type": "module",
"main": "app.js",
"scripts": {
"start": "node app.js"
},
"dependencies": {
"foundry-local-sdk": "latest"
},
"optionalDependencies": {
"foundry-local-sdk-winml": "latest"
}
}