3 changes: 3 additions & 0 deletions ci/vale/dictionary.txt
@@ -24,6 +24,7 @@ adodb
adoptium
aes
ag
agentic
agentless
Agones
ahci0
@@ -1918,6 +1919,7 @@ pg_restore
pgAdmin
PGvector
pgpass
pgvector
Phalcon
pharmer
pharo
@@ -2072,6 +2074,7 @@ qualys
quickconnect
quicklisp
quickstart
Qwen
radeon
rainloop
Ramaze
105 changes: 105 additions & 0 deletions docs/marketplace-docs/guides/gemma3/index.md
@@ -0,0 +1,105 @@
---
title: "Deploy Gemma3"
description: "Learn how to deploy Gemma3, the open-source generative AI model from Google DeepMind, on an Akamai Compute Instance."
published: 2026-02-11
modified: 2026-02-11
keywords: ['artificial intelligence','ai','LLM','machine learning','gemma','open webui']
tags: ["quick deploy apps", "linode platform", "cloud manager"]
external_resources:
- '[Gemma3 Official Documentation](https://ai.google.dev/gemma/docs/core)'
- '[Google Gemma 3 4B Model](https://huggingface.co/google/gemma-3-4b-it)'
- '[Google Gemma 3 12B Model](https://huggingface.co/google/gemma-3-12b-it)'
- '[Open WebUI Documentation](https://docs.openwebui.com/)'
aliases: ['/products/tools/marketplace/guides/gemma3-with-openwebui/','/guides/gemma3-with-openwebui-marketplace-app/']
authors: ["Akamai"]
contributors: ["Akamai"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
marketplace_app_id: 1997012
marketplace_app_name: "Gemma3"
---

[Google Gemma 3](https://ai.google.dev/gemma) is an open-source large language model from Google DeepMind, built with the same technology that powers Google's Gemini models. This Marketplace App deploys Gemma 3 with [Open WebUI](https://docs.openwebui.com/), a self-hosted web interface for interacting with LLMs. The deployment uses GPU-accelerated inference and lets you choose between two model sizes: 4B for efficient performance or 12B for enhanced capabilities.

## Deploying a Marketplace App

{{% content "deploy-marketplace-apps-shortguide" %}}

{{% content "marketplace-verify-standard-shortguide" %}}

{{< note title="Estimated deployment time" >}}
Gemma3 with Open WebUI should be fully installed within 5-10 minutes after your instance has finished provisioning.
{{< /note >}}

## Configuration Options

- **Supported distributions**: Ubuntu 24.04 LTS
- **Recommended plan**: All GPU plan types and sizes can be used. The 4B model requires at least 16GB of RAM, while the 12B model requires at least 32GB of RAM.

### Gemma3 Options

- **Admin Login Name** (required): The initial admin username for accessing the Open WebUI interface.
- **Admin Login Email** (required): The email address for the Open WebUI admin account.
- **Hugging Face API Token** (required): A Hugging Face API token is required to download the Gemma 3 model. See [Obtaining a Hugging Face Token](#obtaining-a-hugging-face-token) below for instructions.
- **Model Size** (required): Choose between `4B` (16GB+ RAM required) or `12B` (32GB+ RAM required).

### Obtaining a Hugging Face Token

Before deployment, you need a Hugging Face API token to access the Gemma 3 model. To obtain one:

1. Create a free account at [huggingface.co/join](https://huggingface.co/join).
1. Accept the Gemma license at [huggingface.co/google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it).
1. Generate a token at [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens). Read-only access is sufficient.
1. Provide this token during the Marketplace deployment process.
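Once you have a token, you can optionally confirm that it works before deploying. This sketch queries Hugging Face's `whoami-v2` API endpoint; the token value shown is a placeholder:

```command
# Placeholder token: substitute the value generated at huggingface.co/settings/tokens
HF_TOKEN="hf_xxxxxxxxxxxxxxxxxxxx"

# A valid token returns a JSON object describing your account;
# an invalid one returns an "Invalid credentials" error.
curl -s --max-time 10 -H "Authorization: Bearer ${HF_TOKEN}" \
    https://huggingface.co/api/whoami-v2
```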

{{% content "marketplace-required-limited-user-fields-shortguide" %}}

{{% content "marketplace-custom-domain-fields-shortguide" %}}

{{% content "marketplace-special-character-limitations-shortguide" %}}


## Getting Started after Deployment

### Access Your Credentials

Your Open WebUI admin credentials are stored in a `.credentials` file on your instance. To retrieve them:

1. Log in to your instance via SSH or Lish. See [Connecting to a Remote Server Over SSH](/docs/guides/connect-to-server-over-ssh/) for assistance, or use the [Lish Console](/docs/products/compute/compute-instances/guides/lish/).

1. Once logged in, retrieve your credentials from the `.credentials` file:

```command
sudo cat /home/$USER/.credentials
```

The `.credentials` file contains:
- **Sudo Username and Password**: Your limited sudo user credentials
- **Open WebUI Admin Name**: The admin username for the web interface
- **Open WebUI Admin Email**: The admin email address
- **Open WebUI Admin Password**: The admin password for logging in

### Access the Open WebUI Interface

Once your instance has finished deploying, you can access the Open WebUI interface through your browser:

1. Navigate to your instance's domain or rDNS address via HTTPS (e.g., `https://your-domain.com` or `https://192-0-2-1.ip.linodeusercontent.com`).

1. Log in using the admin email and password from the `.credentials` file.

1. You can now start chatting with the Gemma 3 model.
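Beyond the chat interface, Open WebUI also exposes an OpenAI-compatible REST API. The following is a sketch, assuming you have generated an API key in Open WebUI (under **Settings > Account**); the domain, key, and model ID below are placeholders that must match your own deployment:

```command
# Placeholders: substitute your domain, an Open WebUI API key, and the
# model ID shown in your Open WebUI model list.
OPENWEBUI_URL="https://192-0-2-1.ip.linodeusercontent.com"
API_KEY="sk-xxxxxxxx"
BODY='{"model": "google/gemma-3-4b-it", "messages": [{"role": "user", "content": "Hello!"}]}'

curl -s --max-time 10 "${OPENWEBUI_URL}/api/chat/completions" \
    -H "Authorization: Bearer ${API_KEY}" \
    -H "Content-Type: application/json" \
    -d "${BODY}"
```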

### Working with RAG Operations

Open WebUI provides built-in support for Retrieval-Augmented Generation (RAG), allowing you to upload documents and chat with their contents. By default, the deployment supports file uploads up to 100MB.

If you need to upload larger documents, you can modify the NGINX configuration:

1. Edit the NGINX virtual host configuration file located at `/etc/nginx/sites-enabled/your-domain`.
1. Update the `client_max_body_size` directive to a larger value (e.g., `client_max_body_size 500M;` for 500MB).
1. Reload NGINX:

```command
sudo systemctl reload nginx
```
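For reference, the `client_max_body_size` directive belongs inside the `server` block of the virtual host file. A sketch of the relevant portion, with a placeholder domain (the surrounding directives in your file will differ):

```nginx
server {
    server_name your-domain.com;

    # Raise the upload limit for RAG document ingestion (this deployment's default is 100M)
    client_max_body_size 500M;

    # ... remaining proxy and TLS directives unchanged ...
}
```

After any edit, running `sudo nginx -t` validates the configuration before you reload.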

{{% content "marketplace-update-note-shortguide" %}}
85 changes: 85 additions & 0 deletions docs/marketplace-docs/guides/gpt-oss-with-openwebui/index.md
@@ -0,0 +1,85 @@
---
title: "Deploy GPT-OSS with Open WebUI through the Linode Marketplace"
description: "This guide includes instructions on how to deploy Open WebUI with GPT-OSS self-hosted LLM on an Akamai Compute Instance."
published: 2026-02-12
modified: 2026-02-12
keywords: ['gpt-oss', 'open-webui', 'vllm', 'ai', 'llm', 'llm-inference', 'openai-gpt-oss']
tags: ["quick deploy apps", "cloud manager", "ai", "llm-inference", "llm"]
aliases: ['/products/tools/marketplace/guides/gpt-oss-with-openwebui/']
external_resources:
- '[Open WebUI Documentation](https://docs.openwebui.com/getting-started/)'
- '[GPT-OSS 20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)'
- '[GPT-OSS 120B on Hugging Face](https://huggingface.co/openai/gpt-oss-120b)'
authors: ["Akamai"]
contributors: ["Akamai"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
marketplace_app_id: 2011885
marketplace_app_name: "GPT-OSS with Open WebUI"
---

Open WebUI is an open-source, self-hosted web interface for interacting with and managing Large Language Models (LLMs). It supports multiple AI backends, multi-user access, and extensible integrations, enabling secure and customizable deployment for local or remote model inference.

The Marketplace application deployed in this guide uses OpenAI GPT-OSS, a family of open-weight large language models designed for powerful reasoning, agentic tasks, and versatile developer use cases. During deployment, you can choose between two model sizes: GPT-OSS 20B (default) or GPT-OSS 120B. These models are released under the permissive Apache 2.0 license and integrate well with self-hosted platforms like Open WebUI for general-purpose assistance, coding, and knowledge-based workflows.

## Deploying a Marketplace App

{{% content "deploy-marketplace-apps-shortguide" %}}

{{% content "marketplace-verify-standard-shortguide" %}}

{{< note title="Estimated deployment time" >}}
Open WebUI with GPT-OSS should be fully installed within 5-10 minutes after the Compute Instance has finished provisioning.
{{< /note >}}

## Configuration Options

- **Recommended plan for GPT-OSS 20B (default):** RTX4000 Ada x1 Small (16GB RAM minimum)
- **Recommended plan for GPT-OSS 120B:** RTX4000 Ada x1 Large or higher (64GB RAM minimum)

{{< note type="warning" >}}
This Marketplace App only works with Akamai GPU instances. If you choose a non-GPU plan, provisioning fails and a notice appears in the Lish console.
{{< /note >}}

### GPT-OSS Options

- **Linode API Token** *(required)*: Your API token is used to deploy additional Compute Instances as part of this cluster. At a minimum, this token must have Read/Write access to *Linodes*. If you do not yet have an API token, see [Get an API Access Token](/docs/products/platform/accounts/guides/manage-api-tokens/) to create one.

- **Email address (for the Let's Encrypt SSL certificate)** *(required)*: Your email is used for Let's Encrypt renewal notices. This allows you to visit Open WebUI securely through a browser.

- **Open WebUI admin name** *(required)*: This is the name associated with your Open WebUI administrator account and is required during initial setup.

- **Open WebUI admin email** *(required)*: This is the email address used to log in to Open WebUI.

- **GPT-OSS Model Size** *(required)*: Select the model size for deployment. Options are `20B` (default, requires 16GB+ RAM) or `120B` (requires 64GB+ RAM). Choose based on your GPU plan and performance requirements.

{{% content "marketplace-required-limited-user-fields-shortguide" %}}

{{% content "marketplace-special-character-limitations-shortguide" %}}

## Getting Started After Deployment

### Accessing Open WebUI Frontend

Once your app has finished deploying, you can log into Open WebUI using your browser.

1. Log into the instance as your limited sudo user, replacing `{{< placeholder "USER" >}}` with the sudo username you created, and `{{< placeholder "IP_ADDRESS" >}}` with the instance's IPv4 address:

```command
ssh {{< placeholder "USER" >}}@{{< placeholder "IP_ADDRESS" >}}
```
2. Upon logging in to the instance, a banner appears containing the **App URL**. Open your browser and paste in the URL to reach the Open WebUI login page.

!["Open WebUI Login Page"](openwebui-login.png "Open WebUI Login Page")

3. Return to your terminal, and open the `.credentials` file with the following command. Replace `{{< placeholder "USER" >}}` with your sudo username:

```command
sudo cat /home/{{< placeholder "USER" >}}/.credentials
```
4. In the `.credentials` file, locate the Open WebUI login email and password. Return to the Open WebUI login page and enter those credentials. When you successfully log in, you should see the following page:

!["Open WebUI Welcome 1"](openwebui-w1.png "Open WebUI Welcome 1")

Once you click the **Okay, Let's Go!** button, you can start using the chat feature in Open WebUI.

!["Open WebUI Welcome 2"](openwebui-w2.png "Open WebUI Welcome 2")
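If you want to confirm that the model backend itself is up, you can query it directly from the instance over SSH. This assumes the deployment's vLLM server listens on its default port, 8000, on localhost; adjust the port if your deployment differs:

```command
# Assumption: vLLM serves an OpenAI-compatible API on localhost:8000.
VLLM_URL="http://localhost:8000"

# Lists the loaded model(s); a healthy backend returns a JSON "data" array.
curl -s --max-time 10 "${VLLM_URL}/v1/models" \
    || echo "Backend not reachable at ${VLLM_URL}"
```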
81 changes: 81 additions & 0 deletions docs/marketplace-docs/guides/qwen/index.md
@@ -0,0 +1,81 @@
---
title: "Deploy Qwen Instruct with Open WebUI"
description: "This guide includes instructions on how to deploy Open WebUI with a self-hosted Qwen Instruct Large Language Model (LLM) on an Akamai Compute Instance."
published: 2026-02-18
modified: 2026-02-18
keywords: ['qwen', 'qwen-instruct', 'open-webui', 'vllm', 'ai', 'llm', 'llm-inference', 'qwen-llm']
tags: ["quick deploy apps", "linode platform", "cloud manager", "ai", "llm-inference", "llm"]
aliases: ['/products/tools/marketplace/guides/qwen-instruct-with-openwebui/']
external_resources:
- '[Open WebUI Documentation](https://docs.openwebui.com/getting-started/)'
- '[Qwen Documentation](https://github.com/QwenLM)'
authors: ["Akamai"]
contributors: ["Akamai"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
marketplace_app_id: 1980062
marketplace_app_name: "Qwen Instruct with Open WebUI"
---

Open WebUI is an open-source, self-hosted web interface for interacting with and managing Large Language Models (LLMs). It supports multiple AI backends, multi-user access, and extensible integrations, enabling secure and customizable deployment for local or remote model inference.

The Marketplace application deployed in this guide uses a Qwen Instruct model as an instruction-tuned, open-weight LLM optimized for reasoning, code generation, and conversational tasks. Qwen models are designed for high-quality inference across a wide range of general-purpose and technical workloads and integrate seamlessly with self-hosted platforms like Open WebUI.

## Deploying a Quick Deploy App

{{% content "deploy-marketplace-apps-shortguide" %}}

{{% content "marketplace-verify-standard-shortguide" %}}

{{< note title="Estimated deployment time" >}}
Open WebUI with Qwen Instruct should be fully installed within 5-10 minutes after the Compute Instance has finished provisioning.
{{< /note >}}

## Configuration Options

- **Recommended plan:** RTX4000 Ada x1 Small or larger GPU instance

{{< note type="warning" >}}
This Marketplace App only works with Akamai GPU instances. If you choose a non-GPU plan, provisioning fails and a notice appears in the Lish console.
{{< /note >}}

### Open WebUI Options

- **Linode API Token** *(required)*: Your API token is used to deploy additional Compute Instances as part of this deployment. At a minimum, this token must have Read/Write access to *Linodes*. If you do not yet have an API token, see [Get an API Access Token](/docs/products/platform/accounts/guides/manage-api-tokens/) to create one.

- **Email address (for the Let's Encrypt SSL certificate)** *(required)*: Your email is used for Let's Encrypt renewal notices. This allows you to securely access Open WebUI through a browser.

- **Open WebUI admin name** *(required)*: This is the name associated with your Open WebUI administrator account and is required during initial setup.

- **Open WebUI admin email** *(required)*: This email address is used to log into Open WebUI.

{{% content "marketplace-required-limited-user-fields-shortguide" %}}

{{% content "marketplace-special-character-limitations-shortguide" %}}

## Getting Started After Deployment

### Obtain the Credentials

When deployment completes, the system automatically generates the credentials needed to administer your Open WebUI instance. These are stored in the limited user’s `.credentials` file.

1. Log in to your Compute Instance using one of the methods below:

- **Lish Console**: Log in to Cloud Manager, click **Linodes**, select your instance, and click **Launch LISH Console**. Log in as `root`. To learn more, see [Using the Lish Console](/docs/products/compute/compute-instances/guides/lish/).
- **SSH**: Log in to your instance over SSH using the `root` user. To learn how, see [Connecting to a Remote Server Over SSH](/docs/guides/connect-to-server-over-ssh/).

2. Run the following command to view the contents of the `.credentials` file, replacing `$USERNAME` with the limited sudo username created during deployment:

```command
cat /home/$USERNAME/.credentials
```

### Accessing Open WebUI Frontend

Once your app has finished deploying, you can log into Open WebUI using your browser.

1. Open your web browser and navigate to `https://DOMAIN/`, where *DOMAIN* can be replaced with the custom domain you entered during deployment or your Compute Instance's rDNS domain (such as `192-0-2-1.ip.linodeusercontent.com`). See the [Managing IP Addresses](/docs/products/compute/compute-instances/guides/manage-ip-addresses/) guide for information on viewing rDNS.
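Optionally, before opening the browser, you can confirm that the site responds over HTTPS from your local terminal. The domain below is a placeholder; substitute your own custom domain or rDNS address:

```command
# Placeholder rDNS domain: substitute your own domain or rDNS address.
DOMAIN="192-0-2-1.ip.linodeusercontent.com"

# A working deployment returns an HTTP status line such as "HTTP/2 200".
curl -sI --max-time 10 "https://${DOMAIN}/" | head -n 1
```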


Now that you’ve accessed your dashboard, check out the [official Open WebUI documentation](https://docs.openwebui.com/) to learn how to further use your instance.

{{% content "marketplace-update-note-shortguide" %}}