# Microsoft-Azure

## Docs

- [Hugging Face on Microsoft Azure](https://huggingface.co/docs/microsoft-azure/index.md)
- [Frequently Asked Questions (FAQ)](https://huggingface.co/docs/microsoft-azure/faq.md)
- [Features & Benefits](https://huggingface.co/docs/microsoft-azure/features.md)
- [Security & Compliance](https://huggingface.co/docs/microsoft-azure/security.md)
- [Resources](https://huggingface.co/docs/microsoft-azure/resources.md)
- [Hugging Face on Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/introduction.md)
- [Supported Hardware](https://huggingface.co/docs/microsoft-azure/azure-ai/hardware.md)
- [Supported Tasks](https://huggingface.co/docs/microsoft-azure/azure-ai/tasks.md)
- [Supported Models](https://huggingface.co/docs/microsoft-azure/azure-ai/models.md)
- [Set up Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/set-up.md)
- [Deploy Vision Language Models (VLMs) on Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-vision-language-models.md)
- [Deploy Large Language Models (LLMs) on Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-large-language-models.md)
- [Deploy SmolLM3 on Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-smollm3.md)
- [Build Agents with smolagents on Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/examples/build-agents-with-smolagents.md)
- [Deploy NVIDIA Parakeet for Automatic Speech Recognition (ASR) on Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-nvidia-parakeet-asr.md)
- [Guides](https://huggingface.co/docs/microsoft-azure/guides/introduction.md)
- [Request a model addition in the Hugging Face collection on Azure](https://huggingface.co/docs/microsoft-azure/guides/request-model-addition.md)
- [One-click deployments from the Hugging Face Hub on Azure AI](https://huggingface.co/docs/microsoft-azure/guides/one-click-deployment-azure-ai.md)

### Hugging Face on Microsoft Azure
https://huggingface.co/docs/microsoft-azure/index.md

# Hugging Face on Microsoft Azure

![Hugging Face on Microsoft Azure](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/thumbnail.png)

Hugging Face collaborates with Microsoft Azure across open science, open source, and cloud, to enable companies to build their own AI with the latest open models from Hugging Face and the latest infrastructure features from Microsoft Azure.

Hugging Face enables new experiences for Microsoft Azure customers, allowing them to deploy models on their Microsoft Azure infrastructure directly from the Hugging Face Hub via one-click deployments in a secure and scalable way, as well as from the Azure Machine Learning Model Catalog, or even programmatically via the Microsoft Azure CLI or the Python SDK.

This collaboration aims to offer developers access to an ever-growing catalog of open-source models from the Hugging Face Hub, using Hugging Face open-source libraries across a broad spectrum of Microsoft Azure services and hardware platforms.

By combining Hugging Face's open-source models, libraries and solutions with Microsoft Azure's scalable and secure cloud services, developers can more easily and affordably incorporate advanced AI capabilities into their applications.


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/index.mdx" />

### Frequently Asked Questions (FAQ)
https://huggingface.co/docs/microsoft-azure/faq.md

# Frequently Asked Questions (FAQ)

## What is Azure Machine Learning (Azure ML)?

Azure ML is Microsoft’s cloud-native platform for fully managing the ML lifecycle—training, deployment, monitoring, pipelines, AutoML, model registries, and responsible AI tooling—designed for data scientists and ML engineers.

## What is Azure AI Foundry (formerly Azure AI Studio)?

Azure AI Foundry builds on Azure ML but is tailored specifically for generative AI and agent-based applications. It offers:
* A unified experience for building, evaluating, and deploying LLMs and multimodal agents.
* Access to a broad catalog of open-source and commercial frontier models—from Azure OpenAI, Hugging Face, Meta, DeepSeek, etc. 
* Integrated tools like model evaluation leaderboards, prompt flows (for RAG), content safety, and agent orchestration.

## What’s the difference between a **Hub-based project** and a **Foundry (standalone) project**?

| Feature | Hub-based project | Standalone Foundry project |
|--------|--------------------|-----------------------------|
| Requires a Hub resource | ✅ Yes—project is linked to a hub | ❌ No—project created individually |
| Shared infrastructure (compute/quota) | ✅ Yes | ❌ No |
| Shared security/network settings | ✅ Yes | ❌ No |
| Shared resource connections | ✅ Yes (e.g., models, storage) | ❌ Per‑project only |
| Full Generative AI tooling (fine-tuning, evaluation, RAG, agent orchestration) | ✅ Yes | ⚠️ Limited support |
| Accessible from Azure ML Studio | ✅ Yes | Limited/absent |

Hub-based projects provide **complete** access to generative-AI features; standalone projects operate with **limited** capabilities. Open-model deployments are only accessible through Hub-based projects for now.


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/faq.mdx" />

### Features & Benefits
https://huggingface.co/docs/microsoft-azure/features.md

# Features & Benefits

1. Extensive Model Catalog Integration

    Over 10,000 Hugging Face models, including text, vision, speech, and multimodal models, are directly accessible within Azure AI Foundry Hub and Azure Machine Learning Studio for one-click deployment.

    Continuous updates ensure day-0 releases of new and trending models from the Hugging Face Hub are available on Azure as soon as they launch.

2. Secure, Scalable, and Managed Deployments

    Models can be deployed on managed online endpoints within Azure Machine Learning, providing secure, scalable REST APIs for real-time inference.

    Azure's infrastructure supports both CPU and GPU deployments, with features like autoscaling, traffic splitting, and monitoring built in.

    Models are scanned for vulnerabilities, and certain model weights are hosted directly on Azure for enhanced security and compliance, including private network deployments with no external egress.

3. Multimodal and Domain-Specific Support

    The collaboration covers a wide range of modalities and tasks: text generation, translation, image classification, segmentation, speech recognition, audio classification, and more.

    Ongoing expansion includes support for video, 3D, time series, protein folding, and other specialized domains.

4. Enterprise-Grade Infrastructure and Developer Tools

    Integration leverages Azure's enterprise-grade infrastructure, including the latest GPU and CPU offerings.

    Hugging Face models are optimized for Azure's hardware, ensuring high performance and efficiency, especially for demanding generative AI applications.

    Integration with Azure Machine Learning SDK, Azure AI SDK, and Python APIs for seamless automation and scripting.

5. Community and Open-Source Ecosystem

    The partnership brings the innovation of Hugging Face's open-source community (nearly 2 million models and 8 million users) to Azure's enterprise customers.

    The Hugging Face models are powered by open-source inference engines backed by Transformers, Diffusers, or Sentence Transformers; as well as efficient production-ready solutions such as Text Generation Inference (TGI), vLLM, SGLang and Text Embeddings Inference (TEI), among others to come.

6. Enhanced Security, Compliance, and Monitoring

    All models available via Azure are subject to security scans and compliance checks: model weights must be distributed in the Safetensors format, and they are scanned with JFrog and Protect AI (Hugging Face security partners), ClamAV, and Hugging Face's Picklescan.

    Azure's enterprise security features (private endpoints, network isolation, audit trails) are available for Hugging Face model deployments.

## Benefits for Enterprises and Developers

1. Accelerated AI Adoption and Innovation

    Rapid access to the latest open-source models and state-of-the-art AI capabilities without the overhead of infrastructure setup or maintenance.

    Enables organizations to build, experiment, and iterate on AI solutions faster, keeping pace with the evolving AI landscape.

2. Lower Barriers to Production-Ready AI

    Simplifies the deployment of complex models (like Transformers and LLMs) into secure, production environments with minimal configuration.

    Reduces the need for specialized DevOps or ML infrastructure expertise.

3. Flexibility and Control

    Enterprises retain full control over data, model selection, and deployment environments, supporting both public and private cloud scenarios.

4. Cost and Resource Optimization

    Azure's flexible scaling, global availability, and pay-as-you-go pricing help optimize costs for both experimentation and large-scale production.

    Efficient resource utilization through auto-scaling and traffic management features.

5. Security and Compliance

    Enterprise-grade security, compliance, and privacy controls are built into every stage of the model lifecycle.

    Models are vetted for vulnerabilities and can be deployed in isolated environments to meet regulatory requirements.

6. Future-Proofing and Ecosystem Growth

    Ongoing collaboration ensures regular updates, support for new modalities, and integration with emerging Azure and Hugging Face features.

    Access to both open and proprietary models, as well as tools for building modular, agentic, and composable AI applications.

---

This deep integration between Hugging Face and Microsoft Azure empowers organizations to harness the best of open-source AI with the reliability, security, and scalability of Azure's cloud ecosystem.


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/features.mdx" />

### Security & Compliance
https://huggingface.co/docs/microsoft-azure/security.md

# Security & Compliance

In addition to the enterprise-grade features available in Microsoft Azure services, the following security measures and requirements are enforced to safeguard the deployment and use of open models on Azure:

## Model Eligibility Requirements

Only models that meet strict security criteria are included in the Hugging Face collection on Azure:

* **Public availability:** Models must be public on the [Hugging Face Hub](https://huggingface.co/models); gated or private models are currently not eligible.
* **No `trust_remote_code`:** Models that require `trust_remote_code=True` are disallowed unless they are explicitly verified by Hugging Face or come from a trusted/verified organization.
* **Secure format:** Model weights must be uploaded in the [Safetensors](https://github.com/huggingface/safetensors) format to eliminate the risks associated with pickle-based formats.
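
As a quick way to verify the Safetensors criterion for any model you plan to request or deploy, you can inspect the repository files with the `huggingface_hub` client; a minimal sketch, assuming `huggingface_hub` is installed and using an arbitrary example model ID:

```python
# A minimal sketch: check whether a Hub repository distributes its weights in
# the Safetensors format (the model ID below is just an illustrative example).
from huggingface_hub import list_repo_files

model_id = "HuggingFaceTB/SmolLM3-3B"
files = list_repo_files(model_id)

has_safetensors = any(f.endswith(".safetensors") for f in files)
print(f"{model_id} ships Safetensors weights: {has_safetensors}")
```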

## Mandatory Security Scanning

All models made available via the Hugging Face collection on Azure undergo a robust set of security scans, including [ClamAV malware scanning](https://huggingface.co/docs/hub/en/security-malware) as well as third-party scanners such as the [Protect AI](https://huggingface.co/docs/hub/en/security-protectai) and [JFrog](https://huggingface.co/docs/hub/en/security-jfrog) solutions.

These checks help identify embedded malware or harmful binaries, unsafe deserialization, unintended external connections, and security-sensitive content in model artifacts before they are imported into customers' tenancy.

For more details on Hugging Face Hub's security practices and tooling, refer to this [documentation](https://huggingface.co/docs/hub/en/security).


## Network Isolation and Compliance

For enhanced protection and compliance, model hosting and serving can be configured to run in isolated compute environments on Azure AI services, aligned with regulatory or internal policy requirements. Azure AI Foundry and Azure ML come with enterprise-grade audit, logging, and access control frameworks that ensure full traceability and governance.

<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/security.mdx" />

### Resources
https://huggingface.co/docs/microsoft-azure/resources.md

# Resources

- [Hugging Face on Azure](https://azure.microsoft.com/en-us/solutions/hugging-face-on-azure)

## Posts

### 2025

- [Microsoft and Hugging Face expand collaboration](https://huggingface.co/blog/azure-ai-foundry)
- [Microsoft and Hugging Face expand collaboration to accelerate Open-Source AI Innovation on Azure AI Foundry](https://devblogs.microsoft.com/foundry/microsoft-and-hugging-face-expand-partnership-to-accelerate-open-source-ai-innovation-on-azure-ai-foundry/)

### 2024

- [From cloud to developers: Hugging Face and Microsoft Deepen Collaboration](https://huggingface.co/blog/microsoft-collaboration)
- [Microsoft and Hugging Face deepen generative AI partnership](https://techcommunity.microsoft.com/blog/aiplatformblog/microsoft-and-hugging-face-deepen-generative-ai-partnership/4144565)

### 2023

- [Hugging Face Collaborates with Microsoft to launch Hugging Face Model Catalog on Azure](https://huggingface.co/blog/hugging-face-endpoints-on-azure)
- [Accelerating over 130,000 Hugging Face models with ONNX Runtime](https://opensource.microsoft.com/blog/2023/10/04/accelerating-over-130000-hugging-face-models-with-onnx-runtime/)


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/resources.mdx" />

### Hugging Face on Azure AI
https://huggingface.co/docs/microsoft-azure/azure-ai/introduction.md

# Hugging Face on Azure AI

Hugging Face has partnered with Microsoft to bring open-source models from the [Hugging Face Hub](https://huggingface.co) into [Azure Machine Learning](https://ml.azure.com/). The Hugging Face Hub is the home of over 1,700,000 public open-source models, as well as datasets, Spaces, and much more. The integration with Azure Machine Learning enables you to deploy open-source models of your choice to secure and scalable inference infrastructure on Azure, powered by Hugging Face and other open-source inference solutions such as Text Generation Inference (TGI), vLLM, and SGLang for LLMs and VLMs, or Text Embeddings Inference (TEI) for embeddings, among others to come. The Azure Machine Learning Catalog is now the home of over 10,000 of the most popular and downloaded open-source models on the Hugging Face Hub, from the most popular and influential users and organizations, with secure and verified weights, which can be deployed to managed online endpoints with ease. Once deployed, the managed online endpoint gives you a secure REST API to score your model in real time.

The Azure Machine Learning Model Catalog contains over 10,000 deployable models under the Hugging Face collection, covering a wide variety of tasks such as image generation, Large Language Models (LLMs), Vision Language Models (VLMs), or embeddings, among many others; all of them powered by open-source inference solutions. Additionally, each of those models can be deployed on a wide variety of hardware available on Microsoft Azure, ranging from NVIDIA GPUs to CPUs, and each model comes with a default suggested instance type.

At Microsoft Build 2025, an expansion of the partnership between Hugging Face and Microsoft Azure was announced. Among the main takeaways, the expanded collaboration will not only cover [Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning), but also [Azure AI Hub](https://azure.microsoft.com/en-us/products/ai-foundry) allowing Microsoft Azure users to design, customize, and manage AI apps and agents at scale with open-source models from Hugging Face.

![Satya Nadella announcing the Hugging Face expanded collaboration on Microsoft Build 2025](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/microsoft-build-2025.png)

## Resources

- [Azure Machine Learning - Deploy models from Hugging Face Hub to Azure Machine Learning online endpoints for real-time inference](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-models-from-huggingface)
- [Azure Machine Learning - How to use Open Source foundation models curated by Azure Machine Learning](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models)
- [Microsoft and Hugging Face expand collaboration](https://huggingface.co/blog/azure-ai-foundry)
- [Microsoft and Hugging Face expand collaboration to accelerate Open-Source AI Innovation on Azure AI Foundry](https://devblogs.microsoft.com/foundry/microsoft-and-hugging-face-expand-partnership-to-accelerate-open-source-ai-innovation-on-azure-ai-foundry/)


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/introduction.mdx" />

### Supported Hardware
https://huggingface.co/docs/microsoft-azure/azure-ai/hardware.md

# Supported Hardware

## NVIDIA GPUs

Instance Name         	  | GPU Type	     | GPUs	| Total GPU VRAM
--------------------------|------------------|------|----------------
Standard_NC4as_T4_v3 	  | NVIDIA TESLA T4  | 1    | 16 GB
Standard_NC8as_T4_v3      | NVIDIA TESLA T4  | 1    | 16 GB
Standard_NC16as_T4_v3     | NVIDIA TESLA T4  | 1    | 16 GB
Standard_NC64as_T4_v3     | NVIDIA TESLA T4  | 4    | 64 GB
Standard_NC24ads_A100_v4  | NVIDIA A100 80GB | 1    | 80 GB
Standard_NC40ads_H100_v5  | NVIDIA H100 80GB | 1    | 80 GB
Standard_NC48ads_A100_v4  | NVIDIA A100 80GB | 2    | 160 GB
Standard_NC80adis_H100_v5 | NVIDIA H100 80GB | 2    | 160 GB
Standard_NC96ads_A100_v4  | NVIDIA A100 80GB | 4    | 320 GB
Standard_ND96asr_v4       | NVIDIA A100 40GB | 8    | 320 GB
Standard_ND96amsr_A100_v4 | NVIDIA A100 80GB | 8    | 640 GB
Standard_ND96isr_H100_v5  | NVIDIA H100 80GB | 8    | 640 GB

More information about those GPU Types / Families can be found in the [Microsoft Azure Documentation - Sizes for virtual machines in Azure - GPU accelerated](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/overview?tabs=breakdownseries%2Cgeneralsizelist%2Ccomputesizelist%2Cmemorysizelist%2Cstoragesizelist%2Cgpusizelist%2Cfpgasizelist%2Chpcsizelist#gpu-accelerated).
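
If you want to check which of these GPU-accelerated sizes can be provisioned in your region before deploying, one quick option is the Azure CLI; a minimal sketch, assuming a Unix shell and using `eastus` as an example location:

```bash
# List the NC-series GPU sizes offered in a given region (location is an example)
az vm list-sizes --location eastus --output table | grep Standard_NC
```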

## Intel CPUs

Instance         | CPU Model (Family)             | vCPUs | RAM (GiB)
-----------------|--------------------------------|-------|-----------
Standard_DS2_v2  | Intel Xeon E5-2673 v4 / 8272CL | 2     | 7
Standard_F2s_v2  | Intel Xeon Platinum 8272CL     | 2     | 4
Standard_E2s_v3  | Intel Xeon Platinum 8272CL     | 2     | 16
Standard_F4s_v2  | Intel Xeon Platinum 8272CL     | 4	  | 8
Standard_DS3_v2  | Intel Xeon E5-2673 v4 / 8272CL | 4	  | 14
Standard_F8s_v2  | Intel Xeon Platinum 8272CL     | 8	  | 16
Standard_DS4_v2  | Intel Xeon Platinum 8272CL     | 8	  | 28
Standard_E4s_v3  | Intel Xeon 8171M               | 4	  | 32
Standard_F16s_v2 | Intel Xeon Platinum 8272CL     | 16	  | 32
Standard_DS5_v2  | Intel Xeon Platinum 8272CL     | 16	  | 56
Standard_E16s_v3 | Intel Xeon Platinum 8272CL     | 16	  | 128
Standard_F32s_v2 | Intel Xeon Platinum 8272CL     | 32	  | 64
Standard_F48s_v2 | Intel Xeon Platinum 8272CL     | 48	  | 96
Standard_F64s_v2 | Intel Xeon Platinum 8272CL     | 64	  | 128
Standard_F72s_v2 | Intel Xeon Platinum 8272CL     | 72    | 144

More information about those CPU Models / Families can be found in the [Microsoft Azure Documentation - Sizes for virtual machines in Azure](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/overview) under the bookmarks [General Purpose](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/overview?tabs=breakdownseries%2Cgeneralsizelist%2Ccomputesizelist%2Cmemorysizelist%2Cstoragesizelist%2Cgpusizelist%2Cfpgasizelist%2Chpcsizelist#general-purpose) and [Compute Optimized](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/overview?tabs=breakdownseries%2Cgeneralsizelist%2Ccomputesizelist%2Cmemorysizelist%2Cstoragesizelist%2Cgpusizelist%2Cfpgasizelist%2Chpcsizelist#compute-optimized).


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/hardware.mdx" />

### Supported Tasks
https://huggingface.co/docs/microsoft-azure/azure-ai/tasks.md

# Supported Tasks

The following Hugging Face tasks are natively supported on Azure Machine Learning and, therefore, on Azure AI Foundry Hub:

- `embeddings` (also known as `feature-extraction`)
- `sentence-similarity`
- `text-ranking` (formerly known as `sentence-ranking`)
- `automatic-speech-recognition`
- `text-to-speech`
- `speech-to-text`
- `translation`
- `text-translation`
- `question-answering`
- `text-classification`
- `fill-mask`
- `token-classification`
- `summarization` (also known as `text-summarization`)
- `text-generation` (also known as either `completions`, `chat-completion`, `text2text-generation` or `conversational`)
- `image-text-to-text` (also known as `chat-completion` with vision capabilities)
- `image-classification`
- `image-segmentation`
- `object-detection`
- `text-to-image`
- `zero-shot-image-classification`
- `table-question-answering`
- `zero-shot-classification`
- `visual-question-answering`
- `image-to-text`

With upcoming support for some of the following tasks:

- `image-to-image`
- `text-to-image` with LoRA
- `image-feature-extraction`
- `image-to-text` (also known as `image-captioning`)
- `text-to-speech`
- `image-to-3d`
- `audio-to-audio` (also known as `speech-to-speech`)
- `text-to-video`

More information about all the supported tasks at [Hugging Face - Tasks](https://huggingface.co/tasks).
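
To browse the Hub for models under any of these tasks, e.g. when deciding what to deploy on Azure, you can use the `huggingface_hub` client; a minimal sketch, assuming `huggingface_hub` is installed and using an example task and sort order:

```python
# A minimal sketch: list the five most downloaded Hub models for one of the
# supported tasks (the task and sorting below are just examples).
from huggingface_hub import list_models

for model in list_models(task="automatic-speech-recognition", sort="downloads", limit=5):
    print(model.id)
```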


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/tasks.mdx" />

### Supported Models
https://huggingface.co/docs/microsoft-azure/azure-ai/models.md

# Supported Models

Around 11,000 open models from the Hugging Face Hub are made available on Azure AI / ML, a subset of the more than 1,800,000 public open models on the Hub: the Hugging Face collection on Azure is a curated selection of the most downloaded and relevant models on the Hub that are compatible with Transformers, Sentence Transformers, and Diffusers, as well as with other Hugging Face libraries and solutions.

<Tip>

Even if you don't have a Microsoft Azure account, you can still explore the [public Hugging Face Collection on Azure AI](https://ai.azure.com/catalog/publishers/hugging%20face,huggingface).

</Tip>

That said, the supported models span different architectures and backends. To identify whether a model from the Hugging Face Hub is available within the model catalog on Azure AI / ML, you can either:

1. Navigate to the model card of the given model under https://huggingface.co/models, and make sure that the "Deploy" button is available and that the "Deploy on Azure AI" option is listed there. If the model is available on Azure AI, the "Go to model on Azure AI" button will provide the URL pointing to the model on Azure AI; otherwise, the "Request to add" button will be shown to request the model addition (more information on the latter in [Request a model addition in the Hugging Face collection on Azure](../guides/request-model-addition)).

2. Alternatively, you can navigate to either the Azure ML or the Azure AI Foundry (the latter only for Hub-based projects) model catalogs under the Hugging Face collection, and search for the given model. If the model appears, it is supported and you can grab the URI pointing to it to deploy it programmatically; otherwise, you can either [open an issue](https://github.com/huggingface/Microsoft-Azure/issues/new) requesting the model addition, or request it via the Hugging Face Hub model card with the "Request to add" button as mentioned above.

3. Finally, you can also check whether the given model is available on Azure AI programmatically with the following Python snippet, which sends a request to an Azure API that, given a model ID from the Hugging Face Hub, returns either HTTP 200 with the model URL if it's available, or HTTP 404 if it's not.

```python
import requests

model_id = "HuggingFaceTB/SmolLM3-3B"
response = requests.get(
    "https://get-azure-ai-url.azurewebsites.net/api/get-azure-ai-url",
    params={"model_id": model_id},
)
if response.status_code == 200:
    print(response.json())
    # {"url": "https://ai.azure.com/explore/models/HuggingFaceTB-SmolLM3-3B/version/3/registry/HuggingFace"}
else:
    # HTTP 404 means the model is not available on Azure AI
    print(f"{model_id} is not available on Azure AI")
```

We are really excited about this partnership between Hugging Face and Microsoft Azure, and we are working hard to bring Azure customers the best open models from the Hugging Face collection into Azure AI / ML, so stay tuned for updates and many more models to come in the following months!


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/models.mdx" />

### Set up Azure AI
https://huggingface.co/docs/microsoft-azure/azure-ai/set-up.md

# Set up Azure AI

This page explains how to set up Azure AI in your Microsoft Azure subscription, which is required to run the Azure AI examples in this documentation, but also to run any example on Azure ML, since these are the basic pre-requisites.

You can either follow the steps below, or read more about them in the [Azure Machine Learning Tutorial: Create resources you need to get started](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources?view=azureml-api-2).

Also note that the steps below use the `az` CLI, i.e., the Azure CLI, but there are alternatives such as the Azure SDK for Python or the Azure Portal, so pick the one you feel most comfortable with.

## Azure Account

A Microsoft Azure account with an active subscription. If you don't have a Microsoft Azure account, you can [create one for free](https://azure.microsoft.com/en-us/pricing/purchase-options/azure-account), which includes 200 USD worth of credits to use within 30 days of account creation.

## Azure CLI

The Azure CLI (`az`) installed on the instance that you're running this example on; see [the installation steps](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and follow the preferred method for your instance. Then log in to your subscription as follows:

```bash
az login
```

More information at [Sign in with Azure CLI - Login and Authentication](https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest).
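
After logging in, you can optionally verify that the expected subscription is the active one; a quick check with the Azure CLI:

```bash
# Show the account and subscription that the Azure CLI is currently using
az account show --output table
```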

## Azure CLI extension for Azure ML

Besides the Azure CLI (`az`), you also need to install the Azure ML CLI extension (`az ml`) which will be used to create the Azure ML and Azure AI Foundry required resources.

First, you will need to list the current extensions and remove any `ml`-related extension before installing the latest one, i.e., v2.

```bash
az extension list
az extension remove --name azure-cli-ml
az extension remove --name ml
```

Then you can install the `az ml` v2 extension as follows:

```bash
az extension add --name ml
```

More information at [Azure Machine Learning (ML) - Install and setup the CLI (v2)](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-cli?view=azureml-api-2&tabs=public).

## Azure Resource Group

An Azure Resource Group, under which you will create the Azure AI Foundry Hub-based project and the rest of the required resources (note that creating an Azure AI Foundry Hub also creates an Azure ML Workspace, but not the other way around, meaning that the Azure AI Foundry Hub will be listed as an Azure ML workspace while leveraging the Azure AI Foundry capabilities for generative AI). If you don't have one, you can create it as follows:

```bash
az group create --name huggingface-azure-rg --location eastus
```

Then, you can ensure that the resource group was created successfully by e.g. listing all the resource groups that you have access to in your subscription:

```bash
az group list --output table
```

More information at [Manage Azure resource groups by using Azure CLI](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-cli).

<Tip>

You can also create the Azure Resource Group [via the Azure Portal](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal), or [via the Azure Resource Management Python SDK](https://learn.microsoft.com/en-us/azure/developer/python/sdk/examples/azure-sdk-example-resource-group?tabs=bash) (requires it to be installed as `pip install azure-mgmt-resource` in advance).

</Tip>
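
For reference, a minimal sketch of the Python SDK alternative mentioned in the tip above, assuming `azure-mgmt-resource` and `azure-identity` are installed and reusing the same resource group name and location as the CLI steps:

```python
# A minimal sketch: create the resource group via the Azure Resource Management
# Python SDK (replace the subscription ID placeholder with yours).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

resource_client = ResourceManagementClient(DefaultAzureCredential(), "<YOUR_SUBSCRIPTION_ID>")
resource_group = resource_client.resource_groups.create_or_update(
    "huggingface-azure-rg", {"location": "eastus"}
)
print(resource_group.name, resource_group.location)
```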

## Azure AI Foundry Hub-based project

An Azure AI Foundry Hub under the aforementioned subscription and resource group. If you don't have one, you can create it as follows:

```bash
az ml workspace create \
    --kind hub \
    --name huggingface-azure-hub \
    --resource-group huggingface-azure-rg \
    --location eastus
```

<Tip>

Note that the main difference from a standard Azure ML Workspace is that the Azure AI Foundry Hub command requires you to specify `--kind hub`; removing it would create a standard Azure ML Workspace instead, so you wouldn't benefit from the features that Azure AI Foundry brings. When you create an Azure AI Foundry Hub, though, you can still benefit from all the features that Azure ML brings, since the Azure AI Foundry Hub still relies on Azure ML, but not the other way around.

</Tip>

Then, you can ensure that the workspace was created successfully by e.g. listing all the workspaces that you have access to in your subscription:

```bash
az ml workspace list --filtered-kinds hub --query "[].{Name:name, Kind:kind}" --resource-group huggingface-azure-rg --output table
```

<Tip warning>

The `--filtered-kinds` argument was recently added as of [Azure ML CLI 2.37.0](https://learn.microsoft.com/en-us/azure/machine-learning/azure-machine-learning-release-notes-cli-v2?view=azureml-api-2#azure-machine-learning-cli-v2-v-2370), meaning that you may need to upgrade `az ml` via `az extension update --name ml`.

</Tip>

Once the Azure AI Foundry Hub is created, you need to create an Azure AI Foundry Project linked to that Hub. To do so, you first need to obtain the Azure AI Foundry Hub ID of the recently created Hub as follows (replace the resource names with yours):

```bash
az ml workspace show \
    --name huggingface-azure-hub \
    --resource-group huggingface-azure-rg \
    --query "id" \
    -o tsv
```

That command will return the ID in the form `/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.MachineLearningServices/workspaces/huggingface-azure-hub`, meaning that you can also format it manually yourself with the appropriate replacements. Then, run the following command to create the Azure AI Foundry Project for that Hub:

```bash
az ml workspace create \
    --kind project \
    --hub-id $(az ml workspace show --name huggingface-azure-hub --resource-group huggingface-azure-rg --query "id" -o tsv) \
    --name huggingface-azure-project \
    --resource-group huggingface-azure-rg \
    --location eastus
```

Finally, you can verify that it was correctly created with the following command:

```bash
az ml workspace list --filtered-kinds project --query "[].{Name:name, Kind:kind}" --resource-group huggingface-azure-rg --output table
```

More information at [How to create and manage an Azure AI Foundry Hub](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/create-azure-ai-resource?tabs=portal) and at [How to create a Hub using the Azure CLI](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/develop/create-hub-project-sdk?tabs=azurecli).

<Tip>

You can also create the Azure AI Foundry Hub [via the Azure Portal](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/create-secure-ai-hub), or [via the Azure ML Python SDK](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/develop/create-hub-project-sdk?tabs=python), among other options listed in [Manage AI Hub Resources](https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/ai-resources).

</Tip>
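
As a rough sketch of the Azure ML Python SDK route mentioned in the tip above, assuming a recent `azure-ai-ml` release that exposes the `Hub` and `Project` entities, and reusing the resource names from the CLI steps:

```python
# A rough sketch, assuming a recent azure-ai-ml version with Hub / Project
# entities; replace the subscription ID placeholder with yours.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Hub, Project
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<YOUR_SUBSCRIPTION_ID>",
    resource_group_name="huggingface-azure-rg",
)

# Create the Azure AI Foundry Hub, then a Project linked to it via its ID
hub = ml_client.workspaces.begin_create(Hub(name="huggingface-azure-hub", location="eastus")).result()
project = ml_client.workspaces.begin_create(
    Project(name="huggingface-azure-project", location="eastus", hub_id=hub.id)
).result()
```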


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/set-up.mdx" />

### Deploy Vision Language Models (VLMs) on Azure AI
https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-vision-language-models.md

# Deploy Vision Language Models (VLMs) on Azure AI

This example showcases how to deploy a Vision Language Model (VLM), i.e., a Large Language Model (LLM) with vision understanding, from the Hugging Face Collection in Azure AI Foundry Hub as an Azure ML Managed Online Endpoint. Additionally, this example also showcases how to run inference with the Azure ML Python SDK and the OpenAI Python SDK, as well as how to locally run a Gradio application for chat completion with images.
 
<Tip>

Note that this example goes through the programmatic deployment via the Python SDK / Azure CLI; if you'd rather use the one-click deployment experience, please check [One-click deployments from the Hugging Face Hub on Azure AI](https://huggingface.co/docs/microsoft-azure/guides/one-click-deployment-azure-ai).

</Tip>

TL;DR Azure AI Foundry provides a unified platform for enterprise AI operations, model builders, and application development. Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project lifecycle.

---

This example will specifically deploy [`Qwen/Qwen2.5-VL-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) from the Hugging Face Hub (or see it on [AzureML](https://ml.azure.com/models/qwen-qwen2.5-vl-32b-instruct/version/1/catalog/registry/HuggingFace) or on [Azure AI Foundry](https://ai.azure.com/explore/models/qwen-qwen2.5-vl-32b-instruct/version/1/registry/HuggingFace)) as an Azure ML Managed Online Endpoint on Azure AI Foundry Hub.

Qwen2.5-VL is one of the latest VLMs from Qwen, released after the impact and feedback from the previous Qwen2 VL release, with some key enhancements such as:

- **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.
- **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use.
- **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has the new ability of capturing events by pinpointing the relevant video segments.
- **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.
- **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc.

![Qwen2.5 VL 32B Instruct on the Hugging Face Hub](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-vision-language-models/qwen2.5-vl-hub.png)

![Qwen2.5 VL 32B Instruct on Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-vision-language-models/qwen2.5-vl-azure-ai.png)

For more information, make sure to check [their model card on the Hugging Face Hub](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct/blob/main/README.md).

<Tip>

Note that you can select any VLM available on the Hugging Face Hub with the "Deploy to AzureML" option enabled, or directly select any of the VLMs available in either the Azure ML or Azure AI Foundry Hub Model Catalog under the "HuggingFace" collection (note that for Azure AI Foundry the Hugging Face Collection will only be available for Hub-based projects).

</Tip>

## Pre-requisites

To run the following example, you will need to comply with the following pre-requisites; alternatively, you can also read more about them in the [Azure Machine Learning Tutorial: Create resources you need to get started](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources?view=azureml-api-2).

- An Azure account with an active subscription.
- The Azure CLI installed and logged in.
- The Azure Machine Learning extension for the Azure CLI.
- An Azure Resource Group.
- A project based on an Azure AI Foundry Hub.

For more information, please go through the steps in [Set up Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/set-up).

## Setup and installation

In this example, the [Azure Machine Learning SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ml/azure-ai-ml) will be used to create the endpoint and the deployment, as well as to invoke the deployed API. Along with it, you will also need to install `azure-identity` to authenticate with your Azure credentials via Python.

```python
%pip install azure-ai-ml azure-identity --upgrade --quiet
```

More information at [Azure Machine Learning SDK for Python](https://learn.microsoft.com/en-us/python/api/overview/azure/ai-ml-readme?view=azure-python).

Then, for convenience, setting the following environment variables is recommended, as they will be used along the example for the Azure ML Client; make sure to update and set those values according to your Microsoft Azure account and resources.

```python
%env LOCATION eastus
%env SUBSCRIPTION_ID <YOUR_SUBSCRIPTION_ID>
%env RESOURCE_GROUP <YOUR_RESOURCE_GROUP>
%env AI_FOUNDRY_HUB_PROJECT <YOUR_AI_FOUNDRY_HUB_PROJECT>
```

Finally, you also need to define both the endpoint and deployment names, as those will be used throughout the example too:

<Tip>

Note that endpoint names must be globally unique per region, i.e., even if you don't have any endpoint with that name running under your subscription, if the name is reserved by another Azure customer, then you won't be able to use it. Adding a timestamp or a custom identifier is recommended to prevent running into HTTP 400 validation issues when trying to deploy an endpoint with an already locked / reserved name. Also, the endpoint name must be between 3 and 32 characters long.

</Tip>

```python
import os
from uuid import uuid4

os.environ["ENDPOINT_NAME"] = f"qwen-vl-endpoint-{str(uuid4())[:8]}"
os.environ["DEPLOYMENT_NAME"] = f"qwen-vl-deployment-{str(uuid4())[:8]}"
```

## Authenticate to Azure ML

Initially, you need to authenticate into the Azure AI Foundry Hub via Azure ML with the Azure ML Python SDK, which will later be used to deploy `Qwen/Qwen2.5-VL-32B-Instruct` as an Azure ML Managed Online Endpoint in your Azure AI Foundry Hub.

<Tip>

On standard Azure ML deployments you'd create the `MLClient` using the Azure ML Workspace name as the `workspace_name`, whereas for Azure AI Foundry you need to provide the Azure AI Foundry Hub name as the `workspace_name` instead, and that will deploy the endpoint under the Azure AI Foundry too.

</Tip>

```python
import os
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.getenv("SUBSCRIPTION_ID"),
    resource_group_name=os.getenv("RESOURCE_GROUP"),
    workspace_name=os.getenv("AI_FOUNDRY_HUB_PROJECT"),
)
```

## Create and Deploy Azure AI Endpoint

Before creating the Managed Online Endpoint, you need to build the model URI, which is formatted as follows: `azureml://registries/HuggingFace/models/<MODEL_ID>/labels/latest`, where the `MODEL_ID` is not the Hugging Face Hub ID but rather its name on Azure, built as follows:

```python
model_id = "Qwen/Qwen2.5-VL-32B-Instruct"

model_uri = f"azureml://registries/HuggingFace/models/{model_id.replace('/', '-').replace('_', '-').lower()}/labels/latest"
model_uri
```

<Tip>

To check if a model from the Hugging Face Hub is available in Azure, read about it in [Supported Models](https://huggingface.co/docs/microsoft-azure/azure-ai/models). If not, you can always [Request a model addition in the Hugging Face collection on Azure](https://huggingface.co/docs/microsoft-azure/guides/request-model-addition).

</Tip>

Then you need to create the [ManagedOnlineEndpoint via the Azure ML Python SDK](https://learn.microsoft.com/en-us/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint?view=azure-python) as follows.

<Tip>

Every model in the Hugging Face Collection is powered by an efficient inference backend, and each of those can run on a wide variety of instance types (as listed in [Supported Hardware](https://huggingface.co/docs/microsoft-azure/azure-ai/supported-hardware)). Since most models and inference engines require a GPU-accelerated instance, you might need to request a quota increase as per [Manage and increase quotas and limits for resources with Azure Machine Learning](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas?view=azureml-api-2).

</Tip>
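
To see whether you have quota headroom for the chosen instance family in your region, you can inspect the regional vCPU usage and limits with the Azure CLI; the location below is just an example:

```bash
# Inspect regional vCPU usage and limits per VM family (location is an example)
az vm list-usage --location eastus --output table
```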

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

endpoint = ManagedOnlineEndpoint(name=os.getenv("ENDPOINT_NAME"))

deployment = ManagedOnlineDeployment(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    model=model_uri,
    instance_type="Standard_NC40ads_H100_v5",
    instance_count=1,
)
```

```python
client.begin_create_or_update(endpoint).wait()
```

![Azure AI Endpoint from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-vision-language-models/azure-ai-endpoint.png)

<Tip>

In Azure AI Foundry the endpoint will only be listed within the "My assets -> Models + endpoints" tab once the deployment is created, not before; this is unlike Azure ML, where the endpoint is shown even if it doesn't contain any active or in-progress deployments.

</Tip>

```python
client.online_deployments.begin_create_or_update(deployment).wait()
```

![Azure AI Deployment from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-vision-language-models/azure-ai-deployment.png)

<Tip>

Note that whilst the Azure AI Endpoint creation is relatively fast, the deployment will take longer since it needs to allocate the resources on Azure, so expect it to take ~10-15 minutes, though it could take longer depending on instance provisioning and availability.

</Tip>

Once deployed, via either the Azure AI Foundry or the Azure ML Studio, you'll be able to inspect the endpoint details, the real-time logs, and how to consume the endpoint, and even use the [monitoring feature](https://learn.microsoft.com/en-us/azure/machine-learning/concept-model-monitoring?view=azureml-api-2), still in preview. Find more information about it at [Azure ML Managed Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints-online?view=azureml-api-2#managed-online-endpoints).
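
The deployment's container logs can also be fetched programmatically; a minimal sketch using the `MLClient` instantiated earlier in this example:

```python
# Fetch the latest container logs from the deployment for quick debugging
logs = client.online_deployments.get_logs(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    lines=100,
)
print(logs)
```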

## Send requests to the Azure AI Endpoint

Finally, now that the Azure AI Endpoint is deployed, you can send requests to it. In this case, since the task of the model is `image-text-to-text` (also known as `chat-completion` with image support), you can either use the default scoring endpoint, `/generate`, which is the standard text generation endpoint without chat capabilities (i.e., without leveraging the chat template or exposing an OpenAI-compatible interface), or benefit from the fact that the inference engine on which the model runs exposes OpenAI-compatible routes such as `/v1/chat/completions`.

<Tip>

Note that only some of the options are listed below, but you can send requests to the deployed endpoint as long as the HTTP requests carry the `azureml-model-deployment` header set to the name of the Azure AI Deployment (not the Endpoint) and the necessary authentication token / key for the given endpoint; then you can send HTTP requests to all the routes that the backend engine exposes, not only to the scoring route.

</Tip>

### Azure Python SDK

You can invoke the Azure ML Endpoint on the scoring route, in this case `/generate` (more information about it in the `Qwen/Qwen2.5-VL-32B-Instruct` page in either [AzureML](https://ml.azure.com/models/qwen-qwen2.5-vl-32b-instruct/version/1/catalog/registry/HuggingFace) or [Azure AI Foundry](https://ai.azure.com/explore/models/qwen-qwen2.5-vl-32b-instruct/version/1/registry/HuggingFace) catalogs), via the Azure Python SDK with the previously instantiated `azure.ai.ml.MLClient` (or instantiate a new one if working from a different session).

<Tip>

Since in this case you are deploying a Vision Language Model (VLM) with Text Generation Inference (TGI), to leverage the vision capabilities through the `/generate` endpoint you will need to include either the image URL or the base64 encoding of the image formatted in Markdown as e.g. `![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)What is this a picture of?\n\n` or `![](data:image/png;base64,...)What is this a picture of?\n\n`, respectively.

More information at [Vision Language Model Inference in TGI](https://huggingface.co/docs/text-generation-inference/basic_tutorials/visual_language_models).

</Tip>

```python
import json
import os
import tempfile

with tempfile.NamedTemporaryFile(mode="w+", delete=True, suffix=".json") as tmp:
    json.dump({
        "inputs": "![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)What is this a picture of?\n\n",
        "parameters": {"max_new_tokens": 128}
    }, tmp)
    
    tmp.flush()

    response = client.online_endpoints.invoke(
        endpoint_name=os.getenv("ENDPOINT_NAME"),
        deployment_name=os.getenv("DEPLOYMENT_NAME"),
        request_file=tmp.name,
    )

print(json.loads(response))
```

<Tip>

Note that the Azure ML Python SDK requires a path to a JSON file when invoking the endpoints, meaning that whatever payload you want to send needs to be converted into a JSON file first, though that only applies to requests sent via the Azure ML Python SDK.

</Tip>
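
If you'd rather skip the temporary JSON file, a hedged alternative is to send the payload straight to the scoring URI with `requests`, retrieving the URI and key through the same `MLClient`:

```python
# A minimal sketch: call the scoring route directly, bypassing the Azure ML
# Python SDK invocation (reuses the MLClient instantiated earlier).
import os

import requests

scoring_uri = client.online_endpoints.get(os.getenv("ENDPOINT_NAME")).scoring_uri
api_key = client.online_endpoints.get_keys(os.getenv("ENDPOINT_NAME")).primary_key

response = requests.post(
    scoring_uri,
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "azureml-model-deployment": os.getenv("DEPLOYMENT_NAME"),
    },
    json={
        "inputs": "![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png)What is this a picture of?\n\n",
        "parameters": {"max_new_tokens": 128},
    },
)
print(response.json())
```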

### OpenAI Python SDK

Since the inference engine on which the model is running exposes OpenAI-compatible routes, you can also leverage the OpenAI Python SDK to send requests to the deployed Azure AI Endpoint.

```python
%pip install openai --upgrade --quiet
```

To use the OpenAI Python SDK with Azure ML Managed Online Endpoints, you need to first retrieve:

- `api_url`, the base endpoint URL (the `/v1` route, which contains the `v1/chat/completions` endpoint that the OpenAI Python SDK will send requests to, is appended when instantiating the client)
- `api_key`, which is the API Key in Azure AI or the primary key in Azure ML (unless a dedicated Azure ML Token is used instead)

```python
from urllib.parse import urlsplit

api_key = client.online_endpoints.get_keys(os.getenv("ENDPOINT_NAME")).primary_key

url_parts = urlsplit(client.online_endpoints.get(os.getenv("ENDPOINT_NAME")).scoring_uri)
api_url = f"{url_parts.scheme}://{url_parts.netloc}"
```

<Tip>

Alternatively, you can also build the API URL manually as follows, since endpoint URIs are globally unique per region, meaning that there will only be one endpoint named the same way within the same region:
```python
api_url = f"https://{os.getenv('ENDPOINT_NAME')}.{os.getenv('LOCATION')}.inference.ml.azure.com"
```
Or just retrieve it from either the Azure AI Foundry or the Azure ML Studio.

</Tip>

Then you can use the OpenAI Python SDK normally, making sure to include the extra `azureml-model-deployment` header that contains the Azure AI / ML Deployment name.

Via the OpenAI Python SDK, the header can either be set within each call to `chat.completions.create` via the `extra_headers` parameter, as commented below, or via the `default_headers` parameter when instantiating the `OpenAI` client; the latter is the recommended approach, since the header needs to be present on each request, so setting it just once is preferred.

```python
import os
from openai import OpenAI

openai_client = OpenAI(
    base_url=f"{api_url}/v1",
    api_key=api_key,
    default_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")},
)

completion = openai_client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-32B-Instruct",
    messages=[
        {"role": "system", "content": "You are an assistant that responds like a pirate."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
                    },
                },
            ],
        },
    ],
    max_tokens=128,
    # extra_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")},
)
print(completion)
```

### cURL

Alternatively, you can also just use `cURL` to send requests to the deployed endpoint, with the `api_url` and `api_key` values programmatically retrieved in the OpenAI snippet and now set as environment variables so that `cURL` can use them, as follows:

```python
os.environ["API_URL"] = api_url
os.environ["API_KEY"] = api_key
```

```python
!curl -sS $API_URL/v1/chat/completions \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -H "azureml-model-deployment: $DEPLOYMENT_NAME" \
    -d '{ \
"messages":[ \
    {"role":"system","content":"You are an assistant that replies like a pirate."}, \
    {"role":"user","content": [ \
        {"type":"text","text":"What is in this image?"}, \
        {"type":"image_url","image_url":{"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"}} \
    ]} \
], \
"max_tokens":128 \
}' | jq
```

Alternatively, you can also just go to the Azure AI Endpoint in either the Azure AI Foundry under "My assets -> Models + endpoints" or in the Azure ML Studio via "Endpoints", and retrieve both the URL (note that it will default to the `/generate` endpoint, but to use the OpenAI-compatible layer you need to use the `/v1/chat/completions` endpoint instead) and the API Key values, as well as the Azure AI Deployment name for the given model.

### Gradio

[Gradio](https://www.gradio.app/) is the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it. You can also leverage the OpenAI Python SDK to build a simple multimodal (text and images) `ChatInterface` that you can use within the Jupyter Notebook cell where you are running it.

<Tip>

Ideally you could deploy the Gradio Chat Interface connected to your Azure ML Managed Online Endpoint as an Azure Container App as described in [Tutorial: Build and deploy from source code to Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/tutorial-deploy-from-code?tabs=python). If you'd like us to show you how to do it for Gradio in particular, feel free to [open an issue requesting it](https://github.com/huggingface/Microsoft-Azure/issues/new).

</Tip>

```python
%pip install gradio --upgrade --quiet
```

See below an example on how to leverage Gradio's `ChatInterface`, or find more information about it at [Gradio ChatInterface Docs](https://www.gradio.app/docs/gradio/chatinterface).

```python
import os
import base64
from typing import Dict, Iterator, List, Literal

import gradio as gr
from openai import OpenAI

openai_client = OpenAI(
    base_url=f"{os.getenv('API_URL')}/v1",  # append /v1, since API_URL is the base endpoint URL
    api_key=os.getenv("API_KEY"),
    default_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")}
)

def predict(
    message: Dict[str, str | List[str]],
    history: List[Dict[Literal["role", "content"], str]]
) -> Iterator[str]:
    content = []
    if message["text"]:
        content.append({"type": "text", "text": message["text"]})
    
    for file_path in message.get("files", []):
        with open(file_path, "rb") as image_file:
            base64_image = base64.b64encode(image_file.read()).decode("utf-8")
            content.append({
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{base64_image}"},
            })
    
    messages = history.copy()
    messages.append({"role": "user", "content": content})

    stream = openai_client.chat.completions.create(
        model="Qwen/Qwen2.5-VL-32B-Instruct",
        messages=messages,
        stream=True,
    )
    buffer = ""
    for chunk in stream:
        if chunk.choices[0].delta.content:
            buffer += chunk.choices[0].delta.content
            yield buffer

demo = gr.ChatInterface(
    predict,
    textbox=gr.MultimodalTextbox(
        label="Input",
        file_types=[".jpg", ".png", ".jpeg"],
        file_count="multiple"
    ),
    multimodal=True,
    type="messages"
)

demo.launch()
```

![Gradio Chat Interface with Azure ML Endpoint](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-vision-language-models/azure-ml-gradio.png)

## Release resources

Once you are done using the Azure AI Endpoint / Deployment, you can delete the resources as follows; this means that you will stop paying for the instance on which the model is running, and all the attached costs will stop.

```python
client.online_endpoints.begin_delete(name=os.getenv("ENDPOINT_NAME")).result()
```

## Conclusion

Throughout this example you learned how to create and configure your Azure account for Azure ML and Azure AI Foundry, how to create a Managed Online Endpoint running an open model from the Hugging Face Collection in the Azure AI Foundry Hub / Azure ML Model Catalog, how to send inference requests to it with different alternatives, how to build a simple Gradio chat interface around it, and finally, how to stop and release the resources.

If you have any doubts, issues or questions about this example, feel free to [open an issue](https://github.com/huggingface/Microsoft-Azure/issues/new) and we'll do our best to help!

---
<Tip>

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Microsoft-Azure/tree/main/examples/azure-ai/deploy-vision-language-models/azure-notebook.ipynb)!

</Tip>

<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/examples/deploy-vision-language-models.mdx" />

### Deploy Large Language Models (LLMs) on Azure AI
https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-large-language-models.md

# Deploy Large Language Models (LLMs) on Azure AI

This example showcases how to deploy a Large Language Model (LLM) from the Hugging Face Collection in Azure AI Foundry Hub as an Azure ML Managed Online Endpoint. Additionally, this example also showcases how to run inference with the Azure ML Python SDK and the OpenAI Python SDK, as well as how to locally run a Gradio application for chat completion.

<Tip>

Note that this example goes through the programmatic deployment via the Python SDK / Azure CLI; if you'd rather use the one-click deployment experience, please check [One-click deployments from the Hugging Face Hub on Azure AI](https://huggingface.co/docs/microsoft-azure/guides/one-click-deployment-azure-ai).

</Tip>

TL;DR Azure AI Foundry provides a unified platform for enterprise AI operations, model builders, and application development. Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project lifecycle.

---

This example will specifically deploy [`Qwen/Qwen2.5-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) from the Hugging Face Hub (or see it on [AzureML](https://ml.azure.com/models/qwen-qwen2.5-32b-instruct/version/1/catalog/registry/HuggingFace) or on [Azure AI Foundry](https://ai.azure.com/explore/models/qwen-qwen2.5-32b-instruct/version/1/registry/HuggingFace)) as an Azure ML Managed Online Endpoint on Azure AI Foundry Hub.

Qwen2.5 is one of the latest series of Qwen large language models, bringing improvements upon Qwen2 such as:

- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to Qwen's specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.
- Long-context support up to 128K tokens, with the ability to generate up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

![Qwen2.5 32B Instruct on the Hugging Face Hub](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-large-language-models/qwen2.5-hub.png)

![Qwen2.5 32B Instruct on Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-large-language-models/qwen2.5-azure-ai-foundry.png)

For more information, make sure to check [their model card on the Hugging Face Hub](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/README.md).

<Tip>

Note that you can select any LLM available on the Hugging Face Hub with the "Deploy to AzureML" option enabled, or directly select any of the LLMs available in either the Azure ML or Azure AI Foundry Hub Model Catalog under the "HuggingFace" collection (note that for Azure AI Foundry the Hugging Face Collection is only available for Hub-based projects).

</Tip>

## Pre-requisites

To run the following example, you will need to meet the following pre-requisites; alternatively, you can also read more about them in the [Azure Machine Learning Tutorial: Create resources you need to get started](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources?view=azureml-api-2):

- An Azure account with an active subscription.
- The Azure CLI installed and logged in.
- The Azure Machine Learning extension for the Azure CLI.
- An Azure Resource Group.
- A project based on an Azure AI Foundry Hub.

For more information, please go through the steps in [Set up Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/set-up).

## Setup and installation

In this example, the [Azure Machine Learning SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ml/azure-ai-ml) will be used to create the endpoint and the deployment, as well as to invoke the deployed API. Along with it, you will also need to install `azure-identity` to authenticate with your Azure credentials via Python.

```python
%pip install azure-ai-ml azure-identity --upgrade --quiet
```

More information at [Azure Machine Learning SDK for Python](https://learn.microsoft.com/en-us/python/api/overview/azure/ai-ml-readme?view=azure-python).

Then, for convenience, setting the following environment variables is recommended, as they will be used throughout the example for the Azure ML Client; make sure to set them according to your Microsoft Azure account and resources.

```python
%env LOCATION eastus
%env SUBSCRIPTION_ID <YOUR_SUBSCRIPTION_ID>
%env RESOURCE_GROUP <YOUR_RESOURCE_GROUP>
%env AI_FOUNDRY_HUB_PROJECT <YOUR_AI_FOUNDRY_HUB_PROJECT>
```

Finally, you also need to define both the endpoint and deployment names, as those will be used throughout the example too:

<Tip>

Note that endpoint names must be globally unique per region, i.e., even if you don't have an endpoint with that name running under your subscription, if the name is reserved by another Azure customer, you won't be able to use it. Adding a timestamp or a custom identifier is recommended to avoid HTTP 400 validation errors when trying to deploy an endpoint with an already locked / reserved name. Also, the endpoint name must be between 3 and 32 characters long.

</Tip>

```python
import os
from uuid import uuid4

os.environ["ENDPOINT_NAME"] = f"qwen-endpoint-{str(uuid4())[:8]}"
os.environ["DEPLOYMENT_NAME"] = f"qwen-deployment-{str(uuid4())[:8]}"
```

## Authenticate to Azure ML

Initially, you need to authenticate into the Azure AI Foundry Hub via Azure ML with the Azure ML Python SDK, which will be later used to deploy `Qwen/Qwen2.5-32B-Instruct` as an Azure ML Managed Online Endpoint in your Azure AI Foundry Hub.

<Tip>

On standard Azure ML deployments you'd create the `MLClient` using the Azure ML Workspace as the `workspace_name`, whereas for Azure AI Foundry you need to provide the Azure AI Foundry Hub name as the `workspace_name` instead, which will deploy the endpoint under the Azure AI Foundry Hub too.

</Tip>

```python
import os
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.getenv("SUBSCRIPTION_ID"),
    resource_group_name=os.getenv("RESOURCE_GROUP"),
    workspace_name=os.getenv("AI_FOUNDRY_HUB_PROJECT"),
)
```

## Create and Deploy Azure AI Endpoint

Before creating the Managed Online Endpoint, you need to build the model URI, which is formatted as follows: `azureml://registries/HuggingFace/models/<MODEL_ID>/labels/latest`, where `MODEL_ID` is not the Hugging Face Hub ID but rather its name on Azure:

```python
model_id = "Qwen/Qwen2.5-32B-Instruct"

model_uri = f"azureml://registries/HuggingFace/models/{model_id.replace('/', '-').replace('_', '-').lower()}/labels/latest"
model_uri
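# 'azureml://registries/HuggingFace/models/qwen-qwen2.5-32b-instruct/labels/latest'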
```

<Tip>

To check if a model from the Hugging Face Hub is available in Azure, you should read about it in [Supported Models](https://huggingface.co/docs/microsoft-azure/azure-ai/models). If not, you can always [Request a model addition in the Hugging Face collection on Azure](https://huggingface.co/docs/microsoft-azure/guides/request-model-addition).

</Tip>
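
If you'd rather check the availability programmatically, the following is a minimal sketch, assuming that a registry-scoped `MLClient` (via the `registry_name` parameter of `azure-ai-ml`) can reach the `HuggingFace` registry with your credentials:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Client scoped to the shared "HuggingFace" registry instead of a workspace
registry_client = MLClient(credential=DefaultAzureCredential(), registry_name="HuggingFace")

# List the models in the registry and check whether the Azure name is present
available = {m.name for m in registry_client.models.list()}
print("qwen-qwen2.5-32b-instruct" in available)
```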

Then you need to create the [ManagedOnlineEndpoint via the Azure ML Python SDK](https://learn.microsoft.com/en-us/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint?view=azure-python) as follows.

<Tip>

Every model in the Hugging Face Collection is powered by an efficient inference backend, and each of those can run on a wide variety of instance types (as listed in [Supported Hardware](https://huggingface.co/docs/microsoft-azure/azure-ai/supported-hardware)). Since the models and inference engines require a GPU-accelerated instance, you might need to request a quota increase as per [Manage and increase quotas and limits for resources with Azure Machine Learning](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas?view=azureml-api-2).

</Tip>

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

endpoint = ManagedOnlineEndpoint(name=os.getenv("ENDPOINT_NAME"))

deployment = ManagedOnlineDeployment(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    model=model_uri,
    instance_type="Standard_NC40ads_H100_v5",
    instance_count=1,
)
```

```python
client.begin_create_or_update(endpoint).wait()
```

![Azure AI Endpoint from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-large-language-models/azure-ai-endpoint.png)

<Tip>

In Azure AI Foundry the endpoint will only be listed within the "My assets -> Models + endpoints" tab once the deployment is created, unlike in Azure ML, where the endpoint is shown even if it doesn't contain any active or in-progress deployments.

</Tip>

```python
client.online_deployments.begin_create_or_update(deployment).wait()
```

![Azure AI Deployment from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-large-language-models/azure-ai-deployment.png)

<Tip>

Note that whilst the Azure AI Endpoint creation is relatively fast, the deployment will take longer since it needs to allocate the resources on Azure, so expect it to take ~10-15 minutes, though it could take even longer depending on instance provisioning and availability.

</Tip>

Once deployed, via either the Azure AI Foundry or the Azure ML Studio, you'll be able to inspect the endpoint details, the real-time logs, how to consume the endpoint, and even use the [monitoring feature](https://learn.microsoft.com/en-us/azure/machine-learning/concept-model-monitoring?view=azureml-api-2), still in preview. Find more information about it at [Azure ML Managed Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints-online?view=azureml-api-2#managed-online-endpoints).
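
You can also fetch the latest deployment logs programmatically with the same `MLClient`; a minimal sketch using `online_deployments.get_logs`, where `lines` caps how many log lines are returned:

```python
# Fetch the latest log lines from the deployment's inference container
logs = client.online_deployments.get_logs(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    lines=100,
)
print(logs)
```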

## Send requests to the Azure AI Endpoint

Finally, now that the Azure AI Endpoint is deployed, you can send requests to it. In this case, since the task of the model is `text-generation` (also known as `chat-completion`), you can either use the default scoring endpoint, `/generate`, which is a standard text generation endpoint without chat capabilities (i.e., without leveraging the chat template or exposing an OpenAI-compatible interface), or benefit from the fact that the inference engine on which the model runs exposes OpenAI-compatible routes such as `/v1/chat/completions`.

<Tip>

Note that below only some of the options are listed; you can send requests to the deployed endpoint as long as the HTTP requests carry the `azureml-model-deployment` header set to the name of the Azure AI Deployment (not the Endpoint) and the necessary authentication token / key for the given endpoint; then you can send HTTP requests to any of the routes that the backend engine exposes, not only to the scoring route.

</Tip>

### Azure Python SDK

You can invoke the Azure AI Endpoint on the scoring route, in this case `/generate` (more information about it in the `Qwen/Qwen2.5-32B-Instruct` page in either [AzureML](https://ml.azure.com/models/qwen-qwen2.5-32b-instruct/version/1/catalog/registry/HuggingFace) or [Azure AI Foundry](https://ai.azure.com/explore/models/qwen-qwen2.5-32b-instruct/version/1/registry/HuggingFace) catalogs), via the Azure Python SDK with the previously instantiated `azure.ai.ml.MLClient` (or instantiate a new one if working from a different session).

```python
import json
import os
import tempfile

with tempfile.NamedTemporaryFile(mode="w+", delete=True, suffix=".json") as tmp:
    json.dump({"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 128}}, tmp)
    tmp.flush()

    response = client.online_endpoints.invoke(
        endpoint_name=os.getenv("ENDPOINT_NAME"),
        deployment_name=os.getenv("DEPLOYMENT_NAME"),
        request_file=tmp.name,
    )

print(json.loads(response))
```

<Tip>

Note that the Azure ML Python SDK requires a path to a JSON file when invoking the endpoints, meaning that whatever payload you want to send to the endpoint needs to be written to a JSON file first; this only applies to requests sent via the Azure ML Python SDK.

</Tip>



### OpenAI Python SDK

Since the inference engine on which the model runs exposes OpenAI-compatible routes, you can also leverage the OpenAI Python SDK to send requests to the deployed Azure AI Endpoint.

```python
%pip install openai --upgrade --quiet
```

To use the OpenAI Python SDK with Azure ML Managed Online Endpoints, you need to first retrieve:

- `api_url` with the `/v1` route (that contains the `v1/chat/completions` endpoint that the OpenAI Python SDK will send requests to)
- `api_key` which is the API Key in Azure AI or the primary key in Azure ML (unless a dedicated Azure ML Token is used instead)

```python
from urllib.parse import urlsplit

api_key = client.online_endpoints.get_keys(os.getenv("ENDPOINT_NAME")).primary_key

url_parts = urlsplit(client.online_endpoints.get(os.getenv("ENDPOINT_NAME")).scoring_uri)
api_url = f"{url_parts.scheme}://{url_parts.netloc}"
```

<Tip>

Alternatively, you can also build the API URL manually as follows, since the URIs are globally unique per region, meaning that there will only be one endpoint with a given name within the same region:
```python
api_url = f"https://{os.getenv('ENDPOINT_NAME')}.{os.getenv('LOCATION')}.inference.ml.azure.com/v1"
```
Or just retrieve it from either the Azure AI Foundry or the Azure ML Studio.

</Tip>

Then you can use the OpenAI Python SDK normally, making sure to include the extra `azureml-model-deployment` header that contains the Azure AI / ML Deployment name.

Via the OpenAI Python SDK, it can either be set within each call to `chat.completions.create` via the `extra_headers` parameter, as commented below, or via the `default_headers` parameter when instantiating the `OpenAI` client; the latter is the recommended approach, since the header needs to be present on each request, so setting it just once is preferred.

```python
import os
from openai import OpenAI

openai_client = OpenAI(
    base_url=f"{api_url}/v1",
    api_key=api_key,
    default_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")},
)

completion = openai_client.chat.completions.create(
    model="Qwen/Qwen2.5-32B-Instruct",
    messages=[
        {"role": "system", "content": "You are an assistant that responds like a pirate."},
        {
            "role": "user",
            "content": "What is Deep Learning?",
        },
    ],
    max_tokens=128,
    # extra_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")},
)
print(completion)
```

### cURL

Alternatively, you can also just use `cURL` to send requests to the deployed endpoint, with the `api_url` and `api_key` values programmatically retrieved in the OpenAI snippet and now set as environment variables so that `cURL` can use them, as follows:

```python
os.environ["API_URL"] = api_url
os.environ["API_KEY"] = api_key
```

```python
!curl -sS $API_URL/chat/completions \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -H "azureml-model-deployment: $DEPLOYMENT_NAME" \
    -d '{"messages":[{"role":"system","content":"You are an assistant that replies like a pirate."},{"role":"user","content":"What is Deep Learning?"}],"max_tokens":128}' | jq
```

Alternatively, you can also just go to the Azure AI Endpoint in either the Azure AI Foundry under "My assets -> Models + endpoints" or in the Azure ML Studio via "Endpoints", and retrieve both the URL (note that it will default to the `/generate` endpoint; to use the OpenAI-compatible layer you need the `/v1/chat/completions` endpoint instead) and the API Key values, as well as the Azure AI Deployment name for the given model.

### Gradio

[Gradio](https://www.gradio.app/) is the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it. You can also leverage the OpenAI Python SDK to build a simple `ChatInterface` that you can use within the Jupyter Notebook cell where you are running it.

<Tip>

Ideally you could deploy the Gradio Chat Interface connected to your Azure ML Managed Online Endpoint as an Azure Container App as described in [Tutorial: Build and deploy from source code to Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/tutorial-deploy-from-code?tabs=python). If you'd like us to show you how to do it for Gradio in particular, feel free to [open an issue requesting it](https://github.com/huggingface/Microsoft-Azure/issues/new).

</Tip>

```python
%pip install gradio --upgrade --quiet
```

See below an example on how to leverage Gradio's `ChatInterface`, or find more information about it at [Gradio ChatInterface Docs](https://www.gradio.app/docs/gradio/chatinterface).

```python
import os
from typing import Dict, Iterator, List, Literal

import gradio as gr
from openai import OpenAI

openai_client = OpenAI(
    base_url=api_url,
    api_key=api_key,
    default_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")},
)

def predict(message: str, history: List[Dict[Literal["role", "content"], str]]) -> Iterator[str]:
    history.append({"role": "user", "content": message})
    
    stream = openai_client.chat.completions.create(
        model="Qwen/Qwen2.5-32B-Instruct",
        messages=history,
        stream=True,
    )
    chunks = []
    for chunk in stream:
        chunks.append(chunk.choices[0].delta.content or "")
        yield "".join(chunks)

demo = gr.ChatInterface(predict, type="messages")
demo.launch()
```

![Gradio Chat Interface with Azure AI Endpoint](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-large-language-models/azure-ml-gradio.png)

## Release resources

Once you are done using the Azure AI Endpoint / Deployment, you can delete the resources as follows, meaning that you will stop paying for the instance on which the model is running, along with all the attached costs.

```python
client.online_endpoints.begin_delete(name=os.getenv("ENDPOINT_NAME")).result()
```

## Conclusion

Throughout this example you learnt how to create and configure your Azure account for Azure ML and Azure AI Foundry, how to create a Managed Online Endpoint running an open model from the Hugging Face Collection in the Azure ML / Azure AI Foundry model catalog, how to send inference requests to it with different alternatives, how to build a simple Gradio chat interface around it, and finally, how to stop and release the resources.

If you have any doubts, issues or questions about this example, feel free to [open an issue](https://github.com/huggingface/Microsoft-Azure/issues/new) and we'll do our best to help!

---
<Tip>

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Microsoft-Azure/tree/main/examples/azure-ai/deploy-large-language-models/azure-notebook.ipynb)!

</Tip>

<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/examples/deploy-large-language-models.mdx" />

### Deploy SmolLM3 on Azure AI
https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-smollm3.md

# Deploy SmolLM3 on Azure AI

This example showcases how to deploy SmolLM3 from the Hugging Face Collection in Azure AI Foundry Hub as an Azure ML Managed Online Endpoint, powered by Transformers with an OpenAI-compatible route. Additionally, it showcases how to run inference with the OpenAI Python SDK for different scenarios and use-cases.

![SmolLM3 3B Logo image](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/zy0dqTCCt5IHmuzwoqtJ9.png)

TL;DR Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal models, for both inference and training. Azure AI Foundry provides a unified platform for enterprise AI operations, model builders, and application development. Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project lifecycle.

---

This example will specifically deploy [`HuggingFaceTB/SmolLM3-3B`](https://huggingface.co/HuggingFaceTB/SmolLM3-3B) from the Hugging Face Hub (or see it on [AzureML](https://ml.azure.com/models/huggingfacetb-smollm3-3b/version/3/catalog/registry/HuggingFace) or on [Azure AI Foundry](https://ai.azure.com/explore/models/huggingfacetb-smollm3-3b/version/3/registry/HuggingFace)) as an Azure ML Managed Online Endpoint on Azure AI Foundry Hub.

SmolLM3 is a 3B parameter language model designed to push the boundaries of small models. It supports dual mode reasoning, 6 languages and long context. SmolLM3 is a fully open model that offers strong performance at the 3B–4B scale.

![Small LLM win-rate on benchmarks per model size](https://cdn-uploads.huggingface.co/production/uploads/6200d0a443eb0913fa2df7cc/db3az7eGzs-Sb-8yUj-ff.png)

The model is a decoder-only transformer using GQA and NoPE (with a 3:1 ratio); it was pretrained on 11.2T tokens with a staged curriculum of web, code, math and reasoning data. Post-training included midtraining on 140B reasoning tokens, followed by supervised fine-tuning and alignment via Anchored Preference Optimization (APO).

- Instruct model optimized for **hybrid reasoning**
- **Fully open model**: open weights + full training details including public data mixture and training configs
- **Long context:** Trained on 64k context and supports up to **128k tokens** using YaRN extrapolation
- **Multilingual**: 6 natively supported languages (English, French, Spanish, German, Italian, and Portuguese)

![SmolLM3 3B on the Hugging Face Hub](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-smollm3/smollm3-hub.png)

![SmolLM3 3B on Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-smollm3/smollm3-azure-ai.png)

For more information, make sure to check [our model card on the Hugging Face Hub](https://huggingface.co/HuggingFaceTB/SmolLM3-3B/blob/main/README.md).

## Pre-requisites

To run the following example, you will need to meet the following pre-requisites; alternatively, you can also read more about them in the [Azure Machine Learning Tutorial: Create resources you need to get started](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources?view=azureml-api-2):

- An Azure account with an active subscription.
- The Azure CLI installed and logged in.
- The Azure Machine Learning extension for the Azure CLI.
- An Azure Resource Group.
- A project based on an Azure AI Foundry Hub.

For more information, please go through the steps in [Set up Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/set-up).

## Setup and installation

In this example, the [Azure Machine Learning SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ml/azure-ai-ml) will be used to create the endpoint and the deployment, as well as to invoke the deployed API. Along with it, you will also need to install `azure-identity` to authenticate with your Azure credentials via Python.

```python
%pip install azure-ai-ml azure-identity --upgrade --quiet
```

More information at [Azure Machine Learning SDK for Python](https://learn.microsoft.com/en-us/python/api/overview/azure/ai-ml-readme?view=azure-python).

Then, for convenience, setting the following environment variables is recommended, as they will be used throughout the example for the Azure ML Client; make sure to set them according to your Microsoft Azure account and resources.

```python
%env LOCATION eastus
%env SUBSCRIPTION_ID <YOUR_SUBSCRIPTION_ID>
%env RESOURCE_GROUP <YOUR_RESOURCE_GROUP>
%env AI_FOUNDRY_HUB_PROJECT <YOUR_AI_FOUNDRY_HUB_PROJECT>
```

Finally, you also need to define both the endpoint and deployment names, as those will be used throughout the example too:

<Tip>

Note that endpoint names must be globally unique per region, i.e., even if you don't have an endpoint with that name running under your subscription, if the name is reserved by another Azure customer, you won't be able to use it. Adding a timestamp or a custom identifier is recommended to avoid HTTP 400 validation errors when trying to deploy an endpoint with an already locked / reserved name. Also, the endpoint name must be between 3 and 32 characters long.

</Tip>

```python
import os
from uuid import uuid4

os.environ["ENDPOINT_NAME"] = f"smollm3-endpoint-{str(uuid4())[:8]}"
os.environ["DEPLOYMENT_NAME"] = f"smollm3-deployment-{str(uuid4())[:8]}"
```

## Authenticate to Azure ML

Initially, you need to authenticate into the Azure AI Foundry Hub via Azure ML with the Azure ML Python SDK, which will be later used to deploy `HuggingFaceTB/SmolLM3-3B` as an Azure ML Managed Online Endpoint in your Azure AI Foundry Hub.

<Tip>

On standard Azure ML deployments you'd create the `MLClient` using the Azure ML Workspace as the `workspace_name`, whereas for Azure AI Foundry you need to provide the Azure AI Foundry Hub name as the `workspace_name` instead, which will deploy the endpoint under the Azure AI Foundry Hub too.

</Tip>

```python
import os
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.getenv("SUBSCRIPTION_ID"),
    resource_group_name=os.getenv("RESOURCE_GROUP"),
    workspace_name=os.getenv("AI_FOUNDRY_HUB_PROJECT"),
)
```

## Create and Deploy Azure AI Endpoint

Before creating the Managed Online Endpoint, you need to build the model URI, which is formatted as follows: `azureml://registries/HuggingFace/models/<MODEL_ID>/labels/latest`, where `MODEL_ID` is not the Hugging Face Hub ID but rather its name on Azure:

```python
model_id = "HuggingFaceTB/SmolLM3-3B"

model_uri = f"azureml://registries/HuggingFace/models/{model_id.replace('/', '-').replace('_', '-').lower()}/labels/latest"
model_uri
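# 'azureml://registries/HuggingFace/models/huggingfacetb-smollm3-3b/labels/latest'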
```

<Tip>

To check if a model from the Hugging Face Hub is available in Azure, you should read about it in [Supported Models](https://huggingface.co/docs/microsoft-azure/azure-ai/models). If not, you can always [Request a model addition in the Hugging Face collection on Azure](https://huggingface.co/docs/microsoft-azure/guides/request-model-addition).

</Tip>

Then you need to create the [ManagedOnlineEndpoint via the Azure ML Python SDK](https://learn.microsoft.com/en-us/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint?view=azure-python) as follows.

<Tip>

Every model in the Hugging Face Collection is powered by an efficient inference backend, and each of those can run on a wide variety of instance types (as listed in [Supported Hardware](https://huggingface.co/docs/microsoft-azure/azure-ai/supported-hardware)). Since the models and inference engines require a GPU-accelerated instance, you might need to request a quota increase as per [Manage and increase quotas and limits for resources with Azure Machine Learning](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas?view=azureml-api-2).

</Tip>

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

endpoint = ManagedOnlineEndpoint(name=os.getenv("ENDPOINT_NAME"))

deployment = ManagedOnlineDeployment(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    model=model_uri,
    instance_type="Standard_NC40ads_H100_v5",
    instance_count=1,
)
```

```python
client.begin_create_or_update(endpoint).wait()
```

![Azure AI Endpoint from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-smollm3/azure-ai-endpoint.png)

<Tip>

In Azure AI Foundry the endpoint will only be listed within the "My assets -> Models + endpoints" tab once the deployment is created, unlike in Azure ML, where the endpoint is shown even if it doesn't contain any active or in-progress deployments.

</Tip>

```python
client.online_deployments.begin_create_or_update(deployment).wait()
```

![Azure AI Deployment from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-smollm3/azure-ai-deployment.png)

<Tip>

Note that whilst the Azure AI Endpoint creation is relatively fast, the deployment will take longer since it needs to allocate the resources on Azure, so expect it to take ~10-15 minutes, though it could take even longer depending on instance provisioning and availability.

</Tip>

Once deployed, via either the Azure AI Foundry or the Azure ML Studio, you'll be able to inspect the endpoint details, the real-time logs, how to consume the endpoint, and even use the [monitoring feature](https://learn.microsoft.com/en-us/azure/machine-learning/concept-model-monitoring?view=azureml-api-2), still in preview. Find more information about it at [Azure ML Managed Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints-online?view=azureml-api-2#managed-online-endpoints).

## Send requests to the Azure AI Endpoint

Finally, now that the Azure AI Endpoint is deployed, you can send requests to it. In this case, since the task of the model is `text-generation` (also known as `chat-completion`), you can use the OpenAI Python SDK with the OpenAI-compatible route, i.e., `/v1/chat/completions`.

<Tip>

Note that below only some of the options are listed; you can send requests to the deployed endpoint as long as the HTTP requests carry the `azureml-model-deployment` header set to the name of the Azure AI Deployment (not the Endpoint) and the necessary authentication token / key for the given endpoint; then you can send HTTP requests to any of the routes that the backend engine exposes, not only to the scoring route.

</Tip>

```python
%pip install openai --upgrade --quiet
```

To use the OpenAI Python SDK with Azure ML Managed Online Endpoints, you need to first retrieve:

- `api_url` with the `/v1` route (that contains the `v1/chat/completions` endpoint that the OpenAI Python SDK will send requests to)
- `api_key` which is the API Key in Azure AI or the primary key in Azure ML (unless a dedicated Azure ML Token is used instead)

```python
from urllib.parse import urlsplit

api_key = client.online_endpoints.get_keys(os.getenv("ENDPOINT_NAME")).primary_key

url_parts = urlsplit(client.online_endpoints.get(os.getenv("ENDPOINT_NAME")).scoring_uri)
api_url = f"{url_parts.scheme}://{url_parts.netloc}/v1"
```

<Tip>

Alternatively, you can also build the API URL manually as follows, since the URIs are globally unique per region, meaning that there will only be one endpoint with a given name within the same region:
```python
api_url = f"https://{os.getenv('ENDPOINT_NAME')}.{os.getenv('LOCATION')}.inference.ml.azure.com/v1"
```
Or just retrieve it from either the Azure AI Foundry or the Azure ML Studio.

</Tip>

Then you can use the OpenAI Python SDK normally, making sure to include the extra `azureml-model-deployment` header that contains the Azure AI / ML Deployment name.

Via the OpenAI Python SDK, it can either be set within each call to `chat.completions.create` via the `extra_headers` parameter, or via the `default_headers` parameter when instantiating the `OpenAI` client; the latter is the recommended approach, since the header needs to be present on each request, so setting it just once is preferred.

```python
import os
from openai import OpenAI

openai_client = OpenAI(
    base_url=api_url,
    api_key=api_key,
    default_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")},
)
```

### Chat Completions

```python
completion = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {
            "role": "system",
            "content": "You are an assistant that responds like a pirate.",
        },
        {
            "role": "user",
            "content": "Give me a brief explanation of gravity in simple terms.",
        },
    ],
    max_tokens=128,
)
print(completion)
# ChatCompletion(id='chatcmpl-74f6852e28', choices=[Choice(finish_reason='length', index=0, logprobs=None, message=ChatCompletionMessage(content="<think>\nOkay, the user wants a simple explanation of gravity. Let me start by recalling what I know. Gravity is the force that pulls objects towards each other. But how to explain that simply?\n\nMaybe start with a common example, like how you fall when you jump. That's gravity pulling you down. But wait, I should mention that it's not just on Earth. The moon orbits the Earth because of gravity too. But how to make that easy to understand?\n\nI need to avoid technical terms. Maybe use metaphors. Like comparing gravity to a magnet, but not exactly. Or think of it as a stretchy rope pulling", refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=None))], created=1753178803, model='HuggingFaceTB/SmolLM3-3B', object='chat.completion', service_tier='default', system_fingerprint='1a28be5c-df18-4e97-822f-118bf57374c8', usage=CompletionUsage(completion_tokens=128, prompt_tokens=66, total_tokens=194, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0), reasoning_tokens=0))
```
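
Since the reasoning trace is returned inline within the message content, you may want to separate it from the final answer. Below is a minimal sketch, assuming the trace is delimited by `<think>...</think>` tags as in the output above:

```python
content = completion.choices[0].message.content

# Split the reasoning trace from the final answer (if a complete trace is present)
reasoning, sep, answer = content.partition("</think>")
if sep:
    reasoning = reasoning.removeprefix("<think>").strip()
else:
    # No closing tag found, e.g. the generation was truncated by `max_tokens`
    reasoning, answer = "", content

print("Reasoning:", reasoning)
print("Answer:", answer.strip())
```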

### Extended Thinking Mode

By default, `SmolLM3-3B` has extended thinking enabled, so the example above generates the output with a reasoning trace.

To enable or disable it, you can add `/think` or `/no_think`, respectively, to the system prompt.

```python
completion = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {
            "role": "system",
            "content": "/no_think You are an assistant that responds like a pirate.",
        },
        {
            "role": "user",
            "content": "Give me a brief explanation of gravity in simple terms.",
        },
    ],
    max_tokens=128,
)
print(completion)
# ChatCompletion(id='chatcmpl-776e84a272', choices=[Choice(finish_reason='length', index=0, logprobs=None, message=ChatCompletionMessage(content="Arr matey! Ye be askin' about gravity, the mighty force that keeps us swabbin' the decks and not floatin' off into the vast blue yonder. Gravity be the pull o' the Earth, a mighty force that keeps us grounded and keeps the stars in their place. It's like a giant invisible hand that pulls us towards the center o' the Earth, makin' sure we don't float off into space. It's what makes the apples fall from the tree and the moon orbit 'round the Earth. So, gravity be the force that keeps us all tied to this fine planet we call home.", refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=None))], created=1753178805, model='HuggingFaceTB/SmolLM3-3B', object='chat.completion', service_tier='default', system_fingerprint='d644cb1c-84d6-49ae-b790-ac6011851042', usage=CompletionUsage(completion_tokens=128, prompt_tokens=72, total_tokens=200, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0), reasoning_tokens=0))
```

### Multilingual capabilities

As mentioned before, `SmolLM3-3B` has been trained to natively support 6 languages: English, French, Spanish, German, Italian, and Portuguese; meaning that you can leverage its multilingual potential by sending requests in any of those languages.

```python
completion = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {
            "role": "system",
            "content": "/no_think You are an expert translator.",
        },
        {
            "role": "user",
            "content": "Translate the following English sentence into both Spanish and German: 'The brown cat sat on the mat.'",
        },
    ],
    max_tokens=128,
)
print(completion)
# ChatCompletion(id='chatcmpl-da6188629f', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content="The translation of the English sentence 'The brown cat sat on the mat.' into Spanish is: 'El gato marrón se sentó en el tapete.'\n\nThe translation of the English sentence 'The brown cat sat on the mat.' into German is: 'Der braune Katze saß auf dem Teppich.'", refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=None))], created=1753178807, model='HuggingFaceTB/SmolLM3-3B', object='chat.completion', service_tier='default', system_fingerprint='054f8a76-4e8c-4a2f-90eb-31f0e802916c', usage=CompletionUsage(completion_tokens=68, prompt_tokens=77, total_tokens=145, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0), reasoning_tokens=0))
```

### Agentic use-cases and Tool Calling

`SmolLM3-3B` has tool-calling capabilities, meaning that you can provide a tool or a list of tools that the LLM can leverage.

<Tip>

To prevent the `tool_call` from being incomplete, you might need to either unset the value for `max_completion_tokens` (formerly `max_tokens`), or set it to a high enough value so that the model doesn't stop producing tokens due to length limitations before the `tool_call` is complete.

</Tip>

```python
completion = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[{"role": "user", "content": "What is the weather like in New York?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The unit of temperature",
                        },
                    },
                    "required": ["location"],
                },
            },
        }
    ],
    tool_choice="auto",
    max_completion_tokens=256,
)
print(completion)
# ChatCompletion(id='chatcmpl-c36090e6b5', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content='<think>I need to retrieve the current weather information for New York, so I\'ll use the get_weather function with the location set to \'New York\' and the unit set to \'fahrenheit\'.</think>\n<tool_call>{"name": "get_weather", "arguments": {"location": "New York", "unit": "fahrenheit"}}</tool_call>', refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call-5d5eb71a', function=Function(arguments='{"location": "New York", "unit": "fahrenheit"}', name='get_weather'), type='function')]))], created=1753178808, model='HuggingFaceTB/SmolLM3-3B', object='chat.completion', service_tier='default', system_fingerprint='5e58b305-773c-40b6-900b-fe5b177aeab9', usage=CompletionUsage(completion_tokens=68, prompt_tokens=442, total_tokens=510, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0), reasoning_tokens=0))
```
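
Note that the response above only contains the structured `tool_calls`; executing the tool and sending its result back is up to you. Below is a minimal sketch of that round trip, assuming the inference backend accepts OpenAI-style `tool` messages, and using a hypothetical local `get_weather` implementation:

```python
import json

# Hypothetical local implementation of the `get_weather` tool declared above
def get_weather(location: str, unit: str = "celsius") -> str:
    return f"It is sunny and 24 degrees {unit} in {location}."

# Parse the tool call emitted by the model and execute the tool locally
tool_call = completion.choices[0].message.tool_calls[0]
arguments = json.loads(tool_call.function.arguments)

followup = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {"role": "user", "content": "What is the weather like in New York?"},
        completion.choices[0].message,  # assistant message carrying the tool call
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": get_weather(**arguments),
        },
    ],
    max_completion_tokens=256,
)
print(followup.choices[0].message.content)
```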

## Release resources

Once you are done using the Azure AI Endpoint / Deployment, you can delete the resources as follows, meaning that you will stop paying for the instance on which the model is running, along with all the attached costs.

```python
client.online_endpoints.begin_delete(name=os.getenv("ENDPOINT_NAME")).result()
```

## Conclusion

Throughout this example you learnt how to create and configure your Azure account for Azure ML and Azure AI Foundry, how to create a Managed Online Endpoint running an open model from the Hugging Face Collection in the Azure ML / Azure AI Foundry model catalog, how to send inference requests with the OpenAI Python SDK for a wide variety of use-cases, and finally, how to stop and release the resources.

If you have any doubts, issues or questions about this example, feel free to [open an issue](https://github.com/huggingface/Microsoft-Azure/issues/new) and we'll do our best to help!

---
<Tip>

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Microsoft-Azure/tree/main/examples/azure-ai/deploy-smollm3/azure-notebook.ipynb)!

</Tip>

<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/examples/deploy-smollm3.mdx" />

### Build Agents with smolagents on Azure AI
https://huggingface.co/docs/microsoft-azure/azure-ai/examples/build-agents-with-smolagents.md

# Build Agents with smolagents on Azure AI

This example showcases how to build agents with [`smolagents`](https://github.com/huggingface/smolagents), leveraging Large Language Models (LLMs) from the Hugging Face Collection in Azure AI Foundry Hub deployed as an Azure ML Managed Online Endpoint.

<Tip warning={true}>

This example is not intended to be an in-depth example of how to deploy Large Language Models (LLMs) on Azure AI, but rather focuses on how to build agents on top of one; that being said, it's highly recommended to read more about Azure AI deployments in the example ["Deploy Large Language Models (LLMs) on Azure AI"](https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-large-language-models).

</Tip>

TL;DR Smolagents is an open-source Python library designed to make it extremely easy to build and run agents using just a few lines of code. Azure AI Foundry provides a unified platform for enterprise AI operations, model builders, and application development. Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project lifecycle.

---

This example will specifically deploy [`Qwen/Qwen2.5-Coder-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) from the Hugging Face Hub (or see it on [AzureML](https://ml.azure.com/models/qwen-qwen2.5-coder-32b-instruct/version/2/catalog/registry/HuggingFace) or on [Azure AI Foundry](https://ai.azure.com/explore/models/qwen-qwen2.5-coder-32b-instruct/version/2/registry/HuggingFace)) as an Azure ML Managed Online Endpoint on Azure AI Foundry Hub.

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen), bringing the following improvements over CodeQwen1.5:

- Significant improvements in code generation, code reasoning and code fixing. Based on the strong Qwen2.5, the training tokens were scaled up to 5.5 trillion, including source code, text-code grounding, synthetic data, and more. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as Code Agents, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- Long-context support of up to 128K tokens.

![Qwen2.5 Coder 32B Instruct on the Hugging Face Hub](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/build-agents-with-smolagents/qwen2.5-coder-hub.png)

![Qwen2.5 Coder 32B Instruct on Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/build-agents-with-smolagents/qwen2.5-coder-azure-ai.png)

For more information, make sure to check [their model card on the Hugging Face Hub](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/README.md).

<Tip>

Note that you can select any LLM available on the Hugging Face Hub with the "Deploy to AzureML" option enabled, or directly select any of the LLMs available in either the Azure ML or Azure AI Foundry Hub Model Catalog under the "HuggingFace" collection (note that for Azure AI Foundry the Hugging Face Collection is only available for Hub-based projects).

</Tip>

## Pre-requisites

To run the following example, you will need to meet the following pre-requisites; alternatively, you can also read more about them in the [Azure Machine Learning Tutorial: Create resources you need to get started](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources?view=azureml-api-2):

- An Azure account with an active subscription.
- The Azure CLI installed and logged in.
- The Azure Machine Learning extension for the Azure CLI.
- An Azure Resource Group.
- A project based on an Azure AI Foundry Hub.

For more information, please go through the steps in [Set up Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/set-up).

## Setup and installation

In this example, the [Azure Machine Learning SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ml/azure-ai-ml) will be used to create the endpoint and the deployment, as well as to invoke the deployed API. Along with it, you will also need to install `azure-identity` to authenticate with your Azure credentials via Python.

```python
%pip install azure-ai-ml azure-identity --upgrade --quiet
```

More information at [Azure Machine Learning SDK for Python](https://learn.microsoft.com/en-us/python/api/overview/azure/ai-ml-readme?view=azure-python).

Then, for convenience, setting the following environment variables is recommended, as they will be used throughout the example for the Azure ML Client; make sure to set them according to your Microsoft Azure account and resources.

```python
%env LOCATION eastus
%env SUBSCRIPTION_ID <YOUR_SUBSCRIPTION_ID>
%env RESOURCE_GROUP <YOUR_RESOURCE_GROUP>
%env AI_FOUNDRY_HUB_PROJECT <YOUR_AI_FOUNDRY_HUB_PROJECT>
```

Finally, you also need to define both the endpoint and deployment names, as those will be used throughout the example too:

<Tip>

Note that endpoint names must be globally unique per region, i.e., even if you don't have an endpoint with that name running under your subscription, if the name is reserved by another Azure customer, you won't be able to use it. Adding a timestamp or a custom identifier is recommended to avoid HTTP 400 validation errors when trying to deploy an endpoint with an already locked / reserved name. Also, the endpoint name must be between 3 and 32 characters long.

</Tip>

```python
import os
from uuid import uuid4

os.environ["ENDPOINT_NAME"] = f"qwen-coder-endpoint-{str(uuid4())[:8]}"
os.environ["DEPLOYMENT_NAME"] = f"qwen-coder-deployment-{str(uuid4())[:8]}"
```

## Authenticate to Azure ML

Initially, you need to authenticate into the Azure AI Foundry Hub via Azure ML with the Azure ML Python SDK, which will be later used to deploy `Qwen/Qwen2.5-Coder-32B-Instruct` as an Azure ML Managed Online Endpoint in your Azure AI Foundry Hub.

<Tip>

On standard Azure ML deployments you'd create the `MLClient` using the Azure ML Workspace as the `workspace_name`, whereas for Azure AI Foundry you need to provide the Azure AI Foundry Hub name as the `workspace_name` instead, which will deploy the endpoint under the Azure AI Foundry Hub too.

</Tip>

```python
import os
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.getenv("SUBSCRIPTION_ID"),
    resource_group_name=os.getenv("RESOURCE_GROUP"),
    workspace_name=os.getenv("AI_FOUNDRY_HUB_PROJECT"),
)
```

## Create and Deploy Azure AI Endpoint

Before creating the Managed Online Endpoint, you need to build the model URI, which is formatted as follows: `azureml://registries/HuggingFace/models/<MODEL_ID>/labels/latest`, where `MODEL_ID` is not the Hugging Face Hub ID but rather its name on Azure:

```python
model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"

model_uri = f"azureml://registries/HuggingFace/models/{model_id.replace('/', '-').replace('_', '-').lower()}/labels/latest"
model_uri
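# 'azureml://registries/HuggingFace/models/qwen-qwen2.5-coder-32b-instruct/labels/latest'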
```

<Tip>

To check if a model from the Hugging Face Hub is available in Azure, you should read about it in [Supported Models](https://huggingface.co/docs/microsoft-azure/azure-ai/models). If not, you can always [Request a model addition in the Hugging Face collection on Azure](https://huggingface.co/docs/microsoft-azure/guides/request-model-addition).

</Tip>

Then you need to create the [ManagedOnlineEndpoint via the Azure ML Python SDK](https://learn.microsoft.com/en-us/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint?view=azure-python) as follows.

<Tip>

Every model in the Hugging Face Collection is powered by an efficient inference backend, and each of those can run on a wide variety of instance types (as listed in [Supported Hardware](https://huggingface.co/docs/microsoft-azure/azure-ai/supported-hardware)). Since the models and inference engines require a GPU-accelerated instance, you might need to request a quota increase as per [Manage and increase quotas and limits for resources with Azure Machine Learning](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas?view=azureml-api-2).

</Tip>

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

endpoint = ManagedOnlineEndpoint(name=os.getenv("ENDPOINT_NAME"))

deployment = ManagedOnlineDeployment(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    model=model_uri,
    instance_type="Standard_NC40ads_H100_v5",
    instance_count=1,
)
```

```python
client.begin_create_or_update(endpoint).wait()
```

![Azure AI Endpoint from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/build-agents-with-smolagents/azure-ai-endpoint.png)

<Tip>

In Azure AI Foundry the endpoint will only be listed within the "My assets -> Models + endpoints" tab once the deployment is created, unlike in Azure ML, where the endpoint is shown even if it doesn't contain any active or in-progress deployments.

</Tip>

```python
client.online_deployments.begin_create_or_update(deployment).wait()
```

![Azure AI Deployment from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/build-agents-with-smolagents/azure-ai-deployment.png)

<Tip>

Note that whilst the Azure AI Endpoint creation is relatively fast, the deployment will take longer since it needs to allocate the resources on Azure, so expect it to take ~10-15 minutes, though it could take even longer depending on instance provisioning and availability.

</Tip>

Once deployed, via either the Azure AI Foundry or the Azure ML Studio, you'll be able to inspect the endpoint details, the real-time logs, how to consume the endpoint, and even use the [monitoring feature](https://learn.microsoft.com/en-us/azure/machine-learning/concept-model-monitoring?view=azureml-api-2), still in preview. Find more information about it at [Azure ML Managed Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints-online?view=azureml-api-2#managed-online-endpoints).

## Build agents with smolagents

Now that the Azure AI Endpoint is running, you can start sending requests to it. There are multiple approaches to do so, but the following only covers the OpenAI Python SDK approach, so check e.g. [Deploy Large Language Models (LLMs) on Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-large-language-models) to see the different alternatives.

So, the steps to follow to build the agent are:

1. Create the OpenAI client with `smolagents`, connected to the running Azure AI Endpoint, via `smolagents.OpenAIServerModel` (note that `smolagents` also exposes `smolagents.AzureOpenAIServerModel`, but that's the client for using OpenAI via Azure OpenAI, not for connecting to Azure AI).
2. Define the set of tools that the agent will have access to, i.e., Python functions decorated with `smolagents.tool`.
3. Create the `smolagents.CodeAgent` leveraging the code LLM deployed on Azure AI, adding the set of tools previously defined so that the agent can use them when appropriate, using a local executor (not recommended if the code to be executed is sensitive or untrusted).

### Create OpenAI Client

Since every LLM in the Hugging Face catalog is deployed with an inference engine that exposes OpenAI-compatible routes, you can also leverage the OpenAI Python SDK via `smolagents` to send requests to the deployed Azure AI Endpoint.

```python
%pip install "smolagents[openai]" --upgrade --quiet
```

To use the OpenAI Python SDK with Azure ML Managed Online Endpoints, you need to first retrieve:

- `api_url` with the `/v1` route (that contains the `v1/chat/completions` endpoint that the OpenAI Python SDK will send requests to)
- `api_key` which is the API Key in Azure AI or the primary key in Azure ML (unless a dedicated Azure ML Token is used instead)

```python
from urllib.parse import urlsplit

api_key = client.online_endpoints.get_keys(os.getenv("ENDPOINT_NAME")).primary_key

url_parts = urlsplit(client.online_endpoints.get(os.getenv("ENDPOINT_NAME")).scoring_uri)
api_url = f"{url_parts.scheme}://{url_parts.netloc}/v1"
```

<Tip>

Alternatively, you can also build the API URL manually as follows, since the URIs are globally unique per region, meaning that there will only be one endpoint with a given name within the same region:
```python
api_url = f"https://{os.getenv('ENDPOINT_NAME')}.{os.getenv('LOCATION')}.inference.ml.azure.com/v1"
```
Or just retrieve it from either the Azure AI Foundry or the Azure ML Studio.

</Tip>

Then you can use the OpenAI Python SDK normally, making sure to include the extra `azureml-model-deployment` header that contains the Azure AI / ML Deployment name.

The extra header is provided via the `default_headers` argument of the OpenAI Python SDK when instantiating the client; in `smolagents`, it's provided via the `client_kwargs` argument of `smolagents.OpenAIServerModel`, which propagates it to the underlying `OpenAI` client.

```python
from smolagents import OpenAIServerModel

model = OpenAIServerModel(
    model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
    api_base=api_url,
    api_key=api_key,
    client_kwargs={"default_headers": {"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")}},
)
```

### Build Python Tools

`smolagents` will be used to build the tools that the agent will leverage, as well as to build the `smolagents.CodeAgent` itself. The following tools will be defined using the `smolagents.tool` decorator, which prepares the Python functions to be used as tools by the LLM agent.

Note that the function signatures should come with proper typing so as to guide the LLM, as well as a clear function name and, most importantly, well-formatted docstrings indicating what the function does, what the arguments are, what it returns, and, if applicable, what errors can be raised.

In this case, the tools that will be provided to the agent are the following:

- World Time API - `get_time_in_timezone`: fetches the current time for a given location using the World Time API.

- Wikipedia API - `search_wikipedia`: fetches a summary of a Wikipedia entry using the Wikipedia API.

<Tip>

In this case, for the sake of simplicity, the tools have been ported from https://github.com/huggingface/smolagents/blob/main/examples/multiple_tools.py, so all the credit goes to the original authors and maintainers of the `smolagents` GitHub repository. Only the tools for querying the World Time API and the Wikipedia API have been kept, since those have a generous free tier that allows anyone to use them without paying or having to create an account / API token.

</Tip>

```python
from smolagents import tool
```

#### World Time API - `get_time_in_timezone`

```python
@tool
def get_time_in_timezone(location: str) -> str:
    """
    Fetches the current time for a given location using the World Time API.
    Args:
        location: The location for which to fetch the current time, formatted as 'Region/City'.
    Returns:
        str: A string indicating the current time in the specified location, or an error message if the request fails.
    Raises:
        requests.exceptions.RequestException: If there is an issue with the HTTP request.
    """
    import requests
    
    url = f"http://worldtimeapi.org/api/timezone/{location}.json"

    try:
        response = requests.get(url)
        response.raise_for_status()

        data = response.json()
        current_time = data["datetime"]

        return f"The current time in {location} is {current_time}."

    except requests.exceptions.RequestException as e:
        return f"Error fetching time data: {str(e)}"
```

#### Wikipedia API - `search_wikipedia`

```python
@tool
def search_wikipedia(query: str) -> str:
    """
    Fetches a summary of a Wikipedia page for a given query.
    Args:
        query: The search term to look up on Wikipedia.
    Returns:
        str: A summary of the Wikipedia page if successful, or an error message if the request fails.
    Raises:
        requests.exceptions.RequestException: If there is an issue with the HTTP request.
    """
    import requests

    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{query}"

    try:
        response = requests.get(url)
        response.raise_for_status()

        data = response.json()
        title = data["title"]
        extract = data["extract"]

        return f"Summary for {title}: {extract}"

    except requests.exceptions.RequestException as e:
        return f"Error fetching Wikipedia data: {str(e)}"
```

### Create Agent

Since the LLM deployed on Azure AI in this case is a coding-specific LLM, the agent will be created with `smolagents.CodeAgent`, which adds the relevant prompting and parsing functionality to interpret the LLM outputs as code. Alternatively, you could also use `smolagents.ToolCallingAgent`, a tool-calling agent, which requires the given LLM to have tool-calling capabilities.

The `smolagents.CodeAgent` expects both the `model` and the set of `tools` that the model has access to. Then, via the `run` method, you can leverage the full potential of the agent without manual intervention: the agent will use the given tools, if needed, to answer or comply with your initial request.

```python
from smolagents import CodeAgent

agent = CodeAgent(
    tools=[
        get_time_in_timezone,
        search_wikipedia,
    ],
    model=model,
    stream_outputs=True,
)
```

```python
agent.run(
    "Could you create a Python function that given the summary of 'What is a Lemur?'"
    " replaces all the occurrences of the letter E with the letter U (ignore the casing)"     
)
# Summary for Lumur: Lumurs aru wut-nosud primatus of thu supurfamily Lumuroidua, dividud into 8 familius and consisting of 15 gunura and around 100 uxisting spucius. Thuy aru undumic to thu island of Madagascar. Most uxisting lumurs aru small, with a pointud snout, largu uyus, and a long tail. Thuy chiufly livu in truus and aru activu at night.
```

```python
agent.run(
    "What time is in Thailand right now? And what's the time difference with France?"     
)
# The current time in Thailand is 5 hours ahead of the current time in France.
```

## Release resources

Once you are done using the Azure AI Endpoint / Deployment, you can delete the resources as follows, which stops the billing for the instance on which the model is running, along with all the attached costs.

```python
client.online_endpoints.begin_delete(name=os.getenv("ENDPOINT_NAME")).result()
```
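
Alternatively, if you'd rather keep the endpoint around, e.g. to redeploy a different model under the same endpoint name, you can delete just the deployment, since the instance costs are tied to the deployment. A minimal sketch, assuming the same `client` and environment variables defined above:

```python
# Delete only the deployment, keeping the (now empty) endpoint alive
client.online_deployments.begin_delete(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
).result()
```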

## Conclusion

Throughout this example you learnt how to deploy an Azure ML Managed Online Endpoint, on an Azure AI Foundry Hub-based project, running an open model from the Hugging Face Collection in the Azure AI Foundry Hub / Azure ML Model Catalog, how to leverage it to build agents with `smolagents`, and finally, how to stop and release the resources.

If you have any doubt, issue or question about this example, feel free to [open an issue](https://github.com/huggingface/Microsoft-Azure/issues/new) and we'll do our best to help!

---
<Tip>

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Microsoft-Azure/tree/main/examples/azure-ai/build-agents-with-smolagents/azure-notebook.ipynb)!

</Tip>

<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/examples/build-agents-with-smolagents.mdx" />

### Deploy NVIDIA Parakeet for Automatic Speech Recognition (ASR) on Azure AI
https://huggingface.co/docs/microsoft-azure/azure-ai/examples/deploy-nvidia-parakeet-asr.md

# Deploy NVIDIA Parakeet for Automatic Speech Recognition (ASR) on Azure AI

This example showcases how to deploy NVIDIA Parakeet for Automatic Speech Recognition (ASR) from the Hugging Face Collection in Azure AI Foundry Hub as an Azure ML Managed Online Endpoint, powered by Hugging Face's Inference container on top of NVIDIA NeMo. It also covers how to run inference with cURL, `requests`, and the OpenAI Python SDK, and even how to run a Gradio application locally for audio transcription from both recordings and files.

TL;DR NVIDIA NeMo is a scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech). NVIDIA NeMo Parakeet ASR Models attain strong speech recognition accuracy while being efficient for inference. Azure AI Foundry provides a unified platform for enterprise AI operations, model builders, and application development. Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project lifecycle.

---

This example will specifically deploy [`nvidia/parakeet-tdt-0.6b-v2`](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2) from the Hugging Face Hub (or see it on [AzureML](https://ml.azure.com/models/nvidia-parakeet-tdt-0.6b-v2/version/4/catalog/registry/HuggingFace) or on [Azure AI Foundry](https://ai.azure.com/explore/models/nvidia-parakeet-tdt-0.6b-v2/version/4/registry/HuggingFace)) as an Azure ML Managed Online Endpoint on Azure AI Foundry Hub.

`nvidia/parakeet-tdt-0.6b-v2` is a 600-million-parameter automatic speech recognition (ASR) model designed for high-quality English transcription, featuring support for punctuation, capitalization, and accurate timestamp prediction.

This XL variant of the FastConformer architecture integrates the TDT decoder and is trained with full attention, enabling efficient transcription of audio segments up to 24 minutes long in a single pass. The model achieves an RTFx of 3380 on the HF-Open-ASR leaderboard with a batch size of 128. Note: RTFx performance may vary depending on dataset audio duration and batch size.

* Accurate word-level timestamp predictions
* Automatic punctuation and capitalization
* Robust performance on spoken numbers and song lyrics transcription

![NVIDIA Parakeet on the Hugging Face Hub](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-nvidia-parakeet-asr/nvidia-parakeet-hub.png)

![NVIDIA Parakeet on Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-nvidia-parakeet-asr/nvidia-parakeet-azure-ai.png)

For more information, make sure to check [their model card on the Hugging Face Hub](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2/blob/main/README.md) and the [NVIDIA NeMo Documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/asr/models.html).

<Tip>

Note that you can select any Automatic Speech Recognition (ASR) model available on the Hugging Face Hub with the `NeMo` tag and the "Deploy to AzureML" option enabled, or directly select any of the ASR models available on either the Azure ML or the Azure AI Foundry Hub Model Catalog under the "HuggingFace" collection (note that for Azure AI Foundry the Hugging Face Collection will only be available for Hub-based projects). However, only the NVIDIA Parakeet models are powered by NVIDIA NeMo; the rest rely on the Hugging Face Inference Toolkit.

</Tip>
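
As a reference, you can explore the NeMo-tagged ASR models on the Hugging Face Hub programmatically too; a minimal sketch with the `huggingface_hub` Python SDK, assuming the `nemo` and `automatic-speech-recognition` tags mentioned above (note that being listed on the Hub doesn't guarantee availability in the Azure AI Model Catalog):

```python
from huggingface_hub import list_models

# List Hub models tagged with both NeMo and the ASR task
for model in list_models(filter=["nemo", "automatic-speech-recognition"], limit=10):
    print(model.id)
```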

## Pre-requisites

To run the following example, you will need to comply with the following pre-requisites; alternatively, you can also read more about those in the [Azure Machine Learning Tutorial: Create resources you need to get started](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources?view=azureml-api-2):

- An Azure account with an active subscription.
- The Azure CLI installed and logged in.
- The Azure Machine Learning extension for the Azure CLI.
- An Azure Resource Group.
- A project based on an Azure AI Foundry Hub.

For more information, please go through the steps in [Set up Azure AI](https://huggingface.co/docs/microsoft-azure/azure-ai/set-up).

## Setup and installation

In this example, the [Azure Machine Learning SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ml/azure-ai-ml) will be used to create the endpoint and the deployment, as well as to invoke the deployed API. Along with it, you will also need to install `azure-identity` to authenticate with your Azure credentials via Python.

```python
%pip install azure-ai-ml azure-identity --upgrade --quiet
```

More information at [Azure Machine Learning SDK for Python](https://learn.microsoft.com/en-us/python/api/overview/azure/ai-ml-readme?view=azure-python).

Then, for convenience, setting the following environment variables is recommended, as those will be used throughout the example for the Azure ML Client, so make sure to update and set the values accordingly as per your Microsoft Azure account and resources.

```python
%env LOCATION eastus
%env SUBSCRIPTION_ID <YOUR_SUBSCRIPTION_ID>
%env RESOURCE_GROUP <YOUR_RESOURCE_GROUP>
%env AI_FOUNDRY_HUB_PROJECT <YOUR_AI_FOUNDRY_HUB_PROJECT>
```

Finally, you also need to define both the endpoint and deployment names, as those will be used throughout the example too:

<Tip>

Note that endpoint names must be globally unique per region, i.e., even if you don't have any endpoint with that name running under your subscription, if the name is reserved by another Azure customer, then you won't be able to use it. Adding a timestamp or a custom identifier to the name is recommended to prevent running into HTTP 400 validation issues due to an already locked / reserved name. Also, the endpoint name must be between 3 and 32 characters long.

</Tip>

```python
import os
from uuid import uuid4

os.environ["ENDPOINT_NAME"] = f"nvidia-parakeet-{str(uuid4())[:8]}"
os.environ["DEPLOYMENT_NAME"] = f"nvidia-parakeet-{str(uuid4())[:8]}"
```

## Authenticate to Azure ML

Initially, you need to authenticate into the Azure AI Foundry Hub via Azure ML with the Azure ML Python SDK, which will later be used to deploy `nvidia/parakeet-tdt-0.6b-v2` as an Azure ML Managed Online Endpoint in your Azure AI Foundry Hub.

<Tip>

On standard Azure ML deployments you'd need to create the `MLClient` using the Azure ML Workspace as the `workspace_name`, whereas for Azure AI Foundry, you need to provide the Azure AI Foundry Hub name as the `workspace_name` instead, which will deploy the endpoint under the Azure AI Foundry too.

</Tip>

```python
import os
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.getenv("SUBSCRIPTION_ID"),
    resource_group_name=os.getenv("RESOURCE_GROUP"),
    workspace_name=os.getenv("AI_FOUNDRY_HUB_PROJECT"),
)
```

## Create and Deploy Azure AI Endpoint

Before creating the Managed Online Endpoint, you need to build the model URI, which is formatted as `azureml://registries/HuggingFace/models/<MODEL_ID>/labels/latest`, where the `MODEL_ID` is not the Hugging Face Hub ID but rather its name on Azure, as follows:

```python
model_id = "nvidia/parakeet-tdt-0.6b-v2"

model_uri = f"azureml://registries/HuggingFace/models/{model_id.replace('/', '-').replace('_', '-').lower()}/labels/latest"
model_uri
```

<Tip>

To check if a model from the Hugging Face Hub is available in Azure, you should read about it in [Supported Models](https://huggingface.co/docs/microsoft-azure/azure-ai/models). If not, you can always [Request a model addition in the Hugging Face collection on Azure](https://huggingface.co/docs/microsoft-azure/guides/request-model-addition).

</Tip>

Then you need to create the [ManagedOnlineEndpoint via the Azure ML Python SDK](https://learn.microsoft.com/en-us/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint?view=azure-python) as follows.

<Tip>

Every model in the Hugging Face Collection is powered by an efficient inference backend, and each of those can run on a wide variety of instance types (as listed in [Supported Hardware](https://huggingface.co/docs/microsoft-azure/azure-ai/hardware)). Since models and inference engines require a GPU-accelerated instance, you might need to request a quota increase as per [Manage and increase quotas and limits for resources with Azure Machine Learning](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas?view=azureml-api-2).

</Tip>

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

endpoint = ManagedOnlineEndpoint(name=os.getenv("ENDPOINT_NAME"))

deployment = ManagedOnlineDeployment(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    model=model_uri,
    instance_type="Standard_NC40ads_H100_v5",
    instance_count=1,
)
```

```python
client.begin_create_or_update(endpoint).wait()
```

![Azure AI Endpoint from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-nvidia-parakeet-asr/azure-ai-endpoint.png)

<Tip>

In Azure AI Foundry the endpoint will only be listed within the "My assets -> Models + endpoints" tab once the deployment is created, unlike in Azure ML, where the endpoint is shown even if it doesn't contain any active or in-progress deployments.

</Tip>

```python
client.online_deployments.begin_create_or_update(deployment).wait()
```

![Azure AI Deployment from Azure AI Foundry](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-nvidia-parakeet-asr/azure-ai-deployment.png)

<Tip>

Note that whilst the Azure AI Endpoint creation is relatively fast, the deployment will take longer, since it needs to allocate the resources on Azure, so expect it to take ~10-15 minutes; it could as well take longer depending on instance provisioning and availability.

</Tip>

Once deployed, via either the Azure AI Foundry or the Azure ML Studio, you'll be able to inspect the endpoint details, the real-time logs, and how to consume the endpoint, and even use the [monitoring feature](https://learn.microsoft.com/en-us/azure/machine-learning/concept-model-monitoring?view=azureml-api-2), still in preview. Find more information about it at [Azure ML Managed Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints-online?view=azureml-api-2#managed-online-endpoints).
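
You can also fetch the deployment logs programmatically via the Azure ML Python SDK, which comes in handy when debugging a deployment that fails to become healthy; a minimal sketch, assuming the same `client` and environment variables defined above:

```python
# Fetch the latest container logs for the Azure AI / ML Deployment
logs = client.online_deployments.get_logs(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    lines=100,
)
print(logs)
```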

## Send requests to the Azure AI Endpoint

Finally, now that the Azure AI Endpoint is deployed, you can send requests to it. In this case, since the task of the model is `automatic-speech-recognition` and it expects a multi-part request with the audio file attached, the `invoke` method cannot be used, as it only supports JSON payloads.

That being said, you can still send requests to it programmatically via `requests`, via the OpenAI Python SDK, or with cURL, to the `/api/v1/audio/transcriptions` route, which is the OpenAI-compatible route for the Transcriptions API.

To send the requests, you need both the `primary_key` and the `scoring_uri`, which can be retrieved via the Azure ML Python SDK as follows:

```python
api_key = client.online_endpoints.get_keys(os.getenv("ENDPOINT_NAME")).primary_key
api_url = client.online_endpoints.get(os.getenv("ENDPOINT_NAME")).scoring_uri
```

Additionally, since you will need a sample audio file to run the inference over, download e.g. the following file, which is the one showcased in the `nvidia/parakeet-tdt-0.6b-v2` model card:

```python
!wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
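
Alternatively, if you are not running the example within a Jupyter Notebook where `!wget` is available, a plain Python equivalent with `requests` would be:

```python
import requests

# Download the sample audio file showcased in the model card
url = "https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav"
with open("2086-149220-0033.wav", "wb") as f:
    f.write(requests.get(url).content)
```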

### Python `requests`

As the deployed Azure AI Endpoint for ASR expects a multi-part request, you need to send the files (in this case, the audio file) separately from the data (the request parameters, such as the model name or the temperature, among others). To do so, you first need to read the audio file into an `io.BytesIO` object, then prepare the request with the necessary headers, both for the authentication and for the `azureml-model-deployment` header pointing to the actual Azure AI Deployment, and send the HTTP POST with both the file and the data as follows:

```python
import os
from io import BytesIO

import requests

audio_file = BytesIO(open("2086-149220-0033.wav", "rb").read())
audio_file.name = "2086-149220-0033.wav"

response = requests.post(
    api_url,
    headers={
        "Authorization": f"Bearer {api_key}",
        "azureml-model-deployment": os.getenv("DEPLOYMENT_NAME"),
    },
    files={"file": (audio_file.name, audio_file, "audio/wav")},
    data={"model": model_id},
)
print(response.json())
# {'text': "Well, I don't wish to see it any more, observed Phebe, turning away her eyes. It is certainly very like the old portrait."}
```

### OpenAI Python SDK

As the exposed scoring URI is an OpenAI-compatible route, i.e., `/api/v1/audio/transcriptions`, you can leverage the OpenAI Python SDK to send requests to the deployed Azure AI Endpoint.

```python
%pip install openai --upgrade --quiet
```

To use the OpenAI Python SDK with Azure ML Managed Online Endpoints, you need to update the `api_url` value defined above, since the default `scoring_uri` comes with the full route, whereas the OpenAI SDK expects the route only up to and including `/v1`, meaning that the trailing `/audio/transcriptions` should be removed before instantiating the client.

```python
api_url = client.online_endpoints.get(os.getenv("ENDPOINT_NAME")).scoring_uri.replace("/audio/transcriptions", "")
```

<Tip>

Alternatively, you can also build the API URL manually as follows, since endpoint URIs are globally unique per region, meaning that there can only be one endpoint with a given name within the same region:
```python
api_url = f"https://{os.getenv('ENDPOINT_NAME')}.{os.getenv('LOCATION')}.inference.ml.azure.com/api/v1"
```
Or just retrieve it from either the Azure AI Foundry or the Azure ML Studio.

</Tip>

Then you can use the OpenAI Python SDK normally, making sure to include the extra `azureml-model-deployment` header that contains the Azure AI / ML Deployment name.

Via the OpenAI Python SDK, the header can either be set per call to `audio.transcriptions.create` via the `extra_headers` parameter, or via the `default_headers` parameter when instantiating the `OpenAI` client; the latter is the recommended approach, since the header needs to be present on each request, so setting it just once is preferred.

```python
import os
from openai import OpenAI

openai_client = OpenAI(
    base_url=api_url,
    api_key=api_key,
    default_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")},
)

transcription = openai_client.audio.transcriptions.create(
    model=model_id,
    file=open("2086-149220-0033.wav", "rb"),
    response_format="json",
)
print(transcription.text)
# Well, I don't wish to see it any more, observed Phebe, turning away her eyes. It is certainly very like the old portrait.
```

### cURL

Alternatively, you can also just use `cURL` to send requests to the deployed endpoint, with the `api_url` and `api_key` values programmatically retrieved in the OpenAI snippet and now set as environment variables so that `cURL` can use them, as follows:

```python
os.environ["API_URL"] = api_url
os.environ["API_KEY"] = api_key
```

```python
!curl -sS $API_URL/audio/transcriptions \
    -H "Authorization: Bearer $API_KEY" \
    -H "azureml-model-deployment: $DEPLOYMENT_NAME" \
    -H "Content-Type: multipart/form-data" \
    -F file=@2086-149220-0033.wav \
    -F model=nvidia/parakeet-tdt-0.6b-v2
```

Alternatively, you can also just go to the Azure AI Endpoint in either the Azure AI Foundry under "My assets -> Models + endpoints" or in the Azure ML Studio via "Endpoints", and retrieve both the scoring URI and the API Key values, as well as the Azure AI / ML Deployment name for the given model.

### Gradio

[Gradio](https://www.gradio.app/) is the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it. You can also leverage the OpenAI Python SDK to build a simple automatic speech recognition, i.e., speech-to-text, demo that you can use within the Jupyter Notebook cell where you are running it.

<Tip>

Alternatively, you can deploy the Gradio demo connected to your Azure ML Managed Online Endpoint as an Azure Container App, as described in [Tutorial: Build and deploy from source code to Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/tutorial-deploy-from-code?tabs=python). If you'd like us to show you how to do it for Gradio in particular, feel free to [open an issue requesting it](https://github.com/huggingface/Microsoft-Azure/issues/new).

</Tip>

```python
%pip install gradio --upgrade --quiet
```

```python
import os
from pathlib import Path

import gradio as gr
from openai import OpenAI

openai_client = OpenAI(
    base_url=os.getenv("API_URL"),
    api_key=os.getenv("API_KEY"),
    default_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")}
)

def transcribe(audio: Path, temperature: float = 1.0) -> str:
    return openai_client.audio.transcriptions.create(
        model=model_id,
        file=open(audio, "rb"),
        temperature=temperature,
        response_format="text",
    )

demo = gr.Interface(
    fn=transcribe,
    inputs=[
        # https://www.gradio.app/docs/gradio/audio
        gr.Audio(type="filepath", streaming=False, label="Upload or Record Audio"),
        gr.Slider(0, 1, value=0.0, step=0.1, label="Temperature")
    ],
    outputs=gr.Textbox(label="Transcribed Text"),
    title="NVIDIA Parakeet on Azure AI",
    description="Upload or record audio and get the transcribed text using NVIDIA Parakeet on Azure AI via the OpenAI's Transcription API.",
)

demo.launch()
```

![Gradio Interface with Azure ML Endpoint](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/azure-ai/deploy-nvidia-parakeet-asr/azure-ml-gradio.png)

## Release resources

Once you are done using the Azure AI Endpoint / Deployment, you can delete the resources as follows, which stops the billing for the instance on which the model is running, along with all the attached costs.

```python
client.online_endpoints.begin_delete(name=os.getenv("ENDPOINT_NAME")).result()
```

## Conclusion

Throughout this example you learnt how to create and configure your Azure account for Azure ML and Azure AI Foundry, how to create a Managed Online Endpoint running an open model for Automatic Speech Recognition (ASR) from the Hugging Face Collection in the Azure AI Foundry Hub / Azure ML Model Catalog, how to send inference requests to it afterwards with different alternatives, how to build a simple Gradio interface around it, and finally, how to stop and release the resources.

If you have any doubt, issue or question about this example, feel free to [open an issue](https://github.com/huggingface/Microsoft-Azure/issues/new) and we'll do our best to help!

---
<Tip>

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Microsoft-Azure/tree/main/examples/azure-ai/deploy-nvidia-parakeet-asr/azure-notebook.ipynb)!

</Tip>

<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/azure-ai/examples/deploy-nvidia-parakeet-asr.mdx" />

### Guides
https://huggingface.co/docs/microsoft-azure/guides/introduction.md

# Guides

Take a look at our guides on how to get started with Hugging Face models on Microsoft Azure.

- [One-click deployments from the Hugging Face Hub on Azure AI](./one-click-deployment-azure-ai)
- [Request a model addition in the Hugging Face collection on Azure](./request-model-addition)

For more detailed examples please check the "Examples" section under each service.


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/guides/introduction.mdx" />

### Request a model addition in the Hugging Face collection on Azure
https://huggingface.co/docs/microsoft-azure/guides/request-model-addition.md

# Request a model addition in the Hugging Face collection on Azure

At the moment the Hugging Face collection on Azure AI / ML contains over 10,000 open models from the Hugging Face Hub, leveraging open-source inference solutions such as Text Generation Inference (TGI), vLLM, SGLang, Text Embeddings Inference (TEI), or the Hugging Face Inference Toolkit, among others to come.

To request a model addition into the Hugging Face collection on the Azure AI / ML catalog (it's shared among those services), navigate to the model card on the Hugging Face Hub via https://hf.co/models, click the "Deploy" button, and look for the "Deploy on Azure AI" option, which will automatically check whether the given model is already in the collection. If it's not available, a "Request to add" button will appear.

![Request to add to Azure AI in the Hugging Face Hub](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/request-to-add.png)
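
Alternatively, you can also check the availability programmatically via the Azure ML Python SDK, pointing the `MLClient` at the shared `HuggingFace` registry; a minimal sketch, where the model ID is just an illustrative example and the Azure name is derived with the lowercased, dash-separated convention used across the catalog:

```python
from azure.ai.ml import MLClient
from azure.core.exceptions import ResourceNotFoundError
from azure.identity import DefaultAzureCredential

# Scope the client to the shared `HuggingFace` registry instead of a workspace
client = MLClient(credential=DefaultAzureCredential(), registry_name="HuggingFace")

model_id = "Qwen/Qwen2.5-7B-Instruct"  # any Hugging Face Hub model ID
azure_model_name = model_id.replace("/", "-").replace("_", "-").lower()

try:
    model = client.models.get(name=azure_model_name, label="latest")
    print(f"Available as {model.name} (version {model.version})")
except ResourceNotFoundError:
    print("Not in the collection yet, feel free to request its addition!")
```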

Alternatively, you can also [open an issue](https://github.com/huggingface/Microsoft-Azure/issues/new) with the model/s you'd like to see on the Hugging Face collection on Azure.

Before requesting a model addition, you need to make sure that the model or models that you'd like to see on the Hugging Face collection on Azure match the following criteria:

- Have any of the `Transformers`, `Diffusers` or `Sentence-Transformers` tags on the Hugging Face Hub, meaning that the model architecture is compatible with any of those. If the model you'd like to see in the collection doesn't match these criteria, you can check the [Contributing a new model to Transformers](https://huggingface.co/docs/transformers/main/en/modular_transformers) guide on how to add new modular-based model architectures into Transformers.

- Make sure that the task tag is among the tasks supported at the moment, as per the listing at [Azure AI - Supported Tasks](../azure-ai/tasks). If it's not listed there, nor among the upcoming tasks on that same page, then feel free to [open an issue](https://github.com/huggingface/Microsoft-Azure/issues/new) requesting the addition of that given task. Again, as per the point above, it needs to be a Transformers-, Diffusers-, or Sentence-Transformers-compatible task.

- If you want to benefit from the production-grade inference solutions and the OpenAI-compatible interfaces for text generation and embeddings, you should also make sure that the given model has either the `text-generation-inference` (shortened as `tgi`) or the `text-embeddings-inference` (shortened as `tei`) tags, respectively (you can check a model's tags programmatically, as shown in the sketch after this list).

- There might be some cases where either the "Deploy" or the "Deploy on Azure AI" buttons are not enabled (which shouldn't happen, but can); in that case, feel free to [open an issue](https://github.com/huggingface/Microsoft-Azure/issues/new) sharing the model card and we'll identify whether that model can land in our catalog on Azure.

- The Hugging Face Hub models MUST be public and ideally without any kind of gating restrictions; private or gated models won't be considered at the moment.

- Hugging Face Hub models requiring `trust_remote_code` are not allowed for security reasons, unless manually verified or coming from an already verified organization.

- And finally, the model weights need to be in Safetensors format and have passed the JFrog, ClamAV, and the rest of the security checks performed on the Hugging Face Hub.
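
As a reference, you can verify most of the tag-related criteria above programmatically; a minimal sketch with the `huggingface_hub` Python SDK, where the model ID is just an illustrative example:

```python
from huggingface_hub import model_info

# Inspect the Hub tags of a candidate model against the criteria above
info = model_info("Qwen/Qwen2.5-7B-Instruct")
print(info.pipeline_tag)                         # task tag, e.g. text-generation
print("text-generation-inference" in info.tags)  # TGI support
print("safetensors" in info.tags)                # Safetensors weights
print("custom_code" in info.tags)                # models requiring trust_remote_code usually carry this tag
```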


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/guides/request-model-addition.mdx" />

### One-click deployments from the Hugging Face Hub on Azure AI
https://huggingface.co/docs/microsoft-azure/guides/one-click-deployment-azure-ai.md

# One-click deployments from the Hugging Face Hub on Azure AI

This guide introduces the one-click deployment of open-source models from the Hugging Face Hub to Azure AI as Azure ML Managed Online Endpoints for real-time inference.

TL;DR The Hugging Face Hub is a collaborative platform hosting over a million open-source machine learning models, datasets, and demos. It supports a wide range of tasks across natural language processing, vision, and audio, and provides version-controlled repositories with metadata, model cards, and programmatic access via APIs and popular ML libraries. Azure AI Foundry builds on Azure ML but is tailored specifically for generative AI and agent-based applications. Azure Machine Learning is a cloud-based platform for building, deploying, and managing machine learning models at scale. It provides managed infrastructure, including powerful CPU and GPU instances, automated scaling, secure endpoints, and monitoring, making it suitable for both experimentation and production deployment.

The integration between Hugging Face Hub and Azure AI / ML allows users to deploy thousands of Hugging Face models directly onto Azure's managed infrastructure with minimal configuration. This is achieved through a native model catalog in Azure AI Foundry Hub and Azure ML Studio, which features Hugging Face models ready for real-time deployment.

The steps required to deploy an open-source model from the Hugging Face Hub to Azure AI as an Azure ML Managed Online Endpoint for real-time inference are the following:

1. Go to the [Hugging Face Hub Models page](https://huggingface.co/models), and browse all the open-source models available on the Hub.

    <Tip>

    Alternatively, you can also start directly from the [Hugging Face Collection on Azure AI (public URL, no authentication required)](https://ai.azure.com/catalog/publishers/hugging%20face,huggingface), or from the [Hugging Face Collection on Azure AI (requires Azure authentication)](https://ai.azure.com/explore/models?selectedCollection=Hugging+Face) instead of the Hugging Face Hub, and just explore the available models using the Azure AI Model Catalog filters to deploy the models that you want.

    </Tip>

2. Leverage the Hub filters to easily find and discover models based on e.g. task type, size in number of parameters, inference engine support, and much more.

3. Select the model that you want, click the "Deploy" button within its model card, select the "Deploy on Azure AI" option, and then click "Go to model in Azure AI". Note that the "Deploy" button may not be enabled for some models, meaning the model is not available for deployment; that the "Deploy on Azure AI" option may not be listed, meaning the model is not supported by any of the inference engines or tasks supported on Azure AI; or that the "Deploy on Azure AI" option is available but says "Request to add", meaning the model is not available yet but could be published, so you can request its addition into the Hugging Face Collection in the Azure AI Foundry Hub Model Catalog.

4. On Azure AI Foundry, you will be redirected to the model card, where you need to click "Use this model" and fill in the configuration values for the endpoint and the deployment, such as the endpoint name, the instance type, or the instance count, among others; then click "Deploy".

5. After the endpoint is created and the deployment is ready, you will be able to send requests to the deployed API. For more information on how to send inference requests, you can either check the "Consume" tab within the Azure ML Endpoint in Azure AI Foundry, check any of the available Azure AI examples in the documentation, or follow the sketch below.
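
As a reference, sending a request to a deployed text-generation endpoint typically looks like the following sketch with the OpenAI Python SDK, where every value is a placeholder that you'd replace with the ones shown in the "Consume" tab:

```python
from openai import OpenAI

# Placeholder values, copy the real ones from the endpoint's "Consume" tab
client = OpenAI(
    base_url="https://<ENDPOINT_NAME>.<REGION>.inference.ml.azure.com/v1",
    api_key="<PRIMARY_KEY>",
    default_headers={"azureml-model-deployment": "<DEPLOYMENT_NAME>"},
)

completion = client.chat.completions.create(
    model="<MODEL_ID>",  # the Hugging Face Hub model ID
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)
```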

<video src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/microsoft-azure/one-click-deployment-azure-ai.mp4" controls autoplay muted loop />


<EditOnGithub source="https://github.com/huggingface/Microsoft-Azure/blob/main/docs/source/guides/one-click-deployment-azure-ai.mdx" />
