---
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
library_name: litert-lm
tags:
- chat
---

# litert-community/Qwen2.5-1.5B-Instruct

This model provides a few variants of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) that are ready for deployment on Android using the [LiteRT (fka TFLite) stack](https://ai.google.dev/edge/litert), the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference), and [LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM).

## Use the models

### Colab

*Disclaimer: The target deployment surfaces for the LiteRT models are Android, iOS, and Web, and the stack has been optimized for performance on these targets. Trying out the system in Colab is an easier way to familiarize yourself with the LiteRT stack, with the caveat that performance (memory and latency) on Colab could be much worse than on a local device.*

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/Qwen2.5-1.5B-Instruct/blob/main/notebook.ipynb)

### Android

#### Edge Gallery App

* Download or build the [app](https://github.com/google-ai-edge/gallery?tab=readme-ov-file#-get-started-in-minutes) from GitHub.
* Install the [app](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&pli=1) from Google Play.
* Follow the instructions in the app.

#### LLM Inference API

* Download and install [the apk](https://github.com/google-ai-edge/gallery/releases/latest/download/ai-edge-gallery.apk).
* Follow the instructions in the app.

To build the demo app from source, please follow the [instructions](https://github.com/google-ai-edge/gallery/blob/main/README.md) from the GitHub repository. A minimal Kotlin sketch of calling the API directly from your own app is included at the end of this section.

### iOS

* Clone the [MediaPipe samples](https://github.com/google-ai-edge/mediapipe-samples) repository and follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples/llm_inference/ios/README.md) to build the LLM Inference iOS Sample App using Xcode.
* Run the app via the iOS simulator or deploy to an iOS device.
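If you want to call the LLM Inference API from your own Android code rather than through the sample apps, the sketch below shows the basic flow (it requires the `com.google.mediapipe:tasks-genai` Gradle dependency). This is a minimal sketch, not the official sample: the on-device path is an assumption, and the actual file name of the `.task` bundle you download from this repo will differ.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import com.google.mediapipe.tasks.genai.llminference.LlmInference.LlmInferenceOptions

fun runQwen(context: Context): String {
    // Hypothetical path: push a .task bundle from this repo to the device first,
    // e.g. `adb push <downloaded-model>.task /data/local/tmp/llm/model.task`.
    val options = LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.task")
        .setMaxTokens(1024) // prompt + response tokens; must fit within the model's context length
        .build()

    // Loads the model and creates the inference engine.
    val llm = LlmInference.createFromOptions(context, options)

    // Blocking, single-shot generation.
    return llm.generateResponse("Write a haiku about on-device inference.")
}
```

Model loading is the expensive step, so create the `LlmInference` engine once and reuse it across prompts.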
## Performance

### Android

Note that all benchmark stats are from a Samsung S25 Ultra with multiple prefill signatures enabled.

| Backend | Quantization scheme | Context length | Prefill (tokens/sec) | Decode (tokens/sec) | Time-to-first-token (sec) | Model size (MB) | Peak RSS memory (MB) | GPU memory (RSS, MB) | Model |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CPU | fp32 (baseline) | 1280 | 49.50 | 10 | 21.25 | 6182 | 6254 | N/A | 🔗 |
| CPU | dynamic_int8 | 1280 | 297.58 | 34.25 | 3.71 | 1598 | 1997 | N/A | 🔗 |
| CPU | dynamic_int8 | 4096 | 162.72 | 26.06 | 6.57 | 1598 | 2216 | N/A | 🔗 |
| GPU | dynamic_int8 | 1280 | 1667.75 | 30.88 | 3.63 | 1598 | 1846 | 1505 | 🔗 |
| GPU | dynamic_int8 | 4096 | 933.45 | 27.30 | 4.77 | 1598 | 1869 | 1505 | 🔗 |
* For the list of supported quantization schemes, see [supported-schemes](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/quantize#supported-schemes). For these models, we use prefill signature lengths of 32, 128, 512, and 1280.
* Model size: measured as the size of the .tflite flatbuffer (the serialization format for LiteRT models).
* Memory: an indicator of peak RAM usage.
* Inference on CPU is accelerated via the LiteRT [XNNPACK](https://github.com/google/XNNPACK) delegate with 4 threads.
* The benchmark is run with the cache enabled and initialized; during the first run, time-to-first-token may differ.
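To compare the CPU and GPU rows above in your own app, recent versions of the LLM Inference API accept a preferred backend at model-load time. This is a hedged sketch: verify that `setPreferredBackend` and the `Backend` enum exist in the MediaPipe version you build against, and the model path is the same hypothetical one used earlier.

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import com.google.mediapipe.tasks.genai.llminference.LlmInference.LlmInferenceOptions

// Assumption: setPreferredBackend is available in your MediaPipe release;
// if the requested backend cannot be used, behavior depends on the runtime.
val gpuOptions = LlmInferenceOptions.builder()
    .setModelPath("/data/local/tmp/llm/model.task")   // hypothetical path, as above
    .setPreferredBackend(LlmInference.Backend.GPU)     // or LlmInference.Backend.CPU
    .build()
```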