vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

by vLLM Team

$ pip install vllm

🚀 Why vLLM?

vLLM is a standout tool: it provides language-model and persistent-memory capabilities, with native support for REST.

🚀 Language Models

Leverage language models for enhanced productivity.

🧠 Memory & Context

Persistent memory for better context.

🔌 Protocols & Compatibility

REST

⚡ Capabilities

language-models, persistent-memory
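The REST support listed above refers to vLLM's OpenAI-compatible HTTP server. As a minimal sketch, the JSON body for a chat-completions call can be assembled with the standard library alone; the endpoint path, port, and model name below are illustrative assumptions, not values taken from this listing.

```python
# Sketch of an OpenAI-style chat-completions payload for a locally running
# vLLM server. Model name and endpoint are illustrative assumptions;
# substitute whatever model you actually serve.
import json

def build_chat_request(model: str, user_message: str, max_tokens: int = 64) -> dict:
    """Assemble the JSON body for a POST to /v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

body = json.dumps(build_chat_request("facebook/opt-125m", "Say hello."))
print(body)
# Send it with any HTTP client once a server is running, e.g.:
#   curl http://localhost:8000/v1/chat/completions \
#        -H "Content-Type: application/json" -d "$body"
```

Because the server speaks the OpenAI wire format, existing OpenAI client libraries can usually be pointed at it by overriding the base URL.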

🔧 Technical Specifications

Type: Tool
Language: Python
Trust Score: 12.0/100 (New)
Protocols: REST
Install: pip install vllm
Source: https://pypi.org/project/vllm/
Trust Score: 12.0/100 (newly listed; trust is building)
GitHub Stars: star count not available
Verification: not yet claimed or verified

🏷️ Embed Badge

Add a trust badge to your README:

[![Fushu](https://fushu.dev/badge/c8fe6b6ce20e/trust.svg)](https://fushu.dev/agent/c8fe6b6ce20e)

Get Started with vllm

Install now and integrate into your workflow in minutes.

$ pip install vllm
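After installing, a minimal offline-inference sketch looks like the following. It assumes vLLM's documented `LLM` and `SamplingParams` entry points and an example model name; the script degrades gracefully when vllm (or a GPU) is absent, so only the import guard and prompt helper run in that case.

```python
# Minimal offline-inference sketch for vLLM. The model name is an example;
# actually running generation needs `pip install vllm` and a suitable GPU.
import importlib.util

def example_prompts() -> list:
    # A small batch of prompts to generate completions for.
    return ["Hello, my name is", "The capital of France is"]

if importlib.util.find_spec("vllm") is not None:
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")  # example model, not a recommendation
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
    for out in llm.generate(example_prompts(), params):
        print(out.outputs[0].text)
else:
    print("vllm is not installed; run `pip install vllm` first")
```

For serving over REST instead of batch inference, vLLM also ships an OpenAI-compatible API server; see the project documentation linked from the source URL above.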