Exploring the use of V-LLMs.

This project implements real-time video inference with V-LLMs, based on the NVIDIA-optimized nanoowl models.

Prerequisites:
- Docker and Docker Compose
- An NVIDIA GPU with appropriate drivers and the NVIDIA Container Toolkit
- A compatible nanoowl image encoder engine
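The GPU prerequisite above usually means the Compose file must explicitly request GPU access from the NVIDIA Container Toolkit. A minimal sketch of what that can look like, assuming a single service built from a local Dockerfile (the service name and layout here are illustrative, not taken from this repository):

```yaml
# Hypothetical docker-compose.yml sketch; adapt names and paths to the project.
services:
  nanoowl:
    build: .                      # build the image from the local Dockerfile
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia      # expose an NVIDIA GPU via the Container Toolkit
              count: 1            # reserve one GPU for the service
              capabilities: [gpu]
```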
To build and run the stack: `docker compose up --build`