
vision-LLMs

Exploring the use of Vision Large Language Models (V-LLMs).

Description

Implements real-time video inference with V-LLMs based on the NVIDIA-optimized nanoowl models.
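The overall structure is a real-time loop: grab a frame, run the model, collect detections. Below is a minimal, self-contained sketch of that loop with a stub predictor standing in for the nanoowl engine; the function names and the detection format are illustrative assumptions, not the repository's actual API.

```python
import time

def run_video_inference(read_frame, predictor, max_frames=None):
    """Generic real-time inference loop.

    `read_frame()` returns the next frame, or None at end of stream.
    `predictor(frame)` returns detections for one frame (format is up
    to the caller; nanoowl would return boxes/labels/scores).
    Returns the list of per-frame detections and a rough FPS estimate.
    """
    detections, count = [], 0
    start = time.perf_counter()
    while max_frames is None or count < max_frames:
        frame = read_frame()
        if frame is None:
            break
        detections.append(predictor(frame))
        count += 1
    elapsed = time.perf_counter() - start
    fps = count / elapsed if elapsed > 0 else 0.0
    return detections, fps

# Stub usage: synthetic frames stand in for a camera, and a dummy
# predictor stands in for the nanoowl engine.
frames = iter([[0] * 4, [1] * 4, None])
dets, fps = run_video_inference(lambda: next(frames),
                                lambda f: {"boxes": []})
```

In the real project the frame source would be a camera or video stream and the predictor would be the TensorRT-backed nanoowl engine; the loop shape stays the same.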

Requirements

  • Docker and Docker Compose
  • An NVIDIA GPU with appropriate drivers and the NVIDIA Container Toolkit
  • A compatible nanoowl image encoder engine

Usage

    docker compose up --build
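For the container to see the GPU, the compose file needs a GPU device reservation. A sketch of what that section might look like, assuming Compose v2 syntax (the service name and build context here are placeholders, not taken from the repository):

```yaml
# Hypothetical docker-compose.yml fragment: reserve one NVIDIA GPU
# for the inference service via the standard Compose device reservation.
services:
  vision-llm:
    build: .
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

This reservation requires the NVIDIA Container Toolkit listed under Requirements; without it, `docker compose up` will fail to attach the GPU.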
