v2.5.0
Release Artifacts
- Docker container: tags `v2.5.0-dgpu` and `v2.5.0-igpu`
- Python wheel: `pip install holoscan==2.5.0`
- Debian packages: `2.5.0.1-1`
- Documentation
See supported platforms for compatibility.
Release Notes
New Features and Improvements
Core
- Updates to the Python decorator API simplify the syntax for converting functions to Holoscan operators. A function returning a tuple of arrays can now easily be decorated to emit these arrays from separate output ports (see the sketch after this list).
- Added support for application profiling with NSight Systems and NVTX via the `HOLOSCAN_ENABLE_PROFILE` environment variable.
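As a rough illustration of the tuple-emitting pattern, here is a minimal sketch. It assumes the `holoscan.decorator.create_op` API, that the elements of the returned tuple map in order to the declared output ports, and that conditions such as `CountCondition` can be passed positionally when instantiating the decorated operator; the port names `x`/`y` are illustrative choices, not part of this release note.

```python
import numpy as np

from holoscan.conditions import CountCondition
from holoscan.core import Application
from holoscan.decorator import create_op


# Two declared output ports: the first element of the returned tuple is
# intended to be emitted on port "x", the second on port "y".
@create_op(outputs=("x", "y"))
def source():
    return np.zeros(4, dtype=np.float32), np.ones(4, dtype=np.float32)


# Input port names are matched to the function's parameter names.
@create_op(inputs=("x", "y"))
def printer(x, y):
    print("x:", x, "y:", y)


class TupleApp(Application):
    def compose(self):
        # CountCondition limits the source to three emissions (assumed to be
        # accepted positionally, as with regular operator constructors).
        src = source(self, CountCondition(self, 3), name="src")
        sink = printer(self, name="sink")
        # Connect each named output port to the matching input port.
        self.add_flow(src, sink, {("x", "x"), ("y", "y")})


if __name__ == "__main__":
    TupleApp().run()
```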
Operators/Resources
- The Holoviz operator now has a new parameter, `display_color_space`. This allows HDR output on Linux distributions and displays supporting that feature.
- A bug in the Holoviz module that prevented using multiple `HolovizOp` operators with the event-based or multi-thread schedulers was fixed. To demonstrate this, the existing `video_replayer` example has a new `dual_window` parameter in its YAML config that can be set to `true` in order to run a variant of that example which opens a pair of display windows that run in parallel.
Holoviz module
- The Holoviz operator and module now have a new parameter, `display_color_space`. This allows HDR output on Linux distributions and displays supporting that feature.
- The Holoviz operator and module now support callbacks. These callbacks can be used to receive updates on key presses, mouse position and buttons, and window size. Additionally, a callback is executed when `HolovizOp` finishes drawing layers, allowing apps to perform additional work once rendering of the layers is complete (a sketch follows below).
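A hedged sketch of hooking a callback into `HolovizOp` follows; the `key_callback` parameter name and its three-argument signature are assumptions made for illustration (consult the HolovizOp API reference for the exact callback names and signatures), and the replayer configuration is a placeholder.

```python
from holoscan.core import Application
from holoscan.operators import HolovizOp, VideoStreamReplayerOp


class ViewerApp(Application):
    def compose(self):
        replayer = VideoStreamReplayerOp(
            self,
            name="replayer",
            directory="/path/to/data",  # placeholder: location of a recorded stream
            basename="video",  # placeholder: basename of the recording
        )

        # Assumed callback: invoked on key events so the app can react to
        # user input (here it only logs the event).
        def on_key(key, action, modifiers):
            print(f"key={key} action={action} modifiers={modifiers}")

        visualizer = HolovizOp(
            self,
            name="holoviz",
            tensors=[dict(name="", type="color")],
            key_callback=on_key,  # assumption: parameter name for v2.5 callback support
        )
        self.add_flow(replayer, visualizer, {("output", "receivers")})


if __name__ == "__main__":
    ViewerApp().run()
```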
Breaking Changes
- Compile-time type checking of arguments passed to logging functions may be more strict than before. If you see compilation errors related to passing enum types to the `HOLOSCAN_LOG_*` functions, this can be resolved either by casting the enum to an integer type or by defining a `fmt::formatter` for the enum type (see https://fmt.dev/latest/api/#formatting-user-defined-types).
Bug fixes
Issue | Description |
---|---|
4728118 | Holoviz window initialization was not thread-safe, so multiple Holoviz windows could not be launched at once. |
4866740 | The `run vscode_remote` command failed with the message `docker cp /home/xxx/.ssh/id_ed25519 xxxxxx:/home/holoscan-sdk/.ssh/ no such directory`. The command has been fixed to reflect the home directory change (from `/home/holoscan-sdk` to `/home/holoscan`) in the container. |
4638505 | A distributed application bug was fixed which failed to properly allow tensors to be sent from one fragment when the destination operator in another fragment was a GXFOperator. Connections previously worked correctly only when the destination operator was a native operator; they now work as expected for GXF operators as well. |
Known Issues
This section supplies details about issues discovered during development and QA but not resolved in this release.
Issue | Description |
---|---|
4062979 | When operators connected in a Directed Acyclic Graph (DAG) are executed by a multithreaded scheduler, their execution order in the graph is not guaranteed to be respected. |
4267272 | AJA drivers cannot be built with RDMA on IGX SW 1.0 DP iGPU due to a missing `nv-p2p.h`. Expected to be addressed in IGX SW 1.0 GA. |
4384768 | No RDMA support on JetPack 6.0 DP and IGX SW 1.0 DP iGPU due to a missing nv-p2p kernel module. Expected to be addressed in JP 6.0 GA and IGX SW 1.0 GA respectively. |
4190019 | Holoviz segfaults on multi-GPU setups when specifying the device using the `--gpus` flag with `docker run`. The current workaround is to use `CUDA_VISIBLE_DEVICES` in the container instead. |
4210082 | V4L2 applications segfault at exit or crash at start with `_PyGILState_NoteThreadState: Couldn't create autoTSSkey mapping`. |
4339399 | High CPU usage observed with the video_replayer_distributed application. While the high CPU usage associated with the GXF UCX extension has been fixed since v1.0, distributed applications using the MultiThreadScheduler (with the `check_recession_period_ms` parameter set to 0 by default) may still experience high CPU usage. Setting the `HOLOSCAN_CHECK_RECESSION_PERIOD_MS` environment variable to a value greater than 0 (e.g. 1.5) can help reduce CPU usage; however, this may result in increased latency for the application until the MultiThreadScheduler switches to an event-based multithreading model. |
4318442 | The UCX `cuda_ipc` protocol doesn't work in Docker containers on x86_64. As a workaround, we are currently disabling the UCX `cuda_ipc` protocol on all platforms via the `UCX_TLS` environment variable. |
4325468 | The V4L2VideoCapture operator only supports YUYV and AB24 source pixel formats, and only outputs the RGBA GXF video format. Other source pixel formats compatible with V4L2 can be manually defined by the user, but they are assumed to be equivalent to RGBA8888. |
4325585 | Applications using MultiThreadScheduler may exit early due to timeouts. This occurs when the `stop_on_deadlock_timeout` parameter is improperly set to a value equal to or less than `check_recession_period_ms`, particularly if `check_recession_period_ms` is greater than zero. |
4301203 | HDMI IN fails in v4l2_camera on IGX Orin Devkit for some resolutions or formats. Try the latest firmware as a partial fix. Driver-level fixes are expected in IGX SW 1.0 GA. |
4384348 | UCX termination (via ctrl+c, pressing 'Esc', or clicking the close button) is not smooth and can show multiple error messages. |
4481171 | Running the driver for a distributed application on IGX Orin devkits fails when connected to other systems through eth1. A workaround is to use the eth0 port to connect to other systems for distributed workloads. |
4458192 | In scenarios where distributed applications have both the driver and workers running on the same host, either within a Docker container or directly on the host, there is a possibility of encountering "Address already in use" errors. A potential solution is to assign a different port number to the `HOLOSCAN_HEALTH_CHECK_PORT` environment variable (default: 8777), for example, by using `export HOLOSCAN_HEALTH_CHECK_PORT=8780`. |
4782662 | Installing the Holoscan wheel 2.0.0 or later as root causes an error. |
4768945 | Distributed applications crash when the engine file is unavailable or is still being generated. |
4753994 | Debugging a Python application may lead to a segfault when expanding an operator variable. |
| | Wayland: `holoscan::viz::Init()` with an existing GLFW window fails. |
4394306 | When Python bindings are created for a C++ operator, it is not always guaranteed that the destructor will be called prior to termination of the Python application. As a workaround, it is recommended that any resource cleanup happen in an operator's stop() method rather than in the destructor (see the sketch after this table). |
4824619 | iGPU: Rendering YUV images with HolovizOp fails on first run. |
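To illustrate the stop()-based cleanup recommended for issue 4394306, here is a minimal sketch; the operator, its port, and the file handle standing in for an external resource are hypothetical.

```python
from holoscan.core import Operator, OperatorSpec


class SensorOp(Operator):
    """Hypothetical operator that owns an external resource."""

    def __init__(self, fragment, *args, **kwargs):
        self._device = None  # handle to an external resource, acquired in start()
        super().__init__(fragment, *args, **kwargs)

    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def start(self):
        # Acquire the resource when the operator starts.
        self._device = open("/dev/null", "rb")  # stand-in for a real device handle

    def compute(self, op_input, op_output, context):
        op_output.emit({"bytes_read": 0}, "out")

    def stop(self):
        # Clean up here rather than in __del__: stop() runs during graph
        # shutdown, while the destructor may never be called before the
        # Python application terminates (issue 4394306).
        if self._device is not None:
            self._device.close()
            self._device = None
```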