Camera branch #17394
Conversation
@MaureenHelm, please help review.
samples/camera/README.rst
Outdated
.. code-block:: console

<repeats endlessly>
repeats what endlessly?
Update the README.rst
samples/camera/README.rst
Outdated
@@ -0,0 +1,34 @@
.. _camera:
As written, this seems like a generic camera sample, but I think it's actually a very specific sample for an MT9M114 image sensor connected to a mimxrt1050_evk board. The documentation should reflect this in the title, description, hardware requirements, and references.
This is not only for the mt9m114 and mimxrt1050. The sample code uses the generic API; no specific low-level API is used.
But so far, only the mt9m114 driver and the i.MX RT CSI driver are supported on Zephyr.
OK, thanks for the explanation. We should list what drivers and hardware have been tested in this documentation then.
Update the README of the camera sample.
********************

This project demos camera samples.
It should work with any platform featuring a I2C peripheral interface and a CMOS sensor interface.
The code seems to be very hardware-specific. Will it really work with any CMOS sensor and board with an I2C interface?
The sample code and framework are designed for generic camera features, but more camera drivers and CMOS sensors need to be added for testing.
OK, thanks for explaining.
samples/camera/camera.rst
Outdated
:maxdepth: 1
:glob:

**/*
This assumes there will be more than one camera sample, and that all these samples are in subdirectories. In this PR though, the (one) sample is in the same directory as this camera.rst, so the reference to **/* is not finding any rst files and shows an error message.
Remove the camera.rst and describe the sample in README.rst.
samples/camera/camera.rst
Outdated
@@ -0,0 +1,10 @@
.. _camera-samples:
You'll also need to add a mention of this camera.rst document in the samples/index.rst file. (That will fix the "WARNING: document isn't included in any toctree" messages.)
Remove the camera.rst
Yes, agreed that having just a README.rst makes sense, but you'll still need to add this README.rst to samples/index.rst to have it included in the doc tree. Also @JunYangNXP @loicpoulain, it seems some synergy is possible between this PR and the Video API, Drivers and Sample #17194 PR.
@dbkinder Added to the samples/index.rst and passed the Doc review.
@loicpoulain I will look into your PR.
All checks are passing now. Review history of this comment for details about previous failed status.
@JunYangNXP, could you please have a look at #17194? I try to introduce a generic video subsystem/API. Maybe you could rebase some of your work on top (mt9m114 camera formats, pixel format support, ...).
samples/camera/README.rst
Outdated
To run this sample, a CMOS sensor should be present connected to the board.
The sample outputs the frames per second number on the console in capture
mode and preview mode periodically.
If there is LCD pannel present on the board, the sample outputs the camera
If there is LCD pannel present on the board, the sample outputs the camera
If there is an LCD panel present on the board, the sample outputs the camera
Fixed.
Requirements
************
This project requires a CMOS sensor connected to board.
Can you list the sensor (and LCD display panel) you tested this with, so someone could duplicate what you did?
Added.
samples/camera/README.rst
Outdated
This project demos camera samples.
It should work with any platform featuring a I2C peripheral interface and a CMOS sensor interface.
It does not work on QEMU.
If there is display device on board, the demo displays camera stream on the display device.
If there is display device on board, the demo displays camera stream on the display device.
If there is a display device on the board, the demo displays the camera stream on the display device.
Fixed.
samples/camera/README.rst
Outdated
To run this sample, a CMOS sensor should be present connected to the board.
The sample outputs the frames per second number on the console in capture
mode and preview mode periodically.
If there is LCD pannel present on the board, the sample outputs the camera
You call it a "display device" on line 11, but an "LCD panel" here. You should pick one term and use it consistently.
Fixed.
@pizi-nordic @gon1332 please help look into the PR.
include/drivers/display.h
Outdated
* @retval 0 on success else negative errno code.
*
*/
static inline int display_set_framebuffer(const struct device *dev, void *fb)
IMHO we have to specify in much more detail how this API should work:
- How do we restore the original frame buffer (the one accessed by the write API before this call)?
- Will the write API modify the contents of the provided memory?
- What will happen if we set a different pixel format (especially one requiring a larger frame buffer)? Are we going to ignore potential out-of-bound accesses by the write API? Restore the default frame buffer?
- Should I call this every time the frame buffer is updated to display changes? Is the DMA access continuous (like in a memory-less LCD), or are we just using DMA to update memory in the LCD? (The fallback to the write API suggests the second option.) If this is a one-shot API, how can I know that the DMA operation ended and I can free the buffer?
> IMHO we have to specify in much more detail how this API should work.

[Jun] Yes, I will add more comments into the code.

> How do we restore the original frame buffer (the one accessed by the write API before this call)?

[Jun] I am not sure if every display driver/HW can support this. Actually, I call the "write" API inside display_set_framebuffer in case the display controller does not support it.
The NXP LCD controller has two display buffers: a current buffer and a prepare buffer. The current buffer is what the HW is using, and the prepare buffer is for SW to set. After SW sets the prepare buffer, the current buffer will point to the prepare buffer when the display interrupt is triggered.
The driver callback of display_set_framebuffer sets the HW prepare buffer and saves this pointer in the driver as well.
The user is supposed to call "display_get_framebuffer" after calling "display_set_framebuffer" to display the next frame.
"display_get_framebuffer" will block until the display interrupt signals that the last frame is fully loaded into the LCD.

> Will the write API modify the contents of the provided memory?

[Jun] The legacy write API copies the data to memory provided by the display driver. This write API is not sufficient to display a camera stream, since it is hard to provide a sync mechanism between video in and video out. PS: the SW copy is not good for performance.

> What will happen if we set a different pixel format (especially one requiring a larger frame buffer)? (Are we going to ignore potential out-of-bound accesses by the write API? Restore the default frame buffer?)

[Jun] In my sample code, the camera configuration is associated with "display_get_capabilities" for the resolution and pixel format of the FB.

> Should I call this every time the frame buffer is updated to display changes? Is the DMA access continuous (like in a memory-less LCD) or are we just using DMA to update memory in the LCD? If this is a one-shot API, how can I know that the DMA operation ended and I can free the buffer?

[Jun] This is why we need to add an additional display interface to make sure the FB is fully loaded into the LCD before we can restore the FB. As I commented above, this is only for display HW that does not support "display_set_framebuffer" and "display_get_framebuffer".
@pizi-nordic
My comments yesterday may be incorrect.
In order to be more friendly, I just changed the display API. PS: the display write API also works for displaying a camera stream, but there is a memcpy that consumes some CPU cycles.
The display API I added is only used for zero-copy display, as below:

static inline int display_set_framebuffer(const struct device *dev, void *fb,
					  void **last_fb);

where last_fb is the pointer, returned by the API, to the last frame buffer.
The basic work flow of the camera is:

camera_get_prime/secondary -> camera_power -> camera_get_cap -> camera_configure -> camera_start ->
camera_acquire_fb -> display -> camera_release_fb(last fb) -> (loop back to camera_acquire_fb)
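That capture/display loop could be sketched in C as below, using the API names proposed in this PR (camera_get_primary(), camera_acquire_fb(), the three-argument display_set_framebuffer(), ...). The headers, exact signatures, and the camera_configure() parameters are assumptions inferred from this discussion, not a merged Zephyr API:

```c
/* Sketch of the zero-copy capture/display loop from this PR's proposed
 * camera API; signatures are inferred from the discussion and may differ
 * from the actual patch. */
#include <device.h>
#include <display.h>
#include <camera.h>	/* hypothetical header from this PR */

void camera_preview(void)
{
	struct device *camera_dev = camera_get_primary();
	struct device *display_dev =
		device_get_binding(CONFIG_CAMERA_DISPLAY_DEV_NAME);
	struct camera_capability cap;
	void *fb, *last_fb;

	camera_power(camera_dev, true);
	camera_get_cap(camera_dev, &cap);
	camera_configure(camera_dev, &cap);	/* resolution/format from cap */
	camera_start(camera_dev);

	while (1) {
		/* Get the next filled frame buffer from the camera driver. */
		fb = camera_acquire_fb(camera_dev);

		/* Hand it to the display; last_fb receives the buffer the
		 * display has finished scanning out. */
		display_set_framebuffer(display_dev, fb, &last_fb);

		/* Recycle the previous frame buffer back to the camera. */
		if (last_fb != NULL) {
			camera_release_fb(camera_dev, last_fb);
		}
	}
}
```

The point of the last_fb out-parameter is that the application never frees a buffer the LCD controller is still scanning out: the display driver only hands a buffer back once the hardware has switched to the new one.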
@pizi-nordic
I removed the display code change from this PR and only kept the camera-related changes.
I also split some commits.
Thank you @JunYangNXP. Could you please put some API description into camera.h?
I would like to fully understand the model for camera devices you propose.
Thank you @JunYangNXP. At some (high) level the API looks similar to the one proposed in #17194. Do you think that you and @loicpoulain will be able to work together in order to have one set of APIs satisfying the needs of both sides?
@pizi-nordic
I already had a discussion with @loicpoulain about this. You can read our comments.
Still, I recommend that the video subsystem designed by @loicpoulain be used for SW/HW video codecs, for which a camera could be an input device.
For the camera subsystem, according to the requirements of current IoT devices, we need to support:
- Multiple cameras on board capturing images from different scopes, something like a smartphone with back/front cameras.
- User applications should not access image sensors directly. An image sensor should only be accessed by the camera driver; the user application only needs to care about the camera device.
- Most image sensor interfaces present on boards are generic and support plug/unplug. Software should support probing the image sensor at run time, with no SW update needed if the image sensor is changed.

PS: the performance of the NXP camera under @loicpoulain's video framework is not as expected.
IMHO, in @loicpoulain's solution there is the possibility to have multiple cameras as multiple devices. However, I do not know if such an approach satisfies your needs.
I personally would like to have a somewhat more flexible "Video 4 Zephyr" API taking into account all aspects of video capture and processing, instead of a set of separate APIs, each supporting only one class of devices.
I also think that both #17194 and this PR do not provide a solid model of a camera/video device (and the corresponding processing pipeline) that could be used in scenarios different from the one presented in the attached sample.
I recommend starting by setting the requirements and modelling the high-level API, similarly to what we did with the logger: #6186 (comment)
Such a description could then be discussed at the API meeting.
Hi @JunYangNXP,

> Still I recommend the video subsystem designed by @loicpoulain to be used for SW/HW video codec for which camera could be input device.

There is no need to add complex layering; the proposed generic Video API, which has been reworked according to various comments since your review, is perfectly able to manage camera video devices on its own. Your API version is very similar in the end (I see you abstracted the sensor bus, which is the right way to go), so I'm not sure we are close to agreeing.

> 1. Support multiple cameras on board to capture images from different scopes, something like smartphone which has back/front cameras.

The Zephyr device/driver architecture supports this perfectly; I don't understand the point here.

> 2. User application should not access images sensor directly. Image sensor should be only accessed by camera driver. User application only needs to care about the camera device.

This is something I reworked in #17194 (since your argument of making it simpler was valid); now the 'user' interfaces with the main video device only. However, note that we don't refer to desktop users here but to people who develop embedded projects, hackers, etc., so we don't really want to hide devices/hardware but make them easily accessible via a set of APIs.

> 3. Most interface of image sensor present on board is generic and supports plug/unplug. Software should support this to probe image sensor in run-time and no SW update if image sensor is changed.

Again, I2C devices are 'enumerable' via device tree at build time, not at runtime, like any other SPI, UART, etc. device. If you want to support multiple devices, add the corresponding entries as I2C child nodes, with a valid compatible string and corresponding drivers.

> PS, the performance of NXP camera under @loicpoulain's video framework is not as expected.

As I said, it's not due to the 'framework', but to the way the camera is configured by the initial basic MT9M114 driver: 640x480@7Hz. I'm not sure what the expectation is; I believe you refer to 30 fps, or 60? I don't think it's a problem for the samples, but I'm really open to any change here improving the support. If you agree, I (or you) can merge both MT9M114 versions; let me know?
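The build-time enumeration described above could look roughly like the following devicetree overlay sketch; the controller label, node name, unit address, and compatible string are illustrative assumptions, not the actual bindings from either PR:

```dts
/* Hypothetical overlay: an MT9M114 image sensor described as a child of
 * an I2C controller, so its driver is bound at build time via the
 * compatible string rather than probed at runtime. */
&i2c1 {
	status = "okay";

	mt9m114@48 {
		compatible = "on,mt9m114";
		reg = <0x48>;	/* illustrative I2C address */
	};
};
```

With such a description, swapping the sensor means changing the overlay and rebuilding, which is the trade-off being debated against runtime probing in this thread.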
Hi @pizi-nordic,
I think both #17194 and this PR really do support a processing pipeline. And I also don't get what you said about "each supporting only one class of devices." The implementation of the pipeline really depends on the specific HW driver design; the subsystem and API only provide the interface to the user.
Hi @pizi-nordic
Just saw your latest code; please forget 2) :)
About 3): I talked with @galak about this before; actually, I2C devices do support being enumerated at run time, but maybe it's not generically used.
About performance: you could refer to my PR for how to configure the camera/sensor, to rule out suspicion of your framework.
one more edit for the docs (and then more comments from the technical folks I suspect :)
@JunYangNXP, I took the time to review this PR.
You can find some questions and comments below:
- The NXP CSI driver does not rely on the NXP CSI HAL; any reason for that?
- The CAMERA_DISPLAY_DEV_NAME config is defined in the /drivers/camera Kconfig but is not used in the camera driver, and it seems specific to your use-case/sample.
- Two modes are defined, CAMERA_CAPTURE_MODE and CAMERA_PREVIEW_MODE. It should be simpler: a camera just acquires images. It is up to the user to configure the resolution and to acquire one or more images.
- The camera framework should not rely on display for supporting camera; ideally both the display and video/camera subsystems will share common headers (e.g. for pixel formats).
- The hardware layout (connections) is known at build time and so should be properly described in the device tree. For example, the image sensor should be a child of the I2C node. The same goes for all this runtime detection (list, scan, strcmp): it is unnecessary, since everything is enumerated via the device tree. The sensor should be properly identified with a compatible string and bound to a dedicated driver.
- This framework is very specific to your hardware layout (a parallel interface and a sensor on an I2C bus), but it does not scale easily to support other configurations like the uCAM-III (cf. @gon1332's PR #14596), which is simply behind a serial bus, or other cameras like USB, SPI, etc.

I'm in favor of a simpler API, allowing to abstract the complexity and to support more than a single specific camera configuration. That's the goal of PR #17194, which is not limited to cameras but could also be used for video HW accelerator support and output devices (e.g. a USB UVC gadget).
So I think we should collaborate to get an adequate solution. In particular, your MT9M114 driver has enhanced support for better framerate and controls (brightness, ...). But let's start simple and improve the drivers and subsystem progressively. For sure we can discuss/meet if necessary.
Add I2C image sensor framework to support multiple image sensors. Software goes through the image sensor support list to identify the image sensor present on board. In general, image sensor is only accessed by camera drivers. Signed-off-by: Jun Yang <jun.yang@nxp.com>
1) Support multiple cameras present on board. 2) Camera API for user to access camera device. 3) Each camera is associated to one image sensor present on board. 4) Image sensor is only accessed by the camera driver it is associated to. camera_power -> camera_get_cap -> camera_configure -> camera_start -> camera_acquire_fb -> camera_release_fb (looping back to camera_acquire_fb). Signed-off-by: Jun Yang <jun.yang@nxp.com>
Support ON MT9M114 under image sensor framework. Signed-off-by: Jun Yang <jun.yang@nxp.com>
Add CSI node for RT chip and enable it on mimxrt1050_evk board. Signed-off-by: Jun Yang <jun.yang@nxp.com>
Add NXP i.MXRT CSI driver under camera framework. Signed-off-by: Jun Yang <jun.yang@nxp.com>
Camera sample code. Display camera stream in capture mode and preview mode respectively if there is display device present on board. Log frames counter per-second on both modes periodically. Signed-off-by: Jun Yang <jun.yang@nxp.com>
Hi @pizi-nordic @loicpoulain @galak @MaureenHelm
Yes, that is exactly the idea! We need to think about all these use cases to make it scalable.
No problem with that, once we agree on the API.
struct device *camera_dev = camera_get_primary();
struct device *display_dev =
	device_get_binding(CONFIG_CAMERA_DISPLAY_DEV_NAME);
struct display_capabilities diaplay_cap;
Should this be display_cap and display_dsc? Seems like a misspelling.
I don't want the names to be too long.
I see "struct display_capabilities cap;" in Lvgl_color_1.c (lib\gui\lvgl).
How about "display_desc"?
I was just commenting that diaplay in that name should be spelled as display.
Sorry for this typo that I didn't notice :) I will fix it.
u16_t width;
u16_t height;
enum img_sensor_effect effect;
enum display_pixel_format pixformat;
I'm wondering if it is a good idea to use display API specific enums in image sensor API.
Maybe it would be better to move the pixel formats to a generic header file.
I totally agree with you and am planning to create a new header for pixel formats, resolutions, etc. shared by both display and camera.
module = CAMERA
module-str = camera

config CAMERA_DISPLAY_DEV_NAME
Should this be put under a boolean option? Right now it looks like a camera requires a display.
This makes sense and I am trying to add it.
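The reviewer's suggestion could look roughly like the Kconfig sketch below; the CAMERA_DISPLAY option name and prompts are illustrative assumptions, only CAMERA_DISPLAY_DEV_NAME comes from the PR:

```kconfig
# Hypothetical fragment: gate the display dependency behind a boolean
# so enabling the camera does not unconditionally require a display.
config CAMERA_DISPLAY
	bool "Display the camera stream on a display device"
	depends on DISPLAY
	help
	  Route the captured camera stream to a display device.

if CAMERA_DISPLAY

config CAMERA_DISPLAY_DEV_NAME
	string "Display device name used by the camera sample"

endif # CAMERA_DISPLAY
```

With this shape, boards without a panel can still enable the camera driver; the sample simply falls back to logging the frame counter on the console.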
Per my understanding, this Video subsystem can be based on your Video PR. The 3 subsystems I mentioned could be under the Video framework.
After all, this is what we need to do.

I agree.
The very basic support (for NXP RT) is part of the Video 4 Zephyr PR, though it's mainly to demonstrate and provide an initial sample for the subsystem/API PR, meaning the support needs to be enhanced as discussed for MT9M114. At this point, either we go with the basic implementation and patch/rework MT9M114 after the subsystem is merged, or we take e.g. your MT9M114 version (with interface adjustments) in the initial PR.
Yes, TODO.
Yes, TODO. Support for these three components would represent a fairly good panel of video subsystem capabilities and good examples for further hardware drivers.
@loicpoulain please check your mail about my proposal.
I suspect diaplay is a misspelling of display.
@JunYangNXP what are your plans with this PR?
The Video feature has been pushed, which has the same function as this PR.
@JunYangNXP should we close this PR then?
Yes, sure.
Add a camera and image sensor framework to support multiple cameras and image sensors on board.
The NXP MIMXRT CSI driver and the ON Semiconductor MT9M114 driver are added under this framework.
Sample code is included for reference; it displays the camera stream and also logs the frame counter per second.