
Camera branch #17394

Closed
wants to merge 6 commits into from

Conversation

@JunYangNXP (Contributor) commented Jul 8, 2019

Add a camera and image sensor framework to support multiple cameras and image sensors on a board.
An NXP i.MX RT CSI driver and an ON Semiconductor MT9M114 driver are added under this framework.
Sample code is included for reference; it displays the camera stream and logs the frame count per second as well.

@JunYangNXP (Contributor Author)

@MaureenHelm, please help review.

@JunYangNXP requested a review from vanwinkeljan as a code owner, July 9, 2019 10:20

.. code-block:: console

<repeats endlessly>
Contributor

repeats what endlessly?

Contributor Author

Update the README.rst

@@ -0,0 +1,34 @@
.. _camera:
Contributor

As written, this seems like a generic camera sample, but I think it's actually a very specific sample for an MT9M114 image sensor connected to a mimxrt1050_evk board. The documentation should reflect this in the title, description, hardware requirements, and references.

Contributor Author

It's not only for the MT9M114 and mimxrt1050. The sample code uses the generic camera API; no hardware-specific low-level API is used.
So far, though, only the MT9M114 driver and the i.MX RT CSI driver are supported on Zephyr.

@dbkinder (Contributor) Jul 9, 2019

OK, thanks for the explanation. We should list what drivers and hardware have been tested in this documentation then.

Contributor Author

Updated the README of the camera sample.

********************

This project demos camera samples.
It should work with any platform featuring a I2C peripheral interface and a CMOS sensor interface.
Contributor

The code seems to be very hardware-specific. Will it really work with any CMOS sensor and board with an I2C interface?

Contributor Author

The sample code and framework are designed for generic camera features, but more camera drivers and CMOS sensors need to be added for testing.

Contributor

OK, thanks for explaining.

:maxdepth: 1
:glob:

**/*
Contributor

This assumes there will be more than one camera sample, and that all these samples are in subdirectories. In this PR, though, the (one) sample is in the same directory as this camera.rst, so the **/* reference doesn't find any rst files and an error message is shown.

Contributor Author

Removed camera.rst and described the sample in README.rst.

@@ -0,0 +1,10 @@
.. _camera-samples:
Contributor

You'll also need to add a mention of this camera.rst document in the samples/index.rst file. (That will fix the WARNING: document isn't included in any toctree messages.)

Contributor Author

Remove the camera.rst

Contributor

Yes, agreed, having just a README.rst makes sense, but you'll still need to add this README.rst to samples/index.rst to have it included in the doc tree. Also, @JunYangNXP @loicpoulain, there seems to be some synergy possible between this PR and the Video API, Drivers and Sample #17194 PR.

Contributor Author

@dbkinder Added to the samples/index.rst and passed the Doc review.
@loicpoulain I will look into your PR.

@zephyrbot added the area: Devicetree, area: Boards, area: API, area: Samples, and EXT labels, Jul 10, 2019
@zephyrbot (Collaborator) commented Jul 10, 2019

All checks are passing now.

Review history of this comment for details about previous failed status.
Note that some checks might not have completed yet.

@loicpoulain (Collaborator)

@JunYangNXP, could you please have a look at #17194? I'm trying to introduce a generic video subsystem/API; maybe you could rebase some of your work on top of it (MT9M114 camera formats, pixel format support, ...).

To run this sample, a CMOS sensor should be present connected to the board.
The sample outputs the frames per second number on the console in capture
mode and preview mode periodically.
If there is LCD pannel present on the board, the sample outputs the camera
Contributor

Suggested change
If there is LCD pannel present on the board, the sample outputs the camera
If there is an LCD panel present on the board, the sample outputs the camera

Contributor Author

Fixed.


Requirements
************
This project requires a CMOS sensor connected to board.
Contributor

Can you list the sensor (and LCD display panel) you tested this with, so someone could duplicate what you did?

Contributor Author

Added.

This project demos camera samples.
It should work with any platform featuring a I2C peripheral interface and a CMOS sensor interface.
It does not work on QEMU.
If there is display device on board, the demo displays camera stream on the display device.
Contributor

Suggested change
If there is display device on board, the demo displays camera stream on the display device.
If there is a display device on the board, the demo displays the camera stream on the display device.

Contributor Author

Fixed.

To run this sample, a CMOS sensor should be present connected to the board.
The sample outputs the frames per second number on the console in capture
mode and preview mode periodically.
If there is LCD pannel present on the board, the sample outputs the camera
Contributor

You call it a "display device" on line 11, but an "LCD panel" here. You should pick one term and use it consistently.

Contributor Author

Fixed.

@JunYangNXP (Contributor Author)

@pizi-nordic @gon1332 please help look into the PR.

* @retval 0 on success else negative errno code.
*
*/
static inline int display_set_framebuffer(const struct device *dev, void *fb)
Collaborator

IMHO we have to specify in detail how this API should work:

  1. How do we restore the original frame buffer (the one accessed by the write API before this call)?
  2. Will the write API modify the contents of the provided memory?
  3. What will happen if we set a different pixel format (especially one requiring a larger frame buffer)?
    (Are we going to ignore potential out-of-bounds access by the write API? Restore the default frame buffer?)
  4. Should I call this every time the frame buffer is updated to display changes?
    Is the DMA access continuous (like in a memory-less LCD), or are we just using DMA to update memory in the LCD? (The fallback to the write API suggests the second option.) If this is a one-shot API, how can I know that the DMA operation ended so I can free the buffer?

Contributor Author

IMHO we have to specify in detail how this API should work.

[Jun] Yes, I will add more comments to the code.

  1. How do we restore the original frame buffer (the one accessed by the write API before this call)?

    [Jun] I am not sure every display driver/HW can support this. Actually, I call the "write" API inside display_set_framebuffer in case the display controller does not support it.
    The NXP LCD controller has two display buffers, a current buffer and a prepare buffer. The current buffer is what the HW is using, and the prepare buffer is for SW to set. After SW sets the prepare buffer, the current buffer will point to the prepare buffer when the display interrupt is triggered.
    The driver callback of display_set_framebuffer sets the HW prepare buffer and saves this pointer in the driver as well.
    The user is supposed to call "display_get_framebuffer" after calling "display_set_framebuffer" to display the next frame.
    "display_get_framebuffer" blocks until the display interrupt indicates that the last frame is fully loaded into the LCD. (See the sketch below.)

  2. Will the write API modify the contents of the provided memory?

    [Jun] The legacy write API copies the data to memory provided by the display driver. This write API is not sufficient for displaying a camera stream, since it's hard to provide a sync mechanism between video in and video out. Also, the SW copy is not good for performance.

  3. What will happen if we set a different pixel format (especially one requiring a larger frame buffer)?
    (Are we going to ignore potential out-of-bounds access by the write API? Restore the default frame buffer?)

    [Jun] In my sample code, the camera configuration is tied to "display_get_capabilities" for the resolution and pixel format of the FB.

  4. Should I call this every time the frame buffer is updated to display changes?
    Is the DMA access continuous (like in a memory-less LCD), or are we just using DMA to update memory in the LCD? (The fallback to the write API suggests the second option.) If this is a one-shot API, how can I know that the DMA operation ended so I can free the buffer?

    [Jun] This is why we need an additional display interface: to make sure the FB is fully loaded into the LCD before we restore the FB. As I commented above, this is only for display HW that does not support "display_set_framebuffer" and "display_get_framebuffer".
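A minimal sketch of the double-buffer flip described in the answer to question 1, assuming a hypothetical NXP-style controller with a current/prepare buffer pair; the struct, ISR, and semaphore below are illustrative and not the PR's actual driver code:

#include <zephyr.h>

/* Hypothetical driver state: "current" is the buffer being scanned
 * out, "prepare" is latched by the hardware at the next display
 * interrupt (illustrative only). */
struct lcd_data {
    void *current_fb;
    void *prepare_fb;
    void *released_fb;
    struct k_sem flip_done;
};

/* display_set_framebuffer() driver callback: stage the new buffer. */
static int lcd_set_framebuffer(struct lcd_data *data, void *fb)
{
    data->prepare_fb = fb; /* hardware latches this at the next IRQ */
    return 0;
}

/* Display ISR: the controller has switched to the prepare buffer. */
static void lcd_flip_isr(struct lcd_data *data)
{
    data->released_fb = data->current_fb;
    data->current_fb = data->prepare_fb;
    k_sem_give(&data->flip_done);
}

/* display_get_framebuffer() driver callback: block until the flip
 * happened, then hand back the previous (fully displayed) buffer. */
static void *lcd_get_framebuffer(struct lcd_data *data)
{
    k_sem_take(&data->flip_done, K_FOREVER);
    return data->released_fb;
}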

@JunYangNXP (Contributor Author) Jul 12, 2019

@pizi-nordic
My comments yesterday may be incorrect.
In order to be more friendly, I just changed the display API. (The display write API also works for camera stream display, but its memcpy consumes some CPU cycles.)
The display API I added is only used for zero-copy display, as below:

static inline int display_set_framebuffer(const struct device *dev, void *fb,
                                          void **last_fb);

where last_fb is a pointer to the last frame buffer, returned by the API.
The basic work flow of the camera is:

camera_get_prime/secondary -> camera_power -> camera_get_cap -> camera_configure -> camera_start ->

camera_acquire_fb -> display -> camera_release_fb(last fb)
        ^                                   |
        +-----------------------------------+
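Taken together, a hedged sketch of the zero-copy preview loop this work flow describes; the camera_* and display_set_framebuffer() prototypes below are inferred from this thread and may not match the actual patch:

#include <zephyr.h>

/* Prototypes as described in this thread (assumed, not verified
 * against the patch). */
struct device *camera_get_primary(void);
int camera_power(struct device *dev, int on);
int camera_configure(struct device *dev, const void *cfg);
int camera_start(struct device *dev);
void *camera_acquire_fb(struct device *dev);
int camera_release_fb(struct device *dev, void *fb);
int display_set_framebuffer(const struct device *dev, void *fb,
                            void **last_fb);

void preview_loop(const struct device *disp)
{
    struct device *cam = camera_get_primary();
    void *fb;
    void *last_fb = NULL;

    camera_power(cam, 1);
    camera_configure(cam, NULL); /* cfg derived from display_get_capabilities() */
    camera_start(cam);

    for (;;) {
        fb = camera_acquire_fb(cam);                 /* filled frame from CSI */
        display_set_framebuffer(disp, fb, &last_fb); /* zero-copy flip */
        if (last_fb != NULL) {
            camera_release_fb(cam, last_fb);         /* recycle old frame */
        }
    }
}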

Contributor Author

@pizi-nordic
I removed the display code change from this PR and kept only the camera-related parts.
I also split some commits.

Collaborator

Thank you @JunYangNXP. Could you please put some API description into camera.h?
I would like to fully understand the model for camera devices you propose.

Collaborator

Thank you @JunYangNXP. At some (high) level the API looks similar to the one proposed in #17194. Do you think that you and @loicpoulain will be able to work together in order to have one set of APIs satisfying the needs of both sides?

Contributor Author

@pizi-nordic
I already had a discussion with @loicpoulain about this; you can read our comments.
I still recommend that the video subsystem designed by @loicpoulain be used for SW/HW video codecs, for which a camera could be an input device.
For a camera subsystem, according to the requirements of current IoT devices, we need to support:

  1. Support multiple cameras on a board to capture images from different scopes, like a smartphone with back/front cameras.
  2. User applications should not access image sensors directly. An image sensor should only be accessed by the camera driver. The user application only needs to care about the camera device.
  3. Most image sensor interfaces present on boards are generic and support plug/unplug. Software should support probing the image sensor at run time, so no SW update is needed if the image sensor is changed.
    PS: the performance of the NXP camera under @loicpoulain's video framework is not as expected.

@pizi-nordic (Collaborator) Jul 18, 2019

IMHO, in @loicpoulain's solution there is the possibility to have multiple cameras as multiple devices. However, I do not know if such an approach satisfies your needs.

I personally would like to have a bit more flexible "Video 4 Zephyr" API taking into account all aspects of video capture and processing, instead of a set of separate APIs, each supporting only one class of devices.

I also think that both #17194 and this PR do not provide a solid model of a camera/video device (and the corresponding processing pipeline) which could be used in scenarios different from the one presented in the attached sample.

I recommend starting by setting the requirements and modelling the high-level API, similarly to what we did with the logger: #6186 (comment)

Such a description could then be discussed at an API meeting.

Collaborator

Hi @JunYangNXP,

I still recommend that the video subsystem designed by @loicpoulain be used for SW/HW video codecs, for which a camera could be an input device.

There is no need to add complex layering: the proposed generic Video API, which has been reworked according to various comments since your review, is perfectly able to manage camera video devices on its own. Your API version is very similar in the end (I see you abstracted the sensor bus, which is the right way to go), so I think we are close to agreeing.

1. Support multiple cameras on a board to capture images from different scopes, like a smartphone with back/front cameras.

Zephyr's device/driver architecture supports this perfectly; I don't understand the point here.

2. User applications should not access image sensors directly. An image sensor should only be accessed by the camera driver. The user application only needs to care about the camera device.

This is something I reworked in #17194 (since your argument about making it simpler was valid); now the 'user' interfaces with the main video device only. However, note that we don't refer to desktop users here but to people who develop embedded projects, hackers, etc., so we don't really want to hide devices/hardware but to make them easily accessible via a set of APIs.

3. Most image sensor interfaces present on boards are generic and support plug/unplug. Software should support probing the image sensor at run time, so no SW update is needed if the image sensor is changed.

Again, I2C devices are 'enumerable' via devicetree at build time, not at runtime, like any other SPI, UART, etc. device. If you want to support multiple devices, add the corresponding entries as I2C child nodes, with a valid compatible string and corresponding drivers.

   PS: the performance of the NXP camera under @loicpoulain's video framework is not as expected.

As I said, it's not due to the 'framework', but to the way the camera is configured by the initial basic MT9M114 driver: 640x480 @ 7 Hz. I'm not sure what the expectation is; I believe you refer to 30 fps, or 60? I don't think it's a problem for the samples, but I'm really open to any change here improving the support. If you agree, I (or you) can merge both MT9M114 versions; let me know?

Contributor Author

Hi @pizi-nordic,
I think both #17194 and this PR really do support a processing pipeline. And I don't get what you mean by "each supporting only one class of devices": the implementation of the pipeline really depends on the specific HW driver design; the subsystem and API only provide an interface to the user.
I just saw your latest code, so please forget point 2. :)
About point 3, I talked with @galak about this before; actually, I2C devices can be enumerated at run time, but maybe it's not commonly done.
About performance, you could refer to my PR for how to configure the camera/sensor, to rule out your framework as the cause.

@dbkinder (Contributor) left a comment

one more edit for the docs (and then more comments from the technical folks I suspect :)

samples/camera/README.rst (outdated, resolved)
@loicpoulain (Collaborator) left a comment

@JunYangNXP, I took the time to review this PR.

You can find some questions and comments below:

The NXP CSI driver does not rely on the NXP CSI HAL; any reasons for that?

CAMERA_DISPLAY_DEV_NAME is defined in the /drivers/camera Kconfig but is not used in the camera driver, and it seems specific to your use-case/sample.

Two modes are defined, CAMERA_CAPTURE_MODE and CAMERA_PREVIEW_MODE. It should be simpler: a camera just acquires images. It is up to the user to configure the resolution and to acquire one or more images.

The camera framework should not rely on the display subsystem to support a camera; ideally both display and video/camera subsystems will share common headers (e.g. for pixel formats...).

The hardware layout (connections) is known at build time and so should be properly described in the device tree. For example, the image sensor should be a child of the I2C node. The same goes for all this runtime detection (list, scan, strcmp): it should go away, since everything is enumerated via the device tree. The sensor should be properly identified with a compat string and bound to a dedicated driver.

This framework is very specific to your hardware layout (a parallel interface and a sensor on the I2C bus), but it does not scale easily to support other configurations like the uCAM-III (cf. @gon1332's PR #14596), which simply sits behind a serial bus, or other cameras like USB, SPI, etc.

I'm in favor of a simpler API, allowing to abstract the complexity and to support more than a single specific camera configuration. That's the goal of PR #17194, which is not limited to cameras but could also be used for video HW accelerator support and output devices (e.g. a USB UVC gadget...).

So, I think we should collaborate to get an adequate solution. In particular, your MT9M114 driver has enhanced support for better framerates and controls (brightness...). But let's start simple and improve drivers and the subsystem progressively. For sure we can discuss/meet if necessary.

@JunYangNXP (Contributor Author) commented Jul 12, 2019

@JunYangNXP, I took the time to review this PR.

@pizi-nordic @gon1332 @MaureenHelm @dbkinder

You can find some questions and comments below:

[Jun] Thanks for your review of this PR.

The NXP CSI driver does not rely on the NXP CSI HAL; any reasons for that?

[Jun] The code style and some driver implementations in the HAL are not suitable for the Linux style of Zephyr. The USDHC driver, which was just merged, also doesn't rely on the HAL folder.

CAMERA_DISPLAY_DEV_NAME is defined in the /drivers/camera Kconfig but is not used in the camera driver, and it seems specific to your use-case/sample.

[Jun] Please refer to "LVGL_DISPLAY_DEV_NAME" for the GUI. The display device should have a generic name for the user to access, but this is out of scope for the camera framework.

Two modes are defined, CAMERA_CAPTURE_MODE and CAMERA_PREVIEW_MODE. It should be simpler: a camera just acquires images. It is up to the user to configure the resolution and to acquire one or more images.

[Jun] Maybe CAMERA_CAPTURE_MODE could be dropped. I referred to smartphones when designing this; otherwise, we would need to add an API to pause the camera stream.

The camera framework should not rely on the display subsystem to support a camera; ideally both display and video/camera subsystems will share common headers (e.g. for pixel formats...).

[Jun] I understand this, but frame buffer attributes and pixel formats are generic for both video input and output. I think we'd better add one frame buffer header for both.

The hardware layout (connections) is known at build time and so should be properly described in the device tree. For example, the image sensor should be a child of the I2C node. The same goes for all this runtime detection (list, scan, strcmp): it should go away, since everything is enumerated via the device tree. The sensor should be properly identified with a compat string and bound to a dedicated driver.

[Jun] Maybe not really, even before building and deployment. For example, for reasons of price or sourcing, the image sensors may differ on the same device. This requires software to probe for the sensor at run time. Similar approaches are used in the networking PHY, USB device, etc. driver frameworks. It's also easy for the user to pin the sensor choice based on this implementation if the connections and device are known, but we need to provide a way for the user to choose. (See the probing sketch after this reply.)

This framework is very specific to your hardware layout (a parallel interface and a sensor on the I2C bus), but it does not scale easily to support other configurations like the uCAM-III (cf. @gon1332's PR #14596), which simply sits behind a serial bus, or other cameras like USB, SPI, etc.

[Jun] The I2C bus is only used to control the sensor. Most image sensors on the market use I2C for configuration, but we could add one layer on top of I2C to support other control buses (SPI, serial, ...).
PS: I don't think the image sensor should be exposed to users; it should only be accessed by the camera (CSI) driver. For example, the network stack only accesses the network device and doesn't access the PHY device directly.
And I think this framework is able to support different types of data bus, such as USB, MIPI, etc.

I'm in favor of a simpler API, allowing to abstract the complexity and to support more than a single specific camera configuration. That's the goal of PR #17194, which is not limited to cameras but could also be used for video HW accelerator support and output devices (e.g. a USB UVC gadget...).

[Jun] This is also what I am thinking. The video framework should work on top of the camera framework for video encode, decode, and transcode. But that is supposed to be another framework, to support external third-party SW video libraries or HW codecs.

So, I think we should collaborate to get an adequate solution. In particular, your MT9M114 driver has enhanced support for better framerates and controls (brightness...). But let's start simple and improve drivers and the subsystem progressively. For sure we can discuss/meet if necessary.

[Jun] Looking forward to further discussion :)
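To illustrate the run-time probing point above, a hypothetical sketch of walking an image sensor support list and matching on an I2C chip-ID register; the list entry layout is invented for illustration, and the MT9M114 address/ID values shown are assumptions, not taken from the patch:

#include <zephyr.h>
#include <drivers/i2c.h>

/* Hypothetical support-list entry: each known sensor is identified by
 * its I2C address, an ID register, and the expected chip ID. */
struct img_sensor_entry {
    const char *name;
    u16_t i2c_addr;
    u16_t id_reg;
    u16_t chip_id;
};

static const struct img_sensor_entry sensor_list[] = {
    { "mt9m114", 0x48, 0x0000, 0x2481 }, /* values illustrative only */
};

static const struct img_sensor_entry *img_sensor_probe(struct device *i2c)
{
    u8_t id[2];

    for (size_t i = 0; i < ARRAY_SIZE(sensor_list); i++) {
        const struct img_sensor_entry *e = &sensor_list[i];
        u8_t reg[2] = { e->id_reg >> 8, e->id_reg & 0xff };

        /* Read the 16-bit chip-ID register and compare. */
        if (i2c_write_read(i2c, e->i2c_addr, reg, sizeof(reg),
                   id, sizeof(id)) == 0 &&
            ((id[0] << 8) | id[1]) == e->chip_id) {
            return e; /* this sensor is present on the board */
        }
    }

    return NULL; /* no known sensor found */
}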

Add I2C image sensor framework to support multiple
image sensors.
Software goes through the image sensor support list
to identify the image sensor present on board.
In general, an image sensor is only accessed by camera drivers.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
1) Support multiple cameras present on board.
2) Camera API for user to access camera device.
3) Each camera is associated with one image sensor present on board.
4) Each image sensor is only accessed by its associated camera driver.

camera_power->camera_get_cap->camera_configure->
camera_start->camera_acquire_fb->camera_release_fb
                  ^                      |
                  |______________________|

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Support the ON Semiconductor MT9M114 under the image sensor framework.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Add CSI node for RT chip and enable it on mimxrt1050_evk board.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Add NXP i.MXRT CSI driver under camera framework.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Camera sample code.

Display the camera stream in capture mode and
preview mode respectively if there is a
display device present on board.

Log the frame counter per second in both modes periodically.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
@JunYangNXP (Contributor Author)

Hi @pizi-nordic @loicpoulain @galak @MaureenHelm,
Could we collaborate on one "Video 4 Zephyr" to support the camera subsystem, the display subsystem, and video transcode (for example, the NXP i.MX RT PXP), all of which have similar enqueue/dequeue (eq/dq) operations?
For cameras (video input), eq empty buffers and dq video frame buffers.
For display (video output), eq video/image frame buffers and dq empty buffers.
For the graphics subsystem (i.MX RT PXP), eq frame buffers for resizing, blending, or filtering, and dq the resulting frame buffers.
We can consider how to split this project into several parts so we can work in parallel.
Looking forward to your comments.
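A hypothetical sketch of the common eq/dq surface being proposed here; the names are invented to show the symmetry between the three device classes and are not the actual API of either PR:

#include <zephyr.h>

/* A buffer travels through any of the three subsystems the same way;
 * only the meaning of "empty" vs. "filled" differs per device type. */
struct video_buf {
    void *data;
    size_t size;
    size_t bytesused;
};

/* Hypothetical common operations (invented names):
 * - camera (video in):   eq empty buffers,  dq filled frames
 * - display (video out): eq filled frames,  dq empty buffers
 * - PXP (transform):     eq source frames,  dq transformed frames */
int video_eq(struct device *dev, struct video_buf *buf);
int video_dq(struct device *dev, struct video_buf **buf, s32_t timeout_ms);

A transform device such as the PXP would then simply expose both an input queue and an output queue built from the same two calls.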

@loicpoulain (Collaborator)

Hi @pizi-nordic @loicpoulain @galak @MaureenHelm,
Could we collaborate on one "Video 4 Zephyr" to support the camera subsystem, the display subsystem, and video transcode (for example, the NXP i.MX RT PXP), all of which have similar enqueue/dequeue (eq/dq) operations?
For cameras (video input), eq empty buffers and dq video frame buffers.
For display (video output), eq video/image frame buffers and dq empty buffers.
For the graphics subsystem (i.MX RT PXP), eq frame buffers for resizing, blending, or filtering, and dq the resulting frame buffers.

Yes, that is exactly the idea! We need to think about all these use cases to make it scalable.

We can consider how to split this project into several parts so we can work in parallel.

No problem with that, once we agree on the API.

struct device *camera_dev = camera_get_primary();
struct device *display_dev =
device_get_binding(CONFIG_CAMERA_DISPLAY_DEV_NAME);
struct display_capabilities diaplay_cap;
Contributor

should this be display_cap and display_dsc? Seems like a misspelling

Contributor Author

I don't want the names to be too long.
I see "struct display_capabilities cap;" in lvgl_color_1.c (lib/gui/lvgl).
How about "display_desc"?

Contributor

I was just commenting that diaplay in that name should be spelled as display

Contributor Author

Sorry for this typo that I didn't notice :) I will fix it.

u16_t width;
u16_t height;
enum img_sensor_effect effect;
enum display_pixel_format pixformat;
Member

I'm wondering if it is a good idea to use display API specific enums in image sensor API.
Maybe it would be better to move the pixel formats to a generic header file.

Contributor Author

I totally agree with you and am planning to create a new header with pixel formats, resolutions, etc. for both display and camera.
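For example, such a shared header might look like this hypothetical sketch (names invented; the existing display_pixel_format values would migrate to something like it):

/* include/video_common.h -- hypothetical shared definitions for both
 * display and camera (illustrative only). */
#ifndef ZEPHYR_INCLUDE_VIDEO_COMMON_H_
#define ZEPHYR_INCLUDE_VIDEO_COMMON_H_

#include <zephyr/types.h>

/* Pixel formats shared by video input (camera) and output (display). */
enum video_pixel_format {
    VIDEO_PIX_FMT_RGB_888,
    VIDEO_PIX_FMT_RGB_565,
    VIDEO_PIX_FMT_YUYV,
};

/* A resolution usable by both subsystems. */
struct video_resolution {
    u16_t width;
    u16_t height;
};

#endif /* ZEPHYR_INCLUDE_VIDEO_COMMON_H_ */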

module = CAMERA
module-str = camera

config CAMERA_DISPLAY_DEV_NAME
Member

Should this be put under a boolean option? Right now it looks like a camera requires a display.

Contributor Author

This makes sense; I am trying to add it.

@JunYangNXP (Contributor Author)

Hi @pizi-nordic @loicpoulain @galak @MaureenHelm,
Could we collaborate on one "Video 4 Zephyr" to support the camera subsystem, the display subsystem, and video transcode (for example, the NXP i.MX RT PXP), all of which have similar enqueue/dequeue (eq/dq) operations?
For cameras (video input), eq empty buffers and dq video frame buffers.
For display (video output), eq video/image frame buffers and dq empty buffers.
For the graphics subsystem (i.MX RT PXP), eq frame buffers for resizing, blending, or filtering, and dq the resulting frame buffers.

Yes, that is exactly the idea! We need to think about all these use cases to make it scalable.

We can consider how to split this project into several parts so we can work in parallel.

No problem with that, once we agree on the API.

Per my understanding, this Video subsystem can be based on your Video PR. The 3 subsystems I mentioned could live under the Video framework.

  1. Zephyr already has a generic display interface. We'd better work based on this interface and not modify the current display vendor drivers, but we could add an interface to adapt it to the Video framework.
  2. For the camera and video transcode modules, we need to design interfaces to adapt to the Video framework.

In summary, what we need to do:

  1. Video framework for all 3 modules.
  2. Camera module and low-level driver (CSI).
  3. Display interface added to adapt to the Video framework.
  4. Video transcode or simple GPU module and low-level driver (PXP).

@loicpoulain (Collaborator)

Per my understanding, this Video subsystem can be based on your Video PR. The 3 subsystems I mentioned could live under the Video framework.

1. Zephyr already has a generic display interface. We'd better work based on this interface and not modify the current display vendor drivers, but we could add an interface to adapt it to the Video framework.

I agree.

2. For the camera and video transcode modules, we need to design interfaces to adapt to the Video framework.

In summary, what we need to do:

1. Video framework for all 3 modules.

2. Camera module and low-level driver (CSI).

The very basic support (for NXP RT) is part of the Video 4 Zephyr PR, though it's mainly there to demonstrate the subsystem/API and provide an initial sample, so the support needs to be enhanced as discussed for the MT9M114. At this point, either we go with the basic implementation and patch/rework the MT9M114 after the subsystem is merged, or we take your MT9M114 version (with interface adjustments) in the initial PR.

3. Display interface added to adapt to the Video framework.

Yes, TODO.

4. Video transcode or simple GPU module and low-level driver (PXP).

Yes, TODO.

Support for these three components would represent a fairly good panel of video subsystem capabilities and good examples for further hardware drivers.

@JunYangNXP (Contributor Author)

@loicpoulain please check your mail about my proposal.

@dbkinder (Contributor) left a comment

I suspect diaplay is a misspelling of display.

@dleach02 (Member)

@JunYangNXP what are your plans with this PR?

@JunYangNXP (Contributor Author)

@JunYangNXP what are your plans with this PR?

The Video feature has since been pushed, and it has the same function as this PR.

@dleach02 (Member)

@JunYangNXP should we close this PR then?

@JunYangNXP (Contributor Author)

@JunYangNXP should we close this PR then?

Yes, sure.

@dleach02 closed this Jan 21, 2020
@dleach02 deleted the camera-branch branch July 17, 2024 01:49