
Request for Assistance with Light Field Rendering Application #2

ThomasDrndn opened this issue Dec 23, 2024 · 4 comments

@ThomasDrndn

Dear Mr. Mossberg,

My name is Thomas D. I apologize for reaching out here; I tried to contact you by email, but it seems your address is no longer available.
I am currently working on implementing light field rendering techniques using your software, which I found extremely fascinating and well-structured.

I am reaching out to seek your assistance regarding an issue I encountered while attempting to replicate results similar to the "shop" example provided in your repository.

To describe my setup briefly, I used a camera and objective with the following specifications:

  • Focal length: 35 mm
  • Maximum relative aperture: 1:1.4
  • Iris: F1.4–22
  • Minimum distance to object: 0.3 m
  • Optical back focal distance: 18.5 mm
  • Sensor width: 6.4 mm

Using this setup, I captured a dataset by systematically moving the objective in a 2D grid above the object of interest, resulting in a 21x21 array of views. I carefully renamed and ordered the dataset to match the format of the "shop" example.

Following the data acquisition, I adjusted the configuration file as follows to reflect the characteristics of my experiment:

# property initial min max  
focus-distance 0.3 0.3 10.0  
f-stop 1.4 1.4 22.0  
sensor-width 6.4 2.0 10.0  
focal-length 35 20 50  
width 1020 256 16384  
height 680 256 16384  
animation-scale 0.7 0.1 2.0  
animation-depth 3.2 1.0 10.0  
animation-duration 4.0 1.0 10.0  
speed 1.0 0.1 2.0  
exposure 0.5 -1.0 1.0  

However, after processing the dataset, the rendered results did not match my expectations. Specifically:

  1. The output image appears to have very low resolution despite specifying higher dimensions in the configuration file.
  2. The movement and depth adjustments during visualization seem to be restricted to a very small range, resulting in limited interaction with the rendered light field.

I have carefully reviewed the configuration parameters and dataset structure, but I am concerned that I may have overlooked some key settings or misunderstood parts of the implementation.

Given your expertise and intimate knowledge of this software, I would greatly appreciate any insights you may have regarding:

  • Potential misconfigurations in my setup or parameters.
  • Adjustments needed to correctly scale the animation and depth range.
  • Recommendations for refining the dataset structure to ensure compatibility with the application.

I understand your time is valuable, and I truly appreciate any guidance you might be able to provide. Please do not hesitate to let me know if additional information or sample files would help clarify my setup further.

Thank you very much for your time and consideration. I look forward to hearing from you soon.

Yours sincerely,

Thomas D

@linusmossberg
Owner

Hello!

It's difficult to say without more information, such as examples of what the results look like, how you named the files, and how accurately the images were taken, but here are a few notes:

  • A 6.4 mm wide sensor with a 35 mm focal length should result in a very narrow field of view of about 10.5 degrees, or a 35mm-equivalent focal length of 191 mm (see the quick check after this list). This is not ideal unless the camera grid is also very small, especially since the object is very close to the camera. Using these settings for the virtual camera could also be problematic, since it's difficult to navigate with what is effectively a telephoto lens.
  • Naming the images in the same way as for the "shop" light field would not be correct, since it uses a 30 mm focal length with a 36 mm wide sensor and a camera grid of 800 mm x 800 mm.
  • The correct parameters to change to affect the navigation ranges are "x", "y" and "z". The animation parameters are only used for the simple built-in animations that were used for some of the gifs.
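
For reference, the field-of-view figures in the first note can be sanity-checked with a few lines of Python (a minimal sketch; the exact 35mm-equivalent value depends on the assumed aspect ratio):

```python
import math

# Horizontal field of view from sensor width and focal length (both in mm):
#   fov = 2 * atan(sensor_width / (2 * focal_length))
sensor_width = 6.4
focal_length = 35.0

fov_deg = math.degrees(2.0 * math.atan(sensor_width / (2.0 * focal_length)))
print(f"horizontal FOV: {fov_deg:.1f} degrees")  # ~10.4, i.e. the "about 10.5 degrees" above

# 35mm-equivalent focal length obtained by matching the horizontal field of
# view of a 36 mm wide full-frame sensor; the exact equivalent depends on the
# assumed aspect ratio, but it lands in the same telephoto range either way.
equivalent = focal_length * 36.0 / sensor_width
print(f"35mm-equivalent: {equivalent:.0f} mm")   # ~197 mm
```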

@ThomasDrndn
Author

Hello! Thank you so much for taking the time to respond to my earlier questions. I really appreciate your detailed feedback and suggestions. I've since made new sample acquisitions and wanted to follow up with more precise details about my setup and remaining questions.

1. My Experimental Setup:

  • Sensor Specifications:
    • Width: 2.54 cm
    • Resolution: 5472 x 3648
    • Pixel Size: 2.4 µm
  • Objective Specifications:
    • F-Stop: Using a small aperture, approximately F16.
    • Back Focal Length (BFL): 10.7 mm
    • Focal Length: 8 mm
    • Closest Focus Distance: 0.1 m, but I am working at around 0.13 m.
  • Dataset Acquisition:
    • 5x5 grid of images.
    • Shift between images: 6.25 mm (approximately 170 pixels of shift between adjacent images, based on my pixel size and focus distance; see the quick check after this list).
    • Total grid size: 2.5 cm x 2.5 cm.
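
For reference, the ~170 px figure can be reproduced with a quick thin-lens check (a minimal sketch, assuming the shift is measured for points on the focus plane):

```python
# Expected image-space shift between adjacent views for points on the focus
# plane, using the thin-lens magnification m = f / (d - f). Lengths in mm.
baseline = 6.25        # camera shift between adjacent views
f = 8.0                # focal length
d = 130.0              # focus distance (0.13 m)
pixel_size = 0.0024    # 2.4 um pixel pitch

shift_on_sensor = baseline * f / (d - f)    # mm on the sensor
shift_in_pixels = shift_on_sensor / pixel_size
print(f"{shift_in_pixels:.0f} px")          # ~171 px, matching the ~170 px figure
```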

2. Preliminary Results and Observations:

The initial results are promising, but as expected, there are some inconsistencies in the lighting across images, which I believe can be improved with better illumination control.

To avoid GPU memory saturation, I resized the images to a lower resolution for processing. After several adjustments, I managed to get a somewhat satisfactory result using the parameter values shown in the attached screenshot ("first dataset parameters").

3. My Main Questions and Concerns:

  1. Parameter Selection Issues:

    • Despite knowing my sensor and objective properties, I found myself using parameter values unrelated to my hardware setup to achieve better rendering results.

    I feel I may be misunderstanding some key concepts from your paper about how these parameters relate to the virtual camera settings and the real-world acquisition geometry.

  2. Focus Distance Confusion:

    • I also struggle to correctly determine the focus distance parameter for the renderer. Should this correspond to my actual experimental focus distance (0.13 m), or is it more related to a virtual focus plane in the renderer?
    • The same question applies to the focal length.
  3. Data Sharing:

    • To help clarify the issue further, I'd be happy to share my dataset, or just the filenames, over email to make the setup easier to understand. Here are some example file names:
      • Original Camera_00_00_12.500000_12.500000
      • Original Camera_00_01_12.500000_6.250000
      • Original Camera_00_02_12.500000_0.000000
      • Original Camera_00_03_12.500000_-6.250000
      • Original Camera_00_04_12.500000_-12.500000
      • Original Camera_01_00_6.250000_12.500000
      • Original Camera_01_01_6.250000_6.250000
      • Original Camera_01_02_6.250000_0.000000
      • ...
    • If you think having access to the dataset could be useful, I’d be more than willing to send it via email or any preferred method.

4. Final Notes:

I sincerely apologize if my questions seem too basic or if the answers are already covered in detail in your paper. I'm only starting out in the optics field and am still working through some concepts. Your insights would really help me improve both my experimental design and parameter optimization.

Thank you so much for your time and help! I truly appreciate it.

@linusmossberg
Owner

linusmossberg commented Jan 14, 2025

  1. The point of light field rendering / novel view synthesis is that the properties of the virtual camera (position, rotation, focal length, f-stop, etc.) are independent of the properties of the camera used to capture the input images. But you should still be able to use the same settings that you used when capturing the images (except the position, since the renderer does not handle the case where the virtual camera is located on the data-camera plane well).
  2. Same as before: the virtual camera and data camera properties are independent. All of the config/UI parameters are for the virtual camera, and the file names of the input images are the only place where the data camera properties are specified. One thing to note here, though, is that the data cameras should be as close as possible to pin-hole cameras, since the renderer must assume that they are. That is why we can't specify f-stop, focus distance and such for the data cameras.
  3. I see that you are missing "_focal-length_sensor-width" (in millimeters, which the xy-coordinates should also be in) at the end of the file names, which means that the renderer has no idea about these camera properties from the input images. It will therefore interpret the images as having been rectified and will use the light-slab parameterization. This is probably why the results look wrong (see the renaming sketch below).

The email address on my profile should be working again, so you can send the data there if you want me to take a look (via e.g. a link to OneDrive).
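
In case it is useful, here is a minimal renaming sketch for point 3 (the "dataset" directory is a placeholder; the 8 mm focal length and 25.4 mm sensor width are taken from the setup described above):

```python
# Append the missing "_focal-length_sensor-width" suffix (in millimeters) to
# file names of the form "Original Camera_<row>_<col>_<x>_<y>.<ext>".
from pathlib import Path

FOCAL_LENGTH_MM = 8.0    # data camera focal length
SENSOR_WIDTH_MM = 25.4   # data camera sensor width (2.54 cm)

for path in Path("dataset").glob("Original Camera_*"):
    new_name = f"{path.stem}_{FOCAL_LENGTH_MM}_{SENSOR_WIDTH_MM}{path.suffix}"
    path.rename(path.with_name(new_name))
```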

@ThomasDrndn
Author

Hello Linus!

A while ago, I sent you an email titled "Light Field Dataset and Initial Results", which included a link to the dataset I used and the first results I obtained with the Light Field Renderer. Since then, I have continued working to improve the rendering and get closer to the desired outcome.

My latest result, "Moving around point 20250206", was achieved using the parameters shown in the image "parameters 20250206" (both available in the same drive link). These results were obtained using the dataset "21x21 - 5cm Grid fd_sw - resized - 6", along with the config file "config pipe 2 modified".

I wanted to ask for your insight: do you think I can further improve the results with adjustments in the renderer, or have I reached its limits? I understand that a better dataset will ultimately yield a better final rendering, but I'd like to know if there's a way to mitigate the loss of focus when moving towards the extreme positions in the current setup.

I'd really appreciate any advice you can share whenever you have a bit of time.

Thank you!
