This would be a nice feature to have to stitch the images better. I haven't worked on it much lately but could contribute if you want to start a pull request.
I have some thoughts and questions, but would like to hear how you would implement this.
Here are my thoughts:
Given any FOV f_i, using the beads image of the reference data channel (0), I can calculate the transformation between it and its four adjacent FOVs (say i1, i2, i3, i4) based on the overlapping area, assuming this FOV is not at the edge. This can then be done for every FOV. The question is: how would you stitch them after calculating the transformations?
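For the pairwise step, one common way to estimate the translation between two overlapping bead patches is phase correlation. A minimal sketch in plain NumPy (the function name and the choice of phase correlation are my own, not necessarily what MERlin uses):

```python
import numpy as np

def estimate_offset(overlap_a, overlap_b):
    """Estimate the (dy, dx) shift of overlap_b relative to overlap_a
    by phase correlation. Illustrative sketch; bead-based registration
    could equally use centroid matching or cross-correlation."""
    f_a = np.fft.fft2(overlap_a)
    f_b = np.fft.fft2(overlap_b)
    # Normalized cross-power spectrum; epsilon guards against division by zero
    cross_power = np.conj(f_a) * f_b
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # Shifts past the half-size wrap around; map them to negative offsets
    size = np.array(corr.shape, dtype=float)
    peak[peak > size / 2] -= size[peak > size / 2]
    return peak
```

With noisy bead images you would typically also inspect the height of the correlation peak as a confidence measure for the measured offset.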
Yes, I consider what you describe to be the first step. From the estimated transformations to the four adjacent fields of view, you would then need a way to determine a single transformation for each field of view. For example, if the overlap between a field of view and one of its neighbors contains no beads, the calculated transformation may have lower accuracy than for overlaps with many beads, and in general not all measured transformations can be satisfied simultaneously. In some sense it is an optimization problem: find the offset for each field of view that best aligns it with all of its neighbors at once.
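That optimization has a natural linear least-squares form: treat each measured pairwise offset as a soft constraint p_j - p_i ≈ d_ij, weight it (e.g. by bead count in the overlap), and anchor one FOV to remove the global translation ambiguity. A sketch under those assumptions (function names and the weighting scheme are mine):

```python
import numpy as np

def solve_global_offsets(n_fov, pairwise, weights=None):
    """Solve for a global 2-D position per FOV from pairwise offsets.

    pairwise: dict mapping (i, j) -> measured offset p_j - p_i (2-vector).
    weights: optional dict mapping (i, j) -> confidence (e.g. bead count).
    Returns an (n_fov, 2) array of positions with FOV 0 at the origin.
    """
    if weights is None:
        weights = {pair: 1.0 for pair in pairwise}
    rows, rhs = [], []
    for (i, j), d in pairwise.items():
        w = np.sqrt(weights[(i, j)])
        row = np.zeros(n_fov)
        row[i], row[j] = -w, w          # encodes w * (p_j - p_i) = w * d
        rows.append(row)
        rhs.append(w * np.asarray(d, dtype=float))
    # Anchor FOV 0 at the origin so the system has a unique solution
    anchor = np.zeros(n_fov)
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(np.zeros(2))
    A = np.vstack(rows)
    b = np.vstack(rhs)
    positions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return positions
```

Low-bead overlaps simply get small weights, so their (less reliable) measurements pull on the solution less.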
I would then likely do the stitching by determining a rectangle to crop each field of view so that there is no overlap between adjacent fields of view.
MERlin/merlin/analysis/globalalign.py (line 10 at 9024430)