
Map vertex identity to OpenCV markers #141

Closed
brettfiedler opened this issue May 3, 2022 · 17 comments

@brettfiedler
Contributor

As noted in #20: "Lack of marker-to-vertex identity adds some funny behavior when rotating."

We want to add marker-to-vertex identity. We'll try something like modulating the width of each marker so we can later do some perspective adjustment (e.g., for tilt) (not a priority for this issue).

@jessegreenberg
Contributor

A first pass of this was done in the above commit, simply using marker area to label the vertices. The smallest to largest markers are labelled as vertex A through D. With this in place, the shape can rotate without rearranging the vertices:

(screenshot)

This is working well when the camera and TMQ are fixed in space. This should be enough to play with a tangible and test Voicing/sounds but we can do something more sophisticated like @BLFiedler mentioned to try to account for perspective later.
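
The area-based labelling above can be sketched roughly like this (a hypothetical helper in plain JavaScript; `labelMarkersByArea` and the marker object shape are made up for illustration, not the prototype's actual code):

```javascript
// Sketch: label detected markers by relative contour area.
// Smallest area becomes vertex A, largest becomes vertex D.
const VERTEX_LABELS = [ 'A', 'B', 'C', 'D' ];

function labelMarkersByArea( markers ) {
  // markers: [ { x, y, area }, ... ] for each detected contour
  const sorted = [ ...markers ].sort( ( a, b ) => a.area - b.area );
  return sorted.map( ( marker, i ) => ( { ...marker, label: VERTEX_LABELS[ i ] } ) );
}

// Example: four markers with distinct areas, returned smallest-to-largest
const labeled = labelMarkersByArea( [
  { x: 0, y: 0, area: 400 },
  { x: 1, y: 0, area: 100 },
  { x: 1, y: 1, area: 900 },
  { x: 0, y: 1, area: 225 }
] );
console.log( labeled.map( m => m.label ).join( '' ) ); // → 'ABCD'
```

This only stays stable while the camera and device are fixed in space, which matches the caveat above: area ordering breaks down once perspective changes the apparent sizes.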

@jessegreenberg
Contributor

Ready for review. By default the prototype is NOT doing this, but there is a checkbox to enable detection.

@brettfiedler
Contributor Author

The priority here, I think, is handling the case of marker loss, or a marker being partially obscured so that it temporarily looks smaller. Currently this reassigns the marker and sometimes leads to a crash.

Behavior-wise, I think we want it to remember the current marker assignments and try to retain them when one or more markers is covered. Possibly add some ordering constraint, since we aren't supposed to make crossed figures?

Will bring up at meeting today.

@brettfiedler
Contributor Author

I believe what we settled on in the meeting was:

  • Investigate maintaining marker identity when one or more markers are partially or fully obscured

with the following additional desired behaviors:

  1. Covering one or two markers should allow the remaining detected markers to keep moving, freezing only the missing markers.
    1A. Allowing the above behavior for only one obscured marker, and otherwise freezing all markers, is acceptable.
  2. Partially obscuring a larger marker should not result in reassigning vertices.

One strategy mentioned was assessing marker size during calibration, to assist in tracking even when other markers are lost.

Let me know if this feels like two separate issues @jessegreenberg
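
The retain-and-freeze behavior described above could be sketched along these lines (a hypothetical function and data shapes, not the prototype's implementation): match each label's last known position to the nearest new detection, and freeze any label that finds no match.

```javascript
// Sketch: keep existing label assignments across frames. Each previously
// labeled position claims the closest new detection within maxDistance;
// labels with no nearby detection freeze at their last known position.
function retainAssignments( previous, detected, maxDistance ) {
  // previous: { A: { x, y }, B: { x, y }, ... }; detected: [ { x, y }, ... ]
  const next = {};
  const unused = [ ...detected ];
  for ( const [ label, last ] of Object.entries( previous ) ) {
    let bestIndex = -1;
    let bestDist = maxDistance;
    unused.forEach( ( d, i ) => {
      const dist = Math.hypot( d.x - last.x, d.y - last.y );
      if ( dist < bestDist ) { bestDist = dist; bestIndex = i; }
    } );
    if ( bestIndex >= 0 ) {
      next[ label ] = unused.splice( bestIndex, 1 )[ 0 ]; // follow the detection
    }
    else {
      next[ label ] = last; // marker obscured: freeze at last known position
    }
  }
  return next;
}

// One marker covered: A follows its nearby detection, B freezes in place.
const updated = retainAssignments(
  { A: { x: 0, y: 0 }, B: { x: 10, y: 0 } },
  [ { x: 0.4, y: 0.1 } ],
  2
);
console.log( updated.B ); // frozen at { x: 10, y: 0 }
```

Note this greedy matching can mis-pair markers that pass close to each other; an ordering constraint like the one suggested above (no crossed figures) could help disambiguate those cases.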

@brettfiedler brettfiedler removed their assignment May 10, 2022
@jessegreenberg
Contributor

jessegreenberg commented May 10, 2022

While working on this I noticed that the contours are susceptible to motion blurring, like we had with the mechamarker input before. While stationary, marker sizes are relatively free of noise, but while the shape is moving there is huge variability in the size of each marker.

This seems really fickle, and like it will require a lot of tweaking to get right. I am wondering about searching for a totally different method of labelling markers, such as using different colors.

@brettfiedler
Contributor Author

I'm alright with trying colors, especially since it seems like we can do a pretty good job isolating those colors with the filters. I agree any spatial identification (shape, size, etc.) is going to introduce this problem. Leaning on size might let us later correct for perspective too, yeah?

@jessegreenberg
Contributor

Leaning on size might let us later correct for perspective too, yeah?

Yes, good point! Could definitely make that easier.

@jessegreenberg
Contributor

I tried this today with some new duct tape colors: red, green, blue, and white. I am running into some more challenges:

  1. The auto-detect approach that worked well for green isn't working well for red and white. For example, this is the filter I found I need for white:

(screenshot)

The hue range is way larger than 20.

  2. Red and white overlap with my skin tone, so when I interact with the tangible it interferes with the markers.

This is why we decided to use green in the first place. Maybe we can use shades of green instead? Maybe I can print some green rectangles and tape them to the tangible.
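
If we go the color route, the per-marker filters might amount to something like this (the HSV ranges below are made-up placeholders for illustration, not calibrated values; note OpenCV hue runs 0-179):

```javascript
// Sketch: classify an HSV pixel against per-marker hue/sat/value ranges.
// The ranges are hypothetical examples, not the prototype's filter values.
const MARKER_RANGES = {
  lightGreen: { h: [ 70, 90 ],   s: [ 60, 255 ], v: [ 120, 255 ] },
  darkGreen:  { h: [ 70, 90 ],   s: [ 60, 255 ], v: [ 30, 119 ] },
  lightBlue:  { h: [ 100, 130 ], s: [ 60, 255 ], v: [ 120, 255 ] },
  darkBlue:   { h: [ 100, 130 ], s: [ 60, 255 ], v: [ 30, 119 ] }
};

function classifyPixel( h, s, v ) {
  for ( const [ name, r ] of Object.entries( MARKER_RANGES ) ) {
    if ( h >= r.h[ 0 ] && h <= r.h[ 1 ] &&
         s >= r.s[ 0 ] && s <= r.s[ 1 ] &&
         v >= r.v[ 0 ] && v <= r.v[ 1 ] ) {
      return name;
    }
  }
  return null; // background, skin, or anything else
}
```

Shades of green/blue like this sidestep the skin-overlap problem, since skin hues sit near the red end of the spectrum, well outside both hue bands.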

@brettfiedler
Contributor Author

brettfiedler commented May 24, 2022

  • When all 4 markers are identifiable, we will want to allow detected markers to continue to change even when one marker is covered up. The covered marker will update when it is detected again. With 4 distinct colors, this should be much easier.

@brettfiedler
Contributor Author

Just to note - I played around with the existing OpenCV test (5/24) and was able to isolate these colors pretty well. We might be able to standardize 1 light green, 1 dark green, 1 light blue, and 1 dark blue (green and blue are both colors used in "green screen" technology).

(screenshot)

@jessegreenberg
Contributor

I got this working well enough to make a commit point:

(screenshot)

It's working OK, but the increased number of filtering operations has a negative impact on performance, and the framerate is pretty slow. There may be ways to make it faster.

@jessegreenberg
Contributor

jessegreenberg commented May 26, 2022

Using ?profiler for the sim, I am seeing ~40 fps when tracking a single color and ~15 fps when tracking 4 colors. If I remove the contour searching for the 4 colors, the fps stays around 15-20. So the slowdown is likely from applying the filters and writing data to the canvas.
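
For reference, those fps numbers translate into per-frame time budgets like so (simple arithmetic over the measured fps values quoted above, nothing more):

```javascript
// Per-frame budget in ms at a given fps, and the extra per-frame cost
// implied by dropping from ~40 fps (1 color) to ~15 fps (4 colors).
const msPerFrame = fps => 1000 / fps;

const oneColor = msPerFrame( 40 );   // 25 ms/frame
const fourColors = msPerFrame( 15 ); // ~66.7 ms/frame
const extra = fourColors - oneColor; // ~41.7 ms of added filter/canvas work
console.log( extra.toFixed( 1 ) );
```

Since removing contour search barely moves the number, nearly all of that ~42 ms is in the four filter passes and canvas writes, which is consistent with the conclusion above.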

@brettfiedler
Contributor Author

(screenshots)

Noting here that detection seems cleaner against a white background than a black background.

@jessegreenberg
Contributor

jessegreenberg commented May 27, 2022

Meeting with @BLFiedler - we noticed that performance is acceptable in the OpenCV video and canvas output but is terrible in the simulation. The next thing we should look into is why it gets so slow for the sim.

We think it may be caused by the smoothing; let's add a slider or filter to control it.

@jessegreenberg
Contributor

jessegreenberg commented Jun 1, 2022

I removed the smoothing and it is much faster. This is the source of the slowdown. I don't really understand why that is, though. Slower performance does NOT fill the smoothing array of values with stale data. I wonder if this is a red herring.

Removing the smoothing does make it more noisy. The prototype sings constantly as positions change every frame with jitter.

EDIT:

Slower performance does NOT fill the smoothing array of values with stale data. I wonder if this is a red herring.

I guess what is happening is that the rate at which we get new positions is slower, so old positions remain in the smoothing array longer. That means the smoothing will be more eccentric when the framerate is slower.
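
The stale-data effect described in the EDIT can be illustrated with a minimal sliding-window smoother (a hypothetical class, not the prototype's implementation):

```javascript
// Sketch: fixed-size sliding-window average. The window holds the last N
// samples regardless of how long ago they arrived, so at a low framerate
// old positions linger longer in wall-clock time and the smoothed value
// lags further behind the true position.
class SmoothedValue {
  constructor( windowSize ) {
    this.windowSize = windowSize;
    this.values = [];
  }

  addSample( value ) {
    this.values.push( value );
    if ( this.values.length > this.windowSize ) {
      this.values.shift(); // drop the oldest sample
    }
    return this.average();
  }

  average() {
    return this.values.reduce( ( sum, v ) => sum + v, 0 ) / this.values.length;
  }
}

// A marker jumps from 0 to 10: with a window of 3, the smoothed value only
// reaches 10 after three fresh samples, i.e. three frames of lag.
const smoother = new SmoothedValue( 3 );
[ 0, 0, 0, 10, 10, 10 ].forEach( v => smoother.addSample( v ) );
console.log( smoother.average() ); // → 10, once the window is fully refreshed
```

Three frames of lag at 40 fps is 75 ms; at 15 fps it is 200 ms, which is why the same window size feels so much worse when the framerate drops, and why making the window size adjustable helps.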

@jessegreenberg
Contributor

jessegreenberg commented Jun 1, 2022

I uploaded a new version with a control for the number of values used in smoothing. The framerate is low but it feels snappy. A value of 3 works well for me when detecting vertex labels.

This value and the min contour area were also added to local storage so they are saved between page loads.

@BLFiedler ready for you to try again. Link:
https://phet-dev.colorado.edu/html/jg-tests/opencv-test/?deviceShapeAngleToleranceInterval=0.05&deviceShapeLengthToleranceInterval=0.03&toleranceIntervalScaleFactor=10

@brettfiedler
Contributor Author

Current implementation is fine for our purposes - will open up new issues for optimizations when and if needed.
