[encoding] Inflate estimates for offset curves #522
Conversation
Looks good, thanks. I agree we can take another pass to refine the math later, but if this does a good job estimating the current state, it's a good advance.
crates/encoding/src/estimate.rs (outdated)
- const MIN_THETA: f32 = 1e-4;
- const TOL: f32 = 0.1;
+ const MIN_THETA: f32 = 1e-6;
+ const TOL: f32 = 0.25;
Yep. We may want to make this a tunable parameter, perhaps at runtime, though I think the constant value is good enough for now.
A config uniform for tolerances might be a good idea. It may also make sense to have a common set of constant declarations that are shared by cpu_shaders and the estimate code (though they live in different crates).
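To make the idea concrete, here is a minimal sketch of what a shared set of tolerance constants plus a runtime-tunable config uniform could look like. All names and values here are illustrative assumptions, not code from this PR; the actual shared-crate layout would need to be worked out.

```rust
// Hypothetical sketch: one shared source of truth for flattening
// tolerances, so the cpu_shaders crate and the estimator cannot drift
// apart. Names and values are illustrative only.

/// Flattening tolerance shared by the GPU pipeline and the CPU estimator.
pub const FLATTEN_TOL: f32 = 0.25;

/// Minimum subdivision angle guard.
pub const MIN_THETA: f32 = 1e-6;

/// A config uniform that could carry runtime-tunable tolerances to shaders.
#[repr(C)]
#[derive(Clone, Copy, Debug)]
pub struct ConfigUniform {
    pub flatten_tol: f32,
    pub min_theta: f32,
}

impl Default for ConfigUniform {
    fn default() -> Self {
        Self {
            flatten_tol: FLATTEN_TOL,
            min_theta: MIN_THETA,
        }
    }
}
```

Because the struct is `#[repr(C)]` with two `f32` fields, it could be uploaded directly as uniform data; defaults fall back to the shared constants.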
crates/encoding/src/estimate.rs (outdated)
@@ -288,22 +293,22 @@ fn transform_scale(t: Option<&Transform>) -> f32 {
 }

- fn approx_arc_length_cubic(p0: Vec2, p1: Vec2, p2: Vec2, p3: Vec2) -> f64 {
+ fn approx_arc_length_cubic(p0: Vec2, p1: Vec2, p2: Vec2, p3: Vec2) -> f32 {
My gut feeling would be to keep these f64 to emphasize that they're CPU-side computations and to have as much accuracy as possible, but I don't think it makes a difference in practice. The later ones where we defer quantization are a good idea for sure though.
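For context, a cheap CPU-side cubic arc-length estimate of the kind being discussed can be sketched as the average of the chord length and the control-polygon length, since the true arc length of a Bézier is bounded between the two. This is a generic illustration in `f64` (matching the reviewer's preference), not the PR's actual implementation; the `Vec2` here is a stand-in type, not the crate's.

```rust
// Illustrative sketch of a crude cubic arc-length estimate.
// For any Bézier: chord length <= arc length <= control-polygon length,
// so their average is a cheap, reasonable approximation.

#[derive(Clone, Copy)]
struct Vec2 {
    x: f64,
    y: f64,
}

impl Vec2 {
    fn dist(self, other: Vec2) -> f64 {
        ((self.x - other.x).powi(2) + (self.y - other.y).powi(2)).sqrt()
    }
}

fn approx_arc_length_cubic(p0: Vec2, p1: Vec2, p2: Vec2, p3: Vec2) -> f64 {
    let chord = p0.dist(p3);
    let poly = p0.dist(p1) + p1.dist(p2) + p2.dist(p3);
    // True arc length lies between `chord` and `poly`.
    0.5 * (chord + poly)
}
```

For a degenerate curve with collinear, evenly spaced control points the chord and polygon lengths coincide, so the estimate is exact; curvier inputs are over- or under-estimated within the two bounds.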
Force-pushed from 79382b4 to ea47eae.
Added a highly crude scale to segment and line counts in the presence of a non-zero offset (i.e. stroked curves). This is done to account for the inflated flattening of an offset curve around high-curvature zones. I believe this could get refined but that requires some work to find the best way to analyze the complexity of the curve with few CPU cycles.
Fixed a bug in the join calculation that was incorrectly incrementing the line count instead of the segment count.
This makes the estimate robust for tricky strokes but also increases the margin of the estimate over the true bump counts. I expect some of this will improve when the estimation moves to resolve time, such that the crude bloating of the counts due to any scale applied to a scene fragment will no longer be necessary.
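The "highly crude scale" described above can be sketched as a constant multiplier applied to the segment and line estimates whenever the stroke offset is non-zero. Everything below is a hypothetical illustration: the factor, the `BumpEstimate` fields, and the function name are assumptions, not the PR's code.

```rust
// Hypothetical sketch: inflate flattening estimates for offset
// (stroked) curves to cover the extra subdivision an offset curve
// needs near high-curvature regions. The factor is illustrative.

const OFFSET_INFLATION: u32 = 2; // crude safety multiplier, illustrative

#[derive(Clone, Copy, Debug, PartialEq)]
struct BumpEstimate {
    segments: u32,
    lines: u32,
}

fn inflate_for_offset(base: BumpEstimate, offset: f32) -> BumpEstimate {
    if offset == 0.0 {
        // Fills (zero offset) keep their original estimates.
        return base;
    }
    BumpEstimate {
        segments: base.segments * OFFSET_INFLATION,
        lines: base.lines * OFFSET_INFLATION,
    }
}
```

A constant multiplier like this trades memory headroom for robustness; as the comments note, a refinement would analyze the curve's actual curvature to scale less pessimistically.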