
[query] Fix order of aggregate_cols, and make aggregate_cols do a loc… #12753

Open · wants to merge 4 commits into base: main
Conversation

@tpoterba (Contributor) commented Mar 3, 2023

…al stream agg.

CHANGELOG: Fixed a longstanding bug where `MatrixTable.aggregate_cols` traversed columns in sorted order rather than in MatrixTable order.
@danking (Contributor) previously requested changes Mar 3, 2023


Let's add a test. The regressionLogistic.vcf file could be used as an example of out-of-order columns.
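As a plain-Python sketch (not the Hail API, and hypothetical helper names), the property such a regression test should pin down is that aggregating over columns visits them in MatrixTable order, not in col-key sort order:

```python
# Hypothetical sketch of the desired test property. The col keys are
# deliberately out of order relative to their stored position.
cols = [{"s": "NA3"}, {"s": "NA1"}, {"s": "NA2"}]

def collect_col_field(cols, field):
    # Desired behavior: traverse columns in their stored (MatrixTable) order.
    return [c[field] for c in cols]

result = collect_col_field(cols, "s")
assert result == ["NA3", "NA1", "NA2"]  # MatrixTable order
assert result != sorted(result)         # i.e. NOT key-sorted order
```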

@tpoterba (Author) commented Mar 3, 2023

oof, yeah, thanks.

```scala
      })))
    ))
    aggOutsideTransformer(scanOutsideTransformer(ToArray(StreamZip(
      FastIndexedSeq(ToStream(GetField(Ref("global", loweredChild.typ.globalType), colsFieldName)), StreamIota(0, 0)),
```
A reviewer commented:

Is the second stream meant to be 1, 2, 3? Why isn't step = 1?

It seems like the old code never had access to the index, so why bother to include it here at all?

@tpoterba (Author) replied:

totally a bug, and the cause of the test failures. The index is used in aggregations/scans to join the column stream with the agg/scan results.
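A rough plain-Python analogue of that join (illustrative values only): `StreamZip(col_stream, StreamIota(0, 1))` is morally `enumerate()`, and the index is what lines each column up with its per-column result; the buggy `StreamIota(0, 0)` is a constant 0.

```python
cols = ["c0", "c1", "c2"]
agg_results = [10, 20, 30]  # hypothetical per-column agg/scan results

# StreamIota(0, 1): each column joins its own result via the index.
joined = [(col, agg_results[i]) for i, col in enumerate(cols)]
assert joined == [("c0", 10), ("c1", 20), ("c2", 30)]

# StreamIota(0, 0): the index never advances, so every column
# joins to result 0.
buggy = [(col, agg_results[0]) for col in cols]
assert buggy == [("c0", 10), ("c1", 10), ("c2", 10)]
```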

```diff
@@ -2223,7 +2223,7 @@ def aggregate_cols(self, expr, _localize=True) -> Any:
         """
         base, _ = self._process_joins(expr)
         analyze('MatrixTable.aggregate_cols', expr, self._global_indices, {self._col_axis})
-        cols_table = ir.MatrixColsTable(base._mir)
+        cols_table = ir.MatrixColsTable(ir.MatrixMapCols(base._mir, base.col._ir, []))
```
A reviewer commented:

Why is it kosher for map cols to drop the key? Shouldn't we need to use a key by for that?

@tpoterba (Author) replied:

We don't actually have a MatrixKeyColsBy; MatrixMapCols (MMC) does both. We could separate it out, but we don't have separate nodes because (a) we don't really care about optimizing around col keys, since they're unordered, and (b) the col key is totally ignored as we lower.

A reviewer commented:

Sorry if I'm being dense, but isn't ordering relevant to aggregation? I'm thinking of hl.agg.collect in particular. I thought it was true that the results came in key-order. If ordering is important, why is it safe to drop the key before ir.TableAggregate?

@tpoterba (Author) replied:

This PR makes a semantic change in aggregation order: currently it is sort order (the order of cols()), but the desired behavior is MatrixTable column order (localize_entries order). This is what we agreed on in the team meeting the week before the IBG workshop, but let's revisit on Wednesday to make sure we're still on the same page.
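A plain-Python sketch of the semantic change (illustrative column names, not the Hail implementation):

```python
# Columns as stored in the MatrixTable, with keys out of sort order.
table_order = ["s3", "s1", "s2"]

old_behavior = sorted(table_order)  # key-sorted traversal (order of cols())
new_behavior = list(table_order)    # MatrixTable-order traversal (this PR)

assert old_behavior == ["s1", "s2", "s3"]
assert new_behavior == ["s3", "s1", "s2"]
```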

@danking (Contributor) commented Mar 16, 2023

bump

danking pushed a commit to danking/hail that referenced this pull request Mar 16, 2023
My apologies. I made several changes to lowered logistic regression as well.

All the generalized linear model methods share the same fit result. I abstracted this into one
datatype at the top of `statgen.py`: `numerical_regression_fit_dtype`.

---

You'll notice I moved the cases such that we check for convergence *before* checking if we are at
the maximum iteration. It seemed to me that:
- `max_iter == 0` means do not even attempt to fit.
- `max_iter == 1` means take one gradient step, if you've converged, then return successfully,
otherwise fail.
- etc.
The `main` branch currently always fails if you set `max_iter == 1`, even if the first step lands on
the true maximum likelihood fit.

I substantially refactored logistic regression. There were dead code
paths (e.g. the covariates array is known to be non-empty). I also found all the function
currying and commingling of fitting and testing really confusing. To be fair, the Scala code
does this (and it's really confusing). I think the current structure is easier to follow:

1. Fit the null model.
2. If wald, assume the beta for the genotypes is zero and use the rest of the parameters from the
   null model fit to compute the score (i.e. the gradient of the likelihood). Recall calculus:
   gradient near zero => value near the maximum. Return: this is the test.
3. Otherwise, fit the full model starting at the null fit parameters.
4. Test the "goodness" of this new & full fit.
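A tiny numeric illustration (toy function, not the real likelihood) of the calculus fact step 2 relies on: for a smooth concave log-likelihood, a gradient near zero means the parameter is near the maximizer, which is what the score test exploits.

```python
def loglik(b):
    # Toy concave log-likelihood, maximized at b = 2.0.
    return -(b - 2.0) ** 2

def grad(b):
    # Its gradient (the "score" in this toy setting).
    return -2.0 * (b - 2.0)

assert abs(grad(2.0)) < 1e-12  # at the maximum the gradient vanishes
assert abs(grad(0.0)) == 4.0   # far from it, the gradient is large
```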

---

Poisson regression is similar but with a different likelihood function and gradient thereof. Notice
that I `key_cols_by()` to indicate to Hail that the order of the cols is irrelevant (the result is a
locus-keyed table after all). This is necessary at least until hail-is#12753 merges. I think it's generally
a good idea though: it indicates to Hail that the ordering of the columns is irrelevant, which is
potentially useful information for the optimizer!

---

Both logistic and Poisson regression can benefit from BLAS3 by running at least the score
test for multiple variants at once.

---

I'll attach an image in the comments, but I spend ~6 seconds compiling this trivial model and ~140ms
testing it.

```python3
import hail as hl
mt = hl.utils.range_matrix_table(1, 3)
mt = mt.annotate_entries(x=hl.literal([1, 3, 10, 5]))
ht = hl.poisson_regression_rows(
    'wald', y=hl.literal([0, 1, 1, 0])[mt.col_idx], x=mt.x[mt.col_idx], covariates=[1], max_iterations=2)
ht.collect()
```

I grabbed some [sample code from
scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.PoissonRegressor.html)
for Poisson regression (doing a score test rather than a Wald test) and timed it. It takes
~8ms. So we're ~3 orders of magnitude off including the compiler, and ~1.2 orders of
magnitude off without the compiler. Digging in a bit:
- ~65ms for class loading.
- ~15ms for region allocation.
- ~20ms in various little spots.
That leaves about 40ms strictly executing generated code. That's about 5x, which is starting to feel reasonable.
@tpoterba (Author) commented:
latest commit fixed tests -- forgot to remove review!


danking added a commit that referenced this pull request Mar 21, 2023
cc @tpoterba 

@danking (Contributor) commented Apr 21, 2023

bump

@danking (Contributor) commented May 9, 2023

@tpoterba bump!

@tpoterba (Author) commented May 9, 2023

added to the team meeting agenda tomorrow.

@danking danking assigned ehigham and unassigned danking Jun 29, 2023