
Fix zero weights issue #387

Merged: 5 commits merged into PSLmodels:master from the zeroweights branch on Jun 10, 2021

Conversation

andersonfrailey
Collaborator

This PR addresses #385. All we needed to do was rename the weights for non-filers from s006 to matched_weight so that they were properly appended to the filers. Thanks to @donboyd5 for flagging this issue.
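For context, a minimal sketch of what the fix amounts to; the stand-in DataFrames and record IDs below are illustrative, not the actual taxdata code:

import pandas as pd

# Stand-ins for the matched filer records and the CPS non-filer records.
filers = pd.DataFrame({"recid": [1, 2], "matched_weight": [105.3, 87.6]})
nonfilers = pd.DataFrame({"recid": [3, 4], "s006": [54.2, 61.8]})

# Renaming the non-filer weight column to match the filer weight column means
# the weights line up in a single column when the two frames are appended;
# without the rename, the appended non-filer rows end up with zero weights.
nonfilers = nonfilers.rename(columns={"s006": "matched_weight"})
combined = pd.concat([filers, nonfilers], ignore_index=True)
print(combined)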

A few other notes:

For some reason, the unweighted sum of e00200 changes slightly. It's unclear why, because nothing I changed should have affected it. I want to try to get to the bottom of this before merging this PR.

I removed Project.toml and Manifest.toml. These two files were used by Julia to specify packages and version numbers used in our stage 2 process. However, I don't think they were specified properly, because I repeatedly got errors about packages not having the proper dependencies listed in Manifest.toml. I think it would be easier to just include instructions in the documentation for installing the proper packages. I'm interested in hearing other people's thoughts here, or whether they've run into similar trouble.

@andersonfrailey added the bug, PUF, review ready, and extrapolation (Issues/PRs related to our extrapolation techniques) labels on May 24, 2021
@donboyd5

Thanks @andersonfrailey. When I set up taxdata on my machine last week, I had to delete the .toml files. I then installed needed Julia packages. From my perspective it would be fine to just give a list of needed Julia packages in the documentation.

@andersonfrailey
Collaborator Author

> Thanks @andersonfrailey. When I set up taxdata on my machine last week, I had to delete the .toml files. I then installed needed Julia packages. From my perspective it would be fine to just give a list of needed Julia packages in the documentation.

Thanks for this, @donboyd5. I'll add some additional notes in the documentation about installing the proper packages.

@andersonfrailey
Collaborator Author

Re: the change in e00200. All of the records with changes are from the CPS (i.e., non-filers not matched to a PUF record), and the changes are pretty small:

record    old    new
234677   1265   1260
236623   1391   1397
237754    648    645
237898   1080   1148
238204    961    957
238260  11163  11164
238281    321    319
238902    312    311
239647    371    369
243909    475    473
244602   1230   1225
247276    335    333
247320    419    445
247649    671    668
248137    875    871
249131   1708   1701
249919    524    498
250370    241    256
251172    514    511
252079    855    851

Still have not gotten to the bottom of why they are different.
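For reference, a comparison like the table above can be produced with a quick diff of the two output files; the file names here are hypothetical:

import pandas as pd

# Hypothetical file names for the PUF produced before and after this change.
old = pd.read_csv("puf_before.csv", usecols=["e00200"])
new = pd.read_csv("puf_after.csv", usecols=["e00200"])

# Show only the records whose unweighted e00200 value changed.
changed = old["e00200"] != new["e00200"]
diff = pd.DataFrame({"old": old.loc[changed, "e00200"],
                     "new": new.loc[changed, "e00200"]})
print(diff)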

@andersonfrailey
Collaborator Author

OK, I think this is somehow a random error. I re-ran everything without making any changes, and now a bunch of variables are changing. It's still unclear why this is happening.

@hdoupe
Collaborator

hdoupe commented May 26, 2021

@andersonfrailey I re-created the puf and then ran the tests locally:

✗  py.test
==================================================================================== test session starts =====================================================================================
platform linux -- Python 3.6.13, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/hankdoupe/taxdata
collected 14 items                                                                                                                                                                           

tests/test_data.py ..FF.....                                                                                                                                                           [ 64%]
tests/test_growfactors.py ..                                                                                                                                                           [ 78%]
tests/test_ratios.py .                                                                                                                                                                 [ 85%]
tests/test_weights.py ..                                                                                                                                                               [100%]

========================================================================================== FAILURES ==========================================================================================
___________________________________________________________________________________ test_puf_relationships ___________________________________________________________________________________

puf =         blind_head  e09700  e02400  elderly_dependents  a_lineno  e00800  age_spouse  e02000  MARS  fips  ...  h_seq  ... 15  ...  94088     0       0       0        85       0       0      0         0         0

[252868 rows x 100 columns]

    @pytest.mark.requires_pufcsv
    def test_puf_relationships(puf):
>       relationships(puf, "PUF")

tests/test_data.py:249: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

data =         blind_head  e09700  e02400  elderly_dependents  a_lineno  e00800  age_spouse  e02000  MARS  fips  ...  h_seq  ... 15  ...  94088     0       0       0        85       0       0      0         0         0

[252868 rows x 100 columns]
dataname = 'PUF'

    def relationships(data, dataname):
        """
        Test the relationships between variables.
    
        Note (1): we have weakened the XTOT == sum of nu18, n1820, n21 assertion
        for the PUF because in PUF data the value of XTOT is capped by IRS-SOI.
    
        Note (2): we have weakened the n24 <= nu18 assertion for the PUF because
        the only way to ensure it held true would be to create extreamly small
        bins during the tax unit matching process, which had the potential to
        reduce the overall match accuracy.
        """
        eq_str = "{}-{} not equal to {}"
        less_than_str = "{}-{} not less than or equal to {}"
        tol = 0.020001
    
        eq_vars = [
            ("e00200", ["e00200p", "e00200s"]),
            ("e00900", ["e00900p", "e00900s"]),
            ("e02100", ["e02100p", "e02100s"]),
        ]
        for lhs, rhs in eq_vars:
            if not np.allclose(data[lhs], data[rhs].sum(axis=1), atol=tol):
                raise ValueError(eq_str.format(dataname, lhs, rhs))
    
        nsums = data[["nu18", "n1820", "n21"]].sum(axis=1)
        if dataname == "CPS":
            m = eq_str.format(dataname, "XTOT", "sum of nu18, n1820, n21")
            assert np.all(data["XTOT"] >= nsums), m
        else:
            # see Note (1) in docstring
            m = less_than_str.format(dataname, "XTOT", "sum of nu18, n1820, n21")
            assert np.all(data["XTOT"] <= nsums), m
    
        m = less_than_str.format(dataname, "n24", "nu18")
        if dataname == "CPS":
            assert np.all(data["n24"] <= data["nu18"]), m
        else:
            # see Note (2) in docstring
            m = "Number of records where n24 > nu18 has changed"
>           assert (data["n24"] > data["nu18"]).sum() == 9691, m
E           AssertionError: Number of records where n24 > nu18 has changed
E           assert 9692 == 9691
E            +  where 9692 = <bound method Series.sum of 0         False\n1         False\n2         False\n3         False\n4         False\n          ...  \n252863    False\n252864    False\n252865    False\n252866    False\n252867    False\nLength: 252868, dtype: bool>()
E            +    where <bound method Series.sum of 0         False\n1         False\n2         False\n3         False\n4         False\n          ...  \n252863    False\n252864    False\n252865    False\n252866    False\n252867    False\nLength: 252868, dtype: bool> = 0         0\n1         0\n2         0\n3         0\n4         0\n         ..\n252863    0\n252864    0\n252865    0\n252866    0\n252867    0\nName: n24, Length: 252868, dtype: int64 > 0         1\n1         1\n2         1\n3         1\n4         1\n         ..\n252863    0\n252864    0\n252865    0\n252866    0\n252867    0\nName: nu18, Length: 252868, dtype: int64.sum

tests/test_data.py:85: AssertionError
_____________________________________________________________________________________ test_puf_variables _____________________________________________________________________________________

puf =         blind_head  e09700  e02400  elderly_dependents  a_lineno  e00800  age_spouse  e02000  MARS  fips  ...  h_seq  ... 15  ...  94088     0       0       0        85       0       0      0         0         0

[252868 rows x 100 columns]
test_path = PosixPath('/home/hankdoupe/taxdata/tests')

    @pytest.mark.requires_pufcsv
    def test_puf_variables(puf, test_path):
>       variable_check(test_path, puf, "puf")

tests/test_data.py:254: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_path = PosixPath('/home/hankdoupe/taxdata/tests')
data =         blind_head  e09700  e02400  elderly_dependents  a_lineno  e00800  age_spouse  e02000  MARS  fips  ...  h_seq  ... 15  ...  94088     0       0       0        85       0       0      0         0         0

[252868 rows x 100 columns]
dataname = 'puf'

    def variable_check(test_path, data, dataname):
        """
        Test aggregate values in the data.
        """
        expected_file_name = "{}_agg_expected.txt".format(dataname)
        efile_path = os.path.join(test_path, expected_file_name)
        with open(efile_path, "r") as efile:
            expected_txt = efile.readlines()
        expected_sum = dict()
        expected_min = dict()
        expected_max = dict()
        for line in expected_txt[1:]:
            txt = line.rstrip()
            split = txt.split()
            assert len(split) == 4
            var = split[0]
            expected_sum[var] = int(split[1])
            expected_min[var] = int(split[2])
            expected_max[var] = int(split[3])
    
        # loop through each column in the dataset and check sum, min, max
        actual_txt = "{:20}{:>15}{:>15}{:>15}\n".format("VARIABLE", "SUM", "MIN", "MAX")
        var_inform = "{:20}{:15d}{:15d}{:15d}\n"
        diffs = False
        diff_list_str = ""  # string to hold all of the variables with errors
        new_vars = False
        new_var_list_str = ""  # srint to hold all of the unexpected variables
        for var in sorted(data.columns):
            sum = int(data[var].sum())
            min = int(data[var].min())
            max = int(data[var].max())
            actual_txt += var_inform.format(var, sum, min, max)
            try:
                var_diff = (
                    sum != expected_sum[var]
                    or min != expected_min[var]
                    or max != expected_max[var]
                )
                if var_diff:
                    diffs = True
                    diff_list_str += var + "\n"
            except KeyError:
                # if the variable is not expected, print a new message
                new_vars = True
                new_var_list_str += var + "\n"
    
        # check for any missing variables
        missing_vars = False
        missing_vars_set = set(expected_sum.keys()) - set(data.columns)
        if missing_vars_set:
            missing_vars = True
            missing_vars_str = "\n".join(v for v in missing_vars_set)
    
        # if there is an error, write the actual file
        if diffs or new_vars or missing_vars:
            msg = "{}\n".format(dataname.upper)
            actual_file_name = "{}_agg_actual.txt".format(dataname)
            actual_file_path = os.path.join(test_path, actual_file_name)
            with open(actual_file_path, "w") as afile:
                afile.write(actual_txt)
            # modify error message based on which errors are raised
            if diffs:
                diff_msg = "Aggregate results differ for following variables:\n"
                diff_msg += diff_list_str
                msg += diff_msg + "\n"
            if new_vars:
                new_msg = "The following unexpected variables were discoverd:\n"
                new_msg += new_var_list_str
                msg += new_msg + "\n"
            if missing_vars:
                msg += "The following expected variables are missing in the data:"
                msg += "\n" + missing_vars_str + "\n\n"
            msg += "If new results OK, copy {} to {}".format(
                actual_file_name, expected_file_name
            )
>           raise ValueError(msg)
E           ValueError: <built-in method upper of str object at 0x7f3186fe1ea0>
E           Aggregate results differ for following variables:
E           EIC
E           age_head
E           age_spouse
E           agi_bin
E           cmbtp
E           e00200
E           e00200p
E           e00200s
E           e00300
E           e00600
E           e00650
E           e00700
E           e00900
E           e00900p
E           e00900s
E           e01200
E           e02000
E           e02100
E           e02100p
E           e02300
E           e03150
E           e03210
E           e03220
E           e03240
E           e03290
E           e03300
E           e07260
E           e07400
E           e18400
E           e18500
E           e19200
E           e19800
E           e20100
E           e20400
E           e24515
E           e26270
E           e27200
E           e32800
E           e87521
E           f2441
E           f6251
E           h_seq
E           k1bx14p
E           k1bx14s
E           n1820
E           n24
E           nu13
E           nu18
E           other_ben
E           p23250
E           pencon_p
E           pencon_s
E           s006
E           ssi_ben
E           
E           If new results OK, copy puf_agg_actual.txt to puf_agg_expected.txt

tests/test_data.py:187: ValueError
====================================================================================== warnings summary ======================================================================================
tests/test_data.py:237
  /home/hankdoupe/taxdata/tests/test_data.py:237: PytestUnknownMarkWarning: Unknown pytest.mark.requires_pufcsv - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.requires_pufcsv

tests/test_data.py:242
  /home/hankdoupe/taxdata/tests/test_data.py:242: PytestUnknownMarkWarning: Unknown pytest.mark.requires_pufcsv - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.requires_pufcsv

tests/test_data.py:247
  /home/hankdoupe/taxdata/tests/test_data.py:247: PytestUnknownMarkWarning: Unknown pytest.mark.requires_pufcsv - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.requires_pufcsv

tests/test_data.py:252
  /home/hankdoupe/taxdata/tests/test_data.py:252: PytestUnknownMarkWarning: Unknown pytest.mark.requires_pufcsv - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.requires_pufcsv

-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================================== short test summary info ===================================================================================
FAILED tests/test_data.py::test_puf_relationships - AssertionError: Number of records where n24 > nu18 has changed
FAILED tests/test_data.py::test_puf_variables - ValueError: <built-in method upper of str object at 0x7f3186fe1ea0>
========================================================================== 2 failed, 12 passed, 4 warnings in 3.88s ==========================================================================

Hope this helps!

@andersonfrailey
Collaborator Author

Thanks for re-creating this, @hdoupe. That's the error I got when running the scripts last night. This morning, and previously, I got this error instead:

================================================================== test session starts ==================================================================
platform darwin -- Python 3.6.12, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /Users/andersonfrailey/taxdata
collected 14 items                                                                                                                                      

tests/test_data.py ...F.....                                                                                                                      [ 64%]
tests/test_growfactors.py ..                                                                                                                      [ 78%]
tests/test_ratios.py .                                                                                                                            [ 85%]
tests/test_weights.py ..                                                                                                                          [100%]

======================================================================= FAILURES ========================================================================
__________________________________________________________________ test_puf_variables ___________________________________________________________________

puf =         e07600  p08000  e00900s  fips  h_seq  e17500  e62900  vet_ben  ...  e09900    s006  e00650  e00900p  ssi_ben  ...     0        0  ...       0   33644    1662        0        0       0         0         0

[252868 rows x 100 columns]
test_path = PosixPath('/Users/andersonfrailey/taxdata/tests')

    @pytest.mark.requires_pufcsv
    def test_puf_variables(puf, test_path):
>       variable_check(test_path, puf, "puf")

tests/test_data.py:254: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

test_path = PosixPath('/Users/andersonfrailey/taxdata/tests')
data =         e07600  p08000  e00900s  fips  h_seq  e17500  e62900  vet_ben  ...  e09900    s006  e00650  e00900p  ssi_ben  ...     0        0  ...       0   33644    1662        0        0       0         0         0

[252868 rows x 100 columns]
dataname = 'puf'

    def variable_check(test_path, data, dataname):
        """
        Test aggregate values in the data.
        """
        expected_file_name = "{}_agg_expected.txt".format(dataname)
        efile_path = os.path.join(test_path, expected_file_name)
        with open(efile_path, "r") as efile:
            expected_txt = efile.readlines()
        expected_sum = dict()
        expected_min = dict()
        expected_max = dict()
        for line in expected_txt[1:]:
            txt = line.rstrip()
            split = txt.split()
            assert len(split) == 4
            var = split[0]
            expected_sum[var] = int(split[1])
            expected_min[var] = int(split[2])
            expected_max[var] = int(split[3])
    
        # loop through each column in the dataset and check sum, min, max
        actual_txt = "{:20}{:>15}{:>15}{:>15}\n".format("VARIABLE", "SUM", "MIN", "MAX")
        var_inform = "{:20}{:15d}{:15d}{:15d}\n"
        diffs = False
        diff_list_str = ""  # string to hold all of the variables with errors
        new_vars = False
        new_var_list_str = ""  # srint to hold all of the unexpected variables
        for var in sorted(data.columns):
            sum = int(data[var].sum())
            min = int(data[var].min())
            max = int(data[var].max())
            actual_txt += var_inform.format(var, sum, min, max)
            try:
                var_diff = (
                    sum != expected_sum[var]
                    or min != expected_min[var]
                    or max != expected_max[var]
                )
                if var_diff:
                    diffs = True
                    diff_list_str += var + "\n"
            except KeyError:
                # if the variable is not expected, print a new message
                new_vars = True
                new_var_list_str += var + "\n"
    
        # check for any missing variables
        missing_vars = False
        missing_vars_set = set(expected_sum.keys()) - set(data.columns)
        if missing_vars_set:
            missing_vars = True
            missing_vars_str = "\n".join(v for v in missing_vars_set)
    
        # if there is an error, write the actual file
        if diffs or new_vars or missing_vars:
            msg = "{}\n".format(dataname.upper)
            actual_file_name = "{}_agg_actual.txt".format(dataname)
            actual_file_path = os.path.join(test_path, actual_file_name)
            with open(actual_file_path, "w") as afile:
                afile.write(actual_txt)
            # modify error message based on which errors are raised
            if diffs:
                diff_msg = "Aggregate results differ for following variables:\n"
                diff_msg += diff_list_str
                msg += diff_msg + "\n"
            if new_vars:
                new_msg = "The following unexpected variables were discoverd:\n"
                new_msg += new_var_list_str
                msg += new_msg + "\n"
            if missing_vars:
                msg += "The following expected variables are missing in the data:"
                msg += "\n" + missing_vars_str + "\n\n"
            msg += "If new results OK, copy {} to {}".format(
                actual_file_name, expected_file_name
            )
>           raise ValueError(msg)
E           ValueError: <built-in method upper of str object at 0x1100845a8>
E           Aggregate results differ for following variables:
E           e00200
E           e00200p
E           pencon_p
E           pencon_s
E           s006
E           
E           If new results OK, copy puf_agg_actual.txt to puf_agg_expected.txt

tests/test_data.py:187: ValueError
=================================================================== warnings summary ====================================================================
tests/test_data.py:237
  /Users/andersonfrailey/taxdata/tests/test_data.py:237: PytestUnknownMarkWarning: Unknown pytest.mark.requires_pufcsv - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.requires_pufcsv

tests/test_data.py:242
  /Users/andersonfrailey/taxdata/tests/test_data.py:242: PytestUnknownMarkWarning: Unknown pytest.mark.requires_pufcsv - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.requires_pufcsv

tests/test_data.py:247
  /Users/andersonfrailey/taxdata/tests/test_data.py:247: PytestUnknownMarkWarning: Unknown pytest.mark.requires_pufcsv - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.requires_pufcsv

tests/test_data.py:252
  /Users/andersonfrailey/taxdata/tests/test_data.py:252: PytestUnknownMarkWarning: Unknown pytest.mark.requires_pufcsv - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.requires_pufcsv

-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================ short test summary info ================================================================
FAILED tests/test_data.py::test_puf_variables - ValueError: <built-in method upper of str object at 0x1100845a8>

I have found two functions with an np.random call that do not have a seed set before them. But this doesn't really explain why we have all of these other changes.
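For what it's worth, a minimal sketch of the kind of fix that would make those draws reproducible; the seed value and the draw itself are illustrative, not the actual taxdata code:

import numpy as np

# Set the seed immediately before the random draw so repeated runs of the
# script produce identical imputations.
np.random.seed(5410)  # arbitrary illustrative seed
noise = np.random.normal(loc=0.0, scale=1.0, size=5)
print(noise)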

@hdoupe
Collaborator

hdoupe commented May 27, 2021

Hmm, strange. I can make a Dockerfile and image for this, and then we should be able to isolate whether this is an np.random thing or an environment thing.

@andersonfrailey
Collaborator Author

@hdoupe, if it wouldn't be too much trouble, that'd be really helpful for figuring out why we're getting different errors. I think I've tracked down where mine is coming from.

In the impute_pension_contributions function called in finalprep.py, we take some weighted sums when imputing pension contributions so that total imputed pensions equal a given target. Given that our total weight has changed with the bug fix, it makes sense that pencon_p and pencon_s change. Additionally, at the end of that function, we have the following lines:

    alldata["pencon_p"] = idata["pencon"][idata["spouse"] == 0]
    alldata["pencon_s"] = idata["pencon"][idata["spouse"] == 1]
    alldata["e00200p"] = idata["e00200"][idata["spouse"] == 0]
    alldata["e00200s"] = idata["e00200"][idata["spouse"] == 1]
    alldata["e00200"] = alldata["e00200p"] + alldata["e00200s"]

idata is a DataFrame in which e00200 has been adjusted for pension contributions. For the records that come only from the CPS file, those pension contributions are subtracted from the original e00200, so the final e00200 (and e00200p/e00200s) values are slightly different now that we've fixed the zero-weight bug.
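A rough sketch of the weighted-sum targeting described above; the target value and column names are illustrative, not the actual finalprep.py code:

import pandas as pd

# Illustrative imputed pension contributions and record weights.
idata = pd.DataFrame({"pencon": [1000.0, 500.0, 0.0, 2000.0],
                      "weight": [150.0, 200.0, 100.0, 50.0]})
target = 400_000.0  # hypothetical aggregate target for imputed pensions

# Scale the imputed values so their weighted sum hits the target. Because the
# scale factor depends on the weights, fixing the zero-weight bug changes the
# final pencon values (and hence the adjusted e00200) even when nothing else
# about the imputation changes.
scale = target / (idata["pencon"] * idata["weight"]).sum()
idata["pencon"] *= scale
print(idata)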

It's still a little odd to me that only 20 records are affected, but I'm confident that this is the source of the test failure I'm getting. I can't explain the error you're getting though, @hdoupe.

@hdoupe
Collaborator

hdoupe commented May 27, 2021

@andersonfrailey I was able to re-create the bug I ran into using a Docker image:

docker run -v `pwd`:/taxdata -t hdoupe/taxdata:zeroweights python createpuf.py

Run the tests with this command:

docker run -v `pwd`:/taxdata -t hdoupe/taxdata:zeroweights py.test

These commands mount your code into the Docker container, so you can make changes to the code and test them without having to re-build the image.

Re-build with this command:

docker build -t hdoupe/taxdata:zeroweights .

I had some issues pushing my changes with the Dockerfile, but it's here if you want to re-build the image:

# Dockerfile
FROM continuumio/miniconda3

RUN conda config --append channels conda-forge

# Install dependencies. csk build-env will read the environment.yml file and install
# the packages there into the base conda environment.
RUN conda install pip "python>=3.8" && \
    pip install cs-kit pyyaml

COPY . /taxdata
WORKDIR /taxdata
RUN csk build-env

# .dockerignore

# Ignore sensitive files and .git (to keep the image size relatively small)
data/puf*
data/asec*
.git
.pytest_cache
# Mac OS X
*.DS_Store

# IRS-SOI PUF and related CPS matching data files
puf*.csv
cps-matched-puf.csv
StatMatch/Matching/puf2011.csv
cpsmar2016.csv

# Intermediate CPS files and SAS scripts
cps_data/pycps/cps_raw.csv.gz
cpsmar*.sas
cpsmar*.csv
*.dat


# pickle
cps*.pkl

# .npz numpy files
*.npz

@andersonfrailey
Collaborator Author

@hdoupe I haven't been able to re-create the bug you found since I got it the first time. I'll give it a few more tries just to be safe. In the meantime, I've updated the PUF expected results to reflect the new totals.
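For reference, updating the expected results amounts to copying the freshly written actual-results file over the expected one, as the test's error message suggests; the paths below assume the repo's tests directory:

import shutil

# Overwrite the expected aggregates with the newly generated actual aggregates.
shutil.copy("tests/puf_agg_actual.txt", "tests/puf_agg_expected.txt")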

@andersonfrailey
Collaborator Author

After trying again, I still haven't been able to reproduce the bug reported by @hdoupe. I'm going to merge this, and we'll do a bug fix later if it becomes an issue again.

@andersonfrailey merged commit f395298 into PSLmodels:master on Jun 10, 2021
@andersonfrailey deleted the zeroweights branch on June 10, 2021 at 02:26

Successfully merging this pull request may close the following issue: Are the ~18k zero-weight records in puf_weights.csv intentional? (#385)