
c_idx cast to LongTensor in random sparse projection #113

Merged (3 commits) on Feb 24, 2022

Conversation

@alexriedel1 (Contributor)

Fixing #111

@innat (Contributor) commented Feb 23, 2022

components[i, c_idx.long()] = data.double()

or 

c_idx = [int(t.item()) for t in c_idx]
components[i, c_idx] = data.double()
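For reference, the failure mode from #111 can be reproduced in a few lines, and either of the casts above resolves it. A minimal sketch assuming PyTorch is installed; the tensor shapes here are illustrative, not the ones anomalib uses:

```python
import torch

# Minimal reproduction of issue #111 (illustrative shapes).
components = torch.zeros(2, 5, dtype=torch.float64)
data = torch.ones(3)
c_idx = torch.tensor([0.0, 2.0, 4.0])  # float indices, as produced before the fix

try:
    components[0, c_idx] = data.double()  # fails: float tensors cannot index
except IndexError as err:
    print(err)

# Casting the index tensor to long makes the assignment valid.
components[0, c_idx.long()] = data.double()
```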

It's taking a really long time to get the coreset from the embedding. Is this expected in PatchCore? I'm running on CPU with

num_workers: 1 # 36

@djdameln (Contributor)

It's taking a really long time to get the coreset from the embedding. Is this expected in PatchCore? I'm running on CPU with

Yes, the coreset computation in the training step is optimized for GPU. Inference should run fine on CPU.

@djdameln (Contributor) left a comment

Thanks for reporting and addressing this issue. I would suggest constructing c_idx as a long tensor, like so:

c_idx = torch.tensor(
    sample_without_replacement(
        n_population=n_features, n_samples=nnz_idx, random_state=self.random_state
    ),
    dtype=torch.int64
)

That way we don't need to cast it afterwards.
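To confirm that this construction yields long indices and avoids the downstream cast, here is a minimal sketch assuming scikit-learn and PyTorch are installed; the values of n_features and nnz_idx are illustrative, not the ones used in anomalib:

```python
import torch
from sklearn.utils.random import sample_without_replacement

n_features, nnz_idx = 100, 10  # illustrative values

# Build the index tensor as int64 up front, so no later .long() cast is needed.
c_idx = torch.tensor(
    sample_without_replacement(
        n_population=n_features, n_samples=nnz_idx, random_state=0
    ),
    dtype=torch.int64,
)

# Long indices can be used directly for assignment without an IndexError.
components = torch.zeros(1, n_features, dtype=torch.float64)
components[0, c_idx] = 1.0
```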

@djdameln (Contributor) left a comment

Thanks!

@ashwinvaidya17 (Collaborator)

@alexriedel1 Thanks for your contribution! Let's merge after CI passes.

@ashwinvaidya17 (Collaborator)

@alexriedel1 There seems to be a minor black formatting issue: https://github.com/openvinotoolkit/anomalib/runs/5315789777?check_suite_focus=true Can you address it? It might be convenient for you to install the pre-commit hooks:

pip install pre-commit
pre-commit install
pre-commit run --all-files

@alexriedel1 (Contributor, Author)

@alexriedel1 There seems to be a minor black formatting issue: https://github.com/openvinotoolkit/anomalib/runs/5315789777?check_suite_focus=true Can you address it? It might be convenient for you to install the pre-commit hooks:

pip install pre-commit
pre-commit install
pre-commit run --all-files

I guess that's what black wanted to reformat. Unfortunately, I wasn't able to run the pre-commit hooks.

@ashwinvaidya17 (Collaborator) left a comment

Thanks for quickly addressing this!

@ashwinvaidya17 ashwinvaidya17 merged commit fe665d2 into openvinotoolkit:development Feb 24, 2022
@ashwinvaidya17 (Collaborator)

I guess that's what black wanted to reformat. Unfortunately, I wasn't able to run the pre-commit hooks.

Can you share the error? We will try to update the docs to address this.

@alexriedel1 (Contributor, Author)

I guess that's what black wanted to reformat. Unfortunately, I wasn't able to run the pre-commit hooks.

Can you share the error? We will try to update the docs to address this.

It's an issue on my side. I'm having strange permission issues when trying to run the hooks, so don't worry.

@alexriedel1 alexriedel1 deleted the SRP-fix branch February 24, 2022 09:54

Successfully merging this pull request may close these issues.

IndexError: tensors used as indices must be long, byte or bool tensors
4 participants