
Malware detection and reporting infrastructure to support 3rd party reports #12612

Open
di opened this issue Nov 28, 2022 · 13 comments
Labels
admin (Features needed for the Admin UI), feature request, malware-detection (Issues related to automated malware detection), security (Security-related issues and pull requests)

Comments

@di
Member

di commented Nov 28, 2022

What's the problem this feature will solve?
Currently, malware reporting on PyPI is performed by sending an email to the PyPI maintainers (ref). This scales poorly: the report itself is free-form, requires interpretation on the part of administrators, results in duplicate reports that are not easily de-duplicated, and does not collect relevant metadata (why the report was made and by whom) for future reference or use. Additionally, the different varieties of reports (e.g., spam vs. malware vs. compromise) are poorly distinguished, which could lead to the maintainers taking incorrect actions.

Describe the solution you'd like
A standardized API for submitting security reports, limited to trusted reporters, that feeds a non-email queue of pending reports, grouped by the project in question, which administrators can easily process and which also stores metadata about each report.

This would make it easier to file malware reports and shorten the time it takes administrators to respond to them.
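
As a rough illustration of how such a queue might group reports by project so duplicates aggregate naturally, here is a minimal sketch; the class and field names are invented for this example and do not reflect an actual Warehouse schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class MalwareReport:
    """One report from a trusted reporter (illustrative only)."""
    project: str        # PyPI project name being reported
    reporter: str       # PyPI username of the trusted reporter
    inspector_url: str  # link to the offending file on inspector.pypi.io
    summary: str        # why the report was made
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ProjectReportGroup:
    """All open reports for one project, reviewed together by admins."""
    project: str
    reports: list[MalwareReport] = field(default_factory=list)

    def add(self, report: MalwareReport) -> None:
        # Duplicate reports from different reporters land in the same group
        # instead of producing separate emails.
        self.reports.append(report)
```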

Additional context
Somewhat related: #3896 (essentially this, but for all PyPI users).

@di added the feature request, admin, and malware-detection labels Nov 28, 2022
@louislang

I'll say from experience that reporting malware to PyPI has been pleasant, with a relatively fast response time compared to other ecosystems. I think adding the proposed API/queue would be incredibly useful.

A few considerations for an API that I think would be nice:

  • Require an inspector.pypi.io link as part of the API spec. I've forgotten to include this in my reports in the past and it's immensely valuable.
  • Add the ability to distinguish between the various forms of "bad" (e.g., spam, malware, etc.)
  • Add a way to indicate if this is part of a larger campaign; perhaps useful for prioritization
  • A mechanism to provide additional supporting evidence: deobfuscated code, associated actors, etc. to preemptively help with triage.
  • A way to tie a report to previous reports/activity. For example, we've been tracking a group publishing w4sp into PyPI whose MO is very similar each time; indicating that a new report is the same sort of activity as earlier ones might make triage quicker.
  • A way to report a specific user, if they're particularly egregious with their nefarious publications.
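
Pulling these considerations together, a hypothetical report payload might look something like the sketch below; every field name here is invented for illustration and does not correspond to any published PyPI API:

```python
# Hypothetical report payload covering the considerations above
# (illustrative field names, not a real PyPI API schema).
example_report = {
    "project": "some-package",
    "version": "1.0.3",
    "inspector_url": "https://inspector.pypi.io/project/some-package/",  # required evidence link
    "classification": "malware",           # e.g. "spam", "malware", "compromise"
    "campaign": "w4sp-stealer",            # optional: part of a known, larger campaign
    "related_reports": [101, 102],         # optional: earlier reports with the same MO
    "reported_user": "malicious-account",  # optional: report the publishing account itself
    "evidence": "Deobfuscated payload exfiltrates browser tokens.",  # optional supporting detail
}
```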

And of course, continuing to notify reporters of report resolution would be really nice.

@thatch

thatch commented Dec 1, 2022

Are you trying to automate the email-reading part of the human, or the decision-making part of the human? I'd be happy with something akin to a webhook with just a URL (and proof of reporter, whether that's an API token or a signature) for a first draft.
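
A minimal first-draft report along those lines could be a single authenticated POST; the endpoint path and header below are assumptions for illustration, not an existing PyPI API:

```python
import requests  # third-party HTTP client, assumed installed

# Hypothetical endpoint and placeholder token; the real API may differ.
REPORT_URL = "https://pypi.org/api/malware-reports/"
API_TOKEN = "pypi-<reporter-api-token>"

response = requests.post(
    REPORT_URL,
    headers={"Authorization": f"token {API_TOKEN}"},
    json={"inspector_url": "https://inspector.pypi.io/project/bad-package/"},
    timeout=10,
)
response.raise_for_status()  # proof of reporter is just the API token here
```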

I don't commonly know the answers to the questions Louis proposes, when making the initial report, and providing them after the fact almost feels like making a bugtracker, which is orthogonal to the goal of making it easier on the admins to remove the obviously-bad projects we are inundated with today.

One thing I'd be willing to provide (to pypi, or the trusted reporter community) is a realtime-ish feed of detections, if you wanted to combine several of those from different people to decide whether automatic action could be taken.

@di
Member Author

di commented Dec 1, 2022

Are you trying to automate the email-reading part of the human, or the decision-making part of the human? I'd be happy with something akin to a webhook with just a URL (and proof of reporter, whether that's an API token or a signature) for a first draft.

This is mostly about making it easier for both the report-sending human and the report-reading human. The decision will still be entirely manual, but taking this out of email and onto PyPI will (hopefully) make it easier and faster for both.

This will probably work using PyPI's existing user accounts and API tokens, for ease of implementation.

I don't commonly know the answers to the questions Louis proposes, when making the initial report, and providing them after the fact almost feels like making a bugtracker, which is orthogonal to the goal of making it easier on the admins to remove the obviously-bad projects we are inundated with today.

I think 95% of the time, all the context we (admins) need is the inspector link, but occasionally supporting evidence is helpful as well. Maybe this can just be an optional free-form text field for now?

One thing I'd be willing to provide (to pypi, or the trusted reporter community) is a realtime-ish feed of detections, if you wanted to combine several of those from different people to decide whether automatic action could be taken.

One of the primary goals is for PyPI to be able to combine multiple reports about the same project into a single aggregated report -- right now, there's a lot of duplication. But I'd also be interested in providing these aggregated reports to other trusted reporters as well!

Less sure about this ultimately resulting in automation though, at least until we have a less destructive way to delete things (ref #6091).

@rakovskij-stanislav

Hi! Some thoughts about this issue.

  1. Possibly we can avoid extra moderation by PyPI admins by using an external jury system: if the same package is reported by several authorized, reputable reporters, we can automatically ban that release or the entire package. This way we can save time for the PyPI admins.

  2. I agree with @louislang: a PyPI Malicious Activity Classification is necessary.
    I see a lot of generic hacktools with unremarkable titles that have destructive behavior (backdooring/stealing) but no contact with a CnC and no autorun; some other package could potentially import the first one and supply the CnC to do bad things. We should report generic hacktools alongside actual malware even if we don't see them in active use, but that isn't obvious without a declared scope of malware classes with descriptions and examples.

  3. It may be counterintuitive, but I do not think we should ban trojan accounts. At the very least, it's useful to stay in touch with their development and progression (to improve our own detection rules, heh), even though the ban report will be sent within the first seconds of a package's life.

@di
Member Author

di commented Jan 31, 2023

Possibly we can avoid extra moderation by PyPI admins by using an external jury system

I think it's likely we'll do this.

PyPI Malicious Activity Classification is necessary

Classification doesn't matter too much to us (it's either a takedown or it isn't) but it would be good to have metrics on classes of malware for future reporting. What other use cases do you have in mind for classification?

It may be counterintuitive, but I do not think we should ban trojan accounts

I don't anticipate us changing our existing policy here.

@di
Member Author

di commented Jan 31, 2023

Some additional questions for everyone:

  1. How (if at all) would you like to be notified about results on reports you've made?
  2. What about results on reports that you haven't made? (e.g., were reported by a different researcher)
  3. How would you like to reach out if we'd like you to review something specifically?

@TalFo

TalFo commented Feb 1, 2023

  1. How (if at all) would you like to be notified about results on reports you've made?

I have 2 options:

  1. Notify only via email - the easiest and fastest way to develop such a process.

  2. Add reports to the PyPI page (might be more challenging because it takes time to develop the application), and maybe still send notifications by email so you know when to check your reports (i.e., when there is an update on the package).

  2. What about results on reports that you haven't made? (e.g., were reported by a different researcher)

I have a few ideas -

  • Keep the malicious package's page, but change the version to a security-holding placeholder, something like what npm does. In addition, it would be helpful to add information about why the package was deleted.

  • Send subscribers an email blast every week/a few weeks with a summary of recent malicious packages and a short description of each.

  3. How would you like to reach out if we'd like you to review something specifically?

It depends on how you send notifications about the results. The easiest way would be to stay on the same platform.

As an additional point, if the researchers send a clear report, it will be much easier to add relevant information about the removed package to the website/email blast.

@jossef

jossef commented Feb 1, 2023

@di

  1. How (if at all) would you like to be notified about results on reports you've made?
  2. What about results on reports that you haven't made? (e.g., were reported by a different researcher)
  3. How would you like to reach out if we'd like you to review something specifically?

IMHO transparency is the key.
I imagine a web application where anyone who reports a malicious package makes this data, along with its status, available to the public.

Regarding notifications, I'd suggest implementing them similarly to GitHub's issue notification experience: whenever there is a status change, a comment, or someone tags you, you get an email.


I agree with @rakovskij-stanislav's suggestion of using an external jury system.

A similar approach works well for helping Stack Overflow moderators vet answer edits made by community members.

For instance, let's assume we agree it's enough to have at least 3 authorized community members manually inspect and vet whether a package is indeed malicious. If so, you can set the final package verdict to malicious and initiate an automated cure process (removal, classification as "0.0.0-security" as @TalFo suggested, etc.) without the need for the PyPI security team to do it manually.
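
A minimal sketch of that quorum rule, assuming a threshold of 3 trusted verdicts (both the threshold and the function are illustrative, not a proposed implementation):

```python
QUORUM = 3  # illustrative: independent trusted verdicts needed for an automatic action


def reached_verdict(verdicts: dict[str, bool], quorum: int = QUORUM) -> bool | None:
    """Return True/False once `quorum` reviewers agree, otherwise None.

    `verdicts` maps a reviewer's username to their vote
    (True = malicious, False = not malicious).
    """
    malicious = sum(1 for vote in verdicts.values() if vote)
    benign = len(verdicts) - malicious
    if malicious >= quorum:
        return True   # enough confirmations: kick off the automated cure process
    if benign >= quorum:
        return False  # enough reviewers cleared it: close the report
    return None       # still waiting on more reviews
```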


I agree with @louislang's suggestion regarding specifying malicious classifications.
@di - classification matters and is beneficial both for researchers and for victims of those malicious packages. Beyond storing a boolean verdict for a suspicious package (malicious/not malicious), add more context (optionally provided by the community when reporting malware) such as:

  • typosquatting (with link to legitimate package)
  • cryptominer
  • remote shell
  • ransomware
  • starjacking (with link to git repo)
  • ...
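
A hedged sketch of how such classifications could be represented, using the categories listed above (the enum name and values are invented for illustration):

```python
from enum import Enum


class MaliciousClassification(str, Enum):
    """Illustrative classification values a reporter could attach to a report."""
    TYPOSQUATTING = "typosquatting"  # would also carry a link to the legitimate package
    CRYPTOMINER = "cryptominer"
    REMOTE_SHELL = "remote_shell"
    RANSOMWARE = "ransomware"
    STARJACKING = "starjacking"      # would also carry a link to the claimed git repo
    OTHER = "other"
```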

@louislang

  1. How (if at all) would you like to be notified about results on reports you've made?

Email works really well imo.

  2. What about results on reports that you haven't made? (e.g., were reported by a different researcher)

If a jury system was in place, email notifications to review a package would be nice. Otherwise, just reviewing a queue would also likely suffice.

  3. How would you like to reach out if we'd like you to review something specifically?

Email works for this use case too.

Possibly we can avoid extra moderation by PyPI admins by using an external jury system

I really like the jury system idea. If an authorized jury system was used, how would packages that were incorrectly removed contest that decision?

@rakovskij-stanislav

I really like the idea of a jury system. If an authorized judging system were used, how would packages that were removed incorrectly challenge this decision?

A user whose package is removed by the jury system would receive an automated email describing what happened and how to appeal the decision; that's one of the most appropriate solutions here. An appeal request would be sent both to the jurors who flagged the package and to the PyPI admins. This would also help improve detection methods.

A release cleared this way would get a "clean" badge to let other researchers know it is good, tied to the release's creation date so the badge isn't preserved if the release is recreated.

@miketheman added the security label Mar 21, 2023
@ewdurbin
Member

I wanted to revive this issue/discussion with a note that the PSF is in the final stages of securing funding to implement this.

@ewdurbin
Member

ewdurbin commented Sep 8, 2023

Update! A draft of the payload for malware reports has been published at #14503 for discussion/feedback.

@miketheman
Member

Update! A preview API has been merged in #15228. We'll be reaching out directly to previously engaged parties to onboard them and trial it, so we can learn more. At a future point, we may open the application to more folks.
