I find myself bumping up priority levels on tasks when I see that other tasks depend on them. The idea is that a higher bounty on a task attracts more talent to compete over it, so it gets finished faster.
Perhaps every time a task is mentioned, the bot could comment recommending a bump to its priority level?
This seems like a naive, inelegant, and noisy approach from a notifications standpoint, so I'm hoping we can improve this idea in the conversation below.
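As a concrete starting point, here's a minimal sketch of the mention-detection half of this idea. The regex and the `findReferencedIssues` name are my own illustration, not anything ubiquibot ships today:

```typescript
// Minimal sketch: find issue cross-references (e.g. "#42") in a comment body
// so the bot could recommend a priority bump on the referenced tasks.
const ISSUE_REF = /(?:^|\s)#(\d+)\b/g;

export function findReferencedIssues(commentBody: string): number[] {
  const refs = new Set<number>(); // dedupe repeated mentions of the same task
  for (const match of commentBody.matchAll(ISSUE_REF)) {
    refs.add(Number(match[1]));
  }
  return [...refs];
}

// findReferencedIssues("Blocked by #42, see also #42 and #77") -> [42, 77]
```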
When a new issue is filed, we can have ubiquibot calculate these from the issue description and comment with an estimated priority level and time estimate. Both should be toggleable in the config, since they require calls to OpenAI, and a bot that comments after every issue is posted can be perceived as noisy.
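Here's a rough sketch of what the two toggles and the estimation call could look like. The config keys, prompt, model choice, and label formats are all illustrative assumptions on my part, not ubiquibot's actual schema:

```typescript
import OpenAI from "openai";

// Hypothetical config shape for the two toggles described above.
interface EstimationConfig {
  commentEstimates: boolean; // comment an estimated priority level and time on new issues
  autoApplyLabels: boolean; // apply the estimated labels automatically (see below)
}

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Ask the model for a priority level and time estimate from the issue body.
async function estimateIssue(description: string): Promise<string | null> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content:
          'Given an issue description, reply with JSON only: {"priority": <1-5>, "timeEstimate": "<1 Hour" | "<1 Day" | "<1 Week"}',
      },
      { role: "user", content: description },
    ],
  });
  return completion.choices[0].message.content;
}
```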
Perhaps another config option could let the bot set those labels automatically, which would make the estimated pricing immediately visible even when a public contributor files an issue.
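If that flag is enabled, the auto-labeling path could be as simple as the sketch below, assuming the bot's token has write access to the repo. The `applyEstimateLabels` helper and the label formats are illustrative:

```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Apply the estimated priority/time labels so pricing becomes visible
// immediately, even on issues filed by public contributors.
async function applyEstimateLabels(
  owner: string,
  repo: string,
  issueNumber: number,
  priority: number,
  timeEstimate: string
): Promise<void> {
  await octokit.rest.issues.addLabels({
    owner,
    repo,
    issue_number: issueNumber,
    labels: [`Priority: ${priority}`, `Time: ${timeEstimate}`],
  });
}
```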
Because our existing security feature only allows collaborators to close an issue as complete and then generate a payment permit, this should be secure.
We can consider automatically re-running this evaluation whenever the issue author edits their specification.
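A sketch of that re-run, keyed off the `issues.edited` webhook event; `estimateIssue` is the hypothetical helper from the earlier sketch:

```typescript
import { Webhooks } from "@octokit/webhooks";
import { estimateIssue } from "./estimate"; // hypothetical module holding the earlier sketch

const webhooks = new Webhooks({ secret: process.env.WEBHOOK_SECRET! });

// Re-run the evaluation only when the issue body (the specification)
// actually changed, not on title or label edits.
webhooks.on("issues.edited", async ({ payload }) => {
  if (payload.changes.body && payload.issue.body) {
    const estimate = await estimateIssue(payload.issue.body);
    console.log(`Re-estimated #${payload.issue.number}: ${estimate}`);
  }
});
```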
Context: #710