From the LF generative AI policy:
It looks like it just means a maintainer should review the PR, no matter what percentage of it came from an LLM. And if the PR's author is a human, he/she should ensure that any LLM-generated parts of the PR are legal.
However, taking a dependency bot as an example: if we upgrade the dependency bot to a Dependency AI Agent, the case will be:
Just imagine a Dependency AI Agent that works with ChatGPT. When there is a deprecation announcement, the Dependency AI Agent will:

1. bump the dependency (as a dependency bot does today);
2. scan all the code and ask ChatGPT to update any code block that invokes a deprecated function, if there is any.
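The two steps above could be sketched as a scheduled GHA workflow. Everything in this sketch is an assumption for illustration: the `deprecation-agent` CLI does not exist, the upgraded package name is a placeholder, and the secret name is hypothetical; only `actions/checkout` and `peter-evans/create-pull-request` are real actions.

```yaml
# Hypothetical scheduled workflow for a "Dependency AI Agent".
# The `deprecation-agent` tool and its flags are illustrative, not a real CLI.
name: dependency-ai-agent
on:
  schedule:
    - cron: "0 3 * * 1"   # weekly run; the fully automated, no-human-in-the-loop case
permissions:
  contents: write
  pull-requests: write
jobs:
  upgrade-and-rewrite:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Step 1: bump the dependency, as a dependency bot does today.
      - run: pip install --upgrade some-deprecated-lib   # placeholder package
      # Step 2: scan the code and ask the LLM to rewrite blocks that
      # invoke deprecated functions (hypothetical agent CLI).
      - run: deprecation-agent rewrite --path .
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}   # hypothetical secret
      # The resulting PR is authored by the bot, not a human.
      - uses: peter-evans/create-pull-request@v6
        with:
          title: "chore: migrate off deprecated APIs"
```

Note that in this setup the "author" of the PR is the workflow itself, which is exactly the gap in the policy discussed below.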
Ethically speaking, this falls outside the LF Generative AI policy, because:

the PR author is a bot, not a human, so some of the terms may not apply anymore.
Today's LLM providers' license terms just ask people to take responsibility for any output and its usage; what about a fully automated case?
A specific case (since this is a public place, I will just give a case where Chinese users can invoke ChatGPT through a VPN):
- The code scanned in the PR was contributed by people in China 3 months ago.
- The PR invokes ChatGPT with tokens donated to the community by people in the US.
- The PR runs on GHA.
Strictly legally speaking, a Chinese user should not invoke ChatGPT, either because of US governance policy or because it is blocked by the Great Firewall. But in this case, we made ChatGPT do something on ... which "equals" a Chinese user using ChatGPT as Copilot?
Also, during the PR, since this is not a Copilot approach but one that invokes the LLM in a scheduled pipeline job fully published on GHA, what if any "N-words" appear during the process?
ref #644