
Further Question about usage of AI-assisted coding tools #936

Open
SamYuan1990 opened this issue Jan 26, 2025 · 0 comments

ref #644

From the LF generative AI policy:
it seems to mean only that a maintainer should review the PR, regardless of how much of it came from an LLM, and that if the PR's author is a human, he/she should ensure any LLM-generated parts of the PR are legally compliant.

However, take a dependency bot as an example. If we upgrade today's dependency bot into a "Dependency AI Agent" that works with ChatGPT, then when a deprecation is announced, the Dependency AI Agent would:

  1. bump the dependency (as a dependency bot does today);
  2. scan all the code and ask ChatGPT to update any code blocks that invoke deprecated functions, if there are any.
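To make the two steps concrete, here is a minimal sketch of what such an agent's scan-and-prompt stage might look like. All names here (`DEPRECATED_FUNC`, `old_api_call`, the helper functions) are illustrative assumptions, not a real tool's API, and the actual LLM call is left as a placeholder prompt:

```python
# Hypothetical sketch of the "Dependency AI Agent" described above.
# Assumes the deprecation announcement names a single deprecated function.
import pathlib

DEPRECATED_FUNC = "old_api_call"  # assumed name from the deprecation notice


def find_affected_files(repo_root: str) -> list[pathlib.Path]:
    """Step 2 (scan): find every source file that invokes the deprecated function."""
    return [
        p
        for p in pathlib.Path(repo_root).rglob("*.py")
        if DEPRECATED_FUNC in p.read_text(encoding="utf-8")
    ]


def build_rewrite_prompt(source: str) -> str:
    """Step 2 (ask): build the prompt the agent would send to ChatGPT.

    The actual LLM invocation is omitted; only the prompt is constructed here.
    """
    return (
        f"The function `{DEPRECATED_FUNC}` is deprecated. "
        "Rewrite the following code so it no longer invokes it, "
        "preserving behavior:\n\n" + source
    )
```

In a scheduled pipeline job, the agent would run `find_affected_files` over the checkout, send each file through `build_rewrite_prompt` to the LLM, and open a PR with the rewritten code, with no human in the loop until review.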

From an ethical standpoint, this falls outside LF's Generative AI policy, because:

  1. The PR author is a bot, not a human, so some of the policy's terms may no longer apply.
  2. Today's LLM providers' license terms ask people to take responsibility for any output and its usage — but what about a fully automated case?
    A specific case (since this is a public place, I will just give a case in which a person in China could invoke ChatGPT through a VPN):
  • The code scanned by the PR was contributed by people in China three months ago.
  • The PR invokes ChatGPT using tokens donated to the community by people in the US.
  • The PR runs on GHA.
  • Strictly legally speaking, a person in China should not invoke ChatGPT, either due to US governance policy or because it is blocked by the Great Firewall.
  • But in this case, we made ChatGPT do something on ..., which "equals" a person in China using ChatGPT as a copilot?

And during the PR itself: since this is not a copilot approach, but instead invokes the LLM in a scheduled pipeline job whose output is fully public on GHA, what happens if any "N-words" appear during the process?
